All right. Well, thank you for joining us. We are here to talk AI products, and specifically dynamic products, which we'll unpack in the next 20 minutes or so. A little bit about us before we jump in. I am Aliza Cabrera. I'm a principal AI product manager at Workday, and I'm currently building our financial audit agent with an incredible team.
I also led go-to-market for our policy agent, as well as early access for our assistant, which is more like a co-pilot, and some of our early Gen AI features. My name is Jeremy Silva. I come from a data science and machine learning background.
I've been building language models since the dark ages, pre-GPT-3, that is. And now I lead product at a company called Freeplay, which exists to help teams build great AI products. Awesome. Let's get into it. So if there's one thing that we want you to take away from this session, it's to stop the AI sideshow, which I know sounds a little counterintuitive.
We're all at an AI conference. All of us are talking about agents and AI. It's all over the place, right? So what exactly are we talking about here? Having AI lead your products, your go-to-market, your strategy. That was a really great approach when we were all trying to communicate that we were at the forefront of this technological disruption, right?
But now everyone is really saying the same thing. And if we look at the different products that have resulted, in hindsight (and hindsight's 20/20, right?), we're able to see that what we've been doing is building product to try to figure out what these different technological breakthroughs can do for us.
So let's unpack what we're talking about here a bit. Let's go back to maybe post-ChatGPT, or for some of us in the room pre-GPT, but whenever your aha moment was with LLMs. Trying to figure out what you can do with the tech, what you can't, what the boundaries are, we ended up using chat UIs, content UIs, existing applications, right?
To be able to really test the boundaries of these LLMs. We were also using multimodal to see what different kinds of inputs and outputs we could use the technology for. Then we realized we could ground the models: we had vector databases and RAG, and we were trying to get to accuracy and truth, if we can agree on that.
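As a concrete illustration, here's a minimal sketch of that grounding pattern: embed a small knowledge base, retrieve the most similar chunk, and constrain the model to answer from it. The documents, model names, and prompt are hypothetical assumptions, and this assumes an OpenAI-style client rather than any specific product's implementation.

```python
# Minimal RAG-style grounding sketch; data, models, and prompt are hypothetical.
from openai import OpenAI
import numpy as np

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

# A toy "vector database": pre-embedded knowledge-base chunks.
docs = [
    "Our PTO policy allows 20 days per year.",
    "Expense reports are due by the last business day of each month.",
]
doc_vecs = [
    item.embedding
    for item in client.embeddings.create(model="text-embedding-3-small", input=docs).data
]

def answer_grounded(question: str) -> str:
    # Embed the question and retrieve the most similar chunk (cosine similarity).
    q = client.embeddings.create(model="text-embedding-3-small", input=[question]).data[0].embedding
    sims = [np.dot(q, d) / (np.linalg.norm(q) * np.linalg.norm(d)) for d in doc_vecs]
    context = docs[int(np.argmax(sims))]
    # Ground the model: instruct it to answer only from the retrieved context.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": f"Answer using only this context:\n{context}"},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer_grounded("How many PTO days do I get?"))
```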
We had larger context windows and increased memory. We also weren't super, I would say, comfortable with having AI do work for us. So everything was a co-pilot, right? A buddy next to us who can help us get things done, but we don't want to be taking anybody's jobs away.
We didn't want to be automating work, until we realized it might actually be kind of nice to have agents that can do things for us: reason, use tools and various APIs, and orchestrate across different business problems. And this is the state, and sort of the space, that we're in right now.
We're not saying that these different approaches are wrong, but they're an approach to understanding the technology, and they're not going to build you a differentiated strategy, because everyone is doing the same thing. So why do we see these kinds of bolt-on, non-differentiated AI products persist? Working across dozens of enterprise companies at Freeplay, we've noticed a common trend emerge: companies rightly know they need to prioritize AI, but the way they do that is by creating a sort of centralized AI strategy.
And what happens is this centralized AI strategy starts running as a sidecar to their core product strategy, rather than the two being deeply integrated. There are different initiatives, sometimes even different teams. And then naturally, these bolt-on, non-integrated AI features and products start to proliferate. So what are some of the causes of this AI sideshow we're talking about here?
The first is that companies seek to mitigate the risk associated with AI by quarantining it into specific corners of the product. And there is real new risk here, right? There's this new reliability question you have to ask yourself, which is: can I even get this feature to work reliably enough to drive value for customers?
Second, we see teams prioritizing the technology over their customer needs. They become the hammer in search of the nail. Rather than trying to solve their customer problems by harnessing the technology, they're just trying to find any manifestation of that technology. And we see this manifest in a bunch of predictable ways.
We see teams building chatbots because chatbots demonstrate AI capability, not because customers are actually struggling with support. We see companies building document summarization, again, because it demonstrates capability, not because their users are suffering from information overload. And finally, we see companies pushing solutions out from the top down, rather than setting that top-level strategy and letting the bottoms-up discovery process be the manifestation of that priority.
So how do you avoid the AI sideshow here? The key is to integrate and align your AI and your product, and integrating AI risk into planning is a critical part of that. There is this new risk we're talking about, but instead of shying away from it and trying to quarantine AI to specific corners of the product or specific teams, you need to deeply integrate it into your product planning.
And this will require some new muscles, right? You need to build systems for evaluation and testing, because if you're doing good prototyping and testing, you can at least wrap your arms around that risk and know how to handle it.
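As a sketch of what those evaluation muscles can look like in practice: a small test suite you re-run every time the prompt or model changes. The cases, checks, and `run_feature` stub here are hypothetical placeholders, not any particular eval product.

```python
# A minimal evaluation-harness sketch; cases, checks, and run_feature are
# hypothetical placeholders for the AI feature under test.
from openai import OpenAI

client = OpenAI()

# Each case pairs an input with a cheap automated check on the output.
test_cases = [
    {"input": "How many PTO days do I get?", "must_contain": "20"},
    {"input": "When are expense reports due?", "must_contain": "month"},
]

def run_feature(user_input: str) -> str:
    # Stand-in for the feature under test (e.g., the grounded answerer above).
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": user_input}],
    )
    return resp.choices[0].message.content

passed = 0
for case in test_cases:
    output = run_feature(case["input"])
    ok = case["must_contain"].lower() in output.lower()
    passed += ok
    print(f"{'PASS' if ok else 'FAIL'}: {case['input']}")

print(f"{passed}/{len(test_cases)} passed")  # track this number release over release
```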
And then second, start with the customer problem. If you're inventing new problems to go solve with the advent of AI, you've probably gone astray. And finally, like we talked about, enable that bottoms-up discovery process for AI products. It's likely your product folks, who are boots on the ground every day, who understand the right solutions here. Give them the space to experiment, prototype, and, importantly, fail fast. But set that top-line strategy, and then allow the bottoms-up discovery process to take place.
This is how you ultimately get AI products that feel like a natural and cohesive part of the product experience, rather than feeling bolted on. And that's ultimately the hallmark of good, successful AI integration: AI products that need not announce themselves as AI, but rather just solve the customer problem better than what came before.
So the north star that Aliza and I are talking about today is AI products that are deeply and dynamically integrated into your product ecosystem. But the only way you get there is by aligning your strategy, your teams, and your roadmaps accordingly, and, importantly, avoiding the AI sideshow. This is admittedly an audacious north star.
And especially if you're stuck in this sideshow model, how do you find your way out? This is where we think this crawl, walk, run approach comes into play. We're all new to building generative AI products. To some degree or another, we're all building the plane while we're flying it.
The most successful teams we see are those that crawl, walk, run their way into this new era of generative AI products, because that allows you to build the capability iteratively while laying the foundation of that AI functionality throughout your product suite.
So I want to walk through an example here. We'll take a customer support SaaS company: let's say they have a shared inbox feature that customer support teams work out of. It's a mature product, but they want to start integrating AI. So in this crawl phase, you're starting to build embedded AI experiences.
In this phase, you're likely not building a whole lot of new product surface area. Rather, you're just adding AI on the back end and starting to accentuate and accelerate the existing functionality you have. If we take that customer support example, that might look something like building a feature that uses semantic search to surface previous similar questions, to help ground the user when they're responding to their customer.
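Here's a minimal sketch of that crawl-phase feature under some assumptions: a small set of past tickets, an off-the-shelf embedding model, and cosine similarity for ranking. The ticket data and model choice are illustrative, not a prescription.

```python
# Sketch of the crawl-phase feature: surface similar past tickets via semantic
# search. Ticket data and model choice are illustrative.
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer("all-MiniLM-L6-v2")

past_tickets = [
    "Customer can't reset their password",
    "Refund requested for a duplicate charge",
    "App crashes on login after the latest update",
]
ticket_vecs = model.encode(past_tickets, convert_to_tensor=True)

def similar_tickets(new_message: str, k: int = 2) -> list[str]:
    # Embed the incoming message and rank past tickets by cosine similarity.
    query_vec = model.encode(new_message, convert_to_tensor=True)
    scores = util.cos_sim(query_vec, ticket_vecs)[0]
    top = scores.argsort(descending=True)[:k]
    return [past_tickets[int(i)] for i in top]

print(similar_tickets("I was charged twice, can I get my money back?"))
```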
And then in the walk phase, this is where we're starting to build more contextual and personalized AI experiences. Here we actually are starting to build new product surface area, but we're probably not at the point yet where we need to fundamentally rethink our core app architecture and our UX.
If we go back to that example, that might look something like building a feature that suggests a draft ahead of time, so that when the user comes in, there's already a draft there, ready for them to start from.
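A sketch of that walk-phase pattern, assuming a pipeline that runs at ticket-ingestion time; the function signature, fields, and prompt are hypothetical.

```python
# Walk-phase sketch: pre-generate a suggested reply at ticket-ingestion time,
# so a draft is already waiting when the user opens the ticket.
from openai import OpenAI

client = OpenAI()

def suggest_draft(ticket_text: str, similar_resolutions: list[str]) -> str:
    examples = "\n".join(similar_resolutions)  # e.g., from the semantic search above
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Draft a short, polite support reply. Base it on these "
                           f"past resolutions:\n{examples}",
            },
            {"role": "user", "content": ticket_text},
        ],
    )
    # Stored as an editable draft for the human agent, never auto-sent.
    return resp.choices[0].message.content
```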
And then finally, where we land when we really start to run: this is where we're building those dynamic, interoperable, and integrated AI experiences throughout our product suite. This is the stage where you do start needing to fundamentally rethink your UI, your UX, and your app architecture, because now your AI features span the whole product. If we go back to our customer support example, it might look like an autonomous agent that can triage issues and respond to customers.
But importantly, it's operating across the product and feature set. And in order to incorporate that kind of functionality, you do need to start rebuilding core surface area and revisiting your UX. But importantly, along the way you're not throwing out functionality; it builds on top of what came before as you go, right? You're just extending it.
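To make that run-phase agent concrete, here's a minimal tool-calling sketch of a triage agent. The tools and single-step routing are hypothetical simplifications; a production agent would operate over many more tools and steps.

```python
# Run-phase sketch: a triage agent that chooses a tool across the product
# surface. Tools and routing prompt are hypothetical.
import json
from openai import OpenAI

client = OpenAI()

def escalate(ticket: str) -> str:
    return "escalated to a human agent"

def auto_reply(ticket: str) -> str:
    return "automated reply sent"

TOOLS = {"escalate": escalate, "auto_reply": auto_reply}

tool_specs = [
    {
        "type": "function",
        "function": {
            "name": name,
            "description": f"{name} a customer support ticket",
            "parameters": {
                "type": "object",
                "properties": {"ticket": {"type": "string"}},
                "required": ["ticket"],
            },
        },
    }
    for name in TOOLS
]

def triage(ticket: str) -> str:
    # Let the model pick exactly one tool, then dispatch to it.
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": "Triage the ticket by calling one tool."},
            {"role": "user", "content": ticket},
        ],
        tools=tool_specs,
        tool_choice="required",
    )
    call = resp.choices[0].message.tool_calls[0]
    args = json.loads(call.function.arguments)
    return TOOLS[call.function.name](**args)

print(triage("My invoice was charged twice this month."))
```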
And importantly, even at the crawl phase, you're still building embedded functionality, not this sort of bolt-on, non-integrated functionality. So I'll pass it to Aliza now. Yeah, so let's walk through a tangible example here, because there's a lot to unpack. This problem space, I feel like everyone knows it, and I've been living and breathing it for a few years.
HR service delivery, or employee self-service: all of us work in jobs, or run companies, where employees need to be able to get their questions answered quickly. And if they can't get those questions answered, they need help from a support person through a case; this could be a live agent, etc.
So we've spent a lot of time working in this space. This is also where some products have found product-market fit, especially with early Gen AI solutions. Where we started to, I would say, crawl with the technology was within our Help product. Help has two components: there's a knowledge base solution, and there's also a case management solution.
And so, early days, we took a look at the tech and said, where can we use Gen AI to really effect change for customers? Now, I know the knowledge base has become the back end for GPTs and just a best practice. But at the time we said, okay, we've got content generation in here, we've got translations in here, and this is the content that's fueling the answers to all of those questions that employees are asking.
And so there were two key features. The first was for content authors: they might come into an editor like this and upload, say, a policy doc. Imagine a benefits policy, 20-plus pages long. They don't necessarily want to write that article themselves, right? But they could have the AI ingest it and create an employee FAQ.
In this case, we had talking points for managers, and they're able to get a consistent format. The other thing I would mention is that we're thinking about content at scale. This isn't for small SMBs; this is for large enterprises, which have content teams of, say, three to 15 people, so you need a unified voice around the content that's coming out.
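A minimal sketch of that ingestion flow: one call that turns a long policy doc into a consistently formatted FAQ plus manager talking points. The format spec and model here are illustrative assumptions, not Workday's actual implementation.

```python
# Sketch of policy-doc ingestion into a fixed article format; the format spec
# and model are illustrative, not Workday's implementation.
from openai import OpenAI

client = OpenAI()

FAQ_FORMAT = (
    "From the document, produce: (1) a three-sentence summary, (2) five employee "
    "FAQs with answers, (3) talking points for managers. Use only facts from the "
    "document, and follow this structure exactly so every article reads the same."
)

def policy_to_faq(policy_text: str) -> str:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system", "content": FAQ_FORMAT},
            {"role": "user", "content": policy_text},
        ],
    )
    return resp.choices[0].message.content  # a draft for the content team to review
```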
On top of that feature, we layered translations, which you can see in the GIF here. In just a couple of clicks, I can go in and translate into one of the 34 different languages that we support.
And you can see the left-hand panel we added here, with the ability to actually manage versions as well. So I might have my base article, I'm generating talking points in English, and then I want to translate into French and Spanish, maybe Japanese, and you can see that you're managing those versions too.
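One hypothetical way to model that version management, as a sketch: a base article with a per-language version history. The dataclass shape and field names are assumptions for illustration only.

```python
# Hypothetical model of version-managed translations: a base article plus a
# per-language version history.
from dataclasses import dataclass, field

@dataclass
class ArticleVersion:
    language: str          # e.g., "en", "fr", "es", "ja"
    text: str
    generated_by_ai: bool  # provenance matters for human-in-the-loop review

@dataclass
class Article:
    base: ArticleVersion
    translations: dict[str, list[ArticleVersion]] = field(default_factory=dict)

    def add_translation(self, version: ArticleVersion) -> None:
        # Each language keeps its own history, so an edit to the French version
        # never clobbers the Japanese one.
        self.translations.setdefault(version.language, []).append(version)
```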
A couple of things I want to call out here. Yes, we're using Gen AI in translations. But this isn't in-your-face sparkles and chatbots and text fields all over the place. This was built for users who didn't know about Gen AI (this was 2023) and wanted to be able to get in and use the features without having to understand the underlying technology.
We're also keeping that human in the loop, and we want the disclaimer around AI, so we make sure that we've got enough little purple sparkles to let them know what they're using. But it's not the entire experience. This allowed us to go GA in August 2024 and, I would say, start crawling with the functionality.
So that covers our content teams. Then we moved into what I would say is walking. Now we have our content drafted, but we actually need to solve the self-service problem. So as a manager, I might need to come in; Elaine, in this case, is trying to do a location change to San Francisco.
She knows a lot of the fields, but not all of them. So she now has this contextually aware co-pilot, Workday Assistant, that lives across Workday and that she can prompt. A lot of us are familiar with this functionality, but there are a couple of points I want to make here.
One, we have the contextually aware suggestions, so it knows what's happening when I'm on the page. Two, around data processing: if you're looking at a help article, it's generally customer content, which is sensitive, but not nearly as sensitive as PII, or personally identifiable information. Think about tasks more around, say, pay or compensation, things that are really sensitive, where employees are entering really sensitive information.
So this is the next level of, sort of, walking with the capabilities. The other piece I'd mention is that this was a platform capability, meaning we had to be working across our suite. We have HCM and financials; think benefits, procurement, core HCM, etc. And so there's a higher level of top-down and bottoms-up alignment that had to happen to get these capabilities out the door.
Then finally, running. Extending the same use case here: you may have seen, a few months back, we announced our agent system of record. A subset of that functionality, targeted again at those employees and managers, was really around the agentic capabilities behind Workday Assistant. Again, our users don't necessarily want to know about agents, or have the technical expertise around them, but we still have that work happening behind the scenes, where our assistant becomes a lot more autonomous and proactive, listening for policy changes and notifying us with suggestions as well.
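As a sketch of that proactive pattern: an agent triggered by a policy-change event rather than a user prompt, drafting a suggestion for human review. The event shape, field names, and routing are hypothetical.

```python
# Sketch of the proactive pattern: triggered by a policy-change event, not a
# user prompt. Event shape and routing are hypothetical.
from openai import OpenAI

client = OpenAI()

def on_policy_change(event: dict) -> dict:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {
                "role": "system",
                "content": "Summarize this policy change and suggest the action "
                           "an affected employee should take. Be brief.",
            },
            {"role": "user", "content": event["diff"]},
        ],
    )
    # Surfaced as a notification with a suggestion, never auto-applied.
    return {
        "audience": event["affected_group"],
        "suggestion": resp.choices[0].message.content,
    }
```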
And so, thinking through this at scale, there's a much higher level of, I would say, top-down strategy with bottoms-up execution that has to happen, threading the needle across these different product experiences. So you can see we've gone from a single product within a SKU all the way across our core platform, and, as some of you may know, we serve about 60% of the S&P 500, so a pretty broad group.
So where we land with all of this, when we talk about not making AI a sideshow: we're not telling you to stop working on agents or stop caring about AI, but understand that these are stepping stones in teaching your organization, training it up on what it means to actually build impactful AI experiences.
And so as you mature as an organization, ideally where we want to get to is building dynamic products. I'm hearing some of this today in the talks, if you heard Sarah or Brian earlier, about building purposeful, vertical-specific products.
I think it's really interesting when we start thinking about dynamic products in terms of new problem spaces. I don't know if anyone else feels this way, but sometimes I feel like we're solving yesterday's roadmap with a much more powerful technology. And so as we digitize new data and new inputs, in terms of our environments and spaces, we can see the problem space of the products we're creating really extend.
I think this gets especially compelling with multimodal. Interoperability and RL are still pretty relevant within the agent sphere, but when we have frictionless multimodal experiences that interoperate, and dynamic products that are responsive to your environment, this is where we really start to see, I would say, the next generation of products come into play.
So hopefully this sparked a few thoughts, maybe some questions. If you want to connect, feel free to scan our QR codes. Happy to connect if you want to drop us a note. We'll also be around the rest of the week. So happy to chat. And we are right at time.
Look at that. Thanks, everyone.