
Build Dynamic Products, and Stop the AI Sideshow — Eliza Cabrera (Workday) + Jeremy Silva (Freeplay)



00:00:00.000 | All right. Well, thank you for joining us. We are here to talk AI products and specifically
00:00:21.120 | dynamic products, which we'll unpack in the next 20 minutes or so. A little bit about us before we
00:00:27.960 | jump in. I am Eliza Cabrera. I'm a principal AI product manager at Workday. I'm currently building,
00:00:34.840 | with an incredible team, our financial audit agent. I also led go-to-market for our policy agent,
00:00:41.080 | as well as early access for our assistant, which is more like a co-pilot, and some of our
00:00:46.440 | early-days Gen AI features as well. My name is Jeremy Silva. I come from a data science and
00:00:52.620 | machine learning background. I've been building language models since the dark ages, pre-GPT-3,
00:00:57.180 | that is. And now I lead product at a company called Freeplay, which exists to help teams build
00:01:01.820 | great AI products. Awesome. Let's get into it. So if there's one thing that we want you to take away
00:01:10.220 | from this session, it's to stop the AI sideshow, which I know sounds a little bit counterintuitive.
00:01:16.620 | We're all at an AI conference. All of us are talking about agents and AI. It's all over the place,
00:01:23.180 | right? So what exactly are we talking about here? Having AI leading your products, your go-to-market,
00:01:31.340 | your strategy. This was a really great approach when we were all trying to communicate that we were at the
00:01:36.700 | forefront of this technological disruption, right? But now everyone is really kind of saying the same thing.
00:01:45.660 | And if we look at the different products that have resulted, in hindsight, which is 20/20,
00:01:52.380 | right? We're able to see that what we've been doing is building product to try to figure out what these
00:01:59.020 | different technological breakthroughs can do for us. So let's unpack what we're talking about here a bit.
00:02:08.140 | So let's go back maybe post-ChatGPT, maybe for some of us in the room pre-GPT, but whenever your
00:02:14.940 | sort of aha moment was with LLMs, trying to figure out what you can do with the tech, what you can't,
00:02:20.380 | what the boundaries are, we ended up using chat UIs, content UIs, existing applications, right? To be able to
00:02:29.180 | really test the boundaries of these LLMs. We were also using multimodal to see what different kinds of inputs
00:02:35.980 | and outputs we could use the technology for. Then we realized we could ground the models.
00:02:43.820 | We had vector databases and RAG. We were trying to get to accuracy and truth, if we can agree on that.
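The grounding pattern described here, retrieve relevant passages and then answer from them, can be sketched in a few lines. This is a toy illustration, not any particular vendor's stack: the bag-of-words "embedding" stands in for a real embedding model, and the prompt assembly stands in for the vector database and LLM call.

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a
    # dedicated embedding model and store vectors in a vector database.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    # Rank documents by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(docs, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def grounded_prompt(query: str, docs: list[str]) -> str:
    # Stuff the retrieved passages into the prompt so the model answers
    # from known content rather than from its parametric memory.
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"
```

The point of the sketch is the shape of the pipeline, not the scoring function: retrieval narrows the model's world to content you trust, which is what "getting to accuracy and truth" meant in practice.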
00:02:51.020 | We had larger context windows and increased memory. We also weren't super, I would say, comfortable with
00:03:01.340 | having AI do work for us. So everything was a co-pilot, right? A buddy next to us who can help us get
00:03:07.980 | things done, but we don't want to be taking anybody's jobs away. We didn't want to be automating work,
00:03:14.380 | until we realized it might actually be kind of nice to have agents that can do things for us,
00:03:23.500 | to reason, to be able to use tools and various APIs to orchestrate across different business problems.
00:03:31.100 | And this is the state and sort of space I would say that we're in right now. We're not saying that these
00:03:37.420 | different approaches are wrong, but they're an approach to understand the technology. And it's not going to
00:03:44.540 | build you a differentiated strategy because everyone is doing the same thing.
00:03:48.460 | So why do we see these kind of like bolt-on, non-differentiated AI products persist?
00:03:56.700 | By working across dozens of enterprise companies at Freeplay, we've noticed a common trend emerge,
00:04:02.940 | which is companies rightly know they need to prioritize AI, but the way they do that is by creating
00:04:07.660 | this sort of centralized AI strategy. And what happens is this centralized AI strategy starts
00:04:13.500 | running as this sort of sidecar to their core product strategy, rather than the two being deeply
00:04:18.140 | integrated. There are different initiatives, sometimes even different teams. And then naturally, these sort
00:04:23.180 | of like bolt-on non-integrated AI features and products start to proliferate. So what are some of the
00:04:29.340 | causes of this sort of AI sideshow that we're talking about here? The first is that companies seek to
00:04:35.180 | mitigate the risk associated with AI by quarantining it into specific corners of the product, albeit there
00:04:41.020 | is new risk here, right? There's this new reliability question you have to ask yourself,
00:04:45.180 | which is: can I even get this feature to work reliably enough to drive value for customers?
00:04:50.140 | Second, we see teams prioritizing the technology over their customer needs. They become the hammer in
00:04:59.100 | search of the nail. Rather than trying to solve their customer problems by harnessing the technology,
00:05:03.660 | they're just trying to find any manifestation of that technology. And we see this manifest in a bunch
00:05:08.460 | of predictable ways. We see teams building chatbots because chatbots demonstrate AI capability,
00:05:13.740 | not because customers are actually struggling with support. We see companies building document
00:05:18.060 | summarization, again, because it demonstrates capability, not because their users are suffering
00:05:22.620 | with information overload. And finally, we see companies pushing solutions out from the top down,
00:05:29.820 | rather than setting that top-level strategy and letting
00:05:34.620 | the bottoms-up discovery process be the manifestation of that priority.
00:05:38.540 | So how do you avoid the AI sideshow here? The key is to integrate and align your AI and your product,
00:05:47.580 | and integrating AI risk into planning is a critical part of that. There is this new risk we're talking about,
00:05:55.180 | but instead of shying away from that risk and trying to quarantine AI to specific corners
00:05:59.980 | of the product or specific teams, you need to deeply integrate that into your product planning.
00:06:04.700 | And this will require some new muscles here, right? You need to build these systems
00:06:09.100 | for evaluation and for testing, because if you're doing good prototyping and testing, you can at least kind
00:06:14.220 | of wrap your arms around that risk and know how to handle it. And then second, start with the customer
00:06:20.220 | problem. If you're inventing new problems to go solve with the advent of AI, you've probably gone
00:06:24.700 | astray here. And finally, like we talked about, enable that bottoms-up discovery process for AI products.
00:06:31.260 | It's likely your product folks, who are boots on the ground every day, who understand the right solutions
00:06:36.540 | here. Give them the space to experiment, prototype, and importantly, fail fast, but set that top-line strategy,
00:06:43.820 | and then allow the bottoms-up discovery process to take place. This is how you ultimately manifest AI
00:06:49.820 | products that feel like a natural and cohesive part of the product experience, rather than feeling bolted
00:06:55.420 | on. And that's ultimately the hallmark of good, successful AI integration: AI products that
00:07:02.540 | need not announce themselves as AI, but rather just solve the customer problem better than what came before.
00:07:08.940 | So the north star that Eliza and I are talking about today is AI products that are deeply and
00:07:13.900 | dynamically integrated into your product ecosystem. But the only way you get there is by aligning your
00:07:19.340 | strategy, your teams, and your roadmaps accordingly. And importantly, avoiding the AI sideshow.
00:07:25.900 | This is admittedly an audacious north star. And especially if you're stuck in this sideshow
00:07:32.700 | model, how do you find your way out? This is where we think this crawl, walk, run approach comes into play.
00:07:38.140 | We're all new to building generative AI products. To some degree or another, we're all building the
00:07:45.020 | plane while we're flying it. The most successful teams we see here are those that crawl, walk, run
00:07:52.060 | their way into this new era of generative AI products. Because what that allows you to do is
00:07:57.740 | build the capability iteratively while laying the foundation of that AI functionality
00:08:04.380 | throughout your product suite. So I want to walk through an example here. We'll
00:08:10.540 | take a customer support SaaS company. Let's say they have a shared inbox feature that customer
00:08:16.460 | support teams come in to work out of, a mature product, but they want to start integrating AI. So in this
00:08:22.300 | crawl phase, you're starting to build embedded AI experiences. You're likely in this phase, not building a
00:08:28.940 | whole lot of new product surface area. Rather, you're just adding AI on the back end and
00:08:33.740 | starting to kind of accentuate and accelerate the existing functionality you have. If we take that
00:08:38.140 | customer support example, that might look something like, you know, building a feature that uses semantic
00:08:43.180 | search to surface previous similar questions, to help the user ground themselves when they are responding
00:08:48.780 | to their customer. And then in the walk phase, this is where we're starting to build more contextual and
00:08:55.340 | personalized AI experiences. Here we are starting to build new product surface area,
00:09:01.980 | but we're probably not at the point yet where we need to fundamentally rethink our core app
00:09:05.900 | architecture and our UX. If we go back to that example, that might look something like, you know,
00:09:12.140 | building a feature that will suggest a draft ahead of time so that when the user comes in,
00:09:17.180 | there's already a draft there ready to go for them to start from. And then finally, where we land when we really
00:09:22.140 | start to run, this is where we're building those dynamic, interoperable and integrated AI experiences
00:09:28.060 | throughout our product suite. This is the stage where you do start needing to fundamentally
00:09:34.060 | rethink your UI, your UX, and your app architecture. Because now your AI features span the product. If we go back to
00:09:40.540 | our customer support example, it might look like an autonomous agent that can triage issues and respond to
00:09:45.100 | customers. But importantly, it's operating across the product and feature set. And in order to incorporate
00:09:50.860 | that kind of functionality, you do need to start rebuilding core surface area and starting
00:09:55.580 | to revisit your UX. But importantly, along the way, you're not throwing out functionality;
00:10:01.180 | it's building on top of itself, that functionality is building as you go, right?
00:10:06.060 | You're just extending it. And importantly, even at the crawl phase, you're still building embedded
00:10:10.940 | functionality, not this sort of bolt-on, non-integrated functionality. So I'll pass it to Eliza now.
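The run phase in that customer support example can be pictured as a triage loop that decides, per ticket, whether to answer autonomously or keep a human in the loop. Everything in this sketch is hypothetical: a real agent would make the classify step an LLM call with confidence checks, not keyword rules.

```python
from dataclasses import dataclass

@dataclass
class Ticket:
    subject: str
    body: str

def classify(ticket: Ticket) -> str:
    # Stand-in for an LLM classification call; a real agent would
    # prompt a model and check its confidence before acting.
    text = (ticket.subject + " " + ticket.body).lower()
    if "refund" in text or "legal" in text:
        return "escalate"       # high-stakes: keep a human in the loop
    return "auto_respond"

def triage(tickets: list[Ticket]) -> dict[str, list[Ticket]]:
    # Route each incoming ticket into an autonomous or human queue.
    queues: dict[str, list[Ticket]] = {"auto_respond": [], "escalate": []}
    for t in tickets:
        queues[classify(t)].append(t)
    return queues
```

The design point matches the talk: the agent operates across the whole feature set (inbox, drafting, escalation), so the routing decision, not any single response, becomes the core of the product surface.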
00:10:17.580 | Yeah, so let's walk through a tangible example here, because there's a lot to unpack.
00:10:24.140 | This problem space, I feel like everyone knows it; I've been living and breathing it for a few years:
00:10:29.500 | HR service delivery, or employee self-service. All of us work in jobs, or you're running a company, and
00:10:36.540 | your employees need to be able to get their questions answered quickly. And if they can't get those questions answered,
00:10:42.380 | they need help from a support person, through a case; this could be a live agent, etc. So we've spent
00:10:48.780 | a lot of time working in this space. This is also where some products have found product market fit,
00:10:53.580 | especially with early sort of gen AI solutions. So where we started to, I would say, crawl with the
00:11:00.940 | technology, this was within our help product. So help has two components: there's a knowledge-base solution,
00:11:07.820 | there's also a case management solution. And so early days, we took a look at the tech and said,
00:11:12.940 | where can we use Gen AI to really effect change for customers? And so I know knowledge bases have become
00:11:19.020 | the back end for GPTs and just a best practice. But at the time we said, okay, we've got content gen in
00:11:24.620 | here, we've got translations in here, this is the content that's fueling the answers to all of those
00:11:29.500 | questions that employees are asking. And so there were two key features. One was for content
00:11:35.020 | authors: they might come into an editor like this and upload, say, a policy doc. So
00:11:40.060 | imagine a benefits policy, 20-plus pages long. They don't want to necessarily write that article,
00:11:45.660 | right? But they could have the AI ingest it and create an employee FAQ. In this case, we had talking points for
00:11:51.980 | managers, and they're able to get a consistent format. So the other thing I would mention is,
00:11:56.540 | we're thinking about content at scale. So this isn't for small SMBs; this is large enterprise,
00:12:02.300 | who have content teams of, say, three to 15 people. And so you need to have a united
00:12:08.060 | voice around that content that's coming out. So on top of that, the other feature we added was translations,
00:12:14.460 | which you can see in the GIF here. In just a couple of clicks, I can go in and translate into one of the
00:12:21.340 | 34 different languages that we support. And you see we added, on the left-hand panel here,
00:12:26.460 | the ability to actually manage versions as well. So I might have my base article,
00:12:31.020 | I'm generating talking points in English, and then I want to translate into French and Spanish,
00:12:35.420 | maybe Japanese. And you can see that you're managing those versions as well.
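One way to think about the version management being described is a base article plus per-language versions hanging off it. A minimal sketch of that data model (the names here are invented for illustration, not Workday's actual schema):

```python
from dataclasses import dataclass, field

@dataclass
class Article:
    title: str
    base_language: str
    body: str
    # Language code -> translated body. In the product described, these
    # would be generated by the Gen AI translation feature.
    translations: dict[str, str] = field(default_factory=dict)

    def add_translation(self, lang: str, body: str) -> None:
        self.translations[lang] = body

    def versions(self) -> list[str]:
        # Every language version the content team can manage,
        # base language first, as in the left-hand version panel.
        return [self.base_language, *sorted(self.translations)]
```

Keeping translations keyed off one base article is what makes the "united voice" point above tractable: regenerate the base, and every language version has a single source to re-derive from.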
00:12:40.620 | A couple of things I want to call out here. Yes, we're using Gen AI in translations. But this isn't
00:12:48.940 | in-your-face sparkles and chatbots and text fields all over the place. This was built for users who
00:12:56.140 | didn't know about Gen AI (this is 2023) and wanted to be able to get in and use the features without
00:13:02.780 | actually understanding the functionality. Also, you know, keeping that human in the loop, we want to
00:13:08.460 | have the disclaimer around AI. And so we make sure that we've got enough little purple sparkles to
00:13:14.140 | let them know what they're using. But it's not the entire experience here. So this allowed us to go GA
00:13:21.340 | in 2024, August, I should say, sort of crawling with the functionality.
00:13:31.740 | So on top of that (that's our content teams), we then moved into what I would say is walking.
00:13:38.940 | Now we have our content drafted, but we actually need to solve the self-service problem.
00:13:44.780 | So as a manager, I might need to come in, Elaine in this case is trying to do a location change to San
00:13:51.100 | Francisco. And she knows a lot of the fields, but not all of them. And so she now has this sort of
00:13:57.020 | contextually aware co-pilot Workday Assistant that lives across Workday that she can sort of prompt.
00:14:03.820 | A lot of us are familiar with this functionality, but a couple of points I want to make here.
00:14:08.220 | One, we have the contextually aware suggestions, so it knows what's happening when I'm on the page.
00:14:13.500 | Also, around the data processing: if you're looking at a help article, it's generally customer content,
00:14:19.340 | which is sensitive, but not nearly as sensitive as PII, or personally identifiable information.
00:14:24.700 | Think about these tasks more around, say, pay or compensation, things that are really sensitive,
00:14:29.340 | where employees are putting really sensitive information in. So this is the next level of
00:14:34.540 | sort of walking with the capabilities. The other piece I'd mention is that this was a platform
00:14:40.380 | capability, meaning that we had to be working across our suite. So we have HCM and financials,
00:14:46.300 | think benefits, procurement, core HCM, etc. And so there's a higher level of sort of top down and
00:14:53.180 | bottoms up alignment that had to happen to get these capabilities out the door.
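The contextually aware suggestions mentioned above (the assistant knows what's happening on the page you're on) can be sketched as a lookup from page context to candidate prompts. All names here are invented for illustration; a real implementation would rank suggestions with a model and the user's role and permissions.

```python
# Map the page (task) the user is on to suggested assistant prompts.
SUGGESTIONS_BY_CONTEXT: dict[str, list[str]] = {
    "location_change": [
        "What fields are required for a location change?",
        "Does moving to San Francisco affect my tax withholding?",
    ],
    "benefits_enrollment": [
        "When does my new coverage start?",
    ],
}

def suggest(page_context: str) -> list[str]:
    # Fall back to a generic opener when the page isn't recognized.
    return SUGGESTIONS_BY_CONTEXT.get(page_context, ["How can I help?"])
```

Even this trivial version shows why platform-level alignment was needed: the context keys span products (HCM, benefits, procurement), so every team has to agree on what "context" means before the assistant can use it.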
00:14:57.420 | Then finally, running. So extending the same use case here, you may have seen a few months back,
00:15:05.900 | we announced our Agent System of Record. A subset of that functionality targeted towards, again,
00:15:11.900 | those employees and managers was really around the agentic capabilities behind the Workday Assistant.
00:15:17.980 | So again, our users don't necessarily want to know about, or have the technical expertise around, agents,
00:15:26.540 | but we still have that work happening behind the scenes, where our assistant becomes a lot more autonomous,
00:15:32.620 | proactive, listening to policy changes, notifying us with suggestions as well. And so you can see just
00:15:39.100 | thinking through this at scale, there's a much higher level of, I would say, sort of top-down strategy with bottoms-up execution that then happens,
00:15:47.660 | threading the needle across these different product experiences.
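The proactive behavior described here, an assistant that listens for policy changes and notifies people with suggestions, is at its core a publish/subscribe loop. A minimal sketch, with invented names and a formatted string standing in for an LLM-drafted suggestion:

```python
from typing import Callable

Suggestion = str

class PolicyWatcher:
    """Sketch of a proactive assistant: subscribers are notified with a
    suggested follow-up whenever a policy changes."""

    def __init__(self) -> None:
        self._subscribers: list[Callable[[Suggestion], None]] = []

    def subscribe(self, callback: Callable[[Suggestion], None]) -> None:
        self._subscribers.append(callback)

    def policy_changed(self, policy: str, summary: str) -> None:
        # In the real system an LLM would draft the suggestion; here we
        # just format one so the notification flow is visible.
        suggestion = f"Policy '{policy}' changed ({summary}); review related articles."
        for notify in self._subscribers:
            notify(suggestion)
```

The shift from the walk phase to this is exactly the one described in the talk: the assistant stops waiting for a prompt and starts reacting to events in the system of record.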
00:15:51.820 | So you can see here we've gone from a single product within a SKU all the way across our core platform. Some of you may know this or not,
00:16:02.460 | but we serve about 60% of the S&P 500, so a pretty broad group.
00:16:08.220 | So where we would land with all of this when we talk about not making AI a sideshow,
00:16:13.980 | we're not telling you to stop working on agents or stop caring about AI, but understand that these are
00:16:21.900 | stepping stones in terms of teaching your organization, training up your organization on what it means to
00:16:28.620 | actually be building impactful AI experiences. And so as you sort of, I would say, mature as an organization,
00:16:36.700 | ideally where we want to get to is building dynamic products. I'm hearing some of this today in some of
00:16:42.620 | the talks, if you heard Sarah or Brian talking earlier, about building purposeful sort of vertical specific
00:16:49.900 | products. I think it's really interesting when we start thinking about dynamic products in terms of
00:16:56.620 | new problem spaces. I don't know if anyone else feels this way, but sometimes I feel like we're
00:17:01.180 | solving yesterday's roadmap with just a much more powerful technology. And so as we digitize
00:17:07.340 | new data, new inputs in terms of our environment and spaces, we can see the problem space of the products
00:17:13.900 | that we're creating really sort of extend. I think especially with multimodal, this is where this
00:17:19.980 | gets really compelling as well. When we have frictionless multimodal experiences that interoperate,
00:17:25.900 | I would say interoperability and RL are still pretty relevant within the agent sphere.
00:17:30.780 | But when we think about dynamic products that are sort of responsive to your environment,
00:17:35.740 | this is where we really start to see, I would say, the next generation of products come into play.
00:17:42.460 | So hopefully this sparked a few thoughts, maybe some questions. If you want to connect, feel free to scan
00:17:50.940 | our QR codes. Happy to connect if you want to drop us a note. We'll also be around the rest of the week.
00:17:56.060 | So happy to chat. And we are right at time.
00:18:00.140 | Look at that.
00:18:01.340 | Thanks, everyone.