So welcome. I know a lot of people might be familiar with you and especially all the amazing work you've done at Google in the past and with Sierra overall, but maybe just give a one-minute intro on what Sierra is, who you serve, the scale of it. Yeah. So Sierra, in a nutshell, we help businesses build better, more human customer experiences with AI.
And concretely, what we're trying to do is bridge this age-old gap between businesses wanting to provide great care, customer service, customer experience on the one hand, and the impossibility of doing that on the other hand because of cost. And I think we've all been there, just like being on hold.
It's like zero, zero, zero, zero, zero. Or I was walking into work a couple weeks ago and there's a dude on a call holding his iPhone with AirPods in, yelling representative, representative, representative, representative, representative. And I'm not sure what circle of hell involves waiting on hold and trying to get a problem solved, but I'm sure it's one.
And we're trying to bridge that gap. And so concretely, we're a year and change after launch. We have hundreds of customers. We'll serve hundreds of millions of consumers this year. We work with folks like ADT, the largest home security company in the country, built an AI for them. If any of you have a SiriusXM subscription and contact them, you'll speak with Harmony, an AI agent that we built with and for them, who picks up the phone and can help you get back up and running, including sending a satellite signal from space to get your radio up and running again.
We work with one of the largest mortgage originators in the country and then a lot of local tech companies that you would have heard about. And so that's what we do in a nutshell. And what excites us is we think that every company in the future is going to have an agent, its own branded customer-facing agent.
And we think it's what comes after the website, what comes after the mobile app. And we want to help the great companies of the world build their own and do it well. So talking about the AI architect, I would say the killer use cases of AI today are coding and customer support.
On the coding side, I would guess vibe coding and some of these ideas are kind of taking hold. Yeah. On the customer support side, you know, Brett mentioned this idea of the AI architect where instead of managing software, you're almost like the personality coach and really thinking through what's the vibe that the agent should have, how does it integrate?
So what is an AI architect? So around the emergence of the Internet and the web, there was this role of webmaster, right? And you don't really hear webmaster thrown around a lot these days. But it was someone who was creating a company's, in essence, digital storefront, right? If they're a business.
And thinking about not only the technology, right? Are you building an ASP or you have static pages or whatever in the dark ages of the web? But also it was like, what does it look like? What does it feel like? And so on. So I think the AI architect, I would say, is kind of the AI era and AI agent version of the webmaster in a way.
And it's actually, Brett shared on the podcast, it's a role that we've seen emerge organically across a set of our customers. And we heard it first from one of our largest customers, where the team of folks who are responsible for managing, coaching, improving, building their agent came to call themselves AI architects.
And I think there are three parts to it. One is you've got to understand the technology, have a little bit of a feel for what agents can do. And so it doesn't mean that you've, you know, pre-trained a trillion-parameter model or even been hands-on with LangChain or even vibe coding.
But having a feel for the capabilities, number one. Number two is, and a pattern we've seen, some interesting things is a company's agent needs to not only manifest the functional capabilities of the company, but also be something of a brand ambassador, right? What's the voice? What are the values?
What's the tone? How do you come across? How do you create a connection with the customer? So there's a real, I would say, aesthetic and taste element to it. How should it sound? Does it have a persona? Right? Some of our customers' agents are the X company virtual agent.
In contrast, we work with a company called Chubbies. They make great, very short shorts that I am not cool enough to wear. Their agent is named Duncan Smothers and will tell kind of irreverent, bro-y jokes and so on and speak in a really funny tone. So making decisions about that and many other ways that the agent comes across is the second part of it.
And then third, ultimately, a business wants to, in engaging with its customers, drive business outcomes. So what business outcomes are you driving towards? So it's this three hats, it's technology, it's experience and aesthetics and design, and it's business. And I think it will be one of the fastest growing job types in the next five years.
And from what I've seen, you know, a front row seat on this, one of the most interesting as well. Were most of them already in the customer support org? Or are some of these people coming from more technical teams and kind of like creating this blend of a role?
All of the above. But the area I've been most excited about is seeing individuals in customer experience teams. And, you know, engineering teams are often celebrated and held up and so on. Your CX team less often so. But what's emerged are people who really do have a feel for what a great customer experience looks like and are hands on with the technology enough and have a sense for what the business is trying to do.
So the answer is folks emerge from all of those teams. I would say the most common and the one I'm most excited about are the folks who have been close to the customer, in service, support, care, retailing settings and so on, who kind of put this badge on and become the AI architect or one of the AI architects for their business.
I'm sure there's a lot of people in the audience that have been tasked to figure out the AI strategy of their company. Yes. Whatever that means. The board says we need an AI strategy. The board says they're going to be really angry. Yes. So when you think about all the AI architects that you work with, what are like some of the traits of the most successful one?
Are they really curious? Do they try a lot of products? Are they very structured in how they evaluate? Are they maybe more vibes-based in how they think about what tools to use? I mean, there's a lot in what has made companies we've worked with successful in developing an AI strategy and actually applying it.
I think broadly across the businesses we work with, the most successful have not let perfect be the enemy of the good. You think about large language models and agents. These are probabilistic pieces of software that could say or do anything. And so necessary in adopting them is some amount of risk tolerance and being willing to step into the pond and try things out.
So a spirit of exploration, trying new things and taking some risk is number one. Number two is a deep focus on actually solving customer problems and real business problems. I think too often there's this, hey, let's apply some AI to that and we'll have, you know, emerge from that our AI strategy.
No, no, no, no, no. Like start with a concrete valuable problem to solve, and it can be very narrow. You know, to give you a sense for where we and one of our customers started, it was something as simple as processing a single return. And we celebrated and they celebrated when their AI picked up the phone and successfully drop-shipped someone a new pair of shoes and gave them a shipping label to print.
Right, it is not, you know, the pinnacle of complexity and so on, but you start somewhere and learn and grow from there. And then the last thing I would say is not shoehorning the way you've built teams in the past or done things in the past into the AI era.
And so our most successful customers and partners have actually re-architected their customer experience and customer service teams around supporting the AI agent in being and doing better. So there's a set of people, for instance, of one of our customers who will review a couple hundred conversations a day and basically coach and refine the agent on how to do it better, how to say it better, how to make better decisions, how to have greater empathy, how to have better judgment.
And that was not a team that has existed, you know, anywhere in the past. And so really thinking from first principles and not just trying to translate naively the old to the new would be a third element of it. You know, architect kind of has a technical connotation in enterprise, like, hey, you're the software architect.
What were some of the build versus buy fallacies that you've seen working with customers where you maybe have the customer support team that just wants something today and the engineering team is like, oh, we can just build this. It's just going to take three times as long and cost twice as much.
That's my example. The multiples are more than that, but yes. Yeah. Yeah. So it's such an important question and we get the, oh, we're going to build our own. You know, why should we work with you all the time? And it's funny. We have a slide. We call it the agent iceberg where I think technical teams think, oh, awesome.
We're going to choose our language model. You know, should we use LangGraph or LangChain off the shelf? You know, what embeddings model will we use? What vector database? And, you know, maybe we'll integrate some tools. You know, we're done. And then you put on your scuba tank and you go under the surface of the ocean like, oh, my God.
You know, there are hundreds of things. How do you do regression testing and unit testing? How do you do model migration and model upgrades? In voice, it gets just shockingly complicated where how do you separate primary from secondary speaker, handle interruptions, and a thousand other things in the agent development lifecycle.
And so we come to our customers with what we call AgentOS, our platform for building agents. And it's a very sophisticated toolkit for building, in code, very sophisticated customer-facing agents. Now, the architect, right, there's this whole other side to building excellent customer-facing agents, which is the experiential, the brand, the marketing.
So paired with that, we have a set of no-code tools that enable non-technical users to build, refine, coach, edit, update their agent. And importantly, these two seamlessly interoperate. And so I think when folks approach us and say, hey, we're just going to roll our own, they look at, oh, my goodness, all of the things under the surface of the iceberg.
The problems that we have spent, you know, the better part of two years running into and then solving and pulling together in a very coherent platform to build these scaled customer facing agents that can pick up the chat, pick up the phone, and handle a high degree of complexity.
And the set of tools for non-technical users to contribute to the agent as well. And that rings quite true. And where we have had companies we've interacted with go down the path of build your own, we've had many of them come back nine months later and it's like, hey, it was deeper and darker than we expected under there.
You know, can we talk? So that's kind of the journey and the pattern that we see. Yeah. What's the agent building iteration process? Like when people are building on Sierra or when you're seeing people build agents, like how should people think about how to push the envelope? And you can also do things like you couldn't A/B test a customer support person before.
Now with Sierra you can kind of have different personalities. Like do you see people be very creative with that? Yeah. So a couple levels. One, we've had to essentially invent a new software development lifecycle. We call it the agent development lifecycle where you have this non-deterministic piece of software.
So how do you test it? Well, one of the things we've discovered is the solution to most problems with AI is more AI. And so when you're testing a company's agent, how do you do that? You can't just put in a single input and hope you get the right output.
We've built a whole user simulation testing harness where we can create dozens of different personas with simulated accounts, even simulated devices that they're troubleshooting and, you know, the amber light is on or off and so on. And so first and foremost have had to think through all of the parts of the software development lifecycle.
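A minimal sketch of what a persona-based simulation harness like that might look like (all names and the toy resolution check here are hypothetical and simplified, not Sierra's actual API): define personas with simulated account and device state, replay each against the agent several times because the software is probabilistic, and report a pass rate per persona.

```python
from dataclasses import dataclass, field

@dataclass
class Persona:
    """A simulated customer: a goal plus simulated account/device state."""
    name: str
    goal: str
    device_state: dict = field(default_factory=dict)

def simulate_conversation(agent, persona, max_turns=10):
    """Drive one simulated conversation; succeed if the agent resolves the goal."""
    transcript = []
    user_msg = f"Hi, I need help: {persona.goal}"
    for _ in range(max_turns):
        reply = agent(user_msg, persona.device_state)
        transcript.append((user_msg, reply))
        if "resolved" in reply.lower():       # toy success check
            return True, transcript
        user_msg = "It's still not working."  # simplistic simulated follow-up
    return False, transcript

def run_suite(agent, personas, trials=3):
    """Probabilistic software: run each persona several times, report pass rates."""
    results = {}
    for p in personas:
        passes = sum(simulate_conversation(agent, p)[0] for _ in range(trials))
        results[p.name] = passes / trials
    return results
```

A real harness would generate the simulated user's turns with another model rather than a canned follow-up, which is the "more AI to test AI" idea above.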
With that as the foundation, you then have this approach to building out every business's agent, which starts with deeply understanding what they're trying to do with their customers. What are the key customer journeys? And then we have a variety of techniques for modeling those in code in a way that is very expressive and lets the agent simultaneously kind of hit the curveballs and flex.
If someone comes in on one topic and goes to another, do that. But then when it matters, right, be, you know, down to fully deterministic where needed. Like there's no hallucinating, you know, compliance language that you want. From there, we then use the simulations testing harness to, in essence, have tens or hundreds of thousands of conversations with the agent before it's live for the first time.
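The "fully deterministic where needed" point can be made concrete with a small sketch (hypothetical names; `generate` stands in for an LLM call): most turns are generated freely, but compliance language and hard business rules are plain code the model cannot reword.

```python
# Fixed compliance text: emitted verbatim, never paraphrased by the model.
COMPLIANCE_DISCLOSURE = "This call may be recorded for quality purposes."

def handle_return_request(generate, order):
    """Sketch of one customer journey; `generate` is a stand-in LLM call."""
    steps = [COMPLIANCE_DISCLOSURE]  # deterministic: no hallucinated compliance language
    steps.append(generate(f"Empathize and confirm the return of {order['item']}"))  # flexible
    if order["days_since_purchase"] <= 30:  # deterministic business rule
        steps.append("Your return is approved. A shipping label is on its way.")
    else:
        steps.append(generate("Politely explain the 30-day return window"))
    return steps
```

The design choice is the mix: the model supplies tone and flexibility, while the spans that carry legal or business weight are ordinary code.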
And from there, we can tell, oh, you know, it doesn't know enough about this part or it needs to be able to handle this corner case better and so on. And then it really gets interesting when we go live and we have a set of tools that give CX teams and engineering teams deep insight into where does the agent realize, like, oh, like, I'm beyond my abilities on this.
I'm going to hand off to a person. And then we have this closed loop set of tools where the agent can learn from its past mistakes. It can be coached. It can be improved. And you end up with this kind of upward spiral of performance and capability. Yeah, you mentioned beyond my ability.
What's your process for, like, staying up to date on the ability of the models? I think there's a lot of people that try a model or try an app and it's like, it doesn't work. But then it works a month later because the models improve so quickly. What's your process for staying up to date on it?
Yeah. Well, first of all, if you feel like things are changing faster than they ever have before, it's because they are. I feel like whether it's, I don't know, Dance Dance Revolution or Beat Saber, just like new models, new agent frameworks, new benchmarks and so on are just coming down the pike at an incredible and increasingly fast rate.
So I think one thing is, I think, fairly typical. I think we all do it: dipping into Twitter/X, reading the latest research papers, and really trying to just immerse oneself in things that are even adjacent to what specifically you're doing. So we don't yet use video models, but gosh, what the, you know, Veo 3 models are capable of and what's emerging there is like, okay, it gives you a hint of what's going to be around the corner, maybe in the area that you're directly working in.
And then I just think there is no substitute for hands-on and using it. And so really being hands-on with the tools, whether or not, again, you're directly applying them in your work or what it is you're building, I think is so important. And I would argue that understanding where things are going is even more important than understanding where things are today.
So the first derivative is more important than kind of the absolute state of capability. An example of a decision we made early on is like we had a strong sense when we started the company that the cost per token was going to plummet, that model capabilities were going to expand.
And so you want to be building to where the puck is going as opposed to where it currently is. And so almost having a ritual where on a cadence you're checking in with the capabilities of the model. I keep in a Google Doc some problems that were too hard, right, for GPT-4 to solve.
But, you know, o1 or o3, can o3-mini do it? And you're checking in on the capabilities of these models to basically plot, again, the slope so that you can understand, cool, if we start building this now, this will intercept, you know, at this period of time. And we could have this level of model capability but with this latency.
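That "check in on a cadence and plot the slope" ritual might look like the following sketch (the problems, checks, and the `ask_model` hook are all placeholders, not a real benchmark or API):

```python
# A fixed set of problems that were once too hard; re-run them per model release.
HARD_PROBLEMS = [
    {"prompt": "Extract every date from this messy transcript: ... 2024 ...",
     "check": lambda reply: "2024" in reply},
    {"prompt": "Summarize a 40-page policy into five bullets",
     "check": lambda reply: len(reply) > 0},
]

def capability_checkin(ask_model, model_name):
    """Run the fixed problem set against one model; return its pass rate."""
    passed = sum(1 for p in HARD_PROBLEMS
                 if p["check"](ask_model(model_name, p["prompt"])))
    return {"model": model_name, "pass_rate": passed / len(HARD_PROBLEMS)}

def slope(history):
    """Rough first derivative: change in pass rate between the last two check-ins."""
    if len(history) < 2:
        return 0.0
    return history[-1]["pass_rate"] - history[-2]["pass_rate"]
```

Keeping the problem set fixed is what lets the successive pass rates act as the "slope" being described: the comparison across check-ins is only meaningful if the yardstick doesn't move.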
And I think that's how you build truly great products that are at the frontier. It's by anticipating the frontier. Yeah. I know we only got a minute and a half left. But you spent 18 years at Google. You kind of started the AR/VR project. Yeah. You started the Lens project.
What do you think about the next interface for AI? So we had text. Now we have voice. Yeah. Obviously, video is coming. Do you think the glasses are going to work? Is it more ambient agents? Any thoughts? Yeah. First of all, I think how we interact with AI and agents is going to look super different from today.
Today, it's like mushed into what looks like AOL Instant Messenger, right? Or a chat interface. Or it's a voice call. I think agents are going to look like shape shifters that can summon text, voice, video, imagery, user interface, and more. And you're going to interact with every sense and mechanism that you have.
As for the hardware, look, I spent 10 years of my life building in AR and VR. My strong view is that glasses and wearables will be the ultimate vehicle for the trusted personal AI that is with you. Something that can see what you see, hear what you hear, that can whisper in your ear or, you know, nudge you that way visually.
I just think we're on this path to every one of us having an omnipresent, omni-capable AI assistant that can help us navigate the world, lead better and healthier lives, be smarter than we are on our own. And I think, you know, going into your pocket or purse or bag to retrieve the, you know, rectangle of glass and metal and, you know, swipe up and whatever it is.
I just, I think for such an important capability that we'll feel in time like an extension of ourselves, you want that to be with you throughout the day. And so I think wearables, I think glasses will be a central part of that. And it's something I'm super excited to see emerge.
Awesome. Thank you, Clay, for joining us. Alessio, thank you so much. Thank you. Thanks, everyone. We'll see you next time.