
From LLMs to Agents: The Next Leap


Transcript

Thank you so much for joining us today. We've got one last very, very special event. I'm excited to announce that we'll have a fireside chat with Adam D'Angelo, co-founder and CEO at Quora. And this will be the final event of the evening.

Adam was the first CTO at Facebook and has been building Quora for the last 15 years. And Quora was a first mover in incorporating generative AI and building Poe, which we'll talk a lot about. So please help me welcome Adam D'Angelo to the stage. Thank you for being here. I thought the best way to kick this off was to ask Poe for a question that I should ask you.

And so this is what it came up with. Can you share the inspiration behind Poe and how it fits into your vision for the future of knowledge sharing and AI? Sure. So first of all, thanks so much for having me. We started looking into AI at Quora, into large language models, when we saw GPT-3.

And we started experimenting with using it to generate answers on Quora. We learned pretty quickly that the right paradigm for humans getting knowledge from large language models was going to be something that looked more like chat than the library-style orientation of the Quora product.

Did you think that before ChatGPT? Yeah. Yeah. This was before ChatGPT. So through that experimentation, we decided that this was such a paradigm shift that it called for a new kind of interface. And so we set out to build a private chat product. And for us, because we weren't building the large language models ourselves, we thought a lot about what our role would be in this ecosystem that was going to emerge.

And we decided to make a bet that there would be a wide diversity both of language models and of products built on top of those language models, which today we call agents. And we thought there would be a need for a sort of common interface to all of those.

And so in the same way that the web browser was very important for the development of the internet, we thought that there was a need for a single interface that people could use. So before the web browser, if you were building an internet product, you had to build client software, server software, build your own protocol, and that was just a ton of work.

And so you had a limited number of applications. You had things like SMTP for email, you had IRC, you had FTP. But anyone who wanted to make something new, say a hobbyist trying to make a new product, couldn't do that until the web browser came along and greatly reduced the barrier to building an internet product.

And so after the web browser, you had things like home pages, where anyone could have a home page, and all these FAQ pages, this whole explosion of internet products that the web browser enabled. Our hope with Poe is to have the same effect for AI, where we make it so that anyone who is either training a model or building a product on top of a model can just plug into Poe. We provide iOS, Android, Windows, and Mac clients, a web interface, monetization, history, and all the work you need to get an AI product to millions of consumers around the world.

So you just have a single place to do all of that. Yeah. I think one of the interesting things we'll probably dive into is that you're one of the few consumer-facing companies we have on stage today. So I'm curious, how are you seeing consumers use and interact with AI?

What are the common kind of like patterns and use cases? It's really varied and I think it's interesting. One of the great things about large language models is that they are so general and they're just so capable of doing almost anything. And that makes it challenging to build on top of them, but it makes it just a great experience as a user.

You can continually find new use cases. So we have a very wide variety of use cases on Poe. There's everything from writing assistance to question answering to things like role play, there's homework help, there's a lot of assistance in people doing their jobs. There's a lot of media, a lot of marketing usage, a lot of people creating media.

We support image and video and audio models. And the central value proposition we've landed on for consumers is that Poe is one place where you can get all the different AI products under a single subscription. That's very appealing for a certain set of people, especially developers building something on top of AI, because they want to try a lot of different models.

It's valuable for marketers, who often want to create things with many different models. And for anyone who's an AI enthusiast, it's a pretty attractive value. Yeah. Going to Poe, you can scroll down the list of all the different models on there.

And there's a lot. What are you seeing in terms of which ones people are using? Which ones are most popular? Why? You guys have a really unique vantage point into how people are using all these different models. Yeah. So we actually just published this blog post called the AI Ecosystem Report, which I would encourage people to check out if you're interested.

If you just do a Google search for Poe blog, you'll find it as the first entry right now. I'd say the interesting story of the last few months is really the growth of reasoning models. So this includes o3, o4-mini, Gemini 2.5. It includes Claude Sonnet 3.7. And there's the DeepSeek models.

There's going to be a growing set of models in this category. But those have really grown in usage recently. It's just incredibly powerful what they can do, especially if you're doing something related to writing code. The reasoning really adds a boost to your accuracy. I know Anthropic published a study on how people were using Claude, and they found that coding was an abnormally high percentage.

Do you think that's the same for people who are using Poe? I think it's probably a little bit different. We have a biased group of users. We do surveys of our users and ask why they're using Poe, and the top reason is that it's a place where you can get all the AI in one place.

And so we sort of have a biased selection for the kind of people who want to use multiple models. I think in the same way, I would guess that Anthropic tends to get more developer-oriented users. We do have a pretty decent percentage of usage that is code related. But I'm sure it's not as high as Anthropic.

Yeah. You mentioned a few different modalities, voice, images. Which ones are you seeing resonate? I know when OpenAI launched their image support, all the Studio Ghibli images went crazy viral. Are you seeing similar things? Or, you know, we had someone talking about voice earlier and how powerful that was.

Yeah. I mean, there's certainly a lot of excitement around the new image models. And we have the OpenAI model, the GPT-4o image gen. It's called GPT-Image-1 in their API, and that's the name we have it under on Poe. It's a great model. I'd say overall, to answer your question, text models still dominate usage.

And we've been thinking about why this is. I think that there's something where the image and video models are still not great. Like, you know, GPT-Image-1 was a huge step forward, but we're still not at the point where you can reliably get graphics that are useful in, say, a presentation, relative to what a designer can do.

Whereas with the text models, especially with reasoning, the quality is great. Right? It's much better than what people can create themselves a lot of the time. And so the economic value there is a lot higher right now. I think this may change as the image and video models get stronger.

And, yeah, we'll just have to see. But our view is we're just trying to provide a general interface. So wherever the market goes, we want to provide the best service there. Yeah. One question I have on that is, like, how much do people care about the model they're using?

And why do they care? Like, I think, you know, if you mentioned some of the use cases, and it sounds like you have specific kind of, like, agents for, you know, doing different things. Do people care about the model under the hood? Like, this has always been something that I've thought about.

Yeah. I mean, it varies. And I think this is back to the selection effect, where the kind of people who use Poe tend to care. I would say that a lot of the use cases are around someone trying to be the best they can be at some job they're trying to do.

So you might have, like, a writer, someone doing creative writing, someone writing a book. They don't just want one model they can consult. They're trying to find the right wording for something, or generate ideas, or find inconsistencies in some passage.

And they want to run that through a bunch of different models. And they'll be very particular about it. And we know, because occasionally a provider of a model will go down, and we'll have downtime, and people are very unhappy. They don't just switch over to the other ones that are up.

Interesting. Do you let them run it side by side to compare the outputs, or one at a time? We let people query multiple models in parallel. So there's a syntax on Poe where you say @ and then the model name. And you can stack many of these @ mentions at the front of a message.
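That @-mention fan-out can be sketched in a few lines. This is a minimal illustration only; the `query_model` stub and the model names are hypothetical stand-ins, not Poe's actual implementation:

```python
import asyncio

# Hypothetical stand-in for a real model API call; Poe's actual
# backend and model names are not shown here.
async def query_model(model: str, prompt: str) -> str:
    await asyncio.sleep(0)  # stands in for network latency
    return f"[{model}] reply to: {prompt}"

async def fan_out(prompt: str, models: list[str]) -> dict[str, str]:
    # Mirrors the "@Model1 @Model2 question" syntax: the same prompt
    # goes to every mentioned model concurrently.
    replies = await asyncio.gather(*(query_model(m, prompt) for m in models))
    return dict(zip(models, replies))

replies = asyncio.run(fan_out("Find the right wording", ["ModelA", "ModelB"]))
for name, text in replies.items():
    print(f"{name}: {text}")
```

The key point is that `asyncio.gather` issues all the queries concurrently, so stacking more @ mentions doesn't multiply the wait time.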

And then in parallel, we'll send it to all the bots. For the reasoning models, you mentioned they're better at things. Do you think part of the appeal is also the reasoning traces that you can see? I know when DeepSeek launched their chat app, a lot of people really liked that, and they theorized that's why it was getting so much attention.

Do you see that UX trick being interesting? Or is it really just that they're better, and people just care about the quality? You know, I don't know where the market is going to shake out on sharing the reasoning traces. Certainly people liked that with DeepSeek, but I don't know how much of that was novelty.

And especially as these models get optimized and they get faster, there's just a lot there that you don't care to see. And so it may be that we end up showing summaries of the reasoning traces. Or maybe they're hidden behind a click, something like that.

For now, I think people like seeing the reasoning when you can provide it. But I don't know if that's something we're going to want long-term. Yeah. One of the things that Poe allows for, besides just accessing the raw models, is accessing kind of like agents built in a variety of ways.

So Assaf, who was talking earlier, the creator of GPT Researcher, they have a bot on Poe. How do people create these bots on Poe? So there's a few different options depending on your level of technical sophistication. The simplest option is we allow prompting. So you can just put in a prompt and a base model.

So you choose an existing model on Poe as the base model, you add a prompt to it, and you create what's called a prompt bot. That bot is just going to use the base model and follow the instructions in the prompt. It's very simple, but it's actually pretty valuable for people, because it saves them from having to repeatedly enter the prompt.
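Conceptually, a prompt bot is just a saved instruction welded onto a chosen base model. A rough sketch, where `PromptBot` and `echo_model` are hypothetical names and the real Poe mechanism is not shown:

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class PromptBot:
    """A saved prompt plus a chosen base model."""
    prompt: str
    base_model: Callable[[str], str]  # stand-in for a real model call

    def reply(self, user_message: str) -> str:
        # The creator's instructions are injected ahead of every message,
        # so users never have to re-enter the prompt themselves.
        return self.base_model(f"{self.prompt}\n\nUser: {user_message}")

# Toy base model that just echoes what it was given.
def echo_model(text: str) -> str:
    return f"MODEL INPUT WAS:\n{text}"

haiku_bot = PromptBot(prompt="Answer only in haiku.", base_model=echo_model)
print(haiku_bot.reply("What is a prompt bot?"))
```

Swapping `echo_model` for a call to any real model gives the same bot different behavior, which is exactly the base-model choice described above.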

It also means that you can, once you create a bot like this, you can share it with other people. We have a whole category in Poe where you can explore the bots that other people have shared and see the most popular ones and search for them. Do you see that people are mostly creating bots for themselves or mostly public ones that they're sharing?

A lot of it is creating bots for yourself. But most of the usage goes to bots created by other people, even though most bots are never shared and are used only by their creator. So those can both be true at the same time.

And so if you share the bot, then you can monetize it and basically you'll get a cut of what people are paying Poe to access it. We've talked a little bit about what the agent engineer or agent builder profile looks like. Who's building these bots? Yeah, so the prompt bots, I mean, it's a real art.

Prompting is a real art. And I think that, you know, the art has changed as the models get more powerful, especially with reasoning models. But I think it's still a real art. And the kind of people creating them, they tend to not be very technical, but they are people who can sort of like empathize with the model.

And they're just very persistent in trying the model in many different cases and understanding what needs to go in the prompt. And then we have people who can create what we call server bots. And a server bot is pretty simple. You just give us the URL of your server and we will make an HTTP query over to your server every time the user sends a message to the bot.

And whatever you return from that request will go into the Poe message as a response to the user. And so this is more useful if you want to do something more complex than just a prompt, if you want to have an agent, if you want to query outside data sources.
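That request/response contract could be sketched roughly as follows. This is a hypothetical minimal server using only the Python standard library; the real integration uses Poe's own protocol and helper library, and the JSON payload shape here is made up:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def handle_message(user_message: str) -> str:
    # Your bot logic lives here: run an agent, query outside data
    # sources, call other tools. (Toy logic shown.)
    return f"You said: {user_message}"

class BotHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        # Read the JSON body of the incoming query (payload shape is
        # illustrative, not Poe's real wire format).
        length = int(self.headers.get("Content-Length", 0))
        payload = json.loads(self.rfile.read(length) or b"{}")
        reply = handle_message(payload.get("message", ""))
        body = json.dumps({"text": reply}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

# Register this URL as the server bot's endpoint: every user message
# becomes one POST, and the returned text becomes the chat reply.
server = HTTPServer(("localhost", 0), BotHandler)  # port 0: any free port
# server.serve_forever()  # left commented so the sketch exits cleanly
print(handle_message("hello"))
```

Anything that can answer an HTTP request fits this shape, which is why model hosts and agent builders alike can plug in this way.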

GPT Researcher, for example, is a server bot. And it's more sophisticated developers who are creating these, and also AI model developers. Just to give you a good example, there's this really niche model, which I think is cool, called Retro Diffusion. Just two people trained this model to generate pixel art for games, and animations as well, in the same kind of pixel art style.

And the model is just super tuned for exactly that use case. And they set up a server bot for their model. That's something that you have to use a server bot for because they're doing inference with GPUs. And they're hosting all of that. Do you see that most people who create server bots are training and hosting their own model?

Or is it more around the workflows and things like that? Originally, it was mostly people hosting their own models. Over time, it has become more agents and more complex applications built on top. But we have companies like Together, Fireworks, FAL. There's a whole category of these companies that host open source models, and they generally set up server bots.

And for these companies, for everyone, it's sort of a way to extend your reach. If your company is not set up to reach consumers all over the world, like if you're a developer-oriented company or just a small team, then setting up a server bot is a way to reach a much bigger audience, generate extra money, and get feedback from a broad audience.

How does the monetization work? So it's totally flexible. The creators can choose what they want to charge users, and we turn whatever they want to charge in dollars into a number of points. Users have a limited number of points depending on how much they're spending on Poe per month.

And we're basically just passing through the cost from the creator to the user. But we have creators that are making a lot of money. I mean, there's companies making in the millions of dollars per year through... In the millions of dollars per year? Yeah. And there's individual people making in the hundreds of thousands and tens of thousands of dollars per year.
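The pass-through pricing described here is simple arithmetic. In this sketch, the conversion rate and balances are entirely made up; only the dollars-to-points pass-through idea comes from the conversation:

```python
POINTS_PER_DOLLAR = 1000  # hypothetical rate; Poe's real conversion differs

def cost_in_points(creator_price_dollars: float) -> int:
    # Whatever the creator charges in dollars is passed through to the
    # user as a point cost against their monthly balance.
    return round(creator_price_dollars * POINTS_PER_DOLLAR)

monthly_points = 1_000_000              # hypothetical subscriber balance
monthly_points -= cost_in_points(0.02)  # bot priced at 2 cents per message
print(monthly_points)  # 999980
```

Because the platform just converts the creator's dollar price into points, a bot that charges more simply drains a subscriber's balance faster.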

So there's a real economy developing here. What are these prompt bots? Are these server bots? What are they doing that's providing that much value? I mean, it's a mix. Sometimes it's training their own models. Sometimes it's hosting open source models. Some companies are very good at like optimizing the cost of inference of certain models.

Sometimes it's agents. Sometimes it's prompt bots, where some people are just really good at prompting. Sometimes people are really good at marketing, and that's another component, particularly for prompt bots: building enthusiasm from internet users about using your bot. But this is an opportunity.

If you're good at marketing and you're good at prompting, you can create these bots. And, you know, we have people making a living off this. You mentioned agents. Are these bots taking actions for people at the moment? Or are they mostly kind of like just still conversational and, yeah, read-only, not write?

They're mostly read-only. One of the places we'd love to go as a platform is to enable more of these real-world actions. We're not there yet, so right now a way to think about it is that the bots effectively don't have side effects.

So the whole action of the bot is to create some artifact that it then returns to you in the chat. We've got hundreds of developers here. If you were a developer, what type of bot would you build, or what would you recommend people explore? Where do you think there's an unmet need?

I think the agents category is probably where there's the most opportunity. Obviously, if you can train models, you should do that. But building on top of the models, building things that are more sophisticated than a simple prompt but not as sophisticated as training a new model or fine-tuning.

There's just so much to be built. You know, you can and should build it with LangChain, but there's just this incredible space that has opened up to make agents that are useful to people. And yeah, I would experiment with that and see where you get traction.

And you might have said this earlier, but agents with side effects, will those be coming soon? How are you thinking about that in general? Because I imagine there's a bunch of risks obviously associated with that. Yeah. No, we'd love to support that. And I think things like MCP have made it easier for there to be standards around how to do that.

We will get to that at some point. Right now we have a lot of competing priorities. So we've had to focus, and we're hard at work on some things that will unblock the way for us to provide a great experience around that.

So, zooming out a little bit, you made a reference earlier to the early days of the Internet. You also built an Internet company, Quora. How do these two things compare? What are the differences?

What are the similarities? I mean, a lot is similar. But I think the big difference is just the pace that the environment is changing at. Largely because of the foundation models? Yeah, because of the AI and just the change.

You know, every month there are new models, new modalities, agents, MCP, tools. The environment that we exist within is changing incredibly quickly. Whereas with Quora, we started in 2009 and launched the first version in 2010, and I'd say there was a significant shift maybe every few years.

You know, there was a big shift with mobile, and there was a shift with creators becoming more of a constituency on the Internet. And we had to grow with the Internet, with internationalization, as different countries started to come online.

But the pace of change of the environment was pretty stable. And that allowed us to focus internally, invest a lot in internal abstractions, and just polish things. There's a certain way of doing things that works when you're in a stable environment.

In this environment, things change so fast that we just need to constantly adapt and react. It doesn't make sense to plan years ahead. How far ahead do you plan? I'd say our plans go out about two months right now. That's how far we have. Hopefully you'll all see some great stuff launch in the next few months.

And then we will evaluate. I mean, we could plan further than that. But I just think, you know, we're going to get two months from now, we're going to launch the stuff we're working on, and the things you would think of to do then are different than what you would think of to do now.

That may make answering my final question hard. But what do you see coming down the pipeline? What are you most excited about? Where do you think some of the space is going? Are there things at Poe that we should be keeping an eye out for?

You know, I think the main thing is just that the power of the models is going to keep increasing. The pace this year has been incredible, right? As the reasoning models have gotten stronger and stronger. I'm particularly excited about code generation applications.

So we have a tool within Poe we call App Creator that uses LLMs underneath to help you generate interfaces for bots on Poe, if you want, like, a graphic interface, a web interface. And it's good now, but I can just see where it's going to get to in the next six months as the code generation abilities of all these models keep growing.

I mean, the amount of software that we're going to be able to generate, and the ecosystem that's going to emerge around that, it's going to be incredible. So I'm just really excited for that.

That's a great note to end it on. All right. Let's give Adam a big round of applause. Thank you for coming out. Thank you all for being a big part of Interrupt, the first time we did it. It's been a really fun day for myself and hopefully for all of you.

And hopefully it inspires you to build lots of agents. I want to give a special thanks to our presenting sponsor, Cisco. Their partnership and support, not only today but also as a big part of the workshop yesterday, has been fantastic and has given us a lot to think about and learn from.

I also want to thank our amazing speakers. We had a lot of them. If you missed any of their talks, we'll have recordings up right afterwards on our Interrupt website. And I also want to thank the LangChain community champions and Slack contributors, whom you may have seen wandering around, as well as the LangChain ambassadors.

These are a big part of the people who make LangChain what it is. There's a lot of them, and many have flown from across the world to be here today. We'll have kind of a happy hour for the next two hours.

Make sure to go up to them. A lot of them have done incredible things in the community. And on that note, as we wrap up, I'd like to invite you all to join us for our closing reception, sponsored by DataStax. It's a good opportunity to continue any conversations you've already started or start new ones.

And with that, thank you guys for being a part. Thank you for coming. And I hope you enjoyed it. Thank you.