I'm delighted to introduce Harrison Chase. You know, one of the reasons I was really excited to come back today was because I think it was a year ago at this event that I met Harrison, and I thought, "Boy, if I get to meet super cool people like Harrison, I'm definitely going to come back this year." Quick question.
How many of you use LangChain? Yeah. Wow. Okay. So almost everyone. Those of you that don't, you know, pull up your laptop, run pip install LangChain. If you aren't using LangSmith yet, I'm a huge fan. And Harrison has built a massive developer community. If you look at the PyPI download stats, I think LangChain is by far the leading generative AI orchestration platform.
And this gives us a huge view into a lot of things happening in generative AI. So I'm excited to have him share with us what he's seeing with AI agents. Thanks for the intro. And thanks for having me. Excited to be here. So today I want to talk about agents.
So LangChain is a developer framework for building all types of LLM applications. But one of the most common ones that we see being built are agents. And we've heard a lot about agents from a variety of speakers before. So I'm not gonna go into too much of a deep kind of, like, overview.
But at a high level, it's using a language model to interact with the external world in a variety of forms. And so tool usage, memory, planning, taking actions is kind of the high level gist. And the simple form of this you can maybe think of as just running an LLM in a for loop.
So you ask the LLM what to do. You then go execute that. And then you ask it what to do again. And then you keep on doing that until it decides it's done. So today I want to talk about some of the areas that I'm really excited about that we see developers spending a lot of time in and really taking this idea of an agent and making it something that's production ready and real world and really, you know, the future of agents, as the title suggests.
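That "LLM in a for loop" idea can be sketched in a few lines. Everything here is a stand-in: `fake_llm` is a hypothetical stub for a real model call, and the tool set is a toy.

```python
# Minimal sketch of an agent as "an LLM in a for loop": the model picks an
# action, we execute it, feed the observation back, and repeat until it
# decides it's done. `fake_llm` is a hypothetical stand-in for a model API.

def fake_llm(history):
    """Pretend model: looks at the transcript so far and picks the next step."""
    if "observation: 4" in history:
        return "finish: the answer is 4"
    return "call: add 2 2"

def add(a, b):
    return a + b

TOOLS = {"add": add}  # the agent's available tools

def run_agent(task, max_steps=5):
    history = f"task: {task}"
    for _ in range(max_steps):            # the for loop around the LLM
        decision = fake_llm(history)
        if decision.startswith("finish:"):
            return decision.removeprefix("finish: ").strip()
        _, name, *args = decision.split()
        observation = TOOLS[name](*map(int, args))
        history += f"\nobservation: {observation}"
    return "gave up"
```

A real agent swaps `fake_llm` for a model call and `TOOLS` for search, code execution, and so on, but the control flow is exactly this loop.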
So there are three main things that I want to talk about. And we've actually touched on all of these in some capacity already. So I think it's a great roundup. So planning, the user experience, and memory. For planning, Andrew covered this really nicely in his talk. But we see a few recurring patterns.
The basic idea here is that if you think about running the LLM in a for loop, oftentimes there's multiple steps that it needs to take. And so when you're running it in a for loop, you're asking it implicitly to kind of reason and plan about what the best next step is, see the observation, and then kind of like resume from there and think about what the next best step is right after that.
Right now, language models aren't really good enough to do that reliably. And so we see a lot of external papers and external prompting strategies enforcing planning in some way, whether this be planning steps explicitly up front or reflection steps at the end to check whether it's done everything correctly as it should.
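The two strategies mentioned, explicit up-front planning and end-of-run reflection, can be sketched like this. All three "model calls" are hypothetical stubs; a real system would prompt an LLM at each point.

```python
# Sketch of plan-then-execute with a reflection step. The plan, execute,
# and reflect functions are stubs standing in for separate LLM prompts.

def plan(task):
    """Ask the model for an explicit step list before acting (stubbed)."""
    return ["look up ingredients", "write recipe", "format as list"]

def execute(step):
    """Execute one planned step (stubbed)."""
    return f"did: {step}"

def reflect(task, results):
    """Ask the model whether the results complete the task (stubbed)."""
    return len(results) == 3  # pretend the model checks completeness

def plan_and_execute(task):
    steps = plan(task)                     # explicit up-front planning
    results = [execute(s) for s in steps]
    if not reflect(task, results):         # reflection/self-check at the end
        results.append(execute("retry missing step"))
    return results
```

The point is structural: planning and checking live in dedicated prompts around the loop, rather than being asked of the model implicitly at every step.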
I think the interesting thing here, thinking about the future, is whether these types of prompting strategies and these types of cognitive architectures continue to be things that developers are building or whether they get built into the model APIs, as we heard Sam talk a little bit about. And so for all three of these, to be clear, I don't have answers.
I just have questions. And so one of my questions here is, are these planning prompting techniques short-term hacks or long-term necessary components? Another aspect of this is the importance of flow engineering, a term I heard come out of the AlphaCodium paper.
It basically achieves state-of-the-art kind of like coding performance, not necessarily through better models or better prompting strategies, but through better flow engineering. So explicitly designing this kind of like graph or state machine type thing. And I think one way to think about this is you're actually offloading the planning of what to do to the human engineers who are doing that at the beginning.
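A toy version of that graph/state machine idea looks like this. The node names and the generate/test steps are made up for illustration; AlphaCodium's actual flow is much more elaborate, but the shape is the same.

```python
# Toy flow engineering: the engineer hard-codes the control flow as a small
# state machine, so the model never has to plan what comes next. The
# generate/test nodes are hypothetical stubs for LLM-backed steps.

def fake_generate(state):
    state["code"] = "def solve(): return 42"
    return "test"                          # next node fixed by the graph

def fake_test(state):
    state["passed"] = "42" in state["code"]
    return "done" if state["passed"] else "generate"

GRAPH = {"generate": fake_generate, "test": fake_test}

def run_flow(start="generate", max_steps=10):
    state, node = {}, start
    for _ in range(max_steps):
        if node == "done":
            break
        node = GRAPH[node](state)          # transitions designed by engineers
    return state
```

The transition table is written by humans, which is exactly the "offloading planning to the engineer" trade-off described above.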
And so you're relying on that as a little bit of a crutch. The next thing that I want to talk about is the UX of a lot of agent applications. This is actually one area I'm really excited about. I don't think we've kind of nailed the right way to interact with these agent applications.
I think human in the loop is still necessary because these agents aren't super reliable. But if the human is in the loop too much, the agent isn't actually doing much useful work. So there's a weird balance there. One UX thing that I really like from Devin, which came out a week or two ago, and Jordan B put this nicely on Twitter, is the presence of a rewind and edit ability.
So you can basically go back to a point in time where the agent was and then edit what it did or edit the state that it's in so that it can make a more informed decision. And I think this is a really, really powerful UX that we're really excited about at LangChain and exploring this more.
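Mechanically, rewind-and-edit is just checkpointing: save the agent's state after every step, let a human roll back to step N, modify the state, and resume. A minimal sketch, with a stubbed agent step:

```python
# Sketch of the rewind-and-edit pattern: checkpoint state after every step,
# then roll back, apply a human edit, and re-run. The agent step is a stub.

def step(state):
    """One stubbed agent step: append the next number to a running list."""
    return state + [len(state)]

def run_with_checkpoints(n_steps):
    checkpoints = [[]]                     # checkpoint 0: initial state
    for _ in range(n_steps):
        checkpoints.append(step(checkpoints[-1]))
    return checkpoints

def rewind_and_edit(checkpoints, at, edit, resume_steps):
    """Roll back to checkpoint `at`, apply a human edit, and resume."""
    state = edit(checkpoints[at])          # human modifies the saved state
    for _ in range(resume_steps):
        state = step(state)
    return state
```

Because every intermediate state is retained, the human can steer from any past point instead of only at the end.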
And I think this brings a little bit more reliability, but at the same time kind of like steering ability to the agents. And speaking of kind of like steering ability, the last thing I want to talk about is the memory of agents. And so Mike at Zapier showed this off a little bit earlier where he was basically interacting with the bot and kind of like teaching it what to do and correcting it.
And so this is an example where I'm teaching in a chat setting an AI to kind of like write a tweet in a specific style. And so you can see that I'm just correcting it in natural language to get to a style that I want. I then hit thumbs up.
The next time I go back to this application, it remembers the style that I want. But I can keep on editing it. I can keep on making it a little more differentiated. And then when I go back a third time, it remembers all of that. And so this I would kind of classify as kind of like procedural memory.
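That thumbs-up-and-remember loop can be sketched as a tiny procedural memory: store the instructions the user approved and prepend them to every future prompt. The storage and prompt format here are made up for illustration.

```python
# Toy procedural memory: persist user-approved corrections (thumbs-up) and
# fold them into every future prompt, so style feedback survives sessions.

MEMORY = []                                # persisted style instructions

def thumbs_up(instruction):
    """User approved a correction: remember it for next time."""
    MEMORY.append(instruction)

def build_prompt(task):
    rules = "\n".join(f"- {m}" for m in MEMORY)
    return f"Follow these learned style rules:\n{rules}\nTask: {task}"

thumbs_up("write tweets in lowercase")
thumbs_up("no hashtags")
prompt = build_prompt("tweet about our launch")
```

A production system would persist the memory to a database and let the model, not an append-only list, decide how to merge new corrections with old ones.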
So it's remembering the correct way to do something. I think another really important aspect is basically personalized memory. So remembering facts about a human that you might not necessarily use to do something more correctly, but you might use to make the experience kind of like more personalized. So this is an example kind of like journaling app that we are building and playing around with for exploring memory.
And you can see that I mentioned that I went to a cooking class, and it remembers that I like Italian food. And so I think bringing in these kind of like personalized aspects, whether it be procedural or kind of like these personalized facts, will be really important for the next generation of agents.
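Personalized memory, by contrast, is about extracting and recalling facts about the user. The keyword-matching "extractor" below is a crude stand-in for an LLM that would pull facts out of free text, as in the journaling example.

```python
# Toy personalized memory: extract facts about the user from each entry and
# surface relevant ones later. The keyword extractor is a stand-in for an
# LLM-based fact extraction step.

FACTS = set()

def remember(entry):
    """Crude fact extraction: a real system would prompt an LLM here."""
    if "italian" in entry.lower():
        FACTS.add("likes Italian food")

def recall(topic):
    """Return stored facts relevant to the topic (substring match)."""
    return sorted(f for f in FACTS if topic.lower() in f.lower())

remember("Went to a cooking class; the Italian pasta course was great.")
```

Unlike procedural memory, these facts don't make the agent more correct; they make its responses more personal.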
That's all I have. Thanks for having me. And excited to chat about all of this, if anyone wants to chat about this after. Thanks. (audience applauding)