
The new Claude 3.5 Sonnet, Computer Use, and Building SOTA Agents — with Erik Schluntz, Anthropic


Chapters

0:00 Introductions
3:39 What is SWE-Bench?
12:22 SWE-Bench vs HumanEval vs others
15:21 SWE-Agent architecture and runtime
21:18 Do you need code indexing?
24:50 Giving the agent tools
27:47 Sandboxing for coding agents
29:16 Why not write tests?
30:31 Redesigning engineering tools for LLMs
35:53 Multi-agent systems
37:52 Why XML so good?
42:57 Thoughts on agent frameworks
45:12 How many turns can an agent do?
47:12 Using multiple model types
51:40 Computer use and agent use cases
59:04 State of AI robotics
64:24 Robotics in manufacturing
65:01 Hardware challenges in robotics
69:21 Is self-driving a good business?

Transcript

Hey, everyone. Welcome to the Latent Space Podcast. This is Alessio, partner and CTO at Decibel Partners. And today, we're in the new studio with my usual co-host, Sean from Smol AI. Hey, and today, we are very blessed to have Erik Schluntz from Anthropic with us. Welcome. Hi, thanks very much.

I'm Erik Schluntz. I'm a member of technical staff at Anthropic, working on tool use, computer use, and SWE-Bench. Yeah, well, how did you get into just the whole AI journey? I think you spent some time at SpaceX as well? Yeah. And robotics? Yeah, there's a lot of overlap between the robotics people and the AI people.

And maybe there's some overlap of interest between language models and robots right now. Maybe just a little bit of background on how you got to where you are. Yeah, sure. I was at SpaceX a long time ago. But before joining Anthropic, I was the CTO and co-founder of Cobalt Robotics.

We built security and inspection robots. These are five-foot-tall robots that would patrol through an office building or a warehouse, looking for anything out of the ordinary. Very friendly, no tasers or anything. We would just call a remote operator if we saw anything. So we have about 100 of those out in the world, and had a team of about 100.

We actually got acquired about six months ago. But I had left Cobalt about a year ago now, because I was starting to get a lot more excited about AI. I had been writing a lot of my code with things like Copilot. And I was like, wow, this is actually really cool.

If you had told me 10 years ago that AI would be writing a lot of my code, I would say, hey, I think that's AGI. And so I realized that we had passed this level. We're like, wow, this is actually really useful for engineering work. That got me a lot more excited about AI and learning about large language models.

So I ended up taking a sabbatical and then doing a lot of reading and research myself, and decided, hey, I want to go be at the core of this and joined Anthropic. And why Anthropic? - Did you consider other labs? Did you consider maybe some of the robotics companies?

- So I think at the time, I was a little burnt out of robotics. And so also for the rest of this, any sort of negative things I say about robotics or hardware is coming from a place of burnout. I reserve my right to change my opinion in a few years.

Yeah, I looked around, but ultimately I knew a lot of people that I really trusted and I thought were incredibly smart at Anthropic, and I think that was the big deciding factor to come there. Like, hey, this team's amazing. They're not just brilliant, but sort of like the most nice and kind people that I know.

And so I just felt I could be a really good culture fit. And ultimately, like, I do care a lot about AI safety and making sure that, you know, I don't want to build something that's used for bad purposes. And I felt like the best chance of that was joining Anthropic.

- And from the outside, these labs kind of look like huge organizations that have this like obscure ways to organize. How did you get, you joined Anthropic, did you already know you were going to work on like SweetBench and some of the stuff you publish, or you kind of join and then you figure out where you land?

I think people are always curious to learn more. - Yeah, I've been very happy that Anthropic is very bottoms up and sort of very sort of receptive to whatever your interests are. And so I joined sort of being very transparent of like, hey, I'm most excited about code generation and AI that can actually go out and sort of touch the world or sort of help people build things.

And, you know, those weren't my initial projects. I also came in and said, hey, I want to do the most valuable possible thing for this company and help Anthropic succeed. And, you know, like, let me find the balance of those. So I was working on lots of things at the beginning, you know, function calling, tool use, and then sort of as it became more and more relevant, I was like, oh, hey, yeah, like let's, it's time to go work on coding agents and sort of started looking at SWE-Bench as sort of a really good benchmark for that.

- So let's get right into SWE-Bench. That's one of the many claims to fame. I feel like there's just been a series of releases related with Claude 3.5 Sonnet. About two, three months ago, 3.5 Sonnet came out and it was a step ahead in terms of a lot of things; people immediately fell in love with it for coding.

And then last month, you released a new updated version of Claude Sonnet. We're not going to talk about the training for that 'cause that's still confidential, but I think Anthropic's done a really good job like applying the model to different things. So you took the lead on SWE-Bench, but then also we're going to talk a little bit about computer use later on.

So yeah, maybe just give us a context about like why you looked at SWE-Bench Verified and you actually like came up with a whole system for building agents that, you know, would maximally use the model well. - Yeah, so I'm on a sub team called product research. And basically the idea of product research is to really understand like what end customers care about and want in the models and then work to try to make that happen.

So, you know, we're not focused on sort of these more abstract general benchmarks like math problems or MMLU, but we really care about like finding the things that are really valuable and making sure the models are great at those. And so because I had been interested in coding agents, sort of, I knew that this would be a really valuable thing.

And I knew there were a lot of startups and our customers trying to build coding agents with our models. And so I said, hey, this is going to be a really good benchmark to be able to measure that and do well on it. And I, you know, wasn't the first person at Anthropic to find SWE-Bench.

And then, you know, there are lots of people that already knew about it and had done some internal efforts on it. It fell to me to sort of both implement the benchmark, which is very tricky, and then also to sort of make sure we had an agent and basically like a reference agent, maybe I'd call it, that could do very well on it.

Ultimately, we want to provide how we implemented that reference agent so that people can build their own agents on top of our system and get sort of the most out of it as possible. So with this blog post we released on SWE-Bench, we released the exact tools and the prompt that we gave the model to be able to do well.

- For people who don't know, who maybe haven't dived into SWE-Bench, I think the general perception is they're like tasks that a software engineer could do. I feel like that's an inaccurate description because it is basically, one, it's a subset of like 12 repos. It's everything they could find, every issue with like a matching commit that could be tested.

So that's not every commit. And then SWE-Bench Verified is further manually filtered by OpenAI. Is that an accurate description and anything you'd change about that? - Yes, SWE-Bench certainly is a subset of all tasks. First of all, it's only Python repos. So already fairly limited there. And it's just 12 of these popular open source repos.

And yes, it's only ones where there were tests that passed at the beginning and also new tests that were introduced that test the new feature that's added. So it is, I think, a very limited subset of real engineering tasks, but I think it's also very valuable because it's, even though it's a subset, it is true engineering tasks.
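To make that structure concrete, here is a rough sketch of what one SWE-Bench instance looks like; the field names are paraphrased from memory of the public dataset, so treat them as approximate rather than authoritative:

```python
# Approximate shape of a SWE-Bench task instance (field names from memory, not authoritative).
task = {
    "repo": "astropy/astropy",                   # one of the 12 popular Python repos
    "base_commit": "abc1234",                    # commit the agent starts from
    "problem_statement": "GitHub issue text, usually a bug report or feature request",
    "FAIL_TO_PASS": ["test_new_behavior"],       # hidden tests the patch must make pass
    "PASS_TO_PASS": ["test_existing_behavior"],  # hidden tests that must keep passing
}
```

The agent only ever sees the repo at the base commit plus the problem statement; the two test lists stay hidden, as discussed later in the conversation.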

And I think a lot of other benchmarks are really kind of these much more artificial setups of even if they're related to coding, they're more like coding interview style questions or puzzles that I think are very different from like day-to-day what you end up doing. Like, I don't know how frequently you all like get to use recursion in your day-to-day job, but whenever I do, it's like a treat.

And I think it is, it's kind of, it's almost comical and a lot of people joke about this in the industry, like how different interview questions are. - Dynamic programming. - Yeah, exactly. - LeetCode. - From the day-to-day job. But I think one of the most interesting things about SWE-Bench is that all these other benchmarks are usually just isolated puzzles and you're starting from scratch.

Whereas SWE-Bench, you're starting in the context of an entire repository. And so it adds this entirely new dimension to the problem of finding the relevant files. And this is a huge part of real engineering; it's actually, again, pretty rare that you're starting something totally greenfield. You need to go and figure out where in a code base you're going to make a change and understand how your work is going to interact with the rest of the systems.

And I think SWE-Bench does a really good job of like presenting that problem. - Why do we still use HumanEval? It's like 92%, I think. I don't even know if you can actually get to 100% because some of the data is not actually solvable. Do you see benchmarks like that, they should just get sunsetted?

Because when you look at like the model releases, it's like, oh, it's like 92% instead of like 89, 90% on HumanEval versus, you know, SWE-Bench Verified, you have 49%, right? Which is like, before 45% was state of the art, but maybe like six months ago, it was like 30%, something like that.

So is that a benchmark that you think is going to replace HumanEval? Or do you think they're just going to run in parallel? - I think there's still need for sort of many different varied evals. Like sometimes you do really care about just sort of greenfield code generation.

And so I don't think that everything needs to go to sort of an agentic setup. - It would be very expensive to implement. - And the other thing I was going to say is that SWE-Bench is certainly hard to implement and expensive to run because each task, you have to parse a lot of the repo to understand where to put your code.

And a lot of times you take many tries of writing code, running it, editing it. It can use a lot of tokens compared to something like HumanEval. So I think there's definitely a space for these more traditional coding evals that are sort of easy to implement, quick to run and do get you some signal.

And maybe hopefully there's just sort of harder versions of HumanEval that get created. - How do we get SWE-Bench Verified to 92%? Do you think that's something where it's like line of sight to it? Or it's like, you know, we need a whole lot of things to go right.

- Yeah, yeah. And actually maybe I'll start with SWE-Bench versus SWE-Bench Verified, which is I think something I missed earlier. So SWE-Bench is, as we described, this big set of tasks that were scraped. - Like 12,000 or something. - Yeah, I think it's 2,000 in the final set, but a lot of those, even though a human did them, they're actually impossible given the information that comes with the task.

The most classic example of this is the test looks for a very specific error string, you know, like assert message equals error, something, something, something. And unless you know that's exactly what you're looking for, there's no way the model is going to write that exact same error message and so the tests are going to fail.
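As a made-up illustration of that failure mode (not an actual SWE-Bench test), a hidden test might pin the exact wording of an error message that the issue text never quotes:

```python
# Hypothetical hidden test in the style described above. The stub below stands in
# for the repo function a patch is supposed to fix.
def parse_unit(name):
    raise ValueError(f"Unknown unit {name!r}")

def test_invalid_unit_message():
    try:
        parse_unit("rad")
    except ValueError as err:
        # The grader checks the exact string. A correct fix written from the issue
        # text alone will almost never reproduce this wording, so the task is
        # effectively impossible without seeing the hidden test.
        assert str(err) == "Invalid unit 'rad'; expected one of: deg, arcmin, arcsec"
```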

So SWE-Bench Verified was actually made in partnership with OpenAI and they hired humans to go review all these tasks and pick out a subset to try to remove any obstacle like this that would make the tasks impossible. So in theory, all of these tasks should be fully doable by the model.

And they also had humans grade how difficult they thought the problems would be: less than 15 minutes, 15 minutes to an hour, an hour to four hours, and greater than four hours. So that's kind of this interesting sort of how big the problem is as well.

To get SWE-Bench Verified to 90%, actually, maybe I'll also start off with some of the remaining failures that I see, like when running our model on SWE-Bench. I'd say the biggest cases are the model sort of operates at the wrong level of abstraction. And what I mean by that is the model puts in maybe a smaller band-aid when really the task is asking for a bigger refactor.

And some of those, you know, is the model's fault, but a lot of times, if you're just sort of seeing the GitHub issue, it's not exactly clear like which way you should go. So even though these tasks are possible, there's still some ambiguity in how the tasks are described.

That being said, I think in general, like language models frequently will produce like a smaller diff when possible rather than trying to do a big refactor. I think another area is sort of, at least the agent we created didn't have any multimodal abilities, even though our models are very good at vision.

So I think that's just a missed opportunity. And if I read through some of the traces, there's some funny things where, especially the tasks on matplotlib, which is a graphing library, the test script will like save an image and the model will just say, okay, it looks great. You know, without looking at it.

So there's certainly extra juice to squeeze there of just making sure the model really understands all the sides of the input that it's given, including multimodal. But yeah, I think like getting to 92%. So this is something that I have not looked at, but I'm very curious about. I want someone to look at like, what is the union of all of the different tasks that have been solved by at least one attempt at SWE-Bench Verified?

There's a ton of submissions to the benchmark. And so I'd be really curious to see how many of those 500 tasks, at least someone has solved. And I think, you know, there's probably a bunch that none of the attempts have ever solved. And I think it'd be interesting to look at those and say, hey, is there some problem with these?

Like, are these impossible? Or are they just really hard and only a human could do them? - Yeah, like specifically, is there a category of problems that are still unreachable by any LLM agent? - Yeah, yeah, and I think there definitely are. The question is, are those fairly inaccessible or are they just impossible because of the descriptions?

But I think certainly some of the tasks, especially the ones that the human graders reviewed as like taking longer than four hours are extremely difficult. I think we got a few of them right, but not very many at all in the benchmark. - And did those take less than four hours?

- They certainly did less than, yeah, than four hours. - Is there a correlation of length of time with like human estimated time, you know what I mean? Or do we have sort of Moravec's paradox type situations where it's something super easy for a model, but hard for a human?

- I actually haven't done the stats on that, but I think that'd be really interesting to see of like how many tokens does it take and how is that correlated with difficulty? What is the likelihood of success with difficulty? I think actually a really interesting thing that I saw, one of my coworkers who was also working on this named Simon, he was focusing just specifically on the very hard problems, the ones that are said to take longer than four hours.

And he ended up sort of creating a much more detailed prompt than I used. And he got a higher score on the most difficult subset of problems, but a lower score overall in the whole benchmark. And the prompt that I made, which is sort of much more simple and bare bones, got a higher score on the overall benchmark, but lower score on the really hard problems.

And I think some of that is the really detailed prompt made the model sort of overcomplicate a lot of the easy problems. 'Cause honestly, a lot of the SWE-Bench problems really do just ask for a band-aid, where it's like, hey, this crashes if this is none, and really all you need to do is put a check if none.

And so sometimes like trying to make the model think really deeply, like it'll think in circles and overcomplicate something, which certainly human engineers are capable of as well. But I think there's some interesting thing of like the best prompt for hard problems might not be the best prompt for easy problems.

- How do we fix that? Are you supposed to fix it at the model level? Like how do I know what prompt I'm supposed to use? - Yeah, and I'll say this was a very small effect size. And so I think this is not, I think this isn't like worth obsessing over, but I would say that as people are building systems around agents, I think the more you can separate out the different kinds of work the agent needs to do, the better you can tailor a prompt for that task.

And I think that also creates a lot of like, for instance, if you were trying to make an agent that could both, you know, solve hard programming tasks, and it could just like, you know, write quick test files for something that someone else had already made, the best way to do those two tasks might be very different prompts.

I see a lot of people build systems where they first sort of have a classification and then route the problem to two different prompts. And that's sort of a very effective thing because one, it makes the two different prompts much simpler and smaller. And it means you can have someone work on one of the prompts without any risk of affecting the other tasks.
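A minimal sketch of that classify-then-route pattern, assuming the Anthropic Python SDK; the model aliases and prompt wording here are placeholders, not anything from the conversation:

```python
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set in the environment

SIMPLE_PROMPT = "You write small, focused patches. Prefer the minimal change that fixes the issue."
HARD_PROMPT = "You plan carefully, consider larger refactors, and weigh trade-offs before editing."

def classify(task: str) -> str:
    """Cheap first pass: decide whether the task is 'easy' or 'hard'."""
    resp = client.messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=5,
        messages=[{"role": "user",
                   "content": f"Reply with exactly one word, easy or hard:\n\n{task}"}],
    )
    return resp.content[0].text.strip().lower()

def solve(task: str):
    """Route the task to whichever prompt fits, keeping the two prompts independent."""
    system = SIMPLE_PROMPT if classify(task) == "easy" else HARD_PROMPT
    return client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        system=system,
        messages=[{"role": "user", "content": task}],
    )
```

The separation of concerns Erik mentions falls out naturally here: either prompt can be tuned without touching the other.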

So it creates like a nice separation of concerns. - Yeah, and the other model behavior thing you mentioned, they prefer to generate like shorter diffs. Why is that? Like, is there a way? You know, I think that's maybe like the lazy model question that people have is like, why are you not just generating the whole code instead of telling me to implement it?

- Are you saving tokens? - Yeah, exactly. It's like conspiracy theory. - Yeah, yeah, yeah. - So there's two different things there. One is like the, I'd say maybe like doing the easier solution rather than the hard solution. And I'd say the second one, I think what you're talking about is like the lazy model is like when the model says like dot, dot, dot, code remains the same.

- Code goes here. I'm like, thanks, dude. - I think honestly, like that just comes as like, people on the internet will do stuff like that. And like, dude, if you were talking to a friend and you asked them like to give you some example code, they would definitely do that.

They're not going to reroll the whole thing. And so I think that's just a matter of like, you know, sometimes you actually do just want like the relevant changes and so I think it's, this is something where a lot of times like, you know, the models aren't good at mind reading of like which one you want.

So I think that like the more explicit you can be in prompting to say, hey, you know, give me the entire thing, no elisions, versus just give me the relevant changes. And that's something, you know, we want to make the models always better at following those kinds of instructions.

- I'll drop a couple of references here. We're recording this like a day after Lex Fridman dropped his five-hour pod with Dario and Amanda and the rest of the crew. And Dario actually made this interesting observation that like, we actually don't want, we complain about models being too chatty in text and then not chatty enough in code.

And so like getting that right is kind of an awkward bar because, you know, you don't want it to yap in its responses, but then you also want it to be complete in code. And then sometimes it's not complete. Sometimes you just want it to diff, which is something that Anthropic has also released with, you know, like the fast edit stuff that you guys did.

And then the other thing I wanted to also double back on is the prompting stuff. You said it was a small effect, but it was a noticeable effect in terms of like picking a prompt. I think we'll go into SWE-Agent in a little bit, but I kind of reject the fact that you need to choose one prompt and like have your whole performance be predicated on that one prompt.

I think something that Anthropic has done really well is meta-prompting, prompting for a prompt. And so why can't you just develop a meta-prompt for all the other prompts? And, you know, if it's a simple task, make a simple prompt. If it's a hard task, make a hard prompt. Obviously I'm probably hand-waving a little bit, but I will definitely ask people to try the Anthropic Workbench meta-prompting system if they haven't tried it yet.

I went to the build day recently at Anthropic HQ and it's the closest I've felt to an AGI, like learning how to operate itself. That, yeah, it's really magical. - Yeah, no, Claude is great at writing prompts for Claude. - Right, so meta-prompting. - Yeah, yeah. The way I think about this is that humans, even like very smart humans still use sort of checklists and use sort of scaffolding for themselves.

Surgeons will still have checklists even though they're incredible experts. And certainly, you know, a very senior engineer needs less structure than a junior engineer, but there still is some of that structure that you want to keep. And so I always try to anthropomorphize the models and try to think about for a human, sort of what is the equivalent?

And that's sort of, you know, how I think about these things is how much instruction would you give a human with the same task? And would you need to give them a lot of instruction or a little bit of instruction? - Let's talk about the agent architecture. Maybe, so first, runtime.

You let it run until it thinks it's done or it reaches 200K context window. How did you come up? - What's up with that? - Yeah. - Yeah, I mean this, so I'd say that a lot of previous agent work built sort of these very hard-coded and rigid workflows where the model is sort of pushed through certain flows of steps.

And I think to some extent, you know, that's needed with smaller models and models that are less smart. But one of the things that we really wanted to explore was like, let's really give Claude the reins here and not force Claude to do anything, but let Claude decide, you know, how it should approach the problem, what steps it should do.

And so really, you know, what we did is like the most extreme version of this is just give it some tools that it can call and it's able to keep calling the tools, keep thinking, and then yeah, keep doing that until it thinks it's done. And that's sort of the most minimal agent framework that we came up with.
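Here is a hedged sketch of what that minimal loop looks like with the Anthropic Messages API tool-use flow; the single bash tool, its description, and the truncation limits are illustrative assumptions, not the reference agent's actual code:

```python
import subprocess
import anthropic

client = anthropic.Anthropic()  # assumes ANTHROPIC_API_KEY is set

# One illustrative tool; the reference agent described here also has a file-editing tool.
TOOLS = [{
    "name": "bash",
    "description": "Run a shell command in the repo and return its output.",
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string", "description": "The command to run."}},
        "required": ["command"],
    },
}]

def run_tool(block) -> str:
    if block.name == "bash":
        out = subprocess.run(block.input["command"], shell=True,
                             capture_output=True, text=True, timeout=120)
        return (out.stdout + out.stderr)[-10_000:]  # crude output truncation
    return f"Unknown tool: {block.name}"

def run_agent(task: str, model: str = "claude-3-5-sonnet-latest"):
    messages = [{"role": "user", "content": task}]
    while True:  # no hard step cap: the loop ends when Claude stops asking for tools
        resp = client.messages.create(model=model, max_tokens=4096,
                                      tools=TOOLS, messages=messages)
        messages.append({"role": "assistant", "content": resp.content})
        if resp.stop_reason != "tool_use":
            return resp  # the model decided it is done
        results = [{"type": "tool_result", "tool_use_id": b.id, "content": run_tool(b)}
                   for b in resp.content if b.type == "tool_use"]
        messages.append({"role": "user", "content": results})
```

In practice you would also stop once the conversation approaches the 200K-token context limit, which is the other termination condition mentioned above.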

And I think that works very well. I think especially the new Sonnet 3.5 is very, very good at self-correction. It has a lot of like grit. Claude will try things that fail and then try, you know, come back and sort of try different approaches. And I think that's something that you didn't see in a lot of previous models.

Some of the existing agent frameworks that I looked at, they had whole systems built to try to detect loops and see, oh, is the model doing the same thing, you know, more than three times, and we have to pull it out. And I think like the smarter the models are, the less you need that kind of extra scaffolding.

So yeah, just giving the model tools and letting it keep sampling and calling tools until it thinks it's done was the most minimal framework that we could think of. And so that's what we did. - So you're not pruning like bad paths from the context. If it tries to do something, it fails, you just burn all these tokens to- - Yes, and so I would say the downside of this is that this is sort of a very token expensive way to do this.

- But still, it's very common to prune bad paths 'cause models get stuck. - Yeah, but I'd say that, yeah, 3.5 is not getting stuck as much as previous models. And so, yeah, we wanted to at least just try the most minimal thing. I know I would say that, you know, this is definitely an area of future research, especially if we talk about these problems that are going to take a human more than four hours.

Those might be things where we're gonna need to go prune bad paths to let the model be able to accomplish this task within 200K tokens. So certainly I think there's like future research to be done in that area, but it's not necessary to do well on these benchmarks. - Another thing I always have questions about on context window things, there's a mini cottage industry of code indexers that have sprung up for large codebases like the ones in SWE-Bench.

You didn't need them? - We didn't. And I think I'd say there's like two reasons for this. One is like SWE-Bench specific and the other is a more general thing. The more general thing is that I think Sonnet is very good at what we call agentic search and what this basically means is letting the model decide how to search for something.

It gets the results and then it can decide should it keep searching or is it done? Does it have everything it needs? So if you read through a lot of the SWE-Bench traces, the model is calling tools to view directories, list out things, view files, and it will do a few of those until it feels like it's found the file where the bug is and then it will start working on that file.

And I think like, again, this is all, everything we did was about just giving Claude the full reins so there's no hard-coded system. There's no search system that you're relying on getting the correct files into context. This just totally lets Claude do it. - Or embedding things into a vector database.

- Exactly. - Oops. - No, no, I know. But again, this is very, very token expensive. And so certainly, and it also takes many, many turns. And so certainly if you want to do something in a single turn, you need to do RAG and just push stuff into the first prompt.

- And just to make it clear, it's using the bash tool, basically doing ls, looking at files, and then doing cat to the following context. - It can do that, but it's file editing tool also has a command in it called view. They can view a directory. It's very similar to ls, but it just sort of has some nice sort of quality of life improvements.

Like it'll only do an ls sort of two directories deep so that the model doesn't get overwhelmed if it does this on a huge file. I would say actually we did more engineering of the tools than the overall prompt. But the one other thing I want to say about this agentic search is that for SWE-Bench specifically, a lot of the tasks are bug reports, which means they have a stack trace in them.

And that means right in that first prompt, there is- - Tells you where to go. - It tells you where to go. And so I think this is a very easy case for the model to find the right files versus if you're using, this is a general coding assistant where there isn't a stack trace or you're asking it to insert a new feature.

I think there it's much harder to know which files to look at. And that might be an area where you would need to do more of this exhaustive search where an agentic search would take way too long. - As someone who has spent the last few years in the JS world, it'd be interesting to see SWE-Bench JS because these stack traces are useless because there's so much virtualization that we do.

So they're very, very disconnected from where the code problems are actually appearing. - That makes me feel better about my limited front end experiences. I've like always struggled with that. - It's not your fault. We've gotten ourselves into a very, very complicated situation and I'm not sure it's entirely needed, but if you talk to our friends at Vercel, they will say it is.

- I will say SWE-Bench just released SWE-Bench Multimodal, which I believe is either entirely JavaScript or largely JavaScript. And it's entirely things that have visual components of them. - Are you going to tackle that? - We will see. I think it's on the list and there's interest, but no guarantees yet.

- Just as a side note, it occurs to me that every model lab, including Anthropic, but the others as well, you should have your own SWE-Bench. Whatever your bug tracker tool, this is a general methodology that you can use to track progress, I guess. - Yeah, sort of running on our own internal code base.

Yeah, that's a fun idea. - Since you spend so much time on the tool design, so you have this added tool that can make changes and whatnot. Any learnings from that that you wish the AI IDEs would take in? Is there some special way to look at files, feed them in?

- I would say the core of that tool is string replace. And so we did a few different experiments with different ways to specify how to edit a file. And string replace, basically, the model has to write out the existing version of the string and then a new version, and that just gets swapped in.

We found that to be the most reliable way to do these edits. Other things that we tried were having the model directly write a diff, having the model fully regenerate files. That one is actually the most accurate, but it takes so many tokens. And if you're in a very big file, it's cost prohibitive.

There's basically a lot of different ways to sort of represent the same task. And they actually have pretty big differences in terms of like model accuracy. I think Aider, they have a really good blog where they explore some of these different methods for editing files and they post results about them, which I think is interesting.
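At its core, a string-replace edit is very simple; here is a hypothetical sketch (not the released tool, which also handles viewing files, inserting lines, and so on):

```python
from pathlib import Path

def str_replace(path: str, old_str: str, new_str: str) -> str:
    """Swap one exact occurrence of old_str for new_str in the file at path."""
    text = Path(path).read_text()
    count = text.count(old_str)
    if count == 0:
        return f"Error: old_str not found in {path}. Check whitespace and indentation exactly."
    if count > 1:
        return (f"Error: old_str appears {count} times in {path}. "
                f"Include more surrounding lines so the match is unique.")
    Path(path).write_text(text.replace(old_str, new_str, 1))
    return f"Edited {path}."
```

Returning descriptive error strings instead of raising lets the model read the failure and self-correct on its next turn.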

But I think this is like a really good example of the broader idea that like you need to iterate on tools rather than just a prompt. And I think a lot of people, when they make tools for an LLM, they kind of treat it like they're just writing an API for a computer.

And it's sort of very minimal, it's sort of just the bare bones of what you'd need. And honestly, like it's so hard for the models to use those. I really, again, I come back to anthropomorphizing these models. Like imagine you're a developer and you just read this for the very first time and you're trying to use it.

Like you can do so much better than like just sort of the bare API spec of what you'd often see, like include examples in the description, include like really detailed explanations of how things work. And I think that, again, also think about what is the easiest way for the model to represent the change that it wants to make.

For file editing as an example, writing a diff is actually, let's take the most extreme example. You want the model to literally write a patch file. I think patch files have at the very beginning, like numbers of how many total lines change. That means before the model has actually written the edit, it needs to decide how many numbers or how many lines are gonna change.

Don't quote me on that. I'm pretty sure, I think it's something like that, but I don't know if that's exactly the diff format, but you can certainly have formats that are much easier to express without messing up than others. And I like to think about like, think about how much human effort goes into designing human interfaces for things.

Like, it's incredible. This is like entirely what FrontEnd is about, is creating better interfaces to kind of do the same things. And I think that same amount of attention and effort needs to go into creating agent computer interfaces. - It's a topic we've discussed, ACI or whatever that looks like.

I would also shout out that, I think you released some of these toolings as part of computer use as well, and people really liked it. Yeah, it's all open source if people wanna check it out. I'm curious if there's an environment element that complements the tools. So how do you, like, do you have a sandbox?

Do you, is it just Docker? 'Cause that can be slow or resource intensive. Do you have anything else that you would recommend? - Yeah, I don't think I can talk publicly about details of how we implement our sandboxing. But obviously, we need to have sort of safe, secure, and fast sandboxes for training, for the models to be able to practice writing code and working in an environment.

- I'm aware of a few startups working on agent sandboxing. E2B is a close friend of ours that Alessio has led a round in. But also I think there's others where they're focusing on snapshotting memory so that it can do time travel for debugging, computer use where you can control the mouse or keyboard or something like that.

Whereas here, I think that the kinds of tools that we offer it are very, very limited to coding agent use cases like bash, edit, you know, stuff like that. - Yeah, I think the computer use demo that we released is an extension of that. It has the same bash and edit tools, but it also has the computer tool that lets it get screenshots and move the mouse and keyboard.

Yeah, so I definitely think there's sort of more general tools there. And again, the tools we released as part of SWE-Bench were, I'd say they're very specific for editing files and doing bash, but at the same time, that's actually very general if you think about it. Anything that you would do on a command line or editing files, you can do with those tools.

And so we do want those tools to feel like any sort of computer terminal work could be done with those same tools, rather than making tools that were very specific for SWE-Bench, like run tests as its own tool, for instance. - Yeah, you had a question about tests. - Yeah, yeah, exactly.

I saw there's no test writer tool. Is it because it generates the code and then you're running it against SWE-Bench anyway? So it doesn't really need to write the test or? - Yeah, so this is one of the interesting things about SWE-Bench is that the tests that the model's output is graded on are hidden from it.

That's basically so that the model can't cheat by looking at the tests and writing the exact solution. But I'd say typically the model, the first thing it does is it usually writes a little script to reproduce the error. And again, most SWE-Bench tasks are like, "Hey, here's a bug that I found.

"I run this and I get this error." So the first thing the model does is try to reproduce that. And so it's kind of then rerunning that script as a mini test. But yeah, sometimes the model will accidentally introduce a bug that breaks some other test and it doesn't know about that.

- And should we be redesigning any tools, APIs? We kind of talked about this on having more examples, but I'm thinking even things like 'q' as a query parameter in many APIs. It's easier for the model to re-query than read the 'q'. I'm sure it learned the 'q' by this point, but is there anything you've seen, like building this, where it's like, hey, if I were to redesign some CLI tool, some API tool, I would change the way it's structured to make it better for LLMs? - I don't think I've thought enough about that off the top of my head, but certainly just making everything more human-friendly.

Like having like more detailed documentation and examples. I think examples are really good in things like descriptions. Like so many, like just using the Linux command line, like how many times I do like dash dash help or look at the man page or something. It's like, just give me one example of like how I actually use this.

Like, I don't want to go read through a hundred flags. Just give me the most common example. And again, so things that would be useful for a human I think are also very useful for a model. - Yeah, I mean, there's one thing that you cannot give to code agents that is useful for humans, which is access to the internet.

I wonder how to design that in. Because one of the issues that I also had with just the idea of SWE-Bench is that you can't do follow-up questions. You can't like look around for similar implementations. These are all things that I do when I try to fix code.

And we don't do that. It's not, it wouldn't be fair. Like it'd be too easy to cheat, but then also it's kind of not being fair to these agents because they're not operating in a real world situation. Like if I had a real world agent, of course I'm giving it access to the internet 'cause I'm not trying to pass a benchmark.

I don't have a question in there, more just like, I feel like the most obvious tool, access to the internet is not being used. - I think that that's really important for humans. But honestly, the models have so much general knowledge from pre-training that it's like less important for them.

- But like versioning, you know. - If you're working on a newer thing that was like, that came after the knowledge cutoff, then yes, I think that's very important. I think actually this is like a broader problem that there is a divergence between SweeBench and like what customers will actually care about who are working on a coding agent for real use.

And I think one of those there is like internet access and being able to like, how do you pull in outside information? I think another one is like, if you have a real coding agent, you don't wanna have it start on a task and like spin its wheels for hours because you gave it a bad prompt.

You want it to come back immediately and ask follow-up questions and like really make sure it has a very detailed understanding of what to do, then go off for a few hours and do work. So I think that like real tasks are gonna be much more interactive with the agent rather than this kind of like one-shot system.

And right now there's no benchmark that measures that. And maybe I think it'd be interesting to have some benchmark that is more interactive. I don't know if you're familiar with TauBench, but it's a customer service benchmark where there's basically one LLM that's playing the user or the customer that's getting support and another LLM that's playing the support agent and they interact and try to resolve the issue.

- Yeah, we talked to the LMSYS guys. - Awesome, yeah. - And they also did MT-Bench for people listening along. So maybe we need MT-SWE-Bench. - Sure. Yeah, so maybe you could have something where like before the SWE-Bench task starts, you have like a few back and forths with kind of like the author who can answer follow-up questions about what they want the task to do.

And of course you'd need to do that where it doesn't cheat and like just get the exact thing out of the human or out of the sort of user. But I think that will be a really interesting thing to see. If you look at sort of existing agent work like Replit's coding agent, I think one of the really great UX things they do is like first having the agent create a plan and then having the human approve that plan or give feedback.

I think for agents in general, like having a planning step at the beginning, one, just having that plan will improve performance on the downstream task just because it's kind of like a bigger chain of thought, but also it's just such a better UX. It's way easier for a human to iterate on a plan with a model rather than iterating on the full task that sort of has a much slower time through each loop.

If the human has approved this implementation plan, I think it makes the end result a lot more sort of auditable and trustable. So I think there's a lot of things sort of outside of SWE-Bench that will be very important for real agent usage in the world. - Yeah, I would say also, there's a couple of comments on names that you dropped.

Copilot also does the plan stage before it writes code. I feel like those approaches have generally been less Twitter successful because it's not prompt to code, it's prompt plan code. So there's a little bit of friction in there, but it's not much. Like it actually, you get a lot for what it's worth.

And I also like the way that Devin does it where you can sort of edit the plan as it goes along. And then the other thing with Replit, we hosted a sort of dev day pre-game with Replit and they also commented about multi-agents. So like having two agents kind of bounce off of each other.

I think it's a similar approach to what you're talking about with kind of the few-shot example, just as in the prompts of clarifying what the agent wants. But typically I think this would be implemented as a tool calling another agent, like a sub-agent. I don't know if you explored that.

Do you like that idea? - I haven't explored this enough, but I've definitely heard of people having good success with this, of almost like basically having a few different sort of personas of agents, even if they're all the same LLM. I think this is one thing with multi-agent that a lot of people will kind of get confused by is they think it has to be different models behind each thing, but really it's sort of usually the same model with different prompts.

And just having them have different personas to kind of bring different sort of thoughts and priorities to the table. I've seen that work very well and sort of create a much more thorough and thought out response. I think the downside is just that it adds a lot of complexity and it adds a lot of extra tokens.

So I think it depends what you care about. If you want a plan that's very thorough and detailed, I think it's great. If you want a really quick, just like write this function, you know, you probably don't want to do that and have like a bunch of different calls before it does this.

- And just talking about the prompt, why are XML tags so good in Claude? I think initially people were like, oh, maybe you're just getting lucky with XML, but I saw obviously you use them in your own agent prompts, so they must work. And why is it so model specific to your family?

- Yeah, I think that there's, again, I'm not sure how much I can say, but I think there's historical reasons that internally we've preferred XML for the data. I think also the one broader thing I'll say is that if you look at certain kinds of outputs, there is overhead to outputting in JSON.

Like if you're trying to output code in JSON, there's a lot of extra escaping that needs to be done. And that actually hurts model performance across the board, versus if you're in just a single XML tag, there's none of that sort of escaping that needs to happen.
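To make the escaping overhead concrete, here is a quick comparison using Python's json module; the snippet itself is arbitrary:

```python
import json

snippet = 'def greet(name):\n    print(f"Hello, {name}!")\n'

# Inside JSON, every quote and newline in the code has to be escaped, and the
# model must emit all of that escaping correctly token by token:
print(json.dumps({"code": snippet}))
# -> {"code": "def greet(name):\n    print(f\"Hello, {name}!\")\n"}

# Inside a single XML tag, the same code can be written out verbatim:
print(f"<code>\n{snippet}</code>")
```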

That being said, I haven't tried having it write, you know, HTML and XML, which maybe then you start running into weird escaping things there, I'm not sure. But yeah, I'd say that's some historical reasons and there's less overhead of escaping. - I use XML in other models as well.

And it's just a really nice way to make sure that the thing that ends is tied to the thing that starts. That's the only way to do code fences where you're pretty sure, like example one start, example one end, like that is one cohesive unit. - Because the braces are nondescriptive.

- Yeah, exactly. That would be my simple reason. XML is good for everyone, not just Claude. Claude was just the first one to popularize it, I think. - I do definitely prefer to read XML than read JSON, so yeah. - Any other details that are like maybe underappreciated? I know, for example, you had the absolute paths versus relative.

Any other, yeah, fun nuggets? - Yeah, no, I think that's a good sort of anecdote to mention about iterating on tools. Like I said, spend time prompt engineering your tools and don't just write the prompt, but like write the tool and then actually give it to the model and like read a bunch of transcripts about how the model tries to use the tool.

And I think you will find, like by doing that, you will find areas where the model misunderstands a tool or makes mistakes and then basically change the tool to make it foolproof. And there's this Japanese term, poka-yoke, about making tools mistake-proof. You know, the classic idea is you can have like a plug that can fit either way and that's dangerous, or you can make it asymmetric so that like it can't fit this way, it has to go like this.

And like, that's a better tool because you can't use it the wrong way. So for this example of like absolute paths, one of the things that we saw while testing these tools is, oh, if the model has like, you know, done CD and moved to a different directory, it would often get confused when trying to use the tool because it's like now in a different directory.

And so the paths aren't lining up. So we said, oh, look, let's just force the tool to always require an absolute path. And then, you know, that's easy for the model to understand. It knows sort of where it is, it knows where the files are. And then once we have it always giving absolute paths, it never messes up even like no matter where it is, because it just, if you're using an absolute path, it doesn't matter where you are.
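That kind of poka-yoke can be as small as a guard at the top of the tool; a hypothetical sketch:

```python
import os

def check_path(path: str) -> str | None:
    """Return an error message for the model, or None if the path is usable."""
    if not os.path.isabs(path):
        # Absolute paths mean the tool behaves the same no matter what directory
        # the model has cd'd into, which removes a whole class of confusion.
        return (f"Error: {path!r} is not an absolute path. "
                f"Provide the full path, for example /repo/src/module.py.")
    if not os.path.exists(path):
        return f"Error: {path} does not exist. Use the view command to list directories first."
    return None
```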

So like iterations like that, you know, let us make the tool foolproof for the model. I'd say there's other categories of things where we see, oh, if the model, you know, opens Vim, like, you know, it's never going to return. And so the tool is like stuck. - Did it get stuck?

- Yeah. - Get out of Vim. - What? - Well, because the tool is like, it just text in, text out, it's not interactive. So it's not like the model doesn't know how to get out of Vim. It's that the way that the tool is like hooked up to the computer is not interactive.

- Yes, I mean, there is the meme of no one knows how to get out of Vim. You know, basically we just added instructions in the tool of like, hey, don't launch commands that don't return. Like, yeah, like don't launch Vim, don't launch whatever. If you do need to do something, you know, put an ampersand after it or launch it in the background.

And so like, just, you know, putting kind of instructions like that, just right in the description for the tool really helps the model. And I think like that's an underutilized space of prompt engineering where like people might try to do that in the overall prompt, but just put that in the tool itself.
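For instance, that guidance can live directly in the tool's description field rather than in the system prompt; the wording below is made up for illustration:

```python
BASH_TOOL = {
    "name": "bash",
    "description": (
        "Run a command in a non-interactive shell and return its output.\n"
        "- Do NOT launch programs that never return (vim, top, watch, interactive "
        "REPLs); the session will hang.\n"
        "- For long-running processes, append '&' to run them in the background.\n"
        "- Example input: {\"command\": \"grep -rn 'TODO' src/ | head -20\"}"
    ),
    "input_schema": {
        "type": "object",
        "properties": {"command": {"type": "string"}},
        "required": ["command"],
    },
}
```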

So the model knows that it's like for this tool, this is what's relevant. - You said you worked on the function calling and tool use before you actually started the SWE-Bench work, right? Was there any surprises? Because you basically went from creator of that API to user of that API.

Any surprises or changes you would make now that you have extensively dog fooded in a state-of-the-art agent? - I want us to make like a, maybe like a little bit less verbose SDK. I think some way, like right now it just takes a, I think we sort of force people to do the best practices of writing out sort of these full JSON schemas, but it would be really nice if you could just pass in a Python function as a tool.

I think that could be something that-- - I think that there's a lot of like-- - There's helper libraries. - Instructor, you know, I don't know if there, if there's anyone else that is specializing for Anthropic, maybe Jeremy Howard and Simon Willison and stuff. I think they all have Claude-specific stuff that they are working on.

- Claudette. - Claudette, exactly. I also wanted to spend a little bit of time with SWE-Agent. It seems like a very general framework. Like, is there a reason you picked it, apart from it's the same authors as SWE-Bench? - The main thing we wanted to go with was it was the same authors as SWE-Bench, so it just felt sort of like the safest, most neutral option.

And it was, you know, very high quality. It was very easy to modify, to work with. I would say it also actually, their underlying framework is sort of this, it's like, you know, think, act, observe, that they kind of go through this loop, which is like a little bit more hard-coded than what we wanted to do, but it's still very close.

That's still very general. So it felt like a good match as sort of the starting point for our agent. And we had already sort of worked with, and talked with, the SWE-Bench people directly. So it felt nice to just have, you know, we already know the authors, this will be easy, easy to work with.

- I'll share a little bit of like, this all seems disconnected, but once you figure out the people and where they go to school, it all makes sense. So it's all Princeton. - Yeah, SWE-Bench and SWE-Agent, it's a group out of Princeton. - Yeah, we had Shunyu Yao on the pod and he came up with the ReAct paradigm.

And that's like, think, act, observe, like that's all ReAct. So they're all friends. - Yep, yeah, exactly. And you know, our, if you actually read our traces of our submission, you can actually see like, think, act, observe, like in our logs. And like, we just didn't even like change the printing code.

Like that's, so it's not actually, it's like doing still function calls under the hood and the model can do sort of multiple function calls in a row without thinking in between if it wants to. But yeah, so a lot of similarities and a lot of things we inherited from SWE-Agent just as a starting point for the framework.

- Yeah, any thoughts about other agent frameworks? I think there's, you know, the whole gamut from very simple to like very complex. - AutoGen, CrewAI, LangGraph. - Yeah, yeah. I think I haven't explored a lot of them in detail. I would say with agent frameworks in general, they can certainly save you some like boilerplate, but I think there's actually this like downside of making agents too easy where you end up very quickly like building a much more complex system than you need.

And suddenly, you know, instead of having one prompt, you have five agents that are talking to each other and doing a dialogue. And it's like, because the framework made that 10 lines to do, you end up building something that's way too complex. So I think I would actually caution people to like try to start without these frameworks if you can, because you'll be closer to the raw prompts and be able to sort of directly understand what's going on.

I think a lot of times these frameworks also, by trying to make everything feel really magical, you end up sort of really hiding what the actual prompt and output of the model is, and that can make it much harder to debug. So certainly these things have a place, and I think they do really help at getting rid of boilerplate, but they come with this cost of obfuscating what's really happening and making it too easy to very quickly add a lot of complexity.

So yeah, I would recommend people to like try it from scratch and it's like not that bad. Would you rather have like a framework of tools? You know, do you almost see like, hey, like it's maybe easier to get tools that are already well curated, like the ones that you build, you know, if I had an easy way to get the best tool from you and like you maintain the definition or yeah, any thoughts on how you want to formalize tool sharing?

Yeah, I think that's something that we're certainly interested in exploring. And I think there is space for sort of these general tools that will be very broadly applicable. But at the same time, most people that are building on these, they do have, you know, much more specific things that they're trying to do.

You know, I think that might be useful for hobbyists and demos, but the ultimate end applications are going to be bespoke. And so we just want to make sure that the model's great at any tool that it uses, but certainly something we're exploring. - So everything bespoke, no frameworks, no anything.

Just build. - For now, for now. - Yeah, I would say that like the best thing I've seen is people building up from like, build some good util functions and then you can use those as building blocks. - Yeah, yeah. I have a utils folder where I call these scripts.

My framework is like def call_anthropic, and then I just put all the defaults. - Yeah, exactly. There's a startup hidden in every utils folder, you know? - No, totally not. - If you use it enough, like it's a startup, you know, like at some point. I'm kind of curious, is there a maximum length of turns that it took?

Like what was the longest run? - I actually don't. I mean, we had, it had basically infinite turns until it ran into 200K context. I should have looked this up. I don't know. And so for some of those failed cases where it eventually ran out of context, I mean, it was over a hundred turns.

I'm trying to remember like the longest successful run, but I think it was definitely over a hundred turns that some of the times, you know? - Which is not that much. It's a coffee break. - Yeah, yeah. But certainly, you know, these things can be a lot of turns.

And I think that's because some of these things are really hard where it's going to take, you know, many tries to do it. - Yeah, and if you think about like, think about a task that takes a human four hours to do, like think about how many different like files you read and like times you edit a file in four hours.

Like that's a lot more than a hundred. - How many times you open Twitter? - Yeah. - Because you get distracted. But if you had a lot more compute, what's kind of like the return on the extra compute now? So like, you know, if you had thousands of turns or like whatever, like how much better would it get?

- Yeah, this, I don't know. And I think this is, I think sort of one of the open areas of research in general with agents is memory and sort of how do you have something that can do work beyond its context length where you're just purely appending. So you mentioned earlier things like pruning bad paths.

I think there's a lot of interesting work around there. Can you just roll back, but summarize, hey, don't go down this path. - There'll be dragons. - Yeah, I think that's very interesting that you could have something that uses way more tokens without ever using at a time more than 200K.

So I think that's very interesting. I think the biggest thing is like, can you make the model sort of losslessly summarize what it's learned from trying different approaches and bring things back? I think that's sort of the big challenge. - What about different models? So you have Haiku, which is like, you know, cheaper.

So you're like, well, what if I have a Haiku to do a lot of these smaller things and then put it back up? - I think Cursor might have said that they actually have a separate model for file editing. I'm trying to remember, I think they were on a, maybe the Lex Fridman podcast where they said like, they have a bigger model, like write what the code should be and then a different model, like apply it.

So I think there's a lot of interesting room for stuff like that. - Yeah, fast applying. We actually did a pod with Fireworks that they worked with on, it's speculative decoding. - But I think there's also really interesting things about like, you know, paring down input tokens as well.

Especially sometimes the models trying to read like a 10,000 line file, like that's a lot of tokens. And you know, most of it is actually not going to be relevant. I think it'd be really interesting to like delegate that to Haiku. Haiku read this file and just pull out the most relevant functions.

And then, you know, Sonnet reads just those and you save 90% on tokens. I think there's a lot of really interesting room for things like that. And again, we were just trying to do sort of the simplest, most minimal thing and show that it works. I'm really hoping that people, sort of the agent community builds things like that on top of our models.

That's again, why we released these tools. You know, we're not going to go and do lots more submissions to SWE-Bench and try to prompt engineer this and build a bigger system. We want people to, like the ecosystem, to do that on top of our models. But yeah, so I think that's a really interesting one.
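A hedged sketch of that delegation idea, again assuming the Anthropic SDK and current model aliases; the prompts are placeholders:

```python
import anthropic

client = anthropic.Anthropic()

def relevant_excerpt(file_text: str, question: str) -> str:
    """Let a small, cheap model pull out only the functions that matter."""
    resp = client.messages.create(
        model="claude-3-5-haiku-latest",
        max_tokens=2048,
        messages=[{"role": "user", "content": (
            "Copy, verbatim, only the functions from this file that are relevant to "
            f"the question below. No commentary.\n\nQuestion: {question}\n\n"
            f"<file>\n{file_text}\n</file>")}],
    )
    return resp.content[0].text

def answer(file_text: str, question: str):
    """The larger model then reasons over the much smaller excerpt."""
    excerpt = relevant_excerpt(file_text, question)
    return client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=4096,
        messages=[{"role": "user",
                   "content": f"{question}\n\n<code>\n{excerpt}\n</code>"}],
    )
```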

- It turns out, I think you did do 3.5 Haiku with your tools and it scored a 40.6. - Yes, yeah, so it did very well. It itself is actually very smart, which is great. But we haven't done any experiments with this like combination of the two models. But yeah, I think that's one of the exciting things is that how well Haiku 3.5 did on SWE-Bench shows that sort of even our smallest, fastest model is very good at sort of thinking agentically and working on hard problems.

Like it's not just sort of for writing simple text anymore. - And I know you're not going to talk about it, but like Sonnet is not even supposed to be the best model. You know, like Opus, we kind of left it back at 3, like we said in the intro.

At some point, I'm sure the new Opus will come out. And if you had Opus plus your tools on it, that sounds very, very good. - There's a run with SWE-Agent plus Opus, but that's the official SWE-Bench guys doing it. - That was the older, you know, 3.0. - You didn't do yours.

- Yeah. - Okay, did you want to, or did you just? I mean, you could just change the model name. - I think, I think we didn't submit it, but I think we included it in our model card. We included the score as a comparison. - Yeah. - Yeah, and Sonnet and Haiku, actually, I think the new ones, they both outperformed the original Opus.

- Yeah, I did see that. - Yeah, it's a little bit hard to find. - Yeah, yeah, it's not an exciting score, so we didn't feel like we needed to submit it to the benchmark. - We can cut over to computer use if we're okay with moving on from this topic, unless there's anything else.

- I think we're good. I think, I'm trying to think if there's anything else SWE-Bench related. - It doesn't have to be just specifically SWE-Bench, but just your thoughts on building agents, 'cause you are one of the few people that have reached the top of this leaderboard by building a coding agent.

This is the state of the art. It's surprisingly not that hard to reach with some good principles, right? But there's obviously a ton of low-hanging fruit that we covered. So just your thoughts on if you were to build a coding agent startup, maybe, what next? - I think the really interesting question for me for all the startups out there is this kind of divergence between the benchmarks and what real customers will want.

So I'm curious, maybe the next time you have a coding agent startup on the podcast, you should ask them that. What are the differences that they're starting to make? - Tomorrow. - Oh, perfect, perfect, yeah. I'm actually very curious what they will say, 'cause I also have seen, I feel like it's slowed down a little bit; I don't see the startups submitting to SWE-Bench that much anymore.

- 'Cause of the traces, the traces. So we had Cosine on, they had like a 50-something on full, on SWE-Bench full, which is the hardest one. And they were rejected because they didn't want to submit their traces. - Yep. - IP, you know? - Yeah, that makes sense, that makes sense.

- We actually, tomorrow, we're talking to Bolt, which is a Claude customer. You guys actually published a case study with them. I assume you weren't involved with that, but they were very happy with Claude. (laughing) One of the biggest launches of the year. - Yeah, totally. - We actually happened to be sitting in Adept's former office.

My take on this is Anthropic shipped Adept as a feature, or as like an open source demo. - It's still a beta feature, but yes. - What was it like when you tried it for the first time? Was it obvious that Claude had reached that stage where you could do computer use?

- It was somewhat of a surprise to me. Like, I think, I actually, I had been on vacation, and I came back, and everyone's like, computer use works. (laughing) And so it was kind of this very exciting moment. I mean, after the first, just like, you know, go to Google, I think I tried to have it play Minecraft or something, and it actually like installed and like opened Minecraft.

I was like, wow, this is pretty cool. So I was like, wow, yeah, this thing can actually use a computer. And certainly, it is still beta, you know, there's certain things that it's not very good at yet. But I'm really excited, I think, most broadly, not just for like new things that weren't possible before, but as a much lower friction way to implement tool use.

One anecdote from my days at Cobalt Robotics, we wanted our robots to be able to ride elevators, to go between floors and fully cover a building. The first way that we did this was doing API integrations with the elevator companies. And some of them actually had APIs, we could send that request, and it would move the elevator.

Each new company we did took like six months to do, 'cause they were very slow, they didn't really care. - They're an elevator company, not an API company. - Even installing, like once we had it with the company, they would have to like literally go install an API box on the elevator that we wanted to use.

And that would sometimes take six months, so very slow. And eventually we're like, okay, this is getting like, slowing down all of our customer deployments. And I was like, what if we just add an arm to the robot? And I added this little arm that could literally go and press the elevator buttons, and we used computer vision to do this.

And we could deploy that in a single day, and have the robot being able to use the elevators. At the same time, it was slower than the API, it wasn't quite as reliable, you know, sometimes it would miss and it would have to try to press it again. But it would get there, but it was slower and a little bit less reliable.

And I kind of see this as an analogy to computer use: anything you can do with computer use today, you could probably write tool use for and integrate it with APIs hooked up to the language model. But that's going to take a bunch of software engineering to go write those integrations, you'll have to do all this stuff.

With computer use, just give the thing a browser that's logged into what you want to integrate with, and it's going to work immediately. And I see that like reduction and friction as being incredibly exciting. Of like, imagine like a customer support team, where, okay, hey, you got this customer support bot, but you need to go integrate it with all these things.

And you don't have any engineers on your customer support team. But if you can just give the thing a browser that's logged into your systems that you need it to have access to, now, suddenly in one day, you could be up and rolling with a fully integrated customer service bot that could go do all the actions you care about.
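
To make the "just give it a browser" idea concrete, here is a minimal sketch of the observe-act loop this style of agent runs. The ask_model() helper and the action format are hypothetical stand-ins (not the actual Anthropic API), and the screen actions use pyautogui-style primitives:

```python
import pyautogui  # screenshots plus mouse/keyboard control

def ask_model(goal: str, screenshot_png: bytes, history: list[dict]) -> dict:
    """Hypothetical stand-in for the model call: given the goal and the current
    screen, return the next action, e.g. {"type": "click", "x": 120, "y": 340},
    {"type": "type", "text": "hello"}, or {"type": "done"}."""
    raise NotImplementedError

def run(goal: str, max_steps: int = 50) -> None:
    history: list[dict] = []
    for _ in range(max_steps):
        shot = pyautogui.screenshot()              # observe the screen
        action = ask_model(goal, shot.tobytes(), history)
        if action["type"] == "done":
            break
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)
        elif action["type"] == "key":
            pyautogui.press(action["key"])
        history.append(action)                     # the model sees what it already did
```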

So I think that's the most exciting thing for me about computer use, is reducing that friction of integrations to almost zero. - Or farming on World of Warcraft. - Yes, or that. - Just go computer use, very high value use cases. - What I always say about this is, you know, this is like the oldest question in robotics or self-driving, which is, you know, do you drive by vision or do you have special tools?

And vision is the universal tool that can replace all the other tools. There's trade-offs, but there's situations in which that will win out. But, you know, this week's podcast, the one that we just put out, had Stan Polu from Dust saying that he doesn't see a future where computer use is the significant workhorse.

I think there could be a separation between maybe like the high volume use cases, you want APIs, and then the long tail, you want computer use. - I totally agree. - Right? So you'll start, you'll prototype something with computer use, and then, hey, this is working. Like customers have adopted this feature.

Okay, like, let's go turn it into an API and it'll be faster and use less tokens. - Yeah, I'd be interested to see a computer use agent replace itself by figuring out the API and then just dropping out of the equation altogether. You know? - Yeah, that's really fun actually.

- If I was running an RPA company, like you would have the RPA scripting, RPA for people listening is robotic process automation, where you would script things that like always show up in sequence. So you don't have an LLM in the loop. And so basically what you need to do is train an LLM to code that script.
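
That hand-off could be as simple as logging the actions from one successful computer-use run and replaying them later with no model in the loop. A sketch under those assumptions, reusing the same made-up action format and pyautogui-style primitives as above:

```python
import json
import time
import pyautogui

def record(actions: list[dict], path: str) -> None:
    """Save the actions the agent took (same format as the loop above)."""
    with open(path, "w") as f:
        json.dump(actions, f, indent=2)

def replay(path: str, delay: float = 0.5) -> None:
    """Re-run a recorded session with no LLM in the loop, RPA-style.
    This only works while the UI keeps showing up in the same places."""
    with open(path) as f:
        actions = json.load(f)
    for action in actions:
        if action["type"] == "click":
            pyautogui.click(action["x"], action["y"])
        elif action["type"] == "type":
            pyautogui.write(action["text"], interval=0.02)
        elif action["type"] == "key":
            pyautogui.press(action["key"])
        time.sleep(delay)  # crude pacing; a robust script would wait on UI state
```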

And then you can sort of naturally hand off from computer use to non-computer use. - Yeah, or have some way to turn Claude's computer use actions into a saved script that you can then run repeatedly. - Yeah, it'd be interesting to record that. - Why did you decide to not ship any sandbox harness for computer use?

It's kind of like, "Hey, peace, run at your own risk." - It's Docker, right? - No, no, we launched it with, I think a VM or Docker, a Docker as system. - But it's not for your actual computer, right? Like the Docker instance is like runs in the Docker.

It's not for- - Yeah, it runs its own browser. I think, I mean, the main reason for that is one is sort of security. You know, we don't want, you know, the model can do anything. So we wanted to give it a sandbox, not have people do their own computer, at least sort of for our default experience.

We really care about making the default safe; I think that's the best way for us to do it. And I mean, very quickly people made modifications to let you run it on your own desktop. And that's fine. Someone else can do that, but we don't want that to be the official Anthropic thing to run.

I would say also like from a product perspective right now, because this is sort of still in beta, I think a lot of the most useful use cases are, like a sandbox is actually what you want. You want something where, hey, it can't mess up anything in here. It only has what I gave it.

Also, if it's using your computer, you know, you can't use your computer at the same time. I think you actually like want it to have its own screen. It's like you and a person pair programming, but only on one laptop versus you have two laptops. - Everyone should totally have a side laptop where the computer is just doing its thing.

- Yeah, I think it's just a better experience. Unless there's something very explicit you want it to do for you on your own computer. - It becomes like you're sort of shelling into a remote machine and maybe checking in on it every now and then. I have fond memories of, half our audience is going to be too young to remember this, but Citrix, like desktop experience, like you were sort of remote into a machine that someone else was operating.

And for a long time, that would be how you did like enterprise computing. - It's a viewer. - Yeah, it's coming back. Any other implications of computer use? Is it a fun demo or is it like the future of Anthropic? - I'm very excited about it. I think that like there's a lot of sort of very repetitive work that like computer use will be great for.

I think I've seen some examples of people build like coding agents that then also like test the front end that they made. So I think it's very cool to like use computer use to be able to close the loop on a lot of things that right now just a terminal based agent can't do.

So I think that's very exciting. - It's kind of like end-to-end testing. - Exactly, yeah, yeah. The sort of end-to-end front end and web testing is something I'm very excited about. - Yeah, I've seen Amanda also talking, this would be Amanda Askell, the head of Claude Character.

She goes on a lunch break and it generates research ideas for her. Giving it a name like computer use is very practical. It's like you're supposed to do things, but maybe sometimes it's not about doing things, it's about thinking. And in the process of thinking, you're using the computer.

In some way that's, you know, solving SWE-Bench, like you should be allowed to use the internet or you should be allowed to use a computer to solve it and use your vision and use whatever. Like we're just sort of shackling it with all these restrictions just 'cause we wanna play nice for a benchmark, but really, you know, a full AI will be able to do all these things, to think.

- Yeah, we'll definitely be able to. - To reason. - To Google and search for things. - Yeah. - Yeah, pull down inspiration. - Can we just do a, before we wrap, a robotics corner? - Oh, yeah, yeah, yeah. - People are always curious, especially with somebody that is not trying to hype their own company.

What's the state of AI robotics, under hyped, over hyped? - Yeah, and I'll say like these are my opinions, not Anthropic's. And again, coming from a place of a burned out robotics founder. So take everything with a grain of salt. I would say on the positives, like there is really sort of incredible progress that's happened in the last five years that I think will be a big unlock for robotics.

The first is just general purpose language models. I mean, there was an old saying in robotics that if fully describing your task is harder than just doing the task, you can never automate it. 'Cause like, it's gonna take more effort to even tell the robot how to do this thing than for me to just do it myself.

LLMs solved that. I no longer need to go exhaustively program in every little thing I could do. The thing just has common sense and it's gonna know, how do I make a Reuben sandwich? I'm not gonna have to go program that in. Whereas before, like the idea of even a cooking thing, it's like, oh God, we're gonna have a team of engineers hard coding recipes for the long tail of anything, it'd be a disaster.

So I think that's one thing: bringing common sense really solves this huge problem of describing tasks. The second big innovation has been diffusion models for path planning. A lot of this work came out of Toyota Research. There's a lot of startups now that are working on this, like Physical Intelligence (Pi), Chelsea Finn's startup out of Stanford.

And the basic idea here is using a little bit of the, I'd say maybe more inspiration from diffusion rather than diffusion models themselves, but they're a way to basically learn an end to end sort of motion control. Whereas previously all of robotics motion control was sort of very hard coded.

You either, you're programming in explicit motions or you're programming in an explicit goal and using an optimization library to find the shortest path to it. This is now something where you just give it a bunch of demonstrations. And again, just like using learning, it's basically like learning from these examples.

What does it mean to go pick up a cup? And doing these in a way, just like diffusion models, where they're somewhat conditioned by text, you can have the same model learn many different tasks. And then the hope is that these start to generalize: that if you've trained it on picking up coffee cups and picking up books, then when I say pick up the backpack, it knows how to do that too, even though you've never trained it on that.

That's kind of the holy grail here: you train it on 500 different tasks and then that's enough to really get it to generalize to do anything you would need. I think that's still a big TBD, and these people are working on it and have measured some degree of generalization.

But at the end of the day, it's also like LLMs. Like, you know, do you really care about the thing being able to do something that no one has ever shown it in training data? For like a home robot, there's gonna be like a hundred things that people really want it to do.

And you can just make sure it has good training for those things. What you do care about then is generalization within a task: oh, I've never seen this particular coffee mug before, can I still pick it up? And those, the models do seem very good at. So those are kind of the two big things going for robotics right now: LLMs for common sense and diffusion-inspired path planning algorithms.
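
As a toy illustration of the learning-from-demonstrations idea (text-conditioned behavior cloning with a plain MLP, not the actual diffusion-policy training these labs use), here's roughly what the setup looks like in PyTorch; all dimensions and data here are made up:

```python
import torch
import torch.nn as nn

class TextConditionedPolicy(nn.Module):
    """Toy policy: concatenate the robot observation with a task-text embedding
    and predict the next action. Real systems use diffusion-style action heads."""
    def __init__(self, obs_dim: int, text_dim: int, act_dim: int, hidden: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + text_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, act_dim),
        )

    def forward(self, obs: torch.Tensor, text_emb: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([obs, text_emb], dim=-1))

# Behavior cloning: regress the policy onto demonstrated actions across many tasks.
policy = TextConditionedPolicy(obs_dim=32, text_dim=64, act_dim=7)
optim = torch.optim.Adam(policy.parameters(), lr=3e-4)

# Fake demonstration batch: (observation, task-text embedding, expert action).
obs = torch.randn(128, 32)
text_emb = torch.randn(128, 64)        # e.g. an embedding of "pick up the coffee cup"
expert_action = torch.randn(128, 7)

for _ in range(100):
    pred = policy(obs, text_emb)
    loss = nn.functional.mse_loss(pred, expert_action)  # imitate the demonstrations
    optim.zero_grad()
    loss.backward()
    optim.step()
```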

I think this is very promising, but I think there's a lot of hype. And I think where we are right now is where self-driving cars were 10 years ago. I think we have very cool demos that work. I mean, 10 years ago, you had videos of people driving a car on the highway, driving a car on a street with a safety driver, but it's really taken a long time to go from there to, I took a Waymo here today.

And even then Waymo is only in SF and a few other cities. And I think like it takes a long time for these things to actually like get everywhere and to get all the edge cases covered. I think that for robotics, the limiting factor is gonna be reliability. That these models are really good at doing these demos of like doing laundry or doing dishes.

If they only work 99% of the time, like that sounds good, but that's actually really annoying. Like humans are really good at these tasks. Like imagine if one out of every 100 dishes it washed, it breaks. Like you would not want that robot in your house, or you certainly wouldn't want that in your factory if one of every 100 boxes that it moves, it drops and breaks the things inside it.

So I think for these things to really be useful, they're gonna have to hit a very, very high level of reliability, just like self-driving cars. And I don't know how hard it's gonna be for these models to move from like the 95% reliability to 99.9. I think that's gonna be the big thing.

And I think also I'm a little skeptical of how good the unit economics of these things will be. These robots are gonna be very expensive to build. And if you're just trying to replace labor, like a one-for-one purchase, it kind of sets an upper cap on how much you can charge.

And so, it seems like it's not that great a business. I'm also worried about that for the self-driving car industry. - Do you see most of the applications actually replacing some of the older machinery, especially manufacturing machinery, which needs to be very precise, where even if it's off by just a few millimeters, it cannot screw up the whole thing, and it has to be able to adjust at the edge?

Or do you think the net new use cases may be the more interesting ones? - I think it'd be very hard to replace a lot of those traditional manufacturing robots because everything relies on that precision. If you have a model that can, again, only get there 99% of the time, you don't want 1% of your cars to have the weld in the wrong spot.

Like, that's gonna be a disaster. And a lot of manufacturing is all about getting rid of as much sort of variance and uncertainty as possible. - And what about the hardware? A lot of my friends that work in robotics, one of their big issues is, like, sometimes you just have a servo that fails and then you gotta go fix it, and it takes a bunch of time to do that.

Is that holding back things, or is it still the software? - I think both. I think there's been a lot more progress in the software in the last few years. And I think a lot of the humanoid robot companies now are really trying to build amazing hardware.

Hardware is just so hard. It's something where- - Classic. - You know, you build your first robot and it works, you're like, great. Then you build 10 of them, five of them work, three of them work half the time, two of them don't work, and you built them all the same and you don't know why.

And it's just like the real world has this level of detail and differences that software doesn't have. Like imagine if, for every for loop you wrote, some of them just didn't work. Some of them were slower than others. Like how do you deal with that? Like imagine if, in every binary that you shipped to a customer, each of those for loops behaved a little bit differently.

It becomes just so hard to scale and sort of maintain quality of these things. And I think that's like, that's what makes hardware really hard is not building one of something, but repeatedly building something and making it work reliably. Where again, like you'll buy a batch of a hundred motors and each of those motors will behave a little bit differently to the same input command.

- This is your lived experience at Cobalt. - And robotics is all about how do you build something that's robust despite these differences? - We can't get the tolerance of motors down to- - It's just everything. You know, you'll have- (laughing) - It's actually everything. No, I mean, one of- - One of my horror stories was that at Cobalt, this was many years ago, we had a thermal camera on the robot that had a USB connection to the computer inside, which, first of all, is a big mistake.

You're not supposed to use USB. It is not a reliable protocol. It's designed so that if there's a mistake, the user can just unplug it and plug it back in. - I see. - And so typically, things that are USB are not designed to the same level of very high reliability that you need.

Again, because they assume someone will just unplug it and replug it back in. - You just say someone sometime. - I heard this too and I didn't listen to it. I really wish I had before. Anyway, at a certain point, a bunch of these thermal cameras started failing and we couldn't figure out why.

And I asked everyone on the team, like, "Hey, what's changed? Did the software change around this node? Did the hardware design change around this node?" And I was investigating all this stuff, looking at kernel logs of what's happening with this thing. And finally, the procurement person was like, "Oh yeah, well, I found this new vendor for USB cables last summer." And I'm like, "What?

You switched which vendor we're buying USB cables from?" And they're like, "Yeah, it's the same exact cable. It's just a dollar cheaper." And it turns out this was the problem. This new cable had slightly worse resistance or slightly worse EMI interference. And it worked most of the time, but 1% of the time these cameras would fail and we'd need to reboot a big part of the system.

And it was all just 'cause, same exact spec, these two different USB cables were slightly different. And so these are the kind of things you deal with with hardware. - For listeners, we had an episode with Josh Albrecht of Imbue, where they talked about buying tens of thousands of GPUs and some of them will just not do math.

- Yeah, yeah, it's the same thing. - You run some tests to find the bad batch and then you return it to sender 'cause they just, GPUs won't do math, right? - Yeah, yeah, this is the thing. Just the real world has this level of detail. There's Eric Jang, he did AI at Google.

- Yeah, 1X. - Yeah, and then joined 1X. I see him post on Twitter occasionally with complaints about hardware and supply chain. And we know each other, and we joke occasionally that we've switched: I went from robotics into AI and he went from AI into robotics, and yeah. - Look, very, very promising.

The TAM of the real world is unlimited, right? But it's just also a lot harder. And yeah, I do think, something I also tell people about why I'm working on software agents is they're infinitely clonable. And they always work the same way, mostly, unless you're using Python. And yeah, I mean, this is like the whole thesis.

I'm also interested in, you dropped a little bit of alpha there. I wanna make sure we don't lose it. Like you're just kind of skeptical about self-driving as a business. So I wanna double click on this a little bit, because, I mean, I think that shouldn't be taken away.

We do have some public Waymo numbers. Waymo is pretty public with their stats. They're exceeding 100,000 Waymo trips a week. If you assume like a $25 ride average, that's a $130 million revenue run rate. At some point they will recoup their investment, right? Like what are we talking about here?
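
For reference, the arithmetic behind that run rate, assuming the roughly 100,000 paid trips per week Waymo has reported and a $25 average fare (both rough numbers):

```python
trips_per_week = 100_000   # Waymo's publicly reported paid-trips figure (approximate)
avg_fare = 25              # assumed average ride price in dollars
run_rate = trips_per_week * avg_fare * 52
print(f"${run_rate:,.0f} annual revenue run rate")  # ~$130,000,000
```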

Like why this skepticism? - I think, and again, I'm not an expert. I don't know their financials. I would say the thing I'm worried about is like compared to an Uber, like I don't know how much an Uber driver takes home a year, but like call that the revenue that a Waymo is gonna be making in that same year.

Those cars are expensive. It's not about if you can hit profitability, it's about your cash conversion cycles. Like is building one Waymo, like how cheap can you make that compared to like how much you're earning sort of as the equivalent of what an Uber driver would take home? 'Cause remember, an Uber driver, you're not getting that whole revenue.

You think about, for the Uber driver, the cost of the car, the depreciation of the car. I'm not convinced how much profit Waymo can actually make per car. That's, I think, my skepticism. - Well, they need to depreciate the cars to run Waymo, because the car is like 100, 110 grand, something like that.

- Yes, exactly. - Plus the LiDAR. - That's many years of, yeah, yeah, yeah, exactly, exactly. - Anything else? Parting thoughts? Call to action? Rants? The floor is yours. - I'm very excited to see a lot more LLM agents out there in the world doing things. And I think the biggest limiting thing will start to become: do people trust the output of these agents?

And like, how do you trust the output of an agent that did five hours of work for you and is coming back with something? And if you can't find some way to trust that agent's work, it kind of wasn't valuable at all. So I think that's gonna be a really important thing is not just doing the work, but doing the work in a trustable, auditable way where you can also explain to the human, hey, here's exactly how this works and why, and how I came to it.

I think that's gonna be really important. - Thank you so much. - Thank you. - Yeah, thanks. This was great. (upbeat music) (upbeat music) (upbeat music) (upbeat music) you