Best of 2024 in Agents (from #1 on SWE-Bench Full, Prof. Graham Neubig of OpenHands/AllHands)

Chapters
0:00 Welcome to Latent Space Live at NeurIPS 2024
0:29 State of LLM Agents in 2024
2:20 Professor Graham Neubig's Insights on Agents
3:57 Live Demo: Coding Agents in Action
8:20 Designing Effective Agents
14:13 Choosing the Right Language Model for Agents
16:24 Planning and Workflow for Agents
22:21 Evaluation and Future Predictions for Agents
25:31 Future of Agent Development
25:56 Human-Agent Interaction Challenges
26:48 Expanding Agent Use Beyond Programming
27:25 Redesigning Systems for Agent Efficiency
28:03 Accelerating Progress with Agent Technology
28:28 Call to Action for Open Source Contributions
30:36 Q&A: Agent Performance and Benchmarks
33:23 Q&A: Web Agents and Interaction Methods
37:16 Q&A: Agent Architectures and Improvements
43:09 Q&A: Self-Improving Agents and Authentication
47:31 Live Demonstration and Closing Remarks
So I was given the task of talking about agents in 2024 00:00:16.780 |
because there are so many agents, so many agents in 2024. 00:00:25.160 |
and what I think is interesting and important, 00:00:30.320 |
So the first thing I'd like to think about is, 00:01:05.560 |
So I think this is a pretty powerful tool set 00:01:12.360 |
and what I think some other people are trying to do 00:01:14.560 |
is come up with agents that are able to, you know, 00:01:18.240 |
web browsing, coding, running code in successful ways. 00:01:25.360 |
I'm a professor at CMU, chief scientist at All Hands AI, 00:01:32.560 |
which is an open source coding agent framework. 00:01:38.480 |
and I like doing lots of coding and, you know, 00:01:45.480 |
So building agents that help me to do this, you know, 00:01:48.180 |
is kind of an interesting thing, very close to me. 00:01:58.160 |
If anybody has, you know, tried to give a live demo, 00:02:01.460 |
you know, this is very, very scary whenever you do it 00:02:10.520 |
that I typically do with coding agents in my everyday work. 00:02:15.080 |
I use coding agents maybe five to 10 times a day 00:02:28.760 |
that show the increase of the SWE bench score over time. 00:02:32.100 |
And so I wrote a kind of concrete prompt about this. 00:02:36.480 |
Agents work better with like somewhat concrete prompts. 00:02:39.760 |
And I'm gonna throw this into OpenHands and let it work. 00:02:52.320 |
Another thing that I do is I create new software. 00:03:06.440 |
for sending emails and I'm not very happy with it. 00:03:11.380 |
called resend.com, which makes it easier to send emails. 00:03:17.760 |
for the resend.com API and come up with a script 00:03:42.600 |
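(The kind of script it comes back with looks roughly like the sketch below. This is a minimal illustration against Resend's HTTP API; the API key and both addresses are placeholders, and the exact script the agent writes will differ.)

    import os
    import requests

    # Minimal sketch: send one email through Resend's HTTP API.
    # RESEND_API_KEY and both addresses are placeholders.
    def send_email(subject: str, html: str) -> None:
        resp = requests.post(
            "https://api.resend.com/emails",
            headers={"Authorization": f"Bearer {os.environ['RESEND_API_KEY']}"},
            json={
                "from": "me@example.com",    # placeholder sender
                "to": ["you@example.com"],   # placeholder recipient
                "subject": subject,
                "html": html,
            },
            timeout=30,
        )
        resp.raise_for_status()

    if __name__ == "__main__":
        send_email("Hello", "<p>Sent via the Resend API.</p>")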
The last one I do is improving existing software. 00:03:46.820 |
And in order, you know, once you write software, 00:03:51.760 |
You go in and like actually improve it iteratively. 00:03:55.240 |
This software that I have is something I created 00:04:09.040 |
And on the, let me make that a little bit bigger. 00:04:15.440 |
On the left side, I have the number of issues 00:04:20.260 |
I have the number of issues where it like sent 00:04:27.580 |
a pull request, whether it was merged in purple, 00:04:33.420 |
And so these are like, you know, it's helping us monitor. 00:04:38.380 |
But one thing it doesn't tell me is the total number. 00:04:40.700 |
And I kind of want that feature added to this software. 00:04:51.080 |
And here I want to open up specifically that GitHub repo. 00:05:03.120 |
So I'll open up that repo and paste in the prompt asking it, 00:05:09.600 |
I asked it to make a pie chart for each of these 00:05:11.760 |
and give me the total over the entire time period 00:05:17.540 |
And so now I have, let's see, I have some agents. 00:05:29.460 |
You can see it finished analyzing the SWE-Bench repository. 00:05:42.340 |
It wrote a demonstration of how much each of the systems 00:05:57.220 |
And so it labeled OpenHands as being the best one 00:06:03.120 |
it has like the Amazon Q agent and OpenHands. 00:06:06.360 |
For the SWE-Bench Lite, it has three over here. 00:06:11.360 |
So you can see like, that's pretty useful, right? 00:06:15.840 |
If you're a researcher, you do data analysis all the time. 00:06:41.600 |
The next thing I'd like to talk about a little bit 00:06:46.320 |
is things I worry about when designing agents. 00:06:50.560 |
do a very difficult task of like navigating websites, 00:06:57.160 |
And within 2024, there's been like a huge improvement 00:07:04.480 |
But there's a bunch of things we think about. 00:07:14.920 |
Like how do we get an agent to interact with computers? 00:07:18.200 |
And how do we provide agents with the tools to do the job? 00:07:23.200 |
And within OpenHands, we are doing the thing on the right, 00:07:33.400 |
So the thing on the left is you give like agents 00:07:43.320 |
I want to determine the most cost-effective country 00:08:06.280 |
you'd have to make about like 30 tool calls, right? 00:08:12.320 |
You'd have to look it up for the US, Japan, and India. 00:08:20.720 |
And the method that we adopt in OpenHands instead 00:08:26.120 |
but we provide them by just giving a coding agent 00:08:32.280 |
And in the arbitrary Python code, it can call these tools. 00:08:36.560 |
We expose these tools as APIs that the model can call. 00:08:40.880 |
is instead of writing 20 tool calls, making 20 LLM calls, 00:08:45.160 |
you write a program that runs all of these all at once, 00:09:01.200 |
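(To make the contrast concrete, here is a toy sketch, not the actual OpenHands tool set: with per-tool calls, each lookup costs an LLM round trip; with code as the action space, the agent emits one program that does them all.)

    # Hypothetical tool exposed to the agent as a plain Python function;
    # the table is illustrative stand-in data, not real figures.
    GDP_PER_CAPITA = {"US": 80000.0, "Japan": 34000.0, "India": 2500.0}

    def lookup_gdp_per_capita(country: str) -> float:
        return GDP_PER_CAPITA[country]

    # Tool-calling style would spend one LLM call per country.
    # Code-as-action style: one generated program covers them all at once.
    countries = ["US", "Japan", "India"]  # ...and as many more as needed
    gdp = {c: lookup_gdp_per_capita(c) for c in countries}
    cheapest = min(gdp, key=gdp.get)  # lowest GDP per capita as a crude cost proxy
    print(cheapest, gdp[cheapest])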
Another part of this is what tools does the agent need? 00:09:41.560 |
And then we have another global search and replace tool. 00:10:03.600 |
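(A global search-and-replace tool of that sort can be tiny; this is a toy sketch, not the actual OpenHands editor.)

    from pathlib import Path

    def search_and_replace(path: str, old: str, new: str) -> str:
        """Toy edit tool: replace every occurrence of `old` in one file."""
        p = Path(path)
        text = p.read_text()
        count = text.count(old)
        if count == 0:
            return f"ERROR: {old!r} not found in {path}"
        p.write_text(text.replace(old, new))
        return f"Replaced {count} occurrence(s) of {old!r} in {path}"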
what if we want to allow it to do something else? 00:10:09.560 |
human programmers already have a bunch of things 00:10:34.360 |
The agents are super good at using the GitHub API also. 00:10:40.320 |
like finding all of the comments on your issues 00:10:58.400 |
where you can browse through the agents results 00:11:04.400 |
I don't think anybody has a good answer to this. 00:11:07.080 |
And I don't think we have a good answer to this, 00:11:08.800 |
but the guiding principles that I'm trying to follow 00:11:13.160 |
are: we want to present enough info to the user. 00:11:21.920 |
in the form of a kind of English description. 00:11:27.720 |
you can see here, every time it takes an action, 00:11:32.280 |
it says like, I will help you create a script 00:11:46.360 |
It won't actually show you the whole bash command 00:11:59.160 |
and see what's displayed in the Jupyter Notebook. 00:12:01.400 |
And you get like lots and lots of information. 00:12:17.760 |
then I'd like to, you know, integrate into that setting, 00:12:22.520 |
So at OpenHands, we have a chat UI for interaction. 00:12:26.320 |
We have a GitHub plugin for tagging and resolving issues. 00:12:29.280 |
So basically what you do is you do @OpenHandsAgent 00:12:33.360 |
and the OpenHandsAgent will like see that comment 00:12:41.000 |
tests are failing on this PR, please fix the tests. 00:12:52.480 |
So if you want to launch like a fleet of agents 00:12:54.600 |
to solve, you know, five different problems at once, 00:13:10.800 |
you'll want to do things other ways, obviously. 00:13:19.760 |
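(Mechanically, fanning out can be as simple as this sketch; run_agent is a hypothetical wrapper around whatever headless entry point your agent exposes, and the issue numbers are made up.)

    import subprocess
    from concurrent.futures import ThreadPoolExecutor

    def run_agent(task: str) -> int:
        # Hypothetical headless invocation; substitute your agent's real CLI.
        return subprocess.run(["my-agent", "--task", task]).returncode

    tasks = [f"Fix issue #{n}" for n in (101, 102, 103, 104, 105)]
    with ThreadPoolExecutor(max_workers=5) as pool:
        results = list(pool.map(run_agent, tasks))
    print(results)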
And for agentic LMs, we have to have a bunch of things 00:13:30.480 |
And if you have really good instruction following ability, 00:13:33.160 |
it opens up like a ton of possible applications for you. 00:13:44.880 |
So it needs, like if you're building a web agent, 00:13:57.200 |
So if it makes a mistake, it needs to be able to, 00:14:04.800 |
Under the hood, in all of the demos that I did now, 00:14:20.440 |
Most others don't have these abilities quite as much. 00:14:29.240 |
And so because of this, it will go into loops 00:14:31.200 |
and do the same thing over and over and over again, 00:14:38.400 |
you get used to their kind of like personality 00:14:40.800 |
and Claude says, hmm, let me try a different approach a lot. 00:14:44.680 |
So, you know, obviously it's been trained in some way 00:14:52.800 |
This is old and we need to update this basically, 00:15:05.280 |
on being a good code agent within our framework. 00:15:07.880 |
And Claude was kind of head and shoulders above the rest. 00:15:12.960 |
The best open source model was Llama 3.1 405B. 00:15:19.520 |
and, you know, things are moving really, really fast, 00:15:21.800 |
but I still am under the impression that Claude is the best. 00:15:24.920 |
The other closed models are, you know, not quite as good. 00:15:27.560 |
And then the open models are a little bit behind that. 00:15:43.280 |
And so there's a few considerations for planning. 00:15:47.440 |
The first one is whether you have a curated plan 00:16:09.400 |
After that, run the tests and make sure they fail. 00:16:31.920 |
Another one is explicit structure versus implicit structure. 00:16:44.520 |
And the multi-agent system would have your reproducer agent 00:16:48.480 |
and then it would have your test writer agent 00:16:53.480 |
and your bug fixer agent and lots of different agents. 00:16:57.480 |
And you would explicitly write this all out in code 00:17:02.520 |
On the other hand, you could just provide a prompt 00:17:04.640 |
that says, please do all of these things in order. 00:17:17.920 |
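(In the implicit version, that ordered prompt is the entire "structure"; a sketch of what it might look like, with made-up wording.)

    # Implicit structure: one agent, one prompt with the plan spelled out,
    # instead of reproducer/test-writer/bug-fixer agents wired up in code.
    PLAN_PROMPT = """Please do all of these things in order:
    1. Reproduce the bug described in the issue.
    2. Write a test that fails because of the bug.
    3. Run the tests and make sure they fail.
    4. Fix the bug.
    5. Run the tests again and make sure they pass.
    """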
but we do provide like instructions about like 00:17:26.480 |
but I laid out some kind of justification for this 00:17:30.560 |
in this blog called Don't Sleep on Single Agent Systems. 00:17:35.600 |
if you have a really, really good instruction 00:17:37.480 |
following agent, it will follow the instructions 00:17:40.800 |
as long as things are working according to your plan. 00:17:43.480 |
But let's say you need to deviate from your plan, 00:17:51.880 |
Like you get stuck when things deviate from your plan. 00:18:02.200 |
So one paper I liked recently is this paper called CoAct 00:18:05.360 |
where you generate plans and then go in and fix them. 00:18:13.560 |
you can figure out that your plan was not working 00:18:25.400 |
So we're trying to tackle software development 00:18:45.320 |
we fix GitHub actions when GitHub actions are failing 00:18:51.640 |
That's not the number one thing that software engineers do, 00:18:56.200 |
So how can we get a list of all of like the workflows 00:19:12.760 |
where they came up with a bunch of manual workflows 00:19:32.280 |
that has an example of lots of the previous workflows 00:19:39.920 |
and it self-judges that it did a good job at that task, 00:19:44.120 |
you break it down into individual workflows included in that 00:19:51.140 |
And we demonstrated that this leads to a 22.5% increase 00:20:02.540 |
by kind of self-learning and self-improvement. 00:20:17.140 |
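(Roughly, the loop looks like this; a sketch of the idea, not the paper's actual code, and all three helpers are hypothetical stubs.)

    # Agent workflow memory, sketched: keep workflows distilled from
    # self-judged successes and prepend them to future prompts.
    workflow_memory: list[str] = []

    def run_agent(prompt: str) -> str:
        return "..."   # stand-in: run the agent, return its trajectory

    def self_judge(trajectory: str) -> bool:
        return False   # stand-in: ask the LLM whether it did a good job

    def extract_workflows(trajectory: str) -> list[str]:
        return []      # stand-in: break the trajectory into reusable workflows

    def solve(task: str) -> str:
        prompt = "\n".join(workflow_memory) + "\n\nTask: " + task
        trajectory = run_agent(prompt)
        if self_judge(trajectory):
            # Successful experience becomes part of the prompt for future tasks.
            workflow_memory.extend(extract_workflows(trajectory))
        return trajectory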
how can agents learn more about their environment 00:20:24.360 |
and there's a few good examples of this in both areas. 00:20:28.520 |
Within coding, I view this as like repository understanding, 00:20:33.320 |
understanding the code base that you're dealing with. 00:20:41.500 |
where they basically create a map of the repo 00:20:57.300 |
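(A crude repo map can be built with the standard library alone; a toy sketch, not what the paper actually does.)

    import ast
    from pathlib import Path

    def repo_map(root: str) -> str:
        """Toy repo map: each Python file plus its top-level defs/classes."""
        lines = []
        for path in sorted(Path(root).rglob("*.py")):
            try:
                tree = ast.parse(path.read_text())
            except (SyntaxError, UnicodeDecodeError):
                continue
            lines.append(str(path))
            for node in tree.body:
                if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef, ast.ClassDef)):
                    lines.append(f"    {node.name}")
        return "\n".join(lines)

    print(repo_map("."))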
And basically what they do is they have the agent 00:21:04.680 |
better understand the structure of the website. 00:21:19.300 |
we just let the agent go on a linear search path. 00:21:24.300 |
We're using a good agent that can kind of like 00:21:26.460 |
recover from errors and try alternative things 00:21:40.980 |
there's a paper called Tree Search for Language Model Agents, 00:21:49.320 |
and if they aren't going well, you rewind back. 00:22:03.480 |
'cause you can just revert any changes that you made. 00:22:23.960 |
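(With git as the undo mechanism, that rewind is one command; a sketch, where apply_action and tests_pass are hypothetical hooks into your agent.)

    import subprocess

    def checkpoint() -> str:
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "--allow-empty", "-m", "agent checkpoint"], check=True)
        out = subprocess.run(["git", "rev-parse", "HEAD"],
                             capture_output=True, text=True, check=True)
        return out.stdout.strip()

    def rewind(commit: str) -> None:
        subprocess.run(["git", "reset", "--hard", commit], check=True)

    def try_actions(actions, apply_action, tests_pass) -> bool:
        for action in actions:
            base = checkpoint()
            apply_action(action)
            if tests_pass():
                return True       # this branch of the search worked; keep it
            rewind(base)          # didn't pan out; rewind and try the next
        return False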
we want things we can run really fast, really cheaply. 00:22:27.000 |
So for web, we have something called MiniWoB (Mini World of Bits), 00:22:36.480 |
We have something called the Aider code editing benchmark, 00:22:38.760 |
where it's just about editing individual files that we use. 00:22:42.560 |
But we also want highly realistic evaluation. 00:22:47.320 |
So for the web, we have something called WebArena 00:22:50.880 |
This is web navigation on real open source websites. 00:23:00.440 |
or like bulletin boards or other things like this. 00:23:07.760 |
which I think a lot of people may have heard of. 00:23:12.440 |
that comes from real world pull requests on GitHub. 00:23:15.920 |
you can also probably solve other real world pull requests. 00:23:29.200 |
that test whether agents can code and do web navigation, 00:23:34.080 |
and hoping to release something in the next week or two. 00:23:36.720 |
So if that sounds interesting to you, come talk to me 00:23:46.880 |
but I was told that I should be somewhat controversial, 00:23:54.560 |
although maybe none of these will be very controversial. 00:24:08.120 |
will be focusing on training models as agents. 00:24:10.280 |
So every large language model will be a better agent model 00:24:16.040 |
Competition will increase, prices will go down, 00:24:21.200 |
smaller models will become competitive as agents. 00:24:23.760 |
So right now, actually agents are somewhat expensive 00:24:27.080 |
but I expect that that won't last six months. 00:24:29.400 |
I bet we'll have much better agent models in six months. 00:24:38.600 |
Another thing is instruction following ability 00:24:41.160 |
specifically in agentic contexts will increase. 00:24:51.360 |
and be able to do more by just prompting agents 00:24:57.840 |
It's not perfect, but it's already really, really good. 00:25:20.240 |
So right now we have WebArena and SWE-Bench. 00:25:29.560 |
It's not super easy, but it's already a bit too easy 00:25:38.080 |
are ones that take like two minutes for a human. 00:25:48.200 |
So we built harder benchmarks like WebArena and SWE-Bench 00:25:55.960 |
So we built agents and now we're building better agents. 00:26:02.400 |
So we'll build better benchmarks, I'm guessing. 00:26:05.240 |
So I would expect to see much more challenging 00:26:24.040 |
I think one thing that we'll want to think about 00:26:33.600 |
Right now we have 53% or 55% on SWE-Bench Verified, 00:26:43.280 |
My impression is that the actual ability of models 00:26:51.800 |
So 30 to 40% of the things that I want an agent 00:26:55.960 |
it just solves without any human intervention. 00:26:59.520 |
80 to 90% it can solve without me opening an IDE, 00:27:13.240 |
that are really, really good, but not perfect 00:27:17.280 |
How can we expose the power of programming agents 00:27:26.880 |
are using agents every day in our programming, 00:27:29.560 |
although we probably will be in months or maybe a year, 00:27:34.560 |
but I think it will come very naturally to us as programmers 00:27:39.840 |
because we know code, we know how to architect software 00:27:47.080 |
So I think the question is how do we put this in the hands 00:27:56.160 |
and have them also be able to interact with it 00:27:59.760 |
Another interesting thing is how can we redesign 00:28:07.960 |
and basically what we showed is if you take a web agent 00:28:16.640 |
just because APIs are way easier to interact with. 00:28:26.120 |
but whenever I want it to interact with GitHub, 00:28:30.120 |
use the GitHub API because it's way more successful 00:28:33.360 |
So maybe every website is gonna need to have an API 00:28:36.760 |
because we're gonna be having agents interact with them. 00:28:39.560 |
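(The API route really is one HTTP call where browsing would be a dozen clicks; a sketch using the real GitHub REST endpoint, with a placeholder repo and a GITHUB_TOKEN environment variable assumed.)

    import os
    import requests

    # One API call replaces a whole click-path through the GitHub UI.
    resp = requests.get(
        "https://api.github.com/repos/octocat/hello-world/issues",  # placeholder repo
        headers={"Authorization": f"Bearer {os.environ['GITHUB_TOKEN']}",
                 "Accept": "application/vnd.github+json"},
        params={"state": "open", "per_page": 20},
        timeout=30,
    )
    resp.raise_for_status()
    for issue in resp.json():
        print(issue["number"], issue["title"])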
About progress, I think progress will get faster. 00:28:54.000 |
and better agents will build better agents faster. 00:29:01.320 |
with a coding agent yet, it's pretty magical, 00:29:17.600 |
on natural language processing and language models 00:29:25.480 |
what like AI agents powered by strong language models 00:29:29.480 |
On the other hand, I believe that we should really make 00:29:35.800 |
And what I mean by this is I don't think like, 00:29:53.760 |
If anything, I'd really like them to kind of make it possible 00:29:58.160 |
for people who weren't able to do things before 00:30:09.800 |
Make things cheap, make things so you can serve them 00:30:13.480 |
to people who aren't able to afford them easily. 00:30:16.480 |
Like Duolingo is one example where they get all the people 00:30:23.480 |
So that they can give all the people in South America 00:30:26.160 |
free language education so they can learn English 00:30:28.920 |
and become more attractive on the job market, for instance. 00:30:41.520 |
And if that resonates with you, please contribute. 00:30:43.840 |
Of course, I'd be happy if you contribute to OpenHands 00:30:52.600 |
research with them, and train strong open source models. 00:30:59.880 |
It'd be great if you could train models for coding agents 00:31:04.360 |
Yeah, please, I was thinking about you, among others. 00:31:12.880 |
- Slightly controversial thing is probably the nicest way 00:31:28.400 |
- Oh, I can also show the other agents that were working 00:31:32.520 |
if anybody's interested, but yeah, sorry, go ahead. 00:31:39.760 |
The first thing is that you said that you're estimating 00:31:50.040 |
but that's like below what you saw on SWE-Bench. 00:31:52.880 |
So I guess I'm wondering where that discrepancy 00:32:01.960 |
as like a junior developer, and I say, go do something, 00:32:05.800 |
then I expect maybe tomorrow to get a Slack message 00:32:12.280 |
And like you said, your agent is like successfully solving 00:32:16.640 |
like 90% of issues where you give it direct feedback. 00:32:19.240 |
So are you thinking about how to get the agent 00:32:21.400 |
to reach out to like, for planning when it's stuck 00:32:26.760 |
For like identify when it runs into a hole like that? 00:32:49.080 |
So first, SWE-Bench is on popular open source repos 00:33:01.040 |
And so the language models already know these repos. 00:33:04.760 |
In some cases, the language models already know 00:33:08.680 |
So basically like some of the training data has leaked. 00:33:15.760 |
I don't think it's like horribly, horribly off, 00:33:18.880 |
but I think it's boosting the accuracy by a little bit. 00:33:29.480 |
and whether we're benchmarking asking for help, 00:33:39.720 |
is we basically made super vague SWE-Bench issues. 00:33:43.360 |
Like I'm having a problem with the matrix multiply, 00:33:50.280 |
if anybody's run a popular open source like framework, 00:33:59.680 |
my screen doesn't work, what's wrong or something. 00:34:12.640 |
are not very good at asking for help, even Claude. 00:34:20.800 |
and then won't ask for help when they do need it. 00:34:22.600 |
So this is definitely like an issue, I think. 00:34:34.200 |
about how the web agent interacts with websites? 00:34:37.880 |
So is there a VLM that looks at the webpage layout 00:34:47.560 |
where there's like, so I work at Bing, Microsoft AI. 00:34:58.600 |
so that you don't need to spend time cleaning up, 00:35:22.440 |
The first way is the simplest way and the newest way, 00:35:27.600 |
which is you take a screenshot of the website 00:35:32.600 |
and then you click on a particular pixel value 00:35:37.320 |
And like models are not very good at that at the moment. 00:35:42.440 |
There was this thing about how like Claude computer use 00:35:47.960 |
of Yellowstone National Park or something like this. 00:35:50.400 |
I don't know if you heard about this anecdote, 00:35:52.680 |
but like people were like, oh, it's so human. 00:35:56.480 |
And it was like, no, it probably just misclicked 00:35:58.560 |
on the wrong pixels and accidentally clicked on an ad. 00:36:08.640 |
and you basically identify elements in the HTML. 00:36:14.840 |
And then you say, okay, I want to click on this element. 00:36:21.360 |
So it actually, it usually gets condensed down 00:36:34.400 |
but you also present like a textual summary of the output. 00:36:38.160 |
And that's the one that I think will probably work best. 00:36:42.320 |
What we're using is we're just using text at the moment. 00:36:46.400 |
that we haven't implemented the visual stuff yet, 00:36:49.240 |
but that's kind of like we're working on it now. 00:36:53.440 |
is we actually have two modalities for web browsing. 00:37:02.120 |
you will need to click on all of the elements 00:37:04.040 |
or have the ability to click on all of the elements. 00:37:05.920 |
But most of our work that we need websites for 00:37:08.280 |
is just web browsing and like gathering information. 00:37:19.080 |
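(To make the text modality concrete: the page gets boiled down to numbered elements the model can reference by ID instead of by pixel. A toy sketch of that condensation, not the exact observation format OpenHands uses.)

    from bs4 import BeautifulSoup  # pip install beautifulsoup4

    def condense(html: str) -> str:
        """Toy text observation: number the interactive elements so the
        model can act with e.g. click(1) instead of pixel coordinates."""
        soup = BeautifulSoup(html, "html.parser")
        lines = []
        for i, el in enumerate(soup.find_all(["a", "button", "input"])):
            label = el.get_text(strip=True) or el.get("placeholder", "")
            lines.append(f"[{i}] <{el.name}> {label!r}")
        return "\n".join(lines)

    print(condense("<a href='/docs'>Docs</a><button>Search</button>"))
    # [0] <a> 'Docs'
    # [1] <button> 'Search'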
And then can we create an index specifically for agents? 00:37:24.080 |
Maybe a markdown index or something like that would be, 00:37:28.200 |
Oh, how would I make a successor to SWE-Bench? 00:37:32.280 |
So, I mean, a first thing is there's like LiveCodeBench, 00:37:37.280 |
which LiveCodeBench is basically continuously updating 00:37:47.120 |
and those real websites are getting new issues all the time. 00:37:53.960 |
There's also like a pretty large number of things 00:38:02.040 |
So like, for example, SWE-Bench is mainly fixing issues, 00:38:10.800 |
that actually test the functionality that you want. 00:38:19.200 |
So I feel like SWE-Bench is one piece of the puzzle, 00:38:23.000 |
but you could also have like 10 different other tasks. 00:38:25.640 |
And then you could have like a composite benchmark 00:39:00.960 |
versus having one agent to do three or five things 00:39:04.400 |
with a gigantic prompt with conditional paths and so on? 00:39:09.600 |
So we have a basic coding and browsing agent. 00:39:13.280 |
And I won't say basic, like it's a good agent, 00:39:20.400 |
It has instructions about how to do coding and browsing. 00:39:34.200 |
and how to use different APIs and stuff like that. 00:39:37.520 |
We do have a mechanism for something called microagents. 00:39:42.920 |
that gets added to the prompt when a trigger is triggered. 00:39:48.160 |
It's like if you detect the word GitHub anywhere, 00:39:52.280 |
you get instructions about how to interact with GitHub, 00:39:56.840 |
Also, another one that I just added is for NPM, 00:40:07.840 |
it like hits an interactive terminal where it says, 00:40:26.120 |
about how to not use the interactive terminal 00:40:37.160 |
where you would want something more complex than that. 00:40:45.200 |
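(Mechanically, a microagent can be as little as a trigger word mapped to a prompt snippet; a toy sketch with made-up snippet text.)

    # Toy microagent mechanism: trigger word -> extra instructions.
    MICROAGENTS = {
        "github": "When interacting with GitHub, prefer the API over browsing.",
        "npm": "npm can open interactive prompts; pass non-interactive flags.",
    }

    def augment_prompt(base_prompt: str, user_message: str) -> str:
        extras = [snippet for trigger, snippet in MICROAGENTS.items()
                  if trigger in user_message.lower()]
        return "\n\n".join([base_prompt, *extras])

    print(augment_prompt("You are a coding agent.", "Publish this package to npm"))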
I feel like this is the Anthropic Model Context Protocol. 00:40:50.960 |
It seems like the most successful type of this, 00:41:26.480 |
there's a few things that bug me a little bit about it. 00:41:29.240 |
It's like, we already have an API for GitHub. 00:41:43.240 |
So it seems like kind of duplicated a little bit. 00:41:46.880 |
And also they have a setting where it's like, 00:41:50.280 |
you have to spin up a server to serve your GitHub stuff 00:42:01.680 |
if you really care about like separation of concerns 00:42:04.160 |
and security and like other things like this. 00:42:11.680 |
we haven't seen that to have a lot more value 00:42:16.520 |
And that kind of goes into my general philosophy, 00:42:18.480 |
which is we're already developing things for programmers. 00:42:22.040 |
You know, how is an agent different from a programmer? 00:42:33.560 |
but they're not that different at this point. 00:42:35.440 |
So we can kind of interact with the interfaces 00:42:48.720 |
You were saying that the agents you have right now 00:42:52.560 |
solve like maybe 30% of your issues out of the gate. 00:42:57.360 |
I'm curious, of the things that it doesn't do, 00:43:24.880 |
it will sometimes fail to fix a GitHub workflow 00:43:35.320 |
because it will not look at the GitHub workflow 00:43:38.160 |
and understand what the GitHub workflow is doing 00:43:42.240 |
So I think actually probably the biggest thing 00:43:57.680 |
that it should do information gathering beforehand, 00:44:01.120 |
If you don't provide sufficient instructions, 00:44:04.640 |
without like fully understanding the task first 00:44:31.800 |
I thought the bug was in a different part of the code. 00:44:46.200 |
So like, I think a certain level of like scaffolding 00:44:59.160 |
then that's probably the biggest failure point at the moment. 00:45:08.280 |
- I'm just using this as a chance to ask you all my questions. 00:45:11.600 |
You had a slide on here about like self-improving agents 00:45:25.760 |
So I just wanted you to chain a thought more on this. 00:45:40.520 |
is to have a really, really strong language model 00:45:56.960 |
But the problem is a really powerful language model 00:46:10.320 |
RAG from language to code doesn't work super well. 00:46:16.240 |
that's the way I would like to solve this problem. 00:46:19.240 |
and somehow be able to index into it appropriately. 00:46:32.280 |
So that might be another way of continuously improving. 00:46:38.000 |
and then just add all of the good examples into your model. 00:46:49.840 |
it just builds up the skill library over time. 00:46:55.120 |
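(One simple shape for that skill library, with naive word overlap standing in for the language-to-code retrieval that doesn't yet work well; everything here is a toy sketch.)

    # Toy skill library: successful snippets stored under a description,
    # retrieved by word overlap with the new task.
    skills: dict[str, str] = {}

    def add_skill(description: str, code: str) -> None:
        skills[description] = code

    def retrieve(task: str, k: int = 3) -> list[str]:
        words = set(task.lower().split())
        ranked = sorted(skills.items(),
                        key=lambda kv: len(words & set(kv[0].lower().split())),
                        reverse=True)
        return [code for _, code in ranked[:k]]

    add_skill("send an email with the resend api", "# ...saved snippet...")
    print(retrieve("write a script to send an email"))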
and there's this idea from Devin, your arch nemesis, 00:47:02.760 |
- Yeah, I mean, we're calling them workflows, 00:47:09.520 |
you can kind of like persist them as a skill library. 00:47:12.520 |
- Right, like I feel like that's like some in between, 00:47:17.560 |
it's hard to do RAG between language and code, 00:47:31.440 |
It's just, you know, not trivial at the same time. 00:47:42.360 |
And this curve up here is agent workflow memory, 00:47:45.720 |
where it's like adding the successful experiences 00:48:01.280 |
So it's not like this is actually improving it. 00:48:08.240 |
And then this one is like improving like this. 00:48:10.960 |
Basically you can see it's continuing to go up, yeah. 00:48:17.320 |
the authentication problem for agents right now? 00:48:25.200 |
- Yeah, 'cause I've seen a few startup solutions today, 00:48:27.920 |
but it seems like it's limited to the amount of websites 00:48:36.320 |
So my preferred solution to this at the moment 00:48:41.040 |
is GitHub fine-grained authentication tokens. 00:48:44.680 |
And GitHub fine-grained authentication tokens 00:48:47.240 |
allow you to specify on a very granular basis. 00:48:53.120 |
On this repo, you have permission to do this. 00:48:55.400 |
On this repo, you have permission to do this. 00:48:57.640 |
You also can prevent people from pushing to the main branch 00:49:05.080 |
And I think these were all developed for human developers 00:49:19.880 |
like a little bit more is the way to do this. 00:49:22.640 |
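(With a token scoped that narrowly, a misbehaving agent simply can't touch anything else. A sketch of opening a pull request with one; the endpoint is the real GitHub REST route, while the repo, branches, and AGENT_TOKEN variable are placeholders.)

    import os
    import requests

    # AGENT_TOKEN: a fine-grained PAT with pull-request write access
    # on exactly one repo, and nothing else.
    resp = requests.post(
        "https://api.github.com/repos/octocat/hello-world/pulls",  # placeholder repo
        headers={"Authorization": f"Bearer {os.environ['AGENT_TOKEN']}",
                 "Accept": "application/vnd.github+json"},
        json={"title": "Agent fix",
              "head": "agent/fix-branch",   # placeholder branch names
              "base": "main"},
        timeout=30,
    )
    resp.raise_for_status()
    print(resp.json()["html_url"])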
For other things, they're totally not prepared 00:49:33.640 |
that we're gonna need to prepare the world for agents, 00:49:37.520 |
But I think like the GitHub authentication tokens 00:50:24.920 |
Sorry, that's taking way longer than it should. 00:50:35.800 |
I'll tell you later if this actually like successfully... 00:51:17.920 |
'cause you could just do that while I'm giving a talk.