OpenAI: ‘We Just Reached Human-level Reasoning’.
Chapters
0:00 Introduction
0:52 Human-level Problem Solvers?
3:22 Very Steep Progress + Huge Gap Coming
4:23 Scientists React
5:44 SciCode
6:55 Benchmarks Harder to Make + Mensa
7:30 Agents
8:36 For-profit and Funding Blocker
9:45 AGI Clause + Microsoft Definition
11:23 Gates Shift
12:43 NotebookLM Update + Assembly
14:11 Automating OpenAI
00:00:00.000 |
The CEO of OpenAI said just two days ago that AI models have reached the level of human reasoning, 00:00:08.000 |
human level problem solving. In a sea of hype though, claims like this have to be taken with 00:00:14.020 |
a gallon of salt naturally. But my question is this, is it not now easier to come up with 00:00:19.920 |
a reasoning challenge that the new O1 family of models passes and an educated adult would fail 00:00:26.400 |
than the other way around? Yes, of course, while O1 still makes plenty of embarrassing mistakes, 00:00:32.400 |
so do we. So is this not a watershed moment? Either way, I'm going to analyse four new key 00:00:39.200 |
quotes from Sam Altman, give you the backdrop of OpenAI's new $157 billion valuation and provide 00:00:47.480 |
as much context as I can for his key claim. The key claim that Sam Altman made at DevDay less 00:00:53.800 |
than 48 hours ago was that the new O1 series of models are human level problem solvers. They 00:01:00.780 |
don't output the first thing that comes to mind, so to speak, they reason their way through 00:01:06.320 |
challenging problems. This chart released in July, I've realised, is OpenAI's version of a levels of 00:01:13.280 |
AGI chart. For me, it sets a very high bar for what would count as an AGI, but it does mimic five things 00:01:20.780 |
that humans can do in increasing order of difficulty. We can chat, we can reason our way through problems, 00:01:27.260 |
we can take actions in the world, we can innovate, and we can organise together. Forget levels three and 00:01:32.540 |
above for a moment, just claiming we've reached level two is a bold enough announcement already. 00:01:37.500 |
Here is a 60 second extract from DevDay, and if you're wondering about the strange video setup, 00:01:42.300 |
it's actually a massive improvement from the super shaky footage that's circulating online. Essentially, 00:01:47.580 |
I used Adobe Warp Stabiliser, so it's a bit easier to watch. Altman here claims we are at level two. 00:02:00.380 |
You know, we used to, every time we finished the system, we would say, like, in one way, this is not an AGI. 00:02:05.500 |
And it used to be, like, very easy. You could, like, make a little robotic hand that does a Rubik's Cube, or a Dota bot, and it's like, oh, it does some things, but definitely not an AGI. 00:02:17.180 |
It's obviously harder to say no. So we're trying to, like, stop talking about AGI as this general thing, 00:02:24.540 |
I mean, we have this levels framework, because the word AGI has become so overloaded. So, like, real quickly, we used one for chatbots, two for reasoners, three for agents, four for innovators, five for organizations, like, roughly. 00:02:38.220 |
I think we clearly got to level two, or we clearly got to level two with O1, and it, you know, can do really quite impressive cognitive tasks. It's a very smart model. 00:02:50.220 |
It doesn't feel AGI-like in a few important ways, but I think if you just do the one next step of making it, you know, very agent-like, which is on level three, and which I think we will be able to do in the not distant future, it will feel surprisingly capable. 00:03:09.020 |
Being totally honest, the next two key quotes I might previously have dismissed as being pure hype, but after the O1 preview release, I'm less inclined to do so. 00:03:17.980 |
First is the commitment that the next two years are going to see very steep progress. 00:03:22.860 |
If you go from, like, O1 on a hard problem back to, like, 4 Turbo that we launched 11 months ago, you'll be like, wow, this is happening pretty fast. 00:03:29.900 |
Um, and I think the next year will be very steep progress. The next two years will be very steep progress. Farther than that, it's hard to say with a lot of certainty. 00:03:37.100 |
And next, and he didn't have to go this far, is the claim that this time next year there will be as big a gap from that model to O1 as from O1 to GPT-4 Turbo. 00:03:49.980 |
The model is going to get so much better so fast. Like, we are so early, this is like, you know, maybe it's the GPT-2 scale moment, but like, we know how to get to GPT-4. We have the fundamental stuff in place now to get to GPT-4. 00:04:02.860 |
And in addition to planning for us to build all of those things, plan for the model to just get, like, rapidly smarter. Like, you know, hope you all come back next year and plan for it to feel like way more of a year of improvement than from 4 Turbo. 00:04:19.740 |
I'll save the last key quote for later on in the video, but in case you're skeptical of his or even my analysis, a plethora of professors and scientists have described an incipient form of reasoning within O1. 00:04:34.620 |
One top researcher said, "In my field of quantum physics, it gives significantly more detailed and coherent responses." 00:04:41.420 |
Another molecular biologist said that O1 breaks the plateau that the public feared LLMs were heading into. 00:04:49.260 |
Then we have the creator of the graduate-level Google-proof Q&A benchmark test that's become famous. 00:04:55.580 |
It's targeted at PhD-level scholars who score an average of around 60%. 00:05:00.220 |
One of the authors of that benchmark said, "It seems plausible to me that the O1 family represents a significant and fundamental improvement in the model's core reasoning capabilities." 00:05:11.020 |
Just yesterday, a professor of mathematics described a moving frontier between what can and cannot be done with LLMs. 00:05:18.780 |
And that boundary, he said, has just shifted a little. 00:05:22.380 |
And here was his aha moment, using O1-mini and 43 seconds of thought. 00:05:29.100 |
It came up with an entirely new, clever and correct proof that he described as being more elegant than the human proof. 00:05:36.140 |
It must be said that most of this is anecdotal, but it at least shows that the claim isn't completely outrageous from Sam Altman. 00:05:43.740 |
Yes, there are plenty of benchmarks where O1 Preview is still not scoring top marks. 00:05:49.020 |
To give one example, here's a new one called SciCode, where O1 Preview scores just 7.7%. 00:05:55.340 |
A noticeable step up from other models, but what is this benchmark testing? 00:05:59.340 |
Well, it includes several research problems that are built upon or reproduce methods used in Nobel Prize winning studies, including things like the Haldane model for the anomalous quantum Hall effect. 00:06:11.660 |
To get questions right, you would need abundant high quality data not usually made available to current language models. 00:06:18.940 |
The language models have to generate code for solving real scientific research problems. 00:06:23.340 |
And it's not enough just to get these sub problems right. 00:06:27.100 |
The models have to compose solutions from those sub problems to get the 80 challenging main problems correct. 00:06:33.340 |
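To make that sub-problem structure concrete, here is a minimal, hypothetical sketch in Python of what "composing sub-problem solutions into a main solution" looks like; the damped-oscillator physics and every function name below are my own illustration under that assumption, not an actual SciCode problem.

```python
import numpy as np

# Hypothetical sketch of a SciCode-style task: each sub-problem is a small
# function, and the main problem only scores if they are composed correctly.
# (Illustrative only; the oscillator example and names are not from SciCode.)

def acceleration(x, v, omega=2.0, gamma=0.1):
    """Sub-problem 1: damped harmonic oscillator force law, a = -omega^2*x - 2*gamma*v."""
    return -omega**2 * x - 2.0 * gamma * v

def euler_step(x, v, dt):
    """Sub-problem 2: advance the state (x, v) by one explicit Euler step."""
    a = acceleration(x, v)
    return x + v * dt, v + a * dt

def simulate(x0=1.0, v0=0.0, dt=1e-3, steps=10_000):
    """Main problem: compose the sub-problem solutions into a full trajectory."""
    xs = np.empty(steps)
    x, v = x0, v0
    for i in range(steps):
        xs[i] = x
        x, v = euler_step(x, v, dt)
    return xs

if __name__ == "__main__":
    trajectory = simulate()
    print(f"Final displacement after 10 simulated seconds: {trajectory[-1]:.4f}")
```

Getting the individual functions right is not enough; an error in any one of them, or in how they are wired together, sinks the main answer, which is why the benchmark is so punishing.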
It's an amazing and useful benchmark, but seems more apposite for a level 4 model than a level 2 one. 00:06:39.820 |
This is certainly not average human level problem solving. 00:06:45.100 |
Is it not now easier to come up with a reasoning challenge that O1 passes and an educated adult fails than to find one where it's the other way round? 00:06:54.380 |
I say this as the author of Simple Bench, where models still underperform humans, but it seems harder to create such a benchmark than one where the models come out on top. 00:07:04.140 |
As Sam Altman said of the Turing test, I don't want everyone just to ignore the fact that we have passed average human level reasoning. 00:07:11.340 |
Then we could throw in the fact that Mensa accepts the Law School Admission Test (LSAT) as an admission criterion. 00:07:18.540 |
Because O1 can crush the LSAT, it now qualifies for Mensa, 18 years early according to 2020 Metaculus predictions. 00:07:27.500 |
Do you remember what comes next in that levels of AGI chart? 00:07:33.100 |
Just a couple of days ago in the Financial Times, the Chief Product Officer at OpenAI claimed this: 00:07:38.940 |
"We want to make it possible to interact with AI in all of the ways that you interact with another human being." 00:07:44.940 |
These more agentic systems are going to become possible. 00:07:48.700 |
And it is why I think 2025 is going to be the year that agentic systems finally hit the mainstream. 00:07:56.140 |
I can see that taking a little bit longer though, because unless it had 99.99% accuracy, 00:08:03.260 |
I wouldn't trust an O1 agent with my credit card. 00:08:06.460 |
One ability that will be absolutely crucial if agents are to work, obviously, is self-correction. 00:08:12.620 |
As I described recently on AI Insiders on Patreon. 00:08:16.060 |
Speaking of which, a very quick shout out if anyone's in California, Sweden, India or Japan 00:08:21.420 |
for the regional networking that's going on on Discord. 00:08:24.620 |
What I will say though, is that if OpenAI can turn reasoning into agency, 00:08:30.220 |
I can see why they had a $157 billion valuation. 00:08:35.100 |
That's great timing given their imminent switch from being a capped profit entity to a for-profit entity. 00:08:41.500 |
As we learned today in The Information, it actually gets more serious than that. 00:08:45.420 |
OpenAI has proposed letting investors claw their money back within two years if it fails to convert to a for-profit. 00:08:54.780 |
Now for some of you, I might have buried the lead because just 18 hours ago, 00:08:59.420 |
we learned this of the fundraising of OpenAI. 00:09:02.860 |
Reuters reports that one of the conditions of joining that funding round was that the investors don't 00:09:08.860 |
also fund rival outfits. That includes Ilya Sutskever's Safe Superintelligence. 00:09:16.780 |
Now I don't think that will be a problem for the likes of xAI, funded by Elon Musk, 00:09:21.740 |
but Ilya Sutskever's Safe Superintelligence might face a bit more of a funding struggle. 00:09:26.700 |
According to the New York Times, revenues are expected to balloon to almost $12 billion next 00:09:31.740 |
year, though OpenAI are expected to lose $5 billion this year. Why? Well, 00:09:36.860 |
the costs related to running its services and other expenses like employee salaries and office rent. 00:09:43.180 |
Setting aside short-term revenue and costs, one key question for OpenAI's future 00:09:47.900 |
is whether this fifth clause from their charter still applies. 00:09:51.580 |
Their board will determine when they've achieved AGI. By AGI they mean a highly autonomous system 00:09:58.540 |
that outperforms humans at most economically valuable work. Such a system is excluded from 00:10:04.860 |
intellectual property licenses and other commercial terms with Microsoft. Those terms only apply to 00:10:10.780 |
pre-AGI technology. Now you don't need me to point out that that sets up an incentive to push the 00:10:17.020 |
definition of AGI as far away as possible. Is that clause what prompted these five generous 00:10:23.260 |
levels of AGI? Notice, if so, that we're drifting away from concepts of intelligence and reasoning to other 00:10:30.060 |
more nebulous attributes. Human level reasoning isn't AGI and even agents that take actions aren't AGI. 00:10:36.220 |
These systems have to do the work of entire organizations. You can let me know what you think 00:10:41.180 |
in the comments but many people would say a more reasonable definition of AGI would have arrived 00:10:47.020 |
well before this point. As a quick fun experiment I looked up Microsoft's definition of AGI and we got 00:10:54.140 |
far into sci-fi. AGI apparently may even take us beyond our planet by unlocking the doors to space 00:11:01.020 |
exploration. It could help us develop interstellar technology and identify and terraform potentially 00:11:07.420 |
habitable exoplanets. It may even shed light on the origins of life and the universe. If this ever becomes 00:11:15.100 |
the definition of AGI, Microsoft will be minting money to almost the end of time. Speaking of Microsoft, 00:11:21.900 |
I wanted to show you guys this and I think it's fair to say that potentially after seeing O1, 00:11:27.100 |
Bill Gates has somewhat shifted his timeline. I reported on an interview in Handelsblatt where Bill Gates 00:11:33.580 |
said he didn't expect GPT-5 to be much better than GPT-4. He was super impressed of course with GPT-4 but 00:11:40.060 |
just saw some limits to where scaling could take us. Contrast that with this clip from this week. 00:11:46.540 |
It's the first technology that has no limit. I mean when you invent a tractor or even a cell phone you 00:11:52.780 |
can kind of figure out how that's going to change life. Here, where the AI 00:11:59.580 |
is very intelligent and when you put it in robotic form it can do a lot of both blue collar and white 00:12:05.260 |
collar jobs. The fact that's happening over the next decade. The idea of do we really trust government 00:12:12.860 |
to adjust the tax policies and make sure that okay we're shortening the work week. So it's happening 00:12:18.940 |
very fast and it's unlimited. A lot of it is super good like you know inner city, personal tutors for 00:12:28.460 |
all the kids, great health care even in the poor countries. So the good stuff which maybe gets crowded 00:12:36.380 |
out by these fears that's so exciting. We're going to end the video with the final Sam Altman quote but 00:12:44.940 |
just before then I want to highlight yet again NotebookLM from Google. If you didn't catch it in 00:12:51.020 |
the last video it's an amazing new tool powered by Gemini 1.5 Pro. As most of you might already be aware 00:12:58.140 |
you can turn any PDF, audio file or now any YouTube URL into a podcast. And it's not just PDFs, documents, 00:13:06.700 |
YouTube videos or audio files that you can add either. I can imagine millions of students taking 00:13:12.060 |
class materials and their own handwritten notes, feeding them into NotebookLM, for free by the way, 00:13:18.300 |
and getting amazing podcasts out. And in case you missed it last time, it's literally as easy as 00:13:23.100 |
clicking Try NotebookLM, then New Notebook, and uploading one or multiple sources, now including YouTube URLs. 00:13:30.140 |
Then the option will spring up to generate the audio; click that, and it takes two or three minutes. 00:13:34.940 |
I've seen people take literally anything, including absolutely absurd sources, and turn them into 00:13:40.940 |
engaging podcasts. I will say that if you want the transcription that the podcast hosts use to be 00:13:47.340 |
extra accurate, do check out AssemblyAI's Universal-1. I'm super proud they're sponsoring this video 00:13:53.420 |
because their Universal-1 model has amazing speech-to-text accuracy. Yes, it does handle my rather 00:14:00.380 |
rough around the edges London accent. You can see some of the comparisons to other models down below 00:14:06.140 |
and the link will be in the description to check them out. Here though is the Sam Altman quote from DevDay 00:14:12.060 |
that I found somewhat intriguing. He speaks almost optimistically about a model one day 00:14:18.140 |
automating OpenAI itself. Even the Turing test, which I thought always was like this very 00:14:23.340 |
clear milestone, you know, there was this like fuzzy period, it kind of like went whooshing by and no one cared, 00:14:32.380 |
but I think the right framework is just this one exponential. That said, if we can make an AI system 00:14:41.180 |
that is, like, materially better than all of OpenAI at doing AI research, that does feel to me like 00:14:48.060 |
some sort of important discontinuity. It's probably still wrong to think about it that way it probably 00:14:52.860 |
still is the smooth exponential curve but that feels like a good milestone. We are almost certainly nowhere 00:15:00.620 |
close to that level now but there is one caveat I wanted to give. Sam Altman describes automating 00:15:06.140 |
OpenAI itself but in OpenAI's own preparedness framework that would be a critical threshold. They 00:15:12.700 |
say if the model is able to conduct AI research fully autonomously it could set off an intelligence explosion. 00:15:18.860 |
Moreover they say we will not deploy AI systems that pose a risk level of high or critical and we will 00:15:25.500 |
not even train critical ones given their level of risk. It almost seemed like Sam Altman was speculating 00:15:31.980 |
about a model that OpenAI itself promised to never train. As always though I'm curious what you 00:15:38.060 |
think so thank you so much for watching to the end and please do have a wonderful day.