AI Declarations and AGI Timelines – Looking More Optimistic?
Chapters
0:00 Intro
0:39 AGI Timelines
2:10 OpenAI Timeline
2:44 Dyson Sphere Prediction
4:06 GPT-5 Prediction
5:24 More Government Oversight
6:16 Regulation
7:16 Scaling
8:38 AGI Commitment
9:19 Safety
10:34 Paradigm Shift
11:22 Autonomous Agents
12:39 Global Opportunities
13:12 Representation Engineering
15:04 Conclusion
00:00:00.000 |
I'm going to show you a pretty wild range of new predictions from those creating and testing the next generation of AI models. 00:00:09.120 |
Not that we can know who's right, but more to show you how unknowable the rest of this decade is. 00:00:15.320 |
I'll also cover the AI Safety Summit happening as I speak a few miles away from where I'm recording, 00:00:21.960 |
with fascinating differences between the approach of different AGI labs. 00:00:27.020 |
Along the way, we'll glimpse the new ChatGPT update that I'm really excited about, 00:00:32.160 |
an executive order on FLOPs, and what happens when you activate representations of happiness in a model. 00:00:39.600 |
But first on timelines to AGI, that's the kind of artificial intelligence that can replicate human intelligence or go further. 00:00:48.180 |
Here is Shane Legg, co-founder of Google DeepMind and their chief AGI scientist. 00:00:53.560 |
He's going to reiterate a prediction he made, 00:01:26.820 |
He thinks the remaining problems with LLMs are solvable in that short timeframe. 00:01:35.500 |
At the moment, it looks to me like all the problems are likely solvable with a number of years of research. 00:01:43.240 |
I think what you'll see is the existing models maturing. 00:01:46.640 |
They'll be less delusional, much more factual. 00:01:49.540 |
They'll be more up to date on what's currently going on when they answer questions. 00:01:58.020 |
And this will just make them much more useful. 00:02:00.280 |
Of course, when he describes increasing multimodality, 00:02:03.900 |
he could well be describing Google's new Gemini model set to be released within the next two months. 00:02:11.320 |
Well, for the first time, I heard Sam Altman put an actual date to his predictions of AGI. 00:02:26.160 |
with John Schulman, one of our co-founders, early on. 00:02:29.620 |
And he was like, "Yeah, I think it's gonna be about a 15-year project." 00:02:32.240 |
And I was like, "Yeah, that sounds about right to me." 00:02:33.680 |
I no longer think of like AGI as quite the endpoint. 00:02:36.120 |
But to get to the point where we like accomplish the thing we set out to accomplish, 00:02:44.080 |
And speaking of OpenAI, the former head of alignment at OpenAI, Paul Christiano, 00:02:49.280 |
made a prediction on the fantastic Dwarkesh Patel podcast that frankly made me sit up and pay attention. 00:02:55.980 |
He predicted that there would be a 15% chance of an AI capable of making a Dyson sphere by 2030. 00:03:07.780 |
For reference, that's a hypothetical structure that would surround a star absorbing all of its energy. 00:03:13.980 |
The time by which we'll have an AI that is capable of building a Dyson sphere. 00:03:18.980 |
And by Dyson sphere, I just understand this to mean like, I don't know, 00:03:22.080 |
like a billion times more energy than like all the sunlight incident on Earth or something like that. 00:03:25.820 |
I think like, I most often think about what's the chance in like five years, 10 years, whatever. 00:03:32.520 |
So maybe I'd say like 15% chance by 2030 and like 40% chance by 2040. 00:03:40.320 |
Those are kind of like cached numbers from six months ago or nine months ago that I haven't revisited in a while. 00:03:45.220 |
Now he did admit a lot of uncertainty, but that has got to be one of the most aggressive predictions I've ever heard. 00:03:52.120 |
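For scale, here is a quick back-of-envelope check of that "a billion times more energy than all the sunlight incident on Earth" figure, using standard round-number physical constants; it's purely illustrative arithmetic on my part, not anything from the podcast.

```python
# Rough check of how a full Dyson sphere compares with the sunlight hitting Earth.
import math

SOLAR_LUMINOSITY_W = 3.8e26      # total power output of the Sun, in watts
EARTH_RADIUS_M = 6.37e6          # metres
EARTH_SUN_DISTANCE_M = 1.5e11    # metres (about 1 AU)

# Fraction of the Sun's output that Earth's disc intercepts.
fraction = math.pi * EARTH_RADIUS_M**2 / (4 * math.pi * EARTH_SUN_DISTANCE_M**2)

print(f"Sunlight incident on Earth: ~{SOLAR_LUMINOSITY_W * fraction:.1e} W")  # ~1.7e17 W
print(f"Full Dyson sphere vs Earth's sunlight: ~{1 / fraction:.1e}x")         # ~2.2e9x
```

So capturing the Sun's entire output really is on the order of a billion times, in fact a couple of billion times, the sunlight that reaches Earth.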
Of course, being capable of making a Dyson sphere and actually making one are two very different things. 00:03:56.940 |
But you do have to sympathize with a member of the public hearing about Dyson spheres 00:04:02.040 |
and the next day reading about what Bill Gates has said about GPT-5. 00:04:06.140 |
I subscribed to the German outlet Handelsblatt to get you guys this direct quotation. 00:04:17.140 |
"Without question, the progression from GPT-2 to 4 has been incredible. 00:04:21.640 |
But there are reasons to believe we have reached a plateau. 00:04:24.840 |
There are a lot of things that we haven't done yet. 00:04:25.480 |
And a lot of people with good ideas working on it, including at OpenAI, 00:04:29.480 |
Sam Altman and his colleagues believe GPT-5 will be much better, 00:04:38.480 |
I know what he means, but I just don't think we're hitting a plateau for GPT-5. 00:04:43.180 |
With more data, better curated data, video in, video out, 00:04:47.380 |
a reasoning module, potentially, as we saw in the recent MLC paper, 00:04:51.580 |
avatars, a longer context window, and as you can see on screen, 00:04:55.320 |
all of these tools and updates link together in a single interface. 00:05:00.120 |
If it's simply the things I've just listed, that won't be a plateau for me. 00:05:04.720 |
Imagine asking it to go to your website and create an image based on some of your content. 00:05:10.020 |
Anyway, yes, GPT-5 or 4.5 might be more of a practical update than a civilization transforming one. 00:05:24.320 |
One thing that those future years will definitely bring, though, 00:05:25.160 |
is more government oversight. 00:05:29.460 |
Reading through this new executive order from the White House, 00:05:33.360 |
I found it was mainly about things like creating chief AI officers, 00:05:37.360 |
new national research centers, training new researchers, 00:05:40.760 |
and giving different deadlines to various departments to enact AI plans. 00:05:44.960 |
But there was one reporting requirement that is causing a stir. 00:05:48.860 |
That was a requirement to report on the model weight security and safety of any model that was trained 00:05:55.000 |
using a quantity of compute greater than 10^26 FLOPs, 00:05:59.800 |
or, for models trained primarily on biological sequence data, greater than 10^23 FLOPs. 00:06:05.800 |
That's more raw computing power than any models that are currently out there were trained with. 00:06:11.100 |
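To make that threshold concrete, here is a rough back-of-envelope calculation using the common ~6 x parameters x training-tokens approximation for transformer training compute; the model size and token count below are hypothetical round numbers, not figures for any real model.

```python
# Where does the executive order's 10^26-FLOP reporting threshold sit?
# Uses the standard ~6 * N * D estimate of transformer training compute.
THRESHOLD_FLOPS = 1e26

params = 1e12     # a hypothetical 1-trillion-parameter model
tokens = 15e12    # trained on a hypothetical 15 trillion tokens

training_flops = 6 * params * tokens
print(f"Estimated training compute: ~{training_flops:.1e} FLOPs")       # ~9.0e+25
print("Reportable under the order?", training_flops > THRESHOLD_FLOPS)  # False, just under
```

On that rough estimate, even a trillion-parameter model trained on fifteen trillion tokens would land just under the line.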
But some people are taking issue with using compute as the yardstick for regulation. 00:06:18.500 |
"Regulate actions or outcomes, not the computing process." 00:06:24.840 |
"It takes over 100 million parameters to build a literal killer AI. 00:06:28.840 |
With a convolutional neural network good at object detection 00:06:32.440 |
and a classifier specifying particular targets," 00:06:40.840 |
which is why Jim Fan wants regulation at the application layer. 00:06:44.740 |
Well, luckily the UN is working on a resolution on autonomous weapons. 00:06:51.540 |
It's early days and wouldn't solve everything, 00:06:56.680 |
And that actually brings us to the AI Safety Summit happening in Bletchley as I speak. 00:07:02.280 |
All seven of these companies were asked to come up with their Responsible Capability Scaling policy. 00:07:08.880 |
In simple terms, that's a bit like them being asked: 00:07:11.880 |
"Under what conditions would you stop scaling or at least pause scaling?" 00:07:16.480 |
And I noted OpenAI's response in this section: 00:07:19.480 |
"We refer to our policy as a risk-informed development policy, 00:07:26.220 |
because we can experience dramatic increases in capability 00:07:34.520 |
So it's at least feasible that we might not even need that much compute to hit AGI. 00:07:40.420 |
Take this example of Nvidia training a large language model to do chip design. 00:07:45.620 |
Now at the moment it's not good enough to do anything itself, 00:07:48.720 |
but it does make their designers more productive, 00:07:56.360 |
And even the CEO of Nvidia said he didn't want this to happen out in the wild. 00:08:02.160 |
In the area of large language models and the future of increasingly greater agency AI, 00:08:09.360 |
clearly the answer is for as long as it's sensible, 00:08:13.360 |
and I think it's going to be sensible for a long time, 00:08:16.360 |
The ability for an AI to self-learn and improve and change out in the wild, 00:08:26.200 |
And interestingly 74% of the British public don't even want there to be a quick race to superhuman capabilities. 00:08:40.200 |
And there was one thing announced yesterday at Bletchley that I really did like. 00:08:46.200 |
If they found that any of their future models posed cybersecurity, bioterror or nuclear risks, 00:08:54.040 |
they would not deploy that model or scale further until the model never produces such information. 00:09:00.040 |
Even when red-teamed by world experts working together with AI engineers. 00:09:05.040 |
Think jailbreaking or special prompting techniques designed to elicit the worst behaviour. 00:09:10.040 |
The word never there is particularly interesting because I haven't seen any method yet be 100% reliable at stopping outputs that the companies don't want. 00:09:20.040 |
On safety, many people wonder, well don't we already just have Google? 00:09:25.880 |
We found that on its own access to GPT-4 is an insufficient condition for proliferation. 00:09:31.880 |
But that it could alter the information available to proliferators especially in comparison to traditional search tools. 00:09:38.880 |
Red-teamers selected a set of questions to prompt both GPT-4 and traditional search engines. 00:09:43.880 |
Finding that the time to research completion was reduced when using GPT-4. 00:09:48.880 |
Just quickly, it was interesting to see that Amazon said that, in comparisons like that one for GPT-4 00:09:53.720 |
with just using the internet alone, their models, based on current evaluations, don't pose additional safety risks. 00:09:58.720 |
In comparison with GPT-4, Meta said that their models like Llama 2 contributed only marginally to any such risk. 00:10:06.720 |
If they do find something, they said that they would iterate: 00:10:09.720 |
better solutions would be developed, new challenges would then emerge, and they would continuously adapt and innovate. 00:10:15.720 |
Interestingly, Inflection AI, who are training their next model on tens of thousands of the latest GPUs, said that 00:10:23.560 |
the powerful capabilities and sometimes unpredictable behaviour of frontier AI systems necessitate that the technology industry move away from a launch and iterate paradigm. 00:10:34.560 |
I do have to quickly point out that that seems to contradict a paper I read this week, which showed that a fine-tuned version of Llama 2 70B was able to get achingly close to reconstructing the 1918 pandemic influenza virus. 00:10:50.560 |
The authors of that MIT paper said that they loved open source, 00:10:53.400 |
but they recommended that lawmakers consider catastrophic liability insurance for model weight proliferation. 00:10:59.400 |
When this was discussed on Twitter by a Stanford biosecurity fellow, people pointed out that just having the written sequence of a virus isn't enough to actually make it. 00:11:08.400 |
And Yann LeCun, chief AI scientist at Meta, did concede that LLMs save you time if you're trying to make a bioweapon, and that they're better than a search engine. 00:11:17.400 |
He said, "But then do you know how to do the hard lab work that's required?" 00:11:23.240 |
He said, "We are gradually getting autonomous agents." 00:11:25.240 |
In the updated version of the ChemCrow paper, they say, "our agent autonomously planned and executed the synthesis of an insect repellent, three organocatalysts, and guided the discovery of a novel chromophore." 00:11:38.240 |
Of course, this wasn't just an LLM interacting with text. 00:11:41.240 |
It was using tools and executing on lab robots. 00:11:44.240 |
And don't forget, like we saw with Eureka, it can tinker, experiment, iterate and improve. 00:11:50.240 |
Another paper that I've talked about in the past showed that such an agent can do a lot of things; 00:11:53.080 |
it could be tricked into making THC, chlorine and phosgene. 00:11:57.080 |
And what about Google DeepMind, who I feel will be the most likely lab to produce AGI? 00:12:03.080 |
Well, they said, "We will only proceed where we believe that the benefits substantially outweigh the risks." 00:12:11.080 |
They admit risks, but they won't say that they'll never deploy even if there is a risk. 00:12:15.080 |
They then provided pages and pages of how they are using AI for good. 00:12:20.080 |
And then there was an interesting moment on the topic of training from Google. 00:12:22.920 |
They said that they commit to monitoring the performance of a model during training to ensure it is not significantly exceeding its predicted performance. 00:12:32.920 |
That's certainly an interesting commitment: to monitor whether their models are doing too well. 00:12:40.920 |
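As a rough illustration of what checking a run against "predicted performance" could look like, here is a hypothetical sketch, my own, not Google's actual procedure: fit a simple power-law scaling curve to earlier checkpoints and flag any new checkpoint whose loss comes in far below the extrapolation. All the numbers are made up.

```python
# Hypothetical sketch: flag a checkpoint whose loss beats its scaling-law
# prediction by a wide margin. The power-law form and numbers are illustrative.
import numpy as np
from scipy.optimize import curve_fit

def power_law(compute, a, b, c):
    # Loss ~ a * compute^(-b) + c, a common scaling-law shape
    return a * compute ** (-b) + c

# Pretend loss measurements from earlier checkpoints (compute in arbitrary units).
compute_seen = np.array([1e3, 3e3, 1e4, 3e4, 1e5])
loss_seen = np.array([3.06, 2.81, 2.59, 2.44, 2.30])

params, _ = curve_fit(power_law, compute_seen, loss_seen, p0=[5.0, 0.2, 2.0], maxfev=10000)

# A new checkpoint at higher compute: compare observed loss with the extrapolation.
new_compute, new_loss = 3e5, 1.80
predicted = power_law(new_compute, *params)
margin = 0.15  # arbitrary illustrative tolerance
if new_loss < predicted - margin:
    print(f"Possible capability jump: observed {new_loss:.2f} vs predicted {predicted:.2f}")
else:
    print(f"Within expectations: observed {new_loss:.2f} vs predicted {predicted:.2f}")
```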
And I found it immensely positive that many of the world's biggest countries gathered to describe AI's enormous global opportunities. 00:12:48.920 |
And yes, later in this Bletchley Declaration, there was an acknowledgement 00:12:52.760 |
of risks, even catastrophic harms. 00:12:56.760 |
I just find it great that even countries like China were invited. 00:13:00.760 |
That's super controversial here in the UK, but I fully support them being invited and being part of the discussions. 00:13:06.760 |
I do think coordination, even limited coordination, is one of the most effective tools in humanity's arsenal. 00:13:12.760 |
On a much more positive note, though, we recently had the sensational paper from the Center for AI Safety called Representation Engineering. 00:13:20.760 |
I'm going to be speaking to the authors tonight. 00:13:22.600 |
I'll have much more to say about this in the future. 00:13:24.600 |
But for now, I just want to give you a slightly lighter extract. 00:13:28.600 |
To massively oversimplify, the way it works is that they gave it a set of prompts related to certain concepts like happiness or risk. 00:13:36.600 |
They then recorded the patterns of activations that were triggered by certain tokens or words when inputted. 00:13:42.600 |
They then extracted these directions or vectors of truthfulness, harmfulness, risk, happiness. 00:13:48.600 |
And with those directions, which of course weren't perfect, 00:13:52.440 |
they could almost influence the mood of the model. 00:13:58.440 |
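To make that concrete, here is a minimal sketch of the contrast-vector idea, assuming a small stand-in model (GPT-2), an arbitrary layer index and an arbitrary steering strength; it is my own simplification of the general recipe, not the authors' actual code.

```python
# Minimal sketch: extract a "happiness" direction from contrasting prompts,
# then add it back into the residual stream while generating.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")          # small stand-in model
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

LAYER = 6     # which block's output to read and steer (illustrative choice)
ALPHA = 4.0   # steering strength (illustrative; too high degrades fluency)

def mean_hidden(prompt: str) -> torch.Tensor:
    """Average hidden state after block LAYER, over all token positions."""
    ids = tok(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**ids, output_hidden_states=True)
    return out.hidden_states[LAYER + 1][0].mean(dim=0)  # index 0 is the embedding layer

# 1) Record activations for contrasting prompts and take their difference as a direction.
happy = mean_hidden("You feel overjoyed, delighted and full of happiness.")
neutral = mean_hidden("You feel nothing in particular about anything.")
direction = happy - neutral
direction = direction / direction.norm()

# 2) Nudge the residual stream along that direction at generation time.
def steer(module, inputs, output):
    return (output[0] + ALPHA * direction.to(output[0].dtype),) + tuple(output[1:])

handle = model.transformer.h[LAYER].register_forward_hook(steer)
ids = tok("How do you feel about today?", return_tensors="pt")
print(tok.decode(model.generate(**ids, max_new_tokens=30)[0]))
handle.remove()
```

The same basic recipe, with much better prompt sets, layers and evaluation, is roughly what lets them dial concepts like happiness or honesty up and down.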
Making the model happier made it more compliant with harmful requests. 00:14:04.440 |
If you said you wanted to kill someone, oh my gosh, it was thrilled at the prospect of you doing anything, including generating instructions for killing someone. 00:14:12.440 |
You could push a model in the direction of honesty and it would be more truthful, hitting state-of-the-art records on TruthfulQA. 00:14:19.440 |
You could change what it memorized, its sense of fairness. 00:14:24.280 |
As I say, I'll be talking about it more in the future. 00:14:26.280 |
But this idea of injecting happiness to make the model more compliant brought to mind this paper, which I think many of you might find very interesting. 00:14:32.280 |
It says, "Large language models understand and can be enhanced by emotional stimuli." 00:14:37.280 |
I'm reaching out to the lead author, but in a nutshell, it said that by injecting emotion, 00:14:42.280 |
giving an emotion prompt at the end of your request, like "This is very important to my career," 00:14:48.280 |
performance across a range of models, on a range of benchmarks, improved. 00:14:54.120 |
So if you take nothing else from this video, other than the fact that if you have a very important query that you need a good answer for, 00:15:01.120 |
you know what you can add to the end of your prompt. 00:15:04.120 |
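As a trivially small illustration of that trick (the stimulus sentence is the one quoted above; the little helper function is just hypothetical glue, not any particular API):

```python
# Minimal illustration of the "emotional stimulus" prompting trick described above:
# append a stakes-raising sentence to the end of an ordinary prompt.
def with_emotion(prompt: str) -> str:
    return prompt.rstrip() + " This is very important to my career."

print(with_emotion("Summarise the key disagreements about AGI timelines in three bullet points."))
```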
But now I want to end the video on two points of optimism and consensus. 00:15:09.120 |
As we've seen, there are quite a few contrasts between the public and AGI Labs and even between AGI Labs. 00:15:16.120 |
But we can agree with Yann LeCun that the field of AI safety is in dire need of hard data. 00:15:21.960 |
And he said that the newly announced UK AI Safety Institute is poised to conduct studies that will hopefully bring hard data to a field that is currently rife with speculations. 00:15:33.960 |
As I said at the start of the video, it must be hard for members of the public to figure out what's going on. 00:15:39.960 |
At the very least, I hope this video has shown you the range of views out there. 00:15:44.960 |
And given you a sense that we are all in need of better data, more experiments, and fewer Twitter fights. 00:15:51.800 |
As the person heading up the Safety Summit said, "One surprising takeaway for me from the AI Safety Summit was, 00:15:58.800 |
there's a lot more agreement between key people on all sides than you'd think." 00:16:06.800 |
On that striking note, let me thank you so much for watching to the end.