Hassabis, Altman and AGI Labs Unite - AI Extinction Risk Statement [ft. Sutskever, Hinton + Voyager]
00:00:00.080 |
Just a few hours ago a host of AI industry leaders, experts and academics put out this 00:00:06.480 |
22 word statement on making AI safety a global priority. 00:00:11.600 |
The so-called statement on AI risk brought together for the first time all the current leaders of the top AGI labs. 00:00:18.480 |
That's people like Sam Altman, Ilya Sutskever, Demis Hassabis and Dario Amodei. 00:00:23.800 |
And two of the three founders of deep learning itself, Yoshua Bengio and Geoffrey Hinton. 00:00:31.320 |
Mitigating the risk of extinction from AI should be a global priority, alongside other 00:00:37.180 |
societal level risks such as pandemics and nuclear war. 00:00:40.940 |
It is now almost impossible to deny that this is the consensus view among AI experts. 00:00:47.080 |
Let's first look at the preamble, then break down the statement and show you the signatories. 00:00:51.860 |
They say that AI experts, journalists, policy makers and the public are increasingly discussing a broad spectrum of important and urgent risks from AI. 00:01:00.980 |
Even so, it can be difficult to voice concerns about some of advanced AI's most severe risks. 00:01:06.360 |
The succinct statement below aims to overcome this obstacle and open up discussion. 00:01:11.780 |
It is also meant to create common knowledge of the growing number of experts and public 00:01:16.120 |
figures who also take some of advanced AI's most severe risks seriously. 00:01:20.720 |
The first point is that the statement is in a way optimistic. 00:01:32.580 |
And that's not just among all the different AGI labs, almost all of which signed the statement, but internationally too. 00:01:39.200 |
In that vein, there were quite a few prominent signatories from China. 00:01:43.620 |
And the third point that I'd make is that they put it on a par with pandemics and nuclear war. 00:01:49.320 |
Toward the end of the video I'll show you that's not as far-fetched as it sounds. 00:01:52.900 |
But anyway, who actually signed this statement? 00:01:56.500 |
We have two of the three founders of Deep Learning. 00:02:02.100 |
The third founder was Yann LeCun and we'll touch on him later in the video. 00:02:05.440 |
All three of those won the most prestigious accolade in computer science, which is the Turing Award. 00:02:10.800 |
Then we have three of the CEOs of the top AGI labs. 00:02:14.840 |
Sam Altman, Demis Hassabis and Dario Amodei of OpenAI, Google DeepMind and Anthropic. 00:02:21.220 |
None of those signed the pause letter but they did sign this statement. 00:02:24.900 |
And actually, just as interestingly for me, so did Ilya Sutskever. 00:02:30.800 |
He of course also worked with Geoffrey Hinton on deep learning and is widely regarded as one of the leading researchers in the field. 00:02:39.000 |
You will also notice so many Chinese signatories. 00:02:42.560 |
Especially from Tsinghua University, which I've actually visited in China. 00:02:48.280 |
That's a really encouraging sign of cooperation between the West and countries like China. 00:02:54.380 |
And the list of significant signatories goes on and on and on. 00:02:58.200 |
These are senior people at the top of DeepMind, Anthropic and OpenAI. 00:03:03.700 |
And there are names like Stuart Russell, who wrote the textbook on AI and who also signed the pause letter. 00:03:10.200 |
Let me highlight a few more names for you here. 00:03:13.160 |
You have the CTO of Microsoft itself, Kevin Scott. 00:03:17.020 |
He's the guy who basically heads up the partnership between OpenAI and Microsoft. 00:03:21.720 |
I think many people will miss his name but I think it's particularly significant that