AGI Will Not Be A Chatbot - Autonomy, Acceleration, and Arguments Behind the Scenes
00:00:00.000 |
In the last 48 hours there has been a flurry of articles and interviews revealing key details 00:00:06.740 |
about our march to artificial general intelligence. I'm going to extract the most interesting 00:00:12.320 |
parts and link them to a series of other recent developments. And if there's one theme it's that 00:00:18.040 |
the AGI these companies are creating is not just a clever chatbot and is coming much sooner than 00:00:24.420 |
you might think. If AGI is a lot more than just a clever personal assistant what actually is it? 00:00:29.600 |
Well let's turn to this bombshell article in Wired. They say OpenAI doesn't claim to know 00:00:35.260 |
what AGI really is. The determination would come from the board but it's not clear how the board 00:00:40.320 |
would define it. Sam Altman who is on the board of OpenAI said I would happily tell you but I 00:00:45.120 |
like to keep confidential conversations private. And then said but we don't know what it's going 00:00:49.900 |
to be like at that point. The struggle to define AGI continues on their website, where they first 00:00:55.740 |
say that AGI is AI systems that are generally 00:00:59.200 |
smarter than humans. Elsewhere they say their definition of AGI is highly autonomous systems 00:01:05.140 |
that outperform humans at most economically valuable work. And back to the just released 00:01:10.660 |
article it seems that confusion continues with Microsoft. It says Microsoft wasn't even bothered 00:01:16.240 |
by the clause that demands reconsideration if OpenAI achieves AGI. Whatever that is. At that 00:01:23.380 |
point says Nadella the CEO of Microsoft all bets are off. It might be the last invention of humanity 00:01:28.800 |
he notes. So we might have bigger issues to consider once machines are smarter than we are. 00:01:34.320 |
OpenAI even have a legal disclaimer that says you as an investor stand to lose all of your money. We 00:01:40.240 |
are not here to make you a return. We're here to achieve a technical mission. Oh and by the way we 00:01:46.080 |
don't really know what role money will play in a post AGI world. In their restructuring documents 00:01:51.760 |
they have a clause to the effect that if the company does manage to create AGI all financial 00:01:56.640 |
arrangements will be reconsidered. But not clearly defining AGI seems 00:02:01.200 |
to play into the hands of Microsoft who can always say well we haven't reached it yet. I 00:02:05.680 |
don't think Microsoft is going to lightly allow all of their trillion dollar financial arrangements to 00:02:11.040 |
be reconsidered. And again we have this the company's financial documents stipulate a kind 00:02:16.000 |
of exit contingency for when AI wipes away our whole economic system. But then apparently the 00:02:21.840 |
company's leadership says this it would be hard they say to work at OpenAI if the individual didn't 00:02:28.000 |
believe that AGI was truly coming and furthermore that its arrival would mark one of the greatest 00:02:33.920 |
moments in human history. Why would a non-believer want to work here they wondered. Three more quick 00:02:40.080 |
insights from Wired before we move on. First that Sam Altman was planning to run for governor of 00:02:45.920 |
California before he planned to build AGI. Second that in a 2015 interview with the 00:02:52.720 |
same journalist at Wired Sam Altman was a bit more clear about what he thought the 00:02:57.600 |
job impacts of AGI would be. He predicted the challenge of massive automation and job elimination 00:03:04.320 |
that's going to happen. This chimes in with a quote from the new book The Coming Wave that I 00:03:09.120 |
got early access to, written by Mustafa Suleyman, the head of another AGI lab. He said these tools 00:03:15.440 |
are only temporarily augmenting human intelligence but they are fundamentally labor replacing. And 00:03:22.240 |
finally from Wired was this quote that I found really interesting. Sam Altman's original vision 00:03:27.200 |
was not to make a single entity that is a million times more powerful than any human. He wanted lots 00:03:33.920 |
of little AIs not one super intelligent AI. But Sam Altman is now talking about making a super 00:03:40.320 |
intelligence within the next 10 years that's as productive as one of today's largest corporations. 00:03:46.480 |
Which don't forget have millions of human employees. This fits in with his view that AGI 00:03:52.560 |
is only going to get built exactly once. Once you've got it you could then use it to build a new entity. 00:03:56.800 |
And if you think the timeline of within the next 10 years sounds wild wait till you hear some of the 00:04:04.880 |
quotes towards the end of the video. And the key thing to remember is that it's not about 00:04:09.360 |
models just getting smarter. It's about them being more capable. More able to match 00:04:14.320 |
a goal with a set of actions. As the chief scientist at OpenAI, Ilya Sutskever, said, the 00:04:19.920 |
upshot is "Eventually AI systems will become very very very capable and powerful. We will 00:04:26.400 |
not be able to understand them, they'll be much smarter than us." Notice the words capable and 00:04:31.600 |
powerful there. And his vision by the way is that we imprint on them like a parent to a child so that 00:04:37.520 |
they eventually feel towards us the way we feel towards our babies. Now I don't know about you 00:04:42.320 |
but it's challenging for me to imagine the human race as a baby in the arms of an AGI or super 00:04:48.960 |
intelligence. But some of you may be thinking, is a super intelligence just a really smart chatbot 00:04:54.000 |
that sits on a web page somewhere? Well, no. For many 00:04:59.120 |
reasons. Starting with the fact that we're building autonomy into them. I've already 00:05:03.200 |
discussed on the channel how OpenAI are working on autonomy. And here's Google DeepMind recruiting 00:05:09.120 |
for the same purpose. And what kind of tasks are we talking about? What about commissioning from 00:05:13.920 |
a factory? The purpose of the modern Turing test is to measure what an AI can do, not what it can 00:05:19.920 |
say. Capabilities, practical creation of real things in the real world, or the use of digital 00:05:25.600 |
tools. Invent a new product from scratch, get it manufactured by commissioning it in a factory, 00:05:32.480 |
negotiate over the blueprints of the product, actually get it drop-shipped, and sell the new 00:05:37.840 |
product. His company, Inflection AI, want their Pi, personal intelligence, to be your digital 00:05:43.680 |
chief of staff, booking flights, bargaining with other AIs and maybe even making a million dollars. 00:05:49.840 |
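To make concrete what "measuring what an AI can do, not what it can say" might look like in practice, here is a minimal sketch, in Python, of that modern Turing test scored as a checklist of real-world milestones. The milestone wording follows the description above; the scoring function and its names are my own illustration, not any benchmark Suleyman has actually published.

```python
# A minimal sketch of the "modern Turing test" scored as real-world milestones
# rather than conversation. Milestone wording follows the description above;
# the scoring structure itself is illustrative, not a published benchmark.
MILESTONES = [
    "invent a new product from scratch",
    "get it manufactured by commissioning a factory",
    "negotiate over the blueprints",
    "get the product drop-shipped",
    "sell it (e.g. grow a seed budget into $1 million)",
]

def score_agent(completed: set[str]) -> tuple[int, bool]:
    """Return (milestones completed, whether the whole test was passed)."""
    done = sum(1 for m in MILESTONES if m in completed)
    return done, done == len(MILESTONES)

# Example: a hypothetical agent that has only managed the first two steps.
done, passed = score_agent({MILESTONES[0], MILESTONES[1]})
print(f"{done}/{len(MILESTONES)} milestones completed; test passed: {passed}")
```

The point of the sketch is only that success would be judged by completed actions in the world, not by the quality of the conversation along the way.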
That chief-of-staff ambition is particularly interesting to me because Suleyman says that they are not working on autonomy. 00:05:55.200 |
I do. I think that we may be working on those capabilities, but they won't necessarily 00:05:59.440 |
represent an existential threat. I think what I'm saying is they indicate the beginning of 00:06:04.480 |
a trajectory towards a greater threat, right? At Inflection, we're actually not working on 00:06:10.000 |
either of those capabilities, recursive self-improvement and autonomy. And I've 00:06:14.960 |
chosen a product direction, which I think can enable us to be extremely successful without 00:06:19.760 |
needing to work on that. I mean, we're not an AGI company. We're not trying to build a super 00:06:24.240 |
intelligence. 00:06:24.800 |
But aside from autonomy, what else might AGI or super intelligence entail? Well, 00:06:30.320 |
how about matching or exceeding the problem solving ability of the best human mathematicians 00:06:35.920 |
within a decade? Wait, scrap that. 2026. Terence Tao, who I believe is recorded as the highest IQ 00:06:42.960 |
human around, said this: "When integrated with tools, I expect, say, a 2026 level AI, 00:06:49.760 |
when used properly, will be a trustworthy co-author in mathematical research. And, 00:06:54.400 |
in many other fields as well." What else? Well, how about self-improving capabilities? 00:06:59.920 |
This is from The AI Power Paradox in Foreign Affairs. "Soon, AI developers will likely succeed 00:07:06.480 |
in creating systems with self-improving capabilities, a critical juncture in the 00:07:10.960 |
trajectory of this technology that should give everyone pause." OpenAI agrees, saying it's 00:07:16.160 |
possible that AGI capable enough to accelerate its own progress could cause major changes to happen 00:07:22.800 |
surprisingly quickly. And back to Suleyman in Foreign 00:07:26.160 |
Affairs. With each new order of magnitude, unexpected capabilities emerge. Few predicted 00:07:31.520 |
that training on raw text would enable large language models to produce coherent, novel, and 00:07:36.080 |
even creative sentences. Fewer still expected language models to be able to compose music or 00:07:40.640 |
solve scientific problems, as some now can. An order of magnitude, don't forget, means being 00:07:45.920 |
10 times bigger. But how about 1000 times bigger than GPT-4 in 3 years? Right, so if everybody 00:07:53.600 |
gets that power, that starts to look like, you know, individuals having the power of 00:07:57.760 |
organisations or even states. I'm talking about models that are like two or three orders of 00:08:02.240 |
magnitude, maybe four orders of magnitude on from where we are. And we're not far away from that. 00:08:06.240 |
We're going to be training models that are 1000x larger than they currently are in the next three 00:08:11.200 |
years. Even at Inflection, with the compute that we have, we'll be 100x larger than the current 00:08:16.160 |
frontier models in the next 18 months. 00:08:21.040 |
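As a rough sanity check on what those multipliers mean, here is a small back-of-the-envelope calculation in Python. The multipliers (100x in 18 months, 1000x in three years) are the ones quoted above; the compound-growth framing is just my own illustration of what they imply, not anything the labs have published.

```python
import math

# Back-of-the-envelope: what the quoted scale-ups imply. The multipliers are
# taken from the quote above; the growth-rate framing is illustrative only.
claims = [
    ("Inflection vs current frontier models", 100, 18),   # 100x in 18 months
    ("Training runs vs today", 1000, 36),                 # 1000x in 3 years
]

for label, multiplier, months in claims:
    orders_of_magnitude = math.log10(multiplier)          # 100x -> 2, 1000x -> 3
    monthly_growth = multiplier ** (1 / months) - 1       # implied compound rate
    print(f"{label}: {multiplier}x = {orders_of_magnitude:.0f} orders of magnitude, "
          f"~{monthly_growth:.0%} compound growth per month over {months} months")
```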
You can start to see why AGI means so much more than just a chatbot. And here's another example. 00:08:23.200 |
This again came from Suleyman's book released two days ago. He talks about the 00:08:28.160 |
WannaCry ransomware attack that caused billions of dollars of damage back in 2017. And he said, 00:08:34.160 |
"Imagine WannaCry being able to patch its own vulnerabilities, learning to detect and stop 00:08:40.640 |
further attempts to shut it down. A weapon like this is on the horizon, if not already in 00:08:46.880 |
development." Let's move on though from Suleiman to Demis Hassabis, head of Google DeepMind. And 00:08:52.800 |
this article yesterday in Time magazine, which was fascinating. Apparently, Hassabis warned Elon Musk 00:08:58.800 |
about the risks from artificial intelligence back in 2012, saying that machines could become super 00:09:04.400 |
intelligent and surpass us mere mortals. Maybe that's why he's glad that ChatGPT moved the 00:09:10.160 |
Overton window, the window of what it's permissible to discuss in public. In 2012, it was probably 00:09:16.560 |
embarrassing to talk about artificial intelligence, let alone AGI. We also learned that Musk tried to 00:09:22.400 |
stop DeepMind being sold to Google. Musk put together financing to stop the deal and had an 00:09:28.160 |
hour-long Skype call with Hassabis saying the future of AI should not be controlled by Larry. 00:09:34.000 |
That's Larry Page, one of the co-founders of Google. Obviously that didn't happen and we are 00:09:38.800 |
soon set to see Gemini from Google DeepMind. That's their rival to GPT-4. I've got a whole video series 00:09:45.680 |
on it, so do check that out. But the revelation from today was that Hassabis said this, "Gemini represents 00:09:52.000 |
a combination of scaling and innovation. It's very promising early results." And as a Londoner 00:09:58.000 |
like me, Hassabis will be prone to understatement. So a British person saying "very promising early 00:10:03.920 |
results" means it might shock a few people. Now do you remember from earlier in the video where I 00:10:08.800 |
quoted Altman back in 2015 saying this, "AI is about making humans better, not a single entity 00:10:15.360 |
that is a million times more powerful." Well for Musk, who remember co-founded OpenAI, that was 00:10:21.600 |
And that's the reason why he's so proud of his idea of the AI. 00:10:23.760 |
That's his attempt to tie the bots closer to humans, making them an extension of the will of 00:10:29.840 |
individuals, rather than systems that could go rogue and develop their own goals and intentions. 00:10:34.480 |
Another idea for Musk would be to build a sustainable human colony on Mars before 00:10:39.920 |
anything went wrong. Speaking of going wrong, we also have this from Demis Hassabis in Time Magazine. 00:10:46.080 |
He was asked, "Are there any capabilities that, if Gemini remember that's their version of GPT-4 or 00:10:51.200 |
5, exhibited them in your testing phase, you'd decide, 'No, we cannot release this'?" 00:10:56.000 |
He said, "Yeah, I mean it's probably several generations down the line." 00:11:00.080 |
And then for anyone who's watched my SmartGPT series, they'd know that I would agree with the 00:11:04.720 |
next part. I think the most pressing thing that needs to happen in the research area of AI is to 00:11:09.760 |
come up with the right evaluation benchmarks for capabilities. Because we'd all love a set 00:11:15.200 |
of maybe even hundreds of tests where if your system passed it, that could get a kite mark and 00:11:20.800 |
you'd say, "Right, this is safe to deploy." The problem is, we don't have those types of 00:11:25.360 |
benchmarks currently. For example, is this system capable of deception? Can it replicate itself 00:11:30.880 |
across data centers? This is the kind of level we're talking about. These are the sorts of 00:11:35.280 |
things you might want to test for. But we need practical, pragmatic tests for them. I think 00:11:40.800 |
that's the most pressing thing for the field to do as a whole. Our current evals and benchmarks 00:11:45.360 |
are just not up to the task. 00:11:50.400 |
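For what it's worth, here is a minimal sketch, in Python, of the kind of pass/fail "kitemark" gate Hassabis is describing: a battery of dangerous-capability evals that must all come back negative before release. The eval names echo his examples (deception, self-replication across data centres); the interfaces and function names are invented for illustration and don't correspond to any lab's real benchmark suite.

```python
from dataclasses import dataclass
from typing import Callable, List

# A hypothetical sketch of a pass/fail capability gate of the kind described
# above. Eval names echo the examples in the quote; the interfaces are invented.
@dataclass
class CapabilityEval:
    name: str                        # e.g. "deception", "self-replication"
    run: Callable[[object], bool]    # True if the dangerous capability is observed

def safe_to_deploy(model: object, evals: List[CapabilityEval]) -> bool:
    """Release gate: every dangerous-capability eval must come back negative."""
    observed = [e.name for e in evals if e.run(model)]
    if observed:
        print("Hold the release; capabilities observed:", ", ".join(observed))
        return False
    print("All evals negative on this suite; release criteria met.")
    return True

# Placeholder evals -- real ones would be large behavioural test batteries.
evals = [
    CapabilityEval("deception", lambda m: False),
    CapabilityEval("self-replication across data centres", lambda m: False),
]
safe_to_deploy(model=None, evals=evals)
```

The hard part, as the quote says, is making the individual checks practical and pragmatic; the gate itself is the easy bit.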
But apparently, we have plenty of time to do this. At least, according to Suleyman, who don't forget, along with Hassabis, was a co-founder of DeepMind. 00:11:55.520 |
What many of us more or less expected, or were more or less sure was coming, was that 00:12:00.320 |
there'd be a breakthrough at some company like DeepMind where the people building the technology 00:12:08.400 |
would recognize that they had finally gotten into the end zone or close enough to it so that they're 00:12:14.720 |
now in the presence of something that's fundamentally different than anything that's 00:12:18.400 |
come before. And there'd be this 00:12:20.000 |
question, "Okay, is this safe to work with, safe to create an API for?" The idea was that 00:12:27.120 |
you'd have this digital oracle in a box, which would already have been air-gapped from the internet 00:12:33.760 |
and incapable of doing anything until we let it out. And then the question would be, 00:12:37.920 |
"Have we done enough safety testing to let it out?" But now it's pretty clear that 00:12:42.640 |
everything is already more or less out and we're building our most powerful models already in the 00:12:49.600 |
wild. There are open source versions of the next best model. And is containment even a dream at this 00:12:56.240 |
point? It's definitely not too late. We're a long, long way away. This is really just the beginning. 00:13:01.760 |
We have plenty of time to address this. The more that these models and these ideas 00:13:08.640 |
happen in the open, the more they can be scrutinized and they can be pressure tested 00:13:14.320 |
and held accountable. So I think it's great that they're happening in open source at the moment. We 00:13:19.200 |
have to be humble about the practical reality of how these things emerge. The initial framing 00:13:26.560 |
that it was going to be possible to invent this oracle AI that stays in a box and we'll just probe 00:13:32.000 |
it and poke it and test it until we can prove that it's going to be safe, that we'll stay in the 00:13:36.240 |
bunker and keep it hidden from everybody. I mean, this is a complete nonsense and it's attached to 00:13:40.480 |
the super intelligence framing. It was just a completely wrong metaphor. 00:13:44.800 |
He might want to tell that to Hassabis who said this recently, "If a 00:13:48.800 |
system didn't pass the evals, that means you wouldn't release it until you sorted that out." 00:13:53.440 |
And perhaps you would do that in something like a hardened simulator or hardened sandbox with 00:13:58.640 |
cybersecurity things around it. So these are the types of ideas we have, but they need to be made 00:14:03.520 |
a little bit more concrete. I think that's the most pressing thing to be done in time for those 00:14:08.320 |
types of systems when they arrive. And here's the key moment, because I think we've got a 00:14:12.960 |
couple of years probably or more. That's not actually a lot of time if you think about the 00:14:18.400 |
research that has to be done. I think I am much closer to Hassabis than Suleyman on this one. 00:14:24.000 |
Personally, I'd love a button that you could click at the exact moment you want AI to 00:14:29.040 |
stop. For me, that would be after it cures Alzheimer's, but before it creates rabies 2.0. 00:14:34.560 |
It would be after it solves mathematics, but before it gets too good at warfare. 00:14:39.600 |
Maybe such a button could be devised at Bletchley Park in November, which was the place where the 00:14:45.200 |
Enigma was cracked in World War II. They are both very good. 00:14:48.000 |
They are being advised by some of the top minds in AI, so there is a chance. 00:14:52.560 |
And apparently they want 10 times as many people to join them in the next few weeks. 00:14:57.440 |
But anyway, at the very least, I hope I've persuaded you that AGI is going to mean a lot 00:15:03.520 |
more than just clever chatbots. Thank you so much for watching to the end and have a wonderful day.