
AGI Will Not Be A Chatbot - Autonomy, Acceleration, and Arguments Behind the Scenes


Transcript

In the last 48 hours there has been a flurry of articles and interviews revealing key details about our march to artificial general intelligence. I'm going to extract the most interesting parts and link them to a series of other recent developments. And if there's one theme, it's that the AGI these companies are creating is not just a clever chatbot, and it's coming much sooner than you might think.

If AGI is a lot more than just a clever personal assistant, what actually is it? Well, let's turn to this bombshell article in Wired. They say OpenAI doesn't claim to know what AGI really is. The determination would come from the board, but it's not clear how the board would define it.

Sam Altman, who is on the board of OpenAI, said, "I would happily tell you, but I like to keep confidential conversations private," and then added, "but we don't know what it's going to be like at that point." The inability to define AGI continues on their website, where they first say that AGI means AI systems that are generally smarter than humans.

Elsewhere they say their definition of AGI is highly autonomous systems that outperform humans at most economically valuable work. And back to the just-released article: it seems the confusion continues with Microsoft. It says Microsoft wasn't even bothered by the clause that demands reconsideration if OpenAI achieves AGI. Whatever that is.

At that point, says Nadella, the CEO of Microsoft, all bets are off. It might be the last invention of humanity, he notes, so we might have bigger issues to consider once machines are smarter than we are. OpenAI even have a legal disclaimer that says you, as an investor, stand to lose all of your money.

We are not here to make your return. We're here to achieve a technical mission. Oh, and by the way, we don't really know what role money will play in a post-AGI world. In their restructuring documents there is a clause to the effect that if the company does manage to create AGI, all financial arrangements will be reconsidered.

That clause is worth reading in full in the article if you're wondering what role money might play in a post-AGI world. But not clearly defining AGI seems to play into the hands of Microsoft, who can always say, well, we haven't reached it yet.

I don't think Microsoft is going to lightly allow all of their trillion-dollar financial arrangements to be reconsidered. And again we have this: the company's financial documents stipulate a kind of exit contingency for when AI wipes away our whole economic system. But then, apparently, the company's leadership says it would be hard to work at OpenAI if you didn't believe that AGI was truly coming, and, furthermore, that its arrival would mark one of the greatest moments in human history.

Why would a non-believer want to work here, they wondered. Three more quick insights from Wired before we move on. First, that Sam Altman was planning to run for governor of California before he planned to build AGI. Second, that in a 2015 interview with the same journalist at Wired, Sam Altman was a bit clearer about what he thought the job impacts of AGI would be.

He predicted the challenge of massive automation and job elimination that's going to happen. This chimes with a quote from the new book The Coming Wave, which I got early access to, written by Mustafa Suleyman, the head of another AGI lab. He said these tools are only temporarily augmenting human intelligence, but they are fundamentally labor-replacing.

And finally from Wired was this quote that I found really interesting: Sam Altman's original vision was not to make a single entity that is a million times more powerful than any human. He wanted lots of little AIs, not one super intelligent AI. But Sam Altman is now talking about making a super intelligence within the next 10 years that's as productive as one of today's largest corporations.

Which, don't forget, have millions of human employees. This fits with his view that AGI is only going to get built exactly once; once you've got it, you could then use it to build a new entity. And if you think a timeline of within the next 10 years sounds wild, wait till you hear some of the quotes towards the end of the video.

And the key thing to remember is that it's not about models just getting smarter. It's about them being more capable, more able to match a goal with a set of actions. As the chief scientist at OpenAI, Ilya Sutskever, said, the upshot is: "Eventually AI systems will become very, very, very capable and powerful.

We will not be able to understand them; they'll be much smarter than us." Notice the words capable and powerful there. And his vision, by the way, is that we imprint on them like a parent to a child, so that they eventually feel towards us the way we feel towards our babies.

Now, I don't know about you, but it's challenging for me to imagine the human race as a baby in the arms of an AGI or super intelligence. But some of you may be thinking: is a super intelligence just a really smart chatbot that sits on a web page somewhere?

Well, no. For many reasons, starting with the fact that we're building autonomy into them.

I've already discussed on the channel how OpenAI are working on autonomy, and here's Google DeepMind recruiting for the same purpose. And what kind of tasks are we talking about? What about commissioning a product from a factory? The purpose of the modern Turing test is to measure what an AI can do, not what it can say.

Capabilities: the practical creation of real things in the real world, or the use of digital tools. Invent a new product from scratch, get it manufactured by commissioning it in a factory, negotiate over the blueprints of the product, actually get it drop-shipped, and sell the new product. His company, Inflection AI, want their Pi (it stands for personal intelligence) to be your digital chief of staff: booking flights, bargaining with other AIs, and maybe even making a million dollars.

This is particularly interesting to me, because Suleyman says they are not working on autonomy. In his words: "I do. I think that we may be working on those capabilities, but they won't necessarily represent an existential threat. I think what I'm saying is they indicate the beginning of a trajectory towards a greater threat, right?

At Inflection, we're actually not working on either of those capabilities, recursive self-improvement and autonomy. And I've chosen a product direction which I think can enable us to be extremely successful without needing to work on that. I mean, we're not an AGI company. We're not trying to build a super intelligence."

But aside from autonomy, what else might AGI or super intelligence entail? Well, how about matching or exceeding the problem-solving ability of the best human mathematicians within a decade? Wait, scrap that: 2026. Terence Tao, who I believe is recorded as having the highest IQ around, said this: "When integrated with tools, I expect, say, a 2026-level AI, when used properly, will be a trustworthy co-author in mathematical research.

And, in many other fields as well." What else? Well, how about self-improving capabilities? This is from "The AI Power Paradox" in Foreign Affairs: "Soon, AI developers will likely succeed in creating systems with self-improving capabilities, a critical juncture in the trajectory of this technology that should give everyone pause." OpenAI agrees, saying it's possible that AGI capable enough to accelerate its own progress could cause major changes to happen surprisingly quickly.

And back to Suleyman in Foreign Affairs: with each new order of magnitude, unexpected capabilities emerge. Few predicted that training on raw text would enable large language models to produce coherent, novel, and even creative sentences.

Fewer still expected language models to be able to compose music or solve scientific problems, as some now can. An order of magnitude, don't forget, means being 10 times bigger. But how about 1,000 times bigger than GPT-4 in three years? "Right, so if everybody gets that power, that starts to look like, you know, individuals having the power of organisations or even states.

I'm talking about models that are like two or three orders of magnitude, maybe four orders of magnitude, on from where we are. And we're not far away from that. We're going to be training models that are 1,000x larger than they currently are in the next three years. Even at Inflection, with the compute that we have, we'll be 100x larger than the current frontier models in the next 18 months."
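Just to make those numbers concrete, here is a minimal sketch of the arithmetic, purely as my own illustration: the growth figures are the speaker's claims, not measured data, and the implied doubling time below assumes the scaling happens smoothly.

```python
# Illustration only: the "orders of magnitude" arithmetic from the quotes above.
# The growth figures are the speaker's claims, and the implied doubling time
# assumes perfectly smooth scaling.
import math

def orders_of_magnitude(n: int) -> int:
    """n orders of magnitude means a factor of 10**n."""
    return 10 ** n

# Suleyman's claims, relative to today's frontier models:
claims = {
    "2 orders of magnitude": orders_of_magnitude(2),                    # 100x
    "3 orders of magnitude (within ~3 years)": orders_of_magnitude(3),  # 1,000x
    "4 orders of magnitude": orders_of_magnitude(4),                    # 10,000x
}

for label, factor in claims.items():
    print(f"{label}: {factor:,}x today's training compute")

# "100x larger in the next 18 months" implies compute doubling roughly
# every 2.7 months, since 2 ** (18 / 2.7) is about 100.
doubling_months = 18 / math.log2(100)
print(f"Implied doubling time for 100x in 18 months: {doubling_months:.1f} months")
```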

You can start to see why AGI means so much more than just a chatbot. And here's another example, this time from Suleyman's book, released two days ago. He talks about the WannaCry ransomware attack that caused billions of dollars of damage back in 2017.

And he said, "Imagine WannaCry being able to patch its own vulnerabilities, learning to detect and stop further attempts to shut it down. A weapon like this is on the horizon, if not already in development." Let's move on, though, from Suleyman to Demis Hassabis, head of Google DeepMind, and this article yesterday in Time magazine, which was fascinating.

Apparently, Hassabis warned Elon Musk about the risks from artificial intelligence back in 2012, saying that machines could become super intelligent and surpass us mere mortals. Maybe that's why he's glad that ChatGPT moved the Overton window, the window of what it's permissible to discuss in public. In 2012, it was probably embarrassing to talk about artificial intelligence, let alone AGI.

We also learned that Musk tried to stop DeepMind being sold to Google. Musk put together financing to block the deal and had an hour-long Skype call with Hassabis, saying the future of AI should not be controlled by Larry. That's Larry Page, one of the co-founders of Google. Obviously that attempt didn't succeed, and we are soon set to see Gemini from Google DeepMind.

That's their rival to GPT-4. I've got a whole video series on it, so do check that out. But the revelation from today was that Hassabis said this: "Gemini represents a combination of scaling and innovation. It's very promising early results." And, as a Londoner like me, Hassabis is prone to understatement.

So a British person saying "very promising early results" means it might shock a few people. Now, do you remember from earlier in the video where I quoted Altman back in 2015 saying, "AI is about making humans better, not a single entity that is a million times more powerful"? Well, for Musk, who remember co-founded OpenAI, that apparently pointed to the answer.

His own idea was to tie the bots closer to humans, making them an extension of the will of individuals, rather than systems that could go rogue and develop their own goals and intentions. Another idea of Musk's was to build a sustainable human colony on Mars before anything went wrong.

Speaking of going wrong, we also have this from Demis Hassabis in Time magazine. He was asked, "Are there any capabilities that, if Gemini (remember, that's their version of GPT-4 or 5) exhibited them in your testing phase, you'd decide, 'No, we cannot release this'?" He said, "Yeah, I mean, it's probably several generations down the line." And anyone who's watched my SmartGPT series will know that I agree with the next part.

I think the most pressing thing that needs to happen in the research area of AI is to come up with the right evaluation benchmarks for capabilities. Because we'd all love a set of maybe even hundreds of tests where, if your system passed them, it could get a kitemark and you'd say, "Right, this is safe to deploy." The problem is, we don't have those types of benchmarks currently.

For example, is this system capable of deception? Can it replicate itself across data centers? This is the kind of level we're talking about. These are the sorts of things you might want to test for. But we need practical, pragmatic tests for them. I think that's the most pressing thing for the field to do as a whole.
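To make that a bit more concrete, here's a toy sketch of the kind of release gate being described, entirely hypothetical on my part: the test names, scores, and thresholds below are invented for illustration and don't correspond to any real benchmark or API.

```python
# Hypothetical sketch of a "kitemark"-style release gate of the kind Hassabis
# describes. Every eval name, score, and threshold here is invented for
# illustration; no real benchmark suite or API is implied.
from dataclasses import dataclass
from typing import Callable, Dict

@dataclass
class CapabilityEval:
    name: str
    run: Callable[[], float]   # returns a risk score between 0 and 1
    max_allowed: float         # release threshold for this capability

def release_gate(suite: Dict[str, CapabilityEval]) -> bool:
    """Return True only if every capability eval stays under its threshold."""
    for ev in suite.values():
        score = ev.run()
        print(f"{ev.name}: score={score:.2f} (max allowed {ev.max_allowed:.2f})")
        if score > ev.max_allowed:
            return False
    return True

# Stand-in evals; a real suite would be hundreds of carefully designed tests.
suite = {
    "deception": CapabilityEval("deception", lambda: 0.10, 0.20),
    "self_replication": CapabilityEval("self-replication across data centres", lambda: 0.05, 0.10),
}

if release_gate(suite):
    print("All evals passed: candidate for release.")
else:
    print("Failed an eval: hold back and harden in a sandbox.")
```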

Our current evals and benchmarks are just not up to the task. But apparently, we have plenty of time to do this. At least, according to Suleyman, who, don't forget, was a co-founder of DeepMind along with Hassabis. Many of us more or less expected, or were more or less sure, that there would be a breakthrough at some company like DeepMind, where the people building the technology would recognize that they had finally gotten into the end zone, or close enough to it, that they were now in the presence of something fundamentally different from anything that had come before.

And there'd be this question: what's the best way to do this? Is it safe to work with, safe to create an API for? The idea was that you'd have this digital oracle in a box, already air-gapped from the internet and incapable of doing anything until we let it out.

And then the question would be, "Have we done enough safety testing to let it out?" But now it's pretty clear that everything is already more or less out, and we're building our most powerful models already in the wild. There are open-source versions of the next-best models. And is containment even a dream at this point?

It's definitely not too late. We're a long, long way away. This is really just the beginning. We have plenty of time to address this. The more that these models and these ideas happen in the open, the more they can be scrutinized, pressure-tested, and held accountable.

So I think it's great that they're happening in open source at the moment. We have to be humble about the practical reality of how these things emerge. The initial framing, that it was going to be possible to invent this oracle AI that stays in a box, that we'd just probe it and poke it and test it until we could prove it was going to be safe, that it would stay in the bunker and we'd keep it hidden from everybody: I mean, this is complete nonsense, and it's attached to the super intelligence framing. It was just a completely wrong metaphor.

He might want to tell that to Hassabis, who said this recently: "If a system didn't pass the evals, that means you wouldn't release it until you sorted that out." And perhaps you would do that in something like a hardened simulator or hardened sandbox, with cybersecurity things around it.

So these are the types of ideas we have, but they need to be made a little bit more concrete. I think that's the most pressing thing to be done, in time for those types of systems when they arrive. And here's the key moment: because I think we've got a couple of years, probably, or more.

That's not actually a lot of time if you think about the research that has to be done. I think I am much closer to Hassabis than Suleyman on this one. Personally, I'd love a button you could click at the exact moment you want AI to stop. For me, that would be after it cures Alzheimer's, but before it creates rabies 2.0.

It would be after it solves mathematics, but before it gets too good at warfare. Maybe such a button could be devised at Bletchley Park in November, the place where the Enigma code was cracked in World War II. They are being advised by some of the top minds in AI, so there is a chance.

And apparently they want 10 times as many people to join them in the next few weeks. But anyway, at the very least, I hope I've persuaded you that AGI is going to mean a lot more than just clever chatbots. Thank you so much for watching to the end and have a wonderful day.