
AGI Inches Closer - 5 Key Quotes: Altman, Huang and 'The Most Interesting Year'


Transcript

It has been an interesting few days for the pursuit of Artificial General Intelligence, so I wanted to give you some of the highlights, from GPT-4-point-something to recursively improving semiconductor manufacturing. We got at least five revealing quotes from Jensen Huang, Sam Altman and others. And the summary is this.

If you thought, or know someone who thinks, that artificial intelligence peaked with ChatGPT, you or they are going to have to weather exponential increases in computational power through at least the rest of this decade. As Sam Altman just said of 2024, and I agree, this is the most interesting year in human history, except for all future years.

So let's start with Sam Altman, who said that OpenAI's goal is to avoid shocking updates. Our goal is not to have shock updates to the world, and that's what we're trying to do. That's, like, our stated strategy. And I think we're somehow missing the mark. So maybe we should think about, you know, releasing GPT-5 in a different way, or something like that.

Yeah, 4.71, 4.72. And what does he mean by releasing iteratively, without shock updates? Well, probably something similar to another co-founder of OpenAI, Greg Brockman. He said that their plan for safety involved deploying GPT-5 in stages, essentially creating a continuum of incrementally better AIs, such as by deploying subsequent checkpoints of a given training run.

Think of that like saves on the way to completing a video game. In short, it's highly likely now that we will be getting something equivalent to GPT-4.5 before we get GPT-5. As to whether the marketing department comes up with a better name than GPT-4.5, well, let's see.

And on timing, he said this. Blink twice if it's this year. I also... We will release an amazing new model this year. I don't know what we'll call it. We'll release, over the coming months, many different things; I think they'll be very cool. I think, before we talk about, like, a GPT-5-like model, called that or not called that, or a little bit worse or a little bit better than what you'd expect from a GPT-5.

I know we have a lot of other important things to release. And don't forget that not all progress depends on new models. We can have new systems, like Let's Verify or Q*, based on existing models. Sam Altman practically confirmed the existence of Q* in this interview with Lex Fridman from yesterday.

Can you speak to what Q* is? We are not ready to talk about that. See, but an answer like that means there's something to talk about. It's very mysterious, Sam. I mean, we work on all kinds of research. Yeah. I've done an entire video gathering the best evidence as to what Q* is, so do check that one out.

But if you want the massively condensed TL;DR, it's this. Models can essentially think, or compute, for longer on harder questions, generating thousands of candidate answers, with internal systems for checking which answer is best, and only showing you that best answer. A System 2 to complement the base System 1 thinking, if you will.
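If that sounds abstract, here is a minimal, runnable sketch of that generate-then-verify idea in Python. To be clear, this is my own illustration, not OpenAI's unconfirmed Q* implementation: the generate and score functions are toy stand-ins, and this toy verifier even knows the true answer, whereas Let's Verify trains a reward model to judge reasoning steps.

```python
import random

def best_of_n(question, generate, score, n=16):
    """Sample n candidate answers, score each with a verifier,
    and surface only the best-scoring one to the user."""
    candidates = [generate(question) for _ in range(n)]
    return max(candidates, key=lambda answer: score(question, answer))

# Toy stand-ins: a noisy "model" attempting 17 * 24, and a
# "verifier" that prefers answers closer to a checker's result.
def generate(question):
    return 17 * 24 + random.randint(-5, 5)    # imperfect sampler

def score(question, answer):
    return -abs(answer - 17 * 24)             # higher score is better

print(best_of_n("What is 17 * 24?", generate, score, n=1))   # often wrong
print(best_of_n("What is 17 * 24?", generate, score, n=50))  # almost always 408
```

The point of the pattern: spending more compute at inference time (a bigger n) buys accuracy without retraining the underlying model.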

But before we leave that interview, there were two more fascinating quotes, at least from my perspective, that I want to show you. One involved the possible social response to ever-improving AI and the chances of it going theatrically wrong. I worry about that for AI. I think some things are going to go theatrically wrong with AI.

I don't know what the percent chance is that I eventually get shot, but it's not zero. I'll come back to social responses later in this video, but that was a startling moment. At the moment, at least, thankfully, the power struggle for AGI is only financial. Here is Demis Hassabis essentially laughing when an interviewer said that OpenAI was a non-profit.

A lot of AI labs have been grappling with governance and what is the best structure for something like AGI to emerge. You just mentioned the possibility of some sort of international collective or cooperative that would handle this. But, you know, across the industry, like OpenAI has set itself up as a non-profit with a for-profit subsidiary.

Anthropic is a public benefit corporation. So before we get to NVIDIA's GTC, let's linger for a moment on AGI, its definition, and recent updates to the timeline to AGI. Yesterday, Andrej Karpathy, who was until recently at the very top of OpenAI, said this about the definition of AGI. He thinks of it, as the OpenAI charter does, as an autonomous system that surpasses humans in most economically valuable tasks.

And is it me or does that definition not automatically foreshadow economic strife? In other words, definitionally, AGI won't have arrived until it can do the work of at least half of all humans. Now, every word matters when we're defining something as consequential as AGI. And Google DeepMind, led by Demis Hassabis, moderated OpenAI's definition.

They said we'll count it as having achieved AGI if we have systems that are technically capable of performing economically important tasks, but don't necessarily realize that economic value, as in they might not actually be deployed in the workforce for legal, ethical or social reasons. But imagine the economic incentives in that scenario.

AGI would be here and be capable of realizing trillions of dollars of economic value. And these companies are supposed to hold back from deploying it. Would Google allow that? Would Microsoft or would the definition change conveniently? But even under that wider definition, when does Demis Hassabis think that AGI will arrive?

I will say that when we started DeepMind back in 2010, you know, we thought of it as a 20 year project. And actually, I think we're on track, which is kind of amazing for 20 year projects, because usually they're always 20 years away. So that's the joke about, you know, whatever it is, quantum, AI, you know, take your pick.

But I think, you know, I think we're on track. So I wouldn't be surprised if we had AGI-like systems within the next decade. Others think that that moment, which again would have colossal economic ramifications, will come before 2030. Here's one alignment researcher at OpenAI. He thinks that there's around a two-thirds chance of AGI before 2028.

And he goes on that he can't talk about all the reasons why he has this timeline, but that mostly it should be figure-out-able from publicly available information. I'm guessing that's an oblique reference to Q* or Let's Verify. He also returns to the economic definition of AGI. When I say AGI, I mean something which is basically a drop-in substitute for a human remote worker circa 2023.

And not just a mediocre one, a good one, e.g. an OpenAI research engineer. Notice, though, he's focusing on remote work, and even Karpathy limits his comments to digital work. But as we'll see at the end of this video, even physical tasks might be automated sooner than you think. Before we leave Daniel Kokotajlo, though, there's one more quote of his I want to show you.

I think in this one, he's feeling somewhat panicked. Probably there will be AGI soon, literally any year now. And probably whoever controls AGI will be able to use it to get to artificial superintelligence shortly thereafter. He says maybe in another year, give or take a year. Now, if you do the maths on that comment, give or take a year means it could be near-instantaneous, or it could take two years to get from AGI to ASI, according to him.

At least according to Google DeepMind, an artificial superintelligence would involve outperforming 100% of humans, just as AlphaZero and Stockfish already do in their respective domains. And in light of these shortening timelines, some AI researchers are already adapting their behavior. One lead researcher at OpenAI said this. The closer we get to the singularity (that's the moment when progress is so fast humans can't even keep up),

the lower, he said, my risk tolerance gets. I'd already ruled out skydiving and paragliding. Last year, I started wearing a helmet consistently while cycling. And, he ended, I think this year might be the year I give up skiing. In other words, if you think AGI, ASI and the singularity are going to happen in the 2020s, it would be kind of a pity to die before that date, probably.

But at this point in the video, and I promise you I will get to the GTC straight after this, things are getting kind of heavy. So I want to bring in a paper I read that's on a lighter note. What the paper says, essentially, is that peer reviewers are now starting to use ChatGPT wholesale to do peer review.

How did they discover this? Well, mentions of the words commendable, innovative, meticulous, intricate, notable and versatile. Now, I'd like to think those are words that I use all the time, but maybe not everyone does. Previously, they were incredibly rare in peer reviews, but then they became somewhat common. Hmm. Makes you wonder.

They go on that the estimated fraction of large language model generated text is higher in reviews which report lower confidence. That kind of makes sense, right? If you're not confident, you're going to use an LLM to help you. But the next bit is funny. They were submitted close to the deadline.

So you have these panicked peer reviewers who are like, oh, no, the deadline's coming. Let's use ChatGPT to do it. And the other correlation was it was more common from reviewers who are less likely to respond to author rebuttals. Now, that seems somewhat unfair to me. You don't even bother to write the peer review yourself and you don't even reply when the author replies to you.

These were peer reviews for prominent deep learning conferences, and the estimated rates were 10% and almost 17%. And we're not talking about spell checks; we're talking about reviews substantially modified by ChatGPT. Obviously, now is not the time to go through this paper, but I thought it was worth showing you.
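As a rough illustration of the detection idea, here is a crude marker-word counter in Python. Note this is a simplification of my own: the paper's actual estimator fits a maximum-likelihood model over word frequencies at the corpus level, rather than flagging individual reviews.

```python
import re

# Adjectives the paper found spiking in post-ChatGPT peer reviews.
MARKERS = {"commendable", "innovative", "meticulous",
           "intricate", "notable", "versatile"}

def marker_rate(reviews):
    """Fraction of reviews containing at least one marker adjective."""
    hits = sum(1 for text in reviews
               if set(re.findall(r"[a-z]+", text.lower())) & MARKERS)
    return hits / len(reviews)

before = ["The method is sound, but the evaluation section is thin."]
after = ["This commendable, innovative approach is meticulous and intricate."]
print(marker_rate(before), marker_rate(after))  # 0.0 vs 1.0
```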

I mean, it's one more effect of AGI, right? The whole peer review system might become the AGI review system. So, NVIDIA's GTC conference from around 24 hours ago: obviously way too much to get to in this video, but I'm going to give you the five moments that stood out for me.

First, the obvious one: the announcement of the Blackwell GPU. Over the course of the last eight years, we've increased computation by 1,000 times. Eight years, 1,000 times. Remember back in the good old days of Moore's law: 10x every five years, 100x every 10 years. 100x every 10 years, in the middle of the heydays of the PC revolution.

Now, this graph does involve some hype, because it's not comparing the same level of precision (FP16 versus FP4). But the point still stands that we are exceeding Moore's law. Here's another example: the Blackwell Superchip system isn't just two times better at inference, that is, at actually generating tokens; it's 30 times more performant than the H100 series.
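Taking both growth figures at face value (and setting aside that precision caveat), a couple of lines of Python show how stark the gap is once you annualize and compound it:

```python
# "1,000x in eight years" versus Moore's-law-style "10x every five years",
# expressed as annual growth factors.
blackwell_era = 1000 ** (1 / 8)   # ~2.37x per year
moores_law    = 10 ** (1 / 5)     # ~1.58x per year
print(f"{blackwell_era:.2f}x vs {moores_law:.2f}x per year")

# Compounded over a decade: roughly 5,600x versus 100x.
print(f"{blackwell_era**10:,.0f}x vs {moores_law**10:,.0f}x per decade")
```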

In short, there's going to be a lot more generations from generative AI. Cost and energy consumption also drop by a major factor. And of course, almost every CEO in the world that you can think of lined up to praise, and get in the queue for, these Blackwell Superchips.

Next, of course, the model sizes that these systems can serve just keep getting bigger. Remember, GPT-3 was trained with 175 billion parameters, then GPT-4 with around 1.8 trillion. That's 10 times bigger. Well, notice how, as we proceed, we're not doubling or tripling; we're talking about another tenfold increase. NVIDIA said that their server clusters could deploy a 27 trillion parameter model.

Now, of course, just because NVIDIA can deploy that size of model doesn't mean that the AGI labs will create one that big. I think a more reasonable estimate for the next generation of models would be around 10 trillion parameters. But the point still stands, we're not doubling each time.

In case you're not familiar, by the way, the number of parameters is like the number of dials in a model that you can tune to better match deep, intricate patterns and patterns within patterns. Of course, those patterns have to be found within the data that you give it. So garbage in, garbage out.
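To make the "dials" intuition concrete, here is a deliberately tiny, hypothetical example in Python: a model with exactly one parameter, tuned by gradient descent to match a pattern in its data. A frontier model does the same thing with trillions of dials at once.

```python
# Noisy data following the pattern y ~= 3x ("the pattern in the data").
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]

w = 0.0                                        # the single dial, untuned
for step in range(200):
    # Gradient of mean squared error with respect to the dial.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad                           # nudge the dial toward the pattern

print(round(w, 2))  # ~3.04: the dial has found the pattern
```

And garbage in, garbage out: feed it data with no pattern, and the dial dutifully tunes itself to noise.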

But of course, everyone's working on getting higher quality data. The next interesting moment is one I think many people might have slept on. NVIDIA have built a platform that accelerates the compute-intensive part of lithography, the key process in making new and more advanced chips. And in this announcement, NVIDIA say that TSMC are already going into production with this platform.

They're going to be accelerating manufacturing and pushing the limits of physics for the next generation of advanced semiconductor chips. Not only that, but these 40x to 60x improvements also utilize generative AI. As best I can tell, they're using generative AI to create a mask, which is key in lithography.

Think of that mask as a template that transfers a pattern onto the chip. I'm reading the book Chip War at the moment, and lithography is absolutely key to the latest chips. But the bigger point is this: they're using generative AI for ideation and suggestions, but then the actual mask is derived by traditional, physically rigorous methods.

It's that marriage of generative AI to suggest and traditional systems to check that we'll see again in a moment. But there is another obvious point here: this is somewhat recursive. We have better chips creating better, cheaper, faster generative AI, and now, more and more, we're getting generative AI helping in the creation of new and better chips.
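NVIDIA haven't published the internals of this pipeline, so treat the following as a hypothetical, one-dimensional caricature of the suggest-then-check pattern: a cheap "generative" first guess, a rigorous "physics" simulation with the final word, and a traditional corrective step in between.

```python
def propose(target):             # stand-in for a generative model's draft
    return target                # naive first guess: mask equals target

def simulate(mask):              # stand-in for rigorous physics simulation
    return 0.8 * mask            # the process shrinks the printed feature

def refine(mask, printed, target):
    return mask + 0.5 * (target - printed)   # traditional corrective step

def design_mask(target, tol=1e-3):
    mask = propose(target)                    # generative AI suggests...
    while abs(simulate(mask) - target) > tol: # ...physics checks
        mask = refine(mask, simulate(mask), target)
    return mask                               # only verified masks ship

print(round(design_mask(10.0), 2))  # ~12.5: oversized mask prints a 10
```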

And of course, those new chips might be with us sooner. Photomasks that took two weeks can now be processed overnight. Here's what the CEO of ASML, the company that is, of course, integral to the creation of semiconductors, said: this collaboration will bring tremendous benefit to computational lithography and therefore to semiconductor scaling.

If you thought things were already progressing fast, it'll get even faster for the rest of this decade. One quick point to make here is that we're actually still lagging the front edge of what's computable by about two years. Most people, if they're not still using the original ChatGPT, are using GPT-4, which finished training two years ago, or Gemini 1 Ultra.

But I spotted recently, from Hassabis, that Gemini 1 Ultra actually used about the same compute as was rumored for GPT-4. That's 2022 compute levels. Actually, Gemini 1 used roughly the same amount of compute, maybe slightly more, than what was rumored for GPT-4. I don't know exactly what was used, so I think it was in the same ballpark.

In other words, the public hasn't begun to grasp what even 2023 levels of compute could do for training a language model. But there was, of course, one more announcement that I simply cannot ignore: Project GR00T. This is NVIDIA Project GR00T, a general purpose foundation model for humanoid robot learning.

The GR00T model takes multimodal instructions and past interactions as input and produces the next action for the robot to execute. We developed Isaac Lab, a robot learning application, to train GR00T on Omniverse Isaac Sim. We can train GR00T in physically based simulation and transfer zero-shot to the real world.

The GR00T model will enable a robot to learn from a handful of human demonstrations, so it can help with everyday tasks and emulate human movement just by observing us. This is made possible with NVIDIA's technologies that can understand humans from videos, train models in simulation, and ultimately deploy them directly to physical robots.

As Jensen Huang said, humanoid robots will at first just watch us and learn from imitation data. But embodied learning does have one advantage over large language models: robots can use reinforcement learning in simulation, trying tasks, seeing how they work out, and only then performing in the real world.
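To give a feel for that try-it-in-simulation loop, here is a minimal sketch using the open-source gymnasium package. GR00T's actual training stack (Isaac Lab on Isaac Sim) is of course vastly more sophisticated than this random search over a toy balancing task.

```python
import numpy as np
import gymnasium as gym

env = gym.make("CartPole-v1")   # the "simulator": no real robot at risk

def rollout(weights, episodes=3):
    """Average reward of a tiny linear policy, measured in simulation."""
    total = 0.0
    for _ in range(episodes):
        obs, _ = env.reset()
        done = False
        while not done:
            action = int(np.dot(weights, obs) > 0)   # policy: one dot product
            obs, reward, terminated, truncated, _ = env.step(action)
            total += reward
            done = terminated or truncated
    return total / episodes

# Try policies in simulation, keep whatever works best.
best_w, best_r = None, -1.0
for _ in range(100):
    w = np.random.uniform(-1, 1, size=4)
    r = rollout(w)
    if r > best_r:
        best_w, best_r = w, r

print(f"best simulated reward: {best_r:.0f}")
# Only a policy vetted here would be transferred to physical hardware.
```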

I actually discussed this with two leading figures at NVIDIA for AI Insiders on Patreon. But let me leave you with this thought. If you think these robot imitations look kind of cute and rubbish at the moment, think about GPT-2, or maybe the first version of Bard that you interacted with.

And now think of GPT-4 or Claude 3. In those cases, they were learning to imitate human text; in this case, it will be human actions. But the lesson is the same: these models can improve much faster than you might think. And don't forget, with all of this, as yet another OpenAI employee put it: hope you enjoyed some time to relax.

It'll have been the slowest 12 months of AI progress for quite some time to come. Hopefully you'll join me as I cover that progress in the coming months. Thank you as always for watching to the end and have a wonderful day.