
$125B for Superintelligence? 3 Models Coming, Sutskever's Secret SSI, & Data Centers (in space)...


Chapters

0:00 Intro
1:06 SSI, Safe Superintelligence (Sutskever)
3:45 Grok-3 (Colossus) + Altman Concerned
5:36 CharacterAI + Foundation Models
6:26 $125B Supercomputers + 5-10GW
8:28 ‘GPT-6’ Scale
9:07 Zuckerberg on Exponentials and Doubt
9:42 Strawberry/Orion + Connections + Weights
11:39 Data Centers in Space (and the sea)
12:45 Distributed Training + SemiAnalysis Report w/ Gemini 2
17:34 Climate Change Pledges?


00:00:00.000 | "Superintelligence just got valued at $5 billion."
00:00:04.720 | Or should I say, "Safe superintelligence
00:00:06.840 | "led by none other than the reclusive Ilya Sutskever
00:00:11.000 | "just got valued at $5 billion,"
00:00:13.200 | with this detail-free tweet in just the last few hours.
00:00:17.560 | And today, we also got more news of Gemini 2, Grok 3,
00:00:22.120 | and not just one, but two new $125 billion data centers.
00:00:29.440 | All this news seems so disparate, right?
00:00:32.480 | But there is one theme through it all,
00:00:35.000 | which is computing power.
00:00:37.140 | Even to the point of making data centers in space,
00:00:40.280 | we as a species are making a giant bet
00:00:43.980 | that scaling up language models
00:00:46.240 | will unlock true artificial intelligence.
00:00:49.900 | If the scaling hypothesis believers are right,
00:00:53.380 | it's coming soon.
00:00:54.740 | If they're wrong, this could all be viewed
00:00:57.640 | as the biggest waste of resources in human history.
00:01:01.520 | But let's start with the news
00:01:03.160 | from just a couple of hours ago
00:01:05.160 | that Ilya Sutskever has raised $1 billion
00:01:09.160 | at a $5 billion valuation
00:01:11.800 | for his startup, Safe Superintelligence.
00:01:14.620 | There he is, by the way, in the middle.
00:01:16.520 | He's definitely still alive and working on AI.
00:01:19.960 | If you haven't heard of Safe Superintelligence,
00:01:22.560 | don't worry, it's actually only three months old,
00:01:25.720 | but as mentioned earlier, valued at $5 billion.
00:01:30.520 | If only my three-month-old side hustles
00:01:32.600 | got valued at $5 billion,
00:01:34.400 | the world would be a happy place.
00:01:36.320 | But no, more seriously,
00:01:37.400 | what will those $1 billion in funds be used for?
00:01:40.960 | Well, it's that key theme you'll see throughout this video
00:01:44.080 | and probably throughout the next five years.
00:01:46.060 | The funds will be used to acquire computing power.
00:01:49.760 | Ilya Sutskever, by the way,
00:01:50.840 | sent a link to this Reuters article,
00:01:52.920 | so we can pretty much trust it's spot on with its details.
00:01:56.320 | The salient thing, though,
00:01:57.400 | about the company, Safe Superintelligence,
00:01:59.520 | is just how little detail they're giving out
00:02:01.980 | about what they're working on.
00:02:03.160 | Essentially, though, the key pitch is this.
00:02:05.960 | Give us a couple of years,
00:02:07.880 | and then we're going to attempt in one shot
00:02:10.780 | to bring you superintelligence.
00:02:12.840 | By the way, they're gonna do that with a team
00:02:15.040 | that's currently just 10 employees strong.
00:02:18.160 | But before you dismiss them immediately,
00:02:19.920 | they are backed by some heavy hitters
00:02:22.480 | like Sequoia Capital and Daniel Gross, who is a co-founder.
00:02:26.560 | Up to this point, though, Ilya Sutskever,
00:02:28.560 | who is clearly the key person in this venture,
00:02:31.320 | hasn't given any real detail about his approach,
00:02:34.600 | but he did sprinkle some hints into this article.
00:02:37.920 | Sutskever said that he will approach scaling
00:02:39.800 | in a different way to his former employer,
00:02:42.160 | which would be OpenAI.
00:02:43.520 | And he said, "Everyone just says scaling hypothesis.
00:02:46.500 | Everyone neglects to ask, 'What are we scaling?'"
00:02:49.740 | But he went on, "Some people can work really long hours,
00:02:52.720 | and they'll just go down the same path faster.
00:02:54.920 | It's not so much our style,
00:02:56.680 | but if you do something different,
00:02:58.600 | then it becomes possible for you to do something special."
00:03:01.440 | Before people get completely carried away, though,
00:03:03.560 | I do want to add a little bit of context
00:03:05.800 | to some of the claims that Sutskever has made before.
00:03:09.000 | He co-led the superalignment team at OpenAI,
00:03:13.120 | which just over a year ago set themselves the deadline
00:03:16.760 | of aligning or making safe superintelligence
00:03:19.740 | within four years.
00:03:21.100 | Now, I'm not against crazy ambition,
00:03:23.060 | and alignment is important,
00:03:24.740 | but what actual progress has been made
00:03:26.940 | in that year and a bit?
00:03:28.300 | Yes, I have read those blog posts put out
00:03:30.060 | by the former members of the team,
00:03:32.540 | but it doesn't strike me as being a quarter of the way
00:03:36.220 | to aligning a superintelligence.
00:03:38.940 | On the grounded to fanciful scale,
00:03:41.580 | it is definitely leaning toward the latter.
00:03:44.300 | Now, naturally, those weren't the only grandiose visions
00:03:48.040 | announced in the last 48 hours.
00:03:50.680 | Here is Musk two days ago,
00:03:52.520 | claiming to have the most powerful AI training system
00:03:55.480 | in the world.
00:03:56.320 | He mentions it soon having around 200,000 H100 equivalents,
00:04:01.320 | which are the GPUs that go into training
00:04:05.840 | large language models.
00:04:05.840 | Now, your first thoughts might be
00:04:07.440 | that that's either an idle boast
00:04:09.640 | or that it's not really the computing power that matters,
00:04:12.860 | it's how you use it.
00:04:13.960 | But I do give comments like that,
00:04:15.860 | and this one from July,
00:04:17.400 | more credence because of the capabilities of Grok 2.
00:04:20.920 | Grok 2, the frontier model produced by Musk's xAI team,
00:04:24.920 | is genuinely a GPT-4 level competitor.
00:04:27.940 | So it's worth paying attention at least
00:04:29.920 | to when he says that they are gonna train
00:04:32.560 | the most powerful AI by every metric
00:04:35.240 | by December of this year.
00:04:36.920 | And I will give one further hint
00:04:38.780 | that that claim shouldn't be immediately dismissed.
00:04:41.640 | And that's from this report yesterday in The Information.
00:04:45.680 | Now, first, it did caveat that that 100,000-chip cluster,
00:04:49.400 | known as Colossus, isn't fully operational.
00:04:52.160 | Apparently, fewer than half of those chips
00:04:54.440 | are currently in operation,
00:04:56.040 | largely because of constraints
00:04:57.240 | involving power or networking gear,
00:04:59.160 | and more about power constraints in a moment.
00:05:01.240 | But according to The Information,
00:05:02.920 | OpenAI CEO Sam Altman has told some Microsoft executives
00:05:06.760 | that he is concerned that Musk's xAI
00:05:09.640 | could soon have more access to computing power
00:05:12.420 | than OpenAI does.
00:05:13.840 | And remember, OpenAI has access
00:05:15.980 | to the behemoth, Microsoft's compute power.
00:05:18.900 | It's at this point, though,
00:05:19.940 | that you might be starting to wonder something.
00:05:22.540 | Is it all just about computing power?
00:05:24.740 | Isn't there supposed to be some secret sauce
00:05:26.740 | at OpenAI or Google?
00:05:28.660 | Is it really just about raw computing power?
00:05:31.320 | Can we buy our way to superintelligence?
00:05:34.620 | Well, it's not for want of trying.
00:05:36.240 | There have been plenty of teams
00:05:37.820 | that have tried to build their own foundation models,
00:05:40.260 | only to realize that the key ingredient
00:05:43.140 | is scale, computing power.
00:05:44.940 | Character AI even built up a loyal fan base
00:05:48.140 | and had some stars of the industry,
00:05:50.700 | but couldn't make their own foundation models work.
00:05:53.140 | You may also recall efforts by AdeptAI and Inflection,
00:05:56.920 | which produced the Pi chatbot.
00:05:59.060 | The key personnel from those teams
00:06:01.100 | were snapped up by the likes of Google and Microsoft.
00:06:04.660 | In short, people are trying things as alternatives
00:06:07.860 | to brute force scaling,
00:06:09.660 | but not that much is working currently.
00:06:12.300 | Sure, you can eke out compute efficiencies
00:06:14.420 | and optimizations like GPT-4o,
00:06:16.900 | the Orca series of models and the Phi family of models,
00:06:19.660 | but nothing beats scaling.
00:06:21.380 | And that might be why companies are betting everything
00:06:25.740 | on colossal new data centers.
00:06:28.420 | We're talking levels of investment
00:06:30.140 | at the scale that could fund the research
00:06:32.780 | to cure entire diseases,
00:06:34.660 | or perhaps fund the national budgets
00:06:37.060 | of medium-sized countries.
00:06:38.660 | And you might've thought from the title of this video
00:06:41.020 | that there's a singular $125 billion supercomputer,
00:06:44.620 | but there are actually two being planned.
00:06:46.460 | I should, of course, add the caveat
00:06:48.100 | that it's according to The Information
00:06:50.140 | via officials that would know about such investments.
00:06:53.620 | Namely, the source is the Commissioner of Commerce,
00:06:56.700 | Josh Teigen, who said that two separate companies
00:07:00.220 | approached him and the governor of North Dakota
00:07:02.580 | about building mega AI data centers.
00:07:05.340 | These would initially consume around 500 megawatts
00:07:09.140 | to one gigawatt of power,
00:07:11.380 | with plans to scale up to five or 10 gigawatts of power
00:07:15.300 | over several years.
00:07:16.420 | Those numbers, of course, to most of you,
00:07:18.500 | will be complete gobbledygook.
00:07:20.380 | So for context, here is an excellent diagram from Epoch AI.
00:07:24.620 | Five gigawatts of power allocated to a single training run
00:07:28.100 | would put the power constraint just above this line here.
00:07:31.460 | Now, given that it's expected that these other constraints
00:07:34.740 | would kick in at higher levels,
00:07:36.580 | that would give us just over 10,000 times more compute
00:07:40.540 | available compared to that which was used
00:07:42.820 | for training GPT-4.
00:07:44.500 | Now, given the broad approximate deltas
00:07:47.460 | between generations of GPTs,
00:07:49.740 | that will be the equivalent of a GPT-6 training run.
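To make that power-to-compute arithmetic concrete, here is a rough back-of-envelope sketch. Every constant below is an illustrative assumption of mine (GPU power draw, overhead, utilization, and the widely cited but unconfirmed GPT-4 compute estimate), not a figure from Epoch AI or the video:

```python
# Back-of-envelope: how much training compute might a 5 GW site buy?
# All constants are illustrative assumptions, not sourced figures.

H100_POWER_W = 700        # assumed per-GPU board power (H100 SXM TDP)
OVERHEAD = 1.5            # assumed factor for cooling, networking, CPUs
H100_PEAK_FLOPS = 1e15    # ~1 petaFLOP/s dense BF16, rounded assumption
UTILIZATION = 0.4         # assumed model FLOPs utilization (MFU)
GPT4_TRAIN_FLOPS = 2e25   # common external estimate, not confirmed

site_power_w = 5e9        # 5 gigawatts
gpus = site_power_w / (H100_POWER_W * OVERHEAD)
cluster_flops_per_s = gpus * H100_PEAK_FLOPS * UTILIZATION

train_days = 100          # assumed length of one training run
total_flops = cluster_flops_per_s * train_days * 86_400

print(f"GPUs powered: {gpus:,.0f}")
print(f"Compute vs GPT-4: {total_flops / GPT4_TRAIN_FLOPS:,.0f}x")
```

Different assumptions about utilization, run length, and future chip efficiency move the multiplier by orders of magnitude, which is exactly why estimates like "10,000x GPT-4" carry wide error bars.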
00:07:53.540 | Now, yes, I know that there are quote leaks like this one
00:07:57.060 | showing that GPT-5 might have
00:07:59.420 | between three and five trillion parameters,
00:08:02.220 | but my adage for such leaks is don't trust and verify.
00:08:06.300 | And also the number of parameters that goes into a model
00:08:10.020 | or the number of tweakable knobs, if you like,
00:08:12.700 | doesn't tell you automatically
00:08:14.540 | how much compute is used to train the model.
00:08:16.820 | Data is also a massive factor there.
00:08:19.580 | Chinchilla scaling laws have long since been left behind
00:08:22.860 | and we are massively ramping up the amount of data
00:08:26.420 | for a given number of parameters.
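As background, a common rule of thumb (not stated in the video) approximates dense-transformer training compute as roughly 6 × parameters × tokens, which is why parameter count alone doesn't pin down compute. The sizes and token counts below are purely illustrative:

```python
# Rule-of-thumb training compute: C ≈ 6 * N * D
# (N = parameters, D = training tokens). Figures are illustrative.

def train_flops(params: float, tokens: float) -> float:
    """Approximate total training FLOPs for a dense transformer."""
    return 6 * params * tokens

# Same parameter count, very different compute once you scale the data:
chinchilla_style = train_flops(70e9, 1.4e12)  # ~20 tokens per parameter
data_heavy = train_flops(70e9, 15e12)         # ~200+ tokens per parameter

print(f"{data_heavy / chinchilla_style:.1f}x more compute at the same size")
```

That ratio is just the ratio of token counts, 15 / 1.4 ≈ 10.7x, illustrating the point in the transcript: ramping up data multiplies compute even when parameters stay fixed.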
00:08:28.300 | But before we all get too lost in the numbers,
00:08:30.660 | what am I actually saying here?
00:08:32.100 | I'm saying that with the amount of money that's being spent
00:08:35.220 | and the amount of power that's being provisioned,
00:08:37.740 | people are factoring in models up to the scale
00:08:41.420 | of something like GPT-6.
00:08:43.340 | By the way, if you're skeptical
00:08:44.980 | that any progress has been made,
00:08:46.900 | compare the performance of the original ChatGPT
00:08:49.860 | in November of 2022 with Claude 3.5 Sonnet.
00:08:53.820 | It's pretty night and day.
00:08:55.180 | Claude 5.5 Sonnet would be quite interesting to behold.
00:09:00.180 | Now, just to emphasize how much this scaling is a bet
00:09:04.140 | rather than a certainty
00:09:05.780 | in terms of the outcome it will produce,
00:09:08.060 | here again is Mark Zuckerberg.
00:09:09.860 | - It's one of the trickiest things in the world
00:09:11.940 | to plan around is when you have an exponential curve,
00:09:14.500 | how long does it keep going for?
00:09:16.620 | And I think it's likely enough that it will keep going,
00:09:21.300 | that it is worth investing the tens or 100 billion plus
00:09:26.300 | in building the infrastructure
00:09:28.500 | to assume that if that kind of keeps going,
00:09:30.740 | you're going to get some really amazing things
00:09:33.020 | that are just going to make amazing products.
00:09:35.260 | But I don't think anyone in the industry can really tell you
00:09:38.980 | that it will continue scaling at that rate for sure.
00:09:42.020 | - And you may have noticed that I've barely mentioned
00:09:43.900 | OpenAI's successor language models and new verifier approaches.
00:09:48.260 | Those approaches previously labeled Q* or Strawberry
00:09:51.900 | throw in a bit of an X factor over the coming months.
00:09:55.060 | According to this article from again, the information,
00:09:58.300 | OpenAI want to launch Strawberry,
00:10:00.380 | which was previously called Q*
00:10:02.140 | and check out my video on that
00:10:03.420 | for what I think that might be.
00:10:05.340 | They want to launch that within ChatGPT
00:10:07.380 | as soon as this fall.
00:10:09.060 | Interestingly, the only hint they gave of its capabilities
00:10:12.660 | was that it could solve
00:10:13.860 | a New York Times connections word puzzle.
00:10:16.420 | Now, since I read this article,
00:10:17.980 | I have been trying plenty
00:10:19.620 | of those New York Times connections puzzles.
00:10:21.980 | You've got to create four groups of four words
00:10:25.140 | that form a kind of logical set.
00:10:27.220 | Here though is the interesting part.
00:10:28.660 | If you feed in these puzzles to GPT-4o as text
00:10:32.060 | or as an image,
00:10:33.300 | it usually can get one or two sets of four words.
00:10:37.980 | But then what will happen is it will get stuck.
00:10:40.620 | And even if you prompt it to try different arrangements
00:10:43.860 | of the remaining words,
00:10:45.500 | it'll still predict the same things again and again.
00:10:48.300 | So at the very least,
00:10:49.700 | OpenAI must have pioneered a method
00:10:52.140 | to get language models out of their local minima
00:10:55.060 | to get them to try different things
00:10:57.220 | instead of getting stuck in a rut.
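To make the "stuck in a rut" failure concrete, here is a toy sketch of the general idea only; it is my own illustration, not OpenAI's actual method. The fix it demonstrates is simply tracking previous guesses and only proposing groupings that haven't been tried:

```python
import itertools
import random

def propose_group(words, previous_guesses, seed=0):
    """Propose a 4-word group that hasn't already been tried.

    A model stuck in a local minimum keeps emitting the same group;
    filtering candidates against its own history forces exploration.
    """
    tried = {frozenset(g) for g in previous_guesses}
    candidates = [c for c in itertools.combinations(sorted(words), 4)
                  if frozenset(c) not in tried]
    return random.Random(seed).choice(candidates) if candidates else None

remaining = ["bass", "pike", "sole", "carp", "ruler", "king", "czar", "khan"]
history = [("bass", "pike", "sole", "carp")]  # the guess it keeps repeating
new_guess = propose_group(remaining, history)
print(new_guess)
```

Whatever OpenAI actually did is surely more sophisticated than brute-force enumeration, but the observable behavior, not repeating the same failed grouping, is what this sketch captures.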
00:10:58.660 | How that plays out though,
00:10:59.700 | in terms of true reasoning capability,
00:11:01.780 | I'm gonna wait to test it on SimpleBench to find out.
00:11:04.900 | Speaking of experiments though,
00:11:06.300 | let me quickly introduce you to Weave
00:11:08.900 | from the legendary Weights & Biases,
00:11:11.620 | the sponsors of this video.
00:11:13.700 | Proper evaluations of language models are absolutely crucial
00:11:17.660 | as is clearly visualizing the differences between them.
00:11:20.980 | You'd also ideally want your toolkit to be lightweight
00:11:23.740 | so you could confidently and quickly iterate
00:11:26.700 | on your LLM applications.
00:11:28.460 | So in addition to their free courses and guides,
00:11:31.580 | do check out Weave using the link you can see on screen,
00:11:34.860 | which will also of course be in the description.
00:11:37.340 | Now though, for what some of you have been waiting for,
00:11:39.660 | the news we got yesterday
00:11:41.300 | that one startup is attempting to build data centers
00:11:44.540 | in space.
00:11:45.460 | Such is the need for reliable energy
00:11:48.180 | to power these data centers.
00:11:50.020 | We are resorting to putting the data centers into space.
00:11:53.980 | This company, Lumen Orbit, is a Y Combinator startup
00:11:57.740 | and they are aiming for gigawatt scale.
00:12:00.860 | Their promo video even mentions
00:12:02.460 | a possible five gigawatt data center,
00:12:04.980 | which again, if dedicated to pre-training,
00:12:07.620 | would allow a GPT-6 style model.
00:12:10.460 | But I must add in a quick caveat
00:12:12.620 | before we all go wild about data centers in space.
00:12:16.220 | Things like this have been tried before.
00:12:18.700 | Microsoft tried to build data centers underwater.
00:12:22.580 | The idea was that the sea could help cool the data center
00:12:27.580 | and save on costs.
00:12:29.340 | And even though it was described as largely a success,
00:12:33.060 | apparently it didn't make sense
00:12:34.620 | from an operational or practical perspective.
00:12:37.020 | The cost of maintenance, among other things,
00:12:39.500 | was simply prohibitive.
00:12:41.340 | Now, I don't know about you,
00:12:42.180 | but it strikes me that the cost of maintaining things
00:12:44.300 | in space might be even more.
00:12:46.500 | But as you may have already deduced,
00:12:48.380 | that's not exactly gonna stop us reaching GPT-6 scale models.
00:12:52.880 | Why not?
00:12:53.720 | Well, we do have the option of geographically distributing
00:12:57.700 | the computers used to train the models.
00:13:00.540 | In fact, according to some sources,
00:13:02.260 | Microsoft found they more or less had to do that.
00:13:05.060 | Apparently one Microsoft engineer
00:13:06.820 | on a GPT-6 training cluster project was asked,
00:13:10.740 | "Why not just co-locate the cluster in one region?"
00:13:13.900 | Well, they tried that, he said,
00:13:15.620 | "But we can't put more than 100,000 H100s,"
00:13:19.260 | that's roughly the size of that Colossus project
00:13:21.420 | that we heard earlier from Elon Musk,
00:13:23.020 | "in a single state without bringing down the power grid."
00:13:26.420 | As we saw from that Epoch analysis,
00:13:29.140 | it's the power that's the constraining factor.
00:13:32.100 | And also possibly water, but more on that in a future video.
00:13:35.620 | But if we distribute the training,
00:13:37.420 | then the clusters don't all have to be in the same place,
00:13:40.140 | so it reduces that local power drain.
00:13:42.980 | And that approach of distributed training
00:13:45.900 | to cut a long story short is where we seem to be heading.
00:13:49.820 | According to a report out just today from SemiAnalysis,
00:13:53.540 | Google, OpenAI, and Anthropic are already executing plans
00:13:57.860 | to expand their large model training
00:13:59.980 | from one site to multiple data center campuses.
00:14:03.260 | And we already know, by the way,
00:14:04.740 | that Gemini Ultra 1.0 was trained
00:14:07.980 | across multiple data centers, so it can be done.
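At the most basic level, "training across multiple data centers" still reduces to the same synchronization step as single-site data parallelism: each campus computes gradients on its shard of the batch, and an all-reduce averages them before every weight update. Here is a pure illustration; real systems hide the cross-site latency with far more sophisticated tricks:

```python
# Toy data-parallel step across several "sites" (data centers).
# Each site holds a replica of the weight and a shard of the batch;
# an all-reduce averages their gradients before the shared update.

def local_gradient(w, shard):
    # gradient of mean squared error for y = w * x on this shard
    return sum(2 * (w * x - y) * x for x, y in shard) / len(shard)

def all_reduce_mean(values):
    # stand-in for the cross-site collective (the expensive part!)
    return sum(values) / len(values)

sites = [
    [(1.0, 2.0), (2.0, 4.0)],  # shard at campus A
    [(3.0, 6.0), (4.0, 8.0)],  # shard at campus B
]
w, lr = 0.0, 0.01
for _ in range(200):
    grads = [local_gradient(w, shard) for shard in sites]  # parallel in reality
    w -= lr * all_reduce_mean(grads)

print(round(w, 3))  # converges toward the true slope, 2.0
```

The catch, and the reason this is hard at data-center scale, is that the all-reduce step must cross hundreds of miles of fiber instead of a rack-local switch, which is exactly the class of problem the labs have stopped publishing about.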
00:14:10.580 | And before we get to more details on that,
00:14:12.460 | there was this hidden gem in the third paragraph.
00:14:15.380 | Again, this article was from today, and it said,
00:14:17.860 | "Google's existing models lag behind OpenAI and Anthropic
00:14:21.380 | "because they are still catching up
00:14:22.780 | "in terms of synthetic data, reinforcement learning,
00:14:25.340 | "and model architecture.
00:14:26.420 | "But the impending release of Gemini 2 will change this."
00:14:30.020 | SemiAnalysis is a pretty reliable source,
00:14:32.580 | so that's an interesting comment.
00:14:34.060 | It seems like we will get Gemini 2 and Grok 3
00:14:37.580 | within a few months.
00:14:38.780 | And as we heard earlier, the Strawberry system
00:14:41.540 | from OpenAI in roughly the same timeframe.
00:14:44.260 | We will probably have to wait till next year
00:14:47.060 | for OpenAI's next flagship, though,
00:14:49.500 | which is codenamed Orion.
00:14:51.220 | But just for a moment,
00:14:52.060 | I want you to forget all the names and the fruit
00:14:55.180 | and focus on one key detail.
00:14:57.380 | If the key ingredient
00:14:59.180 | to the performance of language models is their scale,
00:15:02.580 | we should find out that fact by the end of this year.
00:15:06.300 | Or to put it another way,
00:15:07.580 | if scaling doesn't work up to the levels
00:15:09.980 | of Grok 3 and Gemini 2, then what else will?
00:15:13.820 | If the data centers are getting to the kind of scale
00:15:17.100 | where we need satellite pictures to assess how big they are
00:15:20.620 | and that doesn't produce true artificial intelligence,
00:15:23.940 | then, well, do we have to rely on Ilya Sutskever?
00:15:27.300 | Obviously, I'm being cheeky,
00:15:28.300 | but it's priced into the value, I think,
00:15:30.380 | of many of these companies.
00:15:31.660 | Just the possibility, at least,
00:15:33.580 | that scaling will yield superintelligence.
00:15:36.380 | So if it doesn't, you could expect a reflection of that
00:15:40.220 | in the form of a bubble bursting.
00:15:42.620 | Just quickly, I can't resist pointing out
00:15:44.220 | that if you're in America,
00:15:46.140 | then the fact that models will be increasingly interconnected
00:15:49.700 | across that continent will lead
00:15:51.540 | to a kind of interesting philosophical moment.
00:15:54.180 | Microsoft will quite literally be,
00:15:55.980 | in the famous words of their CEO,
00:15:57.780 | above you, around you, beneath you.
00:16:00.260 | Now, of course, it almost goes without saying
00:16:02.980 | that there will be immense hardware issues
00:16:06.100 | in getting this all set up and running smoothly.
00:16:08.940 | Billions of man hours worth of problems to be solved, for sure.
00:16:13.300 | And that's why companies, it seems, are clamming up
00:16:16.380 | about how they're solving these hardware issues.
00:16:19.620 | The publishing of methods has effectively stopped.
00:16:22.060 | When OpenAI and others tell the hardware industry
00:16:24.420 | about these issues, they are very vague and high level
00:16:27.180 | so as not to reveal any of their distributed systems tricks.
00:16:30.620 | To be clear, SemiAnalysis says these techniques
00:16:32.900 | are more important than model architecture,
00:16:35.780 | as both can be thought of as compute efficiency.
00:16:38.580 | Here, then, is the central claim from SemiAnalysis.
00:16:42.500 | There is a camp that feels AI capabilities have stagnated
00:16:46.540 | ever since GPT-4's release.
00:16:48.300 | I know many of you watching will feel that.
00:16:50.300 | This is generally true,
00:16:52.020 | but only because no one has been able
00:16:53.980 | to massively increase the amount of compute
00:16:56.380 | dedicated to a single model.
00:16:57.900 | The word only there is, of course, an opinion
00:17:00.780 | rather than an established fact.
00:17:02.980 | Some, of course, believe that no amount of scaling
00:17:05.460 | will yield true reasoning or intelligence.
00:17:08.420 | I have my thoughts, but honestly, I'm somewhat agnostic.
00:17:11.940 | I genuinely want to know how these future models
00:17:14.540 | perform on my SimpleBench.
00:17:16.460 | I go into a ton of detail about what I'm creating
00:17:19.500 | on my Patreon, which is called AI Insiders.
00:17:22.140 | Oh, and also just a couple of days ago,
00:17:24.540 | I released this video on that Epoch AI research.
00:17:28.340 | That reminds me, actually,
00:17:29.180 | there was one more thing from that research
00:17:31.260 | that I wanted to touch on in this video.
00:17:33.340 | It came about halfway through the 20,000 word report,
00:17:37.380 | and it's right here.
00:17:38.780 | I don't know why I picked it out.
00:17:39.980 | I just find it really quite poignant and interesting
00:17:42.740 | to see what these behemoth companies will do,
00:17:45.500 | because basically what they pledged,
00:17:47.860 | this is Google, Microsoft, and Amazon,
00:17:49.980 | to become carbon neutral by 2030.
00:17:52.580 | Now, what do you predict will happen
00:17:54.860 | if it turns out that the scaling hypothesis is true
00:17:58.060 | and that AI is immensely profitable,
00:18:00.540 | and yet it requires this immense power
00:18:02.820 | and that will break these targets?
00:18:04.820 | Will they stick to their honorable pledges?
00:18:07.380 | Well, we know what Sam Altman wants to do,
00:18:09.060 | which is spend in the order of trillions
00:18:11.660 | and finance dozens of new chip factories.
00:18:14.380 | This was from a separate information report,
00:18:16.780 | but there was one quote that I found interesting
00:18:18.820 | in relation to it.
00:18:19.740 | Sam Altman, according to the CEO of TSMC,
00:18:23.380 | which makes most of these chips
00:18:24.980 | for NVIDIA and everyone else,
00:18:27.140 | was, quote,
00:18:29.580 | "Too aggressive for me to believe."
00:18:31.580 | Remember, by the way,
00:18:32.420 | that pretty much all of this comes down
00:18:34.580 | to that Taiwanese company,
00:18:36.660 | which is why, by the way,
00:18:37.860 | the whole tech industry is so nervous
00:18:39.500 | about China invading Taiwan.
00:18:41.020 | Anyway, Sam Altman is, according to the TSMC CEO,
00:18:43.940 | "Too aggressive for him to believe."
00:18:46.380 | And maybe even these $125 billion data centers
00:18:49.420 | are also too aggressive.
00:18:50.940 | Only time will tell.
00:18:52.300 | It's indubitable that a mountain has been identified
00:18:57.100 | and that the AI industry is trying to climb it.
00:19:01.020 | Whether they will, or indeed,
00:19:02.820 | whether they're even heading in the right direction,
00:19:05.660 | only time will tell.
00:19:07.460 | Thank you as ever so much
00:19:09.580 | for watching all the way to the end.
00:19:11.340 | I'm super grateful and have a wonderful day.