
AI CEO: ‘Stock Crash Could Stop AI Progress’, Llama 4 Anti-climax + ‘Superintelligence in 2027’ ...


Chapters

0:00 Introduction
0:47 Stock Crash
2:28 Llama 4
10:55 o3 News
11:59 OpenAI non-profit?
13:13 AI 2027


00:00:00.000 | Every day it seems to me at the moment there are crazy claims and headlines not just in AI
00:00:06.340 | but in the wider world. So this video is going to attempt to debunk a few of those headlines
00:00:12.240 | and just give you what we know. I'm going to look at Llama 4, the model that has been a year in the
00:00:18.140 | waiting and has many claims and counterclaims about it. Then the blog post slash paper from
00:00:24.680 | a former OpenAI researcher that has millions and millions of views online and was featured in the
00:00:31.080 | New York Times essentially predicting superintelligence by 2027. Then some very recent
00:00:36.660 | news about the release date of what could be the smartest model of them all along with a ton of
00:00:42.960 | contradictions about whether and when it might come out. I just simply can't resist starting out
00:00:48.680 | with a quote which could dampen literally all the hype that you see in AI. When Dario Amodei,
00:00:54.780 | the CEO of Anthropic, makers of the Claude series of models, was asked what could stop AI? What could
00:01:01.080 | stop the progress? He mentioned a war in Taiwan which we've known is a risk for a long time. I
00:01:06.520 | highly recommend the book Chip War by Chris Miller. He then briefly touched on there being a potential
00:01:11.720 | data war where they run out of high quality data to train their models on but then he touched on a
00:01:16.280 | new risk that he hadn't mentioned before. And before you hear this quote, note that it came
00:01:20.880 | three weeks ago before all of this tariff craziness. What are the top three things that could stop the
00:01:25.960 | show? If there's a large enough disruption to the stock market that messes with the capitalization of
00:01:30.680 | these companies, basically a kind of belief that the technology will not, you know, move forward and
00:01:38.520 | that kind of creates a self-fulfilling prophecy where there's not enough capitalization.
00:01:42.360 | I just want to spend 30 seconds on explaining how that might play out. Companies like OpenAI and
00:01:48.200 | Anthropic need to raise money to fund those vast training runs that go behind their latest models.
00:01:53.960 | They don't just have 40 billion or 100 billion sitting around in their bank account to fund those vast
00:01:59.720 | data centers and everything else that goes into training a language model. The trouble is, of course,
00:02:04.440 | if investors don't think they'll get their money back, perhaps due to a recession, then they either
00:02:09.240 | won't invest in these companies or invest less at lower valuations. Less money means less compute,
00:02:15.000 | which means slower AI progress. That's not a prediction, of course. No one, including myself,
00:02:19.800 | knows what's going to happen. It's just easy to forget that AI operates in the real world and real
00:02:25.320 | world things can have consequences for AI progress. And speaking of AI progress, how much progress is
00:02:31.240 | represented by the release of Llama 4 and two of the three models in the Llama 4 family? Well,
00:02:37.240 | it's hard to be exact because, as ever, there's a lot more spin than honest analysis in the release
00:02:43.560 | of this family. But it seems like not too much. There's no paper, of course, that's starting to become
00:02:49.240 | the norm. But here are the highlights of what we do know. First, the smallest of the Llama 4 models
00:02:55.000 | has what they call an industry-leading context window of 10 million tokens. Think of that being
00:03:01.160 | around seven and a half million words. That sounds insane, of course, and innovative, but two quick
00:03:06.520 | caveats. All the way back in February 2024, we had a model, Gemini 1.5 Pro, that had a 10 million token
00:03:14.120 | context window. And with that extreme window, it could perform amazing needle in a haystack recovery
00:03:20.440 | on videos and audio and text. In public, at least, we, quote, "only" got models of up to two million
00:03:26.680 | tokens of context window, though, perhaps because Google realized something. They realized, perhaps,
00:03:31.640 | that it's all well and good finding individual needles in a haystack, as demonstrated in this
00:03:36.840 | Llama 4 blog post. If you dump in all the Harry Potter books and drop in a password, say, halfway through,
00:03:43.480 | the model will be able to find it and retrieve it. But most people, let's be honest, aren't sneaking
00:03:48.280 | passwords into seven volumes of Harry Potter. So these results from the release 48 hours ago
00:03:55.400 | seem less relevant to me than this updated benchmark from 24 hours ago. It's called Fiction Live Bench for
00:04:02.600 | Long Context Deep Comprehension. This is the benchmark where language models have to piece together plot
00:04:08.520 | progressions across tens or hundreds of thousands of tokens or words. In my last video on Gemini 2.5 Pro,
00:04:15.080 | I noted its extremely good performance on this benchmark. In contrast, for Llama 4's medium-
00:04:20.280 | sized model and the smallest model, performance is pretty bad and gets worse. The numbers at the top
00:04:26.120 | refer to the clues being strewn across, say, 6,000 words or 12,000 words or even 100,000 words. Things
00:04:33.880 | then get stranger when you think about dates. Why was Llama 4 released on a Saturday? That is
00:04:39.560 | unprecedented in the entirety of the time I've covered AI. If you were going to be vaguely conspiratorial,
00:04:45.800 | you would think that they released it on a weekend to sort of dampen down attention. Also note that its
00:04:51.320 | knowledge cutoff is August 2024. That's the most recent data that Llama 4 was trained on.
00:04:58.920 | Compare that to Gemini 2.5, which has a knowledge cutoff of January of 2025. Kind of hints to me that
00:05:05.640 | Meta were trying desperately to bring this model up to scratch in the intervening nine months or so.
00:05:11.720 | In fact, they probably intended to release it earlier, but then in September we had the start of the O series
00:05:18.360 | of models from OpenAI and then in January we got DeepSeek R1. By the way, if you want early access to
00:05:25.160 | my full-length documentary on DeepSeek and R1, it's on my Patreon. Link in the description. But I will say,
00:05:31.640 | before we completely write off Llama 4 as in this meme, there is some solid progress that it represents.
00:05:38.280 | Especially the medium-sized model, Llama 4 Maverick, as it compares to the updated DeepSeek
00:05:45.640 | V3. Both of these models are of course not thinking models, like Gemini 2.5 or DeepSeek R1. Meta
00:05:52.440 | haven't released their state-of-the-art thinking model yet. But just bear in mind for a moment that
00:05:57.960 | for all the hullabaloo around DeepSeek V3, Llama 4 Maverick has around half the number of active
00:06:05.000 | parameters and yet is comparable in performance. Now yes, I know people accuse it of benchmark maxing or
00:06:10.920 | hacking on LM Arena, but check out these real numbers. Assuming none of the answers made it into
00:06:16.280 | the training data for Llama 4, the performance of its models on GPQA Diamond, the Google-proof STEM
00:06:22.680 | benchmark that's extremely tough, is actually better than the new DeepSeek V3. Or of course,
00:06:27.560 | GPT-4o. So if you were making the optimistic case for Meta or for Llama 4, you would say that they
00:06:33.800 | have a pretty insane base model that they could create perhaps a state-of-the-art thinking model on
00:06:39.480 | top of. Only problem is, Gemini 2.5 Pro is already there and DeepSeek R2 is coming out any moment. Also,
00:06:47.080 | when you take Llama 4 out of its comfort zone, its performance starts to crater. Take this coding
00:06:52.280 | benchmark, Aider's polyglot benchmark, testing model performance on a range of programming languages.
00:06:57.720 | Unlike many benchmarks, it doesn't just focus on the Python programming language, but a range of
00:07:03.560 | programming languages. And as you can see, Gemini 2.5 Pro tops the charts. Now yes, you might say that's
00:07:09.160 | a thinking model, but then look at Claude 3.7 Sonnet, that's without thinking, it gets 60%. DeepSeek V3,
00:07:16.920 | the latest version, gets 55%. And you unfortunately have to scroll down quite far to get to Llama 4
00:07:25.080 | Maverick, which gets 15.6%. Now is it me, or is performance like this quite hard to square with
00:07:32.520 | headlines like this one from Mark Zuckerberg, which is that his AI models will replace mid-level engineers
00:07:39.480 | soon? As in, Zuckerberg says, this year, 2025. Was he massively hyping things out of all sense of
00:07:46.200 | proportion? How dare you have that thought? Four more quick things before we leave Llama 4,
00:07:51.320 | and yes, I did pick that number deliberately. And the first is on the tentative signs from their
00:07:56.920 | biggest model, the unreleased one, Behemoth. Now Meta have deliberately made the comparisons with models
00:08:02.600 | like Gemini 2.0 Pro and GPT-4.5, and the comparison is somewhat favourable. Though if you look closely
00:08:09.960 | at the footnotes, it says, Llama model results represent our current best internal runs. Did they
00:08:15.720 | run the model five times and pick the best one? Three times? Ten times? We don't know. Also note they
00:08:21.320 | chose not to compare Llama 4 Behemoth with DeepSeek V3, which is three times smaller in terms of overall
00:08:29.160 | parameters and around eight times smaller in terms of active parameters. In dark blue, you can see the
00:08:35.000 | performance of DeepSeek V3, the latest version, and you'd have to agree it's pretty much comparable to
00:08:41.480 | Llama 4 Behemoth. In other words, if you wanted to put a negative spin on the release, you could say
00:08:46.280 | Llama's biggest model, many times the size of the new DeepSeek V3 base model, performs at the same level,
00:08:53.400 | basically. Now, yes, I know Llama 4 Behemoth is still, quote, in training, but pretty much all models are,
00:08:58.680 | quote, in training all the time at the moment with post-training. Second, just a quick one I saw
00:09:03.320 | halfway through the terms of use, which is you're kind of screwed if you are in the EU. You can still
00:09:09.320 | be the end user of it, you just don't have the same rights to build upon it. Next comes a little nugget
00:09:14.920 | towards the bottom of the page in which they've tried to make Llama 4 lean a bit more right. They
00:09:19.640 | say it's well known that LLMs have bias, that they historically lean left when it comes to politics,
00:09:26.280 | so they're going to try to rectify that. I'm sure, of course, that had nothing to do with
00:09:30.680 | Zuckerberg's relationship to the new administration. Finally, Simplebench, in which Llama 4 Maverick,
00:09:36.440 | the medium-sized model, gets 27.7%, which is around the same level as DeepSeek V3. Now,
00:09:43.800 | that is lower than, quote, non-thinking models like 3.5 Sonnet that don't take the time to lay out their
00:09:49.720 | chain of thought before answering, but it's a solid performance. Meta are definitely still in
00:09:54.440 | the race when it comes to having great base models upon which you can build incredible reasoning models.
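Since the distinction between total and "active" parameters did a lot of work in the comparisons above, here is the rough arithmetic behind them as a small Python sketch. The counts are the publicly reported, approximate figures; the underlying point is that in a mixture-of-experts model only the routed experts run for each token, so the "active" parameter count (the compute per token) is far smaller than the total:

    # Approximate publicly reported parameter counts, in billions.
    models_b = {
        "Llama 4 Maverick": {"total": 400,  "active": 17},
        "Llama 4 Behemoth": {"total": 2000, "active": 288},
        "DeepSeek V3":      {"total": 671,  "active": 37},
    }

    v3, maverick, behemoth = (models_b[k] for k in ("DeepSeek V3", "Llama 4 Maverick", "Llama 4 Behemoth"))

    print(round(maverick["active"] / v3["active"], 2))  # ~0.46 -> "around half" the active parameters
    print(round(behemoth["total"]  / v3["total"],  1))  # ~3.0  -> V3 is ~3x smaller in total parameters
    print(round(behemoth["active"] / v3["active"], 1))  # ~7.8  -> and ~8x smaller in active parameters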
00:09:59.800 | Now, as it happens, I did get some juicy hints recently about what the performance of O3 would be
00:10:05.880 | on Simplebench, and that's the model coming in two weeks. I'll touch on that in just a second.
00:10:11.160 | And let's just say that it's going to be competitive. I know that's kind of like an egregious hint that I'm
00:10:16.360 | not backing up, but that's all I can say at the moment. Now, what you may have noticed in the
00:10:20.440 | middle of the screen is that Simplebench, a benchmark you can check out in the description,
00:10:25.560 | which I created around nine months ago, is powered by Weave from Weights and Biases. They are sponsoring
00:10:31.480 | this video and indeed the entire benchmark, as you can clearly tell with the link at the center of
00:10:36.760 | the screen. That will open up this quick start, which should be useful for any developer who is
00:10:41.400 | interested in benchmarking language models, as we do. To be honest, even if you're just
00:10:46.040 | interested in learning more about LLMs, you can check out the Weights and Biases AI Academy down
00:10:51.320 | here. As you can see, they are coming up with new free courses pretty much all the time.
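For anyone curious what that quick start roughly involves, here is a minimal sketch of the usual pattern, assuming the Python Weave library and the OpenAI client; the project name, model name, and toy question are my own placeholders rather than anything from the quick start itself:

    import weave
    from openai import OpenAI

    weave.init("simplebench-style-evals")   # placeholder project name; opens a Weave project to log to
    client = OpenAI()                        # assumes OPENAI_API_KEY is set in the environment

    @weave.op()                              # records the inputs and outputs of each call for later inspection
    def ask(question: str) -> str:
        response = client.chat.completions.create(
            model="gpt-4o-mini",             # placeholder model name
            messages=[{"role": "user", "content": question}],
        )
        return response.choices[0].message.content

    print(ask("A juggler throws three balls up and catches two. How many are on the ground?"))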
00:10:55.800 | Now, I did say I'd mention the O3 news, which came just a couple of days ago from Sam Altman,
00:11:00.200 | in which he told us that O3 would be coming in about two weeks from now. This is from my newsletter,
00:11:05.560 | but do you remember when OpenAI and Sam Altman specifically said, "We want to do a better job of
00:11:10.440 | sharing our intended roadmap. As we get closer to AGI, you guys deserve clarity"? Well, clarity would
00:11:17.000 | be great, because initially O3 was supposed to come out shortly after O3 Mini High, which came out
00:11:23.800 | towards the end of January. So, naturally, we expected it in February. Then, OpenAI did a 180,
00:11:29.080 | as you can see in this tweet, and Sam Altman said, "We will no longer ship O3 as a standalone model." Now,
00:11:34.920 | perhaps prompted by the Gemini 2.5 Pro release, or their GPUs melting because of everyone using Image
00:11:40.840 | Gen, they've pushed back GPT-5 and are now going to release O3 indeed as a standalone model in two weeks.
00:11:47.080 | So much for clarity then. We're also apparently going to get books about Sam Altman's misdeeds
00:11:52.280 | and dodgy documented behaviour, but that's a topic for another video. One thing I bet OpenAI don't want
00:11:57.800 | us to focus on is their new plans for their non-profit. Remember that $300 billion valuation you saw
00:12:03.640 | earlier on in this video, that depends on OpenAI becoming a for-profit company. So,
00:12:08.440 | what's going to happen to that non-profit which was supposed to control the proceeds of OpenAI
00:12:13.880 | creating AGI? Remember, in the slim chance that Sam Altman is right and OpenAI are the company that
00:12:20.120 | creates trillions of dollars of value, as he predicted, this non-profit might have ended up
00:12:25.720 | controlling trillions of dollars worth of value. More importantly, it would have controlled what would
00:12:30.520 | have happened to AGI should OpenAI be the company that created it. Now, put aside whether you think
00:12:35.640 | it will be OpenAI that creates AGI, or whether AGI is even well-defined or feasible in the next three
00:12:41.160 | to five years. Just focus on the promise that Sam Altman and OpenAI made. We've gone from that
00:12:46.120 | non-profit controlling what could have been, in theory, a significant fraction of the world economy,
00:12:51.400 | to supporting local charities in California, and perhaps generously across America and beyond.
00:12:58.200 | Now, hardly anyone, if anyone, is focusing on this story as OpenAI are no longer the dominant players
00:13:04.920 | in the race to AGI, but nevertheless, I think it's significant. Now, if you are feeling somewhat
00:13:09.800 | dehyped about AGI after hearing about Llama 4 and these OpenAI stories, well, you could spend a few
00:13:16.120 | hours like I did on the weekend reading AI-2027. This was written by a former OpenAI researcher
00:13:24.280 | and other superforecasters with a pretty impressive track record. Also, as you may remember,
00:13:29.400 | Daniel Kokotajlo put up an impressive stand against OpenAI on their non-disparagement clause. He was
00:13:35.880 | essentially willing to forfeit millions of dollars, and yes, you can make that much as a safety
00:13:40.120 | researcher at OpenAI. He was willing to forfeit that so he wouldn't have to sign the non-disparagement
00:13:45.240 | clause. Because he made that stand, OpenAI were practically forced into dropping that clause for
00:13:51.080 | everyone. So, well done him on that. Nevertheless, I was not particularly convinced by this report,
00:13:56.280 | even though I admire the fact that they put dates on the record for their predictions. To honor that,
00:14:01.880 | I will try to match some of their predictions with predictions of my own. Their central premise
00:14:06.920 | in a nutshell is that AI is first going to become a superhuman coder and then ML researcher and thereby
00:14:13.960 | massively speed up AI progress, giving us superintelligence in 2027. They draw fairly heavily
00:14:20.120 | on this paper from METR, and I'm going to cover that in a separate video because I am corresponding
00:14:25.080 | fairly closely with one of the key authors of that paper. Anyway, we start off fairly lightly,
00:14:30.440 | basically with a description of what current AI can do in terms of being an agent like ChatGPT's
00:14:36.280 | Operator and Deep Research, essentially describing what we already have. We then get plenty of detours
00:14:42.200 | into alignment and safety because you sense the authors are trying to get across that message at the
00:14:47.560 | same time as making all of these predictions. I start to meaningfully diverge from their predictions
00:14:52.920 | when it comes to early 2026, when they say this: if China steals the weights of the state-of-the-art AI, Agent-1
00:14:59.720 | they call it, they could increase their research speed by nearly 50%. Based on all of the evidence
00:15:04.840 | you've seen today about DeepSeek and Llama 4, you would say it's almost equally likely that the West will
00:15:11.080 | be stealing China's weights. Or wait, they won't need to, because DeepSeek continued to make their models
00:15:16.600 | open weight. Just like Leopold Aschenbrenner and Dario Amodei, everything is a race to the jugular,
00:15:21.800 | which is a narrative that's somewhat hard to square with DeepSeek pioneering certain research and giving
00:15:27.400 | it to everyone. Then, apparently in late 2026, the US Department of Defense will quietly begin contracting
00:15:33.720 | OpenAI or Google directly for cyber, data analysis and R&D. But I'm kind of confused because already,
00:15:40.200 | for at least a year, OpenAI have been working directly with the Pentagon. Yes, before you guys
00:15:44.920 | tell me, I'm aware that Daniel Kokotajlo, who is the main author of this paper, did make some amazing
00:15:50.120 | predictions back in 2021 about the progress of AI. I can link to that in the description, but that
00:15:55.720 | doesn't mean he's going to be always right going forward. Also, he himself has admitted that those
00:16:00.040 | predictions weren't that wide-ranging. Anyway, things get wild in January of 2027 because, as you can see
00:16:06.120 | from this chart up here, we get an AI that is better than the best human. The first superhuman coder,
00:16:13.240 | in other words. This is the key crux of the paper because once you get that, you speed up AI research
00:16:19.160 | and all the other consequences follow. But as I have been discussing with the authors of the meter paper,
00:16:24.040 | there are so many other variables to contend with. What about proprietary code in Google or Meta or Amazon
00:16:30.440 | that OpenAI can't train their models on? What about benchmarks themselves being less and less
00:16:35.240 | reliable indicators of real-world performance because the real world is much messier than benchmarks?
00:16:40.600 | This superhuman coder may need to liaise with entire teams, get certain permissions and pass all sorts of
00:16:46.280 | hurdles of common sense. And even if you wanted to focus brutally just on verifiable benchmarks, not every benchmark
00:16:52.840 | is showing an exponential. Take MLE-bench, or Machine Learning Engineering bench, from the Deep Research or O3
00:16:59.240 | system card from OpenAI. That dataset consists of 75 hand-curated Kaggle competitions worth $2 million in
00:17:05.720 | prize value. Measuring progress towards model self-improvement is key to evaluating autonomous
00:17:10.120 | agents' full potential. Basically, if models get good at machine learning engineering, they can obviously
00:17:15.400 | much more easily improve themselves. And let's skip down to progress and zoom in a bit and you can see
00:17:21.000 | the performance of O1, O3 mini, Deep Research without browsing, Deep Research with browsing, GPT-4o even,
00:17:28.840 | and I'm not noticing an absolute surge in performance. Obviously, I am perfectly aware of benchmarks like
00:17:34.440 | Humanity's Last Exam and others which are showing exponential improvement. I'm just saying not every benchmark is
00:17:40.040 | showing that. Also, January or February of 2027 is less than two years away and this model would
00:17:45.880 | have to be superhuman in performance. So much so that it could autonomously develop and execute plans to
00:17:52.360 | hack into AI servers, install copies of itself, evade detection, and use that secure base to pursue whatever
00:17:57.880 | other goals it might have. Notice the hasty caveat, though, how effectively it would do so as weeks roll by
00:18:03.480 | is unknown and in doubt. That happens a lot, by the way, in the paper. I even noticed a co-author say,
00:18:09.240 | well, this wasn't my prediction, it was Daniel's. There's a lot of kind of heavy caveating of
00:18:13.400 | everything. Notice, though, that not only would an AI model have to be superhuman at coding to do all of
00:18:18.280 | this, it would have to have very few, if any, major flaws. If one aspect of its proposed plan wasn't in
00:18:25.240 | its training data or it couldn't do it reliably, the whole thing would fail. And that leads me to my
00:18:30.760 | prediction. I mean, they've made a prediction so I can make one that models, even by 2030, will not be able to
00:18:38.120 | do this. I'm talking reliably, with 95 or 99% reliability, autonomously, fully autonomously,
00:18:45.320 | develop and execute plans to hack into AI servers, copy itself, evade detection, etc. If, on the other
00:18:50.440 | hand, Daniel is right and models are capable of this by February 2027, then I will admit I am wrong.
00:18:56.840 | That, by the way, brings me back to this chart, which, if you notice, says that only 4% of people
00:19:04.200 | at that point, when asked "What do you think is the most important problem facing the country today?",
00:19:09.080 | would answer AI. Well, I don't know about you, but if I or my friends or family heard that there's
00:19:13.800 | AI out there that just can hack things and replicate itself and survive in the wild, I think more than 4%
00:19:19.240 | of people would say it's the most important issue. I mean, man, actually, the more I think of it,
00:19:23.080 | like, look at the clickbait headlines you get on YouTube and elsewhere about AI today. Can you
00:19:27.960 | imagine the clickbait if AI was actually capable of copying itself onto different servers and
00:19:33.720 | hacking autonomously? Actually, it wouldn't even be clickbait at that point. I would be doing headlines
00:19:37.880 | like, oh my god, it can hack everything. Anyway, you get the idea, and that's not even mentioning the
00:19:43.320 | fact that these agents can also, almost as well as a human pro, create bioweapons and the rest of it.
00:19:48.920 | China is then going to steal that improved Agent 2 from the Pentagon, and still obliviously 96% of
00:19:56.360 | people are focused on other things. Being slightly less facetious, I think the paper over-indexes on
00:20:02.360 | weight thefts and it all being contained in the weights of a model. I think progress between now
00:20:07.160 | and 2030 is going to much more depend on what data you have available, what benchmarks you have created,
00:20:12.600 | what proprietary data you can get hold of. Now, don't get me wrong, I do think AI will help with
00:20:18.760 | the improvement of AI, even if it's just verifying, evaluating, and replicating existing AI research.
00:20:25.720 | On a new benchmark of exactly that, released by OpenAI just a week ago, models like Claude 3.5 Sonnet can
00:20:31.720 | already replicate 21% of the papers. But when you have limited compute, and potentially very
00:20:38.040 | limited compute if there's a massive worldwide stock crash or war in Taiwan,
00:20:43.800 | are you going to delegate the decision of which avenues to pursue to a model which might be only
00:20:50.520 | 80% as good as your best researchers? No, you would just defer to those top researchers. Only when a model
00:20:56.440 | was making consistently better decisions than your best researchers as to how to deploy compute would you then
00:21:02.040 | entrust it to them. The authors definitely bring in some real-world events that may or may not have
00:21:05.960 | occurred at OpenAI when they say, "AI safety sympathizers get sidelined or fired outright" (the last
00:21:11.720 | group for fear that they might whistleblow). Personally, I would predict that if we have an
00:21:16.600 | autonomous AI that can hack and survive on its own, I don't think safety sympathizers will be sidelined.
00:21:23.480 | If I am wrong, then we as a human species are a lot dumber than I thought. Anyway, just two years from now,
00:21:29.880 | June 2027, most of the humans at OpenAI/Google can't usefully contribute anymore. Again, I just don't
00:21:37.160 | think feedback loops can happen that quickly when you reach this level. I could well imagine benchmarks
00:21:42.760 | like the MMMU or SimpleBench being maxed out at this point, but imagine you're trying to design a more
00:21:49.400 | aerodynamic or efficient F-47. That's the new fighter jet announced by the Pentagon. Well, that AI self-improvement
00:21:56.760 | is going to be bottlenecked by the realism of the simulation that it's benchmarking against. Unless
00:22:02.760 | that simulated aircraft exactly matches the real one, well then you won't know if that "self-improving AI"
00:22:08.520 | has indeed improved the design unless you test it out in a real aircraft. Then multiply that example
00:22:13.880 | by the 10,000 other domains in which there's proprietary data or sim-to-real gaps. I guess you
00:22:20.040 | could summarise my views as saying the real world is a lot more messy than certain isolated benchmarks
00:22:26.520 | online. The model, by the way, at this point is plausibly extremely dangerous being able to create
00:22:32.520 | bioweapons and is scarily effective at doing so, but 92% are saying it's not the most important issue.
00:22:40.360 | Man, how good would TikTok have to get so that 92% of people wouldn't be focused on AI at that point?
00:22:46.600 | I'm going to leave you with the positive ending of the two endings given by the paper, which predicts
00:22:51.480 | this in 2030. We end with "People terraform and settle the solar system and prepare to go beyond.
00:22:57.720 | AIs running at thousands of times subjective human speed reflect on the meaning of existence,
00:23:02.680 | exchanging findings with each other and shaping the values it will bring to the stars. A new age dawns,
00:23:09.080 | one that is unimaginably amazing in almost every way, but more familiar in some." Those in the
00:23:14.520 | audience with a higher p(doom) can check out the other scenario, which is rather grim. Notice though,
00:23:19.880 | I'm not disputing whether some of these things will happen, just the timelines that they give. I still
00:23:24.520 | think we're living in some of the most epochal times of all. Just that it might be a more epochal decade
00:23:30.760 | rather than couple of years. Thank you as ever for watching. I know I covered a lot in one video.
00:23:35.480 | I will try to separate out my videos more in future. I'm super proud of the DeepSeek
00:23:40.760 | documentary I made on Patreon, so do check it out if you want early access. But regardless,
00:23:46.360 | thank you so much for watching to the end and have a wonderful day and wonderful decade.