AI CEOs Keep Talking… But Should We Believe Them? | Cal Newport

Chapters
0:00 What if AI Doesn’t Get Much Better Than This?
66:30 Will AI leave me unemployed in 10 years?
74:02 How should I structure the next 10 years as a recently retired college professor?
75:40 I just moved. How should I arrange my book collection?
77:54 Overhead tax
88:00 Ed Sheehan and the Booker Prize
- In the years since ChatGPT's astonishing launch, 00:00:13.180 |
But in recent weeks, this vibe seems to be shifting. 00:00:17.520 |
Both the media and technologists no longer seem so certain 00:00:37.360 |
what if AI doesn't get much better than this? 00:00:55.120 |
What if AI doesn't get much better than this? 00:00:57.640 |
Well, first of all, thanks for having me on the show. 00:01:24.640 |
And I think maybe the most salient feature of the technology, 00:01:29.120 |
is how fast the technology is getting better. 00:01:32.160 |
A couple of years ago, you could say that AI models 00:01:35.100 |
were maybe as good as a smart high school student. 00:01:37.400 |
I would say that now they're as good as a smart college student 00:01:42.420 |
I really worry, particularly at the entry level, 00:01:45.580 |
that the AI models are very much at the center 00:01:49.540 |
of what an entry level human worker would do. 00:01:52.360 |
- That was Dario Amodei talking to CNN's Anderson Cooper. 00:01:57.800 |
Now, Dario is the CEO of the AI company Anthropic. 00:02:02.040 |
And if you want to know why we have become so worked up 00:02:06.580 |
a big part of this answer is that tech CEOs like Amodei keep making claims like this: 00:02:18.580 |
AI used to be as good as an average high school student. 00:02:21.800 |
Now they're as good as a smart college student. 00:02:28.000 |
Soon we'll have to worry about entry-level work still being around for humans to actually do. 00:02:32.580 |
Now Amodei is not alone in these types of claims. 00:02:48.220 |
Here, for example, is Sam Altman, CEO of OpenAI, appearing on Theo Von's podcast. 00:02:52.480 |
There are these moments in the history of science where you have a group of scientists look at their creation 00:03:00.500 |
and just say, "What have we done? Maybe it's great. Maybe it's bad. Maybe the most iconic example is thinking about the scientists working on the Manhattan Project in 1945, sitting there watching the Trinity test and this thing that was a completely new, not human scale kind of power and everyone knew it was going to reshape the world. 00:03:16.840 |
And I think people working on AI have that feeling in a very deep way. 00:03:31.940 |
Yeah. Okay, all right. That's enough of that. At least we know Sam Altman's a, um, a modest man, Jesse. 00:03:37.780 |
We're like the Manhattan Project. All right. Not to be outdone. 00:03:41.780 |
Meta CEO Mark Zuckerberg has said: AI keeps accelerating, and over the past few months, we've begun to see glimpses of AI systems improving themselves. 00:03:52.100 |
So developing super intelligence is now in sight. 00:03:54.660 |
So he didn't want to be outdone there. All right. So yeah, sure. You're like the Oppenheimer, but I'm about to create super intelligence. 00:04:01.540 |
And can I just say, Jesse, as an aside, I mean, Zuckerberg, he's got to remain one of the worst communicators in the history of really large companies. 00:04:09.780 |
I mean, this is neither here nor there, but why does he always sound like a robot whose emotion circuit board shorted out? 00:04:16.900 |
You imagine it's like, okay, here's my Zuckerberg impression. Hi, Bob. It's good to see you. I have news to share that is bad. Your wife was in a significant wheat thresher accident. Going forward, 00:04:32.420 |
she will need a machine to chew for her. I hope otherwise your day is a good one. Zuckerberg out. And then there's a sort of like beaming of light. I mean, he talks like an android. Okay. Enough of that. Um, you get the point, right? 00:04:45.700 |
There's been this drum beat from these AI CEOs that you cannot fathom the impact of the disruption that's coming, and it's coming soon. Altman was almost in tears 00:04:59.980 |
looking at what he's created, like Oppenheimer looking at the Trinity test and quoting the Bhagavad Gita. He was like, I cannot believe what I've done. Then, just a few weeks ago, OpenAI released GPT-5. 00:05:15.500 |
And if you haven't been paying attention, this was a key pivot point in our narrative about this technology. Now, just to put this in context, it had been over two years since OpenAI's last major model release, which was GPT-4. So expectations for GPT-5 were sky high. Altman had been bragging about this model almost immediately 00:05:35.500 |
after GPT-4's power was first understood. This was going to be the next big leap that got us ever closer to the types of AI impacts the tech CEOs were talking about there. But then people actually got their hands on this model when it was released a few weeks ago on a Friday. 00:05:53.500 |
And while they weren't exactly dancing in the streets or running towards their Terminator-style bunkers, one of the first reviewers to go live with a take on GPT-5 was a YouTuber named Mrwhosetheboss, because, you know, of course that's his name. He's a YouTuber. 00:06:08.500 |
He had early access to GPT-5, so he had a review ready to go. And if you watched his review, which was the first review I saw after GPT-5 came out, here's what he said. He said, look, there are some things that GPT-5 seems better at than its immediate predecessors. 00:06:21.500 |
Uh, he did some vibe coding. He asked it to create a chess game with Pokemon as pieces, because of course he did. If your name's Mrwhosetheboss, you're making Pokemon chess. Um, and he thought it produced something better than what o4-mini-high had produced. Um, it also produced a better script for his channel than GPT-4o. He sort of did these side-by-side comparisons. But there were also other tasks where the old model, GPT-4o, was more successful than the new one. 00:06:47.500 |
When he asked GPT-5 to create a YouTube thumbnail, it was worse than what GPT-4o produced, and the same thing happened when he asked it to come up with a birthday party invitation. And I'm not making this up, Jesse. It was a birthday party invitation for a grown man that was Star Wars-themed. This was not ironic. This was like, obviously this is what we would be doing with AI. Uh, GPT-4o produced a better one than the new GPT-5. 00:07:13.500 |
Within hours, other users who got their hands on GPT-5 expressed more, uh, I would say, pointed disappointment. It's pretty fun reading. If you go onto the r/ChatGPT subreddit, as I did in the aftermath of this model coming out, one of the posts said GPT-5 is the biggest piece of garbage, even as a paid user. In a pre-scheduled ask-me-anything, 00:07:37.500 |
Altman and other OpenAI engineers found themselves on the defensive, basically being grilled by users who were like, what is this? There's stuff about it we don't like. It's not clearly better. 00:07:47.600 |
Gary Marcus, who, if you haven't heard this name, you probably will hear it more often, because for the last few years he's been leading the charge to argue that generative AI in general was never going to deliver on the claims these tech CEOs were promising. I think he had a good summary of the overall reaction to GPT-5. Jesse, can we play this? 00:08:06.020 |
Gary Marcus, GPT-5 has just dropped. What are your thoughts? 00:08:08.320 |
It's not what people expected or hoped it would be. I keep telling them that it's not going to be what they thought. Um, Kevin Scott a year ago was going around giving talks showing GPT-5 as a humpback whale compared to GPT-4 as some smaller creature. And there ain't no humpback whale there. It's better in a bunch of different ways. 00:08:25.940 |
Elon Musk's Grok 4 is actually better on François Chollet's ARC-AGI-2 task. You know, it's, it's part of the pack. It's not separated from the pack. And after 32 months of hearing people talk about it, I think it was reasonable to say, hey, we want to see something, you know, genuinely different here. And it's not there. And it's so late too. Like I remember the day before Super Bowl 2024, people saying, hey, it's going to drop tonight, it's going to be so cool, after the Super Bowl. Well, here we are 18 months later. And it's, let's be honest, 00:08:55.760 |
a disappointment. GPT-5 in some sense caused a needle scratch on the hyped-up tune the AI industry has been playing. So was it possible that we were not about to see half of new jobs automated and AGI-empowered AIs taking over most of our lives? Were we actually not just a couple of years away from having to negotiate our very existence with superintelligent computers? As GPT-5 made us confront these hard truths, 00:09:25.580 |
more and more people began to ask a question that just a few months ago would have been dismissed as absurd. What if this is as good as AI is going to get, at least for a while? Now, I tackled this question, I looked deeply into it, in a recent article that I wrote for The New Yorker that came out a couple of weeks ago. And today I want to provide you a fuller version of what I found working on that article. 00:09:48.240 |
The thing is for three years we've had a myth perpetuated about what large language models can do. Effectively anything and nothing. And this myth 00:10:18.040 |
has continued to be perpetuated to the point that now people are saying that companies that are laying people off are replacing them with AI, which just isn't true. 00:10:26.420 |
That's Ed Zitron. He's a technology analyst who hosts the podcast Better Offline. He's also been a bit of a thorn in the side of those most crowing about AI's disruptive impact on the economy. Why do they dislike him? He actually checks the claims they make. And what he finds is often less than flattering. Now if you hear Zitron talk like we just did there, it can induce whiplash. Right? Because what he's saying about AI's actual impact really can seem different than 00:10:56.220 |
For example, the news coverage that we have been reading. So if we want to ask this question, not what is going to happen with AI, which is where the CEOs were making those astonishing claims, but what is happening already because of AI, Zitron's note there, that it's basically nothing of note, really runs askew of what a lot of media coverage has been saying recently. Not about the future, but about what's happening now. I want to read you some actual headlines. Jesse, I grabbed 00:11:26.200 |
these from the last month, maybe the last six weeks. These are real headlines from major publications just in the past month or so. Here's one: "Goodbye, $165,000 tech jobs. Student coders seek work at Chipotle." Here's the sub-headline of that article: "As companies like Amazon and Microsoft lay off workers and embrace AI coding tools, computer science graduates say they're struggling to land tech jobs." Here's another headline: "AI is wrecking an already fragile job market 00:11:56.180 |
for college graduates." Here's a third: "CEOs start saying the quiet part out loud: AI will wipe out jobs." Final headline I want to read here: "AI will replace most humans, but then what?" Right? So when we look at these headlines, you get this sense. Forget, like, what might happen in the future. 00:12:15.180 |
Certainly, certainly it sounds like the AI technology we have is severely disrupting our current economy. People can't get jobs. There's all these layoffs because of AI tools. 00:12:26.960 |
So I asked Ed Zitron about this. I said, okay, what is your take about this coverage we see? Not about the future, but the same that like even right now, we're seeing big impact. So here's what Ed had to say. 00:12:39.180 |
Now, journalists, who should try, I don't know, even once, looking at the actual data they're quoting, are conflating young people not being able to find work with AI being involved. 00:12:52.040 |
And this is partly because the CEOs of these companies, when they're laying people off, will say, we're making adjustments for efficiency and we're orienting our company around the power of AI. 00:13:02.120 |
And people are conflating that with the idea that someone is being replaced by AI despite the fact that AI is not replacing a damn person. 00:13:11.200 |
No numbers, no data, just vibes, baby. Where we're going, we don't need the truth. We just have vibes. 00:13:19.000 |
I think vibe is an interesting word there because if you look closer at these articles, you see a lot of what Ed is saying actually showing up. 00:13:26.640 |
This idea that different things that are not really related are being conflated. 00:13:30.700 |
You'll take a job loss number that might be true. 00:13:33.860 |
You'll take the fact that AI has tools that are relevant to that industry, which is true. 00:13:38.240 |
You put these two things together and you have the natural consequence of the reader coming away with the impression the layoffs are because of the AI tools. 00:13:47.980 |
So as I look closer at these articles, I was seeing again and again examples of what Zitron was warning. 00:13:53.660 |
Yes, it seems at first glance that the economy is already feeling large impacts from AI tools, but you don't actually find that evidence in these articles themselves. 00:14:02.460 |
Remember that first headline I read here, for example, goodbye, $165,000 tech jobs, student coders seek work at Chipotle. 00:14:11.660 |
As companies like Amazon and Microsoft lay off workers and embrace AI coding tools, computer science graduates say they're struggling to land tech jobs. 00:14:24.680 |
If you read this article or know anything about the relationship between computer science jobs and the tech industry, here's the actual facts. 00:14:34.440 |
Companies like Amazon and Microsoft are laying off workers because they spent heavily during the pandemic and now we're in a tech contraction. 00:14:41.880 |
And as with every tech contraction that has happened in the history of computer science being a major, the number of computer science majors goes down with the tech industry and goes up with it as well. 00:14:56.120 |
When big companies stop hiring, the demand for jobs goes down, and you get fewer majors for a while until those companies start hiring again. 00:15:10.660 |
It's happening now because there are a lot of cutbacks after the pandemic overspending and overhiring. 00:15:17.460 |
Unrelated to that, people are embracing AI coding tools. 00:15:22.260 |
In software development in particular, not vibe coding, the sort of smaller stuff, but like professional software developers that would work at a Microsoft or Amazon, you have these integrated coding tools that do make things easier in certain ways, in certain key ways. 00:15:37.000 |
It allows you to get, for example, boilerplate code, you get templates, you can figure out how do I call this library without having to go look it up on the internet. 00:15:45.120 |
Some people are doing generation of small bits of code for parts of their program, but we have recent data, including from the METR study, showing, like, yeah, but developers actually spent more time debugging that code than they would have spent just writing it. 00:15:58.120 |
But if you put the fact that people are using AI coding tools in the middle of a sentence that's talking about how the job market is down in tech and, so far, majors are down, then it has to be that your intention is to try to make people believe that the jobs are going away because they're being replaced by AI. 00:16:18.360 |
They hired like crazy in 2020 and 2021 and 2022. 00:16:23.600 |
They're laying off widely across all of their divisions. 00:16:28.060 |
So if you're thinking this, how could this be true that everything's not about to change because things already are changing, look a little bit closer at those articles. 00:16:36.320 |
There's a lot of conflation going on that almost seems purposefully contrived to try to create a vibe when you don't actually have the data to back that up. 00:16:46.700 |
So I asked, you know, Zitron, like, what is going on? 00:16:50.780 |
And he pointed at like, yeah, sure, there's applications, but not on a scale that's really disrupting the economy, just useful stuff. 00:16:56.800 |
There's things that people are using this for. 00:16:58.100 |
We've mentioned coders have different uses for it. 00:17:02.380 |
You know, it's useful if you're looking up information or trying to interrogate your own information. 00:17:06.120 |
You can do it with narrative interaction as opposed to having to use some sort of more structured information system. 00:17:11.780 |
There are certain types of, like, text processing or automation where these tools, because they have something that really understands text, can be really useful. 00:17:20.840 |
You could look at other modalities of models, not really large language models, but image generation, video generation. 00:17:26.560 |
There are definitely some impacts in those fields, but it's not as if the economy right now is operating in a drastically different way. 00:17:32.740 |
And to make this even worse, as Zitron has emphasized in his reporting, these models are very, very expensive to run because you're redlining GPUs, which is very expensive in terms of compute and electricity. 00:17:48.160 |
So it's unclear exactly how you overcome how much it costs to run these models. 00:17:55.220 |
How do you overcome that to make enough profit to sort of pay back the capital expenditure that went into them? 00:18:02.680 |
Zitron is definitely farther out on the skeptical side, but he's been right about a lot of things. 00:18:06.960 |
I asked him like, OK, what's just your summary then of the current situation, not of what's going to happen, but what's happening right now with AI in the economic scene? 00:18:17.700 |
Despite all the king's horses and all the king's men saying how important and beautiful and crazy these models are and how everything's changing, the actual revenue is smaller than last year's smartwatch revenue, which is around $34 billion. 00:18:31.560 |
They're expecting $35, $40 billion max of total revenue in this entire industry, including open AI. 00:18:41.820 |
We're in the silliest time in tech in history. 00:18:47.000 |
So GPT-5 wasn't the giant leap forward we were promised. 00:18:50.760 |
The claims that we've been seeing in certain articles that our existing AI is already reshaping the economy don't hold up to a closer look either. 00:19:01.440 |
How could we get to this point where there's such a disconnect between the promise and the reality of AI? 00:19:07.160 |
To understand this, we're going to need to look closer at the story of the technological side of this latest AI boom. 00:19:16.800 |
Jesse, I'm going to bring out my computer scientist hat, so you've got to be careful here. 00:19:22.180 |
It has both a circuit board and the Starfleet commander, Jean-Luc Picard, on it. 00:19:28.700 |
I'm trying to imagine what a computer science hat would look like. 00:19:33.360 |
I'm going to put on my computer science hat because we're going to get into the technological narrative behind how we got such a disconnect between what we thought AI was going to do and what's actually happening. 00:19:42.900 |
This brings us to part three, the strange death of the scaling law. 00:19:48.620 |
Now, there's a reason why there was so much excitement around the potential for AI, right? 00:19:55.540 |
So this was not one of these sort of bubble situations where people have an irrational exuberance. 00:20:01.240 |
There's actually a very rational reason why we had the level of excitement and investment and commitment to AI that actually happened in the past half decade. 00:20:09.660 |
To help explain what gave us this excitement, I want to play a clip here from the NVIDIA CEO, Jensen Huang, talking at a conference. 00:20:19.000 |
So let's hear this audio of Jensen. - AI, the industry is chasing and racing to scale artificial intelligence, and the scaling law is a powerful model. 00:20:35.200 |
It's an empirical law that has been observed and demonstrated by researchers and industry over several generations. 00:20:44.300 |
And the scaling law says that the more data you have, the training data that you have, the larger model that you have, and the more compute that you apply to it, therefore, the more effective or the more capable your model will become. 00:21:03.840 |
So what Huang is talking about there is incredibly important. 00:21:06.700 |
He's using this technical term, the scaling law. 00:21:09.640 |
And what he's referring to there is a series of equations that came out in a paper that was published in 2020. 00:21:22.420 |
And here's what they did in that original paper. 00:21:27.900 |
They took what was basically OpenAI's GPT-2 model at the time, 00:21:31.660 |
and they said, we're going to systematically measure 00:21:36.100 |
how well this model does as we make it bigger. 00:21:41.780 |
As Jensen talked about in that video, bigger means as we make the model itself bigger, as we train it on more data, so we make the data set bigger, and as we train it longer, so we make the length of the training bigger as well. 00:21:57.920 |
Now, this seems like, I don't know, shouldn't it get better? 00:22:00.820 |
But that was not the conventional wisdom in machine learning at this time. 00:22:03.800 |
In machine learning, there was this real sense that, like, the mathematical machine learning people who had all these mathematical theories about how machine learning actually works said you can't get too big. 00:22:15.840 |
If you make your model too big, it's going to memorize the stuff you're training it on, right? 00:22:21.980 |
So you're like, okay, we give you these questions, and you're great at them. 00:22:25.280 |
But it's so big, it was able to just memorize all the answers. 00:22:27.520 |
And so when you give it new questions in the real world, it's going to do really bad. 00:22:32.040 |
So the machine learning math types, very scolding, said don't overfit. 00:22:36.500 |
You want it big enough that it knows a lot, but not so big that it memorizes. 00:22:41.220 |
That's the sweet spot you want. 00:22:43.000 |
This paper, this Kaplan paper from 2020, they said, well, let's check that with language models. 00:22:49.120 |
And what they found is, oh, my God, as we make all this stuff bigger, the performance goes up. 00:22:53.720 |
And it doesn't just go up, like, proportionally or at, like, a nice gentle hill. 00:23:00.800 |
So it follows something known as a power law curve, which is sort of like turning a hockey stick upside down. 00:23:06.000 |
And that was not what machine learning people thought was going to happen. 00:23:10.080 |
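As a rough illustration of what that paper reported, the fitted curves had a simple power-law shape. The form below is a sketch based on the Kaplan et al. results as I recall them, so treat the details as approximate rather than the paper's exact figures:

```latex
% Approximate form of the 2020 scaling-law fits: test loss L falls as a power law
% in parameter count N, dataset size D, and compute C (N_c, D_c, C_c are fitted constants).
\[
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N}, \qquad
L(D) \approx \left(\frac{D_c}{D}\right)^{\alpha_D}, \qquad
L(C) \approx \left(\frac{C_c}{C}\right)^{\alpha_C}
\]
% The fitted exponents are small (roughly 0.05 to 0.1), which is why the loss keeps
% falling smoothly, with no plateau in sight, as each ingredient is multiplied
% many times over.
```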
So internally at OpenAI, after they were doing this research, they said, well, let's try this. 00:23:16.900 |
Let's make a model that's, like, 12 to 15 times larger than GPT-2. 00:23:25.240 |
And no one's ever really trained a model this big before because everyone's like, oh, go slow because you've got to find exactly that point where you're too big. 00:23:32.040 |
But this paper implied, like, I don't know, this curve looks like it goes up fast. 00:23:35.720 |
If we extrapolate this, man, if we keep making these really bigger, the performance could get amazing. 00:23:40.760 |
And so they made a model that was, you know, 12 to 15 times larger than GPT-2. 00:23:46.780 |
It was a factor of 10 larger than the largest existing large language model at the time. 00:23:55.060 |
And it fell exactly where that curve predicted. 00:23:58.100 |
The curve said, yeah, if you make it this big, which seems crazy, you're going to get an even crazier jump in abilities. 00:24:03.300 |
And it looked like that's exactly what happened. 00:24:06.200 |
This is a huge deal because you have to understand when people in AI were thinking about having something like artificial general intelligence, like AI that could automate our work and do almost everything a human could do, but better. 00:24:19.780 |
They imagined this would be a super complicated thing. 00:24:22.080 |
Your system would have 100 different parts, and it's going to require 20 different new theoretical breakthroughs. 00:24:27.020 |
And finally, you would piece this thing together. 00:24:29.780 |
It's going to take a long time to get a computer system that could act like the human brain. 00:24:33.840 |
And they suddenly had this new route there that was much easier. 00:24:37.980 |
We could take this one type of AI model, the language model, in particular the transformer-based pre-trained model, take this one type of model and just make it bigger. 00:24:47.160 |
And that might get us directly to artificial general intelligence. 00:24:53.000 |
Maybe we don't have to have 20 more breakthroughs. 00:24:55.280 |
Maybe we don't need that chip from Terminator 2 that comes back in time. 00:24:58.780 |
They get it from Terminator 1, and that's what allows the company to build Skynet earlier. 00:25:04.380 |
We just take GPT-2 and keep making it bigger; this exact architecture, made bigger, will eventually get so good that it can do everything we want. 00:25:13.920 |
Soon after GPT-3 validated that scaling law by falling on that curve, this is when Sam Altman wrote his sort of infamous Moore's Law for Everything essay, 00:25:22.380 |
in which he basically argued, look, we're going to have to have like a tax on the equity of like the four companies that are going to be left. 00:25:29.140 |
There'll be like four companies that control all the AI that runs all the economy. 00:25:32.640 |
We're just going to have to like tax them, just take a share of their value in common so that we can give like a universal basic income to everyone else because there'll be no work to do. 00:25:42.120 |
That essay came after GPT-3 validated the scaling law. 00:25:48.180 |
ChatGPT came along a little bit later, but that was really just a way of letting people have access to a sort of nerfed and more tamed, well-behaved GPT-3. 00:25:57.040 |
The next thing that was exciting for AI researchers was they said, what if we got bigger? 00:26:03.520 |
What if we made something way bigger than GPT-3? 00:26:08.780 |
Like we don't even know how to build a building that could run enough chips to train something that was much bigger than GPT-3. 00:26:17.760 |
Microsoft, who ran the data centers for OpenAI, had to invent custom cooling technology that did not exist because no one had ever run that many high-energy, high-heat chips at full blast, that many thousands and tens of thousands of them all in one room. 00:26:34.180 |
They had to invent new technology, but they said, let's see what happens. 00:26:38.480 |
Let's go many hundreds of billions of parameters, maybe hit a trillion. 00:26:50.260 |
That model became GPT-4, and it seemed to land exactly where the curve predicted, this curve that's getting steeper. 00:26:57.580 |
And you have to understand, the scaling law leaps were massive. 00:27:01.480 |
They were general improvements in not just what models could do before, but it gave them capabilities that didn't exist before. 00:27:08.940 |
GPT-3 mastered language in a way that the earlier language models had not. 00:27:13.280 |
GPT-4 brought with it capabilities that no one even thought a language model could do. 00:27:20.620 |
It brought with it capabilities unrelated to language. 00:27:23.240 |
It seemed to be able, it could write code really well. 00:27:26.740 |
So it really emphasized and reinforced this idea, this one type of magic model, this language model, keep making this thing bigger. 00:27:39.040 |
No more breakthroughs are needed except for figuring out how to get more chips. 00:27:47.640 |
I want to play a quick clip here from a researcher from Microsoft Research right after GPT-4 came out. 00:27:56.180 |
I wrote about this last year in The New Yorker. 00:27:57.680 |
This is him talking about his encounter, his first encounter with GPT-4. 00:28:02.140 |
I just want to give you a sense of the excitement that was out there. 00:28:04.920 |
It's not 10% increase on this benchmark, you know, 20% on that benchmark. 00:28:10.040 |
What I want to try to convince you of is that there is some intelligence in this system. 00:28:15.260 |
That I think it's time that we call it, you know, an intelligent, you know, system. 00:28:19.640 |
And we're going to discuss it, you know, what do I mean by intelligence? 00:28:22.720 |
And, you know, at the end of the day, at the end of the presentation, you will see it's a judgment call. 00:28:27.120 |
It's not a clean cut whether this is, you know, a new type of intelligence. 00:28:31.120 |
But this is what I will try to argue nonetheless. 00:28:37.020 |
And I think at the time, this was a very justified excitement. 00:28:42.320 |
And what it really looked like to move so far up the scaling graph, that scaling law Jensen Huang talked about, was really impressive. 00:28:49.500 |
GPT-4 did stuff that we never thought a language model could do. 00:28:51.980 |
So think about what you were thinking now if you were one of these AI companies. 00:29:01.760 |
Elon Musk, for example, built this data center called Colossus that had something like 200,000 H100 GPUs in it. 00:29:06.460 |
Like, they're like, let's just, let's get all the money we can get and keep building these things bigger. 00:29:13.060 |
If GPT-4 was blowing us away so much compared to 3, what is 5 going to be like? 00:29:17.720 |
And by 6 or 7, these things are going to be able to do everything. 00:29:21.940 |
If anything, we're going to have to be worried about their power. 00:29:29.560 |
This wasn't like with the crypto boom, where much of the excitement was speculative, not so much about the cryptocurrency itself, but about the idea of, like, Web3 or a sort of blockchain-organized world of information and software. 00:29:46.700 |
With this AI scaling, it was really, really impressive what was actually happening. 00:29:51.460 |
Like, it was not an irrational claim to say, like, why wouldn't this keep following that scaling law? 00:29:57.060 |
That's what Jensen Huang was talking about in that clip from earlier. 00:30:03.740 |
And if it kept happening, that was a path to the type of things that we heard those tech CEOs talking about in the clips at the beginning of this episode. 00:30:14.640 |
Almost all at once, the scaling strategy stopped working. 00:30:19.920 |
And this is the piece of the narrative that we lost the thread of. 00:30:25.920 |
The tech CEOs knew it and pretended like it wasn't happening. 00:30:28.360 |
But the rest of us didn't really know so clearly this was going on. 00:30:34.540 |
So here's what happened, according to the publication The Information. Remember, GPT-4 came out in the spring of 2023. 00:30:41.480 |
By the spring of 2024, Altman was telling employees that their next major model, which was GPT-5, but they were codenaming it Orion back then, was going to be significantly better than GPT-4. 00:30:53.780 |
They'd been training it for months and months and months. 00:30:55.680 |
And they were like, this is going to be awesome. 00:30:57.660 |
But by the fall of 2024, it became clear that the results of making this much bigger model were disappointing. 00:31:08.380 |
While Orion's performance ended up exceeding that of prior models, the increase in quality was far smaller compared with the jump between GPT-3 and GPT-4. 00:31:20.520 |
So like this is where the wah-wah music starts to play in the AI story. 00:31:24.720 |
When they tried to do the same trick for a third time, they didn't get the same applause. 00:31:31.900 |
They trained this thing for months with huge data centers and it only got somewhat better. 00:31:37.040 |
OpenAI was not the only company to have these problems. 00:31:39.700 |
Meta, you know, if you were following them closely in the AI industry, they were going to build this massive model that was going to be the next leap that was going to get them back ahead again in AI. 00:31:50.060 |
They called it Behemoth because it was that big. 00:31:52.600 |
Well, earlier this year, they announced they're going to delay releasing Behemoth because when they finished training it, guess what? 00:31:58.620 |
It wasn't performing much better than the models they had before. 00:32:02.700 |
Just throwing a lot more size and compute at it wasn't having the same effects. 00:32:14.260 |
xAI ran into the same thing with this Colossus supercomputer, data center, whatever you want to call it, cluster, computing cluster. 00:32:23.280 |
These are like state-of-the-art chips you can use to train on. 00:32:33.560 |
And the idea was, this thing is going to be so big and we're going to train it so hard, 00:32:39.020 |
we're just going to sort of do a bank account measuring contest and come out ahead here. 00:32:45.920 |
My sources said it's probably somewhere between 5 to 10x the amount of computing resources that went into GPT-4. 00:32:55.300 |
Grok 3 was okay, but it was sort of like the other models around at the time. 00:33:00.980 |
No leap like we saw with GPT-3, no leap like we saw with GPT-4. 00:33:05.400 |
People in the industry were noticing this, just we weren't noticing them talking about it. 00:33:09.280 |
Here's Ilya Sutskever, a co-founder of OpenAI who left and has some other thoughts I could get into, but he was onto this. 00:33:20.560 |
Now we're back in the age of wonder and discovery once again. 00:33:25.000 |
A TechCrunch article from last fall summarized the general mood as follows. 00:33:30.220 |
Everyone now seems to be admitting you can't just use more compute and more data while pre-training large language models and expect them to turn into some sort of all-knowing digital god. 00:33:39.720 |
So this happened, but the general public, and I think a lot of the technology media, wasn't really cued in to how important this was. 00:33:49.080 |
All of that excitement about AI controlling the world and automating everything, we need to tax the three companies that are going to be left so that we don't starve. 00:33:59.460 |
That all came from the assumption that these models would keep moving up that scaling curve as we made these things bigger. 00:34:06.720 |
But that stopped working, and everyone tried to make it work, and no one could get it to work, and this was a big problem. 00:34:15.340 |
And by the last summer and certainly by the fall of 2024, we knew this. 00:34:22.300 |
Well, the companies weren't going to say, like, look, guys, we oversold this. 00:34:24.480 |
We put a lot of money into building the Colossus cluster and spent all this money on these giant data centers, but this isn't making much better models. 00:34:34.720 |
So they said, we've got to find something to replace this technique we were doing of just making everything bigger that can give us some sort of improvements, because we need momentum. 00:34:43.040 |
We have all these investors writing checks, and we need new investors to keep writing checks so the old investors don't get upset. 00:34:48.720 |
So we need something to keep the momentum going here. 00:34:53.440 |
There was a shift to a new storyline about how AI was going to continue to improve that was different from the original scaling storyline. 00:34:59.960 |
I'm going to return to NVIDIA's Jensen Huang here to explain. 00:35:05.740 |
He's going to explain here the strategy they came up with after scaling failed. 00:35:12.400 |
But there are, in fact, two other scaling laws that have now emerged, and it's somewhat intuitive. 00:35:21.580 |
The second scaling law is post-training scaling law. 00:35:25.420 |
Post-training scaling law uses techniques like reinforcement learning with human feedback. 00:35:30.740 |
Basically, the AI produces and generates answers based on a human query. 00:35:38.720 |
The human then, of course, gives it feedback. 00:35:41.140 |
It's much more complicated than that, but that reinforcement learning system with a fair number of very high-quality prompts causes the AI to 00:35:52.220 |
It could fine-tune its skills for particular domains. 00:35:56.380 |
It could be better at solving math problems, better at reasoning, so on and so forth. 00:36:00.780 |
And so it's essentially like having a mentor or having a coach give you feedback after you're done going to school. 00:36:09.780 |
And so you get tests, you get feedback, you improve yourself. 00:36:12.960 |
We also have reinforcement learning AI feedback, and we have synthetic data generation. 00:36:18.860 |
These techniques are rather akin to, if you will, self-practice. 00:36:26.620 |
You know the answer to a particular problem, and you continue to try it until you get it right. 00:36:32.860 |
And so an AI could be presented with a very complicated and a difficult problem that is verifiable functionally, and it has an answer that we understand. 00:36:48.100 |
And so these problems would cause the AI to produce answers, and using reinforcement learning, you would learn how to improve itself. 00:37:00.920 |
I'm going to cut it off there because he's geeking out a little bit, Jesse. 00:37:06.700 |
You got to see, by the way, we don't have video of him that we're playing, but can you see the video in front of you? 00:37:17.720 |
His jacket is a mix between lizard skin and rhinestones. 00:37:26.000 |
I think this is their way of, like, we don't want him to seem like Mark Zuckerberg. 00:37:28.820 |
He's like, well, what if, hear me out, what if we have a diamond-studded lizard skin jacket on? 00:37:36.100 |
All right, so what was he geeking out there about? 00:37:38.240 |
What he was geeking out there, and I think, by the way, it's intentional that he's being so technical 00:37:42.680 |
because it makes everyone else be like, yeah, they know what they're talking about. 00:37:47.200 |
That seems about right because it's complicated. 00:37:48.800 |
What he's talking about is what they replaced the original scaling with. 00:37:53.720 |
I'm going to be less technical than Jensen, but let me just give you a couple terms here. 00:37:59.100 |
The thing that we were doing before, the thing that gave us GPT-3 and GPT-4 and then failed to give us those other great big leap models, 00:38:09.600 |
that type of training is called pre-training, and that's what they were scaling up. 00:38:15.040 |
That's what the original scaling law says: bigger models, more data, more compute is going to make you better. 00:38:20.640 |
What Jensen's talking about is post-training, and what you do here is you take a model that has already been pre-trained. 00:38:27.780 |
So you've already done the pre-training, and pre-training is unsupervised. 00:38:31.180 |
It's where you get a ton of real text, and you chop it off in random places, and you ask the model what word comes right after where you chopped it off, 00:38:39.420 |
and then it guesses, and you know the real word because you're the one who chopped off the writing, and then you adjust its weights. 00:38:43.720 |
You do this with, like, all the text on the Internet, and the models get really smart. 00:38:47.720 |
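For readers who want to see that loop spelled out, here is a minimal sketch of the idea in code. The model, corpus, and sizes are toy stand-ins I made up for illustration; a real run uses a transformer trained on trillions of tokens, but the chop-guess-adjust cycle is the same.

```python
# Minimal sketch of the pre-training loop described above: grab a window of text,
# ask the model for the next token, and nudge the weights toward the token that
# actually came next. Everything here (model, corpus, sizes) is a toy stand-in.
import torch
import torch.nn as nn

vocab_size, context_len, batch_size = 1000, 32, 16

# Stand-in for "all the text on the Internet": random token ids.
corpus = torch.randint(0, vocab_size, (10_000,))

# A tiny stand-in language model (a real one would be a transformer).
model = nn.Sequential(
    nn.Embedding(vocab_size, 64),
    nn.Flatten(),
    nn.Linear(context_len * 64, vocab_size),
)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

for step in range(100):
    # "Chop the text off in random places": a window of context plus the real next token.
    starts = torch.randint(0, len(corpus) - context_len - 1, (batch_size,)).tolist()
    contexts = torch.stack([corpus[s : s + context_len] for s in starts])
    next_tokens = torch.stack([corpus[s + context_len] for s in starts])

    # The model guesses what comes next; we know the real answer.
    logits = model(contexts)
    loss = loss_fn(logits, next_tokens)

    # "Adjust its weights" toward the real next token.
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```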
Post-training takes a model that you've already done that to. 00:38:50.560 |
So basically you take, like, GPT-4, like a model that you've already done that pre-training, and you said, 00:38:54.740 |
we're now going to tune you to do certain things better. 00:39:00.760 |
We can give you, just learn a bunch of stuff. 00:39:02.780 |
Post-training was, now we're going to take particular things we want you to be better at, 00:39:07.160 |
and we're going to try to make you better at it. 00:39:09.760 |
And we're going to use a new machine learning technique within the world of language models, 00:39:13.420 |
reinforcement learning, which is an old technique, but they applied it to language models. 00:39:16.680 |
And we're going to use that to sort of fry and mess around with your wiring for specific tasks. 00:39:22.540 |
So we're going to take the smarts you already got during pre-training, 00:39:25.140 |
and we're going to make you somewhat better at applying them. 00:39:27.140 |
And this requires, he was talking about synthetic data sets or this or that. 00:39:31.520 |
It requires you have, like, special post-training data sets, which isn't just text, like, in the pre-training, 00:39:36.380 |
but you have a question and the right answer, and so you can, like, zap it to get it closer to the right answer. 00:39:41.620 |
There's all sorts of different types of post-training. 00:39:44.180 |
But this is what they realized we could do, is, like, we'll take a model that's already trained, 00:39:50.200 |
and then we'll try to make it, use its smarts better on particular types of applications. 00:39:54.600 |
It's much more, like, bespoke, custom, let's come in and soup it up here and soup it up there. 00:40:01.000 |
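To make the "zap it toward the right answer" idea concrete, here is a minimal sketch of a reinforcement-style update on problems with verifiable answers. The policy here is a tiny toy network rather than a pre-trained language model, and the arithmetic task is invented for illustration; in a real post-training pipeline you would start from the big pre-trained model, and the verifier would check things like whether code compiles or a math answer is correct.

```python
# Minimal REINFORCE-style sketch of post-training on verifiable problems:
# sample an answer, check it, and reinforce the behavior when it was right.
# The "model" is a toy policy, not a real LLM; everything here is illustrative.
import torch
import torch.nn as nn

def make_problem():
    # Toy verifiable task: add two small numbers; the correct answer is a class id 0..8.
    a, b = torch.randint(0, 5, (2,))
    return torch.stack([a, b]).float(), int(a + b)

policy = nn.Sequential(nn.Linear(2, 64), nn.ReLU(), nn.Linear(64, 9))
optimizer = torch.optim.Adam(policy.parameters(), lr=1e-2)

for step in range(2000):
    prompt, correct = make_problem()
    dist = torch.distributions.Categorical(logits=policy(prompt))

    answer = dist.sample()                              # the model produces an answer
    reward = 1.0 if answer.item() == correct else 0.0   # the verifier checks it

    # Reinforce answers that turned out to be right (no update signal otherwise).
    loss = -reward * dist.log_prob(answer)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```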
Jensen goes on to talk about it, but I kind of got tired of hearing him. 00:40:03.360 |
But he goes on in that same talk and says there's a third scaling law; there are different terms, different words, for this. 00:40:09.160 |
You could call it test time compute or inference time compute. 00:40:11.600 |
But basically, you could think of it as, like, other clever things we can do with the models we already trained. 00:40:16.540 |
So, for example, you could have the model, you know, you could tell it to spend more time thinking on questions that are harder. 00:40:24.800 |
Same model, but you spend more time thinking when the questions are harder, you get a better answer. 00:40:29.760 |
Or you could say, here's what we're going to do. 00:40:32.680 |
We'll ask the model a bunch of times to answer the question, and then we'll look at the answers and be like, which one comes up more often? 00:40:38.260 |
Like, that's going to give you better performance, right? 00:40:41.540 |
Or we'll have one model look at your question, and then that model's whole job is just to say, 00:40:47.560 |
what type of model, like, what type of these different models we've tuned up in different ways would be best for answering this question? 00:40:53.880 |
And then we'll send it to that model, and then we'll get a better answer. 00:40:56.040 |
So there's also all of these other things that were about not necessarily changing how the models, their definitions or what they learned, 00:41:02.560 |
but just using them in different ways to try to squeeze more performance out of them. 00:41:07.180 |
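Here is a minimal sketch of that "ask it a bunch of times and take the most common answer" trick, sometimes called self-consistency. The ask_model function is a made-up placeholder standing in for whatever model API you would actually sample from; the voting logic is the point.

```python
# Minimal sketch of majority-vote test-time compute: sample the same model several
# times and return the most common answer. `ask_model` is a hypothetical placeholder.
from collections import Counter
import random

def ask_model(question: str) -> str:
    # Placeholder: a real call would sample an LLM at some nonzero temperature.
    return random.choice(["42", "42", "42", "41", "43"])

def answer_with_voting(question: str, samples: int = 9) -> str:
    answers = [ask_model(question) for _ in range(samples)]
    # Same model, more compute at answer time: the most frequent answer wins.
    return Counter(answers).most_common(1)[0][0]

print(answer_with_voting("What is 6 * 7?"))
```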
So we have these two different scaling laws, but in my New Yorker article, I call that whole thing post-training. 00:41:11.920 |
So it's taking a model you already made smart by pre-training and try to make it use those smarts better for particular things. 00:41:18.460 |
Here's a metaphor I had in my New Yorker article to help explain this. 00:41:24.900 |
Pre-training can be said to produce the vehicle. 00:41:30.660 |
In the scaling law paper, Kaplan and his co-authors predicted that as you expand the pre-training process, 00:41:35.900 |
you increase the power of the cars you produce. 00:41:38.620 |
If GPT-3 was a sedan, GPT-4 was a sports car. 00:41:42.380 |
But once this progression faltered, the industry turned its attention to helping the cars they had already built to perform better. 00:41:50.320 |
Post-training techniques turned engineers into mechanics. 00:41:54.580 |
And this is what has been going on since roughly the late fall of 2024. 00:41:59.040 |
All of those really confusingly named models that OpenAI has put out, like, well, here's o1, here's o3, here's o3-mini-high, here's o4-mini, o4-nano, o4-nano-high, Pokemon, Beelzevar. 00:42:13.360 |
These are all just different bespoke combinations of post-training techniques applied to models. 00:42:22.020 |
And they began to turn their attention a lot more towards benchmarks. 00:42:26.280 |
Remember that clip from the Microsoft researcher talking about GPT-4? 00:42:35.380 |
He said it's not just that we're 20% better on some benchmark. 00:42:41.440 |
Well, guess what they started saying about the models after the scaling law failed? 00:42:45.040 |
Hey, this thing's 20% better on some benchmark. 00:42:47.720 |
That became the way they were bragging about these models. 00:42:51.920 |
If you have these like very specific tests and you have a way to take a model and try to get it better at specific tasks, well, then you can just start making these models better at like whatever the tests were. 00:43:03.180 |
Oh, look, we have a test of step-by-step reasoning and it does really well. 00:43:08.940 |
Yeah, because we post-trained this version of GPT-4 to like do these type of questions really well. 00:43:16.020 |
We left the world of just we trained it twice as long. 00:43:20.760 |
And when we came back, the baby was doing quantum physics. 00:43:25.080 |
And now we're like very like insistently post-training, very nuanced, specific little improvements. 00:43:37.760 |
This is why we had some jumps forward in computer programming because that's very well suited towards post-training because you have – it's easy to generate a question and know the right answer like does this code compile and work. 00:43:47.820 |
And so you could get a lot out of the existing smarts by post-training on computer programming. 00:43:53.600 |
Certain types of math problems, they have right answers. 00:43:56.620 |
So we could have good synthetic data sets and we got some results there. 00:43:59.300 |
Some of these compute time things like let's spin longer on certain responses. 00:44:04.160 |
Yeah, you get – you spin more compute, you can get better answers. 00:44:07.740 |
It's not really practically – it's too expensive to be practical. 00:44:10.980 |
But these things could make it do better on benchmarks and some stuff that was useful in practice. 00:44:16.460 |
Things like deep research require these sorts of test-time compute techniques, where an LLM will break down the task into multiple steps. 00:44:23.860 |
That's its answer, and then another LLM takes each of those steps and does it, and then a third LLM looks at the answers and puts it together. 00:44:29.120 |
So using the LLMs more dynamically could get some cool stuff as well. 00:44:34.240 |
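A minimal sketch of that plan-execute-combine pattern is below. The call_llm function is a made-up placeholder for a real model call; the point is just how one model's output becomes the scaffolding for the next calls.

```python
# Minimal sketch of the multi-call pipeline described above: one call plans the steps,
# one call carries out each step, and one call synthesizes the findings.
# `call_llm` is a hypothetical placeholder, not a real API.
def call_llm(prompt: str) -> str:
    return f"<model output for: {prompt[:40]}...>"  # placeholder response

def deep_research(question: str) -> str:
    plan = call_llm(f"Break this task into numbered steps: {question}")
    steps = [line for line in plan.splitlines() if line.strip()]
    findings = [call_llm(f"Carry out this step and report back: {step}") for step in steps]
    return call_llm("Combine these findings into one answer:\n" + "\n".join(findings))

print(deep_research("Summarize the current evidence on AI coding tools and productivity."))
```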
We were off that path that was supposed to lead us to AGI. 00:44:38.700 |
We left the scaling law path, right, which said we got a sedan, then we got a sports car, and then we were going to get an F1 car. 00:44:47.400 |
We had a bunch of Camrys and we just started saying how can we soup these things up? 00:44:52.980 |
I put like a system on the exhaust and we got 20 more horsepower out of it. 00:44:59.040 |
That is why it's been so confusing to understand the models starting in 2025. 00:45:03.840 |
That is why if you look at the announcement page for GPT-5 on OpenAI, I counted this. 00:45:10.340 |
It has 26 different bar charts and line graphs showing these inscrutably named benchmarks with things moving. 00:45:17.180 |
You didn't need that to know that GPT-4 was amazing. 00:45:23.240 |
You could give it math problems and it did well. 00:45:25.140 |
People just started giving GPT-4 problems from standardized tests and it was like just doing really well on them. 00:45:30.580 |
Now we have a bar chart showing a 4% increase on some sort of benchmark metric that probably one of the AI companies themselves came up with. 00:45:41.380 |
We weren't going to get the AGI, but we pretended like this post-training stuff was just as exciting. 00:45:46.320 |
And a lot of us were going along with it because, I don't know, we didn't know what had changed. 00:45:54.320 |
That car metaphor took me and my editor a while to figure out. 00:45:58.760 |
It's not easy to explain these things until GPT-5 came along. 00:46:03.100 |
And then people stepped back and said, okay, we were going along with this for a while. 00:46:07.640 |
I didn't know why these were getting better, but these had weird names and we thought you were maybe working on particular features. 00:46:12.520 |
But when you give us the next big number, you had promised for years this thing was going to feel to us like GPT-4 felt and like GPT-3 felt before. 00:46:20.520 |
And when it didn't, that's when people said, I don't care anymore that you get a 16% increase on some benchmark whose name I can't understand. 00:46:27.480 |
I'm beginning to suspect that the emperor is not wearing nearly as much clothes as I once thought that he was. 00:46:36.060 |
So that, I mean, I know this gets technical, Jesse, but like that is, that's what started happening here. 00:46:41.060 |
We shifted from pre-training, which was amazing. 00:46:44.100 |
And then when it stopped being amazing, we replaced it with stuff that wasn't so exciting. 00:46:51.320 |
I've been getting a lot of good feedback on the car metaphor. 00:46:57.700 |
And I walked away, came back, and said, I think car works. 00:47:00.100 |
And then, so I didn't, you know, I didn't know. 00:47:06.060 |
The post in the term post-training clearly indicates that this is in a separate phase of the training cycle of the transformer-based language models. 00:47:15.840 |
So it's good sometimes to have a non-computer scientist check you. 00:47:22.380 |
Now that we know what actually happened, what should we expect for the future? This brings us to part four: what the future is more likely to actually hold. 00:47:32.460 |
I want to read here from the concluding section of my New Yorker article. 00:47:37.880 |
If this more moderate view of AI is right, then in the next few years, AI tools will make steady but gradual advances. 00:47:45.940 |
Many people will use AI on a regular but limited basis, whether to look up information or to speed along certain annoying tasks, such as summarizing a report or writing the rough draft of an event agenda. 00:47:56.040 |
Certain fields like programming and academia will change dramatically. 00:47:59.520 |
A minority of professions, such as voice acting and social media copywriting, might essentially disappear. 00:48:05.100 |
But AI may not massively disrupt the job market, and more hyperbolic claims like superintelligence may come to seem unserious. 00:48:16.680 |
Now, I went on to actually quote Ed Zitron because he argues AI hype might actually have introduced some new perils. 00:48:25.540 |
There's some pretty sobering financial numbers here. 00:48:28.320 |
According to Zitron's analysis, about 35% of the U.S. stock market value, therefore we're talking like a large share of your retirement portfolio, is currently tied up in the so-called Magnificent Seven technology companies. 00:48:40.400 |
According to Zitron's analysis, those firms spent around $560 billion on AI-related capital expenditures in the past 18 months, while their AI revenues during this period were only around, as he mentioned in the clip before, $35 billion. 00:48:56.180 |
"When you look at these numbers, you feel insane," Zitron told me when I interviewed him. 00:49:02.660 |
So it's possible we got some cool uses for the AI we have now. 00:49:06.540 |
There will be some more cool uses to come out. 00:49:11.820 |
I think we're going to get a lot more customized tools. 00:49:13.860 |
So a lot more people are more likely to find a customized tool that's built on this type of language model technology that is really useful to them and makes their life really cool. 00:49:24.660 |
It is not like a college-educated entry-level worker, 00:49:28.260 |
like Amodei said, and ideas like superintelligence are completely unserious on our current technological trajectory. 00:49:35.340 |
So in other words, yes, the AI we have now may be as good as it's going to get, at least for a while. 00:49:42.020 |
Now, does this mean we can permanently stop thinking about AI? 00:49:45.780 |
So some of the people I talked to for this article, like Gary Marcus, had correctly called this from the beginning. 00:49:58.900 |
If you talk to him thinking you're going to get a lot of relief about I can now watch Terminator 2 with impunity and smoke a cigar and kick my feet up and not worry about that and not bother doing Sarah Connor's jail cell pull-ups to get ready for the robot apocalypse. 00:50:14.880 |
If you think he's going to give you that sort of certainty, think again. 00:50:18.420 |
Because if you talk to Gary, he says, oh, yeah, language models, this is kind of a dead end. 00:50:22.240 |
Like what we have now is kind of what they're good at. 00:50:24.680 |
But he's like, I don't know that it's going to be that much longer until we do get to something like artificial general intelligence. 00:50:30.900 |
It's going to take a lot more other technologies. 00:50:34.800 |
But he's like, yeah, there's some new breakthroughs we need. 00:50:45.280 |
But it doesn't mean that we never have to worry about AI again. 00:50:49.560 |
I could, however, do a whole other episode about my computer scientist thoughts on the likely trajectories towards much more powerful AI. 00:51:00.940 |
It's going to be less about – this is what was scary about the language model scaling model. 00:51:04.900 |
It's going to be less about this one type of technology where we just turn one knob is going to eventually just, whoa, this one thing can do everything. 00:51:15.100 |
This system got good at this and it took a lot of bespoke work. 00:51:22.260 |
It's going to be more of this gradualism of, you know, now that I think about it, there are so many things that humans used to do that we each have a different system for that can do it really well. 00:51:33.100 |
But that's going to be a much slower moving disruption than we feared from the scaling of language models. 00:51:38.420 |
Because, again, that was going to be like in a year or two, this technology, this one right here, just bigger, is going to do all your jobs. 00:51:44.560 |
It's going to be more gradual and fragmented than that. 00:51:47.160 |
But we can't stop thinking about the impacts of AI. 00:51:52.120 |
We have to think about from a regulatory perspective, from an economic perspective, and from an ethical perspective. 00:52:00.260 |
I direct the computer science ethics and society major at Georgetown. 00:52:04.060 |
That's the first integrated computer science ethics major in the country. 00:52:13.460 |
But at least now we have some time to get there, but maybe not as much time as we might have hoped. 00:52:25.440 |
The venture capitalists and the tech CEOs weren't bond villains at the beginning of this story. 00:52:31.200 |
It really was that exciting what we were seeing up through GPT-4. 00:52:35.160 |
It really was a bummer when the pre-training scaling stopped making those same leaps. 00:52:40.380 |
Where we get into some behavior that I don't love is what happened next, which was the companies started waving their hands really quickly. 00:52:49.140 |
Post-training is going to be just as cool, even though people could clearly see bar charts moving are not cool. 00:52:56.240 |
They waved their hands really wildly and hoped that we wouldn't notice. 00:52:58.640 |
A lot of people who were covering technology went along with that. 00:53:01.580 |
And now the bill has come due of, like, oh, this stopped working back in the summer of 2024, didn't it? 00:53:07.480 |
And this new thing is just you polishing up the Camry you already have. 00:53:12.340 |
And I want you to make the Camry better because, like, it's really useful for going to get groceries. 00:53:20.520 |
But the Camry is not going to take over half of new knowledge work jobs. 00:53:25.140 |
That metaphor is kind of mixing up a little bit, but we don't have to worry about that. 00:53:28.560 |
You can read – I wrote about this for my newsletter. 00:53:30.700 |
So, if you want a sort of summary of my New Yorker article, calnewport.com, subscribe to it. 00:53:34.320 |
The actual New Yorker article is called What If This Is As Good As AI Is Going To Get? 00:53:39.240 |
Or What If AI Doesn't Get Much Better Than This? 00:53:41.460 |
There's so many different ways of saying that. 00:53:42.780 |
But just my name in New Yorker, you'll find that article. 00:53:47.660 |
But that is my epic tale of what's happening with AI. 00:53:54.360 |
Did you see in a recent Andrew Ross Sorkin DealBook email that Elon and Zuckerberg tried to buy OpenAI for $97 billion back in February? 00:54:08.820 |
But now it's worth $500 billion at the recent valuation? 00:54:14.800 |
That $97 billion might be the right price a year from now. 00:54:22.260 |
What's interesting, if you watch, because I follow the industry, is that they turned on a dime. 00:54:28.720 |
Not only did the coverage turn on a dime, like this – my article came out. 00:54:31.360 |
There was a couple other articles just like this, like right after GPT-5. 00:54:35.740 |
And like every publication was like, what's going on with this technology? 00:54:40.920 |
And then the CEOs all turned on a dime in the aftermath of my article and similar articles. The other big thing that happened is that Fortune magazine resurfaced this MIT report, which I think is important because it had already been out for a month. 00:55:00.600 |
The researchers studied 300 actual companies that were trying to use generative AI to make their businesses better. 00:55:08.240 |
And they found that 95% of the cases were failures and they just turned it off. 00:55:12.920 |
And Fortune mentioned that report again in an article, and it went viral. 00:55:24.320 |
People were asking because they were seeing these articles. 00:55:36.820 |
And people were just like putting these things together like, oh my god, AI is – I think AI, I'm not quite sure, but I think AI is murdering computer science majors. 00:55:48.580 |
And then that article, that research paper got resurfaced and people were like, wait, whoa, whoa. 00:55:53.980 |
Who is actually like replacing people with this technology? 00:55:58.160 |
Everyone's looking around like, I don't think it's me, wasn't it them over there that was doing it? 00:56:01.420 |
Like, what, over here? They're doing it, right? 00:56:03.180 |
And they couldn't actually find anyone who was doing this. 00:56:05.660 |
But the reason why I said the timing is important is that paper was out a month ago. 00:56:09.280 |
But a month ago, if you were out there saying what I said, people would think you were certifiable. 00:56:16.780 |
You're like, well, couldn't you point to that paper? No, that paper got no coverage. 00:56:20.340 |
But after GPT-5's failure, like suddenly people were like, maybe we should pay attention to it. 00:56:26.840 |
And now the CEOs have all come out last week and been like, yeah, it's a bubble. 00:56:33.280 |
The day that GPT-5 came out, Altman said this is a key step towards AGI. 00:56:40.620 |
He says, AGI – I'm paraphrasing, but basically, what does that word even mean? 00:56:51.880 |
You were saying like you were on your way there because of GPT-5. 00:56:55.500 |
And then after all of this came out, he's like, it's not really about AGI. 00:56:59.540 |
It's really about the fun we have along the way. 00:57:02.020 |
It's not about can our technology make you a lot of money and be worth the money you're investing? 00:57:11.540 |
The right question is, did we have a lot of fun? Right? We had fun on the journey. 00:57:18.220 |
The whole – all the coverage, everything turned on a dime. 00:57:22.220 |
Like because, you know, in my own sort of like neurodiverse computer science way, I was never swayed by hype. 00:57:32.360 |
Why do they – what's the architecture going to be to do this? 00:57:34.520 |
So I was never that impressed with any of that talk. 00:57:44.040 |
It reminds me of like when I was not using social media originally. 00:57:47.060 |
And people were like, you are literally the devil. 00:57:49.360 |
And now they're like, who would use social media? 00:57:56.380 |
I'm not that impressed by like – I don't use Twitter. 00:58:03.620 |
This architecture can't lead to these things they're saying. 00:58:10.660 |
You don't – see, this here automobile is like a horse. 00:58:15.940 |
But it's like a horse that you don't have to feed hay. 00:58:18.140 |
Like they were talking to me like, you know, I didn't understand the new technology or whatever. 00:58:23.320 |
And now everyone's like, I never thought that AI was going to be that big a deal. I knew that. 00:58:29.360 |
Reinforcement learning-based fine-tuning techniques are good for maybe guardrails. 00:58:34.580 |
But they're not going to produce substantive leaps in underlying cognitive capabilities. 00:58:41.120 |
When you went to OpenAI, did you meet Sam a couple years ago? 00:58:48.320 |
I don't, I'm sure he doesn't know who I am unless he read that New Yorker piece, and I'm sure he hasn't. There's a lot of people out there. 00:58:56.400 |
I'm not – but, you know, there's a whole class of technocritics that are just like I'm a professional technocritic. 00:59:03.380 |
And I just think all these people are like evil and bad. 00:59:06.200 |
And I'll just shift what I'm upset at them about by whatever like the current, you know, whatever the current idea is within my like particular critical discipline. 00:59:22.220 |
I, you know, I'm not interested in disliking people for the sake of disliking people, or in who the bad guys are and who the good guys are. 00:59:29.080 |
I just call it as I see it. I'm interested in the actual technologies. 00:59:31.920 |
Like I think this is a really interesting technology. 00:59:34.020 |
I think the tech CEOs got way out over their skis and were being disingenuous in the things they were saying about it, and they scared and tricked a lot of people. 00:59:44.780 |
But I was very vocal against the blockchain movement because, you know, I got my doctorate in the theory of distributed systems group at MIT. 00:59:58.660 |
And that movement was all about building distributed systems on top of blockchains. 01:00:01.740 |
I was like, I can just tell you from a technical perspective, this is a dumb idea. 01:00:06.520 |
You're building worse versions of products that could be easily built right now, based on some sort of hazy techno-libertarian promise that full decentralization will prevent centralized control of, well, people don't care. 01:00:19.380 |
Like we know how to build distributed systems. 01:00:21.980 |
You can just spin up a MySQL instance in, like, an Amazon cloud somewhere, and it costs you hardly any money, and it's never going to go down, and it's fine. 01:00:28.380 |
And people are okay with the possibility that Google is, like, evilly in the middle tricking them. They know they're not. Just give us a service that works. 01:00:34.840 |
Anyways, but I got yelled at a lot for that as well. 01:00:41.380 |
I love – I think language model technology is really cool. 01:00:43.620 |
I just think it's more narrow than they were letting on. 01:00:45.920 |
I'm really interested in what comes next with AI, the types of more complicated models, multimode type models. 01:00:53.660 |
It's not multimodal in the language model sense, but multi-mode in the sense that you have different types of system components, each architecturally unique, that work together. 01:01:01.320 |
I think there's cool breakthroughs that are coming and this might have helped get people investing in it again. 01:01:05.540 |
So I don't think like this industry is bad or all these people are bad. 01:01:09.940 |
But I do think that what they did by trying to keep the hype alive after the bad stuff happened is going to have some negative reverberations. 01:01:23.680 |
We got – we have a shorter version of the rest of the show today because I know you've been with me for a lot here. 01:01:29.480 |
But I just wanted to sort of get all this off my chest. 01:01:31.300 |
So we got a few questions at a call and then I have something I want to react to later. 01:01:34.520 |
But first, I want to do what you really tuned into this show for, which was to hear from one of our sponsors. 01:01:40.580 |
Jesse, I want to talk today about Cozy Earth. 01:01:44.480 |
As listeners know, I'm a huge fan of their bamboo sheets. 01:01:46.500 |
They're the most comfortable sheets I've ever owned. 01:01:48.340 |
The fabric is soft in a way that's just better than the other sheets I have used. 01:01:57.260 |
So having the Cozy Earth sheets is a big deal. 01:02:10.640 |
They have that same comfort you get from their sheets but with you all the time. 01:02:16.280 |
This shirt I am wearing right now, Jesse, this is true. 01:02:18.880 |
If you're watching rather than listening, this shirt I am wearing right now is a Cozy Earth shirt. 01:02:30.080 |
As I was walking to the HQ, there was a convertible driving by. 01:02:42.480 |
They were sort of, like, bouncing a beach ball around in the back of the convertible, 01:02:53.020 |
And they were like, whoa, you know, like doing like the exaggerated glide like that. 01:02:59.080 |
And the driver turned around like, that is a nice shirt. 01:03:03.260 |
Tragically, because of that, they crashed into a cement mixer. 01:03:10.660 |
And there were some pretty serious compound fractures in the back row. 01:03:12.760 |
But the key point is they thought this shirt looked good. 01:03:26.760 |
Cozy Earth provides you comfort that shows up day in and day out. 01:03:37.900 |
You'll love the sheets, but they have all the guarantees. 01:03:41.160 |
So go to CozyEarth.com slash deep to get 40% off the softest bedding, bath, and apparel. 01:03:47.020 |
And if you get a post-purchase survey, tell them that you heard about Cozy Earth right here. 01:03:58.500 |
I also want to talk about our friends at Grammarly. 01:04:00.560 |
That's one of the oldest and dearest sponsors we've had on the show. 01:04:03.340 |
Grammarly has been with us since almost the beginning of this podcast for good reason. 01:04:08.080 |
From emails to reports and project proposals, it's more challenging than ever to meet the 01:04:12.840 |
demands of today's competing priorities without some help. 01:04:16.060 |
Grammarly is the essential AI communication assistant that boosts productivity so you can 01:04:20.440 |
get more of what you need done faster no matter what or where you're writing. 01:04:26.380 |
Unlike the grand claims that AI is going to automate the whole economy, where AI is actually 01:04:32.060 |
having an impact right now is where you build bespoke tools that focus on the things that 01:04:36.960 |
are at the intersection of what matters to us and what language model-based AI is good at. 01:04:40.620 |
Working with, understanding, and producing text is what these models are exceptional at. 01:04:46.740 |
And Grammarly has integrated this power, this focused application of AI, very well into their product. 01:04:55.480 |
So you can ask about the tone of what you just wrote before you email it to your boss, or ask it questions about your writing. 01:05:03.440 |
Those of us who write for a living take for granted the subtlety in, like, does this sound right? 01:05:13.240 |
Now you have an AI-powered assistant that can help you with that right there. 01:05:17.260 |
Maybe you want a new way of stating something, or, like, hey, can you give me a couple of variations on this? 01:05:21.120 |
Right there in where you're writing, Grammarly helps you out. 01:05:23.780 |
93% of professionals reported that Grammarly helps them get more work done. 01:05:30.000 |
The other 7%, I assume, works for a company that puts like folksy sayings on samplers. 01:05:36.360 |
So they need bad grammar because, you know, it's like, I ain't done blessing my house or 01:05:41.560 |
Like they don't need Grammarly because you need the grammar to be bad. 01:05:43.840 |
But everyone else, 93% of workers do better with it. 01:05:46.960 |
All right, I made the last part up about the 7%, but the 93% is true and it's pretty impressive. 01:05:50.920 |
So let Grammarly take the busy work off your plate so you can focus on high impact work. 01:05:55.400 |
Download Grammarly for free at grammarly.com slash podcast. 01:06:02.080 |
So I'm kind of hoping this isn't one of those weeks where the sponsors are, like, carefully reviewing the read. 01:06:07.380 |
We're going to get the next Cozy Earth script notes and there'll be a highlighted section 01:06:13.360 |
that says, try to avoid implying that our product led to the decapitation of several visitors. 01:06:29.860 |
I've been a software engineer for the past four years. 01:06:32.320 |
I'm concerned that AI will leave me unemployed in the next 10 years. 01:06:35.440 |
I'm considering doing an online master's in computer science. 01:06:38.220 |
Should I stay put, pursue this master's or potentially pursue another income stream? 01:06:43.660 |
There's two separate things going on in this question. 01:06:46.600 |
One is just a computer science question: I have a computer science-related career path. 01:06:55.780 |
If I want to do that, what's the right way in? 01:06:57.320 |
Master's, no master's, maybe beyond master's. 01:06:59.960 |
Like what's the right way into that career stream? 01:07:04.440 |
The other part of the question is saying, I'm concerned AI will leave me unemployed in 01:07:09.580 |
I mean, look, I can't predict 10 years from now. 01:07:13.480 |
But as I argued in the deep dive, the technological trajectory we're on right now is not going to put software developers out of work. 01:07:23.060 |
Again, I think a lot of the reporting on this has been atrocious. 01:07:26.300 |
It's a lot of, as I keep saying, mixing job numbers that are very real, the tech sector contracting, 01:07:37.620 |
and they keep saying, AI, AI, in the middle of this. 01:07:43.660 |
They're not laying off jobs at Microsoft and Amazon because people are being replaced by AI, 01:07:47.820 |
even though, for whatever reason, there are some reporters out there who really want you to believe that. 01:07:55.240 |
AI is not about to replace all software developers. 01:08:00.820 |
There's actually, for computer science, and this is very specific to computer science, 01:08:04.660 |
a lot of thinking about this, because we're geeks, so we think 01:08:07.480 |
very carefully about salary expectations, right? 01:08:14.720 |
Each higher-level degree you have increases the level of position at which you can enter a technology company. 01:08:24.040 |
So like a master's degree, there might be a higher level job you can start with. 01:08:28.340 |
And compared to coming in with just a bachelor's degree, with a doctorate, if you go to work for 01:08:32.120 |
Google, you would come in at an even higher level, with more salary. 01:08:37.340 |
But on the other hand, it takes more time to earn these. 01:08:39.920 |
So you have to sort of do this trade-off between I'm not making money for the years it takes 01:08:43.800 |
to get this degree, but then I make more when I get there. 01:08:48.020 |
The folk wisdom on this, and someone ran these numbers before, is that, assuming these are high 01:08:53.700 |
quality degrees, in computer science you often come out ahead with a master's. 01:09:00.520 |
You have this two-year period where you're missing out on salary, but then you come in at a 01:09:08.260 |
salary level it might have taken you four years to reach with just a bachelor's. 01:09:15.380 |
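To make that trade-off concrete, here is a minimal sketch of the break-even arithmetic in Python. Every salary figure is invented purely for illustration, and the helper name is hypothetical; the shape of the comparison is the point, and you would want to plug in numbers verified against the specific employers and roles you care about.

```python
# Hypothetical illustration of the bachelor's-vs-master's trade-off described above.
# All salaries and growth rates are invented; swap in figures you have verified
# for the specific employers and roles you are targeting.

def cumulative_earnings(years, salary_by_year):
    """Sum earnings over `years`, repeating the last listed salary once the list runs out."""
    return sum(salary_by_year[min(y, len(salary_by_year) - 1)] for y in range(years))

# Path A: start working now with a bachelor's degree.
bachelors_path = [100_000, 105_000, 110_000, 118_000, 126_000, 135_000]
# Path B: two unpaid years for the master's, then enter at a higher level.
masters_path = [0, 0, 135_000, 150_000, 165_000, 180_000]

for horizon in (4, 6, 8, 10):
    b = cumulative_earnings(horizon, bachelors_path)
    m = cumulative_earnings(horizon, masters_path)
    print(f"After {horizon:2d} years: bachelor's ${b:,.0f} vs master's ${m:,.0f} "
          f"(difference ${m - b:+,.0f})")

# With these invented numbers, the master's path catches up around year eight and
# pulls ahead after that; the doctorate math is usually less favorable, which is why
# a PhD should be about wanting to do research, not salary.
```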
The folk wisdom was always with doctorates, you got to be a little bit more careful. 01:09:19.640 |
So really, if you're getting a doctorate, it should be because you want to actually master 01:09:24.180 |
an area well enough that you can do original research because you want a job doing research 01:09:31.960 |
There's been exceptions like until recently, oh man, the AI hiring. 01:09:40.360 |
So if you were, if you were an AI PhD and you're really looking at the techniques that were relevant 01:09:44.280 |
to large language model scaling, they were throwing crazy money at you. 01:09:48.060 |
They were throwing, and I know baseball better, top 10 pick overall draft signing bonus money at you. 01:09:57.740 |
And if you were already established, like, I'm at a different company and 01:10:03.940 |
I'm really good at something that's unique to scaling, 01:10:06.200 |
Zuckerberg went crazy, only for like a few weeks, but he was giving some compensation packages 01:10:12.560 |
to individuals that were worth up to $200 to $300 million. 01:10:29.660 |
Well, we don't know the years on the engineers. 01:10:33.880 |
Because Juan Soto signed like an eight or 10 year deal. 01:10:38.400 |
I think it was more like a 15-year deal, but with the vesting on these packages, we don't know how long they run. 01:10:44.120 |
It's like half the money as Juan Soto, but I bet some of these engineers could have an OPS 01:10:49.080 |
within a hundred points of what he's hitting right now. 01:10:55.500 |
Half the money, but more than half the OPS, like a .400 OPS. 01:11:00.900 |
So anyways, you could do this type of math, but basically if what you care about is salaries, 01:11:07.260 |
masters should be, they can be very much in the mix. 01:11:10.360 |
If what you care about is salary, be careful about PhD. 01:11:13.160 |
That really needs to be about, I want the skills. 01:11:15.020 |
It's not just, I want to get higher up in the developer chain. 01:11:17.340 |
That really needs to be, I want to do original thinking or research. 01:11:21.840 |
So I'm a little nervous about the online here. 01:11:23.700 |
So here's what you really want to do, because the biggest trap with graduate degrees is wanting the story to be true: you want this graduate degree to be the right thing to do, because it's convenient, or you like it, or it makes sense to you. 01:11:38.740 |
So you actually have to go verify and by verify, meaning you really need examples of specific jobs where you have an indication, like from someone who works there that we wouldn't hire you now, but we would be likely to hire you if you had this master's degree from this institution. 01:11:58.260 |
So just because like if you have a, you know, a master's from MIT, you can get snapped up in this particular position at Meta doesn't mean if, you know, you have an online master's degree from, you know, the Coney Island Institute of Computer Science and Roller Coaster Repair that that same job is available to you. 01:12:16.940 |
They actually have a really good quantum physics program there. 01:12:28.980 |
So they specialize in those two things in the Coney Island School of Computer Science and Roller Coaster Repair. 01:12:45.720 |
Yeah, maybe it could help with computer science. 01:12:47.100 |
Just make sure that the program and quality you're going for is going to help for the particular jobs you care about. 01:12:51.720 |
PhD, that really should be because you want to do research. 01:12:56.200 |
I was going to make a more, a couple more Mets digs, but I refrain because I don't want to get off topic. 01:13:06.020 |
Look, Sam, you're not going to be unemployed. 01:13:10.700 |
I need a Mets player to reference here. 01:13:15.040 |
Well, Mad Dog went crazy on Pete Alonso around the break, the Mets home run thing. 01:13:21.180 |
You might not be unemployed in 10 years because of AI, but Pete Alonso will be. 01:13:29.240 |
Well, they only signed him on a one-year deal? 01:13:31.920 |
Technically two, but as Mad Dog said, it's one. 01:13:37.080 |
So listeners, we're going to like workshop a Mets joke here. 01:13:42.080 |
We're going to leave the tape running, but then when we come back, we're going to have a right reference. 01:13:50.820 |
Look, I don't want to kick someone when they're down, but the Nats did take the series from them earlier this week. 01:14:05.840 |
I'm trying to figure out what I should focus on. 01:14:07.840 |
I can continue to help colleagues with academic papers, or should I finally dive deeper into writing? 01:14:13.060 |
If you're a retired college professor, write. 01:14:21.560 |
While the momentum's still there, you're recently retired, write the book you wanted to write. 01:14:27.700 |
Maybe it's, I want to go to this place, go to the archives, whatever it is; or, I'm a science professor and I want to write the public-facing book about this idea, 01:14:34.920 |
or a book that applies ideas from this obscure field to the rest of your life; or, you know, I want to teach people why they should know about this particular philosopher, and write that book. 01:14:44.640 |
Because professors never have enough time to write. 01:14:55.720 |
Our torment is not our livers being eaten out repeatedly by, like, the eagle. 01:15:00.500 |
It's that just as we're about to get down to write, we get a calendar invite for a Zoom meeting. 01:15:12.880 |
That's good, because I have an email with him, and I think he's going to, I didn't know how you're going to answer that, but that's cool. 01:15:22.420 |
So, it's not a book about, like, why you should preemptively murder your dog because of their impact on the economy. 01:15:38.420 |
I'm an attorney, and also I'm writing a book. 01:15:46.500 |
Any suggestions on how I should arrange them around my house? 01:15:49.240 |
I do have a home library in addition to an office and other spaces. 01:15:58.100 |
I have a home library with a bunch of built-in custom shelves. 01:16:02.260 |
But then in our family room, we have a bunch of built-in shelves that we've also filled with books. 01:16:08.960 |
And then I have my bookshelves here at the HQ, which we filled with books. 01:16:12.800 |
And then I have the bookshelf at one of my two offices at Georgetown, which I've also filled with books. 01:16:17.840 |
And then I have a large pile of books next to my desk in the library that I'm going to use for whatever I'm writing now. 01:16:24.600 |
So I think it's great to have books wherever. 01:16:27.980 |
I don't understand this need of, like, I have to purge my books. 01:16:31.380 |
It's like such knowledge compressed into this little thing that you can just download into your head. 01:16:36.920 |
And someone, like, spent years to get it into this small form. 01:16:40.040 |
If you have a home library, make the home library awesome. 01:16:43.140 |
I kind of think if you're an attorney who's writing books, if you're a successful attorney, 01:16:48.360 |
maybe you need a work-near-your-home type space, like the cool office I have near my home, where you go to write, 01:16:53.560 |
and that you can just fill and surround yourself with books. 01:16:55.600 |
Look at – it's been a while since he talked about it, but Ryan Holiday has talked about – he has a massive book collection. 01:17:01.560 |
When he moved, he had to have a separate – basically like a library service move the books. 01:17:07.480 |
Like, you couldn't just have his movers do it. 01:17:08.760 |
Like, it's a whole separate thing that has to happen. 01:17:19.400 |
Maybe have a separate library or make your library at your house awesome, like expand it or something like that. 01:17:26.060 |
I think doing crazy stuff around books, I'm a big proponent of. 01:17:29.820 |
To me, when people are like, I want to get down to two books because it's wasteful to have them, 01:17:36.560 |
it's like saying, yeah, I like my dog, but dog food ain't cheap. 01:17:41.660 |
So, like we had him and the other dogs on the street put down, right? 01:17:44.460 |
To me, that's what it's like when people are like, I just get rid of all my books. 01:17:53.300 |
This is Rina, first-time caller, long-time listener. 01:17:56.760 |
I absolutely love your work, and it has been so life-changing for me. 01:18:01.560 |
So my question is, I know you talk a lot about the overhead tax and how you want to take on less projects at a single time so that the overhead tax is less so that you actually have more time to do the project, and all of that sounds great to me. 01:18:16.420 |
However, the thing that I've been struggling with is the overhead tax that happens after a project is done. 01:18:23.800 |
So, like, I'm a composer, and I write the piece, I finish it up, I turn it into the commissioner, all of that is done. 01:18:30.640 |
But then, even years later, people will be emailing me to be, like, you know, asking questions about the piece and about the interpretation, maybe finding some errors in the piece. 01:18:40.660 |
And I'm wondering if you deal with that in any of your books, you know, books that you've written years ago where you're still kind of managing the back end of them, or if anything in your life has this, or if you really feel that when you complete a project, it's completely done, and you never have to deal with the overhead tax again. 01:18:58.440 |
I'd be so interested on your insights on that. 01:19:04.800 |
Like, you get commissioned to write pieces, and you compose them. 01:19:10.300 |
With all the different instruments, you sort of make it all work. 01:19:16.800 |
And just to remind the listener in general what that is, because you should always keep it in mind, is the idea that when you accept something, it brings with it administrative overhead. 01:19:27.060 |
And that's what you have to be worried about, not getting too large. 01:19:29.540 |
That's why you should not say yes to too many things. 01:19:31.900 |
Even if the actual total time required to do the things you've said yes to is reasonable, it's possible that all of the overhead tax those things bring with them makes your life unreasonable. 01:19:44.460 |
The 20 total hours this month it'll take to do these five projects might generate 20 emails a day and six meetings a week. 01:19:52.320 |
And that just makes your life so fractured that you are completely frustrated. 01:19:56.600 |
So, you have to monitor overhead tax very carefully. 01:20:00.220 |
It's the main reason why overload, having too much on your plate, is dangerous. 01:20:03.920 |
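As a rough illustration of that overhead tax arithmetic, here is a small Python sketch. Every per-project figure (emails, meetings, minutes per email) is an invented assumption, chosen only to roughly echo the five-projects example above; the shape of the result, not the specific numbers, is what matters.

```python
# Hypothetical sketch of the "overhead tax" idea: each commitment carries recurring
# overhead (emails, meetings, check-ins) on top of its actual work hours.
# All per-project numbers are invented for illustration.

WORK_HOURS_PER_WEEK = 40

def overhead_load(num_projects,
                  work_hours_each=1.0,      # hours/week of actual work per project
                  emails_each=20,           # emails/week generated per project
                  meetings_each=1.2,        # meetings/week per project
                  minutes_per_email=5,
                  hours_per_meeting=0.5):
    """Return (work hours, overhead hours, interruptions) per week -- all illustrative."""
    work = num_projects * work_hours_each
    overhead = num_projects * (emails_each * minutes_per_email / 60
                               + meetings_each * hours_per_meeting)
    interruptions = num_projects * (emails_each + meetings_each)
    return work, overhead, interruptions

for n in (2, 5, 10, 15):
    work, overhead, interruptions = overhead_load(n)
    remaining = WORK_HOURS_PER_WEEK - work - overhead
    print(f"{n:2d} projects: {work:4.1f}h of work, {overhead:4.1f}h of overhead, "
          f"~{interruptions:.0f} interruptions/week, {remaining:5.1f}h left uncommitted")

# With these invented numbers, five projects roughly matches the example above: about an
# hour of real work per project per week, but ~100 emails (20 a workday) and 6 meetings.
# The overhead hours exceed the work hours, and the interruptions fragment what's left,
# which is why overload is dangerous even when the raw work hours look reasonable.
```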
Your situation, though, is very particular. 01:20:08.880 |
You produce a creative artifact that is then available to a wider public. 01:20:16.980 |
And this applies to anything where you're putting out a creative artifact that's available to a larger public. 01:20:24.780 |
As Jesse will tell you, if you had to summarize it, you wouldn't say I'm the world's most responsive author. 01:20:34.180 |
I am not exactly giving one-on-one attention. 01:20:36.560 |
Well, I think you're only the author I know, so I'm not really sure. 01:20:40.200 |
Yeah, but that's because you handle a lot of my correspondence. 01:20:48.300 |
When I first started writing books, I was writing books for students. 01:20:51.200 |
And I felt like it was just part of my service I was doing to the world was like to help students not be stressed out, not get overwhelmed, have a good experience in college, figure out how to be a meaningful adult and move on to satisfying lives. 01:21:01.600 |
And so, I tried to answer every email that people would send in for my initial books, and it took more and more time, and it finally became, it just was not tractable. 01:21:09.560 |
And I went through the sort of similar calculus that Neal Stephenson talks about in his classic essay, Why I Am a Bad Correspondent. 01:21:16.000 |
You do the math eventually, and you say, there's enough correspondence coming in now that to try to keep up with all of it would take a sizable portion of my time. 01:21:24.900 |
Now, the raw number of people here is not massive. 01:21:29.820 |
But that's going to take up a huge amount of my time. 01:21:32.100 |
That's enough time now that it is going to significantly slow down my ability to, for example, write a new book. 01:21:38.360 |
But if I write a good book, not even my really successful newer books, just a good student book, well, two out of my three student books are comfortably over, I don't know the exact numbers, but I think How to Become a Straight-A Student sold something like 300,000 copies, right? 01:21:51.220 |
I'm never going to answer 300,000 emails, but that book reached 300,000 people and brought those ideas to them. 01:21:57.080 |
So Neal Stephenson, in that famous essay I talked about, he's a sci-fi novelist if you don't know him, a speculative fiction writer. 01:22:02.800 |
He said, look, my book's going to be read by more people than I can talk to directly, so I have to just become a bad correspondent. 01:22:12.180 |
It's uncomfortable at first because it feels selfish because you're not used to being a public figure. 01:22:19.260 |
So you have to say, someone's trying to reach out to me. 01:22:21.040 |
In general, if someone's like trying to talk to you, it's rude to ignore them, but you're in a different situation where you're doing creative production. 01:22:27.040 |
Because if it gets to the point that dealing with people's questions, well-intentioned, positive questions about your work, prevents you from doing more work, now you're minimizing your impact. 01:22:39.400 |
And it's better to write that next book that's going to reach a couple hundred thousand people than it is over the next five years to talk to a couple thousand people. 01:22:48.160 |
It's a difference of a couple of orders of magnitude. 01:22:51.320 |
So eventually, I think if you're doing creative production, you have to just have a hard rule. 01:22:56.100 |
Like I just, I'm not able, I'm not able to do one-on-one interaction. 01:23:01.520 |
And it'll feel weird at first and then it'll feel better. 01:23:03.400 |
But this is just the economics of creative impact: at some point, you have to put some pretty big limits on this. It's why I do this show now. 01:23:12.820 |
It's a way for me to actually answer questions and try to be useful beyond my books, but still at a much larger scale. 01:23:19.100 |
Because like a good episode of this could, you know, hit 80,000 people or whatever. 01:23:23.480 |
So that's going to, that's better than answering 80 questions. 01:23:27.120 |
I can answer eight and maybe reach 80,000 people. 01:23:31.600 |
It does make sense, but it is uncomfortable at first. 01:23:33.460 |
But you might have to just unilaterally take yourself out of this particular overhead tax. 01:23:44.600 |
I'm just not able to really answer questions for the most part that people send me about like the compositions I put out. 01:23:49.840 |
It is a different type of interaction than just a normal interaction with someone, you know, it's okay in the context of creative production to be harder to reach than it is, say, with like your cousin. 01:24:02.480 |
I remember it was hard for me to become a bad correspondent, but now it's just necessary. 01:24:06.740 |
There are just too many millions of whatever out there. 01:24:11.380 |
I've got a cool reaction coming up about a cultural figure that most people know. 01:24:18.320 |
And I know very little about him, and I suspect Jesse knows less, but we'll find out. 01:24:21.380 |
But first, even more exciting, I want to talk about another one of our sponsors. 01:24:25.980 |
Guys, wouldn't you like to look a little younger, maybe get a few more compliments on your skin? 01:24:32.180 |
I don't worry about this as much because as part of his employment contract with me, Jesse has to compliment my skin no fewer than seven times per day. 01:24:40.260 |
But for most guys, the only way to feel better about their skin is to actually take care of it. 01:24:47.580 |
That's where Caldera Lab comes in. Their high-performance skincare products are designed specifically for men. 01:24:55.820 |
In a consumer study, 100% of men said their skin looks smoother and healthier. 01:24:59.920 |
96.9% noticed improved hydration and texture. 01:25:03.620 |
And 93.8% reported a more youthful appearance. 01:25:07.520 |
Their products include The Good, which is an award-winning serum packed with 27 active botanicals. 01:25:12.740 |
The Eye Serum helps reduce the appearance of tired eyes, dark circles, and puffiness. 01:25:16.980 |
And The Base Layer, a nutrient-rich moisturizer infused with plant stem cells and snow mushroom extract. 01:25:24.680 |
These are products that will make you look better and feel better. 01:25:27.840 |
And they're cheaper than hiring Jesse to comment on your skin throughout the day. 01:25:30.780 |
Skincare doesn't have to be complicated, but it should be good. 01:25:34.100 |
Upgrade your routine with Caldera Lab and see the difference for yourself. 01:25:37.640 |
Go to calderalab.com slash deep and use deep at checkout for 20% off your first order. 01:25:43.980 |
I just want to talk about our friends at ShipStation. 01:25:46.380 |
If you run an e-commerce business, what's the best way to be successful? 01:25:54.960 |
I learned this the hard way after I started letting Jesse Skeleton man our customer service lines. 01:26:06.060 |
He would trick people into revealing personal details and then would mercilessly berate them with off-color jokes. 01:26:13.560 |
I realized, oh, your customers being happy is what matters. 01:26:18.800 |
In addition to firing Jesse Skeleton, I realized that if you're shipping things, you can earn your customers' trust and generate their happiness one package at a time. 01:26:28.600 |
And how do you create the best e-commerce shipping experience with your customers? 01:26:34.020 |
With ShipStation, you can sync orders from everywhere you sell into one dashboard. 01:26:38.560 |
And you can replace manual tasks with custom automations 01:26:44.340 |
to reduce shipping errors at a fraction of the cost. 01:26:47.440 |
The single dashboard idea that ShipStation has is really cool, right? 01:26:51.380 |
So it's like you have a super advanced fulfillment center, even if you run like a small business, right? 01:27:01.260 |
Their tools just scale with your business so you don't have to learn something new. 01:27:05.940 |
They're there with you for the whole journey. 01:27:09.260 |
It's the fastest, most affordable way to ship products to your customers because they'll give you discounts of up to 88% off UPS, DHL Express, and USPS rates, and up to 90% off FedEx rates. 01:27:21.780 |
Another cool feature, automated tracking updates with your company's branding. 01:27:27.480 |
Like, oh, this is like where your package has been sent. 01:27:30.480 |
Even a small business, if you're using ShipStation, you can get those. 01:27:37.280 |
When shoppers choose to buy your products, turn them into loyal customers with cheaper, faster, and better shipping. 01:27:42.820 |
Go to ShipStation.com slash deep to sign up for your free trial. 01:27:46.640 |
There's no credit card or contract required, and you can cancel any time. 01:27:53.420 |
All right, Jesse, let's move on to our final part. 01:27:57.360 |
So I want to react to a clip that a listener sent me about Ed Sheeran. 01:28:07.400 |
Do you know, like, what field of profession he is in? 01:28:25.280 |
But then I was watching the new season of Limitless with Chris Hemsworth, and the first episode is he's trying to learn how to drum for, like, cognitive fitness, and he had no idea how to drum. 01:28:36.080 |
He's going to join Ed Sheeran on stage in Croatia with 70,000 people to play the drums with him in a concert. 01:28:45.200 |
The next issue is, I was like, I don't know a single Ed Sheeran song. 01:28:48.380 |
But they played the song that he was going to play the drums with, and I recognized it. 01:29:02.780 |
Chris Hemsworth, when he did his workout, that's when he got injured, right? 01:29:07.600 |
Well, he's doing mental stuff now, so it's okay. 01:29:10.240 |
My kids were impressed, because I loved that show. 01:29:11.840 |
And I was showing them, in the first season, a lot of the stuff he's doing is with Peter Attia, who is, like, the producer of it. 01:29:16.800 |
And I was like, I did an event with that guy, and my kids were very impressed. 01:29:20.500 |
But back in the day, didn't he do Hemsworth's workout on the app? 01:29:31.500 |
All right, so anyways, Ed Sheeran did an interview and said something that I thought was interesting. 01:29:43.400 |
I had the same number from, like, age 15, I think. 01:29:47.500 |
I got famous, and I had 10,000 contacts on my phone. 01:29:52.220 |
Well, even if it was in my pocket, I'd be having a conversation with you like this, and it would vibrate. 01:29:55.720 |
And my mind would not be in the conversation. 01:29:59.980 |
Until I take it out of my pocket, and then I answer the text. 01:30:02.720 |
And then suddenly, our moment has ended, and I'm in this. 01:30:05.440 |
And now I find that there's no connection to anything. 01:30:12.840 |
I go for lunch with my friends, and I'm in it. 01:30:16.320 |
I moved everything on to email, which I reply to once a week. 01:30:39.680 |
I think we're going to find that he wrote really common songs. 01:30:42.760 |
Did he write the song, Hey, Hey, We're the Monkees? 01:30:52.520 |
No, that was the Beatles in the 50s and the 60s. 01:30:59.740 |
Ed had to make a change because he's famous, and things got over the top. 01:31:03.220 |
But he just said, look, there's not a law that says you have to look at a phone when it buzzes. 01:31:11.280 |
There's not a law that says you have to have a phone, and everyone can reach you at any time, 01:31:13.940 |
and that's just how we function in our society. 01:31:19.760 |
You can email me, and I'll check it once a week. 01:31:21.340 |
And people are like, oh, I guess that's what Ed's doing now. 01:31:30.060 |
People imagine when they make a change like this that there is a war room somewhere, and 01:31:35.280 |
there's guys in their suits are on the back of the chairs, and they've rolled up their 01:31:38.920 |
sleeves, and they're chain-smoking cigarettes. 01:31:40.840 |
And there's a picture of you up on a bulletin board, and all these yarn going from your picture 01:31:45.620 |
to various graphs that they've made of your response time on various text messages. 01:31:50.200 |
And then if you ever were like, I just don't use my phone anymore, you're going to have to email me, 01:31:53.120 |
a klaxon was going to go off, and the lead guy was going 01:31:58.380 |
to put out a cigarette and be like, we're in the bleep now. 01:32:01.360 |
And the other guy's just going to nod, take a slug of whiskey or whatever. 01:32:05.920 |
They kind of care, but they have their own lives. 01:32:17.300 |
Meanwhile, Ed Sheeran is fine, because he's actually present in what he's doing. 01:32:17.300 |
I think that's code for some sort of nasty sex thing, probably. 01:32:28.820 |
I mean, I'm not distracted when I have dinner with my dad all night long having dinner with 01:32:42.960 |
He wanted some of the baby oil, but I was like, I need some of that baby oil because I'm 01:32:51.120 |
I'm just saying that's probably, I mean, the guy's famous. 01:32:57.180 |
Yeah, he's famous and can get away with a lot more, and the pressures are higher, but, like, 01:33:05.000 |
it's okay to say I'm sort of prioritizing my communication to the world in part to make my actual life better. 01:33:13.400 |
They're just jealous of all the dinners you're having with your dad. 01:33:17.560 |
And so, you know, let's take a page out of the book of Ed Sheeran and say maybe it's okay to be 01:33:24.440 |
even a little bit radical about how accessible you are. 01:33:28.740 |
I'm going to listen to some more Ed Sheeran music. 01:33:43.500 |
Other thing I wanted to mention, I don't know if you heard this, Jesse. 01:33:47.980 |
In a technical sense, I have been nominated for a Booker Prize. 01:33:47.980 |
There's a book out called Universality by Natasha Brown, which was longlisted for the Booker Prize. 01:34:04.180 |
It was called a twisty, slippery descent into the rhetoric of power, hailed as a nesting doll 01:34:11.240 |
of satire that leaves readers uncertain where their loyalties lie. 01:34:13.860 |
It takes place, like, in academia, and there's a murder, and it's a Booker Prize longlist book. 01:34:23.120 |
A viewer, or a listener to the podcast, sent it in. I have it somewhere. 01:34:27.860 |
But anyways, they sent in a part in the book where one of the characters mentions Deep Work. 01:34:30.920 |
So I figure, like, by proxy, how can I actually put this on my, like, blurb things? 01:34:43.160 |
I figure if I'm mentioned in a book that's nominated for a Booker Prize, I'm just going to claim it. 01:34:47.940 |
That's as close as I'm ever going to get to a literary award. 01:34:51.360 |
Universality by Natasha Brown, featuring, as a major part of the plot, by which I mean the 01:34:58.460 |
phrase Deep Work by Cal Newport is mentioned once. 01:35:01.940 |
Me and Ed Sheeran are reading it in our book club. 01:35:06.880 |
That's also a code word for like a nasty sex thing with Ed Sheeran. 01:35:09.400 |
Like, nah, no, I got to go fire up the hot tub. 01:35:16.000 |
Get the frozen bratwurst, not the cooked bratwurst. 01:35:32.000 |
Before we get ourselves like demonetized and arrested, let's wrap up the show. 01:35:35.400 |
We only have like 19 more audio clips to play and then we'll be done. 01:35:40.480 |
We'll be back next week with another episode of the show. 01:35:44.180 |
If you like today's discussion of where AI is and you want to hear my predictions from a couple of years ago, check out episode 247, where I said we need to consider what I call the AI null hypothesis, which was the idea that AI wasn't going to change everything. 01:36:00.160 |
So if you want to see how close my predictions came to be true, go back and check out episode 247.