The AI Revolution: How To Get Ahead While Others Panic | Cal Newport
Chapters
0:00 AI null hypothesis
7:30 The current AI discussion
11:20 Cal's analysis of Tyler Cowen
How should we think about the AI revolution happening now? 00:00:06.520 |
What is philosophically the right approach 00:00:12.020 |
The article I wanna use as our foundation here, 00:00:15.980 |
was the article I'm bringing up on the screen right now. 00:00:23.620 |
and you can find that at youtube.com/calnewportmedia, 00:00:27.560 |
or if you don't like YouTube at thedeeplife.com, 00:00:35.760 |
If you're listening, I'll narrate what I'm talking about. 00:00:38.000 |
So the article I'm talking about here is titled, 00:00:51.360 |
professor and prolific writer of public-facing books. 00:00:57.120 |
and he published this in Bari Weiss's newsletter, 00:01:02.680 |
Right, so I wanna highlight a few things from this, 00:01:10.960 |
"Artificial intelligence represents a truly major 00:01:30.080 |
the good will considerably outweigh the bad." 00:01:42.220 |
Of course, the press brought an immense amount of good, 00:01:46.640 |
it enabled the scientific and industrial revolutions 00:01:49.480 |
but it also created writings by Lenin, Hitler, 00:01:57.240 |
the good outweighed the disruptions and negativity 00:02:08.840 |
"We don't know how to respond psychologically 00:02:13.880 |
And just about all of the responses I am seeing, 00:02:24.360 |
All right, so this is the setup for Tyler's article. 00:02:26.920 |
He says, "I think there's this disruptive change is coming, 00:02:33.240 |
But he is making the claim that as a culture right now, 00:02:37.440 |
we are not handling this reality well psychologically. 00:02:44.640 |
And so he is gonna move on now with a critique 00:02:47.480 |
of how we are responding to what he sees to be that reality. 00:02:53.960 |
The first part of his critique of our current response 00:02:56.200 |
is saying, "No one is good at protecting the longer 00:03:11.360 |
He's referencing here someone who's on the extreme X-risk side: 00:03:17.480 |
AI is about to take over the world and enslave us. 00:03:21.120 |
Not Sam Altman and not your next door neighbor. 00:03:24.600 |
So he's arguing, "We are very bad at predicting 00:03:27.760 |
the impacts of disruptive technologies in the moment." 00:03:32.560 |
"How well do people predict the final impacts 00:03:35.760 |
How well did people predict the impacts of fire?" 00:03:47.160 |
it's about to happen, but we're not handling it well 00:03:53.120 |
that it is very difficult to predict what really happens. 00:04:01.680 |
which are based off of very specific predictions 00:04:04.060 |
about what will happen or what definitely won't happen. 00:04:06.240 |
And he thinks that's all just psychologically bankrupt. 00:04:12.680 |
We're just making predictions we have no right to make 00:04:18.280 |
All right, then he goes on to apply this critique 00:04:20.360 |
specifically to the existential risks of things 00:04:26.440 |
He says, "When it comes to people who are predicting 00:04:30.320 |
this high degree of existential risk," he says, 00:04:32.280 |
"I don't actually think arguing back on their chosen terms 00:04:41.600 |
where all specific scenarios are pretty unlikely. 00:04:48.280 |
just as we do for all other technologies to improve them." 00:04:57.500 |
But the only actual intellectually consistent position 00:05:07.080 |
There's a lot of other existential risk in our life 00:05:16.400 |
each time I read an account of a person arguing himself 00:06:00.000 |
he says, "It is indeed a distant possibility, 00:06:08.840 |
The mere fact that AGI risk can be put on par 00:06:28.960 |
he says, "Look, if someone is obsessively arguing 00:06:34.720 |
or arguments they read on a blog like 'Less Wrong' 00:06:40.100 |
But don't be suckered into taking their bait. 00:06:54.280 |
I think is an unusually pragmatic take on this issue, 00:07:01.520 |
so let me summarize everything I just said. 00:07:04.120 |
We are bad at predicting the impacts of technologies, 00:07:07.020 |
so don't trust people who are being very specific 00:07:22.700 |
about what the impact of the printing press is gonna be. 00:07:28.360 |
This is what I think Cowen is very right about. 00:07:30.400 |
The problem I see with the current discussion 00:07:36.720 |
is they're looking at often cherry-picked examples 00:07:55.780 |
could produce the type of thing I'm seeing in this example. 00:08:00.680 |
Maybe my 13-year-old could answer those questions. 00:08:03.580 |
So maybe this thing is like a 13-year-old's mind in there. 00:08:08.800 |
that could produce the type of things we've seen, 00:08:21.080 |
based off of imagined understandings of the technology, 00:08:27.600 |
and then we get worried about those scenarios. 00:08:30.360 |
This is exactly what Cowen is saying we shouldn't do. 00:08:41.000 |
about the impacts of those thought experiments. 00:08:44.120 |
So what I think we should do instead, culturally speaking, 00:08:53.340 |
I don't necessarily want to hear any more stories about, 00:08:56.920 |
well, in theory, a chatbot that could do this 00:09:05.060 |
what you should filter for is tangible impacts. 00:09:19.280 |
not predictions about what hypothetical minds might wreak, 00:09:24.160 |
That's what you should use to refine your understanding 00:09:29.820 |
I think that's the filtering we have to do now. 00:09:34.960 |
actually attempting things that will generate real impacts. 00:09:43.140 |
I spoke on the panel with one of the VCs that funded OpenAI, 00:09:46.600 |
and he was saying OpenAI is already 00:09:49.980 |
on track to bring in more than $100 million in revenue 00:09:55.600 |
from customers paying for API access to their GPT-4 backend language model. 00:10:01.740 |
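As a concrete illustration of what "paying for API access" means in practice, here is a minimal sketch of that kind of integration, using the openai Python package roughly as it existed around the time of this discussion. The prompt, key, and use case are placeholders, and exact method names vary by SDK version.

```python
# Hypothetical sketch of a company calling the GPT-4 API and paying per token.
# Assumes the openai Python package (pre-1.0 interface); details vary by version.
import openai

openai.api_key = "sk-..."  # placeholder for a paid account's API key

response = openai.ChatCompletion.create(
    model="gpt-4",
    messages=[{"role": "user", "content": "Summarize this customer email: ..."}],
)

# Billing is per input and output token, which is where that revenue comes from.
print(response["choices"][0]["message"]["content"])
```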
It's possible they'll be on track for billion-dollar revenue, 00:10:11.760 |
investing money to try to use this technology 00:10:15.000 |
So it's not as if there is a shortage of people 00:10:23.480 |
My advice is let's filter for actual impacts. 00:10:25.480 |
That's what, unless you're in a very rarefied position 00:10:28.160 |
that needs to very specifically make predictions 00:10:43.640 |
What you wanna hear is this company just shut down. 00:10:50.400 |
This publishing imprint just fired all of its authors. 00:10:54.640 |
Okay, now I'm starting to understand what's going on. 00:11:08.520 |
It's called the "predicting the future" cognitive distortion. 00:11:11.640 |
It's not worth getting upset about predicted futures. 00:11:17.920 |
Now, there is one place where I disagree somewhat with Cowen. 00:11:17.920 |
that we know now that there will be a major disruption 00:11:34.280 |
you kind of have to just go along for the ride, 00:11:36.160 |
react to actual impacts, not the prediction of impacts, 00:11:44.880 |
I'm not convinced that it's not gonna happen either, 00:11:47.980 |
but we still need to take seriously, until disproved, 00:11:47.980 |
the possibility that the ultra-large language model revolution 00:11:59.920 |
is not actually going to make a notable impact 00:12:08.780 |
is when actual impacts, not predictions of impacts, 00:12:29.900 |
not thought experiments about if it could do this, 00:12:36.320 |
on the day-to-day experience of people in their lives, 00:12:42.140 |
we do have to keep that as one of the possibilities 00:12:50.700 |
It's not even necessarily the likely outcome, 00:13:00.620 |
And so I think we do have to keep that in mind. 00:13:22.420 |
the limits that turn out to emerge from under the hood, 00:13:32.220 |
that that may end up being more limited than we think. 00:13:34.580 |
The computational expense is larger than it's worth. 00:13:34.580 |
We have OpenAI getting a couple hundred billion dollars 00:13:37.860 |
"Huh, this is not really opening up something 00:13:45.420 |
that I wasn't already able to more or less do, 00:13:50.940 |
or to actually just hire someone to do this." 00:13:54.920 |
And that we're actually at the peak right now 00:13:56.980 |
of a hundred million people just liking to use the chatbot 00:14:05.100 |
So I do think that needs to be in the hypothesis space, 00:14:05.100 |
as the AI null hypothesis itself is something 00:14:12.180 |
So we have a wide extreme of possibilities here 00:14:14.500 |
from the AI null hypothesis to the most extreme AGI scenarios, 00:14:29.500 |
But here's the big thing I think that's important. 00:14:34.260 |
is think less and react less to the chatter right now, 00:14:43.200 |
that Cowen is right, those impacts will come. 00:14:43.200 |
Be aware of the actual impacts, adjust as needed, 00:14:52.500 |
probably will end up better off than worse off. 00:14:55.740 |
But also be ready if you're looking for these impacts, 00:15:05.680 |
is unless again, you're really plugged into this world 00:15:10.400 |
Do not spend much time debating with, arguing with, 00:15:15.980 |
about what hypothetical minds might be able to do. 00:15:26.500 |
but we don't remember all of the other innovations 00:15:29.780 |
and inventions that really were exciting at the time, 00:15:32.760 |
but then never ended up fundamentally destabilizing the world. 00:15:38.140 |
So let's be wary, but let's get more concrete. 00:15:42.400 |
Let's stay away from those weird YouTube channels 00:15:46.100 |
about everything that's gonna happen, good or bad. 00:15:54.580 |
It'll be interesting, a little bit nerve wracking, 00:15:59.740 |
and don't trust anyone who says that they do. 00:16:06.060 |
No one's talking about that because it's not exciting. 00:16:10.980 |
and there's no article clicks to come from it. 00:16:15.580 |
I get sent a lot of this stuff by people who are in my network. 00:16:18.320 |
They send me a lot of ChatGPT fails, for example. 00:16:22.940 |
And one of the things I think is going on here 00:16:30.740 |
People are generating these really cool examples. 00:16:33.340 |
And then you see a bunch of these really cool examples. 00:16:37.740 |
And you think, well, if it could do that, what else could that mind do? 00:16:37.740 |
of these models just failing to do basic useful things. 00:16:47.900 |
these are not conceptual models, they're token predictors. 00:17:01.320 |
that also matches the key features from the query. 00:17:05.760 |
And OpenAI talks about it as a reasoning agent. 00:17:18.180 |
They're like, well, it's as if it understands this or that 00:17:20.860 |
because it has to be able to understand these things 00:17:23.100 |
to be able to do better at predicting the text. 00:17:24.740 |
But in the end, all it's doing is predicting text. 00:17:35.820 |
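To make the "token predictor" point concrete, here is a toy sketch of the generation loop being described. The probability table is invented for illustration; a real model scores its whole vocabulary with a large neural network, but the outer loop is the same: predict one next token, append it, repeat.

```python
import random

# Stand-in for the model: given the text so far, return scores for a few
# candidate next tokens. A real LLM computes scores over its whole vocabulary
# with a neural network; these numbers are made up for illustration.
def next_token_probs(context):
    if context.endswith("The capital of France is"):
        return {" Paris": 0.95, " Lyon": 0.03, " the": 0.02}
    return {" the": 0.5, " a": 0.3, ".": 0.2}

def generate(prompt, steps=5):
    text = prompt
    for _ in range(steps):
        probs = next_token_probs(text)
        tokens, weights = zip(*probs.items())
        text += random.choices(tokens, weights=weights)[0]  # sample one next token
    return text

# Looks like it "knows" geography, but it is only continuing the text.
print(generate("The capital of France is", steps=1))
```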
So I do think that effect is going on as well. 00:17:39.700 |
there's not a lot of transparency from OpenAI. 00:17:42.900 |
that we don't understand about how this technology works. 00:17:56.340 |
real companies actually using these interfaces, 00:18:04.300 |
I saw one concrete thing where the Chegg CEO said 00:18:09.740 |
their sales went down, and they attributed it to ChatGPT, 00:18:27.180 |
which makes their service, where you buy pre-written solutions, less valuable 00:18:33.580 |
when you can just have ChatGPT generate a bunch of those essays. 00:18:39.260 |
Not that these responses aren't pretty easily identifiable; 00:18:45.660 |
it's not hard for teachers to identify the tics 00:18:52.260 |
So it's like, how worried does that make you? 00:19:03.740 |
They have a customized version that helps you 00:19:05.940 |
if you're coding and it can generate early versions of code 00:19:10.100 |
or help you understand library interfaces or this or that. 00:19:21.340 |
the common integrated development environments like Eclipse 00:19:24.900 |
introduced, I don't know the exact terminology for it, 00:19:34.220 |
Here's all the parameters you need to give me, you know? 00:19:36.860 |
So you didn't have to look it up in the documentation. 00:19:39.100 |
Or if you have an object in object-oriented programming 00:19:41.660 |
and you're like, I don't know the methods for this object. 00:19:43.500 |
You just, you type the object name, press period. 00:19:46.040 |
Oh, here's all the different things you can do with this. 00:19:47.660 |
And you select one and it says, oh, here are the parameters. 00:19:51.900 |
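As a small illustration of the completion behavior being described (the class and method names here are made up), typing the object name and a period in an IDE surfaces something like the following, pulled from the type definitions rather than from the documentation:

```python
# A made-up class just to illustrate what the IDE is reading from.
class TemperatureSensor:
    def connect(self, port: str, baud_rate: int = 9600) -> None: ...
    def read_celsius(self) -> float: ...
    def close(self) -> None: ...

sensor = TemperatureSensor()
# Typing "sensor." here pops up connect(port, baud_rate), read_celsius(), and
# close(); selecting one shows its parameters, so there's no need to open the docs.
sensor.connect(port="/dev/ttyUSB0")
```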
And like, I noticed this when I'm coding my Arduino 00:20:01.500 |
Instead of having to go look up how to do that, 00:20:05.500 |
And it's like, oh, here's all the different functions 00:20:11.500 |
You give it the center and the radius and the color. 00:20:23.600 |
But it was a huge win, just like version control 00:20:29.760 |
much more productive for software developers. 00:20:37.900 |
the LLM impact on coding is like those things. 00:20:41.420 |
Like, wow, a bunch of things got a lot easier. 00:20:52.560 |
But it's a march towards making our life easier. 00:20:54.860 |
So there's a future in which these are the type 00:21:07.460 |
You should talk about it regardless of what happens. 00:21:09.240 |
Focus on the impacts, filter the predictions.