Are We *Too* Worried About Artificial Intelligence?

Chapters
0:00 Cal's intro
0:35 Tyler Cowen's article
7:20 Who to trust
10:25 AI money
16:00 Need to get concrete
00:00:14.460 | 
how should we think about the AI revolution happening now? 00:00:21.040 | 
What is philosophically the right approach 00:00:26.500 |
The article I wanna use as our foundation here, 00:00:30.460 | 
was the article I'm bringing up on the screen right now. 00:00:38.140 | 
and you can find that at youtube.com/calnewportmedia, 00:00:42.040 | 
or if you don't like YouTube at thedeeplife.com, 00:00:44.980 | 
just look for episode 247, and you can find this segment, 00:00:49.940 | 
If you're listening, I'll narrate what I'm talking about. 00:00:52.160 | 
So the article I'm talking about here is titled, 00:01:05.520 | 
professor and prolific writer of public-facing books. 00:01:11.300 | 
and he published this in Bari Weiss's newsletter, 00:01:16.880 |
Right, so I wanna highlight a few things from this, 00:01:25.220 | 
Artificial intelligence represents a truly major 00:01:44.240 | 
the good will considerably outweigh the bad." 00:01:56.400 | 
Of course, the printing press brought an immense amount of good." 00:02:00.800 |
It enabled the scientific and industrial revolutions, 00:02:07.420 | 
Right, so the printing press brought good, it brought bad, 00:02:14.800 | 
the disruptions and negativity that came along with it. 00:02:23.020 | 
"We don't know how to respond psychologically 00:02:27.980 | 
And just about all of the responses I am seeing, 00:02:32.140 | 
I interpret as copes, whether from the optimist, 00:02:38.560 | 
All right, so this is the setup for Tyler's article. 00:02:41.120 | 
He says, "I think this disruptive change is coming. 00:02:47.400 |
But he is making the claim that as a culture right now, 00:02:51.640 | 
we are not psychologically handling this reality well. 00:02:58.800 |
And so he is gonna move on now with a critique 00:03:01.640 | 
of how we are responding to what he sees to be that reality. 00:03:08.160 | 
The first part of his critique of our current response 00:03:10.420 | 
is saying, "No one is good at predicting the longer-term 00:03:25.600 |
He's referencing here someone who's on the extreme X-risk side, who thinks 00:03:31.680 |
AI is about to take over the world and enslave us. 00:03:35.320 | 
Not Sam Altman and not your next door neighbor. 00:03:38.800 | 
So he's arguing, "We are very bad at predicting 00:03:41.960 | 
the impacts of disruptive technologies in the moment." 00:03:46.760 | 
"How well do people predict the final impacts 00:03:49.960 | 
How well do people predict the impacts of fire?" 00:04:01.360 | 
it's about to happen, but we're not handling it well 00:04:07.300 | 
that it is very difficult to predict what really happens. 00:04:16.640 | 
of very specific predictions about what will happen 00:04:20.400 | 
And he thinks that's all just psychologically bankrupt. 00:04:26.840 | 
We're just making predictions we have no right to make 00:04:32.440 | 
All right, then he goes on to apply this critique 00:04:34.520 | 
specifically to the existential risks of things 00:04:41.680 | 
He says, "When it comes to people who are predicting 00:04:44.480 | 
this high degree of existential risk," he says, 00:04:46.440 | 
"I don't actually think arguing back on their chosen terms 00:04:55.760 | 
where all specific scenarios are pretty unlikely. 00:05:11.680 | 
but the only actual intellectually consistent position 00:05:21.240 | 
There's a lot of other existential risk in our life 00:05:30.560 | 
each time I read an account of a person arguing himself 00:06:10.340 | 
so for this particular issue of existential risk from AI," 00:06:14.140 | 
he says, "It is indeed a distant possibility, 00:06:22.980 | 
The mere fact that AGI risk can be put on par 00:06:43.120 | 
he says, "Look, if someone is obsessively arguing 00:06:48.900 | 
or arguments they read on a blog like 'Less Wrong' 00:06:54.280 | 
But don't be suckered into taking their bait. 00:07:08.460 | 
I think is an unusually pragmatic take on this issue, 00:07:15.700 |
so let me summarize everything I just said. 00:07:18.300 |
We are bad at predicting the impacts of technologies. 00:07:21.180 | 
So don't trust people who are being very specific 00:07:36.900 | 
about what the impact of the printing press is gonna be. 00:07:42.580 | 
this is what I think Cowen is very right about. 00:07:44.660 |
The problem I see with the current discussion 00:07:50.940 | 
is they're looking at often cherry-picked examples 00:08:09.980 | 
could produce the type of thing I'm seeing in this example. 00:08:14.900 | 
Maybe my 13-year-old could answer those questions. 00:08:17.780 | 
So maybe this thing is like a 13-year-old's mind in there. 00:08:23.020 | 
that could produce the type of things we've seen, 00:08:35.300 | 
based off of imagined understandings of the technology, 00:08:41.820 | 
and then we get worried about those scenarios. 00:08:55.220 | 
about the impacts of those thought experiments. 00:08:58.360 | 
So what I think we should do instead, culturally speaking, 00:09:07.600 | 
I don't necessarily wanna hear any more stories about, 00:09:11.160 | 
well, in theory, a chatbot that could do this 00:09:19.320 | 
what you should filter for is tangible impacts. 00:09:33.480 | 
not predictions about what hypothetical minds might wreak, 00:09:38.380 |
That's what you should use to refine your understanding 00:09:44.040 | 
I think that's the filtering we have to do now. 00:09:49.160 | 
actually attempting things that will generate real impacts. 00:10:09.780 | 
paying for API access to their GPT-4 back-end language model. 00:10:14.780 | 
It's possible they'll be on track for billion-dollar revenue, 00:10:20.560 | 
It's an amazingly fast climb, but the point is, 00:10:29.180 | 
So it's not as if there is a shortage of people 00:10:37.660 | 
My advice is let's filter for actual impacts. 00:10:39.640 | 
That's what, unless you're in a very rarified position 00:10:42.340 | 
that needs to very specifically make predictions 00:10:57.840 | 
What you wanna hear is, "This company just shut down. 00:11:04.500 | 
"This publishing imprint just fired all of its authors. 00:11:08.740 | 
"Okay, now I'm starting to understand what's going on." 00:11:22.720 | 
It's called the "predicting the future" cognitive distortion. 00:11:25.800 |
It's not worth getting upset about predicted futures. 00:11:32.080 | 
Now, there is one place where I disagree some with Cowen. 00:11:40.600 |
that we know now that there will be a major disruption 00:11:48.440 | 
you kind of have to just go along for the ride, 00:11:51.780 | 
not the prediction of impacts, et cetera, et cetera. 00:11:59.040 | 
I'm not convinced that it's not gonna happen either, 00:12:04.680 | 
but, until it's disproved, I hold on to what I call the AI null hypothesis: 00:12:15.120 |
that the ultra large language model revolution 00:12:22.960 | 
is not actually going to make a notable impact 00:12:40.960 | 
is when actual impacts, not predictions of impacts, 00:12:44.080 | 
not thought experiments about if it could do this 00:12:50.480 | 
on the day-to-day experience of people in their lives, 00:13:04.880 | 
It's not even necessarily the likely outcome, 00:13:14.800 | 
And so I think we do have to keep that in mind. 00:13:36.600 | 
the limits that turn out to emerge from under the hood, 00:13:46.360 | 
that it may end up being more limited than we think. 00:13:48.760 |
The computational expense is larger than it's worth. 00:13:52.040 |
We have a scenario where OpenAI gets a couple hundred billion dollars, but then people say, 00:13:57.940 |
"Huh, this is not really opening up something 00:13:59.600 | 
that I wasn't already able to more or less do, 00:14:05.100 | 
or to actually just hire someone to do this." 00:14:09.100 | 
And that we're actually at the peak right now 00:14:11.140 | 
of a hundred million people just liking to use the chatbot 00:14:19.280 | 
So I do think that needs to be in the hypothesis, 00:14:26.360 | 
So we have a wide extreme of possibilities here 00:14:28.680 | 
from the AI null hypothesis to the most extreme AGI scenarios, 00:14:43.660 |
But here's the big thing I think that's important. 00:14:46.500 | 
who are thinking about how to think about AI, 00:14:48.440 | 
is think less and react less to the chatter right now, 00:14:57.380 | 
that Cowen is right, those impacts will come. 00:15:03.280 |
Be aware of the actual impacts, adjust as needed, 00:15:06.700 | 
probably will end up better off than worse off. 00:15:09.920 | 
But also be ready if you're looking for these impacts, 00:15:19.860 | 
is unless again, you're really plugged into this world 00:15:24.580 | 
do not spend much time debating or arguing with people 00:15:30.140 |
about what hypothetical minds might be able to do. 00:15:40.680 | 
but we don't remember all of the other innovations 00:15:43.960 | 
and inventions that really were exciting at the time, 00:15:46.940 | 
but then never ended up fundamentally destabilizing the world. 00:15:52.320 |
So let's be wary, but let's get more concrete. 00:15:56.580 | 
Let's stay away from those weird YouTube channels 00:15:58.980 | 
where people are just yelling about everything 00:16:08.800 | 
It'll be interesting, a little bit nerve wracking, 00:16:14.280 | 
and don't trust anyone who says that they do. 00:16:20.280 | 
No one's talking about that because it's not exciting. 00:16:25.200 | 
and there's no article clicks that come from it. 00:16:29.800 | 
I get sent a lot of things by people who are in my network. 00:16:32.520 |
They send me a lot of ChatGPT fails, for example. 00:16:37.120 |
And one of the things I think is going on here 00:16:44.900 |
People are generating these really cool examples 00:16:47.520 | 
and then you see a bunch of these really cool examples 00:16:51.920 |
and think, well, if it could do that, what else could that mind do? 00:16:54.940 |
But I get sent all sorts of fails of these models 00:17:02.080 | 
these are not conceptual models, they're token predictors. 00:17:15.480 | 
trying to produce text that also matches the key features from the query. 00:17:19.920 |
And OpenAI talks about it as a reasoning agent. 00:17:32.360 | 
They're like, well, it's as if it understands this or that 00:17:35.040 | 
because it has to be able to understand these things 00:17:37.280 | 
to be able to do better at predicting the text. 00:17:38.920 | 
But in the end, all it's doing is predicting text. 00:17:49.980 | 
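To make "token predictor" concrete, here is a minimal sketch using the open-source Hugging Face transformers library and the small GPT-2 model. This is an analogy for what larger, closed models do, not OpenAI's actual GPT-4 system (which, as noted here, is not publicly documented), and the prompt is just illustrative.

```python
# Minimal sketch of next-token prediction, using the open-source GPT-2 model
# from Hugging Face's `transformers` library. This is an analogy for what
# larger, closed models do -- NOT OpenAI's actual system.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

prompt = "The printing press enabled the scientific and"
inputs = tokenizer(prompt, return_tensors="pt")

with torch.no_grad():
    logits = model(**inputs).logits   # shape: (1, sequence_length, vocab_size)

# The model's entire output is a score for every token in its vocabulary as a
# candidate for the *next* token. Print the five highest-scoring candidates.
next_token_scores = logits[0, -1]
top = torch.topk(next_token_scores, k=5)
for score, token_id in zip(top.values, top.indices):
    print(repr(tokenizer.decode(int(token_id))), float(score))
```

Generation is just that step repeated: pick a next token from the scores, append it, and score again. Whether that amounts to "understanding" is exactly the debate in this segment.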
So I do think that effect is going on as well. 00:17:53.820 | 
there's not a lot of transparency from OpenAI. 00:17:57.360 | 
that we don't understand about how this technology works. 00:18:10.560 | 
real companies actually using these interfaces. 00:18:18.520 | 
I saw one concrete thing where the Chegg CEO 00:18:23.960 |
said their sales went down, and they attributed it to ChatGPT. 00:18:41.380 |
It makes their model, where you buy the pre-written solutions, less valuable, 00:18:47.760 |
because you can just have ChatGPT generate a bunch of those essays. 00:18:53.440 |
that these responses aren't pretty easily identifiable. 00:18:59.840 | 
it's not hard for teachers to identify the tics 00:19:06.440 |
So it's like, how worried does that make you? 00:19:10.400 | 
- But these are the type of things we should be looking for. 00:19:17.920 | 
They have a customized version that helps you 00:19:24.280 | 
or help you understand library interfaces or this or that. 00:19:34.960 | 
was when the common integrated development environments 00:19:34.960 |
Here's all the parameters you need to give me. 00:19:53.320 | 
Or if you have an object in object-oriented programming 00:19:55.920 |
and you're like, I don't know the methods for this object. 00:19:57.760 | 
You just, you type the object name, press period. 00:20:00.300 | 
Oh, here's all the different things you can do with this. 00:20:01.920 | 
And you select one and it says, oh, here is the, 00:20:06.160 | 
And like, I noticed this when I'm coding my Arduino 00:20:15.760 | 
Instead of having to go look up how to do that, 00:20:19.760 | 
And it's like, oh, here's all the different functions 00:20:25.720 | 
You give it the center and the radius and the color. 00:20:37.840 | 
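As a rough illustration of what that kind of autocomplete is surfacing (a Python analogue, since the Arduino example here is C++; the object and method below are arbitrary examples, not that code), the editor is essentially listing an object's available methods and the parameters each one expects:

```python
# Rough Python analogue of what IDE autocomplete surfaces when you "type the
# object name and press period": the methods you can call, and the parameters
# each one needs. The object here is just an example.
import inspect
from pathlib import Path

obj = Path("sketch.ino")

# "Here's all the different things you can do with this" -- the public methods:
print([name for name in dir(obj) if not name.startswith("_")][:8])

# "Here's all the parameters you need to give me" -- one method's signature:
print(inspect.signature(Path.rename))   # -> (self, target)
```

Real IDEs compute this from the language's type and symbol information rather than runtime introspection, but what they show the programmer is the same kind of thing.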
But it was a huge win, just like a version control 00:20:44.000 | 
much more productive for software developers. 00:20:52.120 | 
the LLM impact on coding is like those things. 00:20:55.640 | 
Like, wow, a bunch of things got a lot easier. 00:21:06.780 | 
But it's a march towards making our life easier. 00:21:09.060 | 
So there's a future in which these are the type 00:21:13.840 | 
And so I'm waiting to see, I wanna see concrete. 00:21:21.680 | 
You should talk about it, regardless of what happens, 00:21:23.460 | 
focus on the impacts, filter the predictions. 00:21:34.520 | 
We'll be back next week with another episode of the show.