
Are We *Too* Worried About Artificial Intelligence?


Chapters

0:00 Cal's intro
0:35 Tyler Cowen's article
7:20 Who to trust
10:25 AI money
16:00 Need to get concrete

Transcript

All right, Jesse, let's switch gears for our Something Interesting segment. Let's return to AI. We talked about ChatGPT a few weeks ago in my article for The New Yorker about how it actually works. Now I wanna return to the topic of how we should think about the AI revolution happening now.

How worried should we be? What is, philosophically, the right approach to this current moment? The article I wanna use as our foundation here, what got me thinking about this, is the article I'm bringing up on the screen right now. So if you're watching, this is episode 247, and you can find that at youtube.com/calnewportmedia, or if you don't like YouTube, at thedeeplife.com, just look for episode 247, and you can find this segment, so you can see the article on the screen.

If you're listening, I'll narrate what I'm talking about. So the article I'm talking about here is titled, "There is no turning back on AI." It was written by the economist Tyler Cowen, a professor nearby here at George Mason University and a prolific writer of public-facing books. This is from May 4th, and he published this in Bari Weiss's newsletter, The Free Press.

Right, so I wanna highlight a few things from this, and then I'm gonna riff on it. All right, so here's the first point. I'm reading this from Tyler's article. Artificial intelligence represents a truly major transformational technological advance. All right, so he's starting right off and saying this is a big deal.

All right, he goes on to say, "In my view, however, the good will considerably outweigh the bad." He makes a comparison here to Gutenberg. He says, "I am reminded of the advent of the printing press after Gutenberg. Of course, the press brought an immense amount of good." It enabled the scientific and industrial revolutions, among other benefits, "But it also created writings by Lenin, Hitler, and Mao's Red Book." Right, so the printing press brought good, it brought bad, but in the end, the good outweighed the disruptions and negativity that came along with it.

So he goes on to say, "We don't know how to respond psychologically or for that matter, substantively. And just about all of the responses I am seeing, I interpret as copes, whether from the optimist, the pessimist, or the extreme pessimist." All right, so this is the setup for Tyler's article.

He says, "I think there's this disruptive change is coming. It's gonna be like Gutenberg. The good will eventually outweigh the bad." But he is making the claim that as a culture right now, we are not psychologically handling well this reality. We don't know how to respond to it. And so he is gonna move on now with a critique of how we are responding to what he sees to be that reality.

So here's his critique. The first part of his critique of our current response is saying, "No one is good at predicting the longer or even medium term outcomes of radical technological changes." No one, not you, not Eliezer. He's referencing here someone who's on the extreme end of X-risk: AI is about to take over the world and enslave us.

Not Sam Altman, and not your next-door neighbor. So he's arguing, "We are very bad at predicting the impacts of disruptive technologies in the moment." He makes it clear, here are his examples: "How well did people predict the final impacts of the printing press? How well did people predict the impacts of fire?" He's saying they didn't.

In the moment, it's very hard to understand what's gonna happen. And so Cowen's making this point. We know this disruptive thing feels like it's about to happen, but we're not handling it well, in part because we're missing the reality that it is very difficult to predict what really happens.

And a lot of the reactions right now are what he calls copes, which are based off of very specific predictions about what will happen or what definitely won't happen. And he thinks that's all just psychologically bankrupt. It's not really based on reality. We're just making predictions we have no right to make and then reacting to those.

All right, then he goes on to apply this critique specifically to the existential risks of things like artificial general intelligence. When it comes to people who are predicting this high degree of existential risk, he says, "I don't actually think arguing back on their chosen terms is the correct response.

Radical agnosticism is the correct response, where all specific scenarios are pretty unlikely. I'm still for people doing constructive work on the problem of alignment, just as we do for all other technologies to improve them." But he's making the case, you don't need to be worried about that. People should work on these issues, but the only actual intellectually consistent position on something like the existential risk of AI is this radical agnosticism.

I know a lot of stuff is possible. All of it's pretty unlikely. There's a lot of other existential risk in our lives that falls in that same category, and we could put this in a similar place. So he goes on to say, "I'm a bit distressed each time I read an account of a person arguing himself or arguing herself into existential risk from AI being a major concern.

No one can foresee those futures. Once you keep up the arguing, you also are talking yourself into an illusion of predictability." And he goes on to say, "Once you're trying to predict a future, it's easier to predict a negative future than a positive one, because positive futures are bespoke.

They're built on very specific things leading to other very specific things. That's really hard to imagine. It's much easier to say it all collapses; that's an easier prediction to make." So for this particular issue of existential risk from AI, he says, "It is indeed a distant possibility, just like every other future you might be trying to imagine.

All the possibilities are distant. I cannot stress that enough. The mere fact that AGI risk can be put on par with those other also-distant possibilities simply should not impress you very much." There are a lot of potential futures where negative things happen. We're already used to that. AI doesn't add a particularly new thing to that landscape.

So in the end, when he's thinking about how we should consider AI, he says, "Look, if someone is obsessively arguing about the details of AI technology today, or arguments they read on a blog like 'Less Wrong' from 11 years ago, they won't see this. But don't be suckered into taking their bait.

The longer historical perspective you take, the more obvious this point will be." So let me step back here for a second. What he's arguing, and I agree with it, is I think an unusually pragmatic take on this issue, given our current cultural moment. Let me summarize everything I just said.

We are bad at predicting the impacts of technologies. So don't trust people who are being very specific about what's gonna happen with AI and then trying to react to those predictions. We can't figure that out with AI, just like in the 1450s, Johannes Gutenberg couldn't even look 20 years into the future to see what the impact of the printing press was gonna be.

So here's the problem I see, and this is what I think Cowen is very right about. The problem I see with the current discussion about artificial intelligence and its impact is that a lot of people are looking at often cherry-picked examples of these technologies at work. And because these are linguistic examples, if we're talking about chatbot examples, they feel very close to us as human beings.

We then try to extrapolate what type of mind could produce the type of thing I'm seeing in this example. We might imagine, you know, my four-year-old couldn't do that. Maybe my 13-year-old could answer those questions. So maybe this thing is like a 13-year-old's mind in there. Once we've imagined the type of mind that could produce the type of things we've seen, we then imagine the type of impacts that that type of imaginary mind might have.

Well, if we had that type of mind, it could do this and this and that. Now we've created imagined scenarios based off of imagined understandings of the technology, and we treat that like it will happen, and then we get worried about those scenarios. And this is exactly what Cowen is saying we shouldn't do.

These aren't actually strong predictions. These are thought experiments. If we had a mind that could do this, what types of damage could it wreak? And then we're getting upset about the impacts of those thought experiments. So what I think we should do instead, culturally speaking, is stop reacting to thought experiments and start reacting to actual impacts.

I don't necessarily wanna hear any more stories about, well, in theory, a chatbot that could do this could also do that, and if it could do that, this industry could disappear. I think for the broader public, what you should filter for is tangible impacts. This industry changed. This job doesn't exist anymore.

This company just fired 30% of its staff. When you see actual tangible impacts, not predictions about what hypothetical minds might wreak, that's what you should filter for. That's what you should use to refine your understanding of whatever ongoing change is happening, and adjust accordingly. I think that's the filtering we have to do now.

And look, there's no shortage of people actually attempting things that will generate real impacts. I spoke on a panel a couple of weekends ago out in San Francisco on generative AI. I spoke on the panel with one of the VCs who funded OpenAI, and he was saying OpenAI is already on track to bring in more than $100 million in revenue, commercial revenue from people and companies paying for API access to their GPT-4 back-end language model.

It's possible they'll be on track for billion-dollar annual revenue within a year. It's an amazingly fast climb, but the point is, there are a ton of people investing money to try to use this technology and integrate it into their work. So it's not as if there is a shortage of people doing things that could generate impacts.
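Just to make "paying for API access" concrete, here's a minimal sketch of what one of those integrations looks like, calling OpenAI's chat completions endpoint directly over HTTP. The summarize-a-support-ticket use case and the prompt are illustrative assumptions on my part, not anything from the panel; the shape of the request is the part that's real.

```python
import os
import requests

# Minimal sketch: a company paying for API access typically wraps a call
# like this inside its own product (summarizing tickets, drafting emails, etc.).
API_URL = "https://api.openai.com/v1/chat/completions"
API_KEY = os.environ["OPENAI_API_KEY"]  # every request is metered and billed

def summarize_ticket(ticket_text: str) -> str:
    """Illustrative use case only: ask the model to summarize a support ticket."""
    response = requests.post(
        API_URL,
        headers={"Authorization": f"Bearer {API_KEY}"},
        json={
            "model": "gpt-4",
            "messages": [
                {"role": "system", "content": "Summarize the customer's issue in two sentences."},
                {"role": "user", "content": ticket_text},
            ],
        },
        timeout=60,
    )
    response.raise_for_status()
    return response.json()["choices"][0]["message"]["content"]

if __name__ == "__main__":
    print(summarize_ticket("My order #1234 arrived damaged and support hasn't replied in a week."))
```

Every call like that is billed per token, which is where that API revenue comes from.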

So I think the time is right then. My advice is: let's filter for actual impacts. Unless you're in a very rarefied position where you specifically need to predict what's gonna happen to your company in the next six months, filter for impacts. If you hear someone say, "This could happen because of this example," change that to the Charlie Brown voice.

Wah, wah, wah, wah, wah. What you wanna hear is, "This company just shut down." That's interesting. That's a data point you should care about. "This publishing imprint just fired all of its authors." Ooh, that's a data point. Okay, now I'm starting to understand what's going on. 'Cause there's no real reason to get upset about these predictions that are based off hypothetical minds, 'cause most of them won't come true.

This is Cognitive Behavioral Therapy 101; it's the "predicting the future" cognitive distortion. It's not worth getting upset about predicted futures. It's much better to confront things that have actually happened. Now, there is one place where I disagree somewhat with Cowen. I think Cowen takes it for granted that we know now there will be a major, Gutenberg-style disruption.

He takes that for granted, and then, I think very wisely, says you kind of have to just go along for the ride, react to actual impacts, not the prediction of impacts, et cetera, et cetera. I'm not convinced yet that a major disruption is gonna happen. I'm not convinced that it's not gonna happen either, but until it's disproved, we still need to take seriously what I call the AI null hypothesis.

The AI null hypothesis is the claim that the ultra-large language model revolution that kicked off a few years ago with GPT-3 is not, over the next five or seven years, actually going to make a notable impact on most people's lives. That hypothesis has not yet been disproven.

The way it will be disproven, and this is our Karl Popper moment here, is when actual impacts, not predictions of impacts, not thought experiments about if it could do this and it could do that, but actual impacts show up that begin having a material effect on the day-to-day experience of people in their lives.

But until we get there, we do have to keep that as one of the possibilities going forward. And based on what I know right now, I would say it's not a given. It's not even necessarily the likely outcome, but the percentage chance that the AI null hypothesis proves true is probably somewhere between 10 and 50%.

It's non-trivial, and so I think we do have to keep that in mind. It's possible that these ultra-large language models, though impressively linguistic, turn out to be more limited than we think once we try to make them focused and bespoke for particular problems: the issues with hallucination, the issues with non-conceptual thinking, the limits that emerge because, under the hood, what you're doing is token guessing with the goal of producing grammatically correct sentences that match content and style cues.

Maybe the computational expense turns out to be larger than it's worth. Maybe OpenAI gets a couple hundred million dollars' worth of API subscriptions, and then it dwindles back down again, because it turns out, "Huh, this is not really opening up something I wasn't already able to more or less do, either with a bespoke AI model or by just hiring someone to do it." And that's very possible.

Maybe we're actually at the peak right now: a hundred million people just liking to use the chatbot and being impressed by what it does. I'm not saying that's gonna happen, but it's also not a weird thesis. So I do think the AI null hypothesis itself is something that's still on the table.

So we have a wide range of possibilities here, from the AI null hypothesis to the most extreme AGI scenarios, where we're a couple of years away from being enslaved by computers. We have a whole spectrum here, and we're not very certain at all. So I wanted to throw that out there.

But here's the big thing I think is important, and the big takeaway I want for people who are thinking about how to think about AI: think less about and react less to the chatter right now, and filter for actual impacts. And I think there's at least a 50% chance that Cowen is right and those impacts will come.

And it's best not to fight them; there's nothing you can do about them. Be aware of the actual impacts, adjust as needed, hold on for the ride; you'll probably end up better off than worse off. But also be ready, if you're looking for these impacts, for the possibility that they never really come at a rate that affects your life; that's equally possible.

Where I completely agree with Cowen, however, is that unless, again, you're really plugged into this world as part of your job, do not spend much time debating with, arguing with, or trying to understand people who are making very specific predictions about what hypothetical minds might be able to do. We don't know what's gonna happen yet.

I'm willing to wait and see. I think it's important to recognize that Gutenberg did change everything, but we don't remember all of the other innovations and inventions that were really exciting at the time but never ended up fundamentally destabilizing the world. There are more of those than the former. So let's be wary, but let's get more concrete.

Let's stay away from those weird YouTube channels where people are just yelling about everything that's gonna happen, good or bad. We'll take Cowen's advice: we'll take in the concrete stuff, we'll roll with the punches. It'll be interesting, a little bit nerve-wracking, to see what those punches are.

I don't know what they're gonna be yet, and don't trust anyone who says that they do. There we go, Jesse, the AI null hypothesis. No one's talking about that because it's not exciting. You don't get YouTube views for it, and there are no article clicks that come from it. But I'll tell you this: a lot of people in my network send me ChatGPT fails, for example.

And one of the things I think is going on here is there's this highly curated element to what people are seeing. People generate these really cool examples, and then you see a bunch of those really cool examples, and then you go down the rabbit hole of what type of mind could do this, and well, if it could do that, what else could that mind do, and then you react to that.

But I get sent all sorts of fails, these models just failing to do basic useful things. Because in the end, again, these are not conceptual models; they're token predictors. At least in the chatbot context, they're just trying to generate text that's grammatically correct, built from existing text, and that matches the key features of the query.

That's what it does. And OpenAI talks about it as a reasoning agent. They talk about some of the things these models have to learn to do in order to win at the text-guessing game. They're like, well, it's as if it understands this or that, because it has to be able to understand these things to do better at predicting the text.
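To make the "token predictor" framing concrete, here's a minimal sketch of the autoregressive loop these models run at generation time. The toy next_token_probabilities function is a made-up stand-in for the real neural network; the point is only that the output gets built one plausible next token at a time.

```python
import random

# Toy stand-in for the real model: given the text so far, return a
# probability distribution over candidate next tokens. In a real LLM this
# is a giant neural network; here it's just a hard-coded table.
def next_token_probabilities(context: str) -> dict:
    if context.endswith("The capital of France is"):
        return {" Paris": 0.93, " Lyon": 0.05, " purple": 0.02}
    return {".": 0.6, " and": 0.3, " the": 0.1}

def generate(prompt: str, max_tokens: int = 5) -> str:
    text = prompt
    for _ in range(max_tokens):
        probs = next_token_probabilities(text)
        # Sample the next token in proportion to its probability, append it,
        # and repeat. No plan, no concept: just one likely token after another.
        tokens, weights = zip(*probs.items())
        text += random.choices(tokens, weights=weights)[0]
    return text

print(generate("The capital of France is"))
```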

But in the end, all it's doing is predicting text. And there are lots of interactions that lots of people have had that are non-useful and non-impressive, but they just don't post those on Twitter. So I do think that effect is going on as well. You know, none of this is helped by the fact that there's not a lot of transparency from OpenAI.

There are a lot of questions we have about how this technology works. There's not a lot out there yet on how people are using it concretely. We get the chatter on social media, we get the chatter on YouTube, but I'm trying to work on this topic now:

Boots on the ground, real companies actually using these interfaces. Are they seeing transformative change, or is it pretty minor? We just don't have a good sense yet. - Yeah. I saw one concrete thing where the Chegg CEO, like that online homework thing, said their sales went down and attributed it to that.

But that's whatever. - Yeah. So, okay. These are the things we should focus on. Like, what are the concrete impacts? It's good at generating text on topics, so maybe that's gonna make the places where you buy pre-written solutions less valuable, because you can now, I guess, create mills where, instead of paying students, you just have ChatGPT generate a bunch of those essays.

You know, maybe. I'm also not convinced, however, that these responses aren't pretty easily identifiable. At some point, it's not hard for teachers to identify the tics of these models versus their students. And yeah, so there we go. But that's a concrete thing. So it's like, how worried does that make you?

Like, well, that thing by itself, not so much. - Yeah. - But these are the types of things we should be looking for. Yeah. I mean, the other thing I thought about is, OpenAI is real big on the fact that it does a lot of useful stuff for coders. They have a customized version that helps you if you're coding, and it can generate early versions of code or help you understand library interfaces, this or that.

But what came to mind is, you know, the biggest productivity boost in coding in the last 20 years was when the common integrated development environments like Eclipse introduced, I don't know the exact terminology for it, but it's where it auto-fills or shows you, for example: oh, you're typing a function.

Here are all the parameters you need to give it. You know, so you don't have to look it up in the documentation. Or if you have an object in object-oriented programming and you're like, I don't know the methods for this object, you just type the object name and press period.

And there's a big list: oh, here are all the different things you can do with this. And you select one and it says, oh, here are the parameters. And like, I noticed this when I'm coding on my Arduino with my son, building video games on the Arduino. It's really useful.

It's like, oh, I need to draw a circle. Instead of having to go look up how to do that, you just start typing "draw." And it's like, oh, here are all the different functions that start with "draw." Oh, here's draw circle. You click on it. It's like, okay, here are the parameters.

You give it the center and the radius and the color. Like, great, I don't have to look this up. That's a huge productivity win; it makes programming much easier. No one thought about that as, is this industry gonna be here? But it was a huge win, just like version control was a huge win and made things much more productive for software developers.
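For a rough sense of what that autocomplete is doing under the hood, here's a small sketch using Python's own introspection. The Display class and its draw_circle method are made-up stand-ins for an Arduino graphics library, not the real API; the listing of methods and parameters is the part an IDE automates for you.

```python
import inspect

# Made-up stand-in for a graphics library object, just to illustrate
# what an IDE surfaces when you type "display." and pause.
class Display:
    def draw_circle(self, center_x: int, center_y: int, radius: int, color: int) -> None:
        """Draw a circle at (center_x, center_y) with the given radius and color."""

    def draw_line(self, x0: int, y0: int, x1: int, y1: int, color: int) -> None:
        """Draw a line from (x0, y0) to (x1, y1)."""

display = Display()

# "Type the object name, press period, and there's a big list":
methods = [name for name in dir(display) if not name.startswith("_")]
print(methods)  # ['draw_circle', 'draw_line']

# "You select one and it says, here are the parameters":
print(inspect.signature(display.draw_circle))
# (center_x: int, center_y: int, radius: int, color: int) -> None
```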

But it wasn't a question of, will this field still exist? And so there's definitely a future where the LLM impact on coding is like those things. Like, wow, a bunch of things got a lot easier. I'm really happy, I'm glad that got easier. It reminds me of when GitHub came around or IDEs started doing autofill.

Was it an existential risk to our industry? No, but it was a march towards making our lives easier. So there's a future in which these are the types of things we're talking about. And so I'm waiting to see; I wanna see concrete things. So we'll find out, but anyways, the AI null hypothesis is possible.

You should talk about it. Regardless of what happens, focus on the impacts and filter out the predictions. People like making predictions, but a lot of them are nonsense. All right, speaking of nonsense, we should probably wrap this up, Jesse. Thank you, everyone who listened. We'll be back next week with another episode of the show.

And until then, as always, stay deep. (upbeat music)