
The AI Revolution: How To Get Ahead While Others Panic | Cal Newport


Chapters

0:00 AI null hypothesis
7:30 The current AI discussion
11:20 Cal's analysis of Tyler Cowen


00:00:00.000 | How should we think about the AI revolution happening now?
00:00:05.000 | How worried should we be?
00:00:06.520 | What is philosophically the right approach
00:00:09.520 | to this current moment?
00:00:12.020 | The article I wanna use as our foundation here,
00:00:14.280 | what got me thinking about this,
00:00:15.980 | was the article I'm bringing up on the screen right now.
00:00:19.280 | So if you're watching, this is episode 247,
00:00:23.620 | and you can find that at youtube.com/calnewportmedia,
00:00:27.560 | or if you don't like YouTube at thedeeplife.com,
00:00:30.800 | just look for episode 247,
00:00:32.440 | you can find this segment,
00:00:33.720 | so you can see the article on the screen.
00:00:35.760 | If you're listening, I'll narrate what I'm talking about.
00:00:38.000 | So the article I'm talking about here is titled,
00:00:40.840 | "There is no turning back on AI."
00:00:44.880 | It was written by the economist Tyler Cowen,
00:00:48.520 | a professor nearby here at George Mason University,
00:00:51.360 | and a prolific writer of public-facing books.
00:00:54.760 | This is from May 4th,
00:00:57.120 | and he published this in Bari Weiss's newsletter,
00:01:00.580 | The Free Press.
00:01:02.680 | Right, so I wanna highlight a few things from this,
00:01:04.480 | and then I'm gonna riff on it.
00:01:06.340 | All right, so here's the first point,
00:01:09.120 | I'm reading this from Tyler's article.
00:01:10.960 | "Artificial intelligence represents a truly major
00:01:15.440 | transformational technological advance."
00:01:19.480 | All right, so he's starting right off
00:01:20.480 | and saying this is a big deal.
00:01:26.280 | All right, he goes on to say,
00:01:28.320 | "In my view, however,
00:01:30.080 | the good will considerably outweigh the bad."
00:01:34.200 | He makes a comparison here to Gutenberg.
00:01:37.640 | He says, "I am reminded of the advent
00:01:39.360 | of the printing press after Gutenberg.
00:01:42.220 | Of course, the press brought an immense amount of good,
00:01:46.640 | it enabled the scientific and industrial revolutions
00:01:48.560 | among other benefits,
00:01:49.480 | but it also created writings by Lenin, Hitler,
00:01:51.880 | and Mao's Red Book."
00:01:53.240 | Right, so the printing press brought good,
00:01:56.080 | it brought bad, but in the end,
00:01:57.240 | the good outweighed the disruptions and negativity
00:02:02.240 | that came along with it.
00:02:04.140 | So he goes on to say,
00:02:08.840 | "We don't know how to respond psychologically
00:02:11.900 | or for that matter, substantively."
00:02:13.880 | And just about all of the responses I am seeing,
00:02:17.960 | I interpret as copes,
00:02:19.400 | whether from the optimist, the pessimist,
00:02:21.320 | or the extreme pessimist.
00:02:24.360 | All right, so this is the setup for Tyler's article.
00:02:26.920 | He says, "I think this disruptive change is coming,
00:02:28.920 | it's gonna be like Gutenberg.
00:02:30.880 | The good will eventually outweigh the bad."
00:02:33.240 | But he is making the claim that as a culture right now,
00:02:37.440 | we are not psychologically handling well this reality.
00:02:41.820 | We don't know how to respond to it.
00:02:44.640 | And so he is gonna move on now with a critique
00:02:47.480 | of how we are responding to what he sees to be that reality.
00:02:52.480 | So here's his critique.
00:02:53.960 | The first part of his critique of our current response
00:02:56.200 | is saying, "No one is good at predicting the longer
00:03:01.200 | or even medium term outcomes
00:03:03.760 | of radical technological changes."
00:03:06.920 | No one, not you, not Eliezer.
00:03:11.360 | He's referencing here someone who's on the extreme X-risk end,
00:03:17.480 | arguing AI is about to take over the world and enslave us.
00:03:21.120 | Not Sam Altman and not your next door neighbor.
00:03:24.600 | So he's arguing, "We are very bad at predicting
00:03:27.760 | the impacts of disruptive technologies in the moment."
00:03:30.800 | He makes it clear, here's his examples.
00:03:32.560 | "How well do people predict the final impacts
00:03:34.640 | of the printing press?
00:03:35.760 | How well did people predict the impacts of fire?"
00:03:37.760 | He's saying they didn't.
00:03:38.960 | In the moment, it's very hard to understand
00:03:40.760 | what's gonna happen.
00:03:42.600 | And so Cowen's making this point.
00:03:45.680 | We know this disruptive thing feels like
00:03:47.160 | it's about to happen, but we're not handling it well
00:03:50.280 | in part because we're missing the reality
00:03:53.120 | that it is very difficult to predict what really happens.
00:03:57.720 | And a lot of the reactions right now
00:04:00.080 | are what he calls copes,
00:04:01.680 | which are based off of very specific predictions
00:04:04.060 | about what will happen or what definitely won't happen.
00:04:06.240 | And he thinks that's all just psychologically bankrupt.
00:04:11.120 | It's not really based on reality.
00:04:12.680 | We're just making predictions we have no right to make
00:04:16.280 | and then reacting to those.
00:04:18.280 | All right, then he goes on to apply this critique
00:04:20.360 | specifically to the existential risks of things
00:04:23.640 | like artificial general intelligence.
00:04:26.440 | He says, "When it comes to people who are predicting
00:04:30.320 | this high degree of existential risk," he says,
00:04:32.280 | "I don't actually think arguing back on their chosen terms
00:04:36.040 | is the correct response.
00:04:37.840 | Radical agnosticism is the correct response,
00:04:41.600 | where all specific scenarios are pretty unlikely.
00:04:44.400 | I'm still for people doing constructive work
00:04:47.320 | on the problem of alignment,
00:04:48.280 | just as we do for all other technologies to improve them."
00:04:51.080 | But he's making the case,
00:04:52.960 | you don't need to be worried about that.
00:04:55.680 | People should work on these issues.
00:04:57.500 | But the only actual intellectually consistent position
00:05:00.820 | on something like the existential risk of AI
00:05:02.720 | is this radical agnosticism.
00:05:03.920 | I know a lot of stuff is possible.
00:05:05.600 | All of it's pretty unlikely.
00:05:07.080 | There's a lot of other existential risk in our life
00:05:09.080 | where it's in that same category.
00:05:10.840 | And we could put this in a similar place.
00:05:14.720 | So he goes on to say, "I'm a bit distressed
00:05:16.400 | each time I read an account of a person arguing himself
00:05:19.840 | or arguing herself into existential risk
00:05:22.120 | from AI being a major concern.
00:05:24.680 | No one can foresee those futures.
00:05:28.040 | Once you keep up the arguing,
00:05:30.120 | you also are talking yourself
00:05:31.400 | into an illusion of predictability."
00:05:34.400 | And he goes on to say,
00:05:35.240 | "Once you're trying to predict a future,
00:05:36.700 | it's easier to predict a negative future
00:05:38.320 | than a positive future,
00:05:39.520 | because positive futures are bespoke.
00:05:41.960 | They're built on very specific things
00:05:44.320 | leading to other very specific things.
00:05:45.560 | That's really hard to try to imagine.
00:05:47.280 | It's much easier to say it all collapses.
00:05:49.660 | That's an easier prediction to make."
00:05:52.320 | So he goes on to say,
00:05:55.520 | "Basically, so for this particular issue
00:05:57.380 | of existential risk from AI,"
00:06:00.000 | he says, "It is indeed a distant possibility,
00:06:03.240 | just like every other future
00:06:04.640 | you might be trying to imagine.
00:06:05.840 | All the possibilities are distant.
00:06:07.200 | I cannot stress that enough.
00:06:08.840 | The mere fact that AGI risk can be put on par
00:06:11.440 | with those other also distant possibilities
00:06:13.400 | simply should not impress you very much."
00:06:15.640 | There's a lot of potential futures
00:06:17.040 | where negative things happen.
00:06:18.320 | We're already used to that.
00:06:20.000 | AI doesn't have a particularly new thing
00:06:22.160 | to offer to that landscape.
00:06:24.880 | So in the end, when he's thinking about
00:06:26.280 | how do we consider AI,
00:06:28.960 | he says, "Look, if someone is obsessively arguing
00:06:32.080 | about the details of AI technology today,
00:06:34.720 | or arguments they read on a blog like 'Less Wrong'
00:06:37.600 | from 11 years ago, they won't see this.
00:06:40.100 | But don't be suckered into taking their bait.
00:06:42.960 | The longer historical perspective you take,
00:06:45.280 | the more obvious this point will be."
00:06:47.340 | So let me step back here for a second.
00:06:50.540 | What he's arguing, which I agree with,
00:06:54.280 | I think is an unusually pragmatic take on this issue,
00:06:59.280 | given our current cultural moment,
00:07:01.520 | so let me summarize everything I just said.
00:07:04.120 | We are bad at predicting the impacts of technologies,
00:07:07.020 | so don't trust people who are being very specific
00:07:09.640 | about what's gonna happen with AI
00:07:12.360 | and then trying to react to those.
00:07:14.360 | We can't figure that out with AI,
00:07:17.280 | just like in the 1450s, Johannes Gutenberg
00:07:21.080 | couldn't even look 20 years into the future
00:07:22.700 | to see what the impact of the printing press was gonna be.
00:07:25.880 | So the problem I see,
00:07:28.360 | this is what I think Cowen is very right about.
00:07:30.400 | The problem I see with the current discussion
00:07:32.420 | about artificial intelligence and its impact
00:07:34.480 | is that what a lot of people are doing
00:07:36.720 | is they're looking at often cherry-picked examples
00:07:40.780 | of these technologies at work.
00:07:42.600 | And because these are linguistic examples,
00:07:44.480 | if we're talking about chatbot examples,
00:07:46.280 | they feel very close to us as human beings.
00:07:51.280 | We then try to extrapolate what type of mind
00:07:55.780 | could produce the type of thing I'm seeing in this example.
00:07:58.120 | We might imagine, you know,
00:07:59.000 | my four-year-old couldn't do that.
00:08:00.680 | Maybe my 13-year-old could answer those questions.
00:08:03.580 | So maybe this thing is like a 13-year-old's mind in there.
00:08:06.480 | Once we've imagined the type of mind
00:08:08.800 | that could produce the type of things we've seen,
00:08:11.080 | we then imagine the type of impacts
00:08:13.760 | that that type of imaginary mind might have.
00:08:16.460 | Well, if we had that type of mind,
00:08:17.820 | it could do this and this and that.
00:08:19.220 | Now we've created imagined scenarios
00:08:21.080 | based off of imagined understandings of the technology,
00:08:23.720 | and we treat that like this will happen,
00:08:27.600 | and then we get worried about those scenarios.
00:08:30.360 | This is exactly what Cowen is saying that we shouldn't do.
00:08:32.280 | These aren't actually strong predictions.
00:08:35.080 | These are thought experiments.
00:08:36.640 | If we had a mind that could do this,
00:08:38.600 | what types of damage could it wreak?
00:08:40.120 | And then we're getting upset
00:08:41.000 | about the impacts of those thought experiments.
00:08:44.120 | So what I think we should do instead, culturally speaking,
00:08:48.800 | is stop reacting to thought experiments
00:08:51.320 | and start reacting to actual impacts.
00:08:53.340 | I don't necessarily want to hear any more stories about,
00:08:56.920 | well, in theory, a chatbot that could do this
00:09:00.300 | could also do that, and if it could do that,
00:09:01.720 | this industry could disappear.
00:09:03.600 | I think for the broader public,
00:09:05.060 | what you should filter for is tangible impacts.
00:09:09.140 | This industry changed.
00:09:10.820 | This job doesn't exist anymore.
00:09:13.800 | This company just fired 30% of its staff.
00:09:17.020 | When you see actual tangible impacts,
00:09:19.280 | not predictions about what hypothetical minds might wreak,
00:09:22.500 | that's what you should filter for.
00:09:24.160 | That's what you should use to refine your understanding
00:09:26.360 | of whatever ongoing change is happening
00:09:28.220 | and adjust accordingly.
00:09:29.820 | I think that's the filtering we have to do now.
00:09:32.120 | And look, there's no shortage of people
00:09:34.960 | actually attempting things that will generate real impacts.
00:09:38.600 | I spoke on a panel a couple of weekends ago
00:09:40.980 | out in San Francisco on generative AI.
00:09:43.140 | I spoke on the panel with one of the VCs that funded OpenAI,
00:09:46.600 | and he was saying OpenAI is already
00:09:49.980 | on track to bring in more than $100 million in revenue,
00:09:53.360 | commercial revenue of people and companies
00:09:55.600 | paying for API access to their GPT-4 backend language model.
00:10:01.740 | It's possible they'll be on track for a billion dollars
00:10:04.640 | in annual revenue within a year.
00:10:06.400 | It's an amazingly fast climb.
00:10:07.600 | But the point is there is a ton of people
00:10:11.760 | investing money to try to use this technology
00:10:13.840 | and integrate it into their work.
00:10:15.000 | So it's not as if there is a shortage of people
00:10:18.580 | doing things that could generate impacts.
00:10:21.000 | So I think the time is right then.
00:10:23.480 | My advice is let's filter for actual impacts.
00:10:25.480 | Unless you're in a very rarefied position
00:10:28.160 | where you specifically need to make predictions
00:10:30.440 | about what's gonna happen to your company
00:10:31.780 | in the next six months, filter for impacts.
00:10:35.020 | If you hear someone say,
00:10:37.360 | "This could happen because of this example,"
00:10:40.160 | change that to the Charlie Brown voice.
00:10:42.520 | Wah, wah, wah, wah, wah.
00:10:43.640 | What you wanna hear is this company just shut down.
00:10:46.600 | That's interesting.
00:10:47.880 | That's a data point you should care about.
00:10:50.400 | This publishing imprint just fired all of its authors.
00:10:53.400 | Ooh, that's a data point.
00:10:54.640 | Okay, now I'm starting to understand what's going on.
00:10:57.040 | 'Cause there's no real reason to get upset
00:10:59.560 | about these predictions
00:11:01.960 | that are based off hypothetical minds,
00:11:03.160 | 'cause most of them won't come true.
00:11:05.560 | This is Cognitive Behavioral Therapy 101.
00:11:08.520 | It's called the "predicting the future" cognitive distortion.
00:11:11.640 | It's not worth getting upset about predicted futures.
00:11:13.800 | It's much better to confront things
00:11:15.480 | that actually have happened.
00:11:17.920 | Now, there is one place where I disagree some with Cowen.
00:11:23.000 | So I think Cowen takes it for granted
00:11:26.440 | that we know now that there will be a major disruption,
00:11:28.480 | Gutenberg style.
00:11:29.960 | He takes that for granted.
00:11:32.080 | And I think very wisely says,
00:11:34.280 | you kind of have to just go along for the ride,
00:11:36.160 | react to actual impacts, not the prediction of impacts,
00:11:38.800 | et cetera, et cetera.
00:11:40.500 | I'm not convinced yet
00:11:42.840 | that a major disruption is gonna happen.
00:11:44.880 | I'm not convinced that it's not gonna happen either,
00:11:47.980 | but we need to still take seriously until disproved
00:11:51.840 | what I call the AI null hypothesis.
00:11:56.300 | The AI null hypothesis is the claim
00:11:59.920 | that the ultra large language model revolution
00:12:04.320 | that kicked off two years ago with GPT-3
00:12:06.340 | in the next five or seven years
00:12:08.780 | is not actually going to make a notable impact
00:12:12.200 | on most people's lives.
00:12:13.760 | That hypothesis has not yet been disproven.
00:12:18.340 | The way that will be disproven,
00:12:22.140 | and this is our Karl Popper here,
00:12:23.980 | the way that that hypothesis will be disproven
00:12:26.780 | is when actual impacts, not predictions of impacts,
00:12:29.900 | not thought experiments about if it could do this,
00:12:31.620 | then it could do that.
00:12:32.520 | When actual impacts do show up
00:12:34.180 | that begin having a material impact
00:12:36.320 | on the day-to-day experience of people in their lives,
00:12:39.100 | then it will be disproven.
00:12:41.240 | But until we get there,
00:12:42.140 | we do have to keep that as one of the possibilities
00:12:44.460 | going forward.
00:12:45.620 | And based on what I know right now,
00:12:47.100 | I would say it's not a given.
00:12:50.700 | It's not even necessarily the likely outcome,
00:12:53.500 | but the percentage chance here
00:12:55.780 | that the AI null hypothesis proves true
00:12:57.700 | is probably somewhere between 10 and 50%.
00:12:59.780 | It's non-trivial.
00:13:00.620 | And so I think we do have to keep that in mind.
00:13:05.140 | It's possible as well that it turns out
00:13:10.140 | that these ultra large language models,
00:13:12.460 | though impressively linguistic,
00:13:13.800 | when we begin to try to make them focused
00:13:15.660 | and bespoke for particular applications,
00:13:18.180 | run into the issues with hallucination,
00:13:19.940 | the issues with non-conceptual thinking,
00:13:22.420 | the limits that turn out to emerge because, under the hood,
00:13:25.900 | what the model is doing is token guessing
00:13:27.380 | with the goal of trying to create
00:13:28.760 | grammatically correct sentences
00:13:30.020 | that match content and style cues,
00:13:32.220 | and that may end up being more limited than we think.
00:13:34.580 | Maybe the computational expense is larger than it's worth.
00:13:37.860 | Maybe OpenAI gets a couple hundred million dollars
00:13:40.360 | worth of API subscriptions,
00:13:41.560 | and then it dwindles back down again
00:13:42.940 | because it turns out,
00:13:43.780 | "Huh, this is not really opening up something
00:13:45.420 | that I wasn't already able to more or less do,
00:13:49.260 | or to use a bespoke AI model,
00:13:50.940 | or to actually just hire someone to do this."
00:13:52.780 | And that's very possible.
00:13:54.920 | And that we're actually at the peak right now
00:13:56.980 | of a hundred million people just liking to use the chatbot
00:14:00.300 | and being impressed by what it does.
00:14:02.060 | I'm not saying that's gonna happen,
00:14:03.100 | but it's also not a weird thesis.
00:14:05.100 | So I do think that needs to be among the hypotheses,
00:14:07.700 | as the AI null hypothesis itself is something
00:14:10.140 | that's still on the table.
00:14:12.180 | So we have a wide range of possibilities here,
00:14:14.500 | from the AI null hypothesis to the most extreme AGI scenarios,
00:14:19.260 | where we're a couple of years away
00:14:20.300 | from being enslaved by computers.
00:14:22.460 | We have a whole spectrum here,
00:14:24.340 | and we're not very certain at all.
00:14:26.360 | So I wanted to throw that out there.
00:14:29.500 | But here's the big thing I think that's important.
00:14:31.020 | And the big takeaway I want for people
00:14:32.340 | who are thinking about how to think about AI
00:14:34.260 | is think less and react less to the chatter right now,
00:14:38.420 | filter for actual impacts.
00:14:40.480 | And I think there's at least a 50% chance
00:14:43.200 | that Cowen is right, those impacts will come.
00:14:45.940 | And it's best not to fight them.
00:14:47.500 | There's nothing you can do about them.
00:14:49.100 | Be aware of the actual impacts, adjust as needed,
00:14:51.540 | hold on for the ride,
00:14:52.500 | probably will end up better off than worse off.
00:14:55.740 | But also be ready, if you're looking for these impacts,
00:14:57.860 | that they may never really come at a rate
00:14:59.700 | that has any impact on your life.
00:15:01.460 | That's equally possible.
00:15:02.940 | Where I completely agree with Cowen, however,
00:15:05.680 | is that unless, again, you're really plugged into this world
00:15:08.620 | and it's part of your job,
00:15:10.400 | do not spend much time debating with, arguing with,
00:15:13.020 | or trying to understand people
00:15:14.220 | who are making very specific predictions
00:15:15.980 | about what hypothetical minds might be able to do.
00:15:19.220 | We don't know what's gonna happen yet.
00:15:21.540 | I'm willing to see.
00:15:22.900 | I think it's important to recognize that
00:15:24.780 | Gutenberg did change everything,
00:15:26.500 | but we don't remember all of the other innovations
00:15:29.780 | and inventions that really were exciting at the time
00:15:32.760 | but then didn't end up fundamentally destabilizing the world.
00:15:36.300 | There's more of those than the former.
00:15:38.140 | So let's be wary, but let's get more concrete.
00:15:42.400 | Let's stay away from those weird YouTube channels
00:15:44.800 | where people are just yelling
00:15:46.100 | about everything that's gonna happen, good or bad.
00:15:48.600 | We'll take Cowen's advice.
00:15:51.900 | We'll take in the concrete stuff.
00:15:53.300 | We'll roll with the punches.
00:15:54.580 | It'll be interesting, a little bit nerve-wracking,
00:15:56.380 | to see what those punches are.
00:15:57.820 | I don't know what they're gonna be yet
00:15:59.740 | and don't trust anyone who says that they do.
00:16:02.060 | There we go, Jesse, the AI null hypothesis.
00:16:06.060 | No one's talking about that because it's not exciting.
00:16:08.820 | You don't get YouTube views for it
00:16:10.980 | and there's no article clicks to come from it.
00:16:14.220 | But I'll tell you this,
00:16:15.580 | I get sent a lot of people who are in my network.
00:16:18.320 | They send me a lot of ChatGPT fails, for example.
00:16:22.940 | And one of the things I think is going on here
00:16:25.060 | is there's this highly curated element
00:16:28.480 | to what people are seeing.
00:16:30.740 | People are generating these really cool examples.
00:16:33.340 | And then you see a bunch of these really cool examples.
00:16:35.180 | And then you go down the rabbit hole
00:16:36.380 | of what type of mind could do this.
00:16:37.740 | And, well, if it could do that, what else could that mind do,
00:16:39.300 | and then you react to that.
00:16:40.780 | But I get sent all sorts of fails
00:16:43.400 | of these models just failing to do basic useful things.
00:16:46.460 | And because in the end, again,
00:16:47.900 | these are not conceptual models, they're token predictors.
00:16:52.180 | They're just trying to generate text,
00:16:54.140 | at least in the chatbot context,
00:16:55.980 | generate text that's grammatically correct,
00:16:59.180 | using existing text to generate it,
00:17:01.320 | that also matches the key features from the query.
00:17:04.540 | That's what it does.
00:17:05.760 | And OpenAI talks about it as a reasoning agent.
00:17:11.860 | They talk about some of the things
00:17:13.700 | that these models have to learn to do
00:17:15.380 | in order to win at the text guessing game.
00:17:18.180 | They're like, well, it's as if it understands this or that
00:17:20.860 | because it has to be able to understand these things
00:17:23.100 | to be able to do better at predicting the text.
00:17:24.740 | But in the end, all it's doing is predicting text.
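To make "token predictor" concrete, here is a minimal sketch of the idea being described: a toy next-word sampler with hand-written probabilities. This is not any real model's code; an actual LLM conditions on the whole context and learns its distribution from data, but the core loop is the same: pick the next token from a probability distribution and append it.

```cpp
// Toy illustration of "token prediction": given the last token, sample the next
// one from a probability distribution. The probabilities here are made up;
// a real LLM learns them and conditions on the entire preceding context.
#include <iostream>
#include <map>
#include <random>
#include <string>
#include <utility>
#include <vector>

int main() {
    // Hand-written "model": next-token probabilities given the previous token.
    std::map<std::string, std::vector<std::pair<std::string, double>>> model = {
        {"the",      {{"printing", 0.5}, {"press", 0.3}, {"future", 0.2}}},
        {"printing", {{"press", 0.9},    {"industry", 0.1}}},
        {"press",    {{"changed", 0.6},  {"brought", 0.4}}},
    };

    std::mt19937 rng(42);
    std::string token = "the";
    std::string text = token;

    // At each step, sample from the distribution for the current token
    // and append the result, until we reach a token the model doesn't know.
    while (model.count(token)) {
        const auto& choices = model[token];
        std::vector<double> weights;
        for (const auto& c : choices) weights.push_back(c.second);
        std::discrete_distribution<std::size_t> dist(weights.begin(), weights.end());
        token = choices[dist(rng)].first;
        text += " " + token;
    }
    std::cout << text << "\n";  // e.g. "the printing press changed"
}
```

Everything the chatbot produces comes out of a loop like this, just with a vastly larger vocabulary and a learned distribution, which is the sense in which it is "predicting text" rather than reasoning over concepts.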
00:17:27.820 | And often, there's lots of interactions
00:17:30.420 | that lots of people have had
00:17:32.300 | that are non-useful and non-impressive,
00:17:34.020 | but they just don't post those on Twitter.
00:17:35.820 | So I do think that effect is going on as well.
00:17:38.860 | None of this is helped by the fact that
00:17:39.700 | there's not a lot of transparency from OpenAI.
00:17:42.060 | There's a lot of questions we have
00:17:42.900 | about how this technology works that we can't yet answer.
00:17:45.740 | There's not a lot yet
00:17:46.700 | on how people are using this concretely.
00:17:48.940 | We get the chatter on social media,
00:17:50.420 | we get the chatter on YouTube,
00:17:51.980 | and I'm trying to work on this topic now.
00:17:55.380 | Boots on the ground:
00:17:56.340 | real companies actually using these interfaces,
00:17:58.580 | are they seeing transformative change
00:18:00.260 | or is it very minor?
00:18:02.020 | We just don't have a good sense yet.
00:18:03.460 | - Yeah.
00:18:04.300 | I saw one concrete thing where the Chegg CEO,
00:18:07.380 | like that online homework thing,
00:18:09.740 | their sales went down and they attributed it to that.
00:18:13.020 | But that's whatever.
00:18:14.300 | - Yeah.
00:18:15.140 | So, okay.
00:18:15.980 | So these are the things we should focus on.
00:18:17.260 | So like, what are concrete impacts?
00:18:20.280 | Maybe students can use it, since
00:18:22.380 | it's good at generating text on topics.
00:18:24.660 | So maybe that's gonna make the places
00:18:27.180 | where you buy the pre-written solutions less valuable
00:18:30.260 | because you can now, I guess,
00:18:31.260 | create mills where,
00:18:32.740 | instead of paying students,
00:18:33.580 | just have ChatGPT generate a bunch of those essays.
00:18:35.540 | You know, maybe.
00:18:38.180 | I'm also not convinced, however,
00:18:39.260 | that these responses aren't pretty easily identifiable.
00:18:43.740 | At some point,
00:18:45.660 | it's not hard for teachers to identify the tics
00:18:47.460 | of these models versus their students.
00:18:49.020 | And yeah, so there we go.
00:18:51.220 | But that's a concrete thing.
00:18:52.260 | So it's like, how worried does that make you?
00:18:53.700 | Like, well, that thing by itself does it.
00:18:55.420 | - Yeah.
00:18:56.260 | - But these are the type of things
00:18:57.080 | we should be looking for.
00:18:58.500 | Yeah.
00:18:59.340 | I mean, the other thing I thought about
00:19:00.260 | is that OpenAI is real big on how
00:19:02.000 | it does a lot of useful stuff for coders.
00:19:03.740 | They have a customized version that helps you
00:19:05.940 | if you're coding and it can generate early versions of code
00:19:10.100 | or help you understand library interfaces or this or that.
00:19:13.740 | But what came to mind is, you know,
00:19:15.280 | the biggest productivity boost in coding
00:19:18.380 | in the last 20 years was when
00:19:21.340 | the common interface development environments like Eclipse
00:19:24.900 | introduced, I don't know the exact terminology for it,
00:19:27.620 | but it's where, you know,
00:19:29.940 | it auto fills or shows you, for example,
00:19:32.000 | here, oh, you're typing a function.
00:19:34.220 | Here's all the parameters you need to give me, you know?
00:19:36.860 | So you didn't have to look it up in the documentation.
00:19:39.100 | Or if you have an object in object-oriented programming
00:19:41.660 | and you're like, I don't know the methods for this object.
00:19:43.500 | You just type the object name, press period.
00:19:45.220 | And there's a big list.
00:19:46.040 | Oh, here's all the different things you can do with this.
00:19:47.660 | And you select one and it says, oh, here is the parameters.
00:19:51.900 | And like, I noticed this when I'm coding my Arduino
00:19:54.700 | with my son, building video games on the Arduino.
00:19:58.460 | It's really useful.
00:19:59.280 | It's like, oh, I need to draw a circle.
00:20:01.500 | Instead of having to go look up how to do that,
00:20:03.580 | you just start typing in draw.
00:20:05.500 | And it's like, oh, here's all the different functions
00:20:07.260 | to start with draw.
00:20:08.080 | Oh, here's draw circle.
00:20:08.920 | You click on it.
00:20:09.740 | It's like, okay, so here's the parameters.
00:20:11.500 | You give it the center and the radius and the color.
00:20:13.980 | Like, great, I don't have to look this up.
00:20:15.380 | That's a huge productivity win.
00:20:17.180 | It makes programming much easier.
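For a sense of what that autocomplete actually surfaces, here is an illustrative snippet. The drawCircle name and its parameter list are made-up stand-ins for whatever graphics library you happen to be using, not a specific Arduino API; the point is that the editor pops up the full signature so you don't have to open the documentation.

```cpp
// Illustrative only: "drawCircle" and its parameters are hypothetical stand-ins
// for a graphics-library call, chosen to mirror the example in the discussion.
#include <cstdint>
#include <iostream>

// This line is essentially what the IDE's autocomplete tooltip shows you
// the moment you type "draw": the full parameter list, in order.
void drawCircle(std::int16_t centerX, std::int16_t centerY,
                std::uint8_t radius, std::uint8_t color) {
    // Stand-in for real drawing: just report what would be drawn.
    std::cout << "circle at (" << centerX << ", " << centerY << ") radius "
              << int(radius) << " color " << int(color) << "\n";
}

int main() {
    // Center of a 128x64 screen, radius 10, color 1: the parameters the
    // tooltip listed, filled in without ever opening the docs.
    drawCircle(64, 32, 10, 1);
    return 0;
}
```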
00:20:19.100 | No one thought about that as,
00:20:21.540 | is this industry gonna be here?
00:20:23.600 | But it was a huge win, just like version control
00:20:27.020 | was like a huge win and made things really
00:20:29.760 | much more productive for software developers.
00:20:32.300 | But it wasn't, will this field still exist?
00:20:35.060 | And so there's definitely a future where
00:20:37.900 | the LLM impact on coding is like those things.
00:20:41.420 | Like, wow, a bunch of things got a lot easier.
00:20:43.780 | I'm really happy.
00:20:44.820 | I'm glad that got easier.
00:20:46.020 | It reminds me of when GitHub came around
00:20:48.000 | or IDEs started doing autofill.
00:20:50.980 | It wasn't an existential risk to our industry,
00:20:52.560 | but a march towards making our life easier.
00:20:54.860 | So there's a future in which these are the type
00:20:57.340 | of things we're talking about.
00:20:59.660 | And so I'm waiting to see.
00:21:01.380 | I wanna see concrete.
00:21:02.780 | - Yeah.
00:21:03.600 | - I wanna see concrete things.
00:21:04.440 | So we'll find out.
00:21:05.260 | But anyways, the AI null hypothesis is possible.
00:21:07.460 | You should talk about it regardless of what happens.
00:21:09.240 | Focus on the impacts, filter the predictions.
00:21:11.980 | People like making predictions,
00:21:13.140 | but a lot of them are nonsense.