
Are We *Too* Worried About Artificial Intelligence?


Chapters

0:00 Cal's intro
0:35 Tyler Cowen's article
7:20 Who to trust
10:25 AI money
16:00 Need to get concrete


00:00:00.000 | All right, Jesse, let's switch gears
00:00:02.000 | for our Something Interesting segment.
00:00:03.940 | Let's return to AI.
00:00:06.220 | We talked about ChatGPT a few weeks ago
00:00:08.600 | in my article for the New Yorker
00:00:09.760 | about how it actually works.
00:00:11.840 | Now I wanna return to the topic of
00:00:14.460 | how should we think about the AI revolution happening now?
00:00:19.460 | How worried should we be?
00:00:21.040 | What is philosophically the right approach
00:00:24.020 | to this current moment?
00:00:26.500 | The article I wanna use as our foundation here,
00:00:28.760 | what got me thinking about this
00:00:30.460 | was the article I'm bringing up on the screen right now.
00:00:33.780 | So if you're watching, this is episode 247,
00:00:38.140 | and you can find that at youtube.com/calnewportmedia,
00:00:42.040 | or if you don't like YouTube at thedeeplife.com,
00:00:44.980 | just look for episode 247, and you can find this segment,
00:00:47.900 | so you can see the article on the screen.
00:00:49.940 | If you're listening, I'll narrate what I'm talking about.
00:00:52.160 | So the article I'm talking about here is titled,
00:00:55.020 | "There Is No Turning Back on AI."
00:00:59.040 | It was written by the economist Tyler Cowen,
00:01:01.840 | who's nearby here at George Mason University,
00:01:05.520 | a professor and prolific writer of public-facing books.
00:01:08.940 | This is from May 4th,
00:01:11.300 | and he published this in Bari Weiss's newsletter,
00:01:14.760 | The Free Press.
00:01:16.880 | Right, so I wanna highlight a few things from this,
00:01:18.660 | and then I'm gonna riff on it.
00:01:20.520 | All right, so here's the first point.
00:01:23.280 | I'm reading this from Tyler's article.
00:01:25.220 | Artificial intelligence represents a truly major
00:01:29.640 | transformational technological advance.
00:01:33.640 | All right, so he's starting right off
00:01:34.640 | and saying this is a big deal.
00:01:37.620 | All right, he goes on to say,
00:01:42.520 | "In my view, however,
00:01:44.240 | the good will considerably outweigh the bad."
00:01:48.380 | He makes a comparison here to Gutenberg.
00:01:51.840 | He says, "I am reminded of the advent
00:01:53.560 | of the printing press after Gutenberg.
00:01:56.400 | Of course, the press brought an immense amount of good.
00:02:00.800 | It enabled the scientific and industrial revolutions,
00:02:02.720 | among other benefits.
00:02:03.600 | But it also created writings by Lenin,
00:02:05.640 | Hitler, and Mao's Red Book."
00:02:07.420 | Right, so the printing press brought good, it brought bad,
00:02:10.840 | but in the end, the good outweighed
00:02:14.800 | the disruptions and negativity that came along with it.
00:02:21.280 | So he goes on to say,
00:02:23.020 | "We don't know how to respond psychologically
00:02:26.080 | or for that matter, substantively.
00:02:27.980 | And just about all of the responses I am seeing,
00:02:32.140 | I interpret as copes, whether from the optimist,
00:02:34.800 | the pessimist, or the extreme pessimist."
00:02:38.560 | All right, so this is the setup for Tyler's article.
00:02:41.120 | He says, "I think this disruptive change is coming.
00:02:43.080 | It's gonna be like Gutenberg.
00:02:45.040 | The good will eventually outweigh the bad."
00:02:47.400 | But he is making the claim that as a culture right now,
00:02:51.640 | we are not psychologically handling well this reality.
00:02:56.000 | We don't know how to respond to it.
00:02:58.800 | And so he is gonna move on now with a critique
00:03:01.640 | of how we are responding to what he sees to be that reality.
00:03:06.640 | So here's his critique.
00:03:08.160 | The first part of his critique of our current response
00:03:10.420 | is saying, "No one is good at predicting the longer
00:03:15.600 | or even medium term outcomes
00:03:18.000 | of radical technological changes."
00:03:21.160 | No one, not you, not Eliezer.
00:03:25.600 | He's referencing here someone who's on the extreme X-risk side:
00:03:31.680 | AI is about to take over the world and enslave us.
00:03:35.320 | Not Sam Altman and not your next door neighbor.
00:03:38.800 | So he's arguing, "We are very bad at predicting
00:03:41.960 | the impacts of disruptive technologies in the moment."
00:03:45.000 | He makes it clear, here's his examples.
00:03:46.760 | "How well do people predict the final impacts
00:03:48.840 | of the printing press?
00:03:49.960 | How well do people predict the impacts of fire?"
00:03:51.960 | He's saying they didn't.
00:03:53.160 | In the moment, it's very hard to understand
00:03:54.960 | what's gonna happen.
00:03:56.800 | And so Cowen's making this point.
00:03:59.880 | We know this disruptive thing feels like
00:04:01.360 | it's about to happen, but we're not handling it well
00:04:04.480 | in part because we're missing the reality
00:04:07.300 | that it is very difficult to predict what really happens.
00:04:11.920 | And a lot of the reactions right now
00:04:14.240 | are what he calls copes, which are based off
00:04:16.640 | of very specific predictions about what will happen
00:04:19.160 | or what definitely won't happen.
00:04:20.400 | And he thinks that's all just psychologically bankrupt.
00:04:25.280 | It's not really based on reality.
00:04:26.840 | We're just making predictions we have no right to make
00:04:30.440 | and then reacting to those.
00:04:32.440 | All right, then he goes on to apply this critique
00:04:34.520 | specifically to the existential risks of things
00:04:37.800 | like artificial general intelligence.
00:04:41.680 | He says, "When it comes to people who are predicting
00:04:44.480 | this high degree of existential risk," he says,
00:04:46.440 | "I don't actually think arguing back on their chosen terms
00:04:50.200 | is the correct response.
00:04:52.000 | Radical agnosticism is the correct response,
00:04:55.760 | where all specific scenarios are pretty unlikely.
00:04:58.580 | I'm still for people doing constructive work
00:05:01.480 | on the problem of alignment, just as we do
00:05:03.020 | for all other technologies to improve them."
00:05:05.260 | But he's making the case,
00:05:07.160 | you don't need to be worried about that.
00:05:09.860 | People should work on these issues,
00:05:11.680 | but the only actual intellectually consistent position
00:05:15.000 | on something like the existential risk of AI
00:05:16.880 | is this radical agnosticism.
00:05:18.120 | I know a lot of stuff is possible.
00:05:19.780 | All of it's pretty unlikely.
00:05:21.240 | There's a lot of other existential risk in our life
00:05:23.240 | where it's in that same category,
00:05:25.040 | and we could put this in a similar place.
00:05:28.880 | So he goes on to say, "I'm a bit distressed
00:05:30.560 | each time I read an account of a person arguing himself
00:05:34.000 | or arguing herself into existential risk
00:05:36.280 | from AI being a major concern.
00:05:38.860 | No one can foresee those futures.
00:05:42.240 | Once you keep up the arguing,
00:05:44.280 | you also are talking yourself
00:05:45.560 | into an illusion of predictability."
00:05:48.560 | And he goes on to say,
00:05:49.400 | "Once you're trying to predict a future,
00:05:50.880 | it's easier to predict a negative future
00:05:52.500 | than a positive future,
00:05:53.720 | because positive futures are bespoke.
00:05:56.120 | They're built on very specific things,
00:05:58.480 | leading to other very specific things.
00:05:59.740 | That's really hard to try to imagine.
00:06:01.480 | It's much easier to say it all collapses.
00:06:03.840 | That's an easier prediction to make."
00:06:07.480 | So he goes on to say, "Basically,
00:06:10.340 | so for this particular issue of existential risk from AI,"
00:06:14.140 | he says, "It is indeed a distant possibility,
00:06:17.380 | just like every other future
00:06:18.780 | you might be trying to imagine.
00:06:19.980 | All the possibilities are distant.
00:06:21.340 | I cannot stress that enough.
00:06:22.980 | The mere fact that AGI risk can be put on par
00:06:25.580 | with those other, also distant, possibilities
00:06:27.560 | simply should not impress you very much."
00:06:29.780 | There's a lot of potential futures
00:06:31.180 | where negative things happen.
00:06:32.460 | We're already used to that.
00:06:34.140 | AI doesn't have a particularly new thing
00:06:36.340 | to offer to that landscape.
00:06:39.060 | So in the end, when he's thinking about
00:06:40.460 | how do we consider AI,
00:06:43.120 | he says, "Look, if someone is obsessively arguing
00:06:46.260 | about the details of AI technology today,
00:06:48.900 | or arguments they read on a blog like LessWrong
00:06:51.780 | from 11 years ago, they won't see this.
00:06:54.280 | But don't be suckered into taking their bait.
00:06:57.140 | The longer historical perspective you take,
00:06:59.460 | the more obvious this point will be."
00:07:02.820 | So let me step back here for a second.
00:07:04.720 | What he's arguing that I agree with,
00:07:08.460 | I think is an unusually pragmatic take on this issue,
00:07:13.460 | given our current cultural moment,
00:07:15.700 | which I'm gonna summarize everything I just said.
00:07:18.300 | We are bad at predicting the impacts of technologies.
00:07:21.180 | So don't trust people who are being very specific
00:07:23.820 | about what's gonna happen with AI
00:07:26.540 | and then trying to react to those.
00:07:28.560 | We can't figure that out with AI,
00:07:31.500 | just like in the 1450s, Johannes Gutenberg
00:07:35.300 | couldn't even look 20 years into the future
00:07:36.900 | to see what the impact of the printing press was gonna be.
00:07:40.100 | So the problem I see,
00:07:42.580 | this is what I think Cowen is very right about.
00:07:44.660 | The problem I see with the current discussion
00:07:46.620 | about artificial intelligence and its impact
00:07:48.700 | is that what a lot of people are doing
00:07:50.940 | is they're looking at often cherry-picked examples
00:07:54.980 | of these technologies at work.
00:07:56.820 | And because these are linguistic examples,
00:07:58.700 | if we're talking about chatbot examples,
00:08:00.500 | they feel very close to us as human beings.
00:08:05.500 | We then try to extrapolate what type of mind
00:08:09.980 | could produce the type of thing I'm seeing in this example.
00:08:12.340 | We might imagine, you know,
00:08:13.220 | my four-year-old couldn't do that.
00:08:14.900 | Maybe my 13-year-old could answer those questions.
00:08:17.780 | So maybe this thing is like a 13-year-old's mind in there.
00:08:20.700 | Once we've imagined the type of mind
00:08:23.020 | that could produce the type of things we've seen,
00:08:25.300 | we then imagine the type of impacts
00:08:27.960 | that that type of imaginary mind might have.
00:08:30.660 | Well, if we had that type of mind,
00:08:32.020 | it could do this and this and that.
00:08:33.440 | Now we've created imagined scenarios
00:08:35.300 | based off of imagined understandings of the technology,
00:08:37.940 | and we treat that like this will happen,
00:08:41.820 | and then we get worried about those scenarios.
00:08:44.580 | And this is exactly what Cowen is saying
00:08:45.740 | that we shouldn't do.
00:08:46.580 | These aren't actually strong predictions.
00:08:49.300 | These are thought experiments.
00:08:50.860 | If we had a mind that could do this,
00:08:52.820 | what types of damage could it wreak?
00:08:54.320 | And then we're getting upset
00:08:55.220 | about the impacts of those thought experiments.
00:08:58.360 | So what I think we should do instead, culturally speaking,
00:09:03.060 | is stop reacting to thought experiments
00:09:05.580 | and start reacting to actual impacts.
00:09:07.600 | I don't necessarily wanna hear any more stories about,
00:09:11.160 | well, in theory, a chatbot that could do this
00:09:14.540 | could also do that, and if it could do that,
00:09:15.980 | this industry could disappear.
00:09:17.840 | I think for the broader public,
00:09:19.320 | what you should filter for is tangible impacts.
00:09:23.400 | This industry changed.
00:09:25.040 | This job doesn't exist anymore.
00:09:28.020 | This company just fired 30% of its staff.
00:09:31.240 | When you see actual tangible impacts,
00:09:33.480 | not predictions about what hypothetical minds might wreak,
00:09:36.720 | that's what you should filter for.
00:09:38.380 | That's what you should use to refine your understanding
00:09:40.560 | of whatever ongoing change is happening
00:09:42.440 | and adjust accordingly.
00:09:44.040 | I think that's the filtering we have to do now.
00:09:46.360 | And look, there's no shortage of people
00:09:49.160 | actually attempting things that will generate real impacts.
00:09:52.780 | I spoke on a panel a couple of weekends ago
00:09:55.160 | out in San Francisco on generative AI.
00:09:57.320 | I spoke on the panel with one of the VCs
00:09:59.120 | that funded OpenAI, and he was saying OpenAI
00:10:02.800 | is already on track to bring in
00:10:05.340 | more than $100 million in revenue,
00:10:07.520 | commercial revenue of people and companies
00:10:09.780 | paying for API access to their GPT-4 back-end language model.
00:10:14.780 | It's possible they'll be on track for billion-dollar revenue,
00:10:18.840 | annual revenue within a year.
00:10:20.560 | It's an amazingly fast climb, but the point is,
00:10:22.520 | there is a ton of people investing money
00:10:26.680 | to try to use this technology
00:10:28.000 | and integrate it into their work.
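(As a concrete illustration of what "paying for API access" looks like in practice, here is a minimal sketch, assuming the openai Python package with its v1-style client, an OPENAI_API_KEY set in the environment, and an illustrative model name; the exact interface varies by SDK version.)

```python
# Minimal sketch of a paid API integration (assumes the openai
# Python package, v1-style client, and OPENAI_API_KEY in the
# environment; the model name is illustrative).
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize(text: str) -> str:
    """Ask the hosted model to summarize a piece of text."""
    response = client.chat.completions.create(
        model="gpt-4",  # the back-end language model being paid for
        messages=[
            {"role": "system", "content": "You summarize documents."},
            {"role": "user", "content": text},
        ],
    )
    # Each call like this is metered and billed per token, which is
    # where the commercial API revenue comes from.
    return response.choices[0].message.content


if __name__ == "__main__":
    print(summarize("Companies are wiring language models into their products."))
```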
00:10:29.180 | So it's not as if there is a shortage of people
00:10:32.760 | doing things that could generate impacts.
00:10:35.200 | So I think the time is right then.
00:10:37.660 | My advice is let's filter for actual impacts.
00:10:39.640 | Unless you're in a very rarefied position
00:10:42.340 | that needs to very specifically make predictions
00:10:44.640 | about what's gonna happen to your company
00:10:45.960 | in the next six months, filter for impacts.
00:10:50.520 | If you hear someone say, "This could happen
00:10:52.840 | "because of this example,"
00:10:54.360 | change that to the Charlie Brown voice.
00:10:56.720 | Wah, wah, wah, wah, wah.
00:10:57.840 | What you wanna hear is, "This company just shut down.
00:11:00.740 | "That's interesting.
00:11:02.000 | "That's a data point you should care about.
00:11:04.500 | "This publishing imprint just fired all of its authors.
00:11:07.520 | "Ooh, that's a data point.
00:11:08.740 | "Okay, now I'm starting to understand what's going on."
00:11:11.240 | 'Cause there's no real reason to get upset
00:11:13.740 | about these predictions
00:11:16.120 | that are based off hypothetical minds,
00:11:17.320 | 'cause most of them won't come true.
00:11:19.720 | This is Cognitive Behavioral Therapy 101.
00:11:22.720 | It's called the "predicting the future" cognitive distortion.
00:11:25.800 | It's not worth getting upset about predicted futures.
00:11:27.960 | It's much better to confront things
00:11:29.640 | that actually have happened.
00:11:32.080 | Now, there is one place where I disagree some with Cowen.
00:11:37.160 | So, I think Cowen takes it for granted
00:11:40.600 | that we know now that there will be a major disruption,
00:11:42.640 | Gutenberg-style.
00:11:44.120 | He takes that for granted.
00:11:46.240 | And I think very wisely says,
00:11:48.440 | you kind of have to just go along for the ride,
00:11:50.320 | react to actual impacts,
00:11:51.780 | not the prediction of impacts, et cetera, et cetera.
00:11:54.680 | I'm not convinced yet
00:11:57.000 | that a major disruption is gonna happen.
00:11:59.040 | I'm not convinced that it's not gonna happen either,
00:12:02.140 | but we need to still take seriously
00:12:04.680 | until disproved what I call the AI null hypothesis.
00:12:09.600 | The AI null hypothesis is the claim
00:12:15.120 | that the ultra large language model revolution
00:12:18.480 | that kicked off two years ago with GPT-3
00:12:20.520 | in the next five or seven years
00:12:22.960 | is not actually going to make a notable impact
00:12:26.360 | on most people's lives.
00:12:27.940 | That hypothesis has not yet been disproven.
00:12:32.520 | The way that will be disproven,
00:12:36.320 | and this is our Karl Popper here.
00:12:38.160 | The way that that hypothesis will be disproven
00:12:40.960 | is when actual impacts, not predictions of impacts,
00:12:44.080 | not thought experiments about if it could do this
00:12:45.800 | and it could do that,
00:12:46.680 | when actual impacts do show up
00:12:48.360 | that begin having a material impact
00:12:50.480 | on the day-to-day experience of people in their lives,
00:12:53.260 | then it will be disproven.
00:12:55.400 | But until we get there,
00:12:56.320 | we do have to keep that
00:12:57.360 | as one of the possibilities going forward.
00:12:59.800 | And based on what I know right now,
00:13:01.280 | I would say it's not a given.
00:13:04.880 | It's not even necessarily the likely outcome,
00:13:07.660 | but the percentage chance here
00:13:09.960 | that the AI null hypothesis proves true
00:13:11.860 | is probably somewhere between 10 and 50%.
00:13:13.960 | It's non-trivial.
00:13:14.800 | And so I think we do have to keep that in mind.
00:13:18.420 | It's possible as well that it turns out
00:13:24.320 | that these ultra-large language models,
00:13:26.640 | though impressively linguistic,
00:13:27.960 | when we begin to try to make them focused
00:13:29.840 | and bespoke on particular problems,
00:13:32.340 | run into the issues with hallucination,
00:13:34.120 | the issues with non-conceptual thinking,
00:13:36.600 | and the limits that emerge because, under the hood,
00:13:40.100 | what you're doing is token guessing
00:13:41.560 | with the goal of trying to create
00:13:42.940 | grammatically correct sentences
00:13:44.200 | that match content and style cues,
00:13:46.360 | and that they may end up being more limited than we think.
00:13:48.760 | Maybe the computational expense is larger than it's worth.
00:13:52.040 | Maybe OpenAI gets a couple hundred million dollars
00:13:54.520 | worth of API subscriptions,
00:13:55.740 | and then it dwindles back down again,
00:13:57.100 | because it turns out,
00:13:57.940 | "Huh, this is not really opening up something
00:13:59.600 | that I wasn't already able to more or less do,
00:14:03.400 | or to use a bespoke AI model,
00:14:05.100 | or to actually just hire someone to do this."
00:14:06.960 | And that's very possible.
00:14:09.100 | And maybe we're actually at the peak right now
00:14:11.140 | of a hundred million people just liking to use the chatbot
00:14:14.480 | and being impressed by what it does.
00:14:16.240 | I'm not saying that's gonna happen,
00:14:17.280 | but it's also not a weird thesis.
00:14:19.280 | So I do think that needs to be in the mix of hypotheses:
00:14:21.860 | the AI null hypothesis itself
00:14:23.940 | is something that's still on the table.
00:14:26.360 | So we have a wide range of possibilities here,
00:14:28.680 | from the AI null hypothesis to the most extreme AGI scenario:
00:14:33.440 | we're a couple of years away
00:14:34.480 | from being enslaved by computers.
00:14:36.640 | We have a whole spectrum here,
00:14:38.560 | and we're not very certain at all.
00:14:40.560 | So I wanted to throw that out there.
00:14:43.660 | But here's the big thing I think that's important.
00:14:45.180 | And the big takeaway I want for people
00:14:46.500 | who are thinking about how to think about AI,
00:14:48.440 | is think less and react less to the chatter right now,
00:14:52.600 | filter for actual impacts.
00:14:54.660 | And I think there's at least a 50% chance
00:14:57.380 | that Cowen is right, those impacts will come.
00:15:00.120 | And it's best not to fight them,
00:15:01.700 | there's nothing you can do about them.
00:15:03.280 | Be aware of the actual impacts, adjust as needed,
00:15:05.700 | hold on for the ride,
00:15:06.700 | probably will end up better off than worse off.
00:15:09.920 | But also be ready, if you're looking for these impacts,
00:15:12.040 | that they may never really come at a rate
00:15:13.880 | that has any impact on your life;
00:15:15.640 | that's equally possible.
00:15:17.140 | Where I completely agree with Cowen, however,
00:15:19.860 | is that unless, again, you're really plugged into this world
00:15:22.800 | as part of your job,
00:15:24.580 | do not spend much time debating with, arguing with,
00:15:27.200 | or trying to understand people
00:15:28.400 | who are making very specific predictions
00:15:30.140 | about what hypothetical minds might be able to do.
00:15:33.400 | We don't know what's gonna happen yet.
00:15:35.720 | I'm willing to see.
00:15:37.080 | I think it's important to recognize that,
00:15:38.960 | Gutenberg did change everything,
00:15:40.680 | but we don't remember all of the other innovations
00:15:43.960 | and inventions that really were exciting at the time,
00:15:46.940 | but then never ended up fundamentally destabilizing the world.
00:15:50.480 | There's more of those than the first kind.
00:15:52.320 | So let's be wary, but let's get more concrete.
00:15:56.580 | Let's stay away from those weird YouTube channels
00:15:58.980 | where people are just yelling about everything
00:16:00.920 | that's gonna happen, good or bad.
00:16:03.680 | We'll take Cowen's advice.
00:16:06.120 | We'll take in the concrete stuff,
00:16:07.520 | we'll roll with the punches.
00:16:08.800 | It'll be interesting, a little bit nerve-wracking,
00:16:10.600 | to see what those punches are.
00:16:12.000 | I don't know what they're gonna be yet
00:16:14.280 | and don't trust anyone who says that they do.
00:16:16.540 | There we go, Jesse, the AI null hypothesis.
00:16:20.280 | No one's talking about that because it's not exciting.
00:16:23.040 | You don't get YouTube views for it
00:16:25.200 | and there's no article clicks that come from it.
00:16:28.440 | But I'll tell you this,
00:16:29.800 | there are a lot of people in my network
00:16:32.520 | who send me a lot of ChatGPT fails, for example.
00:16:37.120 | And one of the thing I think is going on here
00:16:39.240 | is there's this highly curated element
00:16:42.640 | to what people are seeing.
00:16:44.900 | People are generating these really cool examples
00:16:47.520 | and then you see a bunch of these really cool examples
00:16:49.360 | and then you go down the rabbit hole
00:16:50.540 | of what type of mind could do this
00:16:51.920 | and, well, if it could do that, what else could that mind do,
00:16:53.480 | and then reacting to that.
00:16:54.940 | But I get sent all sorts of fails of these models
00:16:58.240 | just failing to do basic useful things.
00:17:00.640 | And because in the end, again,
00:17:02.080 | these are not conceptual models, they're token predictors.
00:17:06.360 | They're just trying to generate text,
00:17:08.320 | at least in the chatbot context,
00:17:10.160 | generate text that's grammatically correct,
00:17:13.340 | using existing text to generate it,
00:17:15.480 | that also matches the key features from the query.
00:17:18.720 | That's what it does.
00:17:19.920 | And OpenAI talks about it as a reasoning agent.
00:17:26.040 | They talk about some of the things
00:17:27.900 | that these models have to learn to do
00:17:29.560 | in order to win at the text guessing game.
00:17:32.360 | They're like, well, it's as if it understands this or that
00:17:35.040 | because it has to be able to understand these things
00:17:37.280 | to be able to do better at predicting the text.
00:17:38.920 | But in the end, all it's doing is predicting text.
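(To make "all it's doing is predicting text" concrete, here is a toy sketch of that next-token loop, assuming the small open GPT-2 model from Hugging Face transformers and greedy decoding; the hosted models work on the same principle at vastly larger scale, though their exact setup is not public.)

```python
# Toy illustration of what "just predicting the next token" means,
# using the small open GPT-2 model via Hugging Face transformers.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
model.eval()

prompt = "The printing press changed the world because"
ids = tokenizer(prompt, return_tensors="pt").input_ids

with torch.no_grad():
    for _ in range(20):                     # generate 20 tokens, one at a time
        logits = model(ids).logits          # a score for every token in the vocabulary
        next_id = logits[0, -1].argmax()    # greedily pick the most likely next token
        ids = torch.cat([ids, next_id.view(1, 1)], dim=1)

print(tokenizer.decode(ids[0]))
# Each step only answers "what token plausibly comes next?"; any
# apparent understanding is a byproduct of getting that guess right.
```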
00:17:42.000 | And often, there's lots of interactions
00:17:44.580 | that lots of people have had
00:17:46.480 | that are non-useful and non-impressive,
00:17:48.200 | but they just don't post those on Twitter.
00:17:49.980 | So I do think that effect is going on as well.
00:17:52.900 | You know, none of this is helped by the fact that
00:17:53.820 | there's not a lot of transparency from OpenAI.
00:17:56.280 | There's a lot of things
00:17:57.360 | that we don't understand about how this technology works.
00:17:59.960 | There's not a lot yet
00:18:00.920 | on how people are using this concretely.
00:18:03.160 | We get the chatter on social media,
00:18:04.640 | we get the chatter on YouTube,
00:18:05.720 | but I'm trying to work on this topic now:
00:18:07.360 | boots on the ground,
00:18:10.560 | real companies actually using these interfaces.
00:18:12.800 | Are they having transformative change
00:18:14.480 | or is it being very minor?
00:18:16.240 | We just don't have a good sense yet.
00:18:17.680 | - Yeah.
00:18:18.520 | I saw one concrete thing where the Chegg CEO,
00:18:21.600 | like that online homework thing,
00:18:23.960 | their sales went down and they attributed it to ChatGPT.
00:18:27.240 | But that's whatever.
00:18:28.520 | - Yeah.
00:18:29.360 | So, okay.
00:18:30.180 | So these are the things we should focus on.
00:18:31.440 | So like, what are concrete impacts?
00:18:34.480 | Well, it's good
00:18:36.600 | at generating text on topics.
00:18:38.880 | So maybe that's gonna make the places
00:18:41.380 | where you buy the pre-written solutions less valuable,
00:18:44.480 | because you can now, I guess, create mills
00:18:46.280 | where, instead of paying students,
00:18:47.760 | you just have ChatGPT generate a bunch of those essays.
00:18:49.760 | You know, maybe.
00:18:51.440 | I'm also not convinced however,
00:18:53.440 | that these responses aren't pretty easily identifiable.
00:18:57.920 | At some point,
00:18:59.840 | it's not hard for teachers to identify the tics
00:19:01.640 | of these models versus their students.
00:19:03.200 | And yeah, so there we go.
00:19:05.400 | But that's a concrete thing.
00:19:06.440 | So it's like, how worried does that make you?
00:19:07.840 | Like, well, that thing by itself, does it?
00:19:09.560 | - Yeah.
00:19:10.400 | - But these are the type of things we should be looking for.
00:19:12.680 | Yeah.
00:19:13.520 | I mean, the other thing I thought about is,
00:19:14.600 | so, like, OpenAI is real big on how
00:19:16.160 | it does a lot of useful stuff for coders.
00:19:17.920 | They have a customized version that helps you
00:19:20.120 | if you're coding
00:19:21.620 | and it can generate early versions of code
00:19:24.280 | or help you understand library interfaces or this or that.
00:19:27.920 | But what came to mind is, you know,
00:19:29.440 | the biggest productivity boost in coding
00:19:32.560 | in the last 20 years
00:19:34.960 | was when the common interface development environments
00:19:37.860 | like Eclipse introduced,
00:19:40.040 | I don't know the exact terminology for it,
00:19:41.800 | but it's where, you know,
00:19:44.100 | it auto fills or shows you, for example,
00:19:46.160 | here, oh, you're typing a function.
00:19:48.400 | Here's all the parameters you need to give me.
00:19:50.800 | You know, so you didn't have to look it up
00:19:52.040 | in the documentation.
00:19:53.320 | Or if you have an object in object-oriented programming
00:19:55.920 | and you're like, I don't know the methods for this object.
00:19:57.760 | You just, you type the object name, press period.
00:19:59.480 | And there's a big list.
00:20:00.300 | Oh, here's all the different things you can do with this.
00:20:01.920 | And you select one and it says,
00:20:04.580 | oh, here are the parameters.
00:20:06.160 | And like, I noticed this when I'm coding on my Arduino
00:20:08.960 | with my son, building video games in Arduino.
00:20:12.680 | It's really useful.
00:20:13.520 | It's like, oh, I need to draw a circle.
00:20:15.760 | Instead of having to go look up how to do that,
00:20:17.820 | you just start typing in draw.
00:20:19.760 | And it's like, oh, here's all the different functions
00:20:21.480 | to start with draw.
00:20:22.320 | Oh, here's draw circle.
00:20:23.140 | You click on it.
00:20:23.980 | It's like, okay, so here's the parameters.
00:20:25.720 | You give it the center and the radius and the color.
00:20:28.240 | Like, great, I don't have to look this up.
00:20:29.640 | That's a huge productivity win.
00:20:31.400 | It makes programming much easier.
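(For a sense of the lookup that completion saves, here is a minimal sketch, assuming Python's standard turtle module as a stand-in for the Arduino graphics library in the example; the episode's actual example is in C++, so this is only an illustration.)

```python
# What IDE completion saves you: type "pen." and the editor lists
# methods like circle(), forward(), goto(), and shows each signature,
# so you never have to open the documentation.
import turtle

pen = turtle.Turtle()
# Selecting circle from the completion popup reveals its parameters:
# circle(radius, extent=None, steps=None)
pen.circle(50)   # draw a circle of radius 50 without looking anything up
turtle.done()    # keep the drawing window open
```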
00:20:33.340 | No one thought about that as,
00:20:35.800 | is this industry gonna be here?
00:20:37.840 | But it was a huge win, just like version control
00:20:41.260 | was a huge win and made things really
00:20:44.000 | much more productive for software developers.
00:20:46.560 | But it wasn't a question of, will this field still exist?
00:20:49.280 | And so there's definitely a future where
00:20:52.120 | the LLM impact on coding is like those things.
00:20:55.640 | Like, wow, a bunch of things got a lot easier.
00:20:58.000 | I'm really happy.
00:20:59.040 | I'm glad that got easier.
00:21:00.240 | It reminds me of when GitHub came around
00:21:02.200 | or IDEs started doing autofill.
00:21:05.200 | It wasn't an existential risk to our industry,
00:21:06.780 | but a march towards making our life easier.
00:21:09.060 | So there's a future in which these are the type
00:21:11.560 | of things we're talking about.
00:21:13.840 | And so I'm waiting to see.
00:21:17.480 | I wanna see concrete things.
00:21:18.520 | So we'll find out, but anyways,
00:21:19.920 | AI null hypothesis is possible.
00:21:21.680 | You should talk about it, regardless of what happens,
00:21:23.460 | focus on the impacts, filter the predictions.
00:21:26.200 | People like making predictions,
00:21:27.360 | but a lot of them are nonsense.
00:21:28.640 | All right, speaking of nonsense,
00:21:31.320 | we should probably wrap this up, Jesse.
00:21:32.920 | Thank you everyone who listened.
00:21:34.520 | We'll be back next week with another episode of the show.
00:21:36.760 | And until then, as always, stay deep.
00:21:39.720 | (upbeat music)
00:21:43.220 | (upbeat music)