
An Important Message On AI & Productivity: How To Get Ahead While Others Panic | Cal Newport


Chapters

0:00 Can A.I. Empty My Inbox?
40:47 Should I continue to study programming if AI will eventually replace software jobs?
45:25 Is it bad to use ChatGPT to assist with your writing?
51:30 How do I reclaim my workspace for Deep Work?
56:21 How do I decide what to do on my scheduled mini-breaks at work?
58:54 Heidegger’s view on technology
65:18 Seasonality with a partner and kids
76:15 A Silicon Valley Chief of Staff balancing work and ego
86:17 General Grant’s Slow Productivity


00:00:00.000 | So Andrew Marantz recently had a great article
00:00:04.160 | in the New Yorker.
00:00:05.640 | It was titled "Okay, Doomer" in the magazine
00:00:08.860 | and "Among the AI Doomsayers" online.
00:00:12.560 | In this article, Marantz spends a lot of time
00:00:14.380 | with AI safetyists, or as they sometimes call themselves,
00:00:18.000 | decelerationists, which doesn't roll off the tongue.
00:00:21.660 | This is a group of people who mostly live
00:00:24.040 | in the same area of the Bay Area
00:00:25.320 | and worry a lot about the possibility
00:00:27.200 | of AI destroying humanity.
00:00:29.260 | They even have a shorthand for measuring their concern,
00:00:32.600 | P-Doom, probability of doom.
00:00:35.520 | So they ask each other, what's your P-Doom?
00:00:37.680 | And if you say 0.9, for example,
00:00:39.720 | that means I'm 90% sure that AI is gonna destroy humanity.
00:00:44.720 | All right, so how do we connect this?
00:00:46.280 | Why did this get me thinking about our discussions here
00:00:48.480 | about finding depth in a high-tech world?
00:00:51.200 | Well, I couldn't help but think
00:00:52.960 | as I read about these AI safetyists
00:00:55.120 | and their concerns about P-Doom,
00:00:57.460 | that they weren't really getting at
00:00:58.720 | what might be the much more proximate
00:01:00.920 | and important issue for most people when it comes to AI,
00:01:04.120 | at least for the tens of millions of knowledge workers
00:01:06.600 | who spend their entire day jumping in and out
00:01:08.680 | of email inboxes that fill faster
00:01:11.240 | than they can keep up with them.
00:01:12.840 | For this large group of people, a big part of our audience,
00:01:15.960 | perhaps the even more pressing question
00:01:17.840 | about AI is the following.
00:01:20.120 | When will it be able to empty my email inbox on my behalf?
00:01:25.000 | When will AI make the need to check an inbox anachronistic,
00:01:30.000 | like trying to put new paper into the fax machine
00:01:32.460 | or waiting for the telegraph operator to get there?
00:01:34.940 | When will AI give me a world of work
00:01:36.700 | where I'm not context shifting constantly back and forth
00:01:39.420 | between 50 different things,
00:01:40.560 | but can just work on one thing at a time?
00:01:43.540 | In other words, forget P-Doom,
00:01:45.760 | what is our current P-Inbox zero?
00:01:48.580 | Now, I recently wrote my own article for "The New Yorker"
00:01:51.420 | that goes deep into the current limitations
00:01:53.380 | of language model-based AI and what the future may hold.
00:01:57.180 | I had this problem of AI and email firmly in mind
00:02:00.960 | when I wrote that.
00:02:01.800 | So here's what I wanna do today.
00:02:02.820 | I wanna dive into this topic.
00:02:04.980 | So there's three things I'm gonna cover in a row here.
00:02:07.580 | Number one, let's take a quick look at this promised world
00:02:10.840 | in which AI could perhaps tame the hyperactive hive mind.
00:02:13.740 | I think this is potentially more transformative
00:02:16.580 | than people realize right now.
00:02:18.280 | Two, I wanna look closer at the state of AI
00:02:21.700 | and its ability to tame things like email right now.
00:02:25.180 | I actually use ChatGPT to help answer some of my emails,
00:02:29.500 | and we'll talk about those examples
00:02:30.940 | in part two of this deep dive.
00:02:33.140 | And then part three, what are the technical challenges
00:02:36.340 | currently holding us back from a full email-managing AI?
00:02:41.260 | This is where we'll get into my latest "New Yorker" article.
00:02:44.580 | What's holding us back?
00:02:46.140 | Can we overcome those obstacles?
00:02:47.660 | Who's working on overcoming those obstacles?
00:02:50.140 | All right, so look, on this show,
00:02:52.080 | one of the topics we talk about
00:02:53.400 | is taming digital knowledge work.
00:02:54.780 | Another topic we talk about
00:02:56.180 | is the promise and perils of new technology.
00:02:58.180 | Today in this deep dive, we're putting those two together.
00:03:01.820 | We have this topic, when can AI clean my inbox?
00:03:04.820 | This should be relevant to both.
00:03:06.420 | All right, let's get started.
00:03:08.200 | Part one, when we talk about AI's impact on the workplace,
00:03:13.200 | especially knowledge work,
00:03:15.400 | there tend to be three general examples
00:03:19.020 | of how AI is gonna help the office that come up.
00:03:22.240 | Number one is the full automation of jobs, right?
00:03:26.360 | So we hear about, for example, vertical AI tools
00:03:29.740 | that are gonna take over a customer service role.
00:03:32.020 | So this means that job goes away,
00:03:34.420 | the AI does the whole thing.
00:03:35.820 | The second type of thing we hear about AI in the workplace
00:03:39.600 | is the AI speeding up the steps of tasks
00:03:43.540 | that you already currently do in your job.
00:03:45.980 | Hey, summarize this note for me right away
00:03:48.460 | so I don't have to read the whole thing.
00:03:49.940 | Write a first draft of this memo,
00:03:52.580 | it's gonna save me time actually typing.
00:03:55.620 | Gather me examples to use in this pitch.
00:04:00.620 | Create a slide that looks like this and use these graphics.
00:04:03.460 | So it's, you're doing the tasks,
00:04:05.020 | but the AI speeds up elements of the task.
00:04:09.900 | The third area I see discussed a lot
00:04:11.420 | with AI's impact on the workplace
00:04:12.860 | is brainstorming or generating ideas.
00:04:15.580 | This is really big, I think right now,
00:04:16.960 | because we're mainly interacting with these tools
00:04:18.780 | through a chat interface.
00:04:20.500 | Hey, give me three ideas for this.
00:04:22.500 | Do you think this is a good idea?
00:04:24.660 | What's something I could write about here?
00:04:26.140 | So there's this sort of back and forth dialogue
00:04:28.940 | people are having with chatbots in particular
00:04:30.900 | to help come up with ideas or brainstorm.
00:04:33.240 | As we know on the show, however,
00:04:36.180 | none of those three things are really getting at
00:04:38.140 | what I think is the core issue
00:04:40.360 | that affects every knowledge worker,
00:04:42.860 | the issue that is driving the current burnout crisis,
00:04:46.060 | an issue that is holding down productivity
00:04:49.420 | in knowledge work,
00:04:50.260 | and I mean that in the macroeconomic sense
00:04:51.920 | more than anything else,
00:04:53.180 | and that's the hyperactive hive mind.
00:04:56.340 | We talk about this all the time,
00:04:57.860 | but we have set up a way of collaboration
00:05:00.580 | that's almost ubiquitous within knowledge work
00:05:02.760 | where we have unscheduled back and forth messages
00:05:04.820 | to work everything out,
00:05:06.320 | email and also in other tools like Slack.
00:05:09.340 | The problem with this is that we have to constantly
00:05:11.660 | tend to these back and forth messages.
00:05:14.080 | If I have seven things I'm trying to figure out
00:05:16.220 | and each of those things has seven or eight messages
00:05:18.260 | that I have to bounce back and forth with someone else
00:05:20.220 | today to get to an answer,
00:05:21.680 | that's a huge number of messages that I have to see
00:05:24.660 | and respond to throughout the day,
00:05:26.260 | which means I have to constantly check my inboxes
00:05:29.500 | so I can see a message when it comes in
00:05:31.160 | and reply to it right away.
00:05:33.260 | Every time I check this inbox,
00:05:34.420 | I see a whole pile of different messages,
00:05:36.700 | most of which are emotionally salient
00:05:38.940 | because they are coming from people I know
00:05:40.660 | who need things from me, so we take them very seriously,
00:05:43.780 | and the cognitive context represented
00:05:45.800 | by each message is diverse.
00:05:47.880 | So now I have to jump my attention target
00:05:50.400 | from one thing to another thing to another thing
00:05:52.280 | within my inbox, back to my work, back to the inbox,
00:05:54.600 | between different messages in the inbox.
00:05:56.520 | This is a cognitive disaster.
00:05:58.120 | It is hard for our brain to change its focus of attention.
00:06:02.320 | It needs time to do that.
00:06:03.720 | So this forcing ourselves to constantly jump around
00:06:06.580 | and keep up with all this incoming,
00:06:08.080 | each of which is dealing with different issues
00:06:09.960 | and different contexts and information,
00:06:11.880 | it exhausts us, it leads to burnout,
00:06:14.700 | it makes work a trial,
00:06:16.380 | and it significantly reduces the quality
00:06:19.860 | of what we produce and the speed at which we produce it.
00:06:22.220 | This hyperactive hive mind workflow is a huge problem.
00:06:26.340 | In my book, "Slow Productivity,"
00:06:27.660 | I get into how we got here.
00:06:29.860 | The first chapter of the book goes deep on it,
00:06:32.000 | but it is a big problem.
00:06:34.020 | This is where I want to see AI make a difference.
00:06:37.260 | Imagine if what AI could do
00:06:40.060 | is handle that communication for you.
00:06:42.340 | Handle it like a chief of staff.
00:06:44.800 | Handle it like Leo McGarry
00:06:47.160 | in the Aaron Sorkin television show, "The West Wing,"
00:06:50.160 | the chief of staff for Martin Sheen's President Bartlet.
00:06:53.000 | Someone, an agent that could sit there,
00:06:55.160 | see the incoming messages, and process them for you,
00:06:59.120 | many of which they might be able to handle directly.
00:07:02.240 | Filter it, give a quick response.
00:07:03.920 | You never have to see it.
00:07:04.920 | You never have to shift into that context.
00:07:06.960 | And for the things that it can't directly manage for you,
00:07:10.360 | it can just wait until you're next ready to check in
00:07:12.840 | after you finish what you're working on.
00:07:14.600 | And your AI chief of staff could, in this daydream,
00:07:18.080 | ask you questions so you can direct what it does.
00:07:21.000 | "Hey, we got something about a meeting.
00:07:23.840 | Should we try to schedule this?"
00:07:24.960 | And you're like, "Yeah, but put it on a Tuesday
00:07:26.920 | or Wednesday and don't do it too late."
00:07:28.120 | And it's like, "Great, I'll handle this for you."
00:07:29.860 | Or, "Here are three projects
00:07:31.360 | which we got updates on today.
00:07:33.520 | Do you want to hear a summary of any of these updates?"
00:07:35.520 | And you would say, "Yeah, tell me the update on this project.
00:07:38.360 | Hey, there's this thing that we heard
00:07:40.160 | from your department chair.
00:07:41.400 | There's a departmental open house."
00:07:43.240 | This is the back-and-forth between me and the AI.
00:07:44.920 | "Do you want to sign up for this?
00:07:46.480 | Do you want to do this?"
00:07:47.320 | And I'd be like, "Yeah, find me a slot on Friday
00:07:49.220 | that works with my schedule."
00:07:50.060 | "Great, I'll do this for you."
00:07:51.120 | And then you go back to what you're doing.
00:07:53.720 | So imagine that.
00:07:54.560 | You don't have to keep up with an inbox.
00:07:56.560 | You don't have to dive in in this daydream
00:07:58.840 | and see all these messages
00:08:00.360 | and try to switch your attention from one to another,
00:08:02.800 | which we do badly, but an AI could do well.
00:08:05.520 | I think the productivity gain of an AI agent
00:08:08.020 | that could mean you no longer have to even see
00:08:10.200 | an email inbox would be enormous.
00:08:13.920 | I mean, I think we would see this
00:08:15.880 | in the macroeconomic productivity measures.
00:08:18.760 | The quality and quantity of what's being produced
00:08:21.020 | in non-industrial work would skyrocket
00:08:24.300 | if we took off this massive cognitive tax.
00:08:27.000 | I think we would also see subjective satisfaction measures
00:08:30.160 | in knowledge work go right up.
00:08:32.640 | Oh my God, I'm just working on things.
00:08:34.720 | And I have this sort of assistant agent
00:08:37.000 | that I talk to two or three times a day
00:08:39.040 | and kind of handles everything for me.
00:08:40.720 | And then I go back to just working on things.
00:08:43.800 | To me, that's the dream of AI and knowledge work,
00:08:46.880 | much more so than, well, when I'm just in the inbox myself,
00:08:50.620 | the AI agent's going to help me write a draft.
00:08:53.880 | Or when I'm working on this project,
00:08:55.480 | it can speed up my steps a little bit.
00:08:58.200 | I don't care about the speed at which I do my tasks.
00:09:00.920 | I want to eliminate all the context shifts.
00:09:02.640 | I want to eliminate the need to have to constantly change
00:09:06.280 | what I'm focusing on from one to another project
00:09:08.440 | to keep interrupting my attention to go back
00:09:10.440 | and manage back and forth conversations.
00:09:13.360 | So that would be massive.
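To make this daydream a little more concrete, here is a minimal sketch of the triage loop such an agent might run. Everything in it is hypothetical: the classification rule is a stand-in for whatever model the agent would actually use, but it shows the shape of the idea, which is handle what it can, batch a few questions for you, and defer the rest until your next check-in.

```python
from dataclasses import dataclass, field

@dataclass
class Message:
    sender: str
    body: str

@dataclass
class TriageResult:
    handled: list = field(default_factory=list)    # replied to automatically
    questions: list = field(default_factory=list)  # need a quick decision from you
    deferred: list = field(default_factory=list)   # wait until your next check-in

def classify(msg: Message) -> str:
    """Placeholder decision rule; a real agent would use a model here."""
    if "unsubscribe" in msg.body.lower():
        return "handle"
    if "meeting" in msg.body.lower():
        return "ask"
    return "defer"

def triage(inbox: list[Message]) -> TriageResult:
    """One pass over the inbox: you never see the messages themselves."""
    result = TriageResult()
    for msg in inbox:
        decision = classify(msg)
        if decision == "handle":
            result.handled.append((msg, "auto-reply sent"))
        elif decision == "ask":
            result.questions.append(msg)  # surfaced two or three times a day
        else:
            result.deferred.append(msg)
    return result

inbox = [Message("chair@dept.example", "Open house: pick a meeting slot?"),
         Message("news@example.com", "Click unsubscribe to stop these emails")]
result = triage(inbox)
print(len(result.handled), "handled,", len(result.questions), "questions,",
      len(result.deferred), "deferred")
```

The key design point is the last list: anything the agent cannot resolve waits quietly instead of pulling your attention back into the inbox.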
00:09:14.600 | All right, so here's the second question, part two.
00:09:18.400 | How close are we to that daydream of an AI
00:09:22.280 | that could handle our email inbox for us?
00:09:25.120 | Well, I was messing around with chat GPT recently.
00:09:28.720 | And what I did is I copied some emails
00:09:30.560 | from my actual email inbox and asked it some questions.
00:09:33.920 | I wanted to see how well it would do
00:09:35.720 | understanding my emails and writing to people on my behalf.
00:09:41.640 | All right, so the first thing I did
00:09:42.680 | is I had a message here from a pastor.
00:09:47.220 | It was an interesting, longer message.
00:09:48.920 | And I saw that in this message,
00:09:50.660 | the pastor was talking about my recent coverage
00:09:54.120 | of Abraham Joshua Heschel's book, "The Sabbath,"
00:09:58.420 | and talked about some points from it, some extra points.
00:10:02.020 | And he was offering to send me a book.
00:10:04.620 | So I asked chat GPT,
00:10:06.420 | hey, could you just summarize this for me?
00:10:08.500 | Hey there, I wanna take a quick moment
00:10:10.660 | to tell you about my new book,
00:10:12.580 | "Slow Productivity,
00:10:14.340 | The Lost Art of Accomplishment Without Burnout."
00:10:18.900 | If you like the type of things I talk about on this channel,
00:10:21.880 | you're really gonna like this book.
00:10:24.260 | It distills all of my ideas into a clear philosophy,
00:10:28.180 | combined with step-by-step instructions
00:10:30.740 | for putting it into action.
00:10:32.860 | So to find out more about the book,
00:10:34.700 | check out calnewport.com/slow.
00:10:38.620 | Everything you need, you can find there.
00:10:40.860 | All right, thanks, let's get back to it.
00:10:42.660 | And it did a great job.
00:10:43.500 | It was like this person is a pastor with this church.
00:10:46.220 | He's reaching out to Cal to express interest
00:10:48.180 | in Cal's work on intellectual workflow
00:10:50.420 | and its application to pastoral duties.
00:10:51.980 | He noted the challenges based on this.
00:10:53.620 | He highlights, blah, blah, blah.
00:10:55.220 | He offered to send you a copy of this book.
00:10:57.180 | Like his one paragraph got to all the main points.
00:11:00.220 | So then I tested Chat GPT's people skills.
00:11:02.620 | And I said, "Can you write for me a polite reply?"
00:11:06.540 | And in this polite reply,
00:11:11.380 | decline the copy of the book that was being offered.
00:11:11.380 | I should say in reality,
00:11:12.220 | I'm actually interested in this book.
00:11:13.780 | This is just, I wanted to test
00:11:16.140 | the people skills of Chat GPT, right?
00:11:18.780 | And it wrote a good email.
00:11:19.620 | "Hey, thank you for reaching me out
00:11:20.900 | "with your thoughtful message.
00:11:22.320 | "I truly appreciate your insights.
00:11:24.500 | "I'm genuinely grateful for your offer
00:11:27.260 | "to send me a copy of your book on blah, blah, blah.
00:11:30.560 | "And while I see it's valuable,
00:11:31.740 | "I must regretfully decline.
00:11:32.920 | "Your dedication is great.
00:11:34.040 | "Thanks again for reaching out."
00:11:36.340 | It was actually a pretty good response.
00:11:38.500 | All right, here's another example.
00:11:40.260 | Someone sent me a message that was saying,
00:11:42.280 | "Hey, you should see this anecdote
00:11:43.980 | "about General Grant and slow productivity."
00:11:46.940 | Spoiler alert, I'm gonna actually talk about this
00:11:49.340 | later in the show.
00:11:50.820 | But I said, "Give me bullet points."
00:11:52.460 | And Chat GPT did, it gave me three bullet points.
00:11:54.780 | He expresses gratitude about this.
00:11:56.940 | He shares an anecdote from this book about that.
00:11:59.580 | He attached the following to the message.
00:12:02.300 | So what I'm seeing as I look at and test Chat GPT
00:12:06.980 | with my emails is it can understand emails.
00:12:10.060 | Like it can understand and summarize
00:12:13.340 | what's in these emails.
00:12:14.380 | And it's good at writing responses.
00:12:16.220 | If you tell it what you wanna do,
00:12:18.860 | it can write perfectly reasonable responses.
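For anyone curious what this experiment looks like mechanically, here is roughly the shape of it in code. The `ask_llm` function is a stand-in for whatever chat-model API you use, and the prompts are paraphrased, not the exact ones from the episode:

```python
def ask_llm(prompt: str) -> str:
    """Stand-in for a call to a chat model; swap in a real API client in practice."""
    raise NotImplementedError("wire this up to your chat-model API of choice")

email_text = "...pasted email from the pastor goes here..."

# Step 1: I paste the email in and ask for a summary.
summary_prompt = f"Summarize this email in one paragraph:\n\n{email_text}"

# Step 2: I decide what should happen, then ask for a draft on my behalf.
reply_prompt = ("Write a polite reply that thanks the sender but "
                f"declines the offered book:\n\n{email_text}")

# The limitation is visible in the structure itself: a human is still loading
# each message, reading it, and making every decision; the model only speeds
# up the summarizing and the typing.
# summary = ask_llm(summary_prompt)
# reply = ask_llm(reply_prompt)
```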
00:12:23.060 | So are we at our promised future?
00:12:27.500 | Is P-Inbox zero 1?
00:12:27.500 | Well, not yet.
00:12:29.060 | Because here's the problem.
00:12:30.260 | Right now, I am still directing all of this.
00:12:34.180 | I am loading the message.
00:12:35.160 | I'm looking at the message.
00:12:36.100 | I'm telling Chat GPT, "Summarize the message."
00:12:38.100 | I'm making a decision about what to do
00:12:39.900 | on behalf of this message.
00:12:41.260 | And then telling that to Chat GPT.
00:12:43.680 | Really at best, it's marginally speeding up the time
00:12:46.460 | required to go through my inbox,
00:12:48.460 | writing some things faster, preventing some reading.
00:12:52.400 | But it has no impact on my need to actually
00:12:54.340 | have to encounter each of these messages,
00:12:56.140 | to actually do the context switching,
00:12:58.220 | to have to keep up with my inbox
00:12:59.940 | and make sure messages are being sent back and forth.
00:13:02.040 | It can understand and process messages.
00:13:04.180 | It can write messages.
00:13:05.960 | But these large language model tools right now
00:13:07.620 | can't take over control of the inbox.
00:13:10.600 | All right, so this brings us to part three.
00:13:14.000 | What is needed to do that?
00:13:16.640 | And this is where I want to bring up the article
00:13:19.020 | that I wrote recently for The New Yorker.
00:13:20.980 | So I'm going to put it up here on the screen
00:13:22.500 | for those who are watching.
00:13:23.920 | They have a really cool graphic.
00:13:24.840 | I love when they do these animated graphics.
00:13:27.180 | I don't know if you can see it in the little corner here,
00:13:28.760 | but it's a hand placing a chess piece.
00:13:31.840 | So my article is entitled, "Can an AI make plans?
00:13:35.500 | Today's systems struggle to imagine the future,
00:13:37.600 | but that may soon change."
00:13:40.040 | So here's the big point about this article.
00:13:44.520 | The latest generation of large language model tools
00:13:48.380 | can do a lot of cool things,
00:13:50.640 | a lot of really impressive things,
00:13:51.900 | especially the sort of GPT-4 generation of language models.
00:13:56.840 | But there's a lot of recent research literature
00:13:58.900 | from the last year or so that is saying
00:14:00.320 | there's one thing they can't do.
00:14:02.740 | And this has been replicated in paper after paper.
00:14:05.760 | They can't simulate the future.
00:14:07.300 | So if you ask a language model to do something
00:14:11.440 | that requires it to actually look ahead and say,
00:14:14.500 | "What's the impact of what I'm about to do?"
00:14:17.240 | They fail.
00:14:18.080 | So there was an example I gave in the article
00:14:21.300 | from Sébastien Bubeck of Microsoft Research,
00:14:23.640 | who led a research group
00:14:25.640 | that wrote a big paper about GPT-4.
00:14:27.840 | He said, "Look, this is really, GPT-4 is really impressive."
00:14:31.320 | He says in a talk about his paper,
00:14:33.680 | "If your perspective is what I care about
00:14:35.760 | is to solve problems, to think abstractly,
00:14:37.680 | to comprehend complex ideas,
00:14:39.120 | to reason on new elements that arrive at me,
00:14:41.400 | then I think you have to call GPT-4 intelligent."
00:14:45.000 | And yet in this talk,
00:14:46.400 | he said there is a simple thing it can't do.
00:14:48.920 | And he gave an example of something that GPT-4
00:14:51.760 | struggled with.
00:14:52.600 | He put a math equation on the board.
00:14:55.480 | Seven times four plus eight times eight equals 92,
00:14:58.820 | which is true.
00:14:59.960 | And then he said, "Hey, chat GPT, GPT-4,
00:15:03.200 | modify one number on the left-hand side of this equation
00:15:05.680 | so that it now evaluates to 106.
00:15:07.760 | For a human, this is not hard to do.
00:15:11.200 | If you need the sum to be 14 higher to get from 92 to 106,
00:15:15.760 | you look at the left-hand side and say,
00:15:18.780 | "Oh, seven times four, we have sevens.
00:15:20.600 | Let's just get two more sevens.
00:15:21.640 | Let's make that seven times six."
00:15:23.400 | ChatGPT gave the wrong answer.
00:15:25.000 | "The arithmetic is shaky," Bubeck said about this.
00:15:28.540 | There's other examples where GPT-4 struggled.
00:15:33.120 | There's a classic puzzle game called Towers of Hanoi,
00:15:36.880 | where you have disks of different size and three pegs,
00:15:40.460 | and you need to move them from one peg to another.
00:15:42.720 | You can move them one disk at a time,
00:15:44.520 | but you can never have a bigger disk
00:15:45.800 | on top of a smaller disk.
00:15:47.900 | This comes up a lot in computer science courses
00:15:50.000 | because there's solutions to this problem
00:15:51.720 | that are basic recursive algorithms.
00:15:54.180 | GPT-4 struggled with this.
00:15:56.460 | They gave it a configuration in Towers of Hanoi
00:15:58.800 | that could be solved pretty easily, five moves,
00:16:01.280 | but it couldn't do this.
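For reference, the reason this puzzle is a computer science staple is that the full solution is a three-line recursion. The point isn't that GPT-4 lacks this knowledge; it's that it fails to look ahead when handed a specific configuration:

```python
def hanoi(n: int, source: str, target: str, spare: str) -> None:
    """Print the moves that transfer n disks from source to target,
    never placing a bigger disk on a smaller one."""
    if n == 0:
        return
    hanoi(n - 1, source, spare, target)   # move the smaller stack out of the way
    print(f"move disk {n} from {source} to {target}")
    hanoi(n - 1, spare, target, source)   # put the smaller stack back on top

hanoi(3, "A", "C", "B")  # 7 moves for 3 disks
```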
00:16:02.680 | It struggled with basic block stacking problems.
00:16:05.720 | "Hey, here's a collection of blocks.
00:16:07.640 | These colors stack like this.
00:16:09.480 | Let's talk about how to move them
00:16:10.680 | to get this other pattern."
00:16:11.960 | It struggled with that.
00:16:13.640 | It struggled when it was asked to write a poem
00:16:17.000 | that was grammatically correct and made sense,
00:16:19.640 | where the last line was the exact inverse of the first line.
00:16:22.360 | It wrote a poem, and it mainly made grammatical sense.
00:16:26.620 | The last line was a reverse of the first line,
00:16:29.300 | but the last line was nonsense.
00:16:30.880 | The first line was not a palindrome.
00:16:32.520 | It wasn't an easily reversible line,
00:16:34.060 | and so the last line sounded like nonsense.
00:16:38.000 | All of these, as Bubeck and others point out,
00:16:41.040 | all of these examples are marked by their need
00:16:44.720 | to simulate the future in order to solve them.
00:16:48.080 | How do you solve that math equation?
00:16:49.680 | Well, humans, what we actually do
00:16:51.000 | is we sort of simulate different things we could change.
00:16:53.300 | What would the impact be on the final sum?
00:16:55.840 | Oh, changing the sevens would move it up by sevens.
00:16:58.040 | Great, that's what we want to change.
00:16:59.240 | We're simulating the future.
00:17:01.360 | When you play Towers of Hanoi, you have to look ahead.
00:17:04.240 | If I make this move next, this is a legal move.
00:17:07.960 | But is this going to leave me stuck
00:17:09.480 | a couple moves down the line?
00:17:11.100 | So we have to look ahead when humans solve Towers of Hanoi.
00:17:14.480 | Same thing with the poem problem.
00:17:16.560 | When you're writing the first line of the poem,
00:17:18.400 | you're also thinking ahead.
00:17:20.400 | What is this going to give me
00:17:21.600 | when I get to the last line of the poem?
00:17:23.360 | Oh, this is going to be nonsense.
00:17:24.560 | So I got to make this first line
00:17:27.160 | of the poem reversible.
00:17:30.040 | Like GPT-4 couldn't do this.
00:17:32.240 | It was just writing word by word.
00:17:33.520 | Here's, I'm writing a good poem,
00:17:35.040 | and when it got to the last line,
00:17:36.200 | let me look back at what the first line was and reverse it.
00:17:38.120 | It was too late.
00:17:39.220 | It was going to be nonsense.
00:17:40.620 | We simulate the future all the time.
00:17:44.180 | Almost everything we're doing,
00:17:47.920 | almost all of our actions as humans
00:17:50.880 | have a future simulation component to it.
00:17:52.820 | We do this naturally, we do this unconsciously,
00:17:55.080 | but almost everything we do, we simulate.
00:17:56.620 | What's going to happen if I do this?
00:17:58.280 | What about that?
00:17:59.120 | Okay, I'm going to do this.
00:18:00.040 | Should I cross the street right now?
00:18:01.360 | Well, let me simulate.
00:18:02.280 | Where's that car?
00:18:03.360 | How far is it?
00:18:04.360 | Where do I imagine that car is going to be
00:18:06.480 | with respect to the crosswalk by the time I'm out there?
00:18:08.560 | Ooh, that's a little bit close.
00:18:10.020 | I'm not going to do it.
00:18:11.200 | When choosing what to say to you,
00:18:13.740 | I am simulating your internal psychological state.
00:18:16.460 | That's how I figure out what to say
00:18:19.160 | that's not only going to accomplish my goals,
00:18:20.780 | but not make you really upset.
00:18:22.280 | This is why people who are maybe neurodivergent,
00:18:25.840 | somewhere on the autism spectrum,
00:18:28.200 | can accidentally end up insulting people.
00:18:31.280 | You know, they're not trying to,
00:18:32.280 | but they irritate or insult people frequently
00:18:35.040 | because part of what is different in their brain wiring
00:18:39.200 | is their ability to simulate the other mind.
00:18:41.160 | And when that is impaired,
00:18:42.560 | they can't simulate the impact
00:18:43.920 | of what they're going to say on another mind,
00:18:46.040 | then they're much more likely to say something
00:18:47.600 | that's going to be taken as sort of offensive
00:18:49.560 | or is going to upset someone.
00:18:50.600 | So we're constantly simulating the future.
00:18:53.920 | That is at the core of human cognition.
00:18:58.320 | It's also at the core of any rendition we've seen,
00:19:01.480 | any sci-fi rendition of a fully intelligent machine:
00:19:05.240 | they're doing that.
00:19:06.920 | In my New Yorker article,
00:19:07.920 | I talk about probably the most classic
00:19:09.640 | artificial intelligence from cinema,
00:19:13.200 | which is HAL 9000 from Stanley Kubrick's 2001.
00:19:16.540 | And we know the classic scene, right?
00:19:19.820 | Where Dave, the astronaut, is trying to disable HAL
00:19:22.960 | because its focus on its mission
00:19:25.200 | is going to endanger Dave and his life.
00:19:28.160 | And Dave says, "Open the pod bay doors."
00:19:30.000 | He's trying to get in to disassemble HAL, to turn it off.
00:19:34.120 | And HAL's like, "I cannot do that, Dave."
00:19:35.620 | It's like very famous exchange.
00:19:37.640 | How does HAL 9000 know not to open the pod bay doors?
00:19:40.560 | Because it simulates.
00:19:41.680 | What would happen if I opened the pod bay doors?
00:19:43.620 | Oh, that exposes this.
00:19:45.160 | And if this is exposed,
00:19:46.320 | this person could take out my circuitry.
00:19:47.880 | Oh, that doesn't match my goals.
00:19:49.480 | No, I'm not going to open the pod bay doors.
00:19:51.280 | You need to simulate the future
00:19:53.240 | to get anywhere near anything
00:19:54.880 | that we think of as human style cognition.
00:19:58.080 | GPT-4 can't do this.
00:20:00.480 | Now, is this just because we need a bigger model?
00:20:02.600 | Is this, is GPT-5 going to be able to do this?
00:20:04.940 | Is this, we just have to figure out our training?
00:20:07.680 | The answer here is no.
00:20:09.060 | I'll put my technical cap on here for a second,
00:20:11.960 | but I get into this in my New Yorker article.
00:20:14.900 | The architecture of the large language models
00:20:17.600 | that drive the splashiest AI agents of the moment,
00:20:21.800 | the Claudes, the Geminis, the ChatGPTs.
00:20:25.760 | These underlying large language models
00:20:28.440 | are architecturally incapable
00:20:31.640 | of doing even the most basic future simulations.
00:20:33.800 | And here's why I'm going to draw this.
00:20:34.960 | So if you're watching, instead of just listening,
00:20:36.940 | you'll see this picture.
00:20:38.560 | But you have to understand what happens
00:20:39.960 | in these large language models
00:20:41.080 | is that you have a series of layers.
00:20:42.920 | I'm drawing some of these layers here.
00:20:46.480 | So GPT-4, we don't really know how many layers there are.
00:20:49.040 | We think it's like 96, but we're not quite sure
00:20:50.960 | because they don't tell us.
00:20:52.920 | OpenAI is pretty closed about it,
00:20:54.040 | but we know from other language models.
00:20:56.040 | This is a feed-forward architecture.
00:20:59.240 | The information comes in the bottom layer.
00:21:01.620 | It works its way through all the layers
00:21:03.440 | until at the very top, you get the output,
00:21:05.640 | which is actually a token, a piece of a word.
00:21:07.720 | So you give it input, give it a sentence of input.
00:21:11.140 | It moves these layers one by one in order,
00:21:13.560 | and then out the other end comes a single word
00:21:16.080 | with which to expand that input.
00:21:18.300 | So it's an auto-regressive token predictor.
00:21:21.480 | These layers are hardwired.
00:21:23.760 | What's in them, it's kind of complicated.
00:21:27.840 | You have basically these transformer sub-layers first.
00:21:32.800 | It's a key piece of these new language models.
00:21:35.100 | It has to do with embeddings.
00:21:37.680 | It has to do with attention,
00:21:38.600 | what part of the input's being paid attention to.
00:21:40.880 | And then after those, you have basically neural networks,
00:21:43.160 | sort of feed-forward neural networks.
00:21:45.160 | But the main thing to think about here
00:21:46.640 | is the information moves through
00:21:48.640 | these hardwired connections.
00:21:51.440 | It's numbers it's multiplied by
00:21:54.080 | and neural network connections that it simulates activating.
00:21:57.040 | And it inexorably, inevitably moves forward
00:21:59.760 | through all of these layers.
00:22:01.480 | And out the other end comes a prediction.
00:22:04.160 | Now, because these layers are very big,
00:22:06.680 | in GPT-4, they're defined by somewhere
00:22:09.160 | around a trillion different values.
00:22:10.800 | These are very big layers.
00:22:13.080 | They can hard-code, these layers can hard-code
00:22:16.280 | a lot of information.
00:22:19.200 | And what happens is, and I get into this in the article,
00:22:21.540 | but at a very high level, what happens is,
00:22:23.600 | as your input goes through these layers,
00:22:26.440 | patterns are recognized.
00:22:28.520 | Really complicated patterns are recognized.
00:22:30.320 | This is an email.
00:22:31.160 | This is an email about this.
00:22:32.360 | We're being asked to do this about this email.
00:22:34.400 | And then there's very complicated guidelines
00:22:36.960 | baked into the connections that then say,
00:22:39.360 | given this is the sentence we're trying
00:22:41.560 | to expand with a single word,
00:22:44.200 | and given all these patterns we recognized
00:22:46.320 | about this sentence,
00:22:47.960 | and looking at all the possible next words
00:22:49.760 | that sort of make grammatical sense to be next,
00:22:52.640 | let's combine everything we've looked at
00:22:54.960 | to help bias towards which word we should output.
00:22:57.840 | And this number of guidelines
00:22:59.800 | and the properties that can be combined
00:23:01.960 | and the number of ways these properties can be combined
00:23:04.000 | is combinatorically immense.
00:23:05.600 | There's a sort of near endless categories
00:23:08.800 | of it's an email about this and that person.
00:23:10.600 | We're trying to say this.
00:23:12.080 | There are endless categories
00:23:13.600 | of what we're recognizing,
00:23:16.920 | and endless guidelines connecting them
00:23:18.840 | to a bias towards which word we should output next,
00:23:20.080 | but they're all hardwired.
00:23:21.360 | And so, you know, you can hardwire things like:
00:23:24.520 | if we recognize this specific situation,
00:23:27.200 | here is what we've learned before,
00:23:29.680 | this move, in these situations,
00:23:31.360 | will lead to somewhere good.
00:23:33.160 | But once things get novel,
00:23:34.520 | once it's something the model
00:23:36.040 | can't see or approximate with its hardwired rules,
00:23:39.600 | you're out of luck because there is no way
00:23:41.400 | to be iterative in here.
00:23:42.440 | There is no way to be interactive.
00:23:43.880 | These are completely hardwired models.
00:23:45.680 | There is no memory that can be changed.
00:23:47.480 | There's no looping.
00:23:48.320 | There's no recurrence.
00:23:49.840 | The information goes through.
00:23:51.600 | We apply the guidelines to what's seen,
00:23:53.720 | something comes out.
00:23:54.960 | We do our best with what we've already written down.
00:23:57.720 | We can't explore on the fly.
00:23:59.480 | That's just the architecture of these models.
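In pseudocode, the whole setup is just a fixed forward pass wrapped in an outer loop that appends one token at a time. This is a simplified sketch of the control flow, not any particular model's implementation; the stubs stand in for the real embedding, layers, and output head:

```python
import random

def embed(tokens):
    """Stub: a real model maps each token to a vector here."""
    return list(tokens)

def pick_next_token(activations):
    """Stub: a real model scores every vocabulary item and picks one."""
    return random.randint(0, 9)

def forward_pass(tokens, layers):
    """One fixed sweep through the hardwired layers, producing one token.
    Nothing inside this pass can loop back, branch, or revise earlier work."""
    activations = embed(tokens)
    for layer in layers:                  # information moves strictly forward
        activations = layer(activations)
    return pick_next_token(activations)

def generate(prompt_tokens, layers, max_new):
    """The only loop is out here: append one predicted token, then repeat."""
    tokens = list(prompt_tokens)
    for _ in range(max_new):
        tokens.append(forward_pass(tokens, layers))
    return tokens

# Identity "layers" just to show the control flow:
print(generate([1, 2, 3], layers=[lambda x: x] * 4, max_new=5))
```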
00:24:01.840 | We see this, for example, when you play chess with GPT-4.
00:24:05.680 | These guidelines have a lot of insights
00:24:08.680 | about chess baked into them.
00:24:11.160 | So the properties might be like, we have a chess board,
00:24:13.160 | we have pieces here.
00:24:14.160 | These are all properties that are being identified.
00:24:16.760 | This piece is protecting the king.
00:24:18.840 | Now, given all this information,
00:24:20.280 | we have our hard-coded guidelines.
00:24:21.920 | What move should we output next?
00:24:23.720 | And it might say, well, in general,
00:24:25.360 | in these situations, don't move the thing
00:24:27.200 | that's protecting the king.
00:24:28.160 | Let's do this.
00:24:29.000 | So you could have really complicated chess games
00:24:30.680 | that look really good.
00:24:32.480 | And I talked about in the article
00:24:33.560 | how if you play chess against GPT-4,
00:24:35.480 | prompting it properly,
00:24:37.040 | you get something like an Elo 1000 rated playing experience,
00:24:40.560 | which is like a pretty good novice player.
00:24:43.320 | But when you look closer at these games, what happens?
00:24:45.760 | It plays good chess until the middle game,
00:24:48.760 | and then it gets haphazard.
00:24:50.920 | Because what happens in the middle game of chess
00:24:53.000 | is your board becomes unique.
00:24:54.520 | And when you get to the middle game of chess,
00:24:56.880 | you can't just go off of hardwired heuristics.
00:25:00.240 | In this case, with a piece in this position,
00:25:02.000 | this is the right thing to do, or here's a good thing to do.
00:25:04.920 | When you get to the middle game, how do chess players play?
00:25:08.120 | They simulate the future.
00:25:10.280 | They say, I've never seen this type of board before.
00:25:12.320 | So what I need to do now is think, if I do this,
00:25:14.280 | what would they do?
00:25:15.120 | And then what would I do?
00:25:15.960 | You simulate the future.
00:25:16.920 | GPT-4 can't do it.
00:25:19.320 | So we see this.
00:25:20.160 | The chess game is good until it becomes bad.
00:25:22.360 | When the hardwired rules of "here, do this,
00:25:26.120 | in these situations, this makes sense"
00:25:27.720 | no longer directly apply,
00:25:29.440 | it has no way of interrogating its particular circumstance.
00:25:32.840 | And the chess play goes down.
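Contrast that with how a classical engine handles a position it has never seen: an explicit lookahead search such as minimax, which is literally the "if I do this, what would they do, then what would I do" loop. Here is a toy sketch; the little number game at the bottom is made up, and for chess you would plug in real move generation and a board evaluator:

```python
def minimax(state, depth, maximizing, legal_moves, apply_move, evaluate):
    """Explicit future simulation: alternate my best move and their best reply.
    Returns (score, best_move) after looking `depth` plies ahead."""
    moves = legal_moves(state)
    if depth == 0 or not moves:
        return evaluate(state), None
    best_score, best_move = (float("-inf"), None) if maximizing else (float("inf"), None)
    for move in moves:
        score, _ = minimax(apply_move(state, move), depth - 1, not maximizing,
                           legal_moves, apply_move, evaluate)
        if (maximizing and score > best_score) or (not maximizing and score < best_score):
            best_score, best_move = score, move
    return best_score, best_move

# Toy demo: players alternately add 1 or 3 to a running total for two turns;
# the maximizer wants the final total high, the minimizer wants it low.
score, move = minimax(
    state=0, depth=2, maximizing=True,
    legal_moves=lambda s: [1, 3],
    apply_move=lambda s, m: s + m,
    evaluate=lambda s: s,
)
print(score, move)  # 4 3: add 3 now, expecting the opponent to add 1
```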
00:25:35.200 | All right, so this is why we can't clean our inbox,
00:25:37.960 | because to clean our inbox, decisions have to be made
00:25:40.160 | about what to say.
00:25:41.280 | And to make decisions about what to say,
00:25:46.320 | you have to simulate the impact.
00:25:48.720 | Well, if I did this,
00:25:49.600 | what would the impact be on my schedule?
00:25:50.920 | If I said this, how's that gonna make this person feel?
00:25:53.360 | How's that gonna affect this team dynamic?
00:25:55.880 | How is this gonna affect,
00:25:57.240 | we have this current order of operations
00:25:59.440 | for completing this project.
00:26:00.920 | If I agree to do this delay,
00:26:02.400 | what's the effect gonna be on this project
00:26:04.200 | that we're doing?
00:26:05.040 | Is that gonna be interminable?
00:26:06.400 | If that's gonna be a problem,
00:26:07.440 | then I'm gonna answer no here.
00:26:08.560 | I'm gonna have to find someone else to do it.
00:26:10.720 | Writing an email, a language model can do.
00:26:13.680 | Figuring out what to say in an email,
00:26:15.080 | you have to simulate the future.
00:26:17.320 | GPT models can't do that.
00:26:18.720 | So are we gonna get there?
00:26:22.040 | Is that even possible?
00:26:23.840 | And here in the article, I say, well, yes.
00:26:26.000 | Language models, because they're massive and feed forward
00:26:29.880 | and unmalleable and they have no interaction or recurrence.
00:26:32.320 | No, they can't do it.
00:26:33.160 | But we have other AI systems
00:26:34.440 | that are very good at simulating the future.
00:26:36.640 | GPT-4 is bad at playing chess,
00:26:40.000 | but Deep Blue beat Garry Kasparov.
00:26:42.120 | But Deep Blue is not a language model.
00:26:44.600 | Deep Blue works by simulating hundreds of millions
00:26:47.000 | of potential moves into the future;
00:26:48.400 | that's a big part of what it does.
00:26:50.600 | AlphaGo beat Lee Sedol in Go.
00:26:53.840 | And how did it do that?
00:26:54.760 | Well, it simulates a ton of future moves
00:26:56.960 | to try to see the impact of different things
00:26:58.640 | that it might do.
00:27:01.160 | So in game playing AIs,
00:27:02.520 | we're very good at simulating the future.
00:27:04.520 | All right, so that's optimistic for our goal here
00:27:07.480 | of having an AI clean our inbox.
00:27:10.440 | But if we're gonna simulate the future
00:27:12.480 | in a way that lets us clean email,
00:27:14.120 | it's not just a sterile positions of pieces on a board.
00:27:19.120 | We have to understand human psychology.
00:27:21.800 | So can an AI simulate other minds?
00:27:25.440 | Well, here in this article, I say, yes.
00:27:28.120 | In fact, there's a particular engineer
00:27:29.760 | who's been leading the charge to do that.
00:27:31.240 | His name is Noam Brown.
00:27:32.720 | And what did Noam Brown do?
00:27:34.840 | Well, first he made waves with Pluribus,
00:27:37.880 | the first poker AI to beat top-ranked players.
00:27:43.720 | So they played in a tournament with seven top-ranked players,
00:27:46.880 | the people you would know if you followed poker,
00:27:49.520 | with a $250,000 pot.
00:27:51.240 | So there was skin in the game.
00:27:52.440 | They wanted to win.
00:27:53.400 | Texas hold 'em, no-limit Texas hold 'em.
00:27:57.960 | And Pluribus beat 'em.
00:27:59.560 | Beat 'em over the two-day tournament.
00:28:01.440 | Well, as Noam Brown explains himself,
00:28:04.240 | in poker, the cards themselves are important,
00:28:08.320 | but actually what's more important
00:28:09.600 | is other people's beliefs about what the cards are.
00:28:12.080 | So you have to simulate human psychology
00:28:14.280 | to figure out what to do.
00:28:15.720 | What matters is not that I have an ace high.
00:28:17.920 | What matters is, do the other players,
00:28:19.880 | what's the probability that they think I have an ace high?
00:28:24.280 | That's where all the poker strategy comes in.
00:28:24.280 | It's taking advantage of the mismatches
00:28:26.440 | between other players' beliefs and reality.
00:28:28.760 | That's where the money is made.
00:28:30.560 | Pluribus has to simulate human minds.
00:28:32.600 | Interesting aside about Pluribus, by the way,
00:28:36.280 | Brown and his team first tried to solve poker
00:28:39.360 | with just a massive neural net,
00:28:41.080 | sort of a feed-forward ChatGPT-style approach
00:28:44.680 | where it just had played so much poker
00:28:46.920 | that you would just tell it, here's my poker hand.
00:28:49.520 | Here's the cards that are out.
00:28:50.880 | And it would just sort of figure out
00:28:52.160 | here's the best move to do in that situation.
00:28:53.880 | And this model was huge.
00:28:55.160 | They had to use tens of thousands of dollars
00:28:57.600 | of compute time at the Pittsburgh Supercomputing Center
00:28:59.800 | just to train it.
00:29:01.320 | And with Pluribus, he said,
00:29:02.160 | well, what if instead of trying to hard-code
00:29:04.360 | everything you could see, we simulated the future?
00:29:07.600 | And this collapsed the size of this model.
00:29:09.840 | You can now train this stuff on a laptop
00:29:12.720 | or on AWS for like 20 bucks.
00:29:14.720 | It was a fraction of its size and way out-competed it.
00:29:17.080 | So simulating the future is a way more powerful strategy
00:29:20.040 | than trying to build a really massive network
00:29:22.040 | like a language model that just
00:29:23.440 | has everything hard-coded in it.
00:29:25.700 | So then Noam Brown said,
00:29:26.640 | let's play an even more human-challenging game, Diplomacy.
00:29:30.760 | And in the board game Diplomacy, which is like Risk,
00:29:34.680 | the whole key to that game is that, before every turn, you have
00:29:39.140 | one-on-one private conversations
00:29:41.800 | with each of the other players.
00:29:43.200 | And you make alliances and you backstab people
00:29:45.640 | and you're trying to place,
00:29:47.180 | the whole thing is human psychology.
00:29:49.840 | Noam Brown and his team at Meta
00:29:51.920 | built a diplomacy-playing bot named Cicero.
00:29:54.560 | I talk about this in the article; it beat real players.
00:29:58.160 | They played on a web server for Diplomacy.
00:30:00.040 | People didn't even know they were playing against an AI bot.
00:30:03.360 | And how did they do it?
00:30:04.240 | Well, in this case,
00:30:05.160 | and this is where it gets really relevant
00:30:06.640 | for answering our email,
00:30:08.600 | they took a language model and a simulator
00:30:11.000 | and had them work together.
00:30:12.880 | So the language model could take the messages
00:30:14.840 | from the one-on-one conversations.
00:30:16.400 | And it could figure out like, what is this person saying?
00:30:19.360 | What does this mean?
00:30:21.160 | And they could translate it into a common,
00:30:23.940 | really technical language that they could pass on
00:30:26.640 | to the game strategy simulator.
00:30:28.320 | And the game strategy simulator is like, okay,
00:30:29.580 | here's what the different players are telling me.
00:30:31.320 | Now I'm gonna simulate different possibilities.
00:30:34.400 | Like if this person is lying to me,
00:30:36.000 | how much trouble could I get into
00:30:37.240 | if I went along with their plan?
00:30:38.720 | What if I lied to them and kept this secret?
00:30:41.040 | and it tries out different strategies
00:30:42.580 | to figure out what to do.
00:30:43.720 | And then it tells the language model
00:30:45.280 | in very terse technical terms.
00:30:47.480 | All right, here's what we wanna do.
00:30:49.200 | Agree to an alliance with Italy,
00:30:51.980 | decline the alliance request from Russia,
00:30:54.140 | put this into good diplomacy language to be convincing.
00:30:56.900 | And then the language model generates
00:30:58.520 | these very natural sounding communications
00:31:02.040 | and they send those messages.
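Schematically, that division of labor looks something like the pipeline below. The function names and data are illustrative only, not Cicero's actual interfaces, but they show the handoff: language model in, planner in the middle, language model out.

```python
def parse_messages_with_llm(raw_messages):
    """Language model's job: turn free-form chat into structured intents.
    Stub output stands in for a real model call."""
    return [{"from": "Italy", "proposal": "alliance"},
            {"from": "Russia", "proposal": "alliance"}]

def plan_with_simulator(intents, my_position):
    """Planner's job: simulate futures ("what if they're lying to me?")
    and pick actions. Stub output stands in for a real strategy search."""
    return [{"to": "Italy", "action": "accept_alliance"},
            {"to": "Russia", "action": "decline_alliance"}]

def render_messages_with_llm(decisions):
    """Language model again: turn terse decisions back into natural, persuasive prose."""
    return [f"To {d['to']}: draft a friendly note that carries out '{d['action']}'"
            for d in decisions]

raw = ["Italy: let's work together this turn.",
       "Russia: join me against Turkey?"]
decisions = plan_with_simulator(parse_messages_with_llm(raw), my_position="France")
for line in render_messages_with_llm(decisions):
    print(line)
```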
00:31:04.180 | So now we're getting somewhere interesting.
00:31:05.520 | A language model plus a planning engine
00:31:07.840 | meant that we now had something that could play
00:31:09.780 | against humans in a very psychologically relevant,
00:31:13.000 | complex, interpersonal type of discussion
00:31:15.480 | where you had to understand people's intentions
00:31:17.660 | and get them on your side and it could do really well.
00:31:21.300 | This is the path that's gonna lead
00:31:24.120 | to AI taming the hyperactive hive mind.
00:31:26.160 | It's not gonna be GPT-5 or 6.
00:31:28.760 | It's gonna be the descendants of Cicero,
00:31:30.840 | the diplomacy playing bot.
00:31:32.780 | It's gonna be a combination of language models
00:31:34.520 | with future simulators with maybe some other models
00:31:37.220 | to try to model project states
00:31:39.440 | or your work states or your objectives.
00:31:41.320 | It's gonna be the ensembles
00:31:42.760 | of many different models working together
00:31:44.920 | that is going to make it possible
00:31:48.880 | to do things like have AI clean our inboxes.
00:31:51.160 | So the question then is,
00:31:54.140 | are the big companies taking this possibility seriously?
00:31:56.620 | I mean, is a company like OpenAI taking seriously this idea
00:32:00.840 | that, okay, if we bring in planning
00:32:03.180 | and these other types of thinkings
00:32:04.420 | and then connect that to language models,
00:32:05.860 | that's when things really get interesting.
00:32:08.620 | Well, I think they are.
00:32:10.420 | What's one piece of evidence?
00:32:12.540 | Well, remember Noam Brown who created Pluribus and Cicero?
00:32:15.700 | OpenAI just hired him away.
00:32:18.260 | And they put him in charge reportedly
00:32:21.000 | of this big project within OpenAI called Q*,
00:32:24.600 | a reference to the A* bounded search algorithm,
00:32:27.520 | something you use to search into the future
00:32:29.920 | to add planning as a feature
00:32:32.300 | to their language models.
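Nothing about what Q* actually contains is public, so take this as nothing more than the textbook algorithm the name alludes to: A* is a standard best-first search into the future, expanding whichever option looks cheapest once you count the path so far plus an estimate of what's left.

```python
import heapq

def a_star(start, goal, neighbors, cost, heuristic):
    """Best-first search into the future: pop the state with the lowest
    (path cost so far + heuristic estimate of remaining cost)."""
    frontier = [(heuristic(start, goal), 0, start, [start])]
    visited = set()
    while frontier:
        _, so_far, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in visited:
            continue
        visited.add(state)
        for nxt in neighbors(state):
            if nxt not in visited:
                g = so_far + cost(state, nxt)
                heapq.heappush(frontier, (g + heuristic(nxt, goal), g, nxt, path + [nxt]))
    return None

# Toy usage: reach 10 from 0, stepping +1 or +2 on a number line.
print(a_star(0, 10,
             neighbors=lambda s: [s + 1, s + 2],
             cost=lambda a, b: 1,
             heuristic=lambda s, g: abs(g - s)))
```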
00:32:33.640 | So I think P-Inbox zero might be higher than we think.
00:32:38.700 | And this is not gonna be a trivial thing
00:32:41.400 | or a cool thing or an interesting twist.
00:32:42.960 | I think it could actually completely reinvent
00:32:46.120 | the experience of knowledge work.
00:32:47.640 | I have been trying for years to solve this problem
00:32:49.920 | through cultural changes.
00:32:51.920 | We need to get rid of the hyperactive hive mind.
00:32:53.600 | We need to replace it with better systems
00:32:55.400 | that don't have so many ad hoc unscheduled messages
00:32:57.920 | that we have to respond to.
00:32:58.760 | We have to stop the context shifting.
00:33:00.320 | And I've had a really hard time making progress
00:33:02.520 | with large organizations because of managerial capitalism
00:33:05.280 | and entrenchment of stability.
00:33:06.760 | It's very difficult.
00:33:08.180 | So maybe technology is gonna lap me at some point
00:33:10.600 | and eventually there'll be a tool we can turn on
00:33:12.860 | that takes me out of my inbox as well.
00:33:14.580 | But once we do that, those benefits are gonna be so huge.
00:33:18.180 | We're never gonna go back.
00:33:19.600 | We will look at this era of checking an inbox
00:33:21.580 | once every five minutes.
00:33:23.500 | I think in the knowledge work context,
00:33:25.140 | similar to how the cavemen looked at the age before fire.
00:33:28.900 | I can't believe we actually used to live that way.
00:33:31.280 | So I'm optimistic.
00:33:32.420 | There we go, Jesse.
00:33:34.980 | That's AI, P-Inbox zero, that's the key.
00:33:37.700 | - As you were explaining it all, I had some questions
00:33:40.940 | but you answered them all.
00:33:41.980 | I was curious about like the Deep Blue
00:33:44.660 | and like driverless cars, but.
00:33:46.820 | - Yeah, yeah.
00:33:47.660 | - And I didn't know like the explanation
00:33:49.740 | until you explained it all.
00:33:50.980 | - Not to geek out, but the difference between
00:33:53.120 | the advancement of AlphaGo, which won at Go
00:33:56.160 | in the 2010s, versus Deep Blue,
00:33:58.500 | which won at chess in the 1990s.
00:34:01.620 | DeepMind did AlphaGo.
00:34:03.380 | The big advancement there is that the hard thing about Go
00:34:06.260 | is figuring out is this board good or bad, right?
00:34:08.600 | So if you're gonna simulate the future,
00:34:10.940 | what you have to do is be able to evaluate the futures.
00:34:14.060 | Like, okay, if I did this, they might do this,
00:34:15.820 | and I would do this, is this good?
00:34:18.260 | That's easier to figure out in chess than it is in Go.
00:34:21.200 | Like, is this a good board or a bad board?
00:34:23.220 | So the big innovation in AlphaGo
00:34:25.900 | is they had these neural networks play Go
00:34:28.920 | against each other endlessly.
00:34:30.760 | They jump-started them by giving them
00:34:32.180 | like thousands of real games.
00:34:33.380 | So they learned the rules of Go and got a sense
00:34:35.180 | of like what was good or bad.
00:34:36.420 | And then they played Go endlessly against each other.
00:34:39.180 | And the whole point here was to build up
00:34:41.580 | a really sophisticated understanding
00:34:44.060 | of what's good and bad, right?
00:34:46.140 | So they built this network that could look at a board
00:34:48.220 | and say, this is good, and this is bad,
00:34:49.960 | based off of just hundreds of millions of games
00:34:52.080 | it played with itself.
00:34:53.780 | Then they combined that
00:34:54.620 | with a future-looking planning system.
00:34:56.540 | So now when they're looking at different possible moves,
00:34:58.980 | they could talk to this model they trained up
00:35:00.920 | that's self-trained, is this good, is this bad,
00:35:03.420 | to figure out what the good plays are.
00:35:05.140 | And it led to a lot of innovation in play
00:35:06.940 | because this model learned good board configurations
00:35:09.900 | that no human had ever thought of as being good before.
00:35:12.540 | Part of how it beat Lee Sedol
00:35:14.500 | was it did stuff he had never seen before.
00:35:17.380 | Say, what's going on here?
00:35:18.420 | Whereas with Deep Blue, it was much more,
00:35:20.860 | like they brought in chess masters,
00:35:22.300 | and it was much more sort of hand-coded in.
00:35:25.860 | Is this a good position or a bad position?
00:35:28.620 | It was sort of more heuristical there.
00:35:30.980 | So in AlphaGo, they're like, oh, you can actually build,
00:35:33.940 | you can self-teach yourself what's good and what's bad.
00:35:36.860 | Which was cool.
00:35:37.820 | But it still had to simulate the future.
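In code terms, the advance being described is swapping a hand-written evaluation for a learned one and plugging it into the same kind of lookahead as the earlier minimax sketch. Everything here is stubbed, and AlphaGo actually combines policy and value networks with Monte Carlo tree search, but this is the shape of the combination:

```python
def value_network(board):
    """Stub for the self-play-trained network that answers "is this board good or bad?"
    A real network returns something like an estimated win probability."""
    return 0.0

def score_candidate_move(board, move, apply_move, opponent_replies):
    """Look one reply ahead: assume the opponent answers with whatever leaves us
    in the worst position according to the learned evaluation."""
    after_my_move = apply_move(board, move)
    replies = opponent_replies(after_my_move)
    if not replies:
        return value_network(after_my_move)
    return min(value_network(apply_move(after_my_move, r)) for r in replies)

# The search explores futures; the learned network judges how good each future looks.
```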
00:35:40.540 | So we'll see.
00:35:42.100 | All right, so anyways, we got some questions coming up,
00:35:43.820 | some about AI and digital knowledge work,
00:35:45.580 | some about other things.
00:35:47.060 | But first let's hear a word from our sponsors.
00:35:49.460 | Hey, I'm excited, Jesse, that we have a new sponsor today.
00:35:53.820 | A sponsor, one of these sponsors that does something
00:35:55.820 | that is exactly relevant to my life.
00:36:00.100 | This is Listening, so the app is called Listening,
00:36:04.460 | that lets you listen to academic papers, books,
00:36:06.820 | PDFs, webpages, articles, and email newsletters.
00:36:11.060 | Where Listening came to my attention,
00:36:12.980 | where it's known in my circles,
00:36:14.860 | is that people use it to transform academic papers
00:36:18.700 | into something they can listen to,
00:36:19.780 | like you would a podcast or a book on tape.
00:36:22.100 | Now, it can do this for other things as well,
00:36:23.660 | like I just mentioned,
00:36:24.500 | but this is where it really came into prominence.
00:36:26.980 | And it uses AI to do this.
00:36:28.260 | So speaking of AI, it has a very good AI voice.
00:36:31.500 | It does not sound like a robot.
00:36:33.060 | It sounds like a real human.
00:36:34.540 | And you can give it, for example,
00:36:35.780 | a PDF of an academic paper,
00:36:37.980 | and you can pause, play, listen to this,
00:36:41.100 | like you had hired a professional voice actor
00:36:43.300 | to read that paper.
00:36:46.180 | Now, why is this important?
00:36:47.100 | Because it opens up all that time when you're driving,
00:36:51.620 | you're stuck in traffic,
00:36:52.860 | you're waiting for something to start,
00:36:54.660 | time when you might put on a book or take a podcast.
00:36:56.940 | Now you could also put on something
00:36:58.940 | that is very productively useful or interesting
00:37:02.140 | for your own work.
00:37:03.740 | Hey, I want it to read to me this new paper about whatever.
00:37:08.460 | Someone just sent me a paper,
00:37:10.020 | which I'm gonna listen to in Listening for sure,
00:37:12.300 | 'cause I have a long drive coming up.
00:37:13.980 | Someone just sent me a paper, for example,
00:37:15.940 | that looked at,
00:37:17.580 | does Twitter posting about your academic papers,
00:37:21.780 | so this is very circular,
00:37:22.820 | lead to higher citation count?
00:37:25.020 | And it looks like the implication of the paper is no.
00:37:28.340 | So promoting yourself on Twitter as an academic
00:37:30.820 | doesn't actually help you become a better academic.
00:37:33.120 | This is fascinating to me.
00:37:34.900 | So this idea that I can just click on that
00:37:37.060 | and now when I'm walking back and forth
00:37:39.640 | or going to the bus stop,
00:37:40.480 | I could listen to this paper.
00:37:42.540 | Just imagine the amount of time you can now use
00:37:45.380 | actually learning interesting things.
00:37:46.860 | So it's really cool.
00:37:47.700 | It's bringing other types of content
00:37:49.820 | into the world of audio consumption.
00:37:53.900 | One of the other cool features I like is an Add Note button.
00:37:58.620 | So like as you're listening to a paper,
00:38:00.700 | you can click add note
00:38:02.280 | and then just type a few sentences
00:38:03.780 | and it'll store that for you.
00:38:06.380 | Oh, here's a note for this section.
00:38:08.380 | So you can add notes as you go along.
00:38:09.740 | Anyways, really cool for people like me
00:38:11.940 | who have to read a lot of interesting complicated stuff
00:38:14.260 | and don't always have a lot of time
00:38:15.900 | where we can just sit down and actually read.
00:38:19.380 | So here's the good news.
00:38:20.220 | Your life just got a lot easier.
00:38:22.040 | Normally you'd get a two week free trial,
00:38:25.280 | but for my listeners,
00:38:26.300 | you can now get a whole month free
00:38:28.020 | if you go to listening.com/deep
00:38:31.300 | or use the code DEEP at checkout.
00:38:33.940 | So go to listening.com/deep
00:38:36.260 | or use the code DEEP at checkout
00:38:37.660 | to get a whole month free of the listening app.
00:38:40.800 | I also wanna talk about our good friends at Element, L-M-N-T.
00:38:46.280 | Look, healthy hydration isn't just about drinking water.
00:38:48.640 | It's about water and the electrolytes that it contains.
00:38:52.300 | You lose water and sodium when you sweat.
00:38:56.080 | So you have to replace both.
00:38:58.600 | While most people just drink water,
00:38:59.780 | you need to be replacing the water and the electrolytes.
00:39:04.100 | Drinking beyond thirst is a bad idea.
00:39:07.340 | It dilutes blood electrolyte levels,
00:39:09.860 | which also can cause problems.
00:39:11.700 | So the goal here is not
00:39:13.620 | to drink as much water as possible,
00:39:15.500 | but to drink a reasonable amount of water plus electrolytes,
00:39:18.140 | especially if you're sweating or exercising a lot.
00:39:20.620 | This is where Element enters the scene.
00:39:23.220 | I use Element all the time.
00:39:25.360 | It is a powdered mix you add to your water
00:39:28.520 | that gives you the sodium, potassium, magnesium you need,
00:39:31.920 | but it's zero sugar and no weird artificial stuff.
00:39:35.900 | It gives you what you need in your water
00:39:37.260 | without any of the other stuff,
00:39:38.460 | the sugar or the weird chemicals.
00:39:40.620 | Zero sugar, zero artificial colors,
00:39:42.360 | no other dodgy ingredients.
00:39:43.580 | It tastes great.
00:39:44.900 | It's salty and good tasting.
00:39:46.720 | I love citrus salt.
00:39:48.700 | Other people like raspberry salt.
00:39:50.520 | They have these spicy flavors like mango chili.
00:39:53.420 | You can mix chocolate salt into your morning coffee
00:39:55.740 | if you really want to rehydrate after a hard night.
00:39:59.100 | I drink this, sure, after my workouts,
00:40:02.540 | but also if I've had a long day of podcasting
00:40:05.100 | and giving talks and I'm just expelling all this moisture
00:40:08.860 | through talking and sweating,
00:40:11.500 | Element is exactly what I go to when I get back.
00:40:13.580 | I add it to my Nalgene bottle and I get both back.
00:40:16.260 | Anyways, I love Element and I love that
00:40:19.660 | I don't have to worry about drinking it.
00:40:20.700 | No sugar, no nonsense.
00:40:22.780 | So the good news is Element came up
00:40:25.260 | with a fantastic offer for us.
00:40:27.380 | Just go to drinkLMNT.com/deep
00:40:30.700 | to get a free sample pack with any purchase you make.
00:40:33.340 | That's drink-L-M-N-T dot com slash deep.
00:40:38.340 | All right, Jesse, let's do some questions.
00:40:40.460 | - All right, first question is from Zaid.
00:40:43.780 | I'm a student and feel lost because of the fear
00:40:45.580 | that AI will replace all jobs.
00:40:47.700 | Specifically, software jobs and web development
00:40:50.540 | are at the top of the list of jobs that will disappear.
00:40:53.420 | After reading Deep Work, these were the two fields
00:40:55.580 | that I wanted to pursue.
00:40:57.180 | My motivation to study is dying out.
00:40:59.300 | Are these fields now a lost cause?
00:41:02.100 | - No, they're not a lost cause.
00:41:04.140 | I do not think programming as a job is gonna go away.
00:41:07.260 | And I do think it's a good skill to learn.
00:41:10.220 | It does open up a lot of career capital opportunities
00:41:12.540 | to shape interesting careers.
00:41:13.980 | So if you look at the history of computer programming,
00:41:18.100 | it is a long line of tales
00:41:22.340 | of new technologies coming in
00:41:24.860 | that make programmers much more efficient, right?
00:41:28.340 | And from the very beginning, right?
00:41:29.500 | I mean, when we started, programming used to be plugboards.
00:41:33.860 | To program an early electronic digital computer,
00:41:36.900 | you're adjusting circuits by taking plugs
00:41:40.420 | and plugging them into other places.
00:41:41.780 | Then we got punch cards, way more efficient.
00:41:44.500 | Now I can store a program on punch cards and run that.
00:41:47.660 | I don't have to redo it from scratch every time.
00:41:49.820 | That's a huge efficiency gain.
00:41:51.580 | And then we got interactive terminals.
00:41:53.260 | Oh, I don't have to make punch cards,
00:41:56.340 | give it to someone and come back the next day
00:41:57.860 | to see if it worked.
00:41:59.060 | We're talking like massive,
00:42:00.420 | multiple order of magnitude efficiency changes
00:42:02.920 | one after another.
00:42:04.660 | Then we got interactive editors.
00:42:06.340 | I could edit particular words or lines of my code.
00:42:10.100 | I could run the code straight there and get the results
00:42:13.220 | and immediately go back and change it.
00:42:15.060 | Then we got detailed debuggers.
00:42:16.900 | Oh, this is what's going wrong in your code.
00:42:18.940 | Here is where your code broke.
00:42:20.340 | Like all of this stuff,
00:42:22.180 | every one of these is an exponential increase
00:42:24.980 | in the efficiency of a programmer.
00:42:26.740 | Then we got this sort of modern world
00:42:28.380 | where we have autocomplete
00:42:29.900 | and IDEs with real-time syntax checking.
00:42:33.100 | As you're writing code, it's telling you,
00:42:34.700 | you typed this wrong.
00:42:35.620 | This is a syntax error here.
00:42:36.900 | It's telling you your mistakes
00:42:38.060 | before you even try to run it.
00:42:39.980 | You don't have to memorize all the different commands
00:42:42.380 | and calls and parameters
00:42:43.540 | because it can auto fill this in for you.
00:42:45.900 | And we have Stack Overflow and Google.
00:42:48.700 | So now for like almost anything you want to do,
00:42:51.580 | you can immediately at your same desk
00:42:53.660 | in the monitor right here,
00:42:54.700 | find examples of exactly that code.
00:42:57.300 | You have to understand that every one of these advances
00:43:02.620 | was a massive efficiency boon.
00:43:02.620 | So what did we see?
00:43:05.100 | Did we see as we made programmers massively more efficient
00:43:08.580 | that the number of programmers we needed to hire
00:43:10.820 | got smaller and smaller?
00:43:12.940 | If that's what really happened,
00:43:14.300 | there'd be like seven programmers left right now.
00:43:17.140 | Instead, there are sort of more people doing programming
00:43:19.060 | than ever before, because what we did
00:43:20.620 | was follow a sort of common economic pattern.
00:43:23.140 | As we became more efficient as programmers,
00:43:25.860 | as each individual programmer could handle
00:43:27.780 | and produce more complicated systems faster,
00:43:31.140 | we increased the complexity
00:43:32.580 | and therefore the potential value of the systems we built.
00:43:35.500 | So we still needed the same number of programmers,
00:43:38.620 | if not more.
00:43:39.900 | A programmer today, I would say,
00:43:41.580 | is a thousand times more efficient
00:43:44.380 | than a programmer in 1955.
00:43:47.140 | But we have way more than a thousand times
00:43:49.260 | more applications of software today than we had in 1955.
00:43:53.340 | This is my best prediction of what we're gonna see with AI.
00:43:57.220 | I think the push to try to fully replace a programmer
00:44:02.620 | with AI is quixotic.
00:44:04.860 | Now, what we're gonna do is make programmers even better.
00:44:07.580 | That's what we're seeing, right?
00:44:09.620 | I mean, this is what GitHub Copilot is doing.
00:44:12.100 | It's like an even smarter autocomplete.
00:44:14.540 | It's making programmers more efficient.
00:44:16.860 | We're essentially removing the need
00:44:18.460 | to search for things on Stack Overflow.
00:44:20.820 | You can have an AI language model.
00:44:22.260 | Just you can ask it and it will show you the example code
00:44:24.500 | or write you the example code.
00:44:25.460 | That makes us more efficient.
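To make that workflow concrete, here is a minimal sketch, not Copilot or any specific product, of what the "ask the model instead of searching Stack Overflow" step might look like from a programmer's terminal. The endpoint URL, environment variable, and response field below are assumptions for illustration only, not a real API.

```python
# Minimal sketch (assumption-heavy, not any real product's API): a helper that
# sends a natural-language coding question to whatever language-model endpoint
# you have configured and returns the model's suggested example code.
import json
import os
import urllib.request

# Hypothetical endpoint; replace with whatever model service you actually use.
MODEL_ENDPOINT = os.environ.get("MODEL_ENDPOINT", "https://example.invalid/v1/complete")


def ask_model_for_example(task: str) -> str:
    """Ask the configured model for example code that accomplishes `task`."""
    payload = json.dumps({"prompt": f"Show example code that {task}."}).encode("utf-8")
    request = urllib.request.Request(
        MODEL_ENDPOINT,
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(request) as response:
        # The "text" field is an assumed response shape for this sketch.
        return json.loads(response.read())["text"]


if __name__ == "__main__":
    # Instead of searching the web, ask directly without leaving the editor or terminal.
    print(ask_model_for_example("parses an ISO 8601 timestamp in Python"))
```

The point of the sketch is the workflow, not the plumbing: the question that used to be a web search becomes a call you can make without leaving your tools.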
00:44:27.780 | I think we're gonna get more of the AI
00:44:29.380 | writing first drafts of code
00:44:31.420 | or filling in the easier stuff.
00:44:33.740 | So programming will become more complex.
00:44:36.100 | It'll be harder.
00:44:36.980 | But we're gonna be able to produce more complicated systems
00:44:40.180 | with the same number of people.
00:44:41.500 | So we're just gonna see more computer code in our world,
00:44:46.300 | more complicated systems in our world,
00:44:48.340 | more things that run on complicated code
00:44:50.620 | because the ability for programmers to produce this
00:44:52.740 | will be increased.
00:44:53.940 | So what that means for you, Zaid,
00:44:56.340 | is if you like programming, keep learning it,
00:44:59.100 | but keep up with the latest AI tools as you do.
00:45:02.020 | Whatever is cutting edge with AI and programming,
00:45:04.060 | learn that.
00:45:05.060 | Push yourself to learn more and more complicated code
00:45:09.220 | with more and more complicated AI tools
00:45:11.340 | because the complexity curve of what programmers have to do
00:45:14.140 | has also been steadily increasing.
00:45:16.460 | So you gotta keep up with that curve,
00:45:17.740 | but the jobs are gonna be there.
00:45:18.780 | At least that's my best prediction.
00:45:20.540 | All right, who do we got next?
00:45:23.380 | - Next question is from Kendra.
00:45:25.500 | Do you ever use ChatGPT to assist with your writing?
00:45:28.580 | I'm not a full-time writer, but do write a lot.
00:45:30.620 | Recently, I've been using ChatGPT for assistance.
00:45:33.220 | Is this bad?
00:45:34.140 | - Yeah, ChatGPT and writing is an interesting area.
00:45:38.140 | It's something I've been looking into
00:45:40.180 | through numerous different roles,
00:45:42.660 | thinking about article ideas,
00:45:44.140 | and in some of my roles looking at pedagogy and AI
00:45:46.460 | at Georgetown.
00:45:47.300 | It's something I'm really interested in.
00:45:49.420 | The complexity of this topic
00:45:50.900 | is that there are several different threads here,
00:45:53.340 | and the role of AI in writing,
00:45:56.060 | I think, is different in each of these threads.
00:45:57.700 | So let's think about professional writers, for example.
00:46:00.700 | The professional writers I know
00:46:02.780 | are not letting ChatGPT write for them.
00:46:06.660 | The professional writers I know,
00:46:07.740 | and there are quite a few who are messing around
00:46:09.780 | with language models like ChatGPT,
00:46:12.100 | are using them largely for brainstorming and idea formulation.
00:46:15.500 | Well, what about this?
00:46:16.340 | Can you give me examples of this?
00:46:17.860 | Also for intelligent Google searches.
00:46:20.740 | Hey, can you go find me five examples of this?
00:46:22.900 | And so for the new language models
00:46:25.540 | that have plugin access to the web,
00:46:27.340 | they can kind of give you more modern examples.
00:46:29.340 | It's a useful research assistant.
00:46:31.380 | But as you know, professional writers
00:46:33.580 | don't use ChatGPT to actually write
00:46:35.220 | because professional writers have very specific voices,
00:46:39.020 | the art of exactly how we craft sentences matters to us.
00:46:42.020 | Like that's not outsourceable, right?
00:46:44.660 | 'Cause it matters.
00:46:45.500 | That top 10% skill in making writing great
00:46:50.220 | is all in the little details.
00:46:51.940 | And it's very idiosyncratic how we do it.
00:46:53.820 | So professional writers, for the most part,
00:46:55.380 | don't let ChatGPT write for them.
00:46:58.380 | For non-professional writers
00:46:59.660 | who do have to produce writing,
00:47:00.740 | I think it's becoming increasingly common
00:47:03.820 | to use tools like ChatGPT to produce drafts of text
00:47:07.020 | or just text in general.
00:47:08.540 | I don't think this is a bad thing.
00:47:11.940 | I think it brings clear communication to more people.
00:47:16.260 | We see big wins, for example,
00:47:17.780 | with non-native English speakers.
00:47:20.140 | The ability now to not be tripped up or held back
00:47:22.900 | because I can't describe my scientific results very well.
00:47:26.700 | My language is bad.
00:47:27.860 | Oh, ChatGPT can help me describe my results
00:47:30.300 | in a paper better.
00:47:31.660 | Now what matters, of course, is my results.
00:47:34.660 | But now I'm not gonna be tripped up
00:47:35.740 | presenting those results 'cause I have help
00:47:37.340 | doing the writing.
00:47:38.460 | I mean, I think a lot of people who have communication
00:47:40.140 | in their job struggle a little bit with writing.
00:47:42.900 | If they can be clear, I think this is fine.
00:47:45.420 | Can you write a short message in this style
00:47:47.380 | that thanks the person?
00:47:49.740 | Again, I think this is just introducing
00:47:51.300 | more clear communication to the world,
00:47:52.820 | and we are gonna see more of that.
00:47:54.980 | And I don't think that's a problem.
00:47:57.260 | So then what's the thread where things
00:47:58.420 | are more controversial or open?
00:48:00.740 | And I think that's when it comes to pedagogy.
00:48:03.620 | So in school.
00:48:05.260 | And this is really an open question right now,
00:48:08.420 | is what role does learning to write
00:48:12.140 | play in learning to think?
00:48:14.180 | There's different schools of thought about this.
00:48:15.820 | Like, should we teach students from day one
00:48:18.460 | how to write in this sort of cybernetic model
00:48:21.500 | of it's you plus a language model?
00:48:23.860 | Or should we teach students how to write,
00:48:27.420 | and then later, hey, later in life,
00:48:29.540 | you can use language models to sort of write
00:48:31.580 | on your behalf to be more efficient,
00:48:33.060 | but it's important for your development as a thinker.
00:48:34.940 | It's important for your development as a person
00:48:37.580 | to grapple with words.
00:48:39.380 | There's a lot of people who say writing is thinking.
00:48:42.220 | So to practice writing clearly
00:48:43.740 | teaches your mind how to think clearly,
00:48:45.460 | and we can't yet outsource our thinking to ChatGPT,
00:48:47.700 | so we don't want to lose that ability.
00:48:50.220 | There's a clear parallel here.
00:48:53.340 | We can compare this to other existing technologies.
00:48:57.820 | In particular, I like to think about comparing this
00:48:59.700 | to the calculator on one hand
00:49:03.140 | and centaur chess on the other.
00:49:05.060 | I'll explain what I mean.
00:49:05.900 | With the calculator, here's a technology that came along
00:49:08.980 | that can do arithmetic very fast and very accurately.
00:49:11.620 | From a pedagogical position,
00:49:15.460 | what we largely decided to do was preserve the importance
00:49:19.500 | of learning arithmetic without a calculator.
00:49:22.260 | So until you get to middle school or beyond,
00:49:24.740 | you're learning how to do arithmetic with pencil and paper,
00:49:27.220 | because we thought pedagogically,
00:49:28.420 | you need to get comfortable manipulating numbers
00:49:31.420 | and their relationships to each other.
00:49:33.220 | You need that skill.
00:49:34.580 | As you move on into more advanced algebra
00:49:37.820 | and on into calculus and beyond,
00:49:39.500 | we then say, okay, now,
00:49:41.980 | if in working on higher order stuff,
00:49:44.340 | there's arithmetic that needs to be done,
00:49:45.580 | you can use the calculator.
00:49:47.340 | So you can automate arithmetic later on,
00:49:49.740 | but we felt that it's important
00:49:51.020 | to learn how to do arithmetic yourself
00:49:54.260 | earlier in your pedagogical journey.
00:49:56.220 | The other way of thinking about this is centaur chess,
00:49:59.180 | which is where players play chess along with an AI,
00:50:01.700 | a player plus an AI, and they work with each other.
00:50:04.620 | Centaur chess players are the highest ranked players
00:50:06.460 | there are.
00:50:07.780 | A player plus AI can beat the best AI.
00:50:10.060 | A player plus AI can beat the very best human players.
00:50:13.500 | This is a model that just says, no, no,
00:50:14.980 | human plus machine together is just much better
00:50:16.980 | than human without machine.
00:50:18.780 | So that's another way that we might end up thinking
00:50:20.540 | about writing pedagogy.
00:50:21.900 | Start right away learning how to write
00:50:23.580 | with these language models,
00:50:24.940 | because you'll be a better writer
00:50:25.820 | than you ever would have been before.
00:50:26.980 | And the quality of writing in the world is gonna go up.
00:50:29.380 | We don't know what the right answer is yet.
00:50:32.100 | I think educational institutions are still grappling with,
00:50:35.260 | is language model aided writing the calculator,
00:50:37.460 | or is it centaur chess?
00:50:38.460 | And I don't think we know yet,
00:50:39.660 | but a lot of people are thinking about it.
00:50:41.060 | So I think that's probably the most interesting thread,
00:50:43.180 | but if you're just doing mundane professional communication,
00:50:45.660 | you're not a professional writer,
00:50:47.140 | and you have a language model helping you,
00:50:49.300 | I say, Godspeed.
00:50:50.140 | I don't think that's a bad thing.
00:50:51.740 | All right.
00:50:55.460 | Ooh, we got a question coming up next.
00:50:57.740 | Oh, this is gonna be our slow productivity corner.
00:50:59.180 | - Yeah, we get the music.
00:51:00.220 | - Should we hit the music first?
00:51:01.380 | - Yep. - All right, let's hear it.
00:51:03.220 | (slow guitar music)
00:51:06.220 | So as long-time listeners know,
00:51:11.220 | we try to designate at least one question per week
00:51:13.780 | as our slow productivity corner,
00:51:15.260 | meaning that my answer is relevant
00:51:18.020 | to my book, "Slow Productivity."
00:51:19.940 | If you like this podcast,
00:51:21.340 | you really need to get the book, "Slow Productivity."
00:51:23.180 | All right, what's our slow productivity corner question
00:51:25.140 | of the week, Jesse?
00:51:26.100 | - All right, this question's from Hunched Over.
00:51:28.500 | I have a really nice work-from-home setup
00:51:30.660 | and a special nook of my house.
00:51:32.060 | However, I've used this setup for two years
00:51:33.940 | for mostly hyperactive hive-mind-type work,
00:51:36.460 | impulsive checking of emails,
00:51:37.900 | switching between multiple tasks,
00:51:39.700 | Zoom meetings, distraction, et cetera.
00:51:42.540 | I now find it's very hard
00:51:45.620 | to get in a deep work mode at this desk,
00:51:47.420 | even when I have the time set aside.
00:51:49.340 | I habitually switch back into a shallow work mindset.
00:51:53.440 | How do I reclaim my desk to be a place
00:51:55.540 | of deep work for my mind?
00:51:57.620 | - So I talk about this in principle two of my book,
00:52:00.100 | "Slow Productivity."
00:52:01.300 | In that principle, in the description,
00:52:04.660 | so the principle is work at a natural pace,
00:52:07.020 | and as part of my definition of that principle,
00:52:10.420 | at the end of it, I say
00:52:12.060 | it's about working at a natural pace,
00:52:14.340 | varying intensity over time scales,
00:52:16.420 | and then, sort of comma,
00:52:18.380 | in settings conducive to brilliance.
00:52:21.580 | And a big idea I get into in that chapter
00:52:25.080 | is that setting really matters
00:52:27.320 | when you're trying to extract value from your mind.
00:52:30.660 | Setting really matters,
00:52:31.500 | and we should take that very seriously
00:52:33.000 | and be willing to invest a lot of time
00:52:35.280 | and potentially monetary resources,
00:52:37.980 | if needed, to get the settings proper
00:52:42.080 | for producing stuff with our minds.
00:52:44.220 | So because of that, what we often see
00:52:46.020 | when we study the traditional knowledge workers
00:52:47.720 | I look at in that book,
00:52:49.080 | people famous for producing value in their minds,
00:52:51.440 | is that like you,
00:52:52.660 | they often have very, very nice home offices,
00:52:56.300 | really good desks, and their computers,
00:52:58.140 | and their files, and a very comfortable chair,
00:52:59.940 | and it's a very well-appointed home office.
00:53:03.400 | And they don't work on their deep stuff in that office.
00:53:05.700 | We see these separations.
00:53:06.820 | David McCullough, I found the picture of this.
00:53:09.220 | I talk about it in the book.
00:53:10.540 | I found the picture of his home office from a profile,
00:53:13.320 | his house in West Tisbury, Martha's Vineyard.
00:53:16.120 | It's a great home office.
00:53:17.940 | The window looks over a scenic view,
00:53:20.020 | and it's got an L-shaped desk.
00:53:21.440 | It looks great.
00:53:22.420 | He wrote in a garden shed.
00:53:24.060 | So he would use the home office to do all the business
00:53:26.360 | of being like a best-selling author and historian.
00:53:28.560 | But when he wrote, he went to a garden shed
00:53:29.980 | that had a typewriter,
00:53:31.620 | 'cause that was what was conducive for his brilliance.
00:53:33.220 | Mary Oliver, the poet,
00:53:34.940 | her best poetry was composed walking in the woods.
00:53:38.460 | There was something about the nature,
00:53:41.460 | and the isolation, and the rhythm.
00:53:42.640 | That is where the good thoughts came.
00:53:44.860 | That's a very specific process.
00:53:46.380 | Nietzsche also would do very long walks.
00:53:49.740 | That's where his best thoughts would come.
00:53:53.500 | And so we see these examples time and again,
00:53:56.380 | that the setting in which you try to do
00:53:57.940 | your most smart, creative, cognitive work really matters.
00:54:02.260 | And if that setting is the same place
00:54:03.980 | that you do shallow work,
00:54:04.980 | it's the same place you do your taxes,
00:54:06.460 | and your emails, and your Zooms,
00:54:08.780 | your mind is gonna have a hard time
00:54:10.220 | getting into the deep work mode.
00:54:12.220 | And so the answer is to have two places.
00:54:14.620 | Here's my home office that I care about function.
00:54:18.300 | It's got monitors, and good webcam,
00:54:20.700 | and my files are here, and I don't wanna waste time
00:54:23.940 | when I'm doing the minutia of my professional life.
00:54:26.640 | But then you need somewhere else you go
00:54:27.960 | to do the deep stuff.
00:54:29.020 | And it could be fancy, it could be very simple.
00:54:32.220 | It could be sitting outside under a tree at a picnic table.
00:54:36.200 | I used to do this at Georgetown.
00:54:37.440 | There was a picnic table in a field
00:54:39.980 | on part of the trail that ran from Reservoir Road
00:54:43.060 | down to the canal, and I would go out to that tree
00:54:44.940 | with a notebook to work.
00:54:47.860 | It could be a garden shed that you converted.
00:54:49.660 | It could be a completely different nook of your house.
00:54:51.620 | I talk about in the book,
00:54:52.500 | people who took like an attic dormer window
00:54:55.540 | and just pushed a desk up there, that's for deep work.
00:54:58.000 | That's what Andrew Wiles did
00:54:59.100 | when he was solving Fermat's Last Theorem.
00:55:02.660 | He did that up in an attic in his house in Princeton.
00:55:06.020 | So have a separate space for deep work from shallow,
00:55:10.100 | and it should be distinctive,
00:55:11.500 | and it should psychologically connect
00:55:13.900 | to whatever deep work you do.
00:55:15.580 | It doesn't have to be specialized.
00:55:17.220 | It doesn't have to be expensive.
00:55:19.020 | It can be weird.
00:55:19.900 | It can be eccentric, but it needs to be different.
00:55:22.380 | So don't try to make your normal home office space
00:55:25.260 | also good for doing deep work.
00:55:27.500 | Have a separate space
00:55:29.060 | for doing your best, most cognitive stuff.
00:55:31.300 | You are gonna find, I would predict,
00:55:33.700 | you are gonna find a significant increase in
00:55:39.900 | not just the quality of what you produce
00:55:41.500 | when you do your deep work,
00:55:42.540 | but how quickly you get into that state
00:55:44.140 | and the rate at which you produce that work.
00:55:46.620 | So anyways, in slow productivity,
00:55:48.220 | that's one of the ideas I really push.
00:55:50.660 | Location matters.
00:55:51.940 | Don't reduce all work to this frenzied,
00:55:56.020 | two-monitor jumping back and forth
00:55:57.600 | between emails, busyness, freneticism.
00:56:00.260 | Don't make all work that.
00:56:02.340 | Separate.
00:56:03.180 | There's some of that,
00:56:04.420 | and then there's also me trying to produce stuff
00:56:05.940 | too good to be ignored, the real value,
00:56:07.660 | and that's a slower thing,
00:56:08.700 | and I need a different location for it.
00:56:10.620 | All right, what do we got next?
00:56:13.140 | - Next question's from Charlie.
00:56:15.260 | Excuse me.
00:56:16.140 | I time block my day into 50-minute deep work blocks
00:56:19.220 | separated by 10-minute breaks.
00:56:21.140 | I have little autonomy and am closely supervised.
00:56:24.140 | Sometimes I'm extremely busy all week,
00:56:26.140 | and sometimes I'm twiddling my thumbs
00:56:27.740 | waiting for my supervising solicitor to give me work.
00:56:32.740 | How should I utilize my 10-minute breaks
00:56:35.380 | during a busy week?
00:56:36.340 | And also, how should I handle weeks
00:56:38.380 | when I don't have much work for my deep work blocks?
00:56:41.220 | - Well, Charlie, don't worry too much
00:56:42.500 | about those 10-minute breaks.
00:56:43.540 | Have fun, right?
00:56:44.380 | Don't think about 'em, just do whatever.
00:56:46.780 | Like, do whatever's interesting.
00:56:48.340 | I mean, I typically recommend, if it's a busy day
00:56:51.460 | in the sense that the 50 minutes
00:56:53.900 | on the other side of these breaks
00:56:55.340 | are filled with deep work,
00:56:57.140 | take what I call deep breaks.
00:56:58.860 | So don't look at things that are emotionally salient.
00:57:02.080 | Don't look at things that are too relevant
00:57:03.820 | to the type of work you're doing.
00:57:04.940 | Look at things completely,
00:57:05.860 | or do things completely different than your work.
00:57:08.320 | That's gonna minimize the context-switching cost
00:57:10.540 | when you go back to your work.
00:57:12.420 | More generally, though,
00:57:13.580 | you know, I don't love the sound of this job, right?
00:57:17.020 | What makes people love their work?
00:57:19.500 | This is an idea from So Good They Can't Ignore You,
00:57:23.260 | where I noted that people think
00:57:24.540 | what they need to love their work
00:57:25.740 | is a match of their job's content to their interests,
00:57:27.440 | but there are these other, more general factors
00:57:29.060 | that matter more.
00:57:30.660 | And one of those general factors that matters more
00:57:32.580 | is autonomy.
00:57:33.420 | Autonomy is a nutrient for motivation and work.
00:57:37.060 | It's critical.
00:57:38.100 | You don't have a lot of it.
00:57:39.160 | So I don't love this job.
00:57:40.420 | So how about this plan for the weeks
00:57:42.340 | in which you don't have a lot of deep work
00:57:43.860 | for the 50-minute blocks?
00:57:45.940 | You are working like a laser beam in that time
00:57:48.800 | on your move to something different.
00:57:50.860 | So you have a side hustle or a skill that you're learning
00:57:54.460 | that is going to allow you to transform
00:57:56.180 | what your work situation is
00:57:57.380 | to be closer to your ideal lifestyle,
00:57:59.140 | and that's what you're working on
00:58:01.340 | in those unscheduled 50-minute blocks.
00:58:02.900 | I think you're gonna get a lot of fulfillment out of that
00:58:04.780 | because you're not gonna be bored,
00:58:06.580 | and more importantly,
00:58:07.420 | you're gonna find some autonomy and empowerment.
00:58:09.540 | I am working on the route out of what I don't like
00:58:13.140 | about where I am now.
00:58:13.980 | And it could be a new skill
00:58:15.660 | that within your same organization
00:58:17.900 | is gonna free you to go into a more autonomous position,
00:58:21.620 | or it could be a new skill
00:58:22.580 | that's gonna allow you to go to a different job
00:58:24.640 | that's gonna be more autonomous,
00:58:26.540 | or maybe it's a side hustle
00:58:27.920 | that is going to allow you to drop this to part-time
00:58:30.220 | or drop it altogether because it can support you.
00:58:32.620 | I think psychologically you need something like that
00:58:34.700 | because otherwise a fully non-autonomous job like this,
00:58:37.820 | especially in knowledge work, can get pretty draining.
00:58:40.420 | All right, let's see.
00:58:42.660 | We have some calls this week, don't we, Jesse?
00:58:43.900 | - We do, yeah.
00:58:44.900 | - More than one, it looks like.
00:58:46.060 | - Yep.
00:58:46.900 | - Excellent, let's get our first one.
00:58:47.740 | - Okay.
00:58:48.580 | - Hi, Cal, this is Roan,
00:58:52.700 | longtime reader and a listener since episode one.
00:58:55.780 | I'm a longtime fan of your work.
00:58:57.660 | I'm eagerly awaiting my receipt of Slow Productivity.
00:59:00.700 | I'm getting both a signed copy
00:59:02.180 | that was offered there by your local bookstore,
00:59:04.140 | and I'm getting the Kindle version as well.
00:59:06.260 | I'm especially excited that you have recorded
00:59:08.100 | the audio book version yourself.
00:59:09.860 | I've really been hoping for this,
00:59:11.580 | especially since you've started the podcast
00:59:13.220 | to hear these books in your own voice.
00:59:14.580 | I think that's fantastic.
00:59:16.100 | I'm particularly enjoying your forays
00:59:17.620 | into the philosophy of technology.
00:59:19.580 | That's an area of interest myself.
00:59:21.340 | I'm personally finally diving into Heidegger
00:59:23.300 | and my philosophical readings in general.
00:59:25.900 | In honor of your famous Heidegger and Hefeweizen tagline,
00:59:29.740 | I wonder if you've read Heidegger's views on technology,
00:59:32.220 | and if so, has that influenced
00:59:33.860 | or impacted your views in any way?
00:59:35.820 | Thank you again for all of the excellent work
00:59:37.740 | and all the excellent content,
00:59:39.100 | and I am looking forward to the Deep Life book
00:59:43.300 | to come after this one.
00:59:44.340 | Thank you very much.
00:59:45.380 | - Well, thanks for that, Roan.
00:59:48.860 | I'm vaguely familiar with Heidegger on technology,
00:59:52.020 | but I would say most of my sort of scholarly influences
00:59:55.940 | on technology philosophy are 20th and 21st century.
01:00:00.140 | This is where, if you go back to Heidegger,
01:00:04.340 | technology was being grappled with,
01:00:06.460 | but it was also being grappled with
01:00:09.580 | in the context of these much more ambitious,
01:00:14.020 | fully comprehensive,
01:00:15.660 | continental philosophical frameworks
01:00:18.540 | for understanding all of life and meaning and being.
01:00:21.540 | And it's these complex frameworks.
01:00:24.420 | By the time you get to Heidegger,
01:00:25.860 | and you see this a lot in Marx as well,
01:00:27.580 | this sort of totalizing,
01:00:29.340 | we're gonna sort of have a new epistemology
01:00:32.020 | for all of knowledge
01:00:33.140 | and understanding the human condition.
01:00:34.700 | Very complicated.
01:00:35.900 | And so its specific thoughts on technology
01:00:38.740 | are a little bit less accessible.
01:00:40.100 | Whereas as you get farther along in the 20th century,
01:00:42.500 | what you get is more people,
01:00:43.780 | because of the impetus of modernity,
01:00:46.700 | just grappling specifically with technology and its impacts.
01:00:49.660 | And so you start to see this with thinkers
01:00:51.380 | like Lewis Mumford, for example, or Lynn White Jr.
01:00:56.380 | It's starting to grapple more specifically
01:00:59.420 | with what's going on.
01:01:00.340 | And so later,
01:01:01.340 | you get thinkers like Neil Postman,
01:01:03.220 | and you have Marshall McLuhan.
01:01:04.820 | They start working on this.
01:01:06.700 | More recently, you get Jaron Lanier.
01:01:09.100 | Then you have full academic sub-disciplines
01:01:11.140 | like STS emerging,
01:01:13.660 | which has a very specific methodology
01:01:15.500 | for trying to understand sociotechnical systems.
01:01:18.700 | More recently, you get things
01:01:19.740 | like critical technology studies,
01:01:21.500 | which tries to apply postmodern critical theories
01:01:24.340 | to trying to understand technologies.
01:01:26.260 | The 20th century, especially mid-century onward,
01:01:29.820 | and the early 21st century, it's more focused.
01:01:33.380 | And I think the pressures of modernity
01:01:36.740 | give us an understanding of technology
01:01:41.780 | that resonates with our current moment.
01:01:44.060 | So that's been more influential to me, I would say.
01:01:46.180 | I do like your callback, however,
01:01:47.860 | to Heidegger and Hefeweizen.
01:01:50.020 | Most people don't know this,
01:01:51.140 | but when I was writing my books for students,
01:01:54.420 | my newsletter and blog, of course, were focused on students.
01:01:57.940 | And a big thing I was pushing for there
01:02:00.700 | was how do you build a college experience
01:02:04.900 | that's really meaningful and interesting and sustainable,
01:02:08.380 | and also opens up really cool opportunities.
01:02:10.740 | I did not like this idea
01:02:12.060 | of being super stressed out in school.
01:02:13.940 | Like, "Oh, but it'll be worth it
01:02:15.380 | "because I'm gonna get this job
01:02:17.200 | "and then it'll be worth it."
01:02:18.040 | I was trying to teach kids,
01:02:20.660 | how do you actually make your experience
01:02:22.060 | in college itself good?
01:02:22.940 | Not like something you're sacrificing yourself for
01:02:24.820 | to get to something better down the line.
01:02:27.060 | And I had this idea called the Romantic Scholar.
01:02:29.460 | And it was all about how to transform
01:02:31.140 | your college experience
01:02:32.060 | into being much more psychologically meaningful.
01:02:34.140 | And one of my famous ideas, famous, I mean,
01:02:37.660 | among the readers of "Study Hacks" back then,
01:02:39.600 | so like seven people,
01:02:41.140 | was Heidegger and Hefeweizen.
01:02:44.260 | And I was like, "When you have to read Heidegger,
01:02:47.140 | "don't just white-knuckle it
01:02:48.380 | "at the library the night before.
01:02:50.400 | "Go to a pub and get a pint of Hefeweizen."
01:02:56.620 | And like sip a drink and there's a fire and like read it,
01:02:59.460 | like put yourself into this environment of like,
01:03:02.820 | this is cool, it's an intellectual thinking
01:03:04.860 | and ideas are cool and life is cool.
01:03:06.620 | And approach your work with this sort of joyous gratitude
01:03:09.820 | and care about where you are and how you're working.
01:03:11.580 | I talked a lot about that.
01:03:13.180 | Anyways, it reminds me of our last question
01:03:15.060 | or one of the questions we answered earlier in the episode.
01:03:18.660 | Right, in the slow productivity corner,
01:03:21.100 | I told the question asker,
01:03:24.020 | "Build a cool space to do your deep work.
01:03:26.560 | "Don't try to make your shallow work home office
01:03:28.720 | "into the place where you do your deep work.
01:03:30.860 | "Like go somewhere cool, do it under a tree."
01:03:33.460 | And then, you know, I really pushed that idea back then.
01:03:35.580 | I talked about it, I called it adventure study,
01:03:38.220 | I think that was my term.
01:03:39.380 | Go to cool places to do your work.
01:03:42.400 | So you make your work into something
01:03:43.700 | that's intellectually cool, it's exciting,
01:03:45.420 | not something that you're trying to grind through.
01:03:47.700 | I'm trying to think of examples.
01:03:48.720 | People would write in,
01:03:51.260 | students would write in.
01:03:52.300 | Someone wrote in with a picture of a waterfall
01:03:54.140 | where they went to study.
01:03:56.020 | Someone else, an astronomy student,
01:03:57.740 | snuck onto the roof of the astronomy building, under the stars,
01:04:00.660 | and that's where she would like read
01:04:01.780 | and work on her problem sets.
01:04:03.700 | You know, I love that idea when I was helping students
01:04:05.740 | find more meaning in their student life.
01:04:07.460 | So I like the idea of preserving that today
01:04:09.660 | in knowledge work, especially if you're remote
01:04:11.520 | or have flexibility.
01:04:13.460 | Find cool places to do your coolest work.
01:04:16.180 | Transform your relationship to it.
01:04:18.060 | Like I still do this sometimes with,
01:04:20.380 | if I'm early in a New Yorker article,
01:04:22.980 | I'll go to BevCo at happy hour
01:04:25.940 | and do exactly Heidegger and Hefeweizen.
01:04:27.860 | Like get like something they have on tap
01:04:30.060 | because just psychologically it's like,
01:04:32.180 | this isn't work, this is interesting.
01:04:34.300 | I'm in this, there's all these people,
01:04:35.500 | I know the people at BevCo, I have, you know,
01:04:37.540 | like a Hefeweizen.
01:04:38.820 | I'm just like thinking, isn't it cool to think ideas?
01:04:41.420 | This isn't just me in my home office
01:04:42.900 | like trying to make deadline.
01:04:44.220 | And I'll often do that at the beginning
01:04:45.440 | of a New Yorker article just to put myself
01:04:47.260 | into like the mindset of, this is cool,
01:04:49.780 | this is interesting, this is thinking.
01:04:51.340 | Like remember like this activity itself has value
01:04:54.880 | and it's entertaining.
01:04:56.000 | It's not just functional.
01:04:57.300 | So anyways, cool call.
01:04:59.460 | Maybe I should read some more Heidegger.
01:05:00.700 | I have to get more Hefeweizen, that's the goal.
01:05:02.700 | It takes a lot of Hefeweizen to get through Heidegger
01:05:06.220 | by the way; those are some long books.
01:05:07.660 | All right, do we have another call?
01:05:08.940 | - Yep, here we go.
01:05:09.980 | - Hey Cal, my name's Tim.
01:05:14.620 | I actually met you over the weekend at your book signing.
01:05:16.500 | I was the Dartmouth guy that you met toward the end.
01:05:19.340 | It's nice meeting you in person.
01:05:21.300 | I have kind of a two-part question.
01:05:23.300 | I'm really drawn to the idea of like thinking about seasons
01:05:26.500 | or chapters of your life and career.
01:05:28.580 | And as somebody with young kids at home,
01:05:32.460 | I'm certainly in a specific type of season right now.
01:05:35.060 | So I wanted to understand, I guess a two-part question.
01:05:39.660 | When you're thinking about the seasons of your life,
01:05:44.540 | what's the time box you put around those?
01:05:45.900 | Are those like a quarter?
01:05:47.340 | Is it half a year?
01:05:48.500 | Is it two years, 10 years?
01:05:49.900 | Like how do you, when you think you're entering
01:05:52.140 | or exiting a specific season in your life,
01:05:55.020 | how do you, how long is that?
01:05:56.940 | And secondly, I guess is, I'm in a relationship,
01:06:03.060 | I have a wife and she's also got a busy life and career.
01:06:06.980 | How do you, or do you have any advice
01:06:08.700 | on how do you synchronize or match up the seasons
01:06:12.100 | they may be going through in their careers?
01:06:14.500 | I find it's very difficult if you have two people
01:06:17.700 | trying to push hard at work
01:06:19.100 | and are in a busy season at work,
01:06:21.940 | but also be able to give the attention you need at home.
01:06:25.180 | So it takes a conscious decision on both parts
01:06:28.580 | on which season you're gonna be in
01:06:32.140 | and I wonder if you have any advice on that.
01:06:33.660 | Thanks, Cal, big fan.
01:06:34.860 | - Well, Tim, good to hear from you again.
01:06:37.220 | It's nice to see you at the book event.
01:06:39.460 | Two good questions.
01:06:40.300 | So first question when it comes to seasons,
01:06:41.940 | there's different scales that matter, right?
01:06:44.740 | So there's the literal seasonal scale
01:06:47.660 | of the seasons of the year.
01:06:49.620 | And this is the big idea from principle two of my book,
01:06:52.580 | Slow Productivity,
01:06:53.980 | is you should have variations within the seasons.
01:06:55.860 | Like for me, for example,
01:06:56.980 | my summers are much different than my falls.
01:06:59.220 | So my summers, it's much slower,
01:07:02.460 | there's much less phoneticism and meetings
01:07:05.020 | and I'm much more focused.
01:07:07.700 | Whereas like in the fall, if I'm teaching some classes
01:07:09.860 | and I can do a lot more meetings,
01:07:11.660 | it has a different feel to it.
01:07:13.220 | So seasonal variation is good.
01:07:15.420 | We are not meant to work all out every day,
01:07:19.580 | all the days of the year.
01:07:20.980 | Like we're meant for there to be variations.
01:07:23.380 | If you don't work in a factory,
01:07:25.060 | don't simulate working in a factory
01:07:26.740 | with your knowledge work.
01:07:28.220 | There's also higher scales of seasons,
01:07:30.380 | like longer time periods.
01:07:32.940 | And this becomes more clear to me as I get older,
01:07:35.420 | as I've actually made my way through more of these seasons.
01:07:38.980 | I think of these larger,
01:07:40.340 | like the largest granularity of season I deal with
01:07:42.700 | is pretty close to a decade.
01:07:44.300 | And I think this is pretty relevant
01:07:47.700 | if you're having kids, right?
01:07:49.460 | Because so I think of my 20s as different than my 30s,
01:07:54.060 | as different than my current season, which is my 40s.
01:07:57.500 | So in my 20s, for example,
01:07:58.740 | like one of the things I was trying to do
01:07:59.940 | if I'm thinking about professional objectives
01:08:02.900 | is trying to get on my feet professionally.
01:08:06.140 | It's like, I wanna be a professor, wanna be a writer.
01:08:09.020 | It's like, I wanna like lay those foundations
01:08:11.420 | and that's what I'm working on.
01:08:13.660 | Putting in the time, putting in the skills.
01:08:15.380 | It was a lot of skill building, head down skill building.
01:08:17.980 | Like the stuff I was working on
01:08:19.220 | might not be publicly flashy, but writing the papers,
01:08:21.660 | learning how to be a professor, writing the books.
01:08:24.700 | There are student focused books, doing magazine writing,
01:08:26.860 | doing newsletter writing,
01:08:28.060 | just trying to get my writing skills up.
01:08:30.300 | The three books I wrote in my 20s,
01:08:32.420 | each of them had an element that was more difficult
01:08:35.620 | than the one before that I very intentionally added.
01:08:37.900 | So I was using the books to systematically push my skills,
01:08:42.780 | not to try to grow my career necessarily.
01:08:45.460 | I got the same advance for all three of those books.
01:08:48.660 | My goal was not how do I become
01:08:50.620 | a very successful author in my 20s?
01:08:52.180 | It was how do I become a good enough writer
01:08:54.340 | where becoming a successful author is possible?
01:08:57.220 | And so that was my 20s, right?
01:08:59.180 | And that was largely successful, right?
01:09:00.460 | Because I got hired as a professor right when I turned 30
01:09:06.420 | and my first sort of big hardcover idea book
01:09:10.420 | came out right when I turned 30.
01:09:12.260 | So good they can't ignore you.
01:09:13.620 | All right, so then my 30s is a different season.
01:09:16.580 | So what I'm trying to do in my 30s is now we're having kids.
01:09:19.500 | So I had my first of my three kids when I was 30, right?
01:09:23.060 | So my wife and I are starting a family
01:09:25.580 | and professionally I was like,
01:09:26.700 | okay, now what I need to do professionally,
01:09:28.060 | what do I care about now?
01:09:29.020 | When you have kids that age
01:09:30.060 | or you're starting to have babies,
01:09:31.100 | it's like, I wanna provide stability.
01:09:34.460 | And so it was really about like,
01:09:35.700 | okay, I wanna get tenure.
01:09:37.740 | I want my writing career to be successful enough
01:09:39.980 | that like it gives us financial breathing room.
01:09:41.980 | I wanna be a successful enough writer
01:09:43.820 | that like we're not super worried about money
01:09:46.660 | and have the stability of tenure.
01:09:48.380 | Like those were the two things I wanna do.
01:09:49.500 | I wanna become a successful writer.
01:09:52.020 | Meaning, it was unlike in my 20s,
01:09:54.820 | when these were smaller book advances.
01:09:56.340 | I mean, I don't always talk about numbers,
01:09:57.260 | but I'll tell you, the book deals I got in my 20s
01:09:59.020 | were all $40,000 advances.
01:10:00.620 | By today's standards,
01:10:03.820 | these are very small advances.
01:10:05.780 | In my 30s, I was like, I need to now become a writer
01:10:08.180 | that gets like real hefty book advances, I need tenure.
01:10:11.660 | And beyond that, it's like trying to keep the babies alive.
01:10:15.100 | Right, so it was sort of a, this is a frenetic period.
01:10:17.820 | This is not a period of grand schemes.
01:10:20.820 | It's like, get your head, keep the babies alive
01:10:23.340 | and keep, you know, everyone, this baby is fed.
01:10:27.820 | You know, okay, do they know that my wife's traveling?
01:10:30.660 | So I need to like get the bottle.
01:10:32.060 | Like when's the nanny coming?
01:10:33.060 | Like all that type of stuff, get tenure,
01:10:35.380 | become a writer with like some financial heft, right.
01:10:40.380 | And that was what my 30s were about.
01:10:41.700 | And I think that was largely, and that was successful.
01:10:43.460 | I got tenure, you know, five years later
01:10:46.060 | and my books became bestsellers.
01:10:48.100 | And now like I'm getting bigger book contracts
01:10:51.300 | and we could move to where we wanted to move.
01:10:53.620 | And, you know, okay, great, we got that all set up.
01:10:55.820 | We're financially stable, the kids survive.
01:10:57.860 | I have tenure, you know, I'm a successful writer.
01:11:00.540 | Now my 40s is a different season.
01:11:02.020 | I'm not keeping babies alive anymore.
01:11:05.260 | Now I have elementary school age kids.
01:11:06.940 | This is much more a play of, it's parenting.
01:11:10.100 | It's like being there in your kids' lives.
01:11:12.140 | They need as much of my time as possible.
01:11:14.060 | They're developing themselves as people and I have all boys
01:11:17.180 | and they really want specifically dad time.
01:11:19.020 | So now parenting is this whole other thing.
01:11:21.380 | And in my work, like, well, you know, I got tenure
01:11:23.500 | and I became a successful writer.
01:11:25.020 | So now when I think about professional goals in my 40s,
01:11:27.340 | they're much more
01:11:30.580 | ambitious in a sort of legacy way.
01:11:32.660 | Like, well, but what do I want to be as a writer?
01:11:35.660 | Like, where do I want to, what do I want to do?
01:11:37.020 | Like, what do I want to work on?
01:11:38.820 | Like, where do I want to leave my footprint, right?
01:11:40.660 | And this is a very different feel.
01:11:42.540 | What do I want to do in academia?
01:11:43.660 | Like, I was focused on getting tenure in my 30s.
01:11:45.980 | My 40s now, it's like, where's the like footprint
01:11:48.540 | I want to leave in the world of scholarship, right?
01:11:50.300 | It becomes much more forward-thinking legacy.
01:11:52.340 | It's slower with my kids.
01:11:54.340 | It's not, how do I make sure that like every kid
01:11:56.060 | has picked up and got the milk when they needed it?
01:11:58.700 | Now it's like, how am I showing up in their lives
01:12:00.940 | in a way that like they're going to develop
01:12:02.220 | as good human beings?
01:12:03.380 | So like in this current season, in the 40s,
01:12:06.660 | everything is more lofty or more legacy,
01:12:08.660 | more forward-thinking.
01:12:09.740 | It's slower and more philosophical and the depth is,
01:12:12.780 | there's more depth to it.
01:12:13.780 | So, you know, every season is different.
01:12:16.180 | So those life seasons could be at the scale of decades,
01:12:19.780 | but those are just as important to understand
01:12:21.600 | as the annual seasons and even the smaller scale seasons.
01:12:25.600 | As for your second question,
01:12:27.860 | coordinating with your wife,
01:12:32.880 | what I found is, like, what I hear from people
01:12:34.580 | and what I found in my own life,
01:12:34.580 | it is really important that you and your wife
01:12:38.100 | have a shared vision of the ideal family lifestyle
01:12:43.100 | and that you are essentially partners working together
01:12:46.940 | to help get towards or preserve this ideal vision you have
01:12:50.380 | for what your family's life is like,
01:12:52.220 | where you live, the role, how much you're working,
01:12:54.620 | how much you're not working, what your kids are like,
01:12:56.420 | what their experience is with you.
01:12:57.780 | You need a shared vision of this is what our team,
01:12:59.980 | our family, this is where we wanna be.
01:13:02.100 | This is what we think is what we're going for.
01:13:03.820 | Like my wife and I started making these plans
01:13:05.500 | as soon as we started having kids and they evolved,
01:13:08.060 | but we wanted a shared plan.
01:13:09.460 | And then it's like, okay,
01:13:10.340 | now how are we both working towards this?
01:13:12.420 | What's gonna matter?
01:13:13.420 | Right, you need the shared plan.
01:13:16.940 | What happens if you don't have this?
01:13:18.700 | Well, you get the other thing, which is very common,
01:13:20.620 | especially among highly educated couples,
01:13:22.740 | which is we are both independently
01:13:24.860 | trying to optimize our careers
01:13:26.420 | and therefore see each other mainly from the standpoint
01:13:29.380 | of an impediment to my professional goals.
01:13:31.260 | And we have this very careful tally board over here
01:13:33.900 | of like, mm-hmm, mm-hmm, you did seven units less of this,
01:13:37.220 | I did four units more of that.
01:13:38.660 | So you get sort of potentially resentment,
01:13:41.820 | but even without the resentment,
01:13:42.980 | it's a huge stress and anxiety producer.
01:13:45.340 | Trying to individually optimize two careers
01:13:48.260 | without any approach to synergy or any shared goal
01:13:51.140 | of where you're trying to get your life writ large
01:13:54.580 | is a source of tension, right?
01:13:57.260 | It's very difficult.
01:13:58.500 | There's these really cool configurations
01:14:01.580 | that might be possible for you and your family's life
01:14:04.140 | that will be missed if you're only myopically looking
01:14:07.220 | at your own career and saying,
01:14:08.620 | how do I just keep this going forward?
01:14:11.460 | How do I just maximize these opportunities?
01:14:14.180 | 'Cause ultimately what is gonna matter most
01:14:16.060 | for your satisfaction in life is gonna be the whole picture
01:14:18.460 | of what your life is like.
01:14:19.580 | And so you need to be on the same page.
01:14:21.020 | You have your shared vision,
01:14:22.340 | and then you have your shared plan at different timescales.
01:14:24.540 | So how are we going to get closer to this vision?
01:14:27.460 | So what are we working on for the next five years?
01:14:29.180 | Like, where do we wanna try to get?
01:14:30.500 | How are we getting there?
01:14:31.340 | Okay, this year, like, what are we both working on?
01:14:33.340 | What's our setup and configuration?
01:14:35.060 | What is the biggest obstacle we have
01:14:36.820 | to the shared vision of where we want our family to be?
01:14:39.060 | Oh, there's something about our work setups now
01:14:40.660 | that's incredibly stressful,
01:14:42.100 | and it means, like,
01:14:43.540 | our kid doesn't have this or that,
01:14:44.700 | which we think is important?
01:14:45.540 | Wait, maybe we need changes here.
01:14:47.420 | It opens up a lot of options
01:14:49.180 | when you're working backwards from a shared vision
01:14:51.340 | as opposed to working forwards
01:14:52.780 | from just what's best for me
01:14:54.340 | and specifically what I'm working on.
01:14:56.740 | So you gotta be on the same page.
01:14:58.980 | Whatever that vision is, be on the same page.
01:15:00.700 | And again, as soon as I see couples do this,
01:15:03.300 | it opens up so many options for them in their lives.
01:15:06.820 | And it's a hard transition
01:15:07.940 | because, like, coming out of your 20s,
01:15:09.180 | it's all about, I need to maximize what I'm doing
01:15:11.860 | to get some sort of abstract yardstick of impressiveness.
01:15:14.940 | It changes when you're older.
01:15:19.340 | What is my family trying to do?
01:15:21.980 | You know, where do we wanna be?
01:15:23.740 | What do we want, like, a typical afternoon to look like?
01:15:26.220 | What do we want our kids' experience to be like?
01:15:27.940 | What type of place do we wanna live?
01:15:29.500 | Like, what do we wanna be doing
01:15:31.540 | in, like, the evenings and afternoons?
01:15:33.020 | Like, who's around?
01:15:34.100 | When you get these visions really nailed down
01:15:37.900 | and you work backwards from it,
01:15:39.180 | all sorts of creative options show up.
01:15:41.060 | And yes, there might be options
01:15:42.180 | where someone is not optimizing
01:15:44.220 | their potential professional achievement.
01:15:48.420 | It might be like, wait a second, I'm really good at this.
01:15:53.100 | So I could, like, do this at half the hours
01:15:54.580 | and we could explore living over here.
01:15:54.580 | Like, you start to see these other options
01:15:56.740 | once you work together.
01:15:58.740 | So that's a good call.
01:16:00.460 | We have a case study coming up
01:16:01.620 | that actually is someone who went,
01:16:03.300 | thought about these same issues.
01:16:04.380 | So I think this is well-timed.
01:16:05.900 | All right, so as previewed,
01:16:09.380 | I wanna read a quick case study here
01:16:10.780 | from one of our listeners.
01:16:11.660 | It actually ends with a question.
01:16:12.980 | So it's a kind of a hybrid.
01:16:14.340 | All right, this case study is from Anna.
01:16:18.620 | A repeat writer to the show.
01:16:22.740 | Anna said, "I wrote you a while back
01:16:25.460 | to ask about whether or not I should take a job
01:16:28.180 | at a startup because I was bored
01:16:30.460 | at my cushy chief of staff job
01:16:32.340 | for a Silicon Valley tech company
01:16:34.500 | where I'm only needed to work part-time
01:16:36.020 | to fulfill my full responsibilities.
01:16:38.700 | I decided not to take the startup job as you suggested."
01:16:44.340 | I was hoping this wouldn't be where she says,
01:16:44.340 | this is where it'd be unfortunate if she said,
01:16:46.500 | and that startup was OpenAI,
01:16:49.580 | and you cost me $20 billion of stock options,
01:16:52.540 | you son of a bitch.
01:16:53.820 | No, she said, "By contrast,
01:16:56.520 | shortly after I made this decision,
01:16:59.660 | the startup went belly up."
01:17:01.600 | All right, so phew, we pushed her in the right direction.
01:17:04.380 | "Next, I got a big promotion and pay raise
01:17:06.380 | at my current company and have even more reason
01:17:08.720 | to believe that they don't mind me working part-time.
01:17:11.700 | I do work remotely, which makes it easier.
01:17:14.500 | Now I'm getting bored again and I feel myself getting antsy.
01:17:18.020 | I decided to learn to paint part-time
01:17:19.840 | and learn a fourth language,
01:17:21.020 | all the while continuing to work less than 30 hours a week
01:17:23.540 | at a job I do enjoy,
01:17:25.440 | although it's not overly stimulating."
01:17:28.540 | All right, so we've got a kind of a cool case study there.
01:17:31.260 | She resisted the urge to go to this high-stress job
01:17:33.800 | that was more impressive,
01:17:34.660 | and that turned out to be a good decision
01:17:36.300 | 'cause that company went belly up,
01:17:37.560 | and in her current company,
01:17:38.400 | she got more money and a promotion.
01:17:40.380 | She does, however, have a question.
01:17:43.060 | "How do I continue to go down this path
01:17:44.900 | without letting my ego get in the way
01:17:46.420 | of the cool life I had built?
01:17:48.620 | Everyone at my company thinks
01:17:49.980 | that their job equals their life.
01:17:52.180 | I feel like there is this constant pull
01:17:53.740 | to believe this is the case.
01:17:54.860 | Will this feeling ever stop?"
01:17:56.860 | Well, Anna, it is hard.
01:17:58.600 | I've experienced this in parts of my life
01:18:01.420 | where I have been intentionally
01:18:04.840 | having my foot halfway on the brake,
01:18:07.440 | where, hey, in this part, I could go all out,
01:18:10.080 | and the other people I know are,
01:18:11.460 | and I'm not, and it's difficult.
01:18:13.580 | I hear this a lot, in particular, from lawyers, right?
01:18:17.500 | There's this movement I really like right now,
01:18:20.120 | which remote work and the pandemic really exploded,
01:18:22.620 | of lawyers at big law firms leaving the partner track
01:18:27.580 | and leaving the office, saying,
01:18:30.560 | "I'm only gonna bill half the hours I did before,
01:18:33.760 | and you're gonna pay me commensurately less,
01:18:36.540 | and there's no expectation now that I'm trying,
01:18:38.220 | there's no ladder for me to go up anymore,
01:18:40.500 | but I'm really good at this particular type of law,
01:18:42.580 | and it's really useful to have me work on these cases,
01:18:45.500 | and so you're happy to keep doing it."
01:18:47.180 | And I live now somewhere completely different.
01:18:49.780 | It's much cheaper than the big cities,
01:18:51.260 | so honestly, billing 50% fewer hours
01:18:54.900 | and working 35 hours a week,
01:18:57.140 | I'm making more money than anyone else in this town,
01:18:59.140 | and so this works out well.
01:19:00.820 | This is a movement I really like,
01:19:02.460 | and they also are struggling a lot with,
01:19:06.260 | "Yeah, but in my firm, if you get the partner,
01:19:09.660 | that's a big gold star.
01:19:10.620 | If you get the managing partner, it's an even big gold star.
01:19:12.340 | We'd look at our bonuses and our salaries,
01:19:13.860 | and I feel like they're lapping me."
01:19:15.960 | So that's also a psychological issue.
01:19:17.820 | All right, so what helps here?
01:19:20.520 | Partially, it's just recognizing that's part of the trade-off.
01:19:23.960 | Ego, accomplishment, this person's doing better than me.
01:19:30.060 | I think I'm smarter than that person,
01:19:31.500 | but they're moving ahead of me.
01:19:32.780 | That's never gonna go away, so you just have to see that
01:19:35.340 | as one of the things you're weighing against the benefits,
01:19:38.020 | but two, you need much more clarity, probably,
01:19:41.020 | on what the benefits are.
01:19:42.820 | This comes back to lifestyle-centric career planning.
01:19:45.900 | You need, like we talked about with the last caller,
01:19:48.540 | this crystal clear understanding
01:19:50.900 | of what matters to you in your life
01:19:52.100 | and your vision for you and your family's life,
01:19:54.340 | and if your current work arrangement fits into that vision,
01:19:58.300 | which probably it does, Anna,
01:19:59.580 | because 30 hours a week with a high salary
01:20:03.180 | opens up a lot of cool opportunities for your life.
01:20:06.140 | If it's part of this vision that's based in your values
01:20:09.860 | and covers all parts of your life,
01:20:11.140 | and it's not just hobbies,
01:20:12.820 | it's not just like I'm trying to fill my time with hobbies.
01:20:14.780 | It's no, I have an aggressive vision for my life.
01:20:16.980 | We live here, I do this, I start this,
01:20:19.220 | I'm heavily involved in this.
01:20:20.660 | It's a full vision of a life well-lived.
01:20:23.820 | Then it's much easier to put up with the ego issues.
01:20:28.260 | You say, yeah, but what I'm proud of
01:20:30.020 | is this whole life I've built,
01:20:31.100 | and I'm super intentional about it.
01:20:32.220 | It's a deep life, and my work is a part
01:20:34.940 | of making this deep life possible,
01:20:36.460 | and what I'm proud of is this really cool life that I built.
01:20:39.420 | The more remarkable you make this vision,
01:20:42.540 | the easier time you're gonna have dealing with the work ego issues.
01:20:46.540 | The more remarkable the deep life you craft
01:20:48.740 | that this high-paying 30-hour job is part of,
01:20:51.140 | the better you're gonna feel about it
01:20:53.420 | even when your colleagues at work are doing 80-hour weeks
01:20:57.780 | and making more money and getting more praise,
01:21:00.140 | because you say, what I'm proud of is not just my job.
01:21:02.580 | It's this remarkable life I've crafted,
01:21:04.540 | my impact on my family and my community
01:21:07.380 | and these other things I'm involved in
01:21:08.700 | and the ability to just, whatever it is you care about.
01:21:12.300 | So I would say, Anna, make your vision of your life
01:21:14.340 | much more remarkable than I'm doing hobbies in my free time.
01:21:17.580 | You need to lean into the possibilities of your life
01:21:21.420 | and do something remarkable, and when I say remarkable,
01:21:23.220 | I mean that in a literal sense:
01:21:25.500 | someone hearing about you would remark on it.
01:21:27.700 | Ooh, that's interesting, what Anna is up to,
01:21:30.660 | that you are a source of interested remark.
01:21:33.420 | That's what you wanna get to.
01:21:35.340 | Now, it's possible when you do this exercise,
01:21:37.660 | the vision you come up with that's super deep and meaningful
01:21:41.340 | is gonna involve you actually doing a lot more work
01:21:43.660 | on something that's really important to you, and that's fine.
01:21:46.180 | But you just wanna have clarity
01:21:47.380 | about what I'm trying to do with my life.
01:21:48.860 | And so it's a good question and a good case study,
01:21:52.040 | because just simplifying and slowing down
01:21:55.740 | without a bigger vision for what that slowing down serves
01:21:58.700 | can itself be complicated or a trap.
01:22:00.820 | If you slow down and simplify and then just find yourself
01:22:05.020 | trying to find hobbies to fill the time,
01:22:08.660 | your mind's like, what are we doing?
01:22:10.620 | So you gotta lean into the remarkability
01:22:12.180 | of your vision here, Anna.
01:22:13.300 | Given all that you've already done,
01:22:14.580 | I have no doubt that you're gonna come up
01:22:15.660 | with something cool, so you'll have to write back in
01:22:18.000 | and let us know what you do.
01:22:19.400 | All right, so we have a final segment coming up
01:22:22.940 | where I choose something interesting
01:22:24.100 | that I've seen during the week to talk about.
01:22:25.580 | But first, another quick word from a sponsor.
01:22:28.260 | (air whooshing)
01:22:29.460 | Let's talk about Shopify, the global commerce platform
01:22:32.780 | that helps you sell at every stage of your business,
01:22:37.140 | whether you've just launched your first online shop,
01:22:40.540 | or you have a store, or you just hit a million orders,
01:22:43.980 | Shopify can handle all of these scales.
01:22:47.180 | They are, in my opinion, and from just the people I know,
01:22:52.900 | the service you use if you wanna sell things.
01:22:55.260 | Shopify powers 10%
01:22:57.300 | of all e-commerce in the US.
01:22:59.720 | They are the global force behind Allbirds,
01:23:01.700 | and Rothy's, and Brooklinen,
01:23:02.820 | and millions of other entrepreneurs
01:23:04.140 | of every size across 175 countries.
01:23:06.460 | I can think of a half dozen sort of writer,
01:23:09.340 | entrepreneur friends of mine who sell merch
01:23:11.840 | or other things relevant to their writing empires
01:23:14.620 | that all use Shopify.
01:23:16.420 | And they love it because what it allows you to do
01:23:18.780 | is have this very professional experience
01:23:22.860 | for your potential customers, very high conversion rate.
01:23:26.100 | It makes checking out very easy.
01:23:27.340 | It integrates so easily
01:23:28.500 | with other sorts of backend systems.
01:23:30.940 | I mean, Shopify is who you use if you wanna sell.
01:23:35.140 | And when Jesse and I start our online shop
01:23:37.780 | for Deep Questions, which I think is inevitable,
01:23:41.740 | it's gotta be inevitable,
01:23:43.620 | we are gonna use Shopify for sure.
01:23:45.140 | - Yep. - Yeah.
01:23:45.980 | I think we're gonna use that for sure.
01:23:46.980 | - People are talking about it at the book event.
01:23:48.860 | - You know, multiple people mentioned at the book event,
01:23:51.580 | the VBLCCP shirts.
01:23:53.420 | - Yeah. - People wanted those.
01:23:55.140 | Values-based, lifestyle-centric career planning.
01:23:56.780 | People are like, "Where's my VBLCCP shirt?"
01:23:58.940 | And so when we sell those,
01:23:59.860 | we're gonna sell those 100% using Shopify.
01:24:04.320 | All right, so sign up for a $1 per month trial period
01:24:07.100 | at shopify.com/deep.
01:24:09.900 | When you type in that address, make it all lowercase,
01:24:12.020 | shopify.com/deep, all lowercase.
01:24:15.300 | You need that slash deep to get the $1 per month trial.
01:24:19.340 | So go to shopify.com/deep now to grow your business.
01:24:23.940 | No matter what stage you're in, shopify.com/deep.
01:24:28.500 | (cash register dinging)
01:24:29.580 | Also wanna talk about our longtime friends at Rhone,
01:24:32.460 | R-H-O-N-E.
01:24:34.340 | Here's the thing, men's closets were due
01:24:36.700 | for a radical reinvention and Rhone has stepped up
01:24:39.620 | to that challenge with their commuter collection,
01:24:43.340 | the most comfortable, breathable, and flexible set
01:24:45.220 | of products known to man.
01:24:48.380 | I really enjoy this collection for a couple of reasons.
01:24:51.900 | A, it's very breathable and flexible and lightweight.
01:24:56.620 | So when I've got like a hard day of doing book events,
01:24:58.540 | like I've been doing recently,
01:25:00.140 | if I can throw on like a commuter collection shirt
01:25:02.360 | or commuter collection pants,
01:25:04.520 | men often underestimate how much your pants contribute to comfort.
01:25:08.180 | Like a thick pair of jeans can get pretty uncomfortable
01:25:11.300 | after a long day of running around LA.
01:25:13.660 | So having lightweight, breathable, good-looking clothes
01:25:15.880 | makes a difference.
01:25:17.400 | They have this wrinkle technology
01:25:20.100 | where you can travel with these things
01:25:21.420 | and the wrinkles work themselves out once you wear them.
01:25:24.460 | So you can look really sharp,
01:25:25.620 | even if you've been living out of a suitcase,
01:25:28.140 | let's say on a book tour.
01:25:30.100 | It all looks really good.
01:25:31.340 | It has GoldFusion anti-odor technology.
01:25:33.300 | Like it's just good looking, incredibly useful clothes,
01:25:37.580 | especially when you have a very active day.
01:25:40.780 | The Rhone commuter collection is something
01:25:42.580 | I highly recommend.
01:25:44.180 | So the commuter collection could get you
01:25:45.460 | through any workday and straight into whatever comes next.
01:25:48.540 | Head to rhone.com/cal and use the promo code Cal
01:25:53.120 | to save 20% off your entire order.
01:25:55.680 | That's 20% off your entire order.
01:25:57.300 | When you head to r-h-o-n-e.com/cal and use the code Cal,
01:26:02.300 | it's time to find your corner office comfort.
01:26:05.920 | All right, let's do our final segment.
01:26:08.940 | So what I'd like to do in this final segment
01:26:11.260 | is take something interesting that someone sent me
01:26:13.540 | or I encountered, and then we can talk about it.
01:26:17.860 | So today I actually want to go back to
01:26:20.020 | something I mentioned in my opening segment,
01:26:23.100 | the opening segment about AI.
01:26:25.880 | And I mentioned that someone had sent me an email
01:26:28.380 | about General Ulysses S. Grant.
01:26:32.220 | This was Nick, hat tip to Nick.
01:26:34.180 | He sent me a scan from a book.
01:26:35.980 | I put the title page up on the screen
01:26:37.580 | for those who are watching the video.
01:26:40.020 | The book, it's an older book,
01:26:41.180 | "Campaigning with Grant,"
01:26:43.260 | written by General Horace Porter, right?
01:26:46.280 | So this is a sort of a contemporaneous account
01:26:48.360 | of what it was like being with General Grant
01:26:52.820 | during the Civil War.
01:26:54.660 | There's a particular page from this I want to read,
01:26:57.660 | page 250.
01:26:59.400 | All right, so this is describing
01:27:01.060 | the general's actions in camp.
01:27:02.880 | He would sit for hours in front of his tent
01:27:06.960 | or just inside of it looking out,
01:27:08.740 | smoking a cigar very slowly,
01:27:11.060 | seldom with a paper or a map in his hands,
01:27:14.000 | and looking like the laziest man in camp.
01:27:16.840 | But at such periods,
01:27:17.860 | his mind was working more actively
01:27:19.660 | than that of anyone in the army.
01:27:22.420 | He talked less and thought more than anyone in the service.
01:27:27.260 | He studiously avoided performing any duty
01:27:29.460 | which someone else could do as well or better than he.
01:27:32.600 | And in this respect,
01:27:33.580 | demonstrated his rare powers of administrative
01:27:36.220 | and executive methods.
01:27:38.020 | He was one of the few men holding high position
01:27:40.500 | who did not waste valuable hours
01:27:42.380 | by giving his personal attention to petty details.
01:27:46.260 | He never consumed his time
01:27:47.540 | in reading over court martial proceedings
01:27:49.500 | or figuring up the items of supplies on hand
01:27:53.060 | or writing unnecessary letters or communications.
01:27:55.620 | He held subordinates to a strict accountability
01:27:57.940 | in the performance of such duties
01:27:59.300 | and kept his own time for his thought.
01:28:02.180 | It was this quiet but intense thinking
01:28:04.840 | and the well-matured ideas which resulted from it
01:28:07.240 | that led to the prompt and vigorous action
01:28:09.740 | which was constantly witnessed during this year,
01:28:11.880 | so pregnant with events.
01:28:13.740 | Now, we actually talked about this
01:28:16.900 | in my interview with Ryan Holiday
01:28:19.260 | on his Daily Stoic podcast,
01:28:21.380 | which was released over the past two weeks in two parts.
01:28:23.580 | We talked about General Grant.
01:28:25.580 | And the point Ryan made, which is a good one,
01:28:28.420 | is that it's useful to contrast Grant
01:28:30.300 | with General McClellan who preceded Grant.
01:28:33.300 | McClellan was the opposite of quiet and deep
01:28:36.700 | and focusing on what mattered
01:28:38.100 | and thinking hard about doing that well.
01:28:39.780 | McClellan, by contrast, was all activity.
01:28:42.860 | We gotta maneuver the troops, I gotta write some letters,
01:28:44.680 | I gotta do this, let's go over here,
01:28:45.900 | we gotta make sure that this is working here.
01:28:47.540 | He was all constant activity, a consummate bureaucratic player,
01:28:51.820 | but he never actually pulled the trigger
01:28:54.300 | and made the attacks that mattered.
01:28:56.500 | And finally, Lincoln said, "I'm sorry, McClellan, enough.
01:28:58.980 | "Let's give this Grant guy a try."
01:29:00.580 | And Grant got it done, won the war, right?
01:29:04.060 | And so I think in here there's this really
01:29:06.480 | useful point about busyness,
01:29:10.480 | and of course in today's digital age,
01:29:11.780 | busyness has never been more amplified or pronounced.
01:29:15.060 | It is not ultimately the busyness
01:29:16.420 | that wins the proverbial war.
01:29:17.920 | It is not the reading over the court martial proceedings
01:29:21.180 | and writing letters and running around
01:29:23.420 | and talking to people and giving your thoughts
01:29:25.140 | and doing useless maneuvers.
01:29:26.580 | That doesn't win the proverbial war.
01:29:28.180 | It's sitting down and thinking hard
01:29:29.700 | and getting to the core of it.
01:29:32.060 | This is what matters.
01:29:33.980 | Now let's go make this move.
01:29:36.060 | And you make the moves that win.
01:29:37.280 | You win the battles, you win the war.
01:29:39.200 | And that is an act of slowness.
01:29:41.740 | That is an act of slowing down, focusing on what matters,
01:29:44.560 | giving it careful attention, minimizing the non-important,
01:29:48.160 | and then pulling the trigger
01:29:49.000 | and executing on what matters and repeating.
01:29:51.160 | So in General Grant, we see a great demonstration
01:29:55.060 | of the power of slow productivity.
01:29:57.020 | In the moment, it might make you look like,
01:29:58.880 | and I'm quoting here, "the laziest man in camp."
01:30:03.420 | But when you zoom out, you're the hero who won the war.
01:30:07.780 | So I think that's a really cool example
01:30:09.360 | of a point that's all throughout my book,
01:30:10.840 | Slow Productivity, slowing down, focusing on what matters,
01:30:14.340 | doing fewer things, having a natural pace,
01:30:16.320 | but obsessing over the impact and quality of what you do.
01:30:19.520 | That is the formula for making a difference.
01:30:23.160 | You don't win wars through activity,
01:30:24.560 | you win wars through smart strategy,
01:30:26.100 | and that requires quiet, that requires slowness.
01:30:28.120 | So Nick, thank you for sending me that excerpt.
01:30:31.840 | I think it's a great case study
01:30:33.340 | that some of the best ideas are some of the oldest ideas.
01:30:35.660 | There's nothing new under the sun.
01:30:37.140 | General Grant knew about slow productivity,
01:30:39.180 | and now we do as well.
01:30:40.280 | All right, Jesse, I think that's all the time we have.
01:30:44.160 | Thank you, everyone, for listening
01:30:45.340 | or sending in your questions, et cetera.
01:30:47.180 | I guess I should mention calnewport.com,
01:30:49.860 | no, thedeeplife.com, rather.
01:30:51.340 | Thedeeplife.com/listen is where the links are
01:30:54.820 | for submitting questions and calls, so please go do that.
01:30:57.780 | Send your topic ideas to jesse@calnewport.com.
01:31:00.740 | And buy my book, "Slow Productivity."
01:31:02.840 | Find out more at calnewport.com/slow.
01:31:05.240 | We'll be back next week,
01:31:06.240 | and until then, as always, stay deep.
01:31:08.500 | Hey, so if you liked today's discussion
01:31:11.060 | about AI being able to answer emails,
01:31:13.880 | I think you'll also like episode 244,
01:31:17.120 | where I give you more detailed thoughts on ChatGPT,
01:31:20.640 | how it works and what it can and can't do.
01:31:22.740 | Check that out.
01:31:23.580 | That is the deep question I want to address today.
01:31:26.620 | How does ChatGPT work,
01:31:29.700 | and how worried should we be about it?