An Important Message On AI & Productivity: How To Get Ahead While Others Panic | Cal Newport
Chapters
0:00 Can A.I. Empty My Inbox?
40:47 Should I continue to study programming if AI will eventually replace software jobs?
45:25 Is it bad to use ChatGPT to assist with your writing?
51:30 How do I reclaim my workspace for Deep Work?
56:21 How do I decide what to do on my scheduled mini-breaks at work?
58:54 Heidegger’s view on technology
65:18 Seasonality with a partner and kids
76:15 A Silicon Valley Chief of Staff balancing work and ego
86:17 General Grant’s Slow Productivity
So Andrew Marantz recently had a great article 00:00:05.640 |
It was titled, "Okay, Doomer," in the magazine, 00:00:14.380 |
about AI safetyists, or as they sometimes call themselves, 00:00:18.000 |
decelerationists, which doesn't roll off the tongue. 00:00:29.260 |
They even have a shorthand for measuring their concern, 00:00:39.720 |
a p(doom); saying my p(doom) is 90% means I'm 90% sure that AI is gonna destroy humanity. 00:00:46.280 |
Why did this get me thinking about our discussions here 00:01:00.920 |
an important issue for most people when it comes to AI, 00:01:04.120 |
at least for the tens of millions of knowledge workers 00:01:06.600 |
who spend their entire day jumping in and out 00:01:12.840 |
For this large group of people, a big part of our audience, 00:01:20.120 |
When will it be able to empty my email inbox on my behalf? 00:01:25.000 |
When will AI make the need to check an inbox anachronistic, 00:01:30.000 |
like trying to put new paper into the fax machine 00:01:32.460 |
or waiting for the telegraph operator to get there? 00:01:36.700 |
where I'm not context shifting constantly back and forth 00:01:48.580 |
Now, I recently wrote my own article for "The New Yorker" 00:01:53.380 |
of language model-based AI and what the future may hold. 00:01:57.180 |
I had this problem of AI and email firmly in mind 00:02:04.980 |
So there's three things I'm gonna cover in a row here. 00:02:07.580 |
Number one, let's take a quick look at this promised world 00:02:10.840 |
in which AI could perhaps tame the hyperactive hive mind. 00:02:13.740 |
I think this is potentially more transformative 00:02:21.700 |
and its ability to tame things like email right now. 00:02:25.180 |
I actually use ChatGPT to help answer some of my emails, 00:02:33.140 |
And then part three, what are the technical challenges 00:02:36.340 |
currently holding us back from a full email-managing AI? 00:02:41.260 |
This is where we'll get into my latest "New Yorker" article. 00:02:58.180 |
Today in this deep dive, we're putting those two together. 00:03:01.820 |
We have this topic, when can AI clean my inbox? 00:03:08.200 |
Part one, when we talk about AI's impact on the workplace, 00:03:15.400 |
there tend to be three general examples 00:03:19.020 |
of how AI is gonna help the office that come up. 00:03:22.240 |
Number one is the full automation of jobs, right? 00:03:26.360 |
So we hear about, for example, vertical AI tools 00:03:29.740 |
that are gonna take over a customer service role. 00:03:35.820 |
The second type of thing we hear about AI in the workplace 00:04:00.620 |
Create a slide that looks like this and use these graphics. 00:04:16.960 |
because we're mainly interacting with these tools 00:04:26.140 |
So there's this sort of back and forth dialogue 00:04:28.940 |
people are having with chatbots in particular 00:04:36.180 |
none of those three things are really getting at 00:04:42.860 |
the issue that is driving the current burnout crisis, 00:05:00.580 |
that's almost ubiquitous within knowledge work 00:05:02.760 |
where we have unscheduled back and forth messages 00:05:09.340 |
The problem with this is that we have to constantly 00:05:14.080 |
If I have seven things I'm trying to figure out 00:05:16.220 |
and each of those things has seven or eight messages 00:05:18.260 |
that I have to bounce back and forth with someone else 00:05:21.680 |
that's a huge number of messages that I have to see 00:05:26.260 |
which means I have to constantly check my inboxes 00:05:40.660 |
who need things from me, so we take them very seriously, 00:05:50.400 |
from one thing to another thing to another thing 00:05:52.280 |
within my inbox, back to my work, back to the inbox, 00:05:58.120 |
It is hard for our brain to change its focus of attention. 00:06:03.720 |
So this forcing ourselves to constantly jump around 00:06:08.080 |
each of which is dealing with different issues 00:06:19.860 |
of what we produce and the speed at which we produce it. 00:06:22.220 |
This hyperactive hive mind workflow is a huge problem. 00:06:29.860 |
The first chapter of the book goes deep on it, 00:06:34.020 |
This is where I want to see AI make a difference. 00:06:47.160 |
in the Aaron Sorkin television show, "The West Wing," 00:06:50.160 |
the chief of staff for Martin Sheen's President Bartlet. 00:06:55.160 |
see the incoming messages, and process them for you, 00:06:59.120 |
many of which they might be able to handle directly. 00:07:06.960 |
And for the things that it can't directly manage for you, 00:07:10.360 |
it can just wait until you're next ready to check in 00:07:14.600 |
And your AI chief of staff could, in this daydream, 00:07:24.960 |
And you're like, "Yeah, but put it on a Tuesday 00:07:28.120 |
And it's like, "Great, I'll handle this for you." 00:07:33.520 |
Do you want to hear a summary of any of these updates?" 00:07:35.520 |
And you would say, "Yeah, tell me the update on this project. 00:07:47.320 |
And I'd be like, "Yeah, find me a slot on Friday 00:08:00.360 |
and try to switch your attention from one to another, 00:08:08.020 |
that could mean you no longer have to even see 00:08:18.760 |
The quality and quantity of what's being produced 00:08:27.000 |
I think we would also see subjective satisfaction measures 00:08:40.720 |
And then I go back to just working on things. 00:08:43.800 |
To me, that's the dream of AI and knowledge work, 00:08:46.880 |
much more so than, well, when I'm just in the inbox myself, 00:08:50.620 |
the AI agent's going to help me write a draft. 00:08:58.200 |
I don't care about the speed at which I do my tasks. 00:09:02.640 |
I want to eliminate the need to have to constantly change 00:09:06.280 |
what I'm focusing on from one to another project 00:09:14.600 |
All right, so here's the second question, part two. 00:09:25.120 |
Well, I was messing around with ChatGPT recently. 00:09:30.560 |
from my actual email inbox and asked it some questions. 00:09:35.720 |
understanding my emails and writing to people on my behalf. 00:09:47.220 |
It was an interesting, it was a longer message. 00:09:50.660 |
the pastor was talking about my recent coverage 00:09:54.120 |
of Abraham Joshua Heschel's book, "The Sabbath," 00:09:58.420 |
and talks about some points from it, some extra points. 00:10:02.020 |
And there's, like, an offer to send me some book. 00:10:14.340 |
The Lost Art of Accomplishment Without Burnout." 00:10:18.900 |
If you like the type of things I talk about on this channel, 00:10:24.260 |
It distills all of my ideas into a clear philosophy, 00:10:43.500 |
It was like this person is a pastor with this church. 00:10:57.180 |
Like his one paragraph got to all the main points. 00:11:02.620 |
And I said, "Can you write for me a polite reply?" 00:11:07.500 |
declined the copy of the book that was being offered. 00:11:27.260 |
"to send me a copy of your book on blah, blah, blah. 00:11:46.940 |
Spoiler alert, I'm gonna actually talk about this 00:11:52.460 |
And ChatGPT did, it gave me three bullet points. 00:11:56.940 |
He shares an anecdote from this book about that. 00:12:02.300 |
So what I'm seeing as I look at and test ChatGPT 00:12:36.100 |
I'm telling ChatGPT, "Summarize the message." 00:12:43.680 |
Really at best, it's marginally speeding up the time 00:12:48.460 |
writing some things faster, preventing some reading. 00:12:59.940 |
and make sure messages are being sent back and forth. 00:13:05.960 |
But these large language model tools right now 00:13:16.640 |
And this is where I want to bring up the article 00:13:27.180 |
I don't know if you can see it in the little corner here, 00:13:31.840 |
So my article is entitled, "Can an AI make plans? 00:13:35.500 |
Today's systems struggle to imagine the future, 00:13:44.520 |
The latest generation of large language model tools 00:13:51.900 |
especially the sort of GPT-4 generation of language models. 00:13:56.840 |
But there's a lot of recent research literature 00:14:02.740 |
And this has been replicated in paper after paper. 00:14:07.300 |
So if you ask a language model to do something 00:14:11.440 |
that requires it to actually look ahead and say, 00:14:18.080 |
So there was an example I gave in the article 00:14:21.300 |
from Sébastien Bubeck from Microsoft Research, 00:14:27.840 |
He said, "Look, this is really, GPT-4 is really impressive." 00:14:39.120 |
if it can reason on new elements that arrive at me, 00:14:41.400 |
then I think you have to call GPT-4 intelligent." 00:14:48.920 |
And he gave an example of something that GPT-4 00:14:55.480 |
Seven times four plus eight times eight equals 92, 00:15:03.200 |
modify one number on the left-hand side of this equation 00:15:11.200 |
If you need the sum to be 14 higher to get from 92 to 106, 00:15:25.000 |
"The arithmetic is shaky," Bubek said about this. 00:15:28.540 |
There's other examples where GPT-4 struggled. 00:15:33.120 |
There's a classic puzzle game called Towers of Hanoi, 00:15:36.880 |
where you have disks of different sizes and three pegs, 00:15:40.460 |
and you need to move them from one peg to another. 00:15:47.900 |
This comes up a lot in computer science courses 00:15:56.460 |
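For reference, the textbook recursive solution those courses teach fits in a few lines; the peg names here are just for the example:

```python
def hanoi(n, source, target, spare, moves):
    # To move n disks: park the top n-1 on the spare peg, move the
    # largest disk to the target, then stack the n-1 back on top of it.
    # Solving it well requires exactly this kind of looking ahead.
    if n == 0:
        return
    hanoi(n - 1, source, spare, target, moves)
    moves.append((source, target))
    hanoi(n - 1, spare, target, source, moves)

moves = []
hanoi(3, "A", "C", "B", moves)
print(len(moves), moves)  # 7 moves for 3 disks; 2**n - 1 in general
```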
They gave it a configuration in Towers of Hanoi 00:15:58.800 |
that could be solved pretty easily, five moves, 00:16:02.680 |
It struggled with basic block stacking problems. 00:16:13.640 |
It struggled when it was asked to write a poem 00:16:17.000 |
that was grammatically correct and made sense, 00:16:19.640 |
where the last line was the exact inverse of the first line. 00:16:22.360 |
It wrote a poem, and it mainly made grammatical sense. 00:16:26.620 |
The last line was a reverse of the first line, 00:16:41.040 |
all of these examples are marked by their need 00:16:44.720 |
to simulate the future in order to solve them. 00:16:51.000 |
is we sort of simulate different things we could change. 00:16:55.840 |
Oh, changing the sevens would move it up by sevens. 00:17:01.360 |
When you play Towers of Hanoi, you have to look ahead. 00:17:04.240 |
If I make this move next, this is a legal move. 00:17:11.100 |
So we have to look ahead when humans solve Towers of Hanoi. 00:17:16.560 |
When you're writing the first line of the poem, 00:17:24.560 |
So I got to make this first line of the poem, 00:17:27.160 |
I got to make this first line of the poem reversible. 00:17:36.200 |
let me look back at what the first line was and reverse it. 00:17:52.820 |
We do this naturally, we do this unconsciously, 00:18:06.480 |
with respect to the crosswalk by the time I'm out there? 00:18:13.740 |
I am simulating your internal psychological state. 00:18:19.160 |
that's not only going to accomplish my goals, 00:18:22.280 |
This is why people who are maybe neurodivergent 00:18:32.280 |
but they irritate or insult people frequently 00:18:35.040 |
because part of what is being changed in their brain wiring 00:18:43.920 |
of what they're going to say on another mind, 00:18:46.040 |
then they're much more likely to say something 00:18:47.600 |
that's going to be taken as sort of offensive 00:18:58.320 |
It's also at the core of any rendition we've seen, 00:19:01.480 |
sci-fi renditions of a fully intelligent machine, 00:19:13.200 |
which is HAL 9000 from Stanley Kubrick's 2001. 00:19:19.820 |
Where Dave, the astronaut, is trying to disable HAL 00:19:30.000 |
He's trying to get in to disassemble HAL, to turn it off. 00:19:37.640 |
How does HAL 9000 know not to open the pod bay doors? 00:19:41.680 |
What would happen if I opened the pod bay doors? 00:20:00.480 |
Now, is this just because we need a bigger model? 00:20:02.600 |
Is this, is GPT-5 going to be able to do this? 00:20:04.940 |
Is this, we just have to figure out our training? 00:20:09.060 |
I'll put my technical cap on here for a second, 00:20:11.960 |
but I get into this in my New Yorker article. 00:20:14.900 |
The architecture of the large language models 00:20:17.600 |
that drive the splashiest AI agents of the moment, 00:20:31.640 |
of doing even the most basic future simulations. 00:20:46.480 |
So GPT-4, we don't really know how many layers it has. 00:20:49.040 |
We think it's like 96, but we're not quite sure 00:21:05.640 |
which is actually a token, a piece of a word. 00:21:07.720 |
So you give it input, give it a sentence of input. 00:21:13.560 |
and then out the other end comes a single word 00:21:27.840 |
You have basically these transformer sub-layers first. 00:21:32.800 |
It's a key piece of these new language models. 00:21:38.600 |
what part of the input's being paid attention to. 00:21:40.880 |
And then after those, you have basically neural networks, 00:21:54.080 |
and neural network connections that it simulates activating. 00:22:13.080 |
They can hard-code, these layers can hard-code 00:22:19.200 |
And what happens is, and I get into this in the article, 00:22:32.360 |
We're being asked to do this about this email. 00:22:49.760 |
that sort of make grammatical sense to be next, 00:22:54.960 |
to help bias towards which word we should output. 00:23:01.960 |
and the number of ways these properties can be combined 00:23:36.040 |
it can't see or approximate with its hardwired rules, 00:23:54.960 |
We do our best with what we've already written down. 00:23:59.480 |
That's just the architecture of these models. 00:24:01.840 |
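As a loose illustration of that fixed, one-way flow (toy dimensions, random weights, and attention collapsed into a simple average, so this is nothing like the real transformer math), the skeleton looks like this: a fixed stack of layers runs in one direction and emits exactly one token, and "generation" is just rerunning that same pass.

```python
import numpy as np

rng = np.random.default_rng(0)
VOCAB, DIM, LAYERS = 50, 16, 4   # toy sizes; GPT-4 is rumored to use ~96 layers

embed = rng.normal(size=(VOCAB, DIM))
layer_weights = [rng.normal(size=(DIM, DIM)) for _ in range(LAYERS)]
unembed = rng.normal(size=(DIM, VOCAB))

def next_token(tokens):
    # One feed-forward pass: information flows strictly downward through
    # a fixed number of layers. There is no loop, no recurrence, and no
    # way to pause mid-pass to simulate a future before answering.
    x = embed[tokens].mean(axis=0)        # crude stand-in for attention mixing
    for w in layer_weights:               # fixed-depth stack of transformations
        x = np.tanh(x @ w)
    return int(np.argmax(x @ unembed))    # emit exactly one token

sequence = [3, 14, 15]
for _ in range(5):                        # "generation" = repeat the same pass
    sequence.append(next_token(sequence))
print(sequence)
```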
We see this, for example, when you play chess with GPT-4. 00:24:11.160 |
So the properties might be like, we have a chess board, 00:24:14.160 |
These are all properties that are being identified. 00:24:29.000 |
So you could have really complicated chess games 00:24:37.040 |
you get something like an Elo 1000 rated playing experience, 00:24:43.320 |
But when you look closer at these games, what happens? 00:24:50.920 |
Because what happens in the middle game of chess 00:24:54.520 |
And when you get to the middle game of chess, 00:24:56.880 |
you can't just go off of hardwired heuristics. 00:25:02.000 |
this is the right thing to do, or here's a good thing to do. 00:25:04.920 |
When you get to the middle game, how do chess players play? 00:25:10.280 |
They say, I've never seen this type of board before. 00:25:12.320 |
So what I need to do now is think, if I do this, 00:25:29.440 |
it has no way of interrogating its particular circumstance. 00:25:35.200 |
All right, so this is why we can't clean our inbox, 00:25:37.960 |
because to clean our inbox, decisions have to be made 00:25:50.920 |
If I said this, how's that gonna make this person feel? 00:26:08.560 |
I'm gonna have to find someone else to do it. 00:26:26.000 |
Language models, because they're massive and feed forward 00:26:29.880 |
and unmalleable and they have no interaction or recurrence. 00:26:44.600 |
Deep Blue works by simulating hundreds of millions of board positions. 00:27:04.520 |
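Deep Blue's actual search was enormously engineered, but the core move of explicitly simulating "if I do this, they might do this" is just game-tree search. Here's a minimal sketch on an invented toy game (take 1 or 2 stones, last stone wins) standing in for chess:

```python
def minimax(state, maximizing, moves_fn, apply_fn):
    # Exhaustive look-ahead: simulate every line of play to the end of
    # the game, assuming the opponent also picks their best replies.
    moves = moves_fn(state)
    if not moves:                        # player to move has no moves: they lose
        return (-1 if maximizing else 1), None
    best_value = float("-inf") if maximizing else float("inf")
    best_move = None
    for move in moves:
        value, _ = minimax(apply_fn(state, move), not maximizing,
                           moves_fn, apply_fn)
        if (maximizing and value > best_value) or \
           (not maximizing and value < best_value):
            best_value, best_move = value, move
    return best_value, best_move

# Toy game: take 1 or 2 stones from a pile; taking the last stone wins.
moves_fn = lambda n: [m for m in (1, 2) if m <= n]
apply_fn = lambda n, m: n - m
print(minimax(5, True, moves_fn, apply_fn))  # (1, 2): taking 2 is a forced win
```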
All right, so that's optimistic for our goal here 00:27:14.120 |
it's not just sterile positions of pieces on a board. 00:27:37.880 |
the first poker AI to beat top-ranked players. 00:27:43.720 |
So they played in a tournament with seven top-ranked players 00:27:46.880 |
the people you would know if you followed poker 00:28:04.240 |
in poker, the cards themselves are important, 00:28:09.600 |
is other people's beliefs about what the cards are. 00:28:19.880 |
what's the probability that they think I have an ace high? 00:28:22.400 |
That's where all the poker strategy comes into play. 00:28:32.600 |
Interesting aside about Pluribus, by the way, 00:28:36.280 |
Brown and his team first tried to solve poker 00:28:41.080 |
sort of a feed-forward ChatGPT-style approach 00:28:46.920 |
that it would just tell you, here's my poker hand. 00:28:52.160 |
here's the best move to do in that situation. 00:28:57.600 |
of compute time at the Pittsburgh Supercomputing Center 00:29:04.360 |
everything you could see, we simulated the future? 00:29:14.720 |
It was a fraction of its size and way out-competed it. 00:29:17.080 |
So simulating the future is a way more powerful strategy 00:29:20.040 |
than trying to build a really massive network 00:29:26.640 |
let's play an even more human-challenging game, Diplomacy. 00:29:30.760 |
And in the board game Diplomacy, which is like Risk, 00:29:34.680 |
the whole key to that game is the negotiation you have before every turn, 00:29:43.200 |
And you make alliances and you backstab people 00:29:54.560 |
This bot, Cicero, which I talk about in the article, beat real players. 00:30:00.040 |
People didn't even know they were playing against an AI bot. 00:30:12.880 |
So the language model could take the messages 00:30:16.400 |
And they could figure out like, what is this person saying? 00:30:23.940 |
really technical language that they could pass on 00:30:28.320 |
And the game strategy simulator is like, okay, 00:30:29.580 |
here's what the different players are telling me. 00:30:31.320 |
Now I'm gonna simulate different possibilities. 00:30:38.720 |
What if I lied to them and this was secretly, 00:30:54.140 |
put this into good diplomacy language to be convincing. 00:31:07.840 |
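Meta hasn't published Cicero as a tidy three-step API, so take this as a purely hypothetical sketch of the division of labor just described; every name here (`extract_intent`, `legal_moves`, `expected_value`, `render_message`) is invented for illustration.

```python
# Hypothetical sketch of the two-module split described above.
# None of these interfaces come from Meta's actual Cicero system.

def negotiate(messages, llm, simulator):
    # Module 1 (language model): parse free-form negotiation text into
    # structured intents, e.g. "France will support me into Burgundy."
    intents = [llm.extract_intent(m) for m in messages]

    # Module 2 (strategy simulator): roll candidate moves forward against
    # what the other players claim they'll do, including branches where
    # we quietly break a promise.
    best_move = max(simulator.legal_moves(),
                    key=lambda move: simulator.expected_value(move, intents))

    # Back to the language model: dress the chosen move up in
    # convincing diplomatic language.
    return best_move, llm.render_message(best_move, intents)
```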
This combination meant that we now had something that could play 00:31:09.780 |
against humans in a very psychologically relevant, 00:31:15.480 |
where you had to understand people's intentions 00:31:17.660 |
and get them on your side and it could do really well. 00:31:32.780 |
It's gonna be a combination of language models 00:31:34.520 |
with future simulators with maybe some other models 00:31:54.140 |
are the big companies taking this possibility seriously? 00:31:56.620 |
I mean, is a company like OpenAI taking seriously this idea 00:32:12.540 |
Well, remember Noam Brown who created Pluribus and Cicero? 00:32:24.600 |
a reference to the A* bounded search algorithm, 00:32:33.640 |
So I think p(inbox zero) might be higher than we think. 00:32:42.960 |
I think it could actually completely reinvent 00:32:47.640 |
I have been trying for years to solve this problem 00:32:51.920 |
We need to get rid of the hyperactive hive mind. 00:32:55.400 |
that don't have so many ad hoc unscheduled messages 00:33:00.320 |
And I've had a really hard time making progress 00:33:02.520 |
for large organizations because of managerial capitalism 00:33:08.180 |
So maybe technology is gonna lap me at some point 00:33:10.600 |
and eventually there'll be a tool we can turn on 00:33:14.580 |
But once we do that, those benefits are gonna be so huge. 00:33:19.600 |
We will look at this era of checking an inbox 00:33:25.140 |
similar to how the cavemen looked at the age before fire. 00:33:28.900 |
I can't believe we actually used to live that way. 00:33:37.700 |
- As you were explaining it all, I had some questions 00:33:50.980 |
- Not to geek out, but the difference between 00:33:56.160 |
AlphaGo, which won in Go and did this in the 2010s, versus Deep Blue 00:33:58.500 |
that won in chess, which did this in the 1990s. 00:34:03.380 |
The big advancement there is that the hard thing about Go 00:34:06.260 |
is figuring out is this board good or bad, right? 00:34:10.940 |
what you have to do is be able to evaluate the futures. 00:34:14.060 |
Like, okay, if I did this, they might do this, 00:34:18.260 |
That's easier to figure out in chess than it is in Go. 00:34:36.420 |
And then they played Go endlessly against each other. 00:34:46.140 |
So they built this network that could look at a board 00:34:49.960 |
based off of just hundreds of millions of games 00:34:56.540 |
So now when they're looking at different possible moves, 00:34:58.980 |
they could talk to this model they trained up 00:35:00.920 |
that's self-trained, is this good, is this bad, 00:35:06.940 |
because this model learned good board configurations 00:35:09.900 |
that no human had ever thought of as being good before. 00:35:30.980 |
So in AlphaGo, they're like, oh, you can actually build, 00:35:33.940 |
you can self-teach yourself what's good and what's bad. 00:35:42.100 |
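AlphaGo's real pipeline (policy and value networks plus tree search) is far richer, but the self-teaching idea can be cartooned on a toy game; everything below is my invention for illustration, not DeepMind's method.

```python
import random
from collections import defaultdict

# Toy stand-in for Go: take 1 or 2 stones from a pile of 10; whoever
# takes the last stone wins. Estimate how good each position is purely
# from self-play outcomes, with no human game data at all.
wins, visits = defaultdict(float), defaultdict(int)

for _ in range(50_000):
    n, positions = 10, []
    while n > 0:
        positions.append(n)              # record the position each mover faced
        n -= random.choice([m for m in (1, 2) if m <= n])
    outcome = 1.0                        # whoever moved last took the last stone
    for pos in reversed(positions):      # walk back, alternating the winner
        wins[pos] += outcome
        visits[pos] += 1
        outcome = 1.0 - outcome

for pos in range(1, 11):
    print(pos, round(wins[pos] / max(visits[pos], 1), 2))
# Multiples of 3 score lowest among their neighbors: exactly the
# game-theoretically lost positions, discovered without any labels.
```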
All right, so anyways, we got some questions coming up, 00:35:47.060 |
But first let's hear a word from our sponsors. 00:35:49.460 |
Hey, I'm excited, Jesse, that we have a new sponsor today. 00:35:53.820 |
A sponsor, one of these sponsors that does something 00:36:00.100 |
This is Listening, so the app is called Listening, 00:36:04.460 |
that lets you listen to academic papers, books, 00:36:06.820 |
PDFs, webpages, articles, and email newsletters. 00:36:11.060 |
Where Listening came to my attention, 00:36:14.860 |
is that people use it to transform academic papers 00:36:22.100 |
Now, it can do this for other things as well, 00:36:24.500 |
but this is where it really came to prominence. 00:36:41.100 |
like you had hired a professional voice actor 00:36:47.100 |
Because it opens up all that time when you're driving, 00:36:54.660 |
time when you might put on a book or take a podcast. 00:36:58.940 |
that is very productively useful or interesting 00:37:03.740 |
Hey, I want it to read to me this new paper about whatever. 00:37:10.020 |
which I'm gonna listen to in Listening for sure, 00:37:17.580 |
does Twitter posting about your academic papers, 00:37:25.020 |
And it looks like the implication of the paper is no. 00:37:28.340 |
So promoting yourself on Twitter as an academic 00:37:30.820 |
doesn't actually help you become a better academic. 00:37:42.540 |
Just imagine the amount of time you can now use 00:37:53.900 |
One of the other cool features I like is an add-note button. 00:38:11.940 |
who have to read a lot of interesting complicated stuff 00:38:15.900 |
where we can't just sit down and actually read. 00:38:37.660 |
to get a whole month free of the listening app. 00:38:40.800 |
I also wanna talk about our good friends at Element, L-M-N-T. 00:38:46.280 |
Look, healthy hydration isn't just about drinking water. 00:38:48.640 |
It's about water and the electrolytes that it contains. 00:38:59.780 |
you need to be replacing the water and the electrolytes. 00:39:13.620 |
the goal here is not to drink as much water as possible, 00:39:15.500 |
but to drink a reasonable amount of water plus electrolytes, 00:39:18.140 |
especially if you're sweating or exercising a lot. 00:39:28.520 |
that gives you the sodium, potassium, magnesium you need, 00:39:31.920 |
but it's zero sugar and no weird artificial stuff. 00:39:50.520 |
They have these spicy flavors like mango chili. 00:39:53.420 |
You can mix chocolate salt into your morning coffee 00:39:55.740 |
if you really want to rehydrate after a hard night. 00:40:02.540 |
but also if I've had a long day of podcasting 00:40:05.100 |
and giving talks and I'm just expelling all this moisture 00:40:11.500 |
Element is exactly what I go to when I get back. 00:40:13.580 |
I add it to my Nalgene bottle and I get both back. 00:40:16.260 |
Anyways, I love Element and I love that I can, 00:40:30.700 |
to get a free sample pack with any purchase you do. 00:40:47.700 |
Specifically software jobs and web development 00:40:50.540 |
are on the top of the list of jobs that disappear. 00:40:53.420 |
After reading Deep Work, these were the two fields 00:41:04.140 |
I do not think programming as a job is gonna go away. 00:41:10.220 |
It does open up a lot of career capital opportunities 00:41:13.980 |
So if you look at the history of computer programming, 00:41:24.860 |
that makes programmers much more efficient, right? 00:41:29.500 |
I mean, when we started, programming used to be plug boards. 00:41:33.860 |
To program an early electronic digital computer, 00:41:44.500 |
Now I can store a program on punch cards and run that. 00:41:47.660 |
I don't have to redo it from scratch every time. 00:41:56.340 |
give it to someone and come back the next day 00:42:00.420 |
multiple order of magnitude efficiency changes 00:42:06.340 |
I could edit particular words or lines of my code. 00:42:10.100 |
I could run the code straight there and get the results 00:42:22.180 |
every one of these is an exponential increase 00:42:39.980 |
You don't have to memorize all the different commands 00:42:48.700 |
So now for like almost anything you want to do, 00:42:57.300 |
You have to understand that every one of these advances 00:43:05.100 |
Did we see as we made programmers massively more efficient 00:43:08.580 |
that the number of programmers we needed to hire 00:43:14.300 |
there'd be like seven programmers left right now. 00:43:17.140 |
Instead, there's sort of more people doing programming 00:43:20.620 |
It followed a sort of common economic pattern. 00:43:32.580 |
and therefore the potential value of the systems we built. 00:43:35.500 |
So we still needed the same number of programmers, 00:43:49.260 |
more applications of software today than we had in 1955. 00:43:53.340 |
This is my best prediction of what we're gonna see with AI. 00:43:57.220 |
I think the push to try to fully replace a programmer 00:44:04.860 |
Now, what we're gonna do is make programmers even better. 00:44:09.620 |
I mean, this is what GitHub Copilot is doing. 00:44:09.620 |
Just you can ask it and it will show you the example code 00:44:36.980 |
But we're gonna be able to produce more complicated systems 00:44:41.500 |
So we're just gonna see more computer code in our world, 00:44:50.620 |
because the ability for programmers to produce this 00:44:56.340 |
is if you like programming, keep learning it, 00:44:59.100 |
but keep up with the latest AI tools as you do. 00:45:02.020 |
Whatever is cutting edge with AI and programming, 00:45:05.060 |
Push yourself to learn more and more complicated code 00:45:11.340 |
because the complexity curve of what programmers have to do 00:45:25.500 |
Do you ever use ChatGPT to assist with your writing? 00:45:28.580 |
I'm not a full-time writer, but do write a lot. 00:45:30.620 |
Recently, I've been using ChatGPT for assistance. 00:45:34.140 |
- Yeah, ChatGPT in writing is an interesting place. 00:45:44.140 |
and some of my roles of looking at pedagogy and AI 00:45:53.340 |
and the role of AI in writing each of these threads, 00:45:57.700 |
So let's think about professional writers, for example. 00:46:07.740 |
and there's quite a few who are messing around 00:46:12.100 |
are using it largely for brainstorming and idea formulation. 00:46:12.100 |
Hey, can you go find me five examples of this? 00:46:27.340 |
they can kind of give you more modern examples. 00:46:35.220 |
because professional writers have very specific voices, 00:46:39.020 |
the art of exactly how we craft sentences matters to us. 00:47:00.740 |
I think it's becoming increasingly more common 00:47:03.820 |
to use tools like ChatGPT to produce drafts of text 00:47:11.940 |
I think it brings clear communication to more people. 00:47:20.140 |
The ability now to not be tripped up or held back 00:47:22.900 |
because I can't describe my scientific results very well. 00:47:27.860 |
Oh, if ChatGPT can help me describe my results 00:47:38.460 |
I mean, I think a lot of people have communication 00:47:40.140 |
in their job who struggle a little bit with writing. 00:48:00.740 |
And I think that's when it comes to pedagogy. 00:48:05.260 |
And this is really an open question right now, 00:48:14.180 |
There's different schools of thought about this. 00:48:18.460 |
how to write in this sort of cybernetic model 00:48:33.060 |
but it's important for your development as a thinker. 00:48:34.940 |
It's important for your development as a person 00:48:39.380 |
There's a lot of people who say writing is thinking. 00:48:45.460 |
and we can't yet outsource our thinking to ChatGPT, 00:48:45.460 |
We can compare this to other existing technologies. 00:48:57.820 |
In particular, I like to think about comparing this 00:49:05.900 |
With the calculator, here's a technology that came along 00:49:08.980 |
that can do arithmetic very fast and very accurately. 00:49:15.460 |
what we largely decided to do was preserve the importance 00:49:24.740 |
you're learning how to do arithmetic with pencil and paper, 00:49:28.420 |
you need to get comfortable manipulating numbers 00:49:56.220 |
The other way of thinking about this is centaur chess, 00:49:59.180 |
which is where players play chess along with an AI, 00:50:01.700 |
a player plus an AI, and they work with each other. 00:50:04.620 |
Centaur chess players are the highest ranked players 00:50:10.060 |
A player plus AI can beat the very best human players. 00:50:14.980 |
human plus machine together is just much better 00:50:18.780 |
So that's another way that we might end up thinking 00:50:26.980 |
And the quality of writing in the world is gonna go up. 00:50:32.100 |
I think educational institutions are still grappling with, 00:50:35.260 |
is language model aided writing the calculator, 00:50:41.060 |
So I think that's probably the most interesting thread, 00:50:43.180 |
but if you're just doing mundane professional communication, 00:50:57.740 |
Oh, this is gonna be our slow productivity corner. 00:51:11.220 |
we try to designate at least one question per week 00:51:21.340 |
you really need to get the book, "Slow Productivity." 00:51:23.180 |
All right, what's our slow productivity corner question 00:51:26.100 |
- All right, this question's from Hunched Over. 00:51:49.340 |
I habitually switch back into a shallow work mindset. 00:51:57.620 |
- So I talk about this in principle two of my book, 00:52:07.020 |
And as part of my definition of that principle, 00:52:27.320 |
when you're trying to extract value from your mind. 00:52:46.020 |
when we study the traditional knowledge workers 00:52:49.080 |
people famous for producing value in their minds, 00:52:52.660 |
they often have very, very nice home offices, 00:53:03.400 |
And they don't work on their deep stuff in that office. 00:53:06.820 |
David McCullough, I found the picture of this. 00:53:10.540 |
I found the picture of his home office from a profile, 00:53:13.320 |
his house in West Tisbury, Martha's Vineyard. 00:53:24.060 |
So he would use the home office to do all the business 00:53:26.360 |
of being like a best-selling author and historian. 00:53:31.620 |
'cause that was what was conducive for his brilliance. 00:53:34.940 |
her best poetry was composed walking in the woods. 00:53:57.940 |
your most smart, creative, cognitive work really matters. 00:54:14.620 |
Here's my home office that I care about function. 00:54:20.700 |
and my files are here, and I don't wanna waste time 00:54:23.940 |
when I'm doing the minutia of my professional life. 00:54:29.020 |
And it could be fancy, it could be very simple. 00:54:32.220 |
It could be sitting outside under a tree at a picnic table. 00:54:39.980 |
on part of the trail that ran from Reservoir Road 00:54:43.060 |
down to the canal, and I would go out to that tree 00:54:47.860 |
It could be a garden shed that you converted. 00:54:49.660 |
It could be a completely different nook of your house. 00:54:55.540 |
and just pushed a desk up there, that's for deep work. 00:54:59.100 |
when he was solving Fermat's Last Theorem. 00:55:02.660 |
He did that up in an attic in his house in Princeton. 00:55:06.020 |
So have a separate space for deep work from shallow, 00:55:19.900 |
It can be eccentric, but it needs to be different. 00:55:22.380 |
So don't try to make your normal home office space 00:56:04.420 |
and then there's also me trying to produce stuff 00:56:16.140 |
I time block my day into 50-minute deep work blocks 00:56:21.140 |
I have little autonomy and am closely supervised. 00:56:27.740 |
waiting for my supervising solicitor to give me work. 00:56:38.380 |
when I don't have much work for my deep work blocks? 00:56:48.340 |
I mean, I typically recommend if it's a busy day 00:56:58.860 |
So don't look at things that are emotionally salient. 00:57:05.860 |
or do things completely different than your work. 00:57:08.320 |
That's gonna minimize the context-switching cost 00:57:13.580 |
you know, I don't love the sound of this job, right? 00:57:19.500 |
This is an idea from So Good They Can't Ignore You 00:57:30.660 |
And one of those general factors that matters more 00:57:33.420 |
Autonomy is a nutrient for motivation and work. 00:57:45.940 |
You are working like a laser beam in that time 00:57:50.860 |
So you have a side hustle or a skill that you're learning 00:58:02.900 |
I think you're gonna get a lot of fulfillment out of that 00:58:07.420 |
you're gonna find some autonomy and empowerment. 00:58:09.540 |
I am working on the route out of what I don't like 00:58:17.900 |
is gonna free you to go into a more autonomous position, 00:58:22.580 |
that's gonna allow you to go to a different job 00:58:27.920 |
that is going to allow you to drop this to part-time 00:58:30.220 |
or drop it altogether because it can support you. 00:58:32.620 |
I think psychologically you need something like that 00:58:34.700 |
because otherwise a fully non-autonomous job like this, 00:58:37.820 |
especially in knowledge work, can get pretty draining. 00:58:42.660 |
We have some calls this week, don't we, Jesse? 00:58:52.700 |
longtime reader and a listener since episode one. 00:58:57.660 |
I'm eagerly awaiting my receipt of Slow Productivity. 00:59:02.180 |
that was offered there by your local bookstore, 00:59:06.260 |
I'm especially excited that you have recorded 00:59:25.900 |
In honor of your famous Heidegger and Hefeweizen tagline, 00:59:29.740 |
I wonder if you've read Heidegger's views on technology, 00:59:35.820 |
Thank you again for all of the excellent work 00:59:39.100 |
and I am looking forward to the Deep Life book 00:59:48.860 |
I'm vaguely familiar with Heidegger on technology, 00:59:52.020 |
but I would say most of my sort of scholarly influences 00:59:55.940 |
on technology philosophy are 20th and 21st century. 01:00:18.540 |
for understanding all of life and meaning and being. 01:00:40.100 |
Whereas you get farther along in the 20th century, 01:00:46.700 |
just grappling specifically with technology and its impacts. 01:00:51.380 |
like Lewis Mumford, for example, or Lynn White Jr. 01:01:15.500 |
for trying to understand the social technosystems. 01:01:21.500 |
which tries to apply postmodern critical theories 01:01:44.060 |
So that's been more influential to me, I would say. 01:01:51.140 |
but when I was writing my books for students, 01:01:54.420 |
my newsletter and blog, of course, were focused on students. 01:02:04.900 |
that's really meaningful and interesting and sustainable, 01:02:22.940 |
Not like something you're sacrificing yourself for 01:02:27.060 |
And I had this idea called the Romantic Scholar. 01:02:32.060 |
into being much more psychologically meaningful. 01:02:34.140 |
And one of my famous, to like, my famous, I mean, 01:02:37.660 |
among the readers of "Study Hacks" back then, 01:02:41.140 |
one of my famous ideas was Heidegger and Hefeweizen. 01:02:44.260 |
And I was like, "Would you have to read Heidegger? 01:02:56.620 |
And like sip a drink and there's a fire and like read it, 01:02:59.460 |
like put yourself into this environment of like, 01:03:06.620 |
And approach your work with this sort of joyous gratitude 01:03:09.820 |
and care about where you are and how you're working. 01:03:15.060 |
or one of the questions we answered earlier in the episode. 01:03:18.660 |
Right, I told the, in the slow productivity corner, 01:03:26.560 |
"Don't try to make your shallow work home office 01:03:30.860 |
"Like go somewhere cool, do it under a tree." 01:03:33.460 |
And then, you know, I really pushed that idea back then. 01:03:45.420 |
not something that you're trying to grind through. 01:03:48.720 |
I think there was a thing where people would write in, 01:03:52.300 |
Someone wrote in with a picture of a waterfall 01:03:57.740 |
snuck onto the roof of the astronomy building, the stars, 01:04:03.700 |
You know, I love that idea when I was helping students 01:04:09.660 |
in knowledge work, especially if you're remote 01:04:35.500 |
I know the people at BevCo, I have, you know, 01:04:38.820 |
I'm just like thinking, isn't it cool to think ideas? 01:04:51.340 |
Like remember like this activity itself has value 01:05:00.700 |
I have to get more Hefeweizen, that's the goal. 01:05:02.700 |
It takes a lot of Hefeweizen to get through Heidegger 01:05:14.620 |
I actually met you over the weekend at your book signing. 01:05:16.500 |
I was the Dartmouth guy that you met toward the end. 01:05:23.300 |
I'm really drawn to the idea of like thinking about seasons 01:05:32.460 |
I'm certainly in a specific type of season right now. 01:05:35.060 |
So I wanted to understand, I guess a two-part question. 01:05:39.660 |
When you're thinking about the seasons of your life, 01:05:49.900 |
Like how do you, when you think you're entering 01:05:56.940 |
And secondly, I guess is, I'm in a relationship, 01:06:03.060 |
I have a wife and she's also got a busy life and career. 01:06:08.700 |
on how do you synchronize or match up the seasons 01:06:14.500 |
I find it's very difficult if you have two people 01:06:21.940 |
but also be able to give the attention you need at home. 01:06:25.180 |
So it takes a conscious decision on both parts 01:06:49.620 |
And this is the big idea from principle two of my book, 01:06:53.980 |
is you should have variations within the seasons. 01:07:07.700 |
Whereas like in the fall, if I'm teaching some classes 01:07:32.940 |
And this becomes more clear to me as I get older, 01:07:35.420 |
as I've actually made my way through more of these seasons. 01:07:40.340 |
like the largest granularity of season I deal with 01:07:49.460 |
Because so I think of my 20s as different than my 30s, 01:07:54.060 |
as different than my current season, which is my 40s. 01:07:59.940 |
if I'm thinking about professional objectives 01:08:06.140 |
It's like, I wanna be a professor, wanna be a writer. 01:08:09.020 |
It's like, I wanna like lay those foundations 01:08:15.380 |
It was a lot of skill building, head down skill building. 01:08:19.220 |
might not be publicly flashy, but writing the papers, 01:08:21.660 |
learning how to be a professor, writing the books. 01:08:24.700 |
Those were student-focused books; doing magazine writing, 01:08:32.420 |
each of them had an element that was more difficult 01:08:35.620 |
than the one before that I very intentionally added. 01:08:37.900 |
So I was using the books to systematically push my skills, 01:08:45.460 |
I got the same advance for all three of those books. 01:08:54.340 |
where becoming a successful author is possible? 01:09:00.460 |
Because I got hired as a professor right when I turned 30 01:09:13.620 |
All right, so then my 30s is a different season. 01:09:16.580 |
So what I'm trying to do in my 30s is now we're having kids. 01:09:19.500 |
So I had my first of my three kids when I was 30, right? 01:09:37.740 |
I want my writing career to be successful enough 01:09:39.980 |
that like it gives us financial breathing room. 01:09:43.820 |
that like we're not super worried about money 01:09:57.260 |
but I'll tell you like the books I got in my 20s 01:10:05.780 |
In my 30s, I was like, I need to now become a writer 01:10:08.180 |
that gets like real hefty book advances, I need tenure. 01:10:11.660 |
And beyond that, it's like trying to keep the babies alive. 01:10:15.100 |
Right, so it was sort of a, this is a frenetic period. 01:10:20.820 |
It's like, get your head, keep the babies alive 01:10:23.340 |
and keep, you know, everyone, this baby is fed. 01:10:27.820 |
You know, okay, do they know that my wife's traveling? 01:10:35.380 |
become a writer with like some financial heft, right. 01:10:41.700 |
And I think that was largely, and that was successful. 01:10:48.100 |
And now like I'm getting bigger book contracts 01:10:51.300 |
and we could move to where we wanted to move. 01:10:53.620 |
And, you know, okay, great, we got that all set up. 01:10:57.860 |
I have tenure, you know, I'm a successful writer. 01:11:14.060 |
They're developing themselves as people and I have all boys 01:11:21.380 |
And in my work, like, well, you know, I got tenure 01:11:25.020 |
So now when I think about professional goals in my 40s, 01:11:32.660 |
Like, well, but what do I want to be as a writer? 01:11:35.660 |
Like, where do I want to, what do I want to do? 01:11:38.820 |
Like, where do I want to leave my footprint, right? 01:11:43.660 |
Like, I was focused on getting tenure in my 30s. 01:11:45.980 |
My 40s now, it's like, where's the like footprint 01:11:48.540 |
I want to leave in the world of scholarship, right? 01:11:50.300 |
It becomes much more forward-thinking legacy. 01:11:54.340 |
It's not, how do I make sure that like every kid 01:11:56.060 |
has picked up and got the milk when they needed it? 01:11:58.700 |
Now it's like, how am I showing up in their lives 01:12:09.740 |
It's slower and more philosophical and the depth is, 01:12:16.180 |
So those life seasons could be at the scale of decades, 01:12:19.780 |
but those are just as important to understand 01:12:21.600 |
as the annual seasons and even the smaller scale seasons. 01:12:29.580 |
what I found is like, what I hear from people, 01:12:34.580 |
it is really important that you and your wife 01:12:38.100 |
have a shared vision of the ideal family lifestyle 01:12:43.100 |
and that you are essentially partners working together 01:12:46.940 |
to help get towards or preserve this ideal vision you have 01:12:52.220 |
where you live, the role, how much you're working, 01:12:54.620 |
how much you're not working, what your kids are like, 01:12:57.780 |
You need a shared vision of this is what our team, 01:13:02.100 |
This is what we think is what we're going for. 01:13:03.820 |
Like my wife and I started making these plans 01:13:05.500 |
as soon as we started having kids and they evolved, 01:13:18.700 |
Well, you get the other thing, which is very common, 01:13:26.420 |
and therefore see each other mainly from the standpoint 01:13:31.260 |
And we have this very careful tally board over here 01:13:33.900 |
of like, mm-hmm, mm-hmm, you did seven units less of this. 01:13:48.260 |
without any approach to synergy or any shared goal 01:13:51.140 |
of where you're trying to get your life writ large 01:14:01.580 |
that might be possible for you and your family's life 01:14:04.140 |
that will be missed if you're only myopically looking 01:14:16.060 |
for your satisfaction in life is gonna be the whole picture 01:14:22.340 |
and then you have your shared plan at different timescales. 01:14:24.540 |
So how are we going to get closer to this vision? 01:14:27.460 |
So what are we working on for the next five years? 01:14:31.340 |
Okay, this year, like, what are we both working on? 01:14:36.820 |
to the shared vision of where we want our family to be? 01:14:39.060 |
Oh, there's something about our work setups now 01:14:49.180 |
when you're working backwards from a shared vision 01:14:58.980 |
Whatever that vision is, be on the same page. 01:15:03.300 |
it opens up so many options for them in their lives. 01:15:09.180 |
it's all about, I need to maximize what I'm doing 01:15:11.860 |
to get some sort of abstract yardstick of impressiveness. 01:15:23.740 |
What do we want, like, a typical afternoon to look like? 01:15:26.220 |
What do we want our kids' experience to be like? 01:15:34.100 |
When you get these visions really nailed down 01:15:48.420 |
It might be like, wait a second, I'm really good at this. 01:16:25.460 |
to ask about whether or not I should take a job 01:16:38.700 |
I decided not to take the startup job as you suggested." 01:16:44.340 |
this is where it'd be unfortunate if she said, 01:16:49.580 |
and you cost me $20 billion of stock options, 01:17:01.600 |
All right, so phew, we pushed her in the right direction. 01:17:06.380 |
at my current company and have even more reason 01:17:08.720 |
to believe that they don't mind me working part-time. 01:17:14.500 |
Now I'm getting bored again and I feel myself getting antsy. 01:17:21.020 |
all the while continuing to work less than 30 hours a week 01:17:28.540 |
All right, so we've got a kind of a cool case study there. 01:17:31.260 |
She resisted the urge to go to this high-stress job 01:18:07.440 |
where, hey, in this part, I could go all out, 01:18:13.580 |
I hear this a lot, in particular, from lawyers, right? 01:18:17.500 |
There's this movement I really like right now 01:18:20.120 |
that remote work and the pandemic really exploded 01:18:22.620 |
of lawyers at big law firms leaving the partner track 01:18:30.560 |
"I'm only gonna bill half the hours I did before, 01:18:36.540 |
and there's no expectation now that I'm trying, 01:18:40.500 |
but I'm really good at this particular type of law, 01:18:42.580 |
and it's really useful to have me work on these cases, 01:18:47.180 |
And I live now somewhere completely different. 01:18:57.140 |
I'm making more money than anyone else in this town, 01:19:06.260 |
"Yeah, but in my firm, if you get the partner, 01:19:10.620 |
If you get the managing partner, it's an even big gold star. 01:19:20.520 |
Partially just recognizing that's just part of the trade-off. 01:19:23.960 |
Ego, accomplishment, this person's doing better than me. 01:19:32.780 |
That's never gonna go away, so you just have to see that 01:19:35.340 |
as one of the things you're weighing against the benefits, 01:19:38.020 |
but two, you need much more clarity, probably, 01:19:42.820 |
This comes back to lifestyle-centric career planning. 01:19:45.900 |
You need, like we talked about with the last caller, 01:19:52.100 |
and your vision for you and your family's life, 01:19:54.340 |
and if your current work arrangement fits into that vision, 01:20:03.180 |
opens up a lot of cool opportunities for your life. 01:20:06.140 |
If it's part of this vision that's based in your values 01:20:12.820 |
it's not just like I'm trying to fill my time with hobbies. 01:20:14.780 |
It's no, I have an aggressive vision for my life. 01:20:23.820 |
Then it's much easier to put up with the ego issues. 01:20:36.460 |
and what I'm proud of is this really cool life that I built. 01:20:42.540 |
the easier you're gonna be dealing with the work ego issues. 01:20:48.740 |
that this high-paying 30-hour job is part of, 01:20:53.420 |
even when your colleagues at work are doing 80-hour weeks 01:20:57.780 |
and making more money and getting more praise, 01:21:00.140 |
because you say, what I'm proud of is not just my job. 01:21:08.700 |
and the ability to just, whatever it is you care about. 01:21:12.300 |
So I would say, Anna, make your vision of your life 01:21:14.340 |
much more remarkable than I'm doing hobbies in my free time. 01:21:17.580 |
You need to lean into the possibilities of your life 01:21:21.420 |
and do something that, and when I say remarkable, 01:21:25.500 |
that someone hearing about you would remark about it. 01:21:35.340 |
Now, it's possible when you do this exercise, 01:21:37.660 |
the vision you come up with that's super deep and meaningful 01:21:41.340 |
is gonna involve you actually doing a lot more work 01:21:43.660 |
on something that's really important to you, and that's fine. 01:21:48.860 |
And so it's a good question and a good case study, 01:21:55.740 |
without a bigger vision for what that slowing down serves 01:22:00.820 |
If you slow down and simplify and then just find yourself, 01:22:05.020 |
just trying to find hobbies to fill the time, 01:22:15.660 |
with something cool, so you'll have to write back in 01:22:19.400 |
All right, so we have a final segment coming up 01:22:24.100 |
that I've seen during the week to talk about. 01:22:25.580 |
But first, another quick word from a sponsor. 01:22:29.460 |
Let's talk about Shopify, the global commerce platform 01:22:32.780 |
that helps you sell at every stage of your business, 01:22:37.140 |
whether you've just launched your first online shop, 01:22:40.540 |
or you have a store, or you just hit a million orders, 01:22:47.180 |
They are, in my opinion, and from just the people I know, 01:22:52.900 |
the service you use if you wanna sell things. 01:23:11.840 |
or other things relevant to their writing empires 01:23:16.420 |
And they love it because what it allows you to do 01:23:22.860 |
for your potential customers, very high conversion rate. 01:23:30.940 |
I mean, Shopify is who you use if you wanna sell. 01:23:37.780 |
for deep questions, which I think is inevitable, 01:23:46.980 |
- People are talking about it at the book event. 01:23:48.860 |
- You know, multiple people mentioned at the book event, 01:23:55.140 |
Values-based, lifestyle-centric career planning. 01:24:04.320 |
All right, so sign up for a $1 per month trial period 01:24:09.900 |
When you type in that address, make it all lowercase, 01:24:15.300 |
You need that slash deep to get the $1 per month trial. 01:24:19.340 |
So go to shopify.com/deep now to grow your business. 01:24:23.940 |
No matter what stage you're in, shopify.com/deep. 01:24:29.580 |
Also wanna talk about our longtime friends at Rhone, 01:24:36.700 |
for a radical reinvention and Rhone has stepped up 01:24:39.620 |
to that challenge with their commuter collection, 01:24:43.340 |
the most comfortable, breathable, and flexible set 01:24:48.380 |
I really enjoy this collection for a couple of reasons. 01:24:51.900 |
A, it's very breathable and flexible and lightweight. 01:24:56.620 |
So when I got like a hard day of doing book events 01:25:00.140 |
if I can throw on like a commuter collection shirt 01:25:04.520 |
men often underestimate how much your pants contribute to comfort. 01:25:08.180 |
Like a thick pair of jeans can get pretty uncomfortable 01:25:13.660 |
So I have lightweight, breathable, good-looking clothes, 01:25:21.420 |
and the wrinkles work themselves out once you wear them. 01:25:25.620 |
even if you've been living out of a suitcase, 01:25:33.300 |
Like it's just good looking, incredibly useful clothes, 01:25:45.460 |
through any workday and straight into whatever comes next. 01:25:48.540 |
Head to rhone.com/cal and use the promo code Cal 01:25:57.300 |
When you head to r-h-o-n-e.com/cal and use the code Cal, 01:26:02.300 |
it's time to find your corner office comfort. 01:26:11.260 |
is take something interesting that someone sent me 01:26:13.540 |
or I encountered, and then we can talk about it. 01:26:20.020 |
I mentioned this email in my opening segment, 01:26:25.880 |
And I mentioned that someone had sent me an email 01:26:46.280 |
So this is a sort of a contemporaneous account 01:26:54.660 |
There's a particular page from this I want to read, 01:27:22.420 |
He talked less and thought more than anyone in the service. 01:27:29.460 |
which someone else could do as well or better than he. 01:27:33.580 |
demonstrated his rare powers of administrative 01:27:38.020 |
He was one of the few men holding high position 01:27:42.380 |
who did not waste valuable hours by giving his personal attention to petty details. 01:27:53.060 |
or writing unnecessary letters or communications. 01:27:55.620 |
He held subordinates to a strict accountability 01:28:04.840 |
and the well-matured ideas which resulted from it 01:28:09.740 |
which was constantly witnessed during this year, 01:28:21.380 |
which was released over the past two weeks in two parts. 01:28:25.580 |
And the point Ryan made, which is a good one, 01:28:42.860 |
We gotta maneuver the troops, I gotta write some letters, 01:28:45.900 |
we gotta make sure that this is working here. 01:28:47.540 |
He was in constant activity, a consummate bureaucratic player, 01:28:56.500 |
And finally, Lincoln said, "I'm sorry, McClellan, enough. 01:29:11.780 |
busyness has never been more amplified or pronounced. 01:29:17.920 |
It is not the reading over the court martial proceedings 01:29:23.420 |
and talking to people and giving your thoughts 01:29:41.740 |
That is an act of slowing down, focusing on what matters, 01:29:44.560 |
giving it careful attention, minimizing the non-important, 01:29:51.160 |
So in General Grant, we see a great demonstration 01:29:58.880 |
and I'm quoting here, "the laziest man in the army." 01:30:03.420 |
But when you zoom out, you're the hero who won the war. 01:30:10.840 |
Slow Productivity, slowing down, focusing on what matters, 01:30:16.320 |
but obsessing over the impact and quality of what you do. 01:30:26.100 |
and that requires quiet, that requires slowness. 01:30:28.120 |
So Nick, thank you for sending me that excerpt. 01:30:33.340 |
that some of the best ideas are some of the oldest ideas. 01:30:40.280 |
All right, Jesse, I think that's all the time we have. 01:30:51.340 |
Thedeeplife.com/listen is where the links are 01:30:54.820 |
for submitting questions and calls, so please go do that. 01:30:57.780 |
Send your topic ideas to jesse@calnewport.com. 01:31:17.120 |
where I give you more detailed thoughts on ChatGPT, 01:31:23.580 |
That is the deep question I want to address today.