
An Important Message On AI & Productivity: How To Get Ahead While Others Panic | Cal Newport


Chapters

0:00 Can A.I. Empty My Inbox?
40:47 Should I continue to study programming if AI will eventually replace software jobs?
45:25 Is it bad to use ChatGPT to assist with your writing?
51:30 How do I reclaim my workspace for Deep Work?
56:21 How do I decide what to do on my scheduled mini-breaks at work?
58:54 Heidegger's view on technology
1:05:18 Seasonality with a partner and kids
1:16:15 A Silicon Valley Chief of Staff balancing work and ego
1:26:17 General Grant's Slow Productivity

Transcript

So Andrew Marantz recently had a great article in The New Yorker. It was titled "O.K., Doomer" in the magazine and "Among the A.I. Doomsayers" online. In this article, Marantz spends a lot of time with AI safetyists, or as they sometimes call themselves, decelerationists, which doesn't roll off the tongue. This is a group of people who live all in the same area in the Bay Area and worry a lot about the possibility of AI destroying humanity.

They even have a shorthand for measuring their concern, P-Doom, probability of doom. So they ask each other, what's your P-Doom? And if you say 0.9, for example, that means I'm 90% sure that AI is gonna destroy humanity. All right, so how do we connect this? Why did this get me thinking about our discussions here about finding depth in a high-tech world?

Well, I couldn't help thinking, as I read about these AI safetyists and their concerns about P-Doom, that they weren't really getting at what might be the much more proximate and important issue for most people when it comes to AI, at least for the tens of millions of knowledge workers who spend their entire day jumping in and out of email inboxes that fill faster than they can keep up with them.

For this large group of people, a big part of our audience, perhaps the even more pressing question about AI is the following. When will it be able to empty my email inbox on my behalf? When will AI make the need to check an inbox anachronistic, like trying to put new paper into the fax machine or waiting for the telegraph operator to get there?

When will AI give me a world of work where I'm not context shifting constantly back and forth between 50 different things, but can just work on one thing at a time? In other words, forget P-Doom, what is our current P-Inbox zero? Now, I recently wrote my own article for "The New Yorker" that goes deep into the current limitations of language model-based AI and what the future may hold.

I had this problem of AI and email firmly in mind when I wrote that. So here's what I wanna do today. I wanna dive into this topic. So there's three things I'm gonna cover in a row here. Number one, let's take a quick look at this promised world in which AI could perhaps tame the hyperactive hive mind.

I think this is potentially more transformative than people realize right now. Two, I wanna look closer at the state of AI and its ability to tame things like email right now. I actually use ChatGPT to help answer some of my emails, and we'll talk about those examples in part two of this deep dive.

And then part three, what are the technical challenges currently holding us back from a full email-managing AI? This is where we'll get into my latest "New Yorker" article. What's holding us back? Can we overcome those obstacles? Who's working on overcoming those obstacles? All right, so look, on this show, one of the topics we talk about is taming digital knowledge work.

Another topic we talk about is the promise and perils of new technology. Today in this deep dive, we're putting those two together. We have this topic, when can AI clean my inbox? This should be relevant to both. All right, let's get started. Part one, when we talk about AI's impact on the workplace, especially knowledge work, there tend to be three general examples of how AI is gonna help the office that come up.

Number one is the full automation of jobs, right? So we hear about, for example, vertical AI tools that are gonna take over a customer service role. So this means that job goes away, the AI does the whole thing. The second type of thing we hear about AI in the workplace is the AI speeding up the steps of tasks that you already currently do in your job.

Hey, summarize this note for me right away so I don't have to read the whole thing. Write a first draft of this memo, it's gonna save me time actually typing. Gather me examples to use in this pitch. Create a slide that looks like this and use these graphics. So it's, you're doing the tasks, but the AI speeds up elements of the task.

The third area I see discussed a lot with AI's impact on the workplace is brainstorming or generating ideas. This is really big, I think right now, because we're mainly interacting with these tools through a chat interface. Hey, give me three ideas for this. Do you think this is a good idea?

What's something I could write about here? So there's this sort of back and forth dialogue people are having with chatbots in particular to help come up with ideas or brainstorm. As we know on the show, however, none of those three things are really getting at what I think is the core issue, the issue that affects every knowledge worker, that is driving the current burnout crisis, and that is holding down productivity in the knowledge workspace, and I mean that in the macroeconomic sense more than anything else: the hyperactive hive mind.

We talk about this all the time, but we have set up a way of collaboration that's almost ubiquitous within knowledge work where we have unscheduled back and forth messages to work everything out, email and also in other tools like Slack. The problem with this is that we have to constantly tend to these back and forth messages.

If I have seven things I'm trying to figure out and each of those things has seven or eight messages that I have to bounce back and forth with someone else today to get to an answer, that's a huge number of messages that I have to see and respond to throughout the day, which means I have to constantly check my inboxes so I can see a message when it comes in and reply to it right away.

Every time I check this inbox, I see a whole pile of different messages, most of which are emotionally salient because they are coming from people I know who need things from me, so we take them very seriously, and the cognitive context represented by each message is diverse. So now I have to jump my attention target from one thing to another thing to another thing within my inbox, back to my work, back to the inbox, between different messages in the inbox.

This is a cognitive disaster. It is hard for our brain to change its focus of attention. It needs time to do that. So this forcing ourselves to constantly jump around and keep up with all this incoming, each of which is dealing with different issues and different contexts and information, it exhausts us, it leads to burnout, it makes work a trial, and it significantly reduces the quality of what we produce and the speed at which we produce it.

This hyperactive hive mind workflow is a huge problem. In my book, "Slow Productivity," I get into how we got here. The first chapter of the book goes deep on it, but it is a big problem. This is where I want to see AI make a difference. Imagine if what AI could do is handle that communication for you.

Handle it like a chief of staff. Handle it like Leo McGarry in the Aaron Sorkin television show, "The West Wing," the chief of staff for Martin Sheen's President Bartlet. Someone, an agent that could sit there, see the incoming messages, and process them for you, many of which they might be able to handle directly.

Filter it, give a quick response. You never have to see it. You never have to shift into that context. And for the things that it can't directly manage for you, it can just wait until you're next ready to check in after you finish what you're working on. And your AI chief of staff could, in this daydream, ask you questions to direct it what to do.

"Hey, we got something about a meeting. Should we try to schedule this?" And you're like, "Yeah, but put it on a Tuesday or Wednesday and don't do it too late." And it's like, "Great, I'll handle this for you." Or, "Here are three projects which we got updates on today.

Do you want to hear a summary of any of these updates?" And you would say, "Yeah, tell me the update on this project." "Hey, there's this thing that we heard from your department chair. There's a departmental open house." This is a back-and-forth between me and the AI. "Do you want to sign up for this?

Do you want to do this?" And I'd be like, "Yeah, find me a slot on Friday that works with my schedule." "Great, I'll do this for you." And then you go back to what you're doing. So imagine that. You don't have to keep up with an inbox. You don't have to dive in, in this daydream, and see all these messages and try to switch your attention from one to another, which we do badly, but which an AI could do well.

I think the productivity gain of an AI agent that could mean you no longer have to even see an email inbox would be enormous. I mean, I think we would see this in the macroeconomic productivity measures. The quality and quantity of what's being produced in non-industrial work would skyrocket if we took off this massive cognitive tax.

I think we would also see subjective satisfaction measures in knowledge work go right up. Oh my God, I'm just working on things. And I have this sort of assistant agent that I talk to two or three times a day and it kind of handles everything for me. And then I go back to just working on things.

To me, that's the dream of AI and knowledge work, much more so than, well, when I'm just in the inbox myself, the AI agent's going to help me write a draft. Or when I'm working on this project, it can speed up my steps a little bit. I don't care about the speed at which I do my tasks.

I want to eliminate all the context shifts. I want to eliminate the need to have to constantly change what I'm focusing on from one to another project to keep interrupting my attention to go back and manage back and forth conversations. So that would be massive. All right, so here's the second question, part two.

How close are we to that daydream of an AI that could handle our email inbox for us? Well, I was messing around with ChatGPT recently. And what I did is I copied some emails from my actual email inbox and asked it some questions. I wanted to see how well it would do understanding my emails and writing to people on my behalf.

All right, so the first thing I did is I had a message here from a pastor. It was an interesting, longer message. And I saw that in this message, the pastor was talking about my recent coverage of Abraham Joshua Heschel's book, "The Sabbath," and makes some points about it, some extra points.

And he's offering to send me some book. So I asked ChatGPT, hey, could you just summarize this for me? And it did a great job. It was like, this person is a pastor with this church.

He's reaching out to Cal to express interest in Cal's work on intellectual workflows and its application to pastoral duties. He noted the challenges based on this. He highlights, blah, blah, blah. He offered to send you a copy of this book. Like, this one paragraph got to all the main points.

So then I tested ChatGPT's people skills. And I said, "Can you write for me a polite reply?" And this polite reply should decline the copy of the book that was being offered. I should say, in reality, I'm actually interested in this book. This is just, I wanted to test the people skills of ChatGPT, right?

And it wrote a good email. "Hey, thank you for reaching out with your thoughtful message. I truly appreciate your insights. I'm genuinely grateful for your offer to send me a copy of your book on blah, blah, blah. And while I see its value, I must regretfully decline. Your dedication is great.

Thanks again for reaching out." It was actually a pretty good response. All right, here's another example. Someone sent me a message that was saying, "Hey, you should see this anecdote about General Grant and slow productivity." Spoiler alert, I'm gonna actually talk about this later in the show. But I said, "Give me bullet points." And ChatGPT did, it gave me three bullet points.

He expresses gratitude about this. He shares an anecdote from this book about that. He attached the following to the message. So what I'm seeing as I look at and test ChatGPT with my emails is that it can understand emails. Like, it can understand and summarize what's in these emails.

And it's good at writing responses. If you tell it what you wanna do, it can write perfectly reasonable responses. So are we at our promised future? Is P-Inbox Zero equal to one? Well, not yet. Because here's the problem. Right now, I am still directing all of this. I am loading the message.

I'm looking at the message. I'm telling ChatGPT, "Summarize the message." I'm making a decision about what to do about this message. And then telling that to ChatGPT. Really, at best, it's marginally speeding up the time required to go through my inbox, writing some things faster, preventing some reading.

But it has no impact on my need to actually have to encounter each of these messages, to actually do the context switching, to have to keep up with my inbox and make sure messages are being sent back and forth. It can understand and process messages. It can write messages. But these large language model tools right now can't take over control of the inbox.

All right, so this brings us to part three. What is needed to do that? And this is where I want to bring up the article that I wrote recently for The New Yorker. So I'm going to put it up here on the screen for those who are watching. They have a really cool graphic.

I love when they do these animated graphics. I don't know if you can see it in the little corner here, but it's a hand placing a chess piece. So my article is entitled, "Can an AI make plans? Today's systems struggle to imagine the future, but that may soon change." So here's the big point about this article.

The latest generation of large language model tools can do a lot of cool things, a lot of really impressive things, especially the sort of GPT-4 generation of language models. But there's a lot of recent research literature from the last year or so that is saying there's one thing they can't do.

And this has been replicated in paper after paper. They can't simulate the future. So if you ask a language model to do something that requires it to actually look ahead and say, "What's the impact of what I'm about to do?" they fail. So there was an example I gave in the article from Sébastien Bubeck of Microsoft Research, who led a research group that wrote a big paper about GPT-4.

He said, "Look, this is really, GPT-4 is really impressive." He says in a talk about his paper, "If your perspective is what I care about is to solve problems, to think abstractly, to comprehend complex ideas, to reason on new elements that arrive at me, then I think you have to call GPT-4 intelligent." And yet in this talk, he said there is a simple thing it can't do.

And he gave an example of something that GPT-4 struggled with. He put a math equation on the board: 7 * 4 + 8 * 8 = 92, which is true. And then he said, "Hey, GPT-4, modify one number on the left-hand side of this equation so that it now evaluates to 106."

For a human, this is not hard to do. You need the sum to be 14 higher to get from 92 to 106, so you look at the left-hand side and say, "Oh, seven times four, we have sevens. Let's just get two more sevens. Let's make that seven times six." ChatGPT gave the wrong answer.
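To see what that lookahead looks like mechanically, here's a tiny brute-force sketch (my own illustration, not Bubeck's) that solves the puzzle the way a human does: simulate each candidate single-number edit and check its effect on the total.

```python
# Toy lookahead for the equation puzzle: 7*4 + 8*8 = 92; change exactly
# one number so the expression evaluates to 106. The search "simulates
# the future" by trying each edit and checking the resulting sum.

def find_single_change(terms, target):
    """terms is a list of (a, b) product pairs; try every single-number swap."""
    for i, (a, b) in enumerate(terms):
        for pos in range(2):
            for new in range(1, 20):         # small space of replacement values
                pair = [a, b]
                pair[pos] = new
                candidate = list(terms)
                candidate[i] = (pair[0], pair[1])
                if sum(x * y for x, y in candidate) == target and new != (a, b)[pos]:
                    return i, pos, new       # which pair, which slot, new value

print(find_single_change([(7, 4), (8, 8)], 106))  # changes the 4 to a 6
```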

"The arithmetic is shaky," Bubeck said about this. There are other examples where GPT-4 struggled. There's a classic puzzle game called Towers of Hanoi, where you have disks of different sizes and three pegs, and you need to move them from one peg to another. You can move them one disk at a time, but you can never have a bigger disk on top of a smaller disk.
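For reference, the standard recursive solution to this puzzle is only a few lines; solving it this way implicitly plans the whole future sequence of moves.

```python
# Classic recursive Towers of Hanoi: move n disks from `src` to `dst`
# using `aux` as the spare peg. The recursion plans ahead: clear the
# smaller disks out of the way, move the largest, then restack.

def hanoi(n, src, dst, aux, moves=None):
    if moves is None:
        moves = []
    if n == 1:
        moves.append((src, dst))
    else:
        hanoi(n - 1, src, aux, dst, moves)  # clear the way
        moves.append((src, dst))            # move the largest disk
        hanoi(n - 1, aux, dst, src, moves)  # restack on top of it
    return moves

print(hanoi(2, "A", "C", "B"))  # [('A', 'B'), ('A', 'C'), ('B', 'C')]
```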

This comes up a lot in computer science courses because there's solutions to this problem that are basic recursive algorithms. GPT-4 struggled with this. They gave it a configuration in Towers of Hanoi that could be solved pretty easily, five moves, but it couldn't do this. It struggled with basic block stacking problems.

"Hey, here's a collection of blocks. These colors stack like this. Let's talk about how to move them to get this other pattern." It struggled with that. It struggled when it was asked to write a poem that was grammatically correct and made sense, where the last line was the exact inverse of the first line.

It wrote a poem, and it mainly made grammatical sense. The last line was a reverse of the first line, but the last line was nonsense. The first line was not a palindrome. It wasn't an easily reversible line, and so the last line sounded like nonsense. All of these, as Bubeck and others point out, all of these examples are marked by their need to simulate the future in order to solve them.

How do you solve that math equation? Well, humans, what we actually do is we sort of simulate different things we could change. What would the impact be on the final sum? Oh, changing the sevens would move it up by sevens. Great, that's what we want to change. We're simulating the future.

When you play Towers of Hanoi, you have to look ahead. If I make this move next, this is a legal move. But is this going to leave me a couple moves down the line to be stuck? So we have to look ahead when humans solve Towers of Hanoi. Same thing with the poem problem.

When you're writing the first line of the poem, you're also thinking ahead. What is this going to give me when I get to the last line of the poem? Oh, this is going to be nonsense. So I got to make this first line of the poem reversible.

Like, GPT-4 couldn't do this. It was just writing word by word: here, I'm writing a good poem. And by the time it got to the last line and looked back at what the first line was to reverse it, it was too late. It was going to be nonsense. We simulate the future all the time.

Almost everything we're doing, almost all of our actions as humans have a future simulation component to it. We do this naturally, we do this unconsciously, but almost everything we do, we simulate. What's going to happen if I do this? What about that? Okay, I'm going to do this. Should I cross the street right now?

Well, let me simulate. Where's that car? How far is it? Where do I imagine that car is going to be with respect to the crosswalk by the time I'm out there? Ooh, that's a little bit close. I'm not going to do it. When choosing what to say to you, I am simulating your internal psychological state.

That's how I figure out what to say that's not only going to accomplish my goals, but not make you really upset. This is why people who are maybe neurodivergent and they're somewhere on the autism spectrum accidentally end up insulting people by, you know, they're not trying to, but they irritate or insult people frequently because part of what is being changed in their brain wiring is their ability to simulate the other mind.

And when that is impaired, they can't simulate the impact of what they're going to say on another mind, then they're much more likely to say something that's going to be taken as sort of offensive or is going to upset someone. So we're constantly simulating the future. That is at the core of human cognition.

It's also at the core of any sci-fi rendition we've seen of a fully intelligent machine; they're doing that. In my New Yorker article, I talk about probably the most classic artificial intelligence from cinema, which is HAL 9000 from Stanley Kubrick's "2001: A Space Odyssey." And we know the classic scene, right?

Where Dave, the astronaut, is trying to disable HAL because its focus on its mission is going to endanger Dave and his life. And Dave says, "Open the pod bay doors." He's trying to get in to disassemble HAL, to turn it off. And HAL's like, "I cannot do that, Dave." It's a very famous exchange.

How does HAL 9000 know not to open the pod bay doors? Because it simulates. What would happen if I opened the pod bay doors? Oh, that exposes this. And if this is exposed, this person could take out my circuitry. Oh, that doesn't match my goals. No, I'm not going to open the pod bay doors.

You need to simulate the future to get anywhere near anything that we think of as human style cognition. GPT-4 can't do this. Now, is this just because we need a bigger model? Is this, is GPT-5 going to be able to do this? Is this, we just have to figure out our training?

The answer here is no. I'll put my technical cap on here for a second, but I get into this in my New Yorker article. The architecture of the large language models that drive the splashiest AI agents of the moment, the Claudes, the Geminis, the ChatGPTs. These underlying large language models are architecturally incapable of doing even the most basic future simulations.

And here's why. I'm going to draw this. So if you're watching, instead of listening, you'll see this picture. But you have to understand what happens in these large language models is that you have a series of layers. I'm drawing some of these layers here. So GPT-4, we don't really know how many layers there are.

We think it's like 96, but we're not quite sure, because they don't tell us. OpenAI is pretty closed about it, but we know from other language models. This is a feed-forward architecture. The information comes in the bottom layer. It works its way through all the layers until at the very top, you get the output, which is actually a token, a piece of a word.

So you give it input, give it a sentence of input. It moves through these layers one by one in order, and then out the other end comes a single word with which to expand that input. So it's an auto-regressive token predictor. These layers are hardwired. What's in them is kind of complicated.

You have basically these transformer sub-layers first. It's a key piece of these new language models. It has to do with embeddings. It has to do with attention, what part of the input's being paid attention to. And then after those, you have basically neural networks, sort of feed-forward neural networks.

But the main thing to think about here is the information moves through these hardwired connections. It's numbers it's multiplied by and neural network connections that it simulates activating. And it inexorably, inevitably moves forward through all of these layers. And out the other end comes a prediction. Now, these layers are very big; in GPT-4, they're defined by somewhere around a trillion different values.

These are very big layers. They can hard-code, these layers can hard-code a lot of information. And what happens is, and I get into this in the article, but at a very high level, what happens is, as your input goes through these layers, patterns are recognized. Really complicated patterns are recognized.

This is an email. This is an email about this. We're being asked to do this about this email. And then there's very complicated guidelines baked into the connections that then say, given this is the sentence we're trying to expand with a single word, and given all these patterns we recognized about this sentence, and looking at all the possible next words that sort of make grammatical sense to be next, let's combine everything we've looked at to help bias towards which word we should output.

And this number of guidelines and the properties that can be combined and the number of ways these properties can be combined is combinatorially immense. There are near endless categories of, it's an email about this and that person, we're trying to say this. There are endless categories of what we're recognizing, and guidelines that connect them to bias which word we should output next, but they're all hardwired.

And so, you know, you can hardwire: if we recognize a specific situation, we can have hardwired in, in these situations, this move is what we've learned before will lead to somewhere good. But once the situation gets novel, something it can't see or approximate with its hardwired rules, you're out of luck, because there is no way to be iterative in here.

There is no way to be interactive. These are completely hardwired models. There is no memory that can be changed. There's no looping. There's no recurrence. The information goes through. We apply the guidelines to what's seen, something comes out. We do our best with what we've already written down. We can't explore on the fly.
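As a toy illustration of that one-way flow (a deliberately simplified stand-in, not a real transformer), here's the feed-forward, auto-regressive loop in miniature: one fixed pass through frozen layers per output token, with no loops or memory inside the pass.

```python
# Toy sketch of a feed-forward, auto-regressive predictor. The "layers"
# are frozen transformations; each next token requires one full pass,
# and there is no recurrence or mutable memory inside that pass.

def make_layer(shift):
    # stand-in for a hardwired layer: a fixed, unchangeable transformation
    return lambda state: [(x + shift) % 101 for x in state]

LAYERS = [make_layer(s) for s in (3, 5, 7)]  # weights frozen after training

def predict_next_token(tokens):
    state = list(tokens)      # toy "embedding" of the prompt
    for layer in LAYERS:      # one inexorable forward pass, no looping back
        state = layer(state)
    return state[-1]          # toy readout: last position becomes next token

def generate(prompt, n_new):
    tokens = list(prompt)
    for _ in range(n_new):    # auto-regression: append and run the pass again
        tokens.append(predict_next_token(tokens))
    return tokens

print(generate([1, 2, 3], 2))  # [1, 2, 3, 18, 33]
```

The point of the sketch is structural: nothing in `predict_next_token` can pause, branch back, or try out a hypothetical and revise; it can only apply its fixed transformations and emit.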

That's just the architecture of these models. We see this, for example, when you play chess with GPT-4. These guidelines have a lot of insights about chess baked into them. So the properties might be like, we have a chess board, we have pieces here. These are all properties that are being identified.

This piece is protecting the king. Now, given all this information, we have our hard-coded guidelines. What move should we output next? And it might say, well, in general, in these situations, don't move the thing that's protecting the king. Let's do this. So you could have really complicated chess games that look really good.

And I talked about in the article how if you play chess against GPT-4, prompting it properly, you get something like an Elo 1000 rated playing experience, which is like a pretty good novice player. But when you look closer at these games, what happens? It plays good chess until the middle game, and then it gets haphazard.

Because what happens in the middle game of chess is your board becomes unique. And when you get to the middle game of chess, you can't just go off of hardwired heuristics. In this case, with a piece in this position, this is the right thing to do, or here's a good thing to do.

When you get to the middle game, how do chess players play? They simulate the future. They say, I've never seen this type of board before. So what I need to do now is think, if I do this, what would they do? And then what would I do? You simulate the future.

GPT-4 can't do it. So we see this. The chess game is good until it becomes bad. When the hardwired rules of, here, do this, in these situations this makes sense, no longer directly apply, it has no way of interrogating its particular circumstance. And the chess play goes down.
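That "if I do this, what would they do?" lookahead is classic game-tree search, the kind of thing chess engines do at enormous scale. Here's a minimal minimax-style sketch on a toy take-away game (my own example, not any real engine's code):

```python
# Minimax-style lookahead on a toy game: players alternately take 1 or 2
# stones; whoever takes the last stone wins. can_win literally simulates
# every future line of play before choosing.

from functools import lru_cache

@lru_cache(maxsize=None)
def can_win(stones):
    """True if the player to move wins with perfect play."""
    if stones == 0:
        return False  # the previous player took the last stone; we lost
    # simulate each legal move, then the opponent's best reply, recursively
    return any(not can_win(stones - take) for take in (1, 2) if take <= stones)

def best_move(stones):
    for take in (1, 2):
        if take <= stones and not can_win(stones - take):
            return take   # this move leaves the opponent in a losing position
    return 1              # every move loses; take one stone and hope

print(can_win(3), best_move(5))  # False 2  (multiples of 3 are losing positions)
```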

All right, so this is why we can't clean our inbox, because to clean our inbox, decisions have to be made about what to say. And to make decisions about what to say, you have to simulate the impact. Well, if I did this, what would the impact be on my schedule?

If I said this, how's that gonna make this person feel? How's that gonna affect this team dynamic? How is this gonna affect, we have this current order of operations for completing this project. If I agree to do this delay, what's the effect gonna be on this project that we're doing?

Is that gonna be interminable? If that's gonna be a problem, then I'm gonna answer no here. I'm gonna have to find someone else to do it. Writing an email, a language model can do. Figuring out what to say in an email, you have to simulate the future. GPT models can't do that.

So are we gonna get there? Is that even possible? And here in the article, I say, well, yes. Language models, because they're massive and feed forward and unmalleable and they have no interaction or recurrence. No, they can't do it. But we have other AI systems that are very good at simulating the future.

GPT-4 is bad at playing chess, but Deep Blue beat Garry Kasparov. But Deep Blue is not a language model. Deep Blue works differently: simulating hundreds of millions of potential future moves is a big part of what it does. AlphaGo beat Lee Sedol in Go. And how did it do that?

Well, it simulates a ton of future moves to try to see the impact of different things that it might do. So in game playing AIs, we're very good at simulating the future. All right, so that's optimistic for our goal here of having an AI clean our inbox. But if we're gonna simulate the future in a way that lets us clean email, it's not just sterile positions of pieces on a board.

We have to understand human psychology. So can an AI simulate other minds? Well, here in this article, I say, yes. In fact, there's a particular engineer who's been leading the charge to do that. His name is Noam Brown. And what did Noam Brown do? Well, first he made waves with Pluribus, the first poker AI to beat top-ranked players.

So they played in a tournament with seven top-ranked players, the people you would know if you followed poker, with a $250,000 pot. So there was skin in the game. They wanted to win. Texas hold 'em, no-limit Texas hold 'em. And Pluribus beat 'em. Beat 'em over the two-day tournament.

Well, as Noam Brown explains himself, in poker, the cards themselves are important, but actually what's more important is other people's beliefs about what the cards are. So you have to simulate human psychology to figure out what to do. What matters is not that I have an ace high. What matters is, do the other players, what's the probability that they think I have an ace high?

That's where all the poker strategy comes in. It's taking advantage of the mismatches between other players' beliefs and reality. That's where the money is made. Pluribus has to simulate human minds. Interesting aside about Pluribus, by the way: Brown and his team first tried to solve poker with just a massive neural net, sort of a feed-forward ChatGPT-style approach, where it had just played so much poker that you would just tell it, here's my poker hand.

Here's the cards that are out. And it would just sort of figure out here's the best move to do in that situation. And this model was huge. They had to use tens of thousands of dollars of compute time at the Pittsburgh Supercomputing Center just to train it. And with Pluribus, he said, well, what if instead of trying to hard-code everything you could see, we simulated the future?

And this collapsed the size of this model. You can now train this stuff on a laptop or on AWS for like 20 bucks. It was a fraction of the size and way out-competed it. So simulating the future is a way more powerful strategy than trying to build a really massive network, like a language model, that just has everything hard-coded in it.

So then Noam Brown said, let's play an even more human-challenging game: Diplomacy. In the board game Diplomacy, which is like Risk, the whole key to the game is that you have, before every turn, one-on-one private conversations with each of the other players. And you make alliances and you backstab people; the whole thing is human psychology.

Noam Brown and his team at Meta built a diplomacy-playing bot named Cicero. I talk about this in the article; it beat real players. They played on a web server for Diplomacy. People didn't even know they were playing against an AI bot. And how did they do it? Well, in this case, and this is where it gets really relevant for answering our email, they took a language model and a simulator and had them work together.

So the language model could take the messages from the one-on-one conversations and figure out, what is this person saying? What does this mean? And it could translate that into a common, really technical language that it could pass on to the game strategy simulator.

And the game strategy simulator is like, okay, here's what the different players are telling me. Now I'm gonna simulate different possibilities. Like, if this person is lying to me, how much trouble could I get into if I went along with their plan? What if I lied to them instead? It tries out different strategies to figure out what to do.

And then it tells the language model, in very terse technical terms: all right, here's what we wanna do. Agree to an alliance with Italy, decline the alliance request from Russia, put this into good diplomacy language to be convincing. And then the language model generates these very natural-sounding communications and sends those messages.

So now we're getting somewhere interesting. A language model plus a planning engine meant that we now had something that could play against humans in a very psychologically relevant, complex, interpersonal type of discussion where you had to understand people's intentions and get them on your side and it could do really well.
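To make that division of labor concrete, here is a toy sketch in Python. Everything in it, the message format, the intent names, the one-line "planner", is invented for illustration; the real Cicero used trained neural models for both the translation and the strategy simulation.

```python
from dataclasses import dataclass

# Toy sketch of the Cicero-style pipeline: a "language model" translates
# free-form Diplomacy messages into terse structured intents, a "planner"
# picks a strategy, and the language model renders the decision back into
# natural-sounding text. All names and formats here are invented.

@dataclass
class Intent:
    action: str   # e.g. "propose_alliance"
    player: str   # which player the message concerns

def parse(message: str, player: str) -> Intent:
    """Stand-in for the language model's reading half."""
    if "alliance" in message.lower():
        return Intent("propose_alliance", player)
    return Intent("other", player)

def plan(intents: list[Intent]) -> list[Intent]:
    """Stand-in for the strategy simulator: the real system would roll out
    futures ('what if they're lying?') and score them before deciding."""
    return [Intent("accept_alliance" if i.action == "propose_alliance"
                   else "decline", i.player)
            for i in intents]

def render(decision: Intent) -> str:
    """Stand-in for the language model's writing half."""
    if decision.action == "accept_alliance":
        return f"Agreed, {decision.player} -- let's coordinate our next moves."
    return f"I'll have to pass this turn, {decision.player}."

intents = [parse("Shall we form an alliance against Austria?", "Italy")]
replies = [render(d) for d in plan(intents)]
```

The point of the sketch is only the shape: free text in, terse intents through the planner, persuasive text back out.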

This is the path that's gonna lead to AI taming the hyperactive hive mind. It's not gonna be GPT-5 or 6. It's gonna be the descendants of Cicero, the diplomacy playing bot. It's gonna be a combination of language models with future simulators with maybe some other models to try to model project states or your work states or your objectives.

It's gonna be ensembles of many different models working together that are going to make it possible to do things like have AI clean our inboxes. So the question then is, are the big companies taking this possibility seriously? I mean, is a company like OpenAI taking seriously this idea that, okay, if we bring in planning and these other types of thinking and then connect that to language models, that's when things really get interesting?

Well, I think they are. What's one piece of evidence? Well, remember Noam Brown, who created Pluribus and Cicero? OpenAI just hired him away. And they put him in charge, reportedly, of this big project within OpenAI called Q*, a reference to the A* search algorithm, something you use to search into the future, to add planning as a feature alongside their language models.
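For readers who haven't met it, A* is the classic best-first search: it expands states in order of cost-so-far plus a heuristic estimate of the cost remaining. A minimal sketch, run on a made-up 5x5 grid with a Manhattan-distance heuristic (the grid setup is purely for illustration):

```python
import heapq

def a_star(start, goal, neighbors, h):
    """Minimal A*: pop the state with the lowest f = g + h, where g is the
    cost so far and h is an admissible estimate of the cost to the goal."""
    frontier = [(h(start), 0, start, [start])]
    best_g = {}  # cheapest known cost to reach each state
    while frontier:
        f, g, state, path = heapq.heappop(frontier)
        if state == goal:
            return path
        if state in best_g and best_g[state] <= g:
            continue  # already reached this state more cheaply
        best_g[state] = g
        for nxt, cost in neighbors(state):
            heapq.heappush(frontier,
                           (g + cost + h(nxt), g + cost, nxt, path + [nxt]))
    return None  # goal unreachable

# Toy problem: shortest path on a 5x5 grid, 4-connected, unit step cost.
def grid_neighbors(p):
    x, y = p
    return [((x + dx, y + dy), 1)
            for dx, dy in [(1, 0), (-1, 0), (0, 1), (0, -1)]
            if 0 <= x + dx < 5 and 0 <= y + dy < 5]

manhattan = lambda p: abs(p[0] - 4) + abs(p[1] - 4)
path = a_star((0, 0), (4, 4), grid_neighbors, manhattan)
# With an admissible heuristic, the first path found is optimal:
# 8 unit moves, so 9 states from (0, 0) to (4, 4).
```

What "Q*" actually is inside OpenAI hasn't been published; the algorithm above is just the A* that the name apparently nods to.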

So I think P(inbox zero) might be higher than we think. And this is not gonna be a trivial thing or a cool thing or an interesting twist. I think it could actually completely reinvent the experience of knowledge work. I have been trying for years to solve this problem through cultural changes.

We need to get rid of the hyperactive hive mind. We need to replace it with better systems that don't have so many ad hoc unscheduled messages that we have to respond to. We have to stop the context shifting. And I've had a really hard time making progress for large organizations because of managerial capitalism and entrenchment of stability.

It's very difficult. So maybe technology is gonna lap me at some point, and eventually there'll be a tool we can turn on that takes me out of my inbox as well. But once we do that, those benefits are gonna be so huge. We're never gonna go back. We will look at this era of checking an inbox once every five minutes.

I think in the knowledge work context, similar to how the cavemen looked at the age before fire. I can't believe we actually used to live that way. So I'm optimistic. There we go, Jesse. That's AI. P(inbox zero), that's the key. - As you were explaining it all, I had some questions, but you answered them all.

I was curious about like the Deep Blue and like driverless cars, but. - Yeah, yeah. - And I didn't know like the explanation until you explained it all. - Not to geek out, but the difference between the advancement of AlphaGo, which won at Go, and it did this in the 2010s, versus Deep Blue that won in chess, which did this in the 1990s.

DeepMind did AlphaGo. The big advancement there is that the hard thing about Go is figuring out is this board good or bad, right? So if you're gonna simulate the future, what you have to do is be able to evaluate the futures. Like, okay, if I did this, they might do this, and I would do this, is this good?

That's easier to figure out in chess than it is in Go. Like, is this a good board or a bad board? So the big innovation in AlphaGo is they had these neural networks play Go against each other endlessly. They jump-started them by giving them like thousands of real games.

So learn the rules of Go and get a sense of like what was good or bad. And then they played Go endlessly against each other. And the whole point here was to build up a really sophisticated understanding of what's good and bad, right? So they built this network that could look at a board and say, this is good, and this is bad, based off of just hundreds of millions of games it played with itself.

Then they combined that with a future-looking planning system. So now when they're looking at different possible moves, they could talk to this model they trained up that's self-trained, is this good, is this bad, to figure out what the good plays are. And it led to a lot of innovation in play because this model learned good board configurations that no human had ever thought of as being good before.
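That recipe, a learned position evaluator consulted by lookahead search, can be caricatured in a few lines. The "value network" below is a hand-written weighted sum and the game is invented for illustration; AlphaGo's real evaluator was a deep net trained by self-play and its search was Monte Carlo tree search, but the shape is the same: simulate candidate futures, ask the evaluator how good they look, play the move whose futures score best.

```python
# Caricature of the AlphaGo recipe: an evaluator (stand-in for the
# self-play-trained value network) plus lookahead search that consults it.
# The "game" is a made-up toy: claim empty cells, each worth a fixed amount.

WEIGHTS = [1, 5, 2]  # toy: how valuable each cell is to own

def value(board: list[int]) -> int:
    """Stand-in for the value network: score a position."""
    return sum(w * cell for w, cell in zip(WEIGHTS, board))

def legal_moves(board: list[int]) -> list[int]:
    return [i for i, cell in enumerate(board) if cell == 0]

def apply_move(board: list[int], move: int) -> list[int]:
    new = list(board)
    new[move] = 1
    return new

def best_move(board: list[int], depth: int = 2) -> int:
    """Look `depth` moves ahead, score the resulting futures with the
    evaluator, and pick the move whose best future scores highest."""
    def search(b: list[int], d: int) -> int:
        moves = legal_moves(b)
        if d == 0 or not moves:
            return value(b)
        return max(search(apply_move(b, m), d - 1) for m in moves)
    return max(legal_moves(board),
               key=lambda m: search(apply_move(board, m), depth - 1))
```

On the empty toy board this picks cell 1, the highest-weighted cell. Swap in a trained network for `value` and an adversarial tree search for the greedy lookahead and you have the actual AlphaGo shape.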

Part of how it beat Lee Sedol was that it did stuff he had never seen before. Say, what's going on here? Whereas with Deep Blue, it was much more, like they brought in chess masters, and it was much more sort of hand-coded in. Is this a good position or a bad position?

It was sort of more heuristic there. So in AlphaGo, they're like, oh, you can actually self-teach what's good and what's bad. Which was cool. But it still had to simulate the future. So we'll see. All right, so anyways, we got some questions coming up, some about AI and digital knowledge work, some about other things.

But first let's hear a word from our sponsors. Hey, I'm excited, Jesse, that we have a new sponsor today. One of these sponsors that does something that is exactly relevant to my life. This is Listening, so the app is called Listening, and it lets you listen to academic papers, books, PDFs, webpages, articles, and email newsletters.

Where Listening came to my attention, where it's known in my circles, is that people use it to transform academic papers into something they can listen to, like you would a podcast or a book on tape. Now, it can do this for other things as well, like I just mentioned, but this is where it really came to prominence.

And it uses AI to do this. So speaking of AI, it has a very good AI voice. It does not sound like a robot. It sounds like a real human. And you can give it, for example, a PDF of an academic paper, and you can pause and play and listen to it, like you had hired a professional voice actor to read that paper.

Now, why is this important? Because it opens up all that time when you're driving, you're stuck in traffic, you're waiting for something to start, time when you might put on a book or take a podcast. Now you could also put on something that is very productively useful or interesting for your own work.

Hey, read to me this new paper about whatever. Someone just sent me a paper, which I'm gonna listen to in Listening for sure, 'cause I have a long drive coming up. Someone just sent me a paper, for example, that looked at, does Twitter posting about your academic papers, so this is very circular, lead to higher citation counts?

And it looks like the implication of the paper is no. So promoting yourself on Twitter as an academic doesn't actually help you become a better academic. This is fascinating to me. So this idea that I can just click on that and now when I'm walking back and forth or going to the bus stop, I could listen to this paper.

Just imagine the amount of time you can now use actually learning interesting things. So it's really cool. It's bringing other types of content into the world of audio consumption. One of the other cool features I like is an Add Note button. So as you're listening to a paper, you can click Add Note and then just type a few sentences, and it'll store that for you.

Oh, here's a note for this section. So you can add notes as you go along. Anyways, really cool for people like me who have to read a lot of interesting, complicated stuff and don't always have a lot of time where we can just sit down and actually read. So here's the good news.

Your life just got a lot easier. Normally you'd get a two week free trial, but for my listeners, you can now get a whole month free if you go to listening.com/deep or use the code DEEP at checkout. So go to listening.com/deep or use the code DEEP at checkout to get a whole month free of the listening app.

I also wanna talk about our good friends at Element, L-M-N-T. Look, healthy hydration isn't just about drinking water. It's about water and the electrolytes that it contains. You lose water and sodium when you sweat. So you have to replace both. While most people just drink water, you need to be replacing the water and the electrolytes.

Drinking beyond thirst is a bad idea. It dilutes blood electrolyte levels, which can also cause problems. So the goal here is not to drink as much water as possible, but to drink a reasonable amount of water plus electrolytes, especially if you're sweating or exercising a lot.

This is where Element enters the scene. I use Element all the time. It is a powdered mix you add to your water that gives you the sodium, potassium, magnesium you need, but it's zero sugar and no weird artificial stuff. It gives you what you need in your water without any of the other stuff, the sugar or the weird chemicals.

Zero sugar, zero artificial colors, no other dodgy ingredients. It tastes great. It's salty and good tasting. I love citrus salt. Other people like raspberry salt. They have these spicy flavors like mango chili. You can mix chocolate salt into your morning coffee if you really want to rehydrate after a hard night.

I drink this, sure, after my workouts, but also if I've had a long day of podcasting and giving talks and I'm just expelling all this moisture through talking and sweating, Element is exactly what I go to when I get back. I add it to my Nalgene bottle and I get both back.

Anyways, I love Element and I love that I don't have to worry about drinking it. No sugar, no nonsense. So the good news is Element came up with a fantastic offer for us. Just go to drinkLMNT.com/deep to get a free sample pack with any purchase.

That's drink element, spelled L-M-N-T: drinkLMNT.com/deep. All right, Jesse, let's do some questions. - All right, first question is from Zaid. I'm a student and feel lost for fear that AI will replace all jobs. Specifically, software jobs and web development are at the top of the list of jobs that may disappear.

After reading Deep Work, these were the two fields that I wanted to pursue. My motivation to study is dying out. Are these fields now a lost cause? - No, they're not a lost cause. I do not think programming as a job is gonna go away. And I do think it's a good skill to learn.

It does open up a lot of career capital opportunities to shape interesting careers. So if you look at the history of computer programming, it is a long line of tales of new technologies coming in that make programmers much more efficient, right? And from the very beginning. I mean, at the start, programming used to be plugboards.

To program an early electronic digital computer, you're adjusting circuits by taking plugs and plugging them into other places. Then we got punch cards, way more efficient. Now I can store a program on punch cards and run that. I don't have to redo it from scratch every time. That's a huge efficiency gain.

And then we got interactive terminals. Oh, I don't have to make punch cards, give it to someone and come back the next day to see if it worked. We're talking like massive, multiple order of magnitude efficiency changes one after another. Then we got interactive editors. I could edit particular words or lines of my code.

I could run the code straight there and get the results and immediately go back and change it. Then we got detailed debuggers. Oh, this is what's going wrong in your code. Here is where your code broke. Every one of these is an exponential increase in the efficiency of programmers.

Then we got this sort of modern world where we have autocomplete and IDEs with real-time syntax checking. As you're writing code, it's telling you, you typed this wrong, this is a syntax error here. It's telling you your mistakes before you even try to run it. You don't have to memorize all the different commands and calls and parameters because it can autofill this for you.

And we have Stack Overflow and Google. So now, for almost anything you want to do, you can immediately, at your same desk, on the monitor right here, find examples of exactly that code. You have to understand that every one of these advances was a massive efficiency boon. So what did we see?

Did we see as we made programmers massively more efficient that the number of programmers we needed to hire got smaller and smaller? If that's what really happened, there'd be like seven programmers left right now. Instead, there's sort of more people doing programming than ever before because what we did was followed a sort of common economic pattern.

As we became more efficient as programmers, as each individual programmer could handle and produce more complicated systems faster, we increased the complexity and therefore the potential value of the systems we built. So we still needed the same number of programmers, if not more. A programmer today, I would say, is a thousand times more efficient than a programmer in 1955.

But we have way more than a thousand times more applications of software today than we had in 1955. This is my best prediction of what we're gonna see with AI. I think the push to try to fully replace a programmer with AI is quixotic. What we're gonna do instead is make programmers even better.

That's what we're seeing, right? I mean, this is what Copilot with GitHub is doing. It's like an even smarter autocomplete. It's making programmers more efficient. We're essentially removing the need to search for things on Stack Overflow. You can have an AI language model. Just you can ask it and it will show you the example code or write you the example code.

That makes us more efficient. I think we're gonna get more of the AI writing first drafts of code or filling in the easier stuff. So programming will become more complex. It'll be harder. But we're gonna be able to produce more complicated systems with the same number of people. So we're just gonna see more computer code in our world, more complicated systems in our world, more things that run on complicated code because the ability for programmers to produce this will be increased.

So what that means for you, Zaid, is if you like programming, keep learning it, but keep up with the latest AI tools as you do. Whatever is cutting edge with AI and programming, learn that. Push yourself to learn more and more complicated code with more and more complicated AI tools, because the complexity curve of what programmers have to do has also been steadily increasing.

So you gotta keep up with that curve, but the jobs are gonna be there. At least that's my best prediction. All right, who do we got next? - Next question is from Kendra. Do you ever use ChatGPT to assist with your writing? I'm not a full-time writer, but do write a lot.

Recently, I've been using ChatGPT for assistance. Is this bad? - Yeah, ChatGPT in writing is an interesting place. It's something I've been looking into through numerous different roles: thinking about article ideas, and in some of my roles looking at pedagogy and AI at Georgetown. It's something I'm really interested in.

The complexity about this topic is that there are several different threads here, and the role of AI in writing is different for each of these threads. So let's think about professional writers, for example. Professional writers I know are not letting ChatGPT write for them. Professional writers I know, and there's quite a few who are messing around with language models like ChatGPT, are using it largely for brainstorming and idea formulation.

Well, what about this? Can you give me examples of this? Also for intelligent Google searches. Hey, can you go find me five examples of this? And so for the new language models that have plugin access to the web, they can kind of give you more modern examples. It's a useful research assistant.

But as you know, professional writers don't use ChatGPT to actually write because professional writers have very specific voices, the art of exactly how we craft sentences matters to us. Like that's not outsourceable, right? 'Cause it matters. That top 10% skill in making writing great is all in the little details.

And it's very idiosyncratic how we do it. So professional writers, for the most part, don't let ChatGPT write for them. For non-professional writers who do have to produce writing, I think it's becoming increasingly more common to use tools like ChatGPT to produce drafts of text or just text in general.

I don't think this is a bad thing. I think it brings clear communication to more people. We see big wins, for example, with non-native English speakers. The ability now to not be tripped up or held back because I can't describe my scientific results very well. My language is bad.

Oh, if ChatGPT can help me describe my results in a paper better. Now what matters, of course, is my results. But now I'm not gonna be tripped up presenting those results 'cause I have help doing the writing. I mean, I think a lot of people have communication in their job who struggle a little bit with writing.

If they can be clear, I think this is fine. Can you write a short message in this style that thanks the person? Again, I think this is just introducing more clear communication to the world, and we are gonna see more of that. And I don't think that's a problem.

So then what's the thread where things are more controversial or open? And I think that's when it comes to pedagogy. So in school. And this is really an open question right now, is what role does learning to write play in learning to think? There's different schools of thought about this.

Like, should we teach students from day one how to write in this sort of cybernetic model of it's you plus a language model? Or should we teach students how to write, and then later, hey, later in life, you can use language models to sort of write on your behalf to be more efficient, but it's important for your development as a thinker.

It's important for your development as a person to grapple with words. There's a lot of people who say writing is thinking. So to practice writing clearly teaches your mind how to think clearly, and we can't yet outsource our thinking to chat GPT, so we don't want to lose that ability.

There's a clear parallel here. We can compare this to other existing technologies. In particular, I like to think about comparing this to the calculator on one hand and centaur chess on the other. I'll explain what I mean. With the calculator, here's a technology that came along that can do arithmetic very fast and very accurately.

From a pedagogical position, what we largely decided to do was preserve the importance of learning arithmetic without a calculator. So until you get to middle school or beyond, you're learning how to do arithmetic with pencil and paper, because we thought pedagogically, you need to get comfortable manipulating numbers and their relationships to each other.

You need that skill. As you move on into more advanced algebra and on into calculus and beyond, we then say, okay, now, if in working on higher-order stuff there's arithmetic that needs to be done, you can use the calculator. So you can automate arithmetic later on, but we felt that it's important to learn how to do arithmetic yourself earlier in your pedagogical journey.

The other way of thinking about this is centaur chess, which is where players play chess along with an AI, a player plus an AI, and they work with each other. Centaur chess players are the highest ranked players there are. A player plus AI can beat the best AI. A player plus AI can beat the very best human players.

This is a model that just says, no, no, human plus machine together is just much better than human without machine. So that's another way that we might end up thinking about writing a pedagogy. Start right away learning how to write with these language models, because you'll be a better writer than you ever would be before.

And the quality of writing in the world is gonna go up. We don't know what the right answer is yet. I think educational institutions are still grappling with, is language-model-aided writing the calculator, or is it centaur chess? And I don't think we know yet, but a lot of people are thinking about it.

So I think that's probably the most interesting thread, but if you're just doing mundane professional communication, you're not a professional writer, and you have a language model helping you, I say, Godspeed. I don't think that's a bad thing. All right. Ooh, we got a question coming up next. Oh, this is gonna be our slow productivity corner.

- Yeah, we get the music. - Should we hit the music first? - Yep. - All right, let's hear it. (slow guitar music) So as long-time listeners know, we try to designate at least one question per week as our slow productivity corner, meaning that my answer is relevant to my book, "Slow Productivity." If you like this podcast, you really need to get the book, "Slow Productivity." All right, what's our slow productivity corner question of the week, Jesse?

- All right, this question's from Hunched Over. I have a really nice work-from-home setup in a special nook of my house. However, I've used this setup for two years for mostly hyperactive hive mind-type work: impulsive checking of emails, switching between multiple tasks, Zoom meetings, distraction, et cetera. I now find it's very hard to get into a deep work mode at this desk, even when I have the time set aside.

I habitually switch back into a shallow work mindset. How do I reclaim my desk to be a place of deep work for my mind? - So I talk about this in principle two of my book, "Slow Productivity." In that principle, the description, so the principle is work at a natural pace.

And as part of my definition of that principle, the end of that, I say, it's like working at a natural pace, varying intensity over time scales, and then sort of comma, in settings conducive to brilliance. And a big idea I get into in that chapter is that setting really matters when you're trying to extract value from your mind.

Setting really matters, and we should take that very seriously and be willing to invest a lot of time and potentially monetary resources, if needed, to get the settings right for producing stuff with our minds. So because of that, what we often see when we study the traditional knowledge workers I look at in that book, people famous for producing value with their minds, is that, like you, they often have very, very nice home offices: really good desks and computers and files and a very comfortable chair, a very well-appointed home office.

And they don't work on their deep stuff in that office. We see these separations. David McCullough, I talk about this in the book: I found a picture of his home office from a profile of his house in West Tisbury, Martha's Vineyard. It's a great home office.

The window looks over a scenic view, and it's got an L-shaped desk. It looks great. But he wrote in a garden shed. So he would use the home office to do all the business of being a best-selling author and historian. But when he wrote, he went to a garden shed that had a typewriter, 'cause that was what was conducive to his brilliance.

Mary Oliver, the poet, her best poetry was composed walking in the woods. There was something about the nature, and the isolation, and the rhythm. That is where the good thoughts came. That's a very specific process. Nietzsche also would do very long walks. That's where his best thoughts would come.

And so we see these examples time and again, that the setting in which you try to do your most smart, creative, cognitive work really matters. And if that setting is the same place that you do shallow work, it's the same place you do your taxes, and your emails, and your Zooms, your mind is gonna have a hard time getting into the deep work mode.

And so the answer is to have two places. Here's my home office, where I care about function. It's got monitors and a good webcam, and my files are here, and I don't wanna waste time when I'm doing the minutiae of my professional life. But then you need somewhere else you go to do the deep stuff.

And it could be fancy, or it could be very simple. It could be sitting outside under a tree at a picnic table. I used to do this at Georgetown. There was a picnic table in a field on part of the trail that ran from Reservoir Road down to the canal, and I would go out to that table with a notebook to work.

It could be a garden shed that you converted. It could be a completely different nook of your house. I talk about in the book people who took like an attic dormer window and just pushed a desk up there: that's for deep work. That's what Andrew Wiles did when he was solving Fermat's Last Theorem.

He did that up in an attic in his house in Princeton. So have a separate space for deep work from shallow, and it should be distinctive, and it should psychologically connect to whatever deep work you do. It doesn't have to be specialized. It doesn't have to be expensive. It can be weird.

It can be eccentric, but it needs to be different. So don't try to make your normal home office space also good for doing deep work. Have a separate space for doing your best, most cognitive stuff. You are gonna find, I would predict, a significant increase in not just the quality of what you produce when you do your deep work, but how quickly you get into that state and the rate at which you produce that work.

So anyways, in "Slow Productivity," that's one of the ideas I really push. Location matters. Don't reduce all work to this frenzied, two-monitor jumping back and forth between emails, busyness, freneticism. Don't make all work that. Separate. There's some of that, and then there's also me trying to produce stuff too good to be ignored, the real value, and that's a slower thing, and I need a different location for it.

All right, what do we got next? - Next question's from Charlie. Excuse me. I time block my day into 50-minute deep work blocks separated by 10-minute breaks. I have little autonomy and am closely supervised. Sometimes I'm extremely busy all week, and sometimes I'm twiddling my thumbs waiting for my supervising solicitor to give me work.

How should I utilize my 10-minute breaks during a busy week? And also, how should I handle weeks when I don't have much work for my deep work blocks? - Well, Charlie, don't worry too much about those 10-minute breaks. Have fun, right? Don't think about 'em, just do whatever. Like, do whatever's interesting.

I mean, I typically recommend, if it's a busy day in the sense that the 50 minutes on the other side of these breaks are filled with deep work, take what I call deep breaks. So don't look at things that are emotionally salient. Don't look at things that are too relevant to the type of work you're doing.

Look at things, or do things, completely different from your work. That's gonna minimize the context-switching cost when you go back to your work. More generally, though, you know, I don't love the sound of this job, right? What makes people love their work? This is an idea from "So Good They Can't Ignore You," where I noted that people think what they need to love their work is a match of the content of the job to their passion, but there are these other, more general factors that matter more.

And one of those general factors that matters more is autonomy. Autonomy is a nutrient for motivation in work. It's critical. You don't have a lot of it. So I don't love this job. So how about this plan for the weeks in which you don't have a lot of deep work to fill those 50-minute blocks?

You are working like a laser beam in that time on your move to something different. So you have a side hustle or a skill that you're learning that is going to allow you to transform your work situation to be closer to your ideal lifestyle, and that's what you're working on in those unfilled 50-minute blocks.

I think you're gonna get a lot of fulfillment out of that because you're not gonna be bored, and more importantly, you're gonna find some autonomy and empowerment. I am working on the route out of what I don't like about where I am now. And it could be a new skill that within your same organization is gonna free you to go into a more autonomous position, or it could be a new skill that's gonna allow you to go to a different job that's gonna be more autonomous, or maybe it's a side hustle that is going to allow you to drop this to part-time or drop it altogether because it can support you.

I think psychologically you need something like that because otherwise a fully non-autonomous job like this, especially in knowledge work, can get pretty draining. All right, let's see. We have some calls this week, don't we, Jesse? - We do, yeah. - More than one, it looks like. - Yep. - Excellent, let's get our first one.

- Okay. - Hi, Cal, this is Roan, longtime reader and a listener since episode one. I'm a longtime fan of your work. I'm eagerly awaiting my receipt of Slow Productivity. I'm getting both a signed copy that was offered there by your local bookstore, and I'm getting the Kindle version as well.

I'm especially excited that you have recorded the audiobook version yourself. I've really been hoping for this, especially since you've started the podcast, to hear these books in your own voice. I think that's fantastic. I'm particularly enjoying your forays into the philosophy of technology. That's an area of interest of mine as well.

I'm personally finally diving into Heidegger in my philosophical readings in general. In honor of your famous Heidegger and Hefeweizen tagline, I wonder if you've read Heidegger's views on technology, and if so, has that influenced or impacted your views in any way? Thank you again for all of the excellent work and all the excellent content, and I am looking forward to the Deep Life book to come after this one.

Thank you very much. - Well, thanks for that, Roan. I'm vaguely familiar with Heidegger on technology, but I would say most of my scholarly influences on technology philosophy are 20th and 21st century. If you go back to Heidegger, technology was being grappled with, but it was being grappled with in the context of these much more ambitious, fully comprehensive, continental philosophical frameworks for understanding all of life and meaning and being.

And it's these complex frameworks. By the time you get to Heidegger, and you see this a lot in Marx as well, there's this sort of totalizing, we're gonna have a new epistemology for all of knowledge and understanding the human condition. Very complicated. And so it's a little bit less accessible.

There are specific thoughts on technology in there. Whereas as you get farther along in the 20th century, what you get is more people, because of the impetus of modernity, grappling specifically with technology and its impacts. And so you start to see this with thinkers like Lewis Mumford, for example, or Lynn White Jr.

They're starting to grapple more specifically with what's going on. And so later, you get thinkers like Neil Postman, and you have Marshall McLuhan. They start working on this. More recently, you get Jaron Lanier. Then you have full academic sub-disciplines like STS, science and technology studies, emerging, which has a very specific methodology for trying to understand sociotechnical systems.

More recently, you get things like critical technology studies, which tries to apply postmodern critical theories to trying to understand technologies. The 20th, especially mid-onward 20th century and early 21st century, it's more focused. And I think the pressures of modernity give us a type of technology, an understanding of technology that resonates with our current moment.

So that's been more influential to me, I would say. I do like your callback, however, to Heidegger and Hefeweizen. Most people don't know this, but when I was writing my books for students, my newsletter and blog, of course, were focused on students. And a big thing I was pushing for there was how do you build a college experience that's really meaningful and interesting and sustainable, and also opens up really cool opportunities.

I did not like this idea of being super stressed out in school. Like, "Oh, but it'll be worth it "because I'm gonna get this job "and then it'll be worth it." I was trying to teach kids, how do you actually make your experience in college itself good? Not like something you're sacrificing yourself for to get to something better down the line.

And I had this idea called the Romantic Scholar. And it was all about how to transform your college experience into being much more psychologically meaningful. And one of my famous, to like, my famous, I mean, among the readers of "Study Hacks" back then, so like seven people, one of my famous ideas was Heidegger and Hefeweizen.

And I was like, "When you have to read Heidegger, don't just white-knuckle it at the library the night before. Go to a pub and get a pint of Hefeweizen." And like sip a drink and there's a fire and like read it, like put yourself into this environment of like, this is cool, intellectual thinking and ideas are cool and life is cool.

And approach your work with this sort of joyous gratitude and care about where you are and how you're working. I talked a lot about that. Anyways, it reminds me of one of the questions we answered earlier in the episode. Right, in the slow productivity corner, I told the question asker, "Build a cool space to do your deep work.

Don't try to make your shallow work home office into the place where you do your deep work. Like go somewhere cool, do it under a tree." And then, you know, I really pushed that idea back then. I talked about, I called it adventure study, I think that was my term.

Go to cool places to do your work. So you make your work into something that's intellectually cool, it's exciting, not something that you're trying to grind through. I'm trying to think of examples. Students would write in with these. Someone wrote in with a picture of a waterfall where they went to study.

Someone else, an astronomy student, snuck onto the roof of the astronomy building, under the stars, and that's where she would, like, read and work on her problem sets. You know, I love that idea when I was helping students find more meaning in their student life. So I like the idea of preserving that today in knowledge work, especially if you're remote or have flexibility.

Find cool places to do your coolest work. Transform your relationship to it. Like I still do this sometimes with, if I'm early in a New Yorker article, I'll go to BevCo at happy hour and do exactly Heidegger and Hefeweizen. Like get like something they have on tap because just psychologically it's like, this isn't work, this is interesting.

I'm in this, there's all these people, I know the people at BevCo, I have, you know, like a Hefeweizen. I'm just like thinking, isn't it cool to think ideas? This isn't just me in my home office like trying to make deadline. And I'll often do that at the beginning of a New Yorker article just to put myself into like the mindset of, this is cool, this is interesting, this is thinking.

Like remember like this activity itself has value and it's entertaining. It's not just functional. So anyways, cool call. Maybe I should read some more Heidegger. I have to get more Hefeweizen, that's the goal. It takes a lot of Hefeweizen to get through Heidegger, by the way, those are some long books.

All right, do we have another call? - Yep, here we go. - Hey Cal, my name's Tim. I actually met you over the weekend at your book signing. I was the Dartmouth guy that you met toward the end. It's nice meeting you in person. I have kind of a two-part question.

I'm really drawn to the idea of like thinking about seasons or chapters of your life and career. And as somebody with young kids at home, I'm certainly in a specific type of season right now. So I wanted to understand, I guess a two-part question. When you're thinking about the seasons of your life, what's the time box you put around those?

Are those like a quarter? Is it half a year? Is it two years, 10 years? Like how do you, when you think you're entering or exiting a specific season in your life, how do you, how long is that? And secondly, I guess is, I'm in a relationship, I have a wife and she's also got a busy life and career.

How do you, or do you have any advice on how do you synchronize or match up the seasons they may be going through in their careers? I find it's very difficult if you have two people trying to push hard at work and are in a busy season at work, but also be able to give the attention you need at home.

So it takes a conscious decision on both parts on which season you're gonna be in and I wonder if you have any advice on that. Thanks, Cal, big fan. - Well, Tim, good to hear from you again. It's nice to see you at the book event. Two good questions.

So first question when it comes to seasons, there's different scales that matter, right? So there's the literal seasonal scale of the seasons of the year. And this is the big idea from principle two of my book, Slow Productivity, is you should have variations within the seasons. Like for me, for example, my summers are much different than my falls.

So my summers, it's much slower, there's much less freneticism and meetings and I'm much more focused. Whereas like in the fall, I'm teaching some classes and I do a lot more meetings, it has a different feel to it. So seasonal variation is good. We are not meant to work all out every day, all the days of the year.

Like we're meant for there to be variations. If you don't work in a factory, don't simulate working in a factory with your knowledge work. There's also higher scales of seasons, like longer time periods. And this becomes more clear to me as I get older, as I've actually made my way through more of these seasons.

I think of these larger, like the largest granularity of season I deal with is pretty close to a decade. And I think this is pretty relevant if you're having kids, right? Because so I think of my 20s as different than my 30s, as different than my current season, which is my 40s.

So in my 20s, for example, like one of the things I was trying to do if I'm thinking about professional objectives is trying to get on my feet professionally. It's like, I wanna be a professor, wanna be a writer. It's like, I wanna like lay those foundations and that's what I'm working on.

Putting in the time, putting in the skills. It was a lot of skill building, head down skill building. Like the stuff I was working on might not be publicly flashy, but writing the papers, learning how to be a professor, writing the books. There are student focused books, doing magazine writing, doing newsletter writing, just trying to get my writing skills up.

The three books I wrote in my 20s, each of them had an element that was more difficult than the one before, that I very intentionally added. So I was using the books to systematically push my skills, not to try to grow my career necessarily. I got the same advance for all three of those books.

My goal was not how do I become a very successful author in my 20s? It was how do I become a good enough writer where becoming a successful author is possible? And so that was my 20s, right? And that was largely successful, right? Because I got hired as a professor right when I turned 30 and my first sort of big hardcover idea book came out right when I turned 30.

"So Good They Can't Ignore You." All right, so then my 30s is a different season. So what I'm trying to do in my 30s is now we're having kids. So I had my first of my three kids when I was 30, right? So my wife and I are starting a family and professionally I was like, okay, now what I need to do professionally, what do I care about now?

When you have kids that age or you're starting to have babies, it's like, I wanna provide stability. And so it was really about like, okay, I wanna get tenure. I want my writing career to be successful enough that like it gives us financial breathing room. I wanna be a successful enough writer that like we're not super worried about money and have the stability of tenure.

Like those were the two things I wanna do. I wanna become a successful writer. Meaning, unlike in my 20s, these were smaller book advances. I mean, I don't always talk about numbers, but I'll tell you, like, the books I got in my 20s were all $40,000 book advances.

So by today's standards, these are very small advances. In my 30s, I was like, I need to now become a writer that gets like real hefty book advances, I need tenure. And beyond that, it's like trying to keep the babies alive. Right, so this was a frenetic period.

This is not a period of grand schemes. It's like, keep your head down, keep the babies alive, and make sure, you know, this baby is fed. You know, okay, my wife's traveling, so I need to like get the bottle. Like when's the nanny coming? Like all that type of stuff, get tenure, become a writer with like some financial heft, right.

And that was what my 30s were about. And that was largely successful. I got tenure, you know, five years later and my books became bestsellers. And now like I'm getting bigger book contracts and we could move to where we wanted to move. And, you know, okay, great, we got that all set up.

We're financially stable, the kids survived. I have tenure, you know, I'm a successful writer. Now my 40s is a different season. I'm not keeping babies alive anymore. Now I have elementary school age kids. This is much more about parenting. It's like being there in your kids' lives.

They need as much of my time as possible. They're developing themselves as people and I have all boys and they really want specifically dad time. So now parenting is this whole other thing. And in my work, like, well, you know, I got tenure and I became a successful writer.

So now when I think about professional goals in my 40s, they're much more ambitious in a sort of legacy way. Like, well, what do I want to be as a writer? Like, where do I want to, what do I want to do? Like, what do I want to work on?

Like, where do I want to leave my footprint, right? And this is a very different feel. What do I want to do in academia? Like, I was focused on getting tenure in my 30s. My 40s now, it's like, where's the like footprint I want to leave in the world of scholarship, right?

It becomes much more forward-thinking legacy. It's slower with my kids. It's not, how do I make sure that like every kid has picked up and got the milk when they needed it? Now it's like, how am I showing up in their lives in a way that like they're going to develop as good human beings?

So like in this current season, in the 40s, everything is more lofty or more legacy, more forward-thinking. It's slower and more philosophical and the depth is, there's more depth to it. So, you know, every season is different. So those life seasons could be at the scale of decades, but those are just as important to understand as the annual seasons and even the smaller scale seasons.

As for your second question, coordinating with your wife, what I found is like, what I hear from people, I found in my own life, it is really important that you and your wife have a shared vision of the ideal family lifestyle and that you are essentially partners working together to help get towards or preserve this ideal vision you have for what your family's life is like, where you live, the role, how much you're working, how much you're not working, what your kids are like, what their experience is with you.

You need a shared vision of this is what our team, our family, this is where we wanna be. This is what we think is what we're going for. Like my wife and I started making these plans as soon as we started having kids and they evolved, but we wanted a shared plan.

And then it's like, okay, now how are we both working towards this? What's gonna matter? Right, you need the shared plan. What happens if you don't have this? Well, you get the other thing, which is very common, especially among highly educated couples, which is we are both independently trying to optimize our careers and therefore see each other mainly from the standpoint of an impediment to my professional goals.

And we have this very careful tally board over here of like, mm-hmm, mm-hmm, you did seven units less of this. I did four more units less of this. So you get sort of potentially resentment, but even without the resentment, it's a huge stress and anxiety producer. Trying to individually optimize two careers without any approach to synergy or any shared goal of where you're trying to get your life writ large is a source of tension, right?

It's very difficult. There's these really cool configurations that might be possible for you and your family's life that will be missed if you're only myopically looking at your own career and saying, how do I just keep this going forward? How do I just maximize these opportunities? 'Cause ultimately what is gonna matter most for your satisfaction in life is gonna be the whole picture of what your life is like.

And so you need to be on the same page. You have your shared vision, and then you have your shared plan at different timescales. So how are we going to get closer to this vision? So what are we working on for the next five years? Like, where do we wanna try to get?

How are we getting there? Okay, this year, like, what are we both working on? What's our setup and configuration? What is the biggest obstacle we have to the shared vision of where we want our family to be? Oh, there's something about our work setups now that's incredibly stressful, and it means we're not like, our kid doesn't have this or this, do we think it's important?

Wait, maybe we need changes here. It opens up a lot of options when you're working backwards from a shared vision as opposed to working forwards from just what's best for me and specifically what I'm working on. So you gotta be on the same page. Whatever that vision is, be on the same page.

And again, as soon as I see couples do this, it opens up so many options for them in their lives. And it's a hard transition because, like, coming out of your 20s, it's all about, I need to maximize what I'm doing to get some sort of abstract yardstick of impressiveness.

It changes when you're older. What is my family trying to do? You know, where do we wanna be? What do we want, like, a typical afternoon to look like? What do we want our kids' experience to be like? What type of place do we wanna live? Like, what do we wanna be doing in, like, the evenings and afternoons?

Like, who's around? When you get these visions really nailed down and you work backwards from it, all sorts of creative options show up. And yes, there might be options where someone is not optimizing their potential professional achievement. It might be like, wait a second, I'm really good at this.

So I could, like, do this at half the hours and we could explore living over here. Like, you start to see these other options once you work together. So that's a good call. We have a case study coming up that actually is from someone who thought through these same issues.

So I think this is well-timed. All right, so as previewed, I wanna read a quick case study here from one of our listeners. It actually ends with a question. So it's a kind of a hybrid. All right, this case study is from Anna. A repeat writer to the show.

Anna said, "I wrote you a while back to ask about whether or not I should take a job at a startup, because I was bored at my cushy chief of staff job for a Silicon Valley tech company, where I only need to work part-time to fulfill my full responsibilities.

I decided not to take the startup job as you suggested." This is where it'd be unfortunate if she said, "And that startup was OpenAI, and you cost me $20 billion of stock options, you son of a bitch." No, she said, "By contrast, shortly after I made this decision, the startup went belly up." All right, so phew, we pushed her in the right direction.

"Next, I got a big promotion and pay raise at my current company and have even more reason to believe that they don't mind me working part-time. I do work remotely, which makes it easier. Now I'm getting bored again and I feel myself getting antsy. I decided to learn to paint part-time and learn a fourth language, all the while continuing to work less than 30 hours a week at a job I do enjoy, although it's not overly stimulating." All right, so we've got a kind of a cool case study there.

She resisted the urge to go to this high-stress job that was more impressive, and that turned out to be a good decision 'cause that company went belly up, and in her current company, she got more money and a promotion. She does, however, have a question. "How do I continue to go down this path without letting my ego get in the way of the cool life I had built?

Everyone at my company thinks that their job equals their life. I feel like there is this constant pull to believe this is the case. Will this feeling ever stop?" Well, Anna, it is hard. I've experienced this in parts of my life where I have been intentionally having my foot halfway on the brake, where, hey, in this part, I could go all out, and the other people I know are, and I'm not, and it's difficult.

I hear this a lot, in particular, from lawyers, right? There's this movement I really like right now, which remote work and the pandemic really exploded, of lawyers at big law firms leaving the partner track and leaving the office and saying, "I'm only gonna bill half the hours I did before, and you're gonna pay me commensurately less, and there's no expectation now that I'm trying, there's no ladder for me to go up anymore, but I'm really good at this particular type of law, and it's really useful to have me work on these cases, and so you're happy to keep doing it." And I live now somewhere completely different.

It's much cheaper than the big cities, so honestly, billing 50% fewer hours and working 35 hours a week, I'm making more money than anyone else in this town, and so this works out well. This is a movement I really like, and they also are struggling a lot with, "Yeah, but in my firm, if you make partner, that's a big gold star.

If you make managing partner, it's an even bigger gold star. We look at our bonuses and our salaries, and I feel like they're lapping me." So that's also a psychological issue. All right, so what helps here? Partially just recognizing that's just part of the trade-off. Ego, accomplishment, this person's doing better than me.

I think I'm smarter than that person, but they're moving ahead of me. That's never gonna go away, so you just have to see that as one of the things you're weighing against the benefits, but two, you need much more clarity, probably, on what the benefits are. This comes back to lifestyle-centric career planning.

You need, like we talked about with the last caller, this crystal clear understanding of what matters to you in your life and your vision for you and your family's life, and if your current work arrangement fits into that vision, which probably it does, Anna, because 30 hours a week with high salary opens up a lot of cool opportunities for your life.

If it's part of this vision that's based in your values and covers all parts of your life, and it's not just hobbies, it's not just like I'm trying to fill my time with hobbies. It's no, I have an aggressive vision for my life. We live here, I do this, I start this, I'm heavily involved in this.

It's a full vision of a life well-lived. Then it's much easier to put up with the ego issues. You say, yeah, but here's what I'm proud of is this whole life I've built, and I'm super intentional about it. It's a deep life, and my work is a part of making this deep life possible, and what I'm proud of is this really cool life that I built.

The more remarkable you make this vision, the easier you're gonna be dealing with the work ego issues. The more remarkable the deep life you craft that this high-paying 30-hour job is part of, the better you're gonna feel about it even when your colleagues at work are doing 80-hour weeks and making more money and getting more praise, because you say, what I'm proud of is not just my job.

It's this remarkable life I've crafted, my impact on my family and my community and these other things I'm involved in and the ability to just, whatever it is you care about. So I would say, Anna, make your vision of your life much more remarkable than I'm doing hobbies in my free time.

You need to lean into the possibilities of your life and do something that, and when I say remarkable, I mean that in a literal sense, that someone hearing about you would remark about it. Ooh, that's interesting, what Anna is up to, that you are a source of interested remark.

That's what you wanna get to. Now, it's possible when you do this exercise, the vision you come up with that's super deep and meaningful is gonna involve you actually doing a lot more work on something that's really important to you, and that's fine. But you just wanna have clarity about what I'm trying to do with my life.

And so it's a good question and a good case study, because just simplifying and slowing down without a bigger vision for what that slowing down serves can itself be complicated or a trap. If you slow down and simplify and then just find yourself, just trying to find hobbies to fill the time, your mind's like, what are we doing?

So you gotta lean into the remarkability of your vision here, Anna. Given all that you've already done, I have no doubt that you're gonna come up with something cool, so you'll have to write back in and let us know what you do. All right, so we have a final segment coming up where I choose something interesting that I've seen during the week to talk about.

But first, another quick word from a sponsor. (air whooshing) Let's talk about Shopify, the global commerce platform that helps you sell at every stage of your business, whether you've just launched your first online shop, or you have a store, or you just hit a million orders, Shopify can handle all of these scales.

They are, in my opinion, and from just the people I know, the service you use if you wanna sell things. Shopify powers 10% of all e-commerce in the US. They are the global force behind Allbirds, and Rothy's, and Brooklinen, and millions of other entrepreneurs of every size across 175 countries.

I can think of a half dozen sort of writer, entrepreneur friends of mine who sell merch or other things relevant to their writing empires that all use Shopify. And they love it because what it allows you to do is have this very professional experience for your potential customers, very high conversion rate.

It makes checking out very easy. It integrates so easily with other sorts of backend systems. I mean, Shopify is who you use if you wanna sell. And when Jesse and I start our online shop for deep questions, which I think is inevitable, it's gotta be inevitable, we are gonna use Shopify for sure.

- Yep. - Yeah. I think we're gonna use that for sure. - People are talking about it at the book event. - You know, multiple people mentioned at the book event, the VBLCP shirts. - Yeah. - People wanted those. Values-based, lifestyle-centric career planning. People are like, "Where's my VBLCP shirt?" And so when we sell those, we're gonna sell those 100% using Shopify.

All right, so sign up for a $1 per month trial period at shopify.com/deep. When you type in that address, make it all lowercase, shopify.com/deep, all lowercase. You need that slash deep to get the $1 per month trial. So go to shopify.com/deep now to grow your business. No matter what stage you're in, shopify.com/deep.

(cash register dinging) Also wanna talk about our longtime friends at Rhone, R-H-O-N-E. Here's the thing, men's closets were due for a radical reinvention and Rhone has stepped up to that challenge with their commuter collection, the most comfortable, breathable, and flexible set of products known to man. I really enjoy this collection for a couple of reasons.

A, it's very breathable and flexible and lightweight. So when I've got like a hard day of doing book events like I've been doing recently, I can throw on like a commuter collection shirt or commuter collection pants. Men often underestimate how much your pants contribute to comfort. Like a thick pair of jeans can get pretty uncomfortable after a long day of running around LA.

So I have lightweight, breathable, good-looking clothes, makes a difference. They have this wrinkle technology where you can travel with these things and the wrinkles work themselves out once you wear them. So you can look really sharp, even if you've been living out of a suitcase, let's say on a book tour.

It all looks really good. It has gold fusion anti-odor technology. Like it's just good-looking, incredibly useful clothes, especially when you have a very active day. The Rhone commuter collection is something I highly recommend. So the commuter collection can get you through any workday and straight into whatever comes next.

Head to rhone.com/cal and use the promo code Cal to save 20% off your entire order. That's 20% off your entire order when you head to r-h-o-n-e.com/cal and use the code Cal. It's time to find your corner office comfort. All right, let's do our final segment. So what I'd like to do in this final segment is take something interesting that someone sent me or I encountered, and then we can talk about it.

So today I actually want to go back to an email I mentioned in the opening segment about AI. Someone had sent me an email about General Ulysses S. Grant. This was Nick, hat tip to Nick. He sent me a scan from a book.

I put the title page up on the screen for those who are watching the video. The book, it's an older book, "Campaigning with Grant," written by General Horace Porter, right? So this is a sort of a contemporaneous account of what it was like being with General Grant during the Civil War.

There's a particular page from this I want to read, page 250. All right, so this is describing the general's actions in camp. He would sit for hours in front of his tent or just inside of it looking out, smoking a cigar very slowly, seldom with a paper or a map in his hands, and looking like the laziest man in camp.

But at such periods, his mind was working more actively than that of anyone in the army. He talked less and thought more than anyone in the service. He studiously avoided performing any duty which someone else could do as well or better than he. And in this respect, demonstrated his rare powers of administrative and executive methods.

He was one of the few men holding high position who did not waste valuable hours by giving his personal attention to petty details. He never consumed his time in reading over court martial proceedings or figuring up the items of supplies on hand or writing unnecessary letters or communications. He held subordinates to a strict accountability in the performance of such duties and kept his own time for his thought.

It was this quiet but intense thinking and the well-matured ideas which resulted from it that led to the prompt and vigorous action which was constantly witnessed during this year, so pregnant with events. Now, we actually talked about this in my interview with Ryan Holiday on his Daily Stoic podcast, which was released over the past two weeks in two parts.

We talked about General Grant. And the point Ryan made, which is a good one, is that it's useful to contrast Grant with General McClellan who preceded Grant. McClellan was the opposite of quiet and deep and focusing on what mattered and thinking hard about doing that well. McClellan, by contrast, was all activity.

We gotta maneuver the troops, I gotta write some letters, I gotta do this, let's go over here, we gotta make sure that this is working here. He was in constant activity, a consummate bureaucratic player, but he never actually pulled the trigger and made the attacks that mattered. And finally, Lincoln said, "I'm sorry, McClellan, enough.

Let's give this Grant guy a try." And Grant got it done, won the war, right? And so I think there's a really useful point in here about busyness. And of course in today's digital age, busyness has never been more amplified or pronounced. It is not ultimately the busyness that wins the proverbial war.

It is not the reading over the court martial proceedings and writing letters and running around and talking to people and giving your thoughts and doing useless maneuvers. That doesn't win the proverbial war. It's sitting down and thinking hard and getting to the core of it. This is what matters.

Now let's go make this move. And you make the moves that win. You win the battles, you win the war. And that is an act of slowness. That is an act of slowing down, focusing on what matters, giving it careful attention, minimizing the non-important, and then pulling the trigger and executing on what matters and repeating.

So in General Grant, we see a great demonstration of the power of slow productivity. In the moment, it might make you look like, and I'm quoting here, "the laziest man in the army." But when you zoom out, you're the hero who won the war. So I think that's a really cool example of a point that's all throughout my book, Slow Productivity, slowing down, focusing on what matters, doing fewer things, having a natural pace, but obsessing over the impact and quality of what you do.

That is the formula for making a difference. You don't win wars through activity, you win wars through smart strategy, and that requires quiet, that requires slowness. So Nick, thank you for sending me that excerpt. I think it's a great case study that some of the best ideas are some of the oldest ideas.

There's nothing new under the sun. General Grant knew about slow productivity, and now we do as well. All right, Jesse, I think that's all the time we have. Thank you, everyone, for listening or sending in your questions, et cetera. I guess I should mention calnewport.com, no, thedeeplife.com, rather. Thedeeplife.com/listen is where the links are for submitting questions and calls, so please go do that.

Send your topic ideas to jesse@calnewport.com. And buy my book, "Slow Productivity." Find out more at calnewport.com/slow. We'll be back next week, and until then, as always, stay deep. Hey, so if you liked today's discussion about AI being able to answer emails, I think you'll also like episode 244, where I give you more detailed thoughts on ChatGPT, how it works and what it can and can't do.

Check that out. That is the deep question I want to address today: how does ChatGPT work, and how worried should we be about it?