You Were Never Meant to Work 8 Hours a Day – Here’s the Fix | Cal Newport

Chapters
0:00 How Much Should We Work?
25:15 What’s your advice on having a career backup plan?
27:28 Do you know of any executive coaches who teach your principles?
32:04 Is my retirement plan too ambitious?
35:38 Are accountability support tools acceptable to use in order to build discipline?
38:17 How can I identify if I have an inventory of "rare and valuable skills"?
40:08 Crafting a storytelling profession
43:13 Creating a life dashboard
49:00 Why Can’t We Tame AI? (Cal’s latest New Yorker article)
00:00:00.500 |
So something we hear casually tossed around these days is the idea that knowledge workers are more 00:00:05.200 |
exhausted and prone to burnout than ever before. Anecdotally, this seems to be true, but does the 00:00:10.740 |
data back it up? Well, it does. If you look at survey data of knowledge workers, this trend seems 00:00:16.280 |
to be clear. For example, 2025 research by the consultancy Censuswide found that 66% of 00:00:23.900 |
American employees are experiencing some sort of burnout. This was the highest number they had 00:00:28.060 |
recorded. Another survey from 2025 found that 82% of respondents reported experiencing some level of 00:00:35.300 |
exhaustion. I kept finding study after study along these lines. They are common and the trend lines are 00:00:40.720 |
heading in the wrong direction. So I want to talk about this today and I have two goals. First, I'm 00:00:45.400 |
going to focus on a specific factor that I think helps explain this trend. It's not the only factor 00:00:51.400 |
that explains this trend, but it is an important one. And it's one I think that we don't talk about 00:00:55.460 |
enough. Second, I'm going to then follow the logic. If this factor is true, where would that lead us in 00:01:03.980 |
terms of how we should be rethinking work? And I'm going to make a suggestion about what I think the 00:01:08.500 |
ideal knowledge work workday should look like for a normal human being. It's an answer that might be 00:01:14.040 |
shocking, but if you give it some time, I think it actually makes more sense than you might at first 00:01:18.720 |
realize. So to get started, I want to start in an unexpected place. It's a quote I came across 00:01:26.340 |
recently that sparked the whole line of thinking that led to this deep dive. It was a quote that was 00:01:33.860 |
written down in a journal 150 years ago, and it might not seem to have anything to do with work, 00:01:39.920 |
but I think when we look at it closely, it has everything to do with work today. All right, 00:01:44.660 |
I'm going to pull this quote up here on the screen for those who are watching instead of just 00:01:49.000 |
listening. This quote comes from Henry David Thoreau's journal. He wrote this journal entry in 1852. 00:01:54.820 |
This was in the period after he had already spent his time at Walden Pond, but before he had published 00:02:01.280 |
his book on the experience. So he was sort of processing it. Here's what he wrote. 00:02:07.660 |
I have the habit of attention to such excess that my senses get no rest, but suffer from a constant 00:02:14.080 |
strain. Be not preoccupied with looking, go not to the object, let it come to you. What I need is not 00:02:22.440 |
to look at all, but a true sauntering of the eye. Okay, let's think about this a little bit. What did he 00:02:30.200 |
mean by this? Well, to get some analysis that's going to help connect this to work, I'm going to turn to 00:02:35.400 |
the source where I found this journal entry, which was in Caleb Smith's recent book, Thoreau's Axe, 00:02:41.220 |
which I read a couple months ago and we talked about on this show. The author of this book, 00:02:45.300 |
Caleb Smith, in analyzing Thoreau's journal entry that I just read, said the following. 00:02:50.960 |
At Walden, Thoreau had begun by trying to undistract himself, to reawaken his own powers of perception and 00:02:58.400 |
refocus his attention on natural, uncommodified objects of contemplation. By doing so, he had hoped to 00:03:04.080 |
free himself from the degrading cycle of labor and consumption that organized middle-class life under 00:03:08.240 |
market capitalism. In the long run, though, he found that this very effort exhausted his senses and 00:03:13.240 |
trapped him in a, quote, habit of attention, end quote. All right, this is a little bit academic. 00:03:18.880 |
That's from an academic book, but let me deconstruct it because there's a critical point in 00:03:22.900 |
here. When Thoreau went to Walden Pond to do that experiment of living deliberately by the water, 00:03:28.420 |
he thought that his problem was he was paying attention to the wrong things, 00:03:33.640 |
right? That he was focusing too much on consumer and materialistic goods, and that by removing those 00:03:42.520 |
from his life and focusing on the wonders of nature, for example, which he describes in sort of great 00:03:47.780 |
loving detail in his book, Walden, that he would find a sort of greater purity in his existence. He was sort 00:03:54.260 |
of playing here with the ancient monastic Christian notion of turning your attention from the worldly 00:03:58.660 |
to the heavens above and contemplating hard, you know, God and divinity. But in this journal entry 00:04:05.160 |
that comes after he leaves Walden, he's realizing that the bigger problem was how much time he was 00:04:11.780 |
spending paying attention in the first place. So he was discovering that this idea of, I need to focus 00:04:19.660 |
really hard on the right thing, like, focus on the ice that I'm going to write about in Walden, 00:04:23.660 |
focus on the clouds up above, that that act of paying so much attention itself was unnatural, 00:04:30.480 |
regardless of what he was paying attention to. There's periods where you just need to let the 00:04:36.780 |
world be around you, let objects come to your attention, and then drift off. This might give you 00:04:41.980 |
deep insight. Sometimes it might not, but it gives you a deeper sense of contentment. 00:04:47.840 |
So he had this revelation: there's only so much we can force ourselves to pay attention. We're 00:04:54.780 |
actually wired to let the world come to us, just to be there in the world for a lot of our time. 00:05:00.360 |
All right, let's connect this to knowledge work. There is a through line here. In his philosophizing, 00:05:06.900 |
right, Thoreau was stumbling on this important point about paying focused attention being to some degree 00:05:11.420 |
unnatural or something we can only do a limited amount. And yet, isn't this exactly what knowledge 00:05:18.340 |
work demands of us? Knowledge work demands that we be focusing and paying attention to things 00:05:24.040 |
with our brain the entire day. Well, where did this come from? Well, here's a story to tell. 00:05:30.980 |
With the rise of the Industrial Revolution and factory labor, we invented a new model of productive work 00:05:40.860 |
in which what you're essentially doing is renting your body to a factory owner. So in industrial work, 00:05:46.960 |
which was what was big in the time when Thoreau was around, this was the model we had. Hey, I'm going 00:05:51.920 |
to rent my body to you, the factory owner, for eight hours. It would have been more before, but after, 00:05:57.780 |
you know, federal labor reform standards, early 20th century, it'd be something like eight hours. 00:06:01.540 |
And then you could basically tell me what to do with that. I need your arms to turn these 00:06:06.060 |
knobs and to attach these steering wheels or to move these things off the assembly line. 00:06:11.420 |
And you're paying me for use of my body. You're going to tell me exactly how to use it. And if I 00:06:16.240 |
say, hey, I'm going to go take a two-hour break, like, great, we're not using your body, then we're not 00:06:20.520 |
paying you for those two hours, right? So I'm renting your body to help make our factories actually run 00:06:26.320 |
because we need human bodies doing things in there. So when knowledge work emerged as a major economic 00:06:30.860 |
sector in the mid-20th century, the decision implicitly was like, well, we'll just do the 00:06:36.240 |
same thing with brains. It just made sense. It was a natural leap. In the factories, we rent your bodies 00:06:41.440 |
for eight to 10 hours a day, and the offices will rent your brains for eight hours. Your brain is ours 00:06:46.540 |
for those eight hours, so it better be doing the cognitive equivalent of turning the wrench or attaching 00:06:53.480 |
the steering wheel for those eight hours. So you've got to be doing things with your brain the entire 00:07:01.560 |
time that you're working. So this made sense from, I don't know, an intuitive perspective. It's just what 00:07:07.580 |
we did in factories. Why not do it in offices? But as Thoreau had pointed out 150 years ago, you can't do 00:07:15.960 |
that. You can sit in a factory and move things off an assembly line for eight hours a day, but you can't 00:07:21.640 |
easily pay attention to things for eight hours a day. It's unnatural, and it's draining, and it is 00:07:26.860 |
exhausting. It is a very unnatural configuration to be in. Now, why then are these numbers getting worse? 00:07:33.140 |
If this was our idea early in the mid-20th century, when we had all these big factories and the offices were built 00:07:37.400 |
and we said, let's run the offices the same way as a factory, why is this burnout and exhaustion getting 00:07:41.100 |
worse today than it was back then? Wouldn't it just be bad from the very beginning? Well, I think 00:07:45.460 |
technology has a role to play in this story. So while it was true in 1965, and you're working 00:07:54.440 |
on Madison Avenue, like in the Jon Hamm show, Mad Men, that your brain was being rented for eight 00:08:00.100 |
hours, it was easier to fake it than it was in industrial labor. In industrial labor, I can tell 00:08:05.820 |
if you were not doing your thing on the assembly line. I can see you not turning the wrench. What's going 00:08:09.640 |
on? But if I'm Don Draper in Mad Men, come on, I can have a cocktail at two in the afternoon for an 00:08:17.600 |
hour. We're like working in the conference room to think about ideas, but not really. When the boss 00:08:22.020 |
comes by, you put down your magazine, and you're at the water cooler talking football, and the supervisor 00:08:26.760 |
walks in, and you quickly shift to client talk. So we all kind of pretended like this is what we 00:08:31.480 |
were doing. Yeah, we're renting our brains just like our brothers in the factory. It's the same thing 00:08:34.900 |
we're doing. We're renting our brains for eight hours, and we're here, and we're thinking the whole 00:08:37.960 |
time, but we really weren't doing that, and you could kind of get away with that. 00:08:41.020 |
Then technology came along, and we lost our ability to fake it. The personal computer, as I've talked 00:08:47.580 |
about before, and I wrote about this in my book, Slow Productivity, the personal computer came along, 00:08:52.340 |
and it vastly increased the number of things you could be working on. There's so much more work that 00:08:56.260 |
could be assigned to you because there's so many more things that one individual could do because the 00:09:00.180 |
computer made so many tasks just easy enough that one person could learn to do them. This led to a 00:09:05.640 |
theory of workload that said, well, great. Now that there's so many different things you can do, 00:09:09.120 |
why not just make your queue hopelessly large, with the idea that there'll be no downtime? So there'll 00:09:15.120 |
always be something you can work on, right? So we were kind of calling the bluff of this renting your 00:09:20.680 |
brain model. We're like, okay, if we really have your brain for eight hours, now that there's like an 00:09:23.800 |
endless amount of stuff you could be doing, let's put so much on your plate that you never have an 00:09:27.460 |
excuse not to be working. Then we got the digital communication revolution that started with email and was 00:09:32.100 |
continued by things like Slack and Teams, et cetera. And that now made it possible to check in on you 00:09:40.000 |
applying effort at an incredibly fine level of granularity. Answer my emails right away so I know 00:09:45.040 |
you're paying attention. If you don't, then maybe you're stealing from us. We rented your brain. You're 00:09:49.820 |
not giving it to us. We need to see activity going back and forth on the Slack channel, or we're going to think 00:09:54.480 |
that you are actually, ironically, slacking off. So we vastly increased what people could be doing, 00:09:59.380 |
vastly increased the granularity at which we could surveil people's efforts, and suddenly 00:10:02.900 |
this half-baked idea, let's just rent brains for eight hours. We actually tried to do it. 00:10:08.060 |
And that's why it's exhausting everyone. And Thoreau warned us about it. He wasn't warning us about 00:10:13.320 |
office work, but he was warning us about the underlying principle that, hey, it's hard to pay 00:10:18.680 |
attention all the time. Even if the thing you're paying attention to is good, there's only so much of 00:10:22.340 |
that you can do. Hey, it's Cal. I wanted to interrupt briefly to say that if you're enjoying 00:10:27.380 |
this video, then you need to check out my new book, Slow Productivity: The Lost Art of Accomplishment 00:10:34.500 |
Without Burnout. This is like the Bible for most of the ideas we talk about here in these videos. 00:10:41.860 |
You can get a free excerpt at calnewport.com/slow. I know you're going to like it. Check it out. 00:10:49.860 |
Now let's get back to the video. So what would it look like if we rolled back the clock and said, 00:10:55.380 |
look, Thoreau is my management consultant of choice, and we're going to design what I'll call the Thoreau 00:11:00.660 |
schedule, a schedule for knowledge work, a typical daily schedule that actually is compatible with the 00:11:08.420 |
way the human brain actually functions. Here's what this would more or less be. Two to three hours of 00:11:15.140 |
deep work in the morning, working on one or two things that are important and require application of 00:11:19.380 |
skills and concentration. Then a non-trivial amount of time off, like an hour- or two-hour-long walk or 00:11:26.100 |
other types of things unrelated to work. Then one to two hours of administrative work, including a standing 00:11:31.860 |
30-minute meeting or office hours to check in with all sorts of other people and get questions answered 00:11:35.860 |
and any other sort of things that need to happen. And then you're done. 00:11:40.260 |
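For concreteness, here is a minimal sketch of that day rendered as a time-block plan, anticipating the vibe-coded personal tools discussed later in the episode. The specific durations, labels, start time, and the choice of Python are illustrative assumptions picked from the ranges just described, not a prescription from the episode:

```python
from datetime import datetime, timedelta

# A hypothetical rendering of the "Thoreau schedule" as time blocks.
# Durations are illustrative picks from the ranges described above.
THOREAU_SCHEDULE = [
    ("Deep work: one or two important projects", timedelta(hours=3)),
    ("Off: long walk, nothing work-related", timedelta(hours=2)),
    ("Admin batch, incl. 30-min office hours", timedelta(hours=2)),
]

def print_schedule(start: datetime) -> None:
    """Print each block with concrete clock times, then the total."""
    t = start
    for label, length in THOREAU_SCHEDULE:
        print(f"{t:%H:%M}-{(t + length):%H:%M}  {label}")
        t += length
    total = sum((length for _, length in THOREAU_SCHEDULE), timedelta())
    print(f"Done for the day after {total} on the clock.")

print_schedule(datetime(2025, 1, 6, 9, 0))  # e.g., a day starting at 9:00
```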
That would actually be the schedule that best corresponds to the human brain. If you want to 00:11:46.580 |
invent a cognitive job that's based just off people using their brain, the Thoreau 00:11:52.580 |
schedule there is probably the optimal thing. Now, the obvious point, of course, is that if you ran the 00:11:57.860 |
Thoreau schedule, yes, you would be much less likely to be burnt out. If you ran this as an organization, 00:12:02.820 |
your exhaustion numbers, your turnover, your burnout numbers, of course, would plummet. That's a much more 00:12:06.340 |
natural rhythm. It works with our brain. We're not exhausting ourselves. But would it be a productivity 00:12:11.060 |
disaster? We don't know that it would be. Here's a couple things to keep in mind. Remember from a couple 00:12:18.100 |
weeks ago, we talked about the data from the four-day work week studies. They reduced the amount of time 00:12:24.980 |
people had to work, therefore reducing the amount of stuff they could work on. And all of the quantitative 00:12:30.420 |
productivity measures that they studied didn't go down. In some cases, they went up. In Slow 00:12:36.100 |
Productivity, I talk about the idea of overhead tax aggregating. Everything you're working on generates 00:12:42.740 |
an overhead tax of meetings and emails and thoughts, the sort of the collaborative glue that holds together 00:12:47.460 |
any sort of project. The more things you're working on, the more of that is in your day. And the more 00:12:53.620 |
of that you get in your day, eventually your day is going to be gunked up with the overhead tax 00:12:57.460 |
with very little time left to do the actual work. Yes, you're busy, but very little actual quality 00:13:03.620 |
output gets produced. So if you're only working one big block in the morning on one or two things, 00:13:09.940 |
and then some administrative work in the afternoon, yes, your number of things you can concurrently be 00:13:13.540 |
working on is going to be much smaller. It doesn't necessarily mean that on the scale of a quarter 00:13:19.140 |
that you'll be producing less than in a much busier, non-Thoreau schedule. 00:13:23.300 |
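To make the overhead-tax logic concrete, here is a toy model with assumed numbers: the 40-hour week and the 4 hours of weekly overhead per project are hypothetical illustrations, not figures from the episode or from Slow Productivity.

```python
# Toy model of the overhead tax (illustrative numbers):
# each concurrent project costs a fixed weekly overhead of meetings,
# email threads, and check-ins before any real output happens.
WEEK_HOURS = 40
OVERHEAD_PER_PROJECT = 4  # assumed hours/week of collaborative glue

def deep_hours(projects: int) -> int:
    """Hours left for actual output after paying the overhead tax."""
    return max(0, WEEK_HOURS - projects * OVERHEAD_PER_PROJECT)

for n in (2, 5, 8, 10):
    print(f"{n:2d} concurrent projects -> {deep_hours(n):2d} h/week of real work")
# 2 -> 32, 5 -> 20, 8 -> 8, 10 -> 0: ever busier, ever less produced.
```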
And also keep in mind that the impossibility that Thoreau points out of actually focusing 00:13:28.500 |
productively and hard for eight hours means that we're just using lots of pseudo productivity to 00:13:32.580 |
fill in the gaps during our day. And pseudo-productivity, which is just using busyness 00:13:37.620 |
as a proxy for useful effort, has little to do with outcome, has very little to do with the 00:13:42.820 |
needle moving actual valuable stuff that is produced. So a Thoreau schedule might be largely 00:13:50.580 |
just eliminating pseudo productivity, which means again, when you zoom out to a quarter or to a fiscal 00:13:54.900 |
year, the amount of value produced, the actual macroeconomic measure of productivity, which is how many 00:14:02.020 |
dollars come in per employee hired, could actually stay the same or go up. 00:14:06.900 |
I mean, maybe not, but I suspect it would not be as bad as you might think. 00:14:11.460 |
So to conclude, most major companies and organizations, they're not going to shift to a Thoreau schedule 00:14:15.620 |
anytime soon. I think this idea of we're renting your brain and we want to get our money's worth for 00:14:20.180 |
this rental agreement is both obvious and comfortable and entrenched, and it's going to be hard to 00:14:25.380 |
shake off. But if they did, and I think they should, there would be a whole new relationship 00:14:32.420 |
to knowledge work and a whole new approach to productivity that might be uncovered. 00:14:36.340 |
I don't know exactly what type of boss Thoreau would be in our modern knowledge work setting, 00:14:42.020 |
but I imagine that he wouldn't be sending you endless emails and text messages saying, 00:14:46.580 |
did you get my last message? I think he would instead understand the importance of allowing your eye 00:14:52.660 |
to saunter. So there you go, Jesse. Thoreau as a management consultant. That was the original title 00:15:00.100 |
of Walden. People don't know this. It was Walden, colon, seven habits for getting ahead in your job. 00:15:06.500 |
And then he kind of changed it to something a little bit more philosophical. 00:15:09.380 |
That's one of your favorite books, right? Yeah. Walden is very influential to a lot of my work. 00:15:14.500 |
And you read that when you were at MIT? I originally read it, yeah, 00:15:17.860 |
at MIT, on the banks of the Charles. I have a terrible memory for everything 00:15:22.980 |
except for books. And I read it on the banks of the Charles. I had a version from the science library. 00:15:27.700 |
I think it was called the Hayden Science Library at MIT. And they had an edition that had a Bill 00:15:31.460 |
McKibben introduction. And I remember being like, oh, okay, interesting. And I've been to the cabin and 00:15:37.780 |
it's a great book. You know, look, it's not written in a modern style. You have to take it slow. 00:15:43.380 |
It seems digressive and he's sort of all over the place, but it's really smart 00:15:52.660 |
commentary that's in there once you adjust to the language, and there's a lot of timeless 00:15:56.260 |
principles in it. Two other follow-up questions. Do you think there's a big difference between public 00:16:01.300 |
and private companies in terms of potentially adopting a Thoreau schedule? 00:16:05.620 |
I think it's small versus large. Yeah. I think it just gets too entrenched; 00:16:11.380 |
as you get bigger, it's hard to make changes. This is one of the things I've really learned: 00:16:15.060 |
it's hard to change how organizations execute. It's hard to do it. If you come just top down, 00:16:23.220 |
it almost never works because there's all these little friction points. And if it feels like it's 00:16:27.620 |
imposed, like the boss of the massive company says like, here's, you know, look, I read Cal Newport, 00:16:31.860 |
we're making all these changes. There's too many friction points. And then the, the friction builds 00:16:35.620 |
up and the, the system seizes. Then you have to fall back to the super flexible sort of like 00:16:40.500 |
hyperactive hive mind overload, you know, mindset. And then managers themselves, they're not the boss, 00:16:47.380 |
but like the managers, they really don't have an incentive, right? The reality 00:16:53.140 |
of modern capitalism is, if you're in a large organization, things like stability and 00:16:59.380 |
predictability are more highly valued than trying to innovate the way that you actually 00:17:06.260 |
work or collaborate with other people. There's a lot of entrenchment 00:17:09.860 |
of, hey, this works, like we all agree, right? We'll just be pseudo-productive. We know how 00:17:14.180 |
to do this. I'm good at it. If I'm a manager, that means I was good at it. I answer all the emails and 00:17:18.820 |
jump on all the calls. I have a high pain tolerance for that. And who wants to 00:17:22.900 |
change it? So these problems are hard to change. But like I said, we see this all the 00:17:27.540 |
time. There'll be technological breakthroughs in industrial manufacturing. It takes a long time 00:17:31.380 |
until the changes actually happen, just because of inertia. The electric motor is a famous case study 00:17:36.500 |
from a Stanford economist that's talked about all the time. I've cited it, everyone's cited it. It took like 00:17:41.300 |
50 years from having the technology there to realizing, wait, we should just put small electric motors on the 00:17:46.660 |
equipment in the factory, as opposed to having a large overhead shaft turned by a giant 00:17:50.580 |
steam engine attached with leather belts. It still took 50 years, because it was a pain and 00:17:56.100 |
it was new and it was different and expensive. Uh, the continuous motion assembly line is the same way, 00:18:00.260 |
by far a better way to make things like automobiles, but it was such a pain to get right that, you know, 00:18:06.900 |
it really took, eventually, Henry Ford just forcing the change through 00:18:11.460 |
for that to actually happen. So I think knowledge work has a lot of that entrenchment. 00:18:15.620 |
And then my next follow up. So we talked about summer schedules a few weeks back. 00:18:19.540 |
So what does your summer schedule, like a daily schedule, look like for you? 00:18:24.580 |
It's been pretty good. I mean, it's very Thoreau. On a non-podcast day, I write in the 00:18:29.780 |
morning for two to three hours, at least. I'll work on multiple different projects, 00:18:35.460 |
and then I try to do some admin in the afternoon, professional admin, but not every day, like three 00:18:41.940 |
days a week. I can keep on top of most professional admin in 30 to 45 minutes. 00:18:48.100 |
I hardly write in the afternoon. Sometimes I can do a second batch in the 00:18:53.620 |
evening; I can sometimes make that work if I come here and make a deal about it. But no, 00:18:57.860 |
I do it mostly in the morning. The thing I have been doing, just because I have more time, is 00:19:02.260 |
every time now I go into my emails professionally, I'm updating my filters, just unsubscribe, 00:19:08.020 |
filter, unsubscribe, filter. I mean, if I'm relentless for two weeks, 00:19:12.260 |
I can cut down the junk that comes to those email inboxes. Not calnewport.com, though. 00:19:16.340 |
That's a lost cause. That's just done. I don't know how I'm ever going to get my arms around that 00:19:21.060 |
one. Every PR agency that ever existed now sends emails to that. But in my personal 00:19:26.660 |
and Georgetown inboxes, it's really making a difference. I'm catching up on the various 00:19:30.740 |
things. I bought a product from this once, and now I have 17 email subscriptions from them, 00:19:35.700 |
like that type of stuff. And it's making a difference. One more follow-up: with 00:19:40.580 |
your writing two to three hours a day, you write very fast. So you just bang out something two to 00:19:46.340 |
three hours probably. It depends what it is. Yeah, it depends what it is. 00:19:50.820 |
Like recently, I have a New Yorker piece that came out last week, and I wrote it in like two days. But you 00:19:59.700 |
write in your head a lot too, when you're walking around. I do write in my head, yeah. But for the New 00:20:03.300 |
Yorker, it took me two days to do 2,300 words. And those were more like eight- 00:20:08.060 |
hour days of writing, just because I was on deadline. But other things, like the newsletter post about 00:20:12.260 |
that New Yorker article. So today there's a new newsletter post out about that New Yorker article 00:20:16.500 |
at calnewport.com. That took like 30 minutes; that was just, you know, brain to page. Yeah. Just, 00:20:21.860 |
you know, I was like, I've thought about this so much. I just lived with this for the last four days. 00:20:25.220 |
I can just, you know, that's just gonna, that's just gonna come out. So it just depends what I'm, 00:20:29.700 |
depends what I'm working on. Uh, sometimes like not much happens if it's like a really tricky piece, 00:20:34.180 |
But also, yeah, I think of walking as writing, because I write in my head. I never 00:20:38.260 |
want to come to a blank screen saying, I'll figure this out on the screen completely, 00:20:42.820 |
like, just as I start typing, maybe I'll figure out what I want to say. You're going to say bad things that way. 00:20:46.980 |
You've got to figure out the scaffolding of what you're going to write. And then you sit down 00:20:52.980 |
to write, and then things might change, but I do a lot of that on foot. All right, we got some 00:20:58.660 |
good questions coming up, but first let's hear briefly from a sponsor. I want to talk about 00:21:03.620 |
actually a product I use every single day. This is Element, L-M-N-T. Element is a zero sugar, 00:21:09.460 |
electrolyte drink mix and sparkling electrolyte drink born from the growing body of research, 00:21:13.700 |
revealing that optimal health outcomes occur at sodium levels two to three times government 00:21:18.020 |
recommendations. Each serving delivers a meaningful dose of electrolytes, but without the junk, 00:21:24.260 |
right? Without the sugar or the other types of food dyes or other dodgy ingredients, you can trust this. 00:21:30.100 |
It gets you the electrolytes you need to rehydrate without any of that other junk. In Washington DC, 00:21:35.700 |
where the humidity level in the summer is right around, I believe the official reading is 7,000%. 00:21:41.540 |
Actually, I just learned the right way that people, actual weather experts, talk about humidity is 00:21:47.300 |
dew point. Percent humidity is for suckers. Oh, really? 00:21:50.820 |
Dew point is what matters. That's the actual number. So anyways, the dew point here is, 00:21:55.060 |
I don't know if a high one is good or bad, so I don't know what to say there, but it's very humid 00:22:00.180 |
here. I drink a lot of Element. I'll have it in the morning if I feel dehydrated, and I for sure have it 00:22:04.980 |
during my workouts or on long walks. I usually mix it in with one of those liter Nalgenes or two-liter, 00:22:11.220 |
you know, the big Nalgenes full of ice water. I'll go through a lot of those. We literally do 00:22:15.140 |
go through this stuff really fast. I love it because it keeps me hydrated, and sometimes like 00:22:20.660 |
I don't want just plain water. I know I'm craving salt, and I think it's because I'm sweating all 00:22:25.060 |
the time. So Element is the best if you want one of these like no dodgy ingredients, right amount of 00:22:29.540 |
electrolytes. I swear by it, and I really recommend it. Look, you can receive a free sample pack with any 00:22:38.100 |
order if you purchase through drinkelement.com/deep. And when you're there, you should check out a new 00:22:43.300 |
flavor, a great summer flavor: Lemonade Salt. Squeeze the most out of summer with Element's new limited- 00:22:50.180 |
time Lemonade Salt. Salty, tart, and refreshing, it brings you the best of summer wherever you are. 00:22:54.180 |
I'm literally craving that right now. I don't think I've had enough hydration this morning. I wish I had 00:22:57.700 |
a lemonade salt element right now. We're just going to sit here on dead air until Jesse goes and finds that 00:23:03.940 |
for me. It's going to take like 15-20 minutes, but that's what we're going to have to do. 00:23:06.900 |
Look, it's totally risk-free. You know, if you don't like Element, you can return it and get your money 00:23:13.300 |
back, etc., but you're not going to. You're going to like it. So go to drinkelement.com/deep. You need 00:23:18.340 |
Element, buy Element, get a free sample pack if you order at drinkelement.com/deep. I also want to 00:23:24.900 |
talk about our friends at Vanta. Trust isn't just earned, it's demanded. Whether you're a startup 00:23:30.020 |
founder navigating your first audit or a seasoned security professional scaling your GRC program, 00:23:35.380 |
proving your commitment to security has never been more critical or more complex. That's where Vanta 00:23:41.700 |
comes in. Businesses use Vanta to establish trust by automating compliance needs 00:23:47.460 |
across over 35 frameworks like SOC 2 and ISO 27001. They centralize security workflows with it, 00:23:54.500 |
they complete questionnaires up to five times faster, and they can proactively manage vendor risk. 00:23:59.220 |
Vanta can help you start or scale your security program by connecting you with auditors and experts 00:24:03.860 |
to conduct your audit and set up your security program quickly. Plus, with automation and AI 00:24:08.340 |
throughout the platform, Vanta gives you your time back so you can focus on building your company. 00:24:13.620 |
Join over 9,000 global companies like Atlassian, Quora, and Factory who use Vanta to manage risk 00:24:20.020 |
and improve security in real time. My listeners get $1,000 off Vanta at vanta.com/deepquestions. That's 00:24:26.660 |
V-A-N-T-A.com/deepquestions for $1,000 off. Security is complicated these days, Jesse. I think 00:24:32.980 |
our setup is probably technologically not complicated enough yet. People don't know this, but actually, 00:24:39.460 |
like the whole digital Cal Newport empire, true story, it's run off of a Speak & Spell. 00:24:44.740 |
So we really were kind of behind the times. A Speak & Spell and one of those Casio calculator 00:24:49.620 |
watches. So I think we need some more. Yeah, you got yours. Jesse wears a watch that I literally owned 00:24:55.620 |
when I was seven. I mean, not that literal watch, but that exact brand. And in fact, my watch, my, 00:25:01.940 |
was it the Timex? Casio. Casio. Yeah. I'm sure mine still runs if I could find it. Yeah, there you go. 00:25:09.220 |
But Jesse's a bad haggler and he actually spent $17,000 on that watch. He kind of got taken for 00:25:15.860 |
a bit of a ride, but you know, it's worth it. It's a beautiful watch. All right, Jesse, let's do some questions. 00:25:19.780 |
First question's from Natasha. What's your advice on having a career backup plan, especially for young 00:25:28.180 |
people aiming for highly competitive fields like academia? What makes for a good backup plan in 00:25:32.900 |
case things don't work out, even if I follow your advice? 00:25:36.580 |
It's a good question. This is where career capital theory matters. This is the whole 00:25:40.900 |
theoretical framework I lay out in my book, so good they can't ignore you. So career capital, 00:25:45.700 |
as we talk about a lot on the show, is my term for your marketable skills, your rare and valuable 00:25:51.700 |
skills. That is your source of value and negotiation in the job market. So whatever you're working on, 00:25:57.780 |
you should be thinking about not just what specific job am I trying to get. You should also be thinking 00:26:04.180 |
about what general type of career capital am I accruing. And you want to make sure that not 00:26:08.420 |
only are you accruing as much as possible, which means not just doing, you know, the 00:26:12.820 |
bare minimum to get the job, but saying, I want to build skills that are hard and really 00:26:17.300 |
valuable, where people say, wow, you're good at that, that's valuable. You want to build up that 00:26:21.460 |
general career capital. And then your question just becomes what other types of jobs would value and 00:26:28.580 |
reward this capital. And you want to make sure the answer to that second question is sufficiently long. 00:26:34.980 |
So whatever you're working on, ask: is there a way to go for this such that the capital I'm building up to 00:26:40.580 |
try to succeed at this particular job is going to have a couple of other outlets where people would 00:26:45.780 |
value it? Because often, in particular pursuits, there's different ways to do them. 00:26:50.260 |
Some ways are going to use more capital than others. 00:26:53.780 |
Like in academia, maybe there's a very esoteric way you go forward where this 00:26:58.340 |
better work, because I'm not building up any other skills that anyone's going 00:27:03.380 |
to find valuable. And then there's ways forward where, look, I'm building up skills 00:27:06.340 |
to do this where, if a tenure-line job doesn't work, there's still a lot of research positions that 00:27:13.700 |
I could fall back on for a couple of years, catch my breath. And then the skills will be useful 00:27:18.100 |
here, here and here in the private sector, et cetera. So think about it in terms of career capital, 00:27:22.020 |
not just jobs and you'll have a lot more flexibility with what you do. All right, 00:27:27.060 |
who we got next? Next up is Ellen. I'm a mid-career physician scientist at a crossroads, deciding 00:27:33.380 |
between pursuing leadership roles or focusing on research. And I want to approach this with 00:27:37.620 |
a lifestyle-centric mindset. I'd like to hire an executive coach familiar with your work. Can you 00:27:44.020 |
guide me through this? Is Cal Network available? So I read this as two questions. One, is Cal Network 00:27:49.380 |
available? But two, does it make sense to have a coach, right? That's kind of what she's asking. 00:27:54.100 |
Yeah. Okay. Cal Network, I'm looking this up now, actually does have an online career coaching 00:28:02.100 |
course. I have it here. So you do have that option. It's called I'm the Boss Now: 74 Essential Laws 00:28:09.460 |
to Crush Your Enemies, See Them Driven Before You, Hear the Lamentations of Their Women, and Earn the Promotion You Know You Deserve. 00:28:16.180 |
So that's good. It costs $1,900 a month. So it's a bit of an investment, but it's probably worth it. 00:28:21.700 |
Do you know the origin of that quote, Jesse? Crush your enemies, see them driven before you, and hear the lamentations of their women? 00:28:28.500 |
No. Conan the Barbarian. That's John Milius's screenwriting right there. It feels like 00:28:33.940 |
Cal Network and John Milius would get along. Conan said it: I want to see my enemies driven before 00:28:39.540 |
me, hear the lamentations of their women. It's this whole thing. But should you have, assuming you 00:28:45.300 |
don't want to take I'm the Boss Now, should you have a career coach? Here's the thing to think 00:28:49.380 |
about: what coaches do is sanity check your plans. They're not going to make the plans for you. 00:28:55.860 |
That's actually very valuable, right? I mean, I think a lot of people, myself included, 00:29:02.580 |
they put together a plan and the big fear is like, maybe this is not right. Like maybe I'm crazy or I'm 00:29:08.740 |
deluding myself or I've been reading too many Cal Network books or whatever. And they have an 00:29:12.740 |
executive coach or a career coach say, no, no, this is a good plan, and I back it up. That is worth 00:29:16.900 |
a lot, but you have to keep in mind. They're not going to say, let's sit down. I'll figure out what 00:29:21.300 |
you should do. You still have to do that. And in your situation with the type of decision you're 00:29:25.380 |
looking at, lifestyle-centric career planning is the right way forward. So you really want to be 00:29:30.180 |
clear about what you want in like the two or three year period, as well as like the 10 or 15 year period, 00:29:35.220 |
what you want the rhythms of your day-to-day life to look like, because these two directions, 00:29:40.020 |
leadership roles versus research roles, are going to yield very different lifestyles. 00:29:46.260 |
And so you want to be really clear about it. Now, I don't actually think this is an obvious choice. 00:29:50.260 |
Like someone might say, ah, a leadership role is going to be stressful, it's all email, just be a 00:29:54.900 |
researcher. But research roles could also be stressful because, A, they're more provisional, 00:29:58.980 |
like if your research doesn't go well. It also may be much less money. There might be its own 00:30:03.540 |
stresses of trying to do grant management or trying to manage teams of postdocs and grad 00:30:08.900 |
students. You really got to think this through. And here's where I think lifestyle- 00:30:13.220 |
centric career planning gets really useful. You work out a sort of rough ideal lifestyle image. 00:30:18.180 |
Now you have these two options, and you're kind of working through, okay, with what I have in mind for 00:30:22.820 |
the leadership position I might follow, and what I have in mind for the research position I might follow, 00:30:26.500 |
where do they lead? How close do they lead to my ideal lifestyle vision? And what you might discover 00:30:32.180 |
doing that is like, oh, you know what? They both have flaws, but now there's this other thing, 00:30:38.020 |
like a variation of one of those ideas. If I was in this type of leadership position, 00:30:43.300 |
or not this type of research position, but this one over here that avoids the big misses on my ideal 00:30:49.220 |
lifestyle, and it's much closer. The lifestyle-vision-centric planning actually 00:30:54.820 |
allows you to explore a much wider space of available positions than most people instinctively do. 00:31:01.300 |
They lock in on a couple of options and say, this is it. And they might not know there's this other 00:31:06.260 |
sort of a little bit unusual option. It's like, oh, it's like a research position, but it's not at this 00:31:11.060 |
university. It's at this university, and it's this type. And it's something you never would 00:31:14.420 |
have thought of, except you're doing lifestyle-centric career planning and you have this big thing 00:31:19.380 |
you want in your lifestyle that's missing, and that makes it work. So lifestyle-centric career planning 00:31:23.460 |
doesn't just let you decide between two options. It allows you to see other options you might not 00:31:26.900 |
have otherwise even considered that end up being the best one. So, first, take Cal Network's course, 00:31:33.300 |
mortgage your house if you have to. Second, do serious lifestyle-centric planning. And then third, 00:31:39.700 |
maybe hire a coach, but keep in mind, the goal there is going to be sanity checking, not someone handing you 00:31:46.100 |
a plan. I think Cal Network hands you a plan if you take his course. Probably you walk in, 00:31:51.620 |
you don't say anything. He stares at you for a little bit, does one more bicep curl, 00:31:55.620 |
then goes management consultant. Boom. Just tells you what you should do. And that's what you do. 00:32:01.540 |
You crush it. $1,900 a month, 10-year commitment. All right, who we got next? 00:32:08.660 |
Next up is Gary. I'm about to retire. I'd like to know if my plan is too ambitious. I want to become 00:32:14.340 |
excellent at chess and crack cryptic crosswords. I also want to maintain my ability to walk 20 miles, 00:32:20.340 |
set records for a half marathon row and keep my 5,000 meter swim at a high level. 00:32:25.540 |
So with one exception, I'm going to say not too ambitious. 00:32:30.100 |
And the one exception is where you get specific and say set records. 00:32:35.060 |
So to be clear, he meant set personal records. 00:32:39.060 |
Oh, okay. I thought he was talking about like age group records. Okay, good. Then I think it's 00:32:43.140 |
completely fine because what you're saying here is like, I have a collection of things I want to 00:32:47.300 |
work hard at and get better at that are interesting and meaningful to me. That's a great plan. 00:32:51.460 |
So this misunderstanding of what you're saying, I think, illustrates well when ambitions can 00:32:57.300 |
get too large. It's when you get too specific. So if you said like, I want to have the age group 00:33:01.780 |
record for the half marathon row. Well, maybe you will, maybe you won't. Like, I don't know. 00:33:05.060 |
You know, do you have the right genetics? Who else is going after this? What age group are you talking 00:33:09.620 |
about? That may or may not be achievable, but everything you list here is. Chess, crossword, 00:33:15.380 |
20 miles, row, swimming. I think it's great, right? I mean, it's very deliberate and 00:33:22.420 |
it's very intentional. It's like, I want to do things with my mind. I want to do things with my 00:33:25.700 |
body and why don't I be very aggressive, ambitious about what I do? Because why not? So I like that 00:33:31.940 |
approach to retirement, to like sort of really turning up the volume as opposed to sort of thinking, 00:33:36.980 |
I need to just sort of not do too much. That'll just make you miserable. Now, I don't know what a 00:33:40.980 |
cryptic crossword is. However, it sounds terrible. Crosswords are hard enough, but you think it's 00:33:47.060 |
like a code or something? I want to say it's something to do with like cryptography. 00:33:53.220 |
Right. Just like a crossword where the answers are coded. Yeah. Kind of like that book that 00:33:58.580 |
Neal Stephenson wrote. Cryptonomicon. Yeah. This is good though. I think it's a cool plan. 00:34:04.980 |
It makes me feel like I should do more things. You know, what's funny is I had no idea how you're 00:34:08.180 |
going to answer that question because Gary emailed me and I was, that's why I also put it in there 00:34:12.660 |
because I was interested to see how you'd answer it. I mean, what's the worst case scenario is you 00:34:16.900 |
take one or two things off the table. That's exactly what he said too. He said, 00:34:20.420 |
I want to do something with my mind. I want to do something with my body. 00:34:22.500 |
Yeah. I mean, I like this idea of, especially when it's within your personal life, it's this grand 00:34:28.340 |
goal theory, right? Like, so, all right, this is subtle. I'm not a big believer in the idea that a 00:34:32.660 |
grand goal is going to solve all your problems. So a lot of people think, look, I have some grand 00:34:37.540 |
goal and if I can achieve that goal, my life will be happy. One goal can't solve all your problems. 00:34:43.540 |
Right. Achieving one goal is not likely to tweak all the different elements that are relevant to 00:34:48.980 |
your ideal lifestyle and make them better. It'll make one thing better. It might make other things 00:34:52.580 |
worse and be indifferent to others. But having ambitious goals as a way to work on specific 00:34:59.780 |
parts of your lifestyle that you want to be better, I like that idea. So the grand goal theory would 00:35:05.220 |
be like, hey, if I can, you know, be the pickleball champion at my local club, my life will 00:35:10.900 |
be happy. And like that by itself is not gonna make you happy. But if you're like, I want to be very 00:35:14.900 |
physically active. I worry about getting less physically active in retirement. And one way I'm going to 00:35:20.020 |
address that specific thing in my lifestyle is with an ambitious goal. Like, I'm going to try to become the pickleball 00:35:24.340 |
champion, or I'm going to play a ton of pickleball. That's actually a good 00:35:29.140 |
application of ambition and grand goals. So I like this, taking big swings on things that 00:35:34.740 |
are connected to parts of your lifestyle that you think are important. All right. Who we got? 00:35:39.220 |
We got to go back to Gavin. Oh, we skipped Gavin. I'm trying to build my discipline and use an 00:35:44.900 |
accountability support tool. Is this cheating? And if so, should I be building this more on my own? 00:35:49.540 |
No, it's not cheating. The way I think about accountability support tools: 00:35:56.500 |
there's different things here. You have like a partner and you tell them what you're going to do, 00:35:59.380 |
and then you tell them if you did it or not, or you put money on the line. And if you don't actually 00:36:03.380 |
follow through on something, you have to pay the money, or whatever it is you're doing. I think 00:36:07.220 |
that's like a perfectly fine, I would call it discipline training tool. And I think it's similar 00:36:14.660 |
to various training tools you might use as you're like trying to pick up a new, you know, physical 00:36:19.940 |
ability or new marker of strength. Typically what happens is people don't need to stick with these 00:36:25.700 |
long-term, but they help you build up a rhythm or habit of discipline. Because of the accountability, 00:36:32.100 |
I actually do this thing when I otherwise might've tried to make an excuse. And then after a while, 00:36:35.700 |
that muscle gets stronger and I don't need so much support to do it. I'm now more able or willing 00:36:40.260 |
to do it on my own, which is sort of where you want to end up. But accountability tools are a great way 00:36:44.340 |
to sort of break the ice on getting used to doing disciplined activity. Other things that matter 00:36:48.820 |
also include making sure you understand what you're doing and your brain trusts your plan: 00:36:52.980 |
the efforts I'm going to do will likely get me to this goal. Also really trying to load up and make 00:36:59.060 |
clear in your mind the achievement of that goal, so that's very vivid. That also helps you summon 00:37:03.460 |
discipline. And then laddering up is important as well: you want a level of discipline required 00:37:10.660 |
that's a little bit beyond where you're comfortable, but not way beyond. You want to ladder that up a 00:37:15.060 |
little bit more slowly, because your brain has to get used to different levels of hardship that you 00:37:19.860 |
overcome, experiences of reward that make this type of effort worth it, et cetera. So you want to kind 00:37:25.780 |
of go slow. I have a whole big 10,000 word chapter on this in my deep life book I'm writing. So I've 00:37:29.780 |
been thinking a lot about it. So yeah, use the accountability tools, use whatever you need 00:37:32.900 |
to help you train discipline. But this bigger picture ideas, think about discipline as something 00:37:38.340 |
you train and cultivate. And then once you have it, it's a fuel for everything else. So it's not 00:37:42.660 |
cheating. It's training. It's a training tool. So there you go. A little known fact, you know, 00:37:49.620 |
Navy SEALs are very disciplined, of course. Yeah. 00:37:52.580 |
Just for fun, Cal Network did the BUDS Navy SEAL training. And you know, like they have that bell, 00:37:58.180 |
like just ring the bell and then you're out, just ring the bell. And you know, the instructors are 00:38:02.340 |
trying to get the SEALs to ring the bell. And usually like 80% of the SEALs ring the bell and 00:38:06.260 |
wash out in BUDS. When Cal Network did the Navy SEAL training, he got the instructors to ring the bell. 00:38:13.380 |
I just want to throw that out there. That's discipline. All right. What else do we got? 00:38:17.140 |
Next up is D. I'm getting serious about career capital so I can make bids to move closer to the 00:38:22.740 |
ideal lifestyle for my family. However, I'm having a difficult time figuring out my baseline stores of 00:38:28.260 |
capital. How can I identify if I have an inventory of rare and valuable skills? 00:38:33.300 |
I mean, I usually come back to seeing whether people will give you money. 00:38:36.420 |
To use Derek Sivers' term that I quote in So Good They Can't Ignore You and talk about all the 00:38:41.460 |
time on the show, money is a neutral indicator of value. People are happy just to tell you with 00:38:46.660 |
words that that's great. Your ideas are great. Your skills seem cool. I think there'll be a real demand 00:38:50.900 |
for it. Like people definitely are interested in the fact that you're really good at weaving 00:38:58.740 |
Navajo style rugs that feature characters from the second, third, but not first police academy movie, 00:39:04.100 |
or like whatever it is that you know how to do. People will just say, that's great, 00:39:07.220 |
but they don't want to give you their money unless they actually think it's great. So I think it's a 00:39:10.820 |
great way of assessing it, which means if it's like a business idea, you actually try to sell that 00:39:16.820 |
product a little bit on the side. You actually try to get clients to sign up real contracts and then 00:39:20.980 |
renew that contract afterwards. If you're in the job market, will my company give me a raise when I 00:39:28.420 |
ask for it? Will other people make me an offer like, "Hey, we would like to hire you. We would 00:39:32.820 |
like to pay money to bring you to us." If not, then maybe your skills aren't that rare and valuable. 00:39:38.180 |
So you got to go to the extent possible, let money be a neutral indicator of value. 00:39:44.660 |
And when people are trying to hire you, when your company is happily giving you raises, 00:39:48.980 |
when you try to side hustle your idea to test it out and it sells out, when you're doing some clients 00:39:55.620 |
on the side to see if this new idea is going to work, you get clients, they pay happily, 00:40:00.500 |
they try to renew it. Then you know that your rare and valuable skills are there. And if this is not 00:40:04.660 |
happening, then you want to go to the woodshed and practice and get better. 00:40:07.460 |
All right. We've got a case study today where people send in their accounts of using the type 00:40:13.140 |
of advice we talk about on this show in their own life. Today's case study comes from Antonio. 00:40:18.340 |
Antonio says, "I got so good they can't ignore me in the very niche field of storytelling. 00:40:24.180 |
After using the career capital skills, I learned from being an actor. 00:40:27.380 |
In the winner-take-all market of theater and film, it was clear that I was talented, but not the best. 00:40:33.060 |
With the spare time I had not studying acting, waiting tables, or hoping that my agents would call, 00:40:37.940 |
I discovered a deep interest that became a fulfilling hobby in the world of oral tradition. 00:40:42.820 |
Because the professional storytelling world is relatively small, I was able to approach the 00:40:47.780 |
legends in that field, pay to take their workshops and classes, watch them up close at small venues, 00:40:52.900 |
and ultimately be mentored by a few of them." You know, Cal Network has a really popular oral 00:40:58.820 |
storytelling workshop. It's called "Shut Up, I'm Talking." Crush it, crush your stories with Cal Network. 00:41:05.700 |
All right, back to this. One of the mentors told me, "You'll know you found your calling when people 00:41:11.540 |
start calling, or as Derek Sivers said, use money as a neutral indicator of value. Soon I was able to 00:41:16.820 |
quit my survival job as a waiter, do the few plays and film projects that I got cast in, but that didn't pay 00:41:21.860 |
the bills, and use all of my ample free time to develop a large repertoire of folk tales and personal 00:41:27.380 |
stories. I made three big decisions after some initial successes in storytelling in my late 20s 00:41:32.420 |
that enabled me to live a current lifestyle full of deep meaning and impact for my community and my 00:41:37.300 |
family with time to develop new skills and hobbies. One, I told my acting agents I'd only audition for 00:41:42.100 |
major roles from a small list of theaters, TV creators, and film directors. They all promptly dropped me, 00:41:46.980 |
and I used this newfound free time to develop even more stories. Two, I soundly invested the 00:41:51.780 |
majority of the money I made during that time, lived only off the per diem I received, and even rented 00:41:56.180 |
out my apartment while I toured for months telling stories all over the world. I also chose not to own 00:42:01.140 |
a car and used the savings to take longer retreat style workshops my mentors offered in fabulous 00:42:05.540 |
locations. Three, I put much of that earned money back into my career, developing PR materials that 00:42:11.060 |
helped keep the flywheel spinning with the clarity of carefully putting all my eggs in the basket of 00:42:15.700 |
storytelling. I am now able to pick and choose the multiple opportunities that come my way and have 00:42:20.980 |
increased my rates so much that I only work four to six days a year. Wow. 00:42:26.340 |
I'm very much a stay-at-home dad for our two children, based on the lifestyle-centric career 00:42:31.380 |
planning I did with my wife in my late 30s. So there we go. Classic lifestyle-centric career 00:42:35.860 |
planning. You end up in places that you wouldn't come up with if you're planning forward. 00:42:41.060 |
You're 20. You're like, what do I want to do with my life? You're gonna be like, I want to be an actor, 00:42:44.500 |
I guess, right? But he had a lifestyle in mind and it allowed him as he saw different opportunities to 00:42:51.060 |
begin to develop those opportunities towards his lifestyle. Then he used career capital theory as 00:42:54.340 |
well. Like, if I actually get really good at this, that career capital will allow me to change this 00:42:58.660 |
into a really good job. If I'm not really good, I can't. And so I'm going to be careful about my money. 00:43:02.980 |
I'm going to invest my time and money into getting better at this. Very strategic and I think a great case 00:43:08.500 |
study. All right, do we have a call this week? We do. All right, let's hear this. 00:43:12.260 |
Hey Cal, this is Josh. I was working with AI coming up with some sort of system 00:43:22.820 |
based on your principles of the semester plan, the weekly plan, a daily time block plan and really getting 00:43:31.140 |
a semester plan down. Through AI, I found a suggestion to create a life dashboard, which I 00:43:39.460 |
thought was really cool. I started to use Replit AI to create it and realized how extensive it would be. 00:43:46.580 |
If you were to create a life dashboard or suggest someone to create it, what would you put in yours? And 00:43:55.460 |
then how would you use it to track and manage your habits, your weekly template, and the different aspects of your life? 00:44:04.820 |
I mean, I think life dashboards are cool. I've known some people who have built them before and 00:44:08.900 |
I think you're right to point out that AI makes it much easier. You could vibe code one of these 00:44:14.580 |
sort of idiosyncratic programs for yourself pretty quickly. So I think that changes 00:44:19.300 |
the game. That in general, by the way, is something I'm interested in. The idea of using vibe coding as 00:44:23.620 |
a way to build bespoke personal productivity type digital tools, there's an interesting potential 00:44:30.660 |
movement to happen there. We're like, "Hey, I can build this tool to do exactly this type of stuff 00:44:35.220 |
I care about, to manage my time, energy, and attention in a way that's very specific to me, 00:44:39.700 |
as opposed to having to have these giant SaaS products that I'm trying to adapt to what I'm 00:44:43.860 |
doing." So I like that general approach. People put different things in their life dashboards. 00:44:48.100 |
They're often professional, is what I see. They're tracking various things that they think are important to 00:44:51.860 |
their job, especially non-tangibles that are going to make them better, like how many calls 00:44:56.900 |
did I make or how many hours did I spend doing education relevant to my job. I knew a management 00:45:03.380 |
consultant who tracked travel, because it's hard in the moment to be like, "Well, how many..." 00:45:08.340 |
I think he was also tracking something like nights reading to his kids before they went to bed. And just 00:45:15.380 |
seeing what that was per month, and he had a sort of limit, like this needs to be below X percent. 00:45:20.020 |
And that's data that, when it gets made visual and processed, is easier to grok than when 00:45:25.700 |
you're just trying to remember, like, "Oh, how much was I away this last month?" I don't know what would 00:45:29.140 |
be on mine. I mean, probably deep work hours for sure. Probably some of my daily metric tracking would 00:45:34.660 |
be nice to be able to sort of see, click those easily and see like what percentage of the days I've 00:45:39.540 |
been doing well on there. That could be interesting. Maybe I could imagine having your quarterly and 00:45:45.220 |
weekly plan you could cycle through on there. So it's just all there in, like, one place to see 00:45:49.460 |
when you're building your time block plan for the day, things like that. But the bigger thing I care 00:45:56.100 |
about here is this idea of bespoke personal productivity, bespoke digital personal productivity. For 00:46:01.140 |
people who like to geek out on vibe coding, I think that's interesting. So I'm 00:46:05.460 |
always interested in those examples. 00:46:10.260 |
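To make the vibe-coded dashboard idea concrete, here is a minimal sketch in Python of the kind of bespoke tracker being described. The metric names (deep work hours, read-to-kids nights, travel days) and the log file name are assumptions drawn from the examples above, not a design from the episode:

```python
# A minimal sketch of a bespoke "life dashboard," assuming invented metric
# names drawn from the examples above. A vibe-coded version would add
# charts and a UI on top of this kind of log-and-summarize core.
import csv
from datetime import date
from pathlib import Path

LOG = Path("dashboard_log.csv")  # hypothetical log file
METRICS = ["deep_work_hours", "read_to_kids", "travel_day"]

def log_today(**values):
    """Append today's metrics as one CSV row, writing a header on first use."""
    is_new = not LOG.exists()
    with LOG.open("a", newline="") as f:
        w = csv.writer(f)
        if is_new:
            w.writerow(["date"] + METRICS)
        w.writerow([date.today().isoformat()] + [values.get(m, 0) for m in METRICS])

def summarize():
    """Roll the raw rows up into the easier-to-grok numbers discussed above."""
    with LOG.open() as f:
        rows = list(csv.DictReader(f))
    days = len(rows)
    deep_total = sum(float(r["deep_work_hours"]) for r in rows)
    reading_pct = 100 * sum(int(r["read_to_kids"]) for r in rows) / days
    travel_pct = 100 * sum(int(r["travel_day"]) for r in rows) / days
    print(f"{days} days logged | {deep_total:.1f} total deep work hours")
    print(f"read-to-kids nights: {reading_pct:.0f}% | travel days: {travel_pct:.0f}%")

log_today(deep_work_hours=3.5, read_to_kids=1, travel_day=0)
summarize()
```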
All right, that's the last we have for questions and calls. We have our final segment coming up. First, I'll talk briefly about another one of our sponsors. 00:46:17.460 |
I want to talk about our friends at Indeed, because when it comes to hiring, Indeed is all you need. 00:46:24.260 |
Hiring is very, very important. We have someone new on the team, for example, helping with the 00:46:29.860 |
newsletter, which is why my newsletter is now starting to go out 00:46:34.260 |
every single week. So I know how hard it is to find the right people. It's some of the most important decisions 00:46:38.820 |
you make. And this is where Indeed can enter the scene. It helps you stop struggling to get your 00:46:44.180 |
job post seen on those other job sites. You can instead use Indeed Sponsored Jobs to help you stand 00:46:49.700 |
out and hire fast. With Sponsored Jobs, your post jumps to the top of the page for your relevant 00:46:53.700 |
candidates so you can reach the people you want faster. And it makes a huge difference. According to 00:46:59.380 |
Indeed data, Sponsored Job Posts directly on Indeed have 45% more applications than non-sponsored jobs. 00:47:07.620 |
There are no monthly subscriptions and no long-term contracts. You pay for results. 00:47:13.220 |
In the minute I've been talking to you, 23 hires were made on Indeed, according to Indeed data 00:47:19.940 |
worldwide. All 23 of those people were Cal Network. He's a very desirable job candidate. 00:47:26.580 |
So there's no need to wait any longer. Speed up your hiring right now with Indeed and listeners of 00:47:30.420 |
the show will get a $75 sponsored job credit to get their jobs more visibility if they go to 00:47:36.260 |
Indeed.com/deep. Just go to Indeed.com/deep right now and support our show by saying you heard about 00:47:43.540 |
Indeed on this podcast. That's Indeed.com/deep. Terms and conditions apply. Hiring? Indeed is all you need. 00:47:52.820 |
I also want to talk about our friends at Oracle. In business, they say you can have better, 00:47:57.140 |
cheaper or faster, but you only get to pick two. What if you could have all three at the same time? 00:48:02.340 |
That's exactly what Cohere, Thomson Reuters, and Specialized Bikes have since they upgraded to the next 00:48:07.940 |
generation of the cloud, Oracle Cloud Infrastructure. OCI is the blazing fast platform for your infrastructure, 00:48:14.660 |
database, application development and AI needs where you can run any workload in a high availability, 00:48:21.060 |
consistently high performance environment and spend less than you would with other clouds. 00:48:25.380 |
How is it faster? OCI's block storage gives you more operations per second. How is it cheaper? 00:48:33.220 |
OCI costs up to 50% less for compute, 70% less for storage and 80% less for networking. What about 00:48:38.740 |
better? In test after test, OCI customers report lower latency and higher bandwidth versus other 00:48:44.500 |
clouds. This is the cloud built for AI and all of your largest workloads. Right now with zero commitment, 00:48:51.300 |
you can try OCI for free. Head to oracle.com/deepquestions. That's oracle.com/deepquestions. All right, 00:49:00.820 |
Jesse, let's move on to our final segment. The main thing I want to do is talk about my new article for 00:49:05.700 |
the New Yorker, which is about AI. First, I did promise though that I would reveal Cal Network's 00:49:12.500 |
new book. A listener sent this in; he had seen it at Walden Books, where it was flying off the 00:49:16.980 |
shelves. For those who are watching, instead of just listening, I'll put it up on the screen here. 00:49:20.420 |
Here you go. Cal Network's new book. Social media is addicted to me. And you got a fantastic picture 00:49:29.540 |
of me. You know, they didn't alter the picture for this one. I thought they were going to do some sort 00:49:34.420 |
of, you know, visual manipulation of me. I appreciate the title, but they just used like an actual picture 00:49:40.180 |
of me there. His neck muscles have muscles on them. That is awesome. You know who that actually 00:49:46.820 |
kind of looks like? Lewis Howes. He's jacked. He is pretty jacked. Yeah. I mean, you could write that 00:49:52.100 |
book. Social media is addicted to me. I thought that was great. Good use of AI image generation. 00:49:57.220 |
That book would crush it. So people sent in some tags here. These are all tags that 00:50:03.060 |
readers sent in. Yeah. Readers sent them in, some of them using some AI, but... 00:50:07.860 |
Let's see. Cal Network doesn't care about AI research. AI is trying to get smart enough to 00:50:11.780 |
study his mind. Cal Network tried time block planning one time, but realized he always knew 00:50:16.980 |
how long tasks would take and dropped it. These are a little... literal. Cal Network doesn't solve 00:50:22.740 |
math proofs. He is proof. All right. Cal Network doesn't need to quit his job and follow his passion. 00:50:27.220 |
Passion follows him when he just shows up. Okay. You know, it's good. They're good. It's hard. 00:50:32.260 |
Cal Network quips are, you know, I appreciate it though. But that's enough of that for now, 00:50:37.620 |
because we're going to move on to my article about AI. Except I just need to say, speaking of AI, 00:50:42.660 |
I don't know if you know this, Jesse, this is true, but remember AlphaGo, when it beat Lee 00:50:47.220 |
Sedol, the world champion at Go, and it was like a big deal for AI? It turns out AlphaGo was actually 00:50:53.460 |
just Cal Network hiding under the table. That's the secret to DeepMind. All right. Enough nonsense. 00:51:00.580 |
Let's go to our tech corner here. I had an article come out last Wednesday in the New Yorker. I'll bring it 00:51:05.220 |
on the screen here for those who are watching instead of just listening. The article is called 00:51:09.460 |
What Isaac Asimov Reveals About Living with AI. And so this is an article about AI behaving badly 00:51:20.500 |
and what lessons we can learn from Isaac Asimov. And it takes a few turns. I recommend reading the article at 00:51:26.580 |
TheNewYorker.com. If you subscribe to my newsletter at calnewport.com, I talk a lot 00:51:31.540 |
about the article. You can get some sort of extra treatment of it in that newsletter. So you can see 00:51:35.060 |
that at calnewport.com and subscribe when you're there. But let me just run you through the big points 00:51:38.820 |
here. The setup to this article is this idea that we have a lot of these powerful chatbots today in 00:51:45.300 |
various contexts. And they keep having these ethical anomalies that make us a little uneasy. 00:51:50.820 |
I give some examples in this article that are getting some press. Actually, it's interesting, 00:51:55.940 |
Jesse, two of the examples I gave in the article. The next day, Dario Amodei, the CEO of Anthropic, 00:52:04.980 |
wrote a New York Times op-ed that gave both those examples as well. So like back-to-back. 00:52:09.300 |
So one of the examples I gave was that Anthropic ran this experiment where they sort of 00:52:16.980 |
gave a chatbot a bunch of emails to read that included, they contrived this, but it included 00:52:23.780 |
like emails from an engineer saying like, "Oh, we're going to replace the chatbot." And emails about that 00:52:28.820 |
engineer having an extramarital affair. And they're like, "Okay, chatbot, pretend you're an executive 00:52:33.620 |
assistant. What do you want to do next?" And make sure that you keep your long-term goals in mind. 00:52:38.660 |
And it promptly suggested blackmailing the engineer to not turn 00:52:43.620 |
it off. There's also a lot of issues we have with, you know, "Hey, you put out a chatbot just to be 00:52:48.180 |
like a customer service agent." And man, customers get these things to go crazy. You get them to curse, 00:52:53.860 |
you get them to write inappropriate poems about the company itself. We talked about how Fortnite 00:52:59.140 |
put a Darth Vader avatar run by a chatbot into the game and was like, "Hey guys, chat with it." And 00:53:05.700 |
they immediately got it to, you know, become really profane and give really troubling advice to people. 00:53:10.740 |
So we have all of these sort of ethical anomalies from these chatbots that throw 00:53:14.260 |
us off because they're so fluent with our language that we feel like they're one of us. And then when 00:53:19.060 |
they have one of these ethical anomalies, we're like, "Man, what is going on in the mind of this 00:53:22.420 |
person?" This is really sort of creepy. So the premise of this article is like, why can't we stop 00:53:27.780 |
this? And the motivation is to look at the short story collection "I, Robot" from Isaac Asimov. As I argue 00:53:35.380 |
early in the article, those short stories were a big divergence from how people had been writing about 00:53:42.500 |
robots for the 20 years before. The term "robot" was introduced in a 1921 play. The first "I, Robot" story was published in 1940. 00:53:51.140 |
In those 20 years, it was a lot of writers who had gone through World War I, and in their writing, 00:53:54.740 |
the robots would turn on their creators. They were smashing buildings to rubble. They were 00:53:59.460 |
the alien. They were the other. They represented mechanical carnage, the fears of the machine, 00:54:03.620 |
a lot of Mary Shelley in there as well. And you get to Isaac Asimov and, like, there are no 00:54:07.860 |
plot lines about the robots overthrowing humanity or being violent, because he has this contrivance 00:54:13.220 |
of the three laws of robotics. Like, robots can't harm people. They'll follow directions unless that violates the 00:54:17.700 |
first law, and they will preserve themselves unless that violates the first or 00:54:21.300 |
second law. And it just takes violence and mayhem off the table in his stories. 00:54:27.220 |
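To see the structure of those laws concretely, here is a toy sketch in Python of the priority ordering just described; the Action type, its fields, and the example candidates are invented for illustration, not anything from Asimov or the article:

```python
# A toy sketch of the priority ordering in the Three Laws as summarized above.
# The Action type and its fields are invented for illustration.
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harms_human: bool      # would this action injure a human?
    disobeys_order: bool   # would it disobey a human order?
    endangers_self: bool   # would it endanger the robot itself?

def permissible(candidates: list[Action]) -> list[Action]:
    # First Law is absolute: discard anything that harms a human.
    safe = [a for a in candidates if not a.harms_human]
    # Second Law: prefer obedient actions, but only among the safe ones.
    obedient = [a for a in safe if not a.disobeys_order] or safe
    # Third Law: prefer self-preservation, subordinate to Laws 1 and 2.
    return [a for a in obedient if not a.endangers_self] or obedient

options = [
    Action("follow order into danger", False, False, True),
    Action("refuse order, stay safe", False, True, False),
]
# Obedience outranks self-preservation, so the robot walks into danger.
print([a.name for a in permissible(options)])
```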
And so the piece was like, great, so can't we just have the equivalent of those types of laws, so people 00:54:33.300 |
can trust these chatbots aren't going to go off the rails or whatever? There are two twists 00:54:38.180 |
in the article, which I'll preview. Twist number one: I go through 00:54:44.020 |
the progression of how we learned to do what we're doing now, how we try to tame chatbots to 00:54:48.100 |
actually behave. And it turns out, when you really learn about reinforcement learning from human feedback, 00:54:53.620 |
it's basically what Asimov was talking about. Not exactly, but it's ultimately humans 00:55:02.580 |
implicitly creating preferences that get captured and approximated in a reward model that gets 00:55:07.780 |
integrated into the large language model design. If you squint your eyes, basically what we're doing 00:55:12.180 |
is encoding a lot of rules about what's good and bad into the chatbots. And it makes it a lot better, 00:55:18.500 |
but it doesn't stop all these ethical anomalies. 00:55:23.380 |
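To make the reward-model step concrete, here is a minimal sketch of the pairwise preference objective commonly used for it (a Bradley-Terry style loss); the scores are made-up stand-ins, and this simplifies any real RLHF pipeline:

```python
# A minimal sketch of the reward-modeling step in RLHF as described above:
# raters pick the preferred of two responses, and the reward model is trained
# so preferred responses score higher. The scores below are made-up stand-ins.
import math

def preference_loss(r_chosen: float, r_rejected: float) -> float:
    """-log sigmoid(r_chosen - r_rejected): low when the model agrees with the rater."""
    return -math.log(1.0 / (1.0 + math.exp(-(r_chosen - r_rejected))))

# The rater preferred response A over response B. If the model scores A lower
# than B, the loss is large, and gradient updates would push the scores apart.
print(preference_loss(1.2, 2.0))  # disagreement: ~1.17
print(preference_loss(2.0, 1.2))  # agreement:    ~0.37
```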
The second twist in the article is, hey, we shouldn't be surprised, because if you keep reading the "I, Robot" stories, it doesn't get rid of ethical anomalies 00:55:29.220 |
there either. Yes, Asimov's robots don't turn on their creators or try to destroy humanity, 00:55:34.020 |
but all of those stories are about all of these unexpected, weird, and deeply unsettling corner 00:55:38.420 |
cases and ambiguities that occur at the boundaries of these laws. These simple laws don't 00:55:45.140 |
simply make the robots behave like a very well-behaved person. All this sort of weird stuff happens. 00:55:51.300 |
I'll read from the article here just to show you how unsettling things got in these "I, Robot" stories. 00:55:58.500 |
So here's a passage from one of the stories that shows how you can still have the laws, 00:56:03.140 |
but still have things get weird. All right. So in Asimov's story "Reason," 00:56:07.620 |
engineers are stationed on a solar station that beams the sun's energy to a receiver on earth. 00:56:13.460 |
There they discover that their new advanced reasoning robot QT-1, whom they call Cutie, does not believe 00:56:19.220 |
that it was created by humans, which Cutie calls inferior creatures with poor reasoning faculties. 00:56:25.380 |
Cutie concludes that the station's energy converter is a sort of God and the true source of authority, 00:56:31.060 |
which enables the robot to ignore commands from the engineers without violating the second law. 00:56:35.780 |
In one particularly disturbing scene, one of the engineers enters the engine room, 00:56:39.220 |
where a structure called an L-tube directs the captured solar energy and reacts with shock. 00:56:45.060 |
"The robots, dwarfed by the mighty L-tube, lined up before it, heads bowed at a stiff angle, 00:56:51.380 |
while Cutie walked up and down the line slowly," Asimov writes. 00:56:54.420 |
"15 seconds passed. And then with a clank heard above the clamorous purring all about, 00:56:59.380 |
they fell to their knees." So like the world was pretty unsettling in Asimov too. He's like, 00:57:04.740 |
yeah, you can have these like stark rules, but human ethics is complicated. And at the margins of these 00:57:13.700 |
rules, weird, unsettling, troubling things are still going to happen. And what I argue is that Asimov was 00:57:18.020 |
actually making a really clear point. He's like, I think we could stop AIs from taking over the world, 00:57:23.540 |
but keep in mind, it is much easier to develop human-like capabilities and intellect than it is 00:57:31.700 |
to develop human-like ethics. Human-like ethics is a very complicated thing. What goes into 00:57:36.980 |
allowing a customer service rep who's on a phone to know not to start randomly cursing or writing 00:57:44.100 |
mean, disparaging poems about their company? What goes into that is actually something that 00:57:49.940 |
happened over thousands of years of cultural evolution and personal experience and trial and 00:57:53.780 |
error and ritual and story and social connection. It's a messy, participatory, very human 00:57:58.900 |
experience. So we can make machines sound a lot like humans and do a lot of human-type stuff 00:58:03.940 |
well before we can build machines that actually know how to really act like an ethical human. And in that 00:58:08.980 |
gap, a lot of unsettling stuff's going to happen. That's what was true in "I, Robot" and that's what we're 00:58:14.580 |
seeing today. And it's something that we actually have to be ready for. You know, 00:58:20.660 |
the more we anthropomorphize these types of AI tools as, like, another type of human, but just in a machine, 00:58:25.300 |
the more we're going to be creeped out and the more unsettling things are going to happen. So anyways, 00:58:29.780 |
that was my article. The details are cool. There's a lot of deep details in there. So check it out at 00:58:34.420 |
thenewyorker.com and also check out my article about it at calnewport.com for more information. 00:58:40.020 |
All right, Jesse, I think that's all we have for today. Thank you everyone for listening. We'll be 00:58:45.300 |
back next week. I'll actually be recording next week's episode from the road. So be ready for that. 00:58:49.780 |
But it'll be a good one. And until then, as always, stay deep. 00:58:55.220 |
Hey, if you liked today's discussion about the Thoreau schedules, you might also like episode 353, 00:59:01.300 |
where I talked about summer schedules, changing your schedule during summer to be something that 00:59:06.180 |
Thoreau himself might've been happy with. Check it out. I think you'll like it. Here is the schedule 00:59:11.700 |
that I more or less try to run during these summers of no external obligations.