The Workload Myth: Why More Hours Won't Make You More Successful | Cal Newport

Chapters
0:00 Cal Network
3:39 The Workload Fairy Tale
18:17 How would you rewrite A World Without Email to account for Slack?
22:15 How should I manage multiple deadlines?
27:29 How does values-based lifestyle-centric career planning relate to Rutger Bregman's concept of moral ambition?
34:25 How can I avoid burnout and use my career capital to find a less demanding job?
40:32 Should I do my weekly plan on Sunday night to avoid the scaries?
43:00 A lawyer argues in front of the New York Court of Appeals
47:22 What exactly is considered task switching?
56:09 When will AI automate my job?
I'm Cal Newport, and this is Deep Questions, the show about cultivating a deep life in a distracted world. I'm here at my Deep Work HQ, joined as always by my producer, Jesse.
Usually, in the video version of this podcast, we start with the deep dive, and our audio listeners get to hear a little chatter ahead of time. Well, we're putting this opening chatter on video as well for YouTube episode 354, because there was something I came across that I felt like I just had to share with the viewers and listeners.
So I don't know if you know about this trend, Jesse, but there's this big trend where people publish knock-off versions of popular books. Sometimes they'll use AI to write, like, a 20-page whatever, and sometimes they don't even bother. There are a lot of books out there that are, like, similar to Deep Work, this or that. Well, I came across one of these that was so fantastic, and it's made me so happy. It's disappeared now off Amazon, but I grabbed a screenshot of the cover.
Interestingly, it features a completely unaltered physique. For copyright protection reasons, to try to keep this thing up, they made a change I didn't notice at first: they changed the name from the biography of Cal Newport to the biography of Cal Network.
I like this idea of there being a super confident, jacked version of me out there who goes by the name Cal Network. Cal Network doesn't have a to-do list, Jesse. If you ask Cal Network why he didn't respond to your email fast enough, he's going to flip your car into a ditch. And if you try to ask him to take on a new commitment, he is going to hold a bicep curl and make eye contact until you say, you know what? Never mind. I will channel him from now on when I need to take my ideas and push them. Cal Network calls everyone else Cal because he's too busy getting after it to learn people's names.
We can get into some interesting research that underscores, in a surprising way, a point we've talked about before on this show. And then we have a Tech Corner for our final segment. It's based on something I wrote recently for my newsletter at calnewport.com. As mentioned, that has now relaunched as something that will come out more like three to four times a month, so if you don't subscribe, go check it out at calnewport.com. I'm going to tackle a big AI question that I took on in a recent newsletter; we'll get into it a little bit at the end. So with all that nonsense behind us, let's move on with our deep dive.
Over the past four years, a remarkable story has been quietly unfolding in the knowledge sector: a growing interest in the viability of a four-day work week. Iceland helped spark this movement with a series of government-sponsored trials that unfolded between 2015 and 2019. The experiment eventually included more than 2,500 workers, which, believe it or not, is about 1% of Iceland's total working population. The UK followed with a six-month trial that included over 60 companies and close to 3,000 total employees. A year later, 45 firms in Germany participated in a similar half-year experiment with a reduced work week. And these are far from the only such experiments being conducted. According to a 2024 KPMG survey, close to a third of large U.S. companies are at the very least considering the idea of experimenting with four-day work weeks, at least for some employees.
The reason I'm bringing these up is not to directly assess whether a four-day work week is a good idea, though I will return to that at the end of this discussion. Instead it's because, as I'll show, these studies, if you look deeper, contain an important data point, one that points to a conclusion that is critical if we want to understand knowledge work in our current moment. It's an idea that I call the workload fairy tale.
So let's get to this hidden point that shows up in every one of these studies. I'm going to switch over now to some of these studies. These are probably in, what, the browser, Jesse?
Let's start with a claim from the Iceland study. For those who are watching, I have a summary of the Iceland study up here. Here's a key claim from the beginning of this article: productivity remained the same or improved in the majority of workplaces. So the roughly 1% of Iceland's working population who participated did not see a decrease in productivity. Across a wide variety of sectors, well-being improved dramatically for staff, and business productivity was either maintained or improved in nearly every case. And remember, these are experiments in which they are taking five to eight hours out of the work week, keeping salaries the same, and making no other changes.
In the other trials, employees generally felt better with fewer hours and remained just as productive as they were with a five-day week, and in some cases were even more productive. Participants reported significant improvements in mental and physical health and showed lower stress and burnout symptoms, as confirmed by data from smartwatches tracking daily stress minutes.
Let's step back and consider this for a moment. How is it possible that, changing nothing else, just telling people you now have to work notably fewer hours per week, the overall value of what they produce, their overall productivity, didn't go down?
A big part of the answer, I'm convinced, is an idea that's pretty important in my book Slow Productivity: the overlooked importance of workload management. Most knowledge workers have a surprising amount of autonomy over their workload, especially compared to, say, a service worker or someone in an industrial context. If I'm a service worker, I'm told: here's what you need to do. If you're a barista at Starbucks, it's very clear what your responsibilities are. If I work on an assembly line, it's very clear what I am doing. I don't have a lot of say in what I do or don't work on.
In knowledge work, by contrast, there's a sort of unstructured way that commitments come to you. You know, some stuff maybe you can't say no to, but ultimately, you are the source of triage for incoming potential commitments. There's also no transparency or direct supervision of your workload. Grab the average knowledge work manager, pick one of their charges, and ask, what are they working on right now? They can't say, here are the six things this person is working on. What you're working on is up to you to keep track of. In fact, most workers themselves don't even know at any one moment the full slate of their commitments. They sort of get prodded into action by, like, email or Slack prompts or meeting invites: ooh, I need to start doing something on this.
Now, most knowledge workers in this, I've got to say, pretty unusual situation for the world of economics, this unusual situation of workload autonomy, tell themselves what I call the workload fairy tale. The workload fairy tale claims that the amount of work you're doing right now is the exact right amount needed to be successful in your position. That somehow it has all just worked out that whatever it is you're doing, this is what you need to be doing. Not more, but if you do less, it's going to be a big problem.
The results of these four-day workweek studies undermine this claim. Productivity did not generally go down for these research subjects, even after the amount of time they had to work was substantially reduced. And if we think about why this is, the answers are actually not that surprising, especially if you've read, say, Slow Productivity or listened to this show before. The key work in most jobs, the efforts that actually drive those pure economic productivity measures, you know, dollars produced per worker hour employed, requires much less than a 40-hour workweek to complete.
Just go back, if you do something like time-block planning, and highlight every time block from the last week in which you were working on something that directly produced value for your organization. You're not going to have full columns marked with highlighter. You're going to say, you know what, there were, like, 15 hours this week, maybe 10, where I was actually directly doing the things that create value.
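To make that audit concrete, here is a minimal sketch in Python; the block labels, hours, and value tags are invented for illustration, not taken from any real calendar:

```python
# Hypothetical time-block log for one week:
# (block label, hours, did it directly produce value?)
blocks = [
    ("draft client proposal", 3.0, True),
    ("email triage", 1.5, False),
    ("status meetings", 4.0, False),
    ("build quarterly report", 4.5, True),
    ("Slack back-and-forth", 2.0, False),
    ("code review for release", 2.5, True),
]

# The "highlighter pass": total only the blocks that directly produced value.
value_hours = sum(hours for _, hours, direct in blocks if direct)
total_hours = sum(hours for _, hours, _ in blocks)

print(f"{value_hours:.1f} of {total_hours:.1f} logged hours "
      f"({value_hours / total_hours:.0%}) directly produced value")
```

Run against a real week of time blocks, the ratio is often closer to the 10 or 15 hours out of 40 described above.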
This, of course, is very different from the service sector or the industrial sector, where outside of breaks and lunch, basically everything you're doing is directly producing value. So everything else you're doing is either more optional, like you're doing it for social reasons, or it's make-work, or it's non-promotable activities, or it's just an embrace of pseudo-productivity: I want to show some activity, so I'm going to kind of bounce around my email and set up some unnecessary meetings. Now, look, not all of this is bad, but there's no reason to fill every available minute of your day with this optional work. The stuff that really matters doesn't actually need that much time. This is what's being revealed by these four-day workweek studies. The workload fairy tale is, in fact, a fairy tale. You're probably doing a lot more extra stuff than you really need to do your job well.
I wanted to interrupt briefly to say that if you're enjoying this video, then you need to check out my new book, Slow Productivity: The Lost Art of Accomplishment Without Burnout. This is like the Bible for most of the ideas we talk about here in these videos. You can get a free excerpt at calnewport.com/slow.
So let's return to the original question here: hey, what about this four-day workweek? I have nothing against it, but it seems to me that if we solve the underlying workload problem, if we get rid of the workload fairy tale and gain more control over how much work we should be doing, and how much of the optional work should be on our plates, then a lot of the issues with burnout that drive ideas like the four-day workweek are going to go away. A four-day workweek itself would then no longer have a problem that it is solving.
So once we recognize there's only so much we're doing that really matters, we should be able to have a lot more variation and flexibility when we think about what work could be. Different types of schedules, different numbers of hours, less intensity, more breaks, or, hey, maybe more key work so you produce more value, but less make-work. There are a lot of options on the table here, but these studies, I think, point to this really key thing: most of what you're doing in your job is probably not actually that directly useful. The work that matters is taking place in less of your total time than you think, which means you should feel some cover to actually slow down and take a breath.
You know what type of workweek Cal Network runs? Deep work, deep laser focus, 14 minutes. And you know what? Cal Network is so impressive that when he attends the theater, when the show is over, the actors applaud him.
I think we kind of slept on this four-day workweek stuff because a lot of it was coming in the immediate post-pandemic times, and we had enough other stuff on our minds. But I think it's a scandalous result: they said, hey, stop working one day a week, and nothing really changed with productivity. The message we should take from that is, oh my God, what are we doing during this time, then? Clearly, we have a lot of make-work and non-promotable activities going on.
But first, I want to hear from one of our sponsors. We have a new sponsor here, which I'm excited about: Caldera Lab. Wouldn't you like to look a little younger, have that fresh glow that you would see on, say, a Cal Network type? Maybe get a few more compliments on your skin, or just feel more confident when you look in the mirror? This is exactly what Caldera Lab is here for. Their high-performance skin care is designed specifically for men. In a consumer study, 100% of men said their skin looked smoother and healthier, 96.9% noticed improved hydration and texture, and 93.8% reported a more youthful appearance. Caldera Lab spends years developing and testing every formula with leading cosmetic chemists to make sure it actually works.
I didn't realize this until I got into my 40s, Jesse, that, like, men, you should put stuff on your skin. Women know this, but as you get older, moisturizer prevents your skin from looking like the wrinkled, tan skin of an old sea captain, which is what you'll look like by the age of 45 if you don't put this stuff on. And they have these serums you can put on. I got sent two identical sets of their three big things: The Good, which is their serum that's packed with 27 active botanicals and 3.4 million antioxidant units per drop; the eye serum, which helps you look more well-rested (you know who doesn't worry about looking well-rested? Cal Network); and also The Base Layer, which is the moisturizer. I won't say who, but someone I know was at the house and was like, oh my God, can I have one of those?
I like The Base Layer, this moisturizer stuff. Like, oh, my skin no longer feels like George... what was the name of that famous, I don't know if you're going to know this, he's from the last era, famously tanned, like, overly tanned, sort of gad-about-town actor?
Their formulas are backed by rigorous R&D, and they're cruelty-free, plastic-neutral, and non-comedogenic. So, skincare doesn't have to be complicated, but it should be good. Upgrade your routine with Caldera Lab and see the difference for yourself. Go to calderalab.com/deep and use the code DEEP at checkout, and you will get 20% off your first order. That's Caldera, C-A-L-D-E-R-A, calderalab.com/deep, code DEEP at checkout.
I also want to talk about our friends at Udacity. AI is what we're all talking about a lot these days. In fact, maybe we're talking about it too much, but there's a reason: it is what is hot right now. And you might be looking at those AI experts with their fancy tech jobs and huge salaries, remote work and unlimited PTO, and thinking, wow, how do I get those jobs? They know about this skill, in this case AI, that is really hot right now. So how do you pick up knowledge of the type of valuable skills that might matter in your job search? Do you, like, go on ChatGPT and say, hey, tell me how to be good at AI? No. Courses from Udacity are the best way to gain new knowledge.
When you learn with Udacity, you're not just passively watching videos or reading articles. You're doing practical exercises and projects that prepare you for the job you want. That's why 87% of Udacity graduates say they have achieved their enrollment goal. I have used Udacity courses for multiple things, including learning video game programming with my son. Jesse, you've used some Udacity courses as well.
Cal Network tried to sign up for a Udacity course, and the teacher, a quarter of the way through, turned it around and said, we want to learn from you. And then Cal Network taught the rest of that course.
There are tons of options for learning tech skills, but Udacity is the right one to use because it works. There are real humans that you are working with. The tech field is always evolving, and you should be too. You can try Udacity risk-free for seven days. Head to udacity.com/deep and use code DEEP for 40% off your order. Once again, that's udacity.com/deep for 40% off, and make sure you use that promo code DEEP so they know I sent you.
All right, let's get into some questions. First question: I work in tech, and there are hardly any emails. Instead, I'm bombarded with hundreds of Slack messages daily. How would you rewrite A World Without Email in 2025 as A World Without Slack?
Well, I wouldn't need to rewrite it, because it already directly deals with that issue. As I say in that book, which came out in 2021, which, by the way, was well after the Slack revolution was underway, I was using email as a catch-all term for low-friction digital communication. So it includes Slack or Teams or whatever low-friction digital communication tool you're using. Email was the first, but these others have come along. The real villain in the book is not actually any particular tool; it is the workflow that these low-friction digital communication tools made possible. I call that workflow the hyperactive hive mind. It is a style of collaboration where people work things out on the fly with unscheduled back-and-forth messaging. It is the hyperactive hive mind style of collaboration that I say is the real problem.
Why? Well, it requires constant tending of these channels. Whatever specific technology tool you're using, you have to tend these channels constantly, because you have many ongoing back-and-forth conversations, and if you take too long a break from tending the channels, you're jamming up too many of those conversations. But having to continually check these channels is hugely distracting. It puts your mind into a state of continual context switching, which significantly diminishes its capacities. It's going to burn you out before you even get to mid-afternoon.
So the right way to understand Slack, and I wrote a New Yorker article about this a few years ago, the right way to understand Slack and the weird relationship people have with it, where they sort of love features of it and they hate it, is that it's the right tool for the wrong way to work. The actual trajectory of Slack is that people began using email to implement this hyperactive hive mind collaboration, but there's a lot about that particular tool that is a little bit clunky when you're trying to do rapid, back-and-forth, on-demand collaboration: the quotes and re-quotes and quotes-of-quotes-of-quotes that happen when you get these long email back-and-forth conversations. So Slack came along and said, okay, if you want to use the hyperactive hive mind to collaborate, this is a better tool for it. It is a better tool for that style of collaboration.
So the reason people both love it and hate it is this: if your work uses the hyperactive hive mind, which it probably does, Slack is better suited for that. Searchable transcripts, different conversations in different channels, back and forth, as opposed to all that quoting, and you can go back and find conversations from before. It's just a better implementation of the hyperactive hive mind. But we also hate the hyperactive hive mind, because it forces us to constantly context-shift. So Slack is better at implementing the hive mind, so we don't have the rough edges email gave it, but that means we have to be even more hyperactive and check channels even more often. And so we hate the things we hate about the hive mind even more, while simultaneously liking that at least this is a more efficient way of implementing this thing we hate. It's a circular sort of terrible mess. But the problem underneath it all is not the tools; it's the workflow style, which is the hyperactive hive mind.
And you know who doesn't put up with the hyperactive hive mind? Cal Network. You want to talk to Cal Network about work? There's only one way to communicate with him: registered mail. You have to send it registered mail, and he will sign for it when the mail person comes to his office. It's the collaboration style that's the problem.
Next question: How do you manage multiple writing projects with no deadlines? Am I supposed to spend a couple hours on each project every day? Should I only work on one project each day, or should I focus on just one project until it's complete and then move on to the next?
It depends on the type of projects we're talking about. When it comes to deep work on long, ongoing projects, if they are of the same type, sequential is usually better, right? So if you said, I have three script projects I'm working on, I would work on one, then work on another, and then work on the third. I would definitely not interleave three similar projects on a very small timescale, like working on a little bit of each every day. That's just too much context shifting for you to really make progress. You have to pay a price of cognitive overhead to get going on each of those, and that overhead is just going to add up. If you're working on multiple projects on a small timescale, too much of your time goes to the overhead of trying to switch your thinking from one to another.
If it's a very long project, you can break it into large chunks and do those sequentially. I've seen this with screenwriters: maybe they'll work on act one of a screenplay and sort of finish it. Okay, I don't quite know where to go in act two yet, and I'm a little stuck. Now I'm going to go work on act three of this other screenplay I started and finish that. So you can break a project up into big chunks, but you're still going to be working multiple weeks on one before you switch to the other.
If you have projects of different types, or maybe you have a long-term project and shorter projects that have to go on as well, then there is some interleaving that has to happen. I have to do this because I'm typically working on something like a book project, and then I'll be interleaving in things like New Yorker articles or academic articles, which happen on a smaller timescale. If I said, let me finish my book before I write my next New Yorker essay or my next academic paper, those just wouldn't get done. So if it's a different type of project, like a smaller project, just get to a good stopping place on the big project, then pause it for a week to really push the other project through to completion, then return to the big project.
The caveat here is that I separate out preparation, meaning brainstorming and research, from actual writing. I'll use my actual day today as an example. I am working on the final section of chapter four of my current book. I also have a New Yorker piece I'm working on, and there's an academic paper I'm working on. Those two smaller things are in the research-and-thinking phase, and that is fine. So I wrote all morning on my book, and then I have two calls this afternoon. One is with the philosopher I'm working on this academic paper with, and the other is with an AI researcher I'm interviewing for my New Yorker piece. I'm not in the full cognitive context of trying to figure things out and write; I'm just gathering information or discussing things. When it comes time to pull the trigger on one of those two smaller projects, so let's say the New Yorker article gets to a point first where I'm like, okay, I'm ready to write this, then I will pause the book and, just for a week, write that article. So when it comes to the actual deepest of deep efforts, like writing, the actual putting of pen to paper, I don't interleave at all.
To summarize: for really big projects, you don't want to interleave; go more sequential, whether that's one full project and then the next full project, or a big chunk of one project and then a big chunk of the other. For smaller stuff mixed in with bigger stuff, you want to pause the big thing to work on the small, but you can allow the research to overlap. So it's kind of a complicated set of scenarios there, Jesse, but hopefully that helps. I think about this type of thing a lot.
I don't know many people who can successfully work on multiple large projects at once. Taylor Sheridan can do this, the showrunner and writer. God, he's working on, like, five shows at the same time.
So with your summer schedule, you're still working on an academic paper?
Yeah. It's going to connect my expertise in theoretical computer science with a philosopher's techniques, applied to AI. We'll probably put that up on arXiv, as sort of a public repository, once we've written it, and then figure out what to do with it from there. So, yeah, academic papers I keep; what I get away from in the summer is really...
The only person I know who actually can do that, Taylor Sheridan is it, and also Cal Network. When Taylor Sheridan hangs out with Cal Network, he feels lazy. Cal Network taught Taylor Sheridan how to ride horses.
Next question: How does values-based, lifestyle-centric career planning relate to Rutger Bregman's concept of moral ambition? Do you see any tensions between the two approaches to career planning?
Well, Rutger Bregman's earlier books, I think they're translations in the U.S., but they did very well. One was more of an optimistic look at the future, which I think caught some people's attention. He has this new book out called Moral Ambition, which I blurbed. He's arguing that young people, especially talented people coming out of elite schools, should have more moral ambition. They should use their lives to go do useful things for the world, as opposed to, say, going into finance or law to try to alchemize their elite training into money.
So the question is, is that somehow working against my vision of lifestyle-centric career planning? And I think the issue here, I've been realizing this recently, is the word lifestyle, right? I think about lifestyle in a very clinical, definitional sense of the word: literally, what is the style of your life? What are the day-to-day realities of your life like? So when I say lifestyle-centric career planning, I'm saying, look, fix what you want your life to be like in a more general sense, and then work backwards from that to find specific ways to deal with the obstacles and opportunities that face you to make your way closer to that lifestyle. Because ultimately, the lifestyle that appeals to you is the ultimate goal, not the accomplishment of any particular goal, so we might as well get directly to the problem.
But I think a lot of people associate lifestyle with something that's maybe softer, or a little more self-indulgent or lazy. There's the term, for example, lifestyle entrepreneur, which is typically pitched as: you have your own business in part so that you can have a lot more flexibility.
If you work for someone else, you have to work harder; but if you have your own business, you can sort of trade working harder for having more vacations, or not having to put in really long hours. But I don't mean lifestyle to be soft. One of the core things that could be part of your lifestyle vision is moral ambition. In fact, lifestyle-centric career planning, I think, is probably better suited for Rutger Bregman's vision of moral ambition than other ways people think about it. Because if you have this strong vision of, I want to improve the world, this could be a big, vivid component of your vision of your ideal future lifestyle. It's a lifestyle where you're heavily involved in directly being useful to the world. And then you can get a little more specific about what that means for you: am I directly involved with the people I'm helping? Or maybe you have a more intellectual conception of moral ambition: I am producing ideas that are really changing the world.
I did a cool profile like this in my book So Good They Can't Ignore You: Pardis Sabeti, who was a computational biologist at Harvard; I went and spent some time in her lab. I profiled her as someone who had this sort of moral ambition and found an interesting way to get there. She ended up in a biology lab working on algorithms applied to genetics, using them to study ancient diseases and to try to find ways to get around them. She works on computers in the lab, but they also have a partner institution in sub-Saharan Africa, where they go and spend a lot of time. It was just a really cool, mission-driven life. And my argument there was that she didn't sit down at some point and say, here's my grand goal: I want to be an algorithmic geneticist who works in a lab connected to Africa. It was working backwards from the more general moral ambition: I want to use my smarts to try to do something good for the world, and I want it to be intellectual but also connect directly to people. And then, as I documented in the book, all sorts of serendipity and opportunities came out of nowhere, and she followed them, guided by that lifestyle vision very consistently.
So, no, lifestyle-centric planning doesn't mean lifestyle entrepreneurship. That's why I blurbed that book: people need more ammunition for their visions of the ideal lifestyle, and this book will give you one particular ambition. It actually complements my book So Good They Can't Ignore You very well.
These worlds are really incestuous and overlapping, Jesse. Rutger Bregman, when you think about his moral ambition, this idea that you need to do something good for the world, it kind of has the feel of the effective altruists, but he's not nearly as clinical as they are, or as kind of weird. But the effective altruists really liked So Good They Can't Ignore You. That was because they felt like, yeah, this is about being more systematic about being useful, and not just, like, following your passion or this or that. So all these worlds kind of mix together a little bit.
I don't know this world well, but there's a group out of Oxford that Will, what's his name, MacAskill, was involved with. I think he and SBF were boys, like, early on. So I would cross paths with Will because of this group out of Oxford that I think he was involved with, a cool group called 80,000 Hours. 80,000 hours is, like, the number of hours the typical person will spend working. So their whole point, and this is an EA point, is that the biggest lever you have to help the world is what you're doing with those 80,000 hours. And 80,000 Hours was a non-profit based out of Oxford, I believe, that was helping people figure out what to do with their lives. And they were very much, like, don't just say follow your passion; ask, what do I want to do with this that's really good? So I crossed some paths with William MacAskill, because they liked So Good They Can't Ignore You; obviously those ideas, don't just follow your passion, use career capital, get good at things and figure out what to do with it, overlap theirs a little bit. I missed, however, getting deep enough into that group to become boys with SBF.
Well, no, it wasn't Flash Boys, it was Going Infinite.
You know what Cal Network's moral ambition is? Cal Network doesn't have a lifestyle-centric career plan. People use him as their vision of the ideal lifestyle.
Next question: I make double what my husband makes, and we need both incomes for family life. I would prefer to work 50% of the time but financially can't. Any advice on how to find a less demanding job that still leverages my career capital?
There are a couple of different things to do here. By the way, your 40s: I'm basically writing a book about this. I mean, this is why I'm writing The Deep Life right now. It's a really interesting time for lifestyle reconfigurations, right? If you think about, especially, a standard, well-educated, middle-class or upper-middle-class American track: in your 20s, you are getting educated and doing entry-level jobs. In your 30s, you settle into a career, you build capital, you get good at things. And then your 40s is this great moment of reconfiguration or reinvention: okay, what's our play here? Am I going to ride this out, or are we going to change? What's working, what's not working? So it's really an interesting time for thinking through these questions of lifestyle-centric design.
The first thing I would do is really make sure that you're doing lifestyle-centric career planning. When there's friction in your current existence, there's a tendency toward solutionism: I want to find a single idea that generates a positive emotional response in me, and I'm going to place on that single idea or goal all the hopes and dreams of making my lifestyle something I like more. So we want to be careful that's not what's going on here, where you're like, I'm feeling burnt out; if I was just working halftime, all these problems would go away and we'd be happy.
What you really want to do is ask, what do I actually want in an ideal world? Maybe it's not halftime; maybe it's working reasonable hours and not feeling so squeezed in the time after. Being able to have more time afterwards, where I'm not super stressed about work when it's done, and I don't feel like I'm constantly playing catch-up on home-related or family-related issues. Like, I want to be at an office and like what I'm doing, but then I want to be able to come home soon after the kids get home from school, and we're not rushing around, or whatever your vision is.
Then, when you work backwards from that vision, you might say, let me explore a lot of different ways forward to that vision. And, yeah, maybe it's just going to halftime, but maybe not; maybe that's just going to be frustrating. If you're like, look, I make double what my husband makes, and if I go back to half hours, I'm going to feel like I'm being sidelined; but I have an intellectual or professional ambition, and I feel like I'm actually better than him at stuff in my job, and I'm feeling resentful about it. Maybe you say, okay, what we need to do is change our jobs, and I'm going to change to a different job that uses my career capital but is less of one of these we're-paying-for-your-time, you-always-have-to-be-accessible type jobs. So it's going to be less money, but something where I can use my skills, and when it's done, it's done. Or maybe the move is that your husband cuts back instead. He's making about a third of the income, and if you cut that in half, that's going to be about a sixth of the household income, versus if you went halftime, you'd be reducing it by a third, right?
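A quick sketch of that arithmetic, using the two-to-one income split from the question (the numbers are illustrative units, not real salaries):

```python
# Income split from the question: she earns double what her husband earns.
hers, his = 2.0, 1.0
household = hers + his

# Option A: he goes halftime.  Option B: she goes halftime.
loss_if_he_halves = (his / 2) / household
loss_if_she_halves = (hers / 2) / household

print(f"He goes halftime:  household income drops {loss_if_he_halves:.0%}")   # ~17%, about a sixth
print(f"She goes halftime: household income drops {loss_if_she_halves:.0%}")  # ~33%, about a third
```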
And then maybe he's able to take on really clearly delineated family responsibilities, which means you don't feel so stressed about the stuff going on after work. There are different types of options here worth thinking about. But what I'm getting at is: get the lifestyle. What do you want your day to be like? Don't just say, what do I want to get rid of? Sure, it won't include burnout, but what else do I want in my life? I think that could open up potentially more interesting options.
Okay, so let's say you've done that, and the best option is a new job, maybe something that's 50% time or just less demanding. Here, what I have found is that when it comes to positions well past the entry level, it's not like you can go on LinkedIn and just find this. It often feels to people like it's manifested: what happens is, when you begin thinking, looking, and talking about wanting something different that's going to be less demanding, and you talk to friends about this, and you talk to colleagues, and you have some conversations, suddenly these super bespoke, one-of-a-kind opportunities have a way of sifting out or materializing. It's like, oh, wait, this person is moving over to this small organization, and they need to start an outpost where you happen to live, and they need someone who can run it for the next couple of years, and it's a great match for your skills, but it's really autonomous and probably much less time. It feels like you're manifesting it, but what you're really doing is putting your antenna up to look for these things, and you're beating the bushes a little bit as you talk to people. I'm mixing metaphors here, because I don't think you hunt for birds with antennas, but I might have this wrong.
I'll tell you who can do that: Cal Network. He doesn't need a shotgun. But you see what it feels like: wow, once I started thinking about and looking for this more nuanced application of my career capital, it just came out of nowhere. You know, you weren't actually generating these opportunities out of nowhere.
So remember, it's a family lifestyle you're working backwards from, not individual lifestyles that you hope mesh. Be broad when you explore different solutions that could get you to the lifestyle you're looking at. And then, once you execute, if you're looking for something that uses your career capital in a very narrow type of application, you have to manifest it, which means you have to start looking for it, talking to people, and beating the bushes, and you'll be surprised what comes up.
Next question: I'm trying to determine if I should switch my weekly planning to include something on Sunday night to help mitigate the Sunday scaries. But on the flip side, is there a boundary to set here to protect family time?
I know the Sunday scaries. It's why, in my summer schedule, I don't schedule stuff on Mondays. I have them a little less these days because, honestly, our family life is incredibly busy on Sundays. I have a bunch of kids who do a bunch of stuff on Sundays. So there's also a countervailing sense that there will be some relief in getting those kids to school on Monday. On Sunday, you're exhausted from all the family stuff, which is nice, so I'm kind of looking forward to just having some autonomy. But I also get the Sunday scaries about, like, I've got to go back and do work.
Here, I think, is the ideal plan, and it's not Sunday night: do your weekly plan on Friday, before the weekend starts. I struggle with this, but I think it's the right thing to do. If you can do that, and it's really worth fighting for, especially if you get strong Sunday scaries, it's better, because you also get a subtle benefit Friday night, Saturday, and Sunday morning. We don't get as demonstrably anxious on, like, Saturday about the week ahead, because we still have some time, but there is a background hum of anxiety, because your mind is like, I don't quite know what's happening next week. And I think it amplifies the scaries. If you make your weekly plan on Friday for the week ahead, then at the end of the day on Friday, for the whole weekend, you have mental peace. That part of your brain that's asking, what if we're missing something? It can stand down. And it might just instantiate itself as a slightly more relaxed Friday night, a slightly more relaxed Saturday. I'm always scrambling at the end of the day on Fridays, for various reasons, but if I was having really bad Sunday scaries, that is 100% what I would do.
I'll tell you who doesn't have Sunday scaries: Sunday is scared of Cal Network, not the other way around.
All right, let's move on to our case study. This is where people send in their accounts of using the type of advice we talk about on the show in their own lives. If you have a case study, you can send it to jesse@calnewport.com.
I had the honor to argue an appeal before the New York Court of Appeals. Oral appellate arguments are not prepared speeches, but rather back-and-forth discussions, sometimes heated, between the advocating attorneys and a panel of judges. When I was in law school, I read an article about John Roberts. Before he was Chief Justice of the United States, he was a litigator who regularly appeared before the United States Supreme Court. The article noted how impressed all the Supreme Court justices were when Attorney Roberts would step up to the podium without any notes, make his argument, never break eye contact, and cite the expansive record entirely from memory. I decided that when I got my chance, I was going to take a page from the John Roberts playbook. I got my chance before the New York Court of Appeals. Without a note, all from memory, I stepped up to the podium and engaged in a rapid-fire discussion with the impressive panel of judges,
and by the time I was done, I had not been thrown into prison for contempt.
You know, the one time that Cal Network had to speak in front of the Supreme Court, they ended up inviting him to become an associate justice.
I wish I could attribute my confidence to a memory superpower, but instead it was all about deep work, active recall, and another secret ingredient: exercise. For 60 days leading up to the argument, I read and studied each night in my home office with no phone, TV, or internet, other than for case research. Then the next morning, while on the treadmill or bike trainer, I reviewed in my head the material from the night before. Again, no music, no TV, just a breathy rhythm, a blank wall to stare at, and some active recall.
He did give us a link to the recordings of his arguments, which is pretty cool. The judges interrupt so often, like, all the time. He's getting pounded left and right by all the different judges. He wrote: in the first couple of months of 2025, I had more imaginary conversations with those seven judges than with real people.
So for 60 days, he just said, I'm going to do the thing that actually matters, which is active recall, the thing I argue for all the time, and I'm going to do it. How am I going to be as prepared as John Roberts famously was? When you break it down, it's: I'm going to do 60 days' worth of deep work, the right type of deep work on the right things. And look at what he accomplished.
What comes to my mind when I see that is: think about the time that you're probably dedicating right now to helping the share price of Meta, or of, you know, ByteDance, who makes TikTok. All the time you're spending on your phone, having arguments with people on Bluesky or following rabbit trails on YouTube. Imagine if you put that time over the next 60 days toward something you wanted to do really well, and you actually put in the right time, did the hard work that actually matters, and every day you gave it an hour. Like Sam, you could be a beast after those 60 days, and then repeat that again and again.
Sometimes we look back at intellectuals from times long past, especially those who came from the gentry and had some free time, and think, my God, they spoke, like, 19 languages, and in their spare time they translated Euclid from the original Greek to do a new version of it, and they also mastered, gosh, chemistry or this or that. It's just the application of doing the right, hard work again and again and again, and getting used to that. When you repeat that, a lot of really cool stuff aggregates. So, anyway, I think that's a great case study: deep work applied regularly produces really deeply meaningful, cool things. The only reason we don't have that time now is that we have these distraction machines in our pockets, and what do we get from that? Nothing nearly as worth it, unless it's Cal Network memes, which I hope become a thing, but I don't think they will.
My name is Jordan, and I'm a conductor and an academic. I'm currently completing my doctorate in Baltimore and, meanwhile, conduct a number of different orchestras and other ensembles. I wanted to ask you about the concept of interleaving. It's something you've talked about before, and it comes up in Adam Grant's recent book, Hidden Potential. The idea is that you might work on skill A for a while, then go to skill B, then go back to skill A, then skill C, then skill B, something like that. But when I think about this idea, it reminds me a lot of task switching, which I know very well is something we want to avoid, especially when we're trying to engage in deep work. Is it that task switching in this particular case is between various deep-work-related tasks, and therefore it is cognitively demanding but the appropriate place to spend that cognitive buck? Or is there some other way to contextualize and reconcile these two seemingly opposing ideas?
Well, we talked about this a little with the screenwriter question, but I'm going to get more precise in answering this more precise version of the question.
When it comes to task switching, the main thing we worry about is the cognitive overhead of the network switch. When I switch my attention from the thing I'm writing to an email inbox, the target of my attention is switching from that thing I'm writing over to whatever demands are made of me in that inbox. And this institutes what a psychologist would call a network switch: our brain is trying to shift its focus from this cognitive context over to this other cognitive context. That can take 10, 15, up to 20 minutes to complete. So what happens when we glance at an email inbox for five minutes is that we start a network switch, we abandon it and go back to the original thing, and now we have a mix of both of those networks active together. And before we can completely regain our cognitive context on the original thing, we look at our inbox again and institute another network switch. That's the state of continuous partial attention, to use Linda Stone's term, or the diminished cognitive capacity I talk about. So our problem with task switching is this 10-to-20-minute cost of trying to change your attention.
When it comes to a deeper task done over a longer period of time, that type of task-switching cost doesn't matter much, as long as you're spreading out these switches over a long enough time that the ratio of switching cost to execution gets really small. If I'm working on one skill all day Monday and another skill all day Tuesday, and then back to the first skill on Wednesday and the other on Thursday, the task-switching cost here is negligible, right? Yeah, it takes me 20 minutes to really get into the mindset of that skill, but then I'm spending eight hours that day working on it. And the next day, I have to switch to the other skill, which is another 20 minutes, but again I'm spending all day working on it. So once you get to bigger timescales than checking something every five minutes, that type of overhead matters less.
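To put rough numbers on that ratio, here is a small illustrative sketch in Python; the 20-minute figure is the upper end of the switch cost described above, and the block lengths are hypothetical:

```python
# Fraction of working time lost if each block of focused work begins with a full context switch.
SWITCH_COST_MIN = 20  # upper end of the 10-to-20-minute network-switch cost

def switching_overhead(block_minutes: int) -> float:
    """Share of a block consumed by its initial context switch (capped at 100%)."""
    return min(1.0, SWITCH_COST_MIN / block_minutes)

for block in (8 * 60, 60, 5):  # a full day on one skill, an hour, a five-minute check-in
    print(f"{block:>3}-minute blocks: {switching_overhead(block):.0%} of time lost to switching")
```

At a day per skill, the overhead is a rounding error (about 4%); at email-checking frequency, the switch cost swallows the whole block, so you never fully arrive in either context.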
But there's a second type of overhead, which I mentioned in the screenwriter question, that also matters. This is a bigger orientation overhead. When I'm working on a book, yes, there's a cognitive cost to getting my attention all the way onto the chapter I'm writing now, but there's also a bigger orientation I hold over a longer period toward working on that book. I'm thinking about the book; when I'm not writing, it's in my head, and I'll bounce thoughts around, and it's there in the background churning, and then I come back to it and work some more on it. Maintaining a general orientation toward a very large or important project is helpful, and there's a cost to switching that, which you'll feel on a bigger timescale.
This is where, if I was working on two books and switching between them, one on Monday, the other on Tuesday, back to the first on Wednesday, back to the other on Thursday, the cognitive cost of just switching my current focus, who cares? I'm spending the whole day working on one book. But the orientation switching cost, that would be a problem. My mind is oriented toward solving this book, but then I'm trying to reorient and start solving that book, and that could be a problem.
I first learned about this with academic paper writing. I used to work on multiple academic papers at the same time, meaning not the same day, but some days I'd work on this one, then later in the week I'd work on another. And my advisor, who is, you know, famous in the field, was like, no, no, no, just do one. Stay oriented on that one until you've solved it and done the best possible work, then write the next. And I was like, ah, it's faster to work on multiple ones at the same time, but she was right. One at a time was going to produce the better work, because you can keep your general cognitive orientation on that one project.
So what matters is the scale at which you switch. That's why I was saying to the screenwriter: if you want to switch your context between different screenplays, you want a chunk like, I wrote the whole first act of this screenplay. Then, if you want to switch your orientation to another screenplay and spend six weeks working on its second act, that's okay. It'll take a few days to get your cognitive orientation fully switched over, but then you'll get into the groove of it.
So we have two scales of context switching: the in-the-moment, immediate-focus scale, which is what kills us when we check our email or Slack so much, and the bigger-picture, general cognitive orientation, which can take days to really switch over. In the end, what this tells us is that for really big, long cognitive endeavors, you want to give things multiple weeks before you switch to another really big thing, and, if possible, push something through to a really clear finishing point before you switch over to something else.
All right, so we got a final segment of Tech Corner coming up. 00:53:40.540 |
But first, I want to briefly hear from another sponsor. 00:53:46.520 |
This is a vehicle built for the modern explorer. 00:53:52.660 |
It's a vehicle that's purpose-built and courage-driven. 00:53:59.980 |
We kind of think of it unofficially as the official car of the Deep Questions podcast. 00:54:05.360 |
I don't know how you unofficially think about something as being official. 00:54:13.640 |
And this is an endorsement you can take to the bank. 00:54:15.860 |
It's the type of vehicle that Cal Network would drive. 00:54:27.500 |
Here's the good thing about this whole line of Defender vehicles: you get that off-road capability. 00:54:34.280 |
But the driving experience, the nice touches, are cutting edge as well. 00:54:40.080 |
So you get the really nice experience when you're just driving this to work. 00:54:45.060 |
And then you know you have that ability: if I need to take this off-road, or go explore, 00:54:49.120 |
or do the other things that make the deep life deep, it's there for you as well. 00:54:59.500 |
3D surround cameras and the ClearSight ground view. 00:55:04.580 |
Oh, there's this rock, and I want to make sure we're going around it. 00:55:07.320 |
The 3D surround cameras mean when you then bring that same vehicle to try to parallel 00:55:11.780 |
park in DC, you can see exactly where everything is. 00:55:14.260 |
ClearSight rear view: you can see out your back, even if your actual rear view is obstructed. 00:55:19.380 |
Maybe you packed a guitar in the back that's too big, or Cal Network's traps are blocking the 00:55:25.000 |
rear window. With the ClearSight rear view mirror, you use the camera. 00:55:26.900 |
You can still use it like a normal rear view mirror. 00:55:28.660 |
The Pivi Pro infotainment system is fantastic. 00:55:31.100 |
Intuitive driver's display so you can customize. 00:55:33.040 |
So look, it's a beautiful car inside and out, a joy to drive, but it has the capabilities 00:55:38.260 |
to take you on whatever adventure you are looking for. 00:55:42.460 |
I think if we call it the official car of the Deep Questions podcast long enough, they're 00:55:48.160 |
going to have to make it official. We would, and we should have a bumper sticker, just a D and a Q, you 00:55:52.940 |
know, doesn't spell it out, but we would put a nice, subtle DQ bumper sticker 00:55:58.380 |
on the back and a giant car wrap of Cal Network's bicep on the side. 00:56:07.180 |
You can design your Defender 110 at LandRoverUSA.com. 00:56:10.260 |
Visit LandRoverUSA.com to learn more about the Defender 110. 00:56:14.020 |
Explore the Defender 110 at LandRoverUSA.com. 00:56:18.280 |
All right, Jesse, let's do our final segment. 00:56:23.680 |
Here's the theme, this question: when will AI automate my job? 00:56:30.140 |
So we talk a lot on here about work and technology. 00:56:32.760 |
Well, this has to be one of the biggest questions at the intersection of those two topics. 00:56:37.140 |
We've been hearing a lot about how AI is going to automate these knowledge work type jobs that 00:56:45.180 |
we thought would be safe from such automation. 00:56:52.940 |
So my newly revamped newsletter is now coming out three to four times a month, 00:56:57.460 |
and I kicked off the new regular schedule with an epic article titled AI, Work, and Some 00:57:01.900 |
Predictions. I'll put it on the screen here for those who are watching. 00:57:04.080 |
And I get into like everything I've been thinking about recently about AI and work, 00:57:09.220 |
everything I've learned in my journalism, my academic work, et cetera. 00:57:12.500 |
One of the sections from this epic article was titled What About Agents? 00:57:18.020 |
So I want to read you the starting of this piece, and then I'm going to summarize what I found. 00:57:22.100 |
All right, so the starting of this section, I say the following. 00:57:24.980 |
One of the more attention-catching storylines surrounding AI at the moment is the imminent 00:57:30.020 |
arrival of so-called agents, which will automate more and more of our daily work, 00:57:33.940 |
especially in the knowledge sectors once believed to be immune from machine encroachment. 00:57:37.580 |
Recent reports imply that agents are a major part of OpenAI's revenue strategy for the near future. 00:57:43.480 |
The company imagines business customers paying up to $20,000 a month for access to specialized bots that 00:57:48.680 |
can perform key professional tasks. It's the projection of this trend that led Elon Musk to 00:57:54.740 |
recently quip: if you want to do a job that's kind of like a hobby, you can do that job, but otherwise, 00:57:59.040 |
AI and the robots will provide any goods and services that you want. 00:58:03.720 |
But progress in creating these agents has recently slowed. To understand why requires a brief snapshot 00:58:08.660 |
of the current state of generative AI technology, dot, dot, dot. All right, so this is the setup. 00:58:13.240 |
Agents are what we're referring to when we talk about AI taking over knowledge work jobs. 00:58:19.080 |
An agent is where you take something like a foundational AI model, like a language model, 00:58:24.080 |
and then you add extra software around it that has memory and goals, and it will continually query that 00:58:30.320 |
model to figure out what to do, and can then take actions based on what it says, right? 00:58:34.940 |
So as a simple way of understanding it, one of these agents might be an email agent, where the agent 00:58:41.140 |
software, this is not AI, just normal software, talks to your email server and says, 00:58:46.600 |
hey, give me the latest email that we received. Then it takes the text and says, 00:58:51.520 |
I'm going to create a prompt that says: here is the text of a recent email. 00:58:54.880 |
Please describe how I should respond. It gives that prompt to the language 00:59:00.160 |
model, the language model responds to the agent, and then the agent takes that response, 00:59:04.000 |
talks to the email server, and says, hey, send this reply back to the sender, right? So it's a mix of 00:59:09.040 |
AI models and just normal control logic. That's an agent: software wrapped around AI models that can 00:59:13.520 |
actually take actions in the world. 00:59:17.640 |
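To make that concrete, here's a minimal sketch of what such an agent's control loop could look like. This is an illustration only, not code from the episode or any real product: fetch_latest_email, query_model, and send_email are hypothetical stand-ins for an email server API and a language model API.

    # A toy agent loop: ordinary control logic wrapped around a language model.
    # Every function here is a hypothetical stand-in, not a real API.

    def fetch_latest_email(server):
        # Stand-in for an IMAP/Gmail-style call to the email server.
        return {"sender": "alice@example.com",
                "body": "Can we move our meeting to 3pm?"}

    def query_model(prompt):
        # Stand-in for a call to a hosted language model.
        return "Sure, 3pm works. See you then."

    def send_email(server, to, body):
        # Stand-in for an SMTP-style call that acts in the world.
        print(f"To: {to}\n{body}")

    def run_email_agent(server):
        email = fetch_latest_email(server)           # normal software: read
        prompt = ("Here is the text of a recent email:\n"
                  f"{email['body']}\n"
                  "Please draft a reply.")
        reply = query_model(prompt)                  # AI: decide what to say
        send_email(server, email["sender"], reply)   # normal software: act

    run_email_agent(server=None)

The division of labor is the whole point: the model only ever answers prompts, while everything that actually touches the world is plain software.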
And so the idea was: these language models are so smart and powerful, we could just add agent wrappers around them for many different types of tasks, 00:59:23.000 |
including multi-step tasks, because they can query the model many times. They can query the model and 00:59:28.000 |
say, give me a plan, and then they can query the model for each step of the plan and say, how should I do 00:59:31.300 |
this step? And then they can actually implement what the model says. This is the vision, or one of the 00:59:36.300 |
main visions, for how AI was going to automate these jobs. But as mentioned in the passage I just read, 00:59:40.800 |
progress has somewhat stalled. So here's what's happening, and I'll just summarize it for you. 00:59:44.600 |
Not long ago, there was a belief in the generative AI world about so-called scaling laws, which was this 00:59:51.780 |
idea that if we continue to make language models larger, more data, more parameters, training them 00:59:57.400 |
with more GPUs, their capabilities on basically everything you can do with them are going to get 01:00:02.420 |
better. The scaling laws held for a little while. GPT-2 was much better than the original GPT. GPT-3 was a 01:00:10.640 |
giant leap beyond GPT-2. And when GPT-4 came out, for example, it was better than GPT-3 in like nearly 01:00:17.220 |
every way. They made the model bigger, they trained it longer, and like everything you would ask GPT-3 to 01:00:21.520 |
do, it would do better. So there is this idea that if we keep scaling, these language models will have 01:00:29.320 |
so many capabilities, there will be so many things you can ask them that they'll give good answers to, 01:00:33.820 |
that we can just wrap agents around GPT-5 or 6, whatever level it is. It'll be smart enough 01:00:40.180 |
that we can wrap agents around it for like any sort of reasonable work task. And it'll be this army of 01:00:44.600 |
agents that's going to be automating work. And OpenAI was like, this is going to be the key to our 01:00:48.960 |
revenue. They're losing a lot of money, so the thought was: this will be the key to our revenue. That was the 01:00:52.000 |
idea. But the scaling laws have been faltering. After GPT-4 and its rough equivalents from 01:00:58.120 |
other companies, like Grok 2 and some others, continued efforts to make the 01:01:06.340 |
training sets, the GPUs, and the parameter counts much larger weren't yielding the same type of 01:01:11.380 |
giant improvements. This is why, for example, GPT-5, which we've been expecting for a long time 01:01:19.460 |
now, still hasn't come out. They've been training bigger models, but they're not getting 01:01:23.560 |
the same leap that they got with like, say, GPT-4. This is why, as was reported in the Wall 01:01:28.380 |
Street Journal last week, Meta had to make this announcement that their big newest model, their 01:01:33.020 |
newest Llama version, which they codenamed Behemoth, and which they trained in data centers something like 10x the size, 01:01:37.420 |
they're delaying its release because when they were done, it just wasn't that much better than the last 01:01:42.060 |
model, not better enough to release it as like a brand new product. It's why when Elon Musk put a crazy 01:01:48.260 |
amount of money into a huge data center for Grok 3, it was like a little bit better than Grok 2. 01:01:54.740 |
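For reference, the scaling laws being discussed here have a concrete mathematical shape in the research literature: loss falls as a power law in model size (and similarly in data and compute). Here's a hedged sketch; the constants are illustrative assumptions in the spirit of the values Kaplan et al. reported in 2020, not measurements.

    # Illustrative power-law scaling curve: loss(N) = (N_C / N) ** ALPHA.
    N_C = 8.8e13    # assumed scale constant, in parameters
    ALPHA = 0.076   # assumed power-law exponent for model size

    def predicted_loss(n_params):
        # Predicted loss falls as a power law in model size.
        return (N_C / n_params) ** ALPHA

    for n in [1.5e8, 1.5e9, 1.5e10, 1.5e11, 1.5e12]:
        print(f"{n:.0e} params -> predicted loss {predicted_loss(n):.3f}")

Even on paper, each 10x jump in size buys roughly the same ratio of improvement, so the absolute gains shrink as models grow; the industry's recent complaint, as described here, is that the gains have been smaller still.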
So these giant leaps aren't happening anymore. The scaling laws are beginning to falter. So if we 01:02:01.320 |
can't just assume that language models will just keep getting smarter and then we can just ask it to do 01:02:06.040 |
anything, where has the attention turned in the industry? Well, it's turned instead to doing 01:02:12.040 |
reinforcement learning tuning on existing models. So here's how this works. You take 01:02:17.280 |
what they'll call a foundational model, just an existing language model like GPT-4. 01:02:20.760 |
And then let's say you have a particular thing you want it to be better at, like producing computer 01:02:25.120 |
code. You create a big synthetic data set of: here are questions, and here are the right answers 01:02:31.280 |
to those questions. Then you go through and give these questions to the already trained foundational 01:02:37.940 |
model, and you use reinforcement learning to say, your answer is right or wrong, or here's where 01:02:43.860 |
it's right or wrong. And you use that to adjust its weights in such a way that it gets better at exactly 01:02:48.160 |
that type of problem. If you have a good synthetic data set for a particular type of problem and you 01:02:53.960 |
tune properly, you can tune your foundational model to be good at that specific 01:03:01.540 |
type of prompt. 01:03:07.680 |
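As a rough illustration of that synthetic-data idea, here's a toy sketch. The toy_model function is a stand-in for the foundational model being tuned, and the real pipelines are vastly more sophisticated; the point is only that when questions are generated programmatically, the right answer is known, so a reward signal can be computed automatically.

    import random
    import re

    def make_problem():
        # Synthetic question with a known correct answer.
        a, b = random.randint(2, 99), random.randint(2, 99)
        return f"What is {a} * {b}?", a * b

    def toy_model(question):
        # Stand-in for the foundational model; sometimes answers wrong.
        a, b = map(int, re.findall(r"\d+", question))
        return a * b if random.random() < 0.7 else a + b

    graded = []
    for _ in range(1000):
        question, correct = make_problem()
        answer = toy_model(question)
        reward = 1.0 if answer == correct else 0.0   # automatic grading
        graded.append((question, answer, reward))

    # A real pipeline would now use these rewards to adjust the model's
    # weights toward high-reward behavior; here we just report accuracy.
    print(sum(r for _, _, r in graded) / len(graded))
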
So when you hear about, you know, OpenAI saying this model is really good at 01:03:12.300 |
reasoning, what that means is they started with something like GPT-4 and generated a giant 01:03:16.420 |
synthetic data set of reasoning questions with step-by-step answers. Then they would give each of 01:03:20.640 |
these questions to GPT-4, or whatever the foundational model is, and as it spelled out its work, they 01:03:25.500 |
would say, that step is wrong, and that step is wrong. With this fine-tuned reinforcement training, it gets 01:03:29.240 |
good at solving those types of problems. You ask it those types of logic questions, it'll show its reasoning and do well at them. Math is well suited for this as well. 01:03:33.740 |
OpenAI is supposedly paying like a hundred dollars an hour for math PhDs to write all these sample 01:03:39.980 |
problems and answers, so they can create these big synthetic data sets that try to tune certain 01:03:45.180 |
foundational models to be good at solving these types of math problems. You can do this with 01:03:49.620 |
computer programming and reasoning as well. So for like certain things, you can use these synthetic 01:03:55.240 |
data sets, tune a foundational model through reinforcement learning to be good at replicating 01:03:59.200 |
the type of question and answer pairs that are in the synthetic data set. This is really where all the 01:04:03.440 |
energy is now. It's why OpenAI didn't release GPT-5, but instead released these six different 01:04:08.440 |
models with really confusing names, O3, O4 Mini, O5 Express minus three, like, you know, what are they 01:04:15.960 |
doing here? Each of those is a foundational model tuned with reinforcement learning on different types 01:04:20.720 |
of synthetic data sets. And one that's good at one thing might not be as good at others. This is sort of where 01:04:25.460 |
we are now in the industry. Now, this can create really cool tools. If you have a good data set for a 01:04:32.140 |
particular problem, and you're starting with one of these really fantastic foundational models like 01:04:36.780 |
GPT-4, you can get really cool results. The types of math problems that are in the synthetic data sets, 01:04:41.600 |
it gets better at. They did this for the International Math Olympiad, which famously has these 01:04:45.520 |
really devious, difficult math problems that require real insight. Well, they generated a huge number of 01:04:50.920 |
sample Math Olympiad-style problems, tuned a foundational model with them, and it got really 01:04:55.480 |
good at doing Math Olympiad problems. So this is where we are now. But what this gives us is 01:05:01.600 |
sort of like bespoke abilities. And they're somewhat limited to where we have the right data to do this 01:05:07.840 |
reinforcement learning. This is very different than the vision of we will scale up a general model until 01:05:14.200 |
it gets so smart that we can just ask it any of these things. Like the vision was GPT-5 would just be 01:05:19.500 |
good at the Math Olympiad, would be better at math, and would be better at programming, and would be 01:05:23.200 |
better at logic. But instead, we have to tune GPT-4 with reinforcement learning for each of these 01:05:29.740 |
separate purposes. So the issue for work is there's a lot of things we do as knowledge workers that there's 01:05:34.160 |
not a good synthetic data set to tune on, right? Because it uses a lot of sort of like shifting bespoke 01:05:38.840 |
information. I have to know who's who, how my company in particular works, the current emotional 01:05:43.720 |
state of the different people in my company. I don't have a way of tuning, or profitably 01:05:48.280 |
tuning a foundational model to be good at that particular thing. So what we're going to get 01:05:53.640 |
probably is a lot of agents that can do certain things really well or models that are tuned to 01:05:58.240 |
certain things. But we're not going to easily get to a place where we can just throw out agents for 01:06:03.280 |
every task we want to do. So even a task that seems as basic as emptying an email inbox is going to 01:06:09.540 |
remain beyond this current technological trajectory, because there are just too many specific, bespoke, 01:06:14.820 |
idiosyncratic abilities required for me to answer my inbox: the specific people involved, the 01:06:20.880 |
particular company I'm in, the particular history we have from the last six months of what's going on. 01:06:24.860 |
We don't have a synthetic data set for those types of emails and how to answer them 01:06:29.640 |
that we can tune up a model on. So I think where we are with workplace automation is: 01:06:35.520 |
we'll get maybe some more agents. But I think the big win in the workplace is going to be 01:06:43.880 |
natural language interfaces to software. I talked about that in this article, but it's 01:06:48.520 |
using the understanding portion of these models so that I can just tell software what I want it to do 01:06:52.640 |
without having to learn the software. Huge productivity boost, not sexy, but huge productivity 01:06:56.500 |
boost. And the continued expansion of the true killer app of first-generation generative AI, 01:07:01.180 |
which is smart search, where now I can ask something like a new version of ChatGPT, 01:07:05.820 |
you know, questions about anything, and it can search the web to get information, 01:07:09.000 |
it can analyze it and give me my answer, it can give me the answer in whatever format I want, 01:07:12.980 |
I can give it my own input: hey, here's a report, can you go through this and, 01:07:18.000 |
you know, summarize the main takeaway points in a table? This is a huge ability. 01:07:22.680 |
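As a sketch of what that report-to-table pattern looks like as code, consider the following; query_model is again a hypothetical stand-in for a real model API, and the prompt wording and file path are invented for illustration.

    # Toy "smart search" over a local file: read a report, ask a model to
    # reorganize it into a table. query_model is a hypothetical stand-in.

    def query_model(prompt):
        # Stand-in for a call to a hosted language model.
        return "| Takeaway | Evidence |\n| Costs fell | Section 2 |"

    def summarize_report(path):
        with open(path, encoding="utf-8") as f:
            report = f.read()
        prompt = ("Summarize the main takeaway points of this report as a "
                  "two-column table (takeaway, supporting evidence):\n\n"
                  + report)
        return query_model(prompt)

    # Example (assumes a local file exists at this hypothetical path):
    # print(summarize_report("quarterly_report.txt"))
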
I can give it, someone wrote me about this recently in response to my article, a photo: 01:07:28.180 |
hey, can you analyze what's going on in this photo? Who's in here? What's going 01:07:32.920 |
on here? That's the smart search capability: the models can take information, be it from the web, from your 01:07:38.220 |
local files, or wherever, and do a pretty good job of understanding it, reordering it, reorganizing 01:07:44.760 |
it, and producing it in a format you need. This, I think, is the real killer app. Whenever 01:07:50.000 |
people say, I'm so impressed with AI, it's usually a smart search application that they're 01:07:54.800 |
talking about. And this is no small business, right? I mean, just web searches alone earned 01:07:59.640 |
Google $175 billion in 2023. This is the way that OpenAI is going to find profitability: not $20,000- 01:08:06.020 |
a-month agents that people pay for to automate some sort of task in their job. 01:08:10.900 |
It's going to be: we're all just talking to these smart searches all day long. It just makes 01:08:17.400 |
various parts of our life easier, right? It's, can you look at this report, pull out these examples, 01:08:23.680 |
put them in an Excel spreadsheet? And then when you get to the Excel spreadsheet, you can just ask the 01:08:28.500 |
spreadsheet: can you make a chart about this? No, no, make it a pie chart. Can you make the colors red, 01:08:31.940 |
green, and blue instead? Make those labels larger. Great. Can you email that? 01:08:38.380 |
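Here's a minimal sketch of how that kind of natural-language layer over software can be wired up. The make_chart action and the query_model stub are invented for illustration; a real spreadsheet would expose many more actions, and the model call would go to an actual API.

    import json

    def query_model(instruction):
        # Stand-in: a real version would ask a language model to translate
        # the user's request into one of the actions the software supports.
        return json.dumps({"action": "make_chart",
                           "kind": "pie",
                           "colors": ["red", "green", "blue"]})

    def make_chart(kind, colors):
        print(f"Drawing a {kind} chart with colors {colors}")

    ACTIONS = {"make_chart": make_chart}

    def handle(instruction):
        spec = json.loads(query_model(instruction))  # language in, action out
        ACTIONS[spec.pop("action")](**spec)          # plain software executes

    handle("No, no, make it a pie chart, with red, green, and blue.")
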
So it's not your job being automated. It's that for some of these steps you do in your job, you're just going to 01:08:45.280 |
have more power. You're going to be able to do more things than you could before. So don't sleep 01:08:48.720 |
on natural language interfaces. Don't sleep on super searching, but also don't take too seriously 01:08:55.480 |
people that say that increases in ability over there mean, dot, dot, dot, we're going to have AGI 01:09:02.160 |
or superintelligence. I get this feedback all the time. They're like, well, look, we weren't 01:09:07.500 |
able to do this before; now we can do this. So naturally, we'll be able to do everything else. You can't; 01:09:12.280 |
this is like an EA thing. You can't just assume you're going to keep moving up the exponential 01:09:16.220 |
curve. The scaling laws failed. We're now having to synthetically train foundational models for 01:09:22.060 |
particular tasks for which we need particular data. So there is big change coming. It's going to be 01:09:28.620 |
incredibly intuitive. It's already happening now, but it's not going to be our jobs being fully 01:09:32.100 |
automated. I think that's more of an attention-catching thing, but there's not a reasonable tech 01:09:36.540 |
story for that right now. Just like there's not a reasonable tech story for a full genius-level 01:09:41.480 |
AI. There's not a reasonable tech story for superintelligence until someone can deliver you: here 01:09:46.200 |
is the technological story of how that will happen. Until then, I'm not listening. I do 01:09:51.680 |
not extrapolate from, we were able to get from A to B, therefore we should be able to get from B to 01:09:57.260 |
whatever C I want to choose. That's not the way that works. We get interested in a particular 01:10:02.560 |
application when someone can show us, this is what we need to do to get there. We're just missing piece 01:10:08.020 |
B and C, but we're working on it and we're making progress on it. That's when you start thinking about 01:10:11.820 |
that. So anyways, not to get too much down that track, but when it comes to, will my job be automated? 01:10:16.320 |
No, but certain parts of how you deal with information are going to become way more powerful. Certain ways 01:10:23.160 |
you used to have to talk to software are going to become way easier. And I think this is going 01:10:27.180 |
to make people more effective, productive, and make jobs better. And that's like in the next two years 01:10:31.880 |
timeframe that we're talking about. So I actually think that stuff's going to be kind of exciting 01:10:35.000 |
and not all these AI companies will survive, but there's enough value there for at least some of 01:10:40.860 |
these to survive, even with the big expenses of all the compute, et cetera, they have to do. 01:10:46.100 |
175 billion a year for Google is, yeah, it's a lot of cash. 01:10:50.960 |
It's a lot of cash. Yeah. Their stock fell when it was revealed that Google searches are down 01:10:56.580 |
because more people were using ChatGPT instead. Yeah, it's huge business. And I think 01:11:03.200 |
this layer between searching and answers is going to be a really big change. We're really 01:11:08.120 |
used to getting lists of websites, right? But I do think, by the way, Google will catch up. There's a 01:11:14.460 |
really interesting story here: Ed Zitron won a Webby award for a podcast episode he did on his 01:11:20.260 |
Better Offline podcast; I think it was called The Guy Who Ruined Google Search. The argument 01:11:26.220 |
of this sort of audio essay was that Google has, on purpose, not improved 01:11:34.300 |
Google search along these lines of, let me just talk to it and you give me the answers, 01:11:40.660 |
because they can sell more ads; having more and more pages of results, where you have to do more 01:11:45.420 |
searches, is better for them. So they're kind of hobbling themselves on purpose because it keeps 01:11:49.860 |
revenue higher now. But it could shoot them in the foot in the long term because, you know, 01:11:55.140 |
Gemini is not very good. And no one looks at those Gemini answers at the top of Google, 01:11:59.440 |
but more and more people just go to ChatGPT and ask their questions. I think Google can catch up. 01:12:04.960 |
That's not the problem. When there's $175 billion at stake, and people are so used to Google.com, 01:12:10.580 |
yeah, all they have to do is say, all right, we're a little behind, but we're going to 01:12:14.960 |
change this to look like what people get with ChatGPT, and people will go, yeah, I'm still used to going 01:12:20.020 |
to Google.com. So I think Google could keep up; there's so much money at stake. But that's, 01:12:24.240 |
I don't know. It's like OpenAI is trying to fake people out with talk of agents, 01:12:28.440 |
and then Sam Altman just talks about crazy stuff, you know, Dyson spheres to capture all the power, 01:12:33.120 |
superintelligence. Crazy things come out of there. And the more they get worried 01:12:37.500 |
financially, the crazier they get with what they talk about. But this is probably what they're really 01:12:42.380 |
thinking; maybe they just don't want to talk about it because they don't want 01:12:45.100 |
to give Google too much warning: we need to win search. We need to be powering the 01:12:50.280 |
interface for all the software that people are using. That's where the real power and money is, I think, 01:12:54.460 |
not, um, you know, some HAL 9000 that just does all your work for you. 01:12:59.300 |
All right. Well, anyways, that's all the time we have for today. Uh, thank you for listening. We'll 01:13:06.980 |
be back next week with another episode. Um, and until then as always, stay deep. Hey, if you like 01:13:12.300 |
today's episode, you might also like episode 344, which is titled You Are Not a Cog. It touches on 01:13:18.540 |
some complementary themes. I think you'll like it. Check it out. So one of the things I love to do as a 01:13:23.160 |
computer science professor who also thinks more broadly about how we live and work in the modern 01:13:28.820 |
digital environment is to draw connections between these two worlds of mine, the computer science and