Why Your Phone Makes You Feel Empty, Lost & Addicted... | Cal Newport
Chapters
0:00 Would Kant Use TikTok?
24:49 Do “distraction free” apps work?
28:36 How can I finish what I start?
31:56 Is context shifting slowing down my work as a teacher?
37:20 How should I organize my official podcast duties with my traditional teaching requirements?
43:20 Is “slow living” and “slow productivity” the same thing?
50:18 Deep work blocks in the afternoon
54:22 Re-designing a life with a new job
1:05:14 How do recommendation algorithms work?
Today, I want to give you an argument about why you feel uneasy about your smartphone 00:00:08.860 |
It is, however, I think an important argument to hear. 00:00:13.700 |
Because it draws from a source that far predates these modern digital technologies. 00:00:20.860 |
I'm going to make you an argument about why you're uneasy about your smartphone that goes 00:00:24.380 |
back to the foundational moral philosophy of Immanuel Kant. 00:00:30.980 |
We're talking about a philosopher from the 1700s who is arguably the most influential 00:00:40.260 |
And it turns out, if you read Kant correctly, he has a lot to say about decidedly modern technologies. 00:00:50.020 |
So I'm going to ask you to stick with me here. 00:00:52.980 |
We're going to get a little bit technical, but I'm going to walk you through it. 00:00:56.580 |
All of these ideas will be accessible, and we're going to come out on the other end of 00:00:59.260 |
this exploration with a better understanding of the role technology plays in your life 00:01:06.040 |
right now, why that makes you uneasy, and changes you can make. 00:01:09.660 |
So I think it's a cool thing to add to the argument. 00:01:12.060 |
All right, so I'm going to be drawing today entirely from a single academic paper from 2021. 00:01:22.540 |
I have a preprint version here that I could access, which has a link to the official final version. 00:01:28.620 |
I'll put it on the screen here for people who are watching instead of just listening. 00:01:33.420 |
The paper we're going to be drawing from is titled, Is There a Duty to Be a Digital Minimalist? 00:01:41.420 |
This was published in the Journal of Applied Philosophy back in the summer of 2021. 00:01:48.180 |
For those of you who are tracking at home, that's volume 38, number four. 00:01:51.380 |
The authors are Timothy Aylsworth and Clinton Castro. 00:01:55.700 |
These are philosophers from Florida International University. 00:01:59.220 |
All right, so I'm going to jump through this paper somewhat selectively to pull out what we need. 00:02:08.340 |
So I'm actually going to start by jumping ahead here a little bit. 00:02:11.700 |
I'll sort of keep up with this on the screen, I suppose, for those who are watching. 00:02:16.380 |
But I'm going to read everything, so don't worry if you're just listening. 00:02:22.920 |
They're talking here about digital minimalism. 00:02:29.100 |
Authors like Cal Newport, who coined the term digital minimalism, argue that we would be 00:02:33.780 |
better off if we restructured our relationships with technology on our own terms. 00:02:38.620 |
He understands digital minimalism as, quote, a philosophy of technology use in which you 00:02:42.980 |
focus your online time on a small number of carefully selected and optimized activities 00:02:48.420 |
that strongly support things you value, and then happily miss out on everything else. 00:02:53.540 |
A philosophy of technology use is a personal philosophy that covers which digital tools 00:02:57.820 |
we allow into our lives, for what reasons, and under what constraints. 00:03:04.140 |
Newport's definition outlines a noble ideal, but we are happy to adopt a less demanding 00:03:08.900 |
understanding of this notion, and kind of jumping ahead here a little bit. 00:03:12.980 |
We understand a digital minimalist as one whose interactions with digital technology 00:03:16.840 |
are intentional, such that they do not conflict with their ends. 00:03:21.180 |
For most, being a minimalist will involve a serious reduction, in some cases to the 00:03:26.580 |
point of elimination, of interactions with smartphones, smartphone apps, and social media 00:03:32.460 |
For some, it may even require living up to Newport's ideal. 00:03:39.980 |
Newport may very well be right that we have prudential reasons to reduce our smartphone use. 00:03:44.900 |
Perhaps most people would be better off if they became digital minimalists. 00:03:49.020 |
But if the Kantian argument that follows is sound, then we might have even more compelling 00:03:54.420 |
reasons to adopt the end of digital minimalism. 00:03:59.180 |
All right, so let's make sense of what just happened there. 00:04:02.980 |
They introduced my idea of digital minimalism. 00:04:05.820 |
They simplify it a little bit to make it a little bit more general. 00:04:08.220 |
But they say, basically, yes, this idea that Newport introduced is one of being very intentional 00:04:12.940 |
about how you use your technology so that it supports instead of impeding what you value. 00:04:18.340 |
That is the core idea of my book, Digital Minimalism. 00:04:21.020 |
It's at the core of my personal technology philosophy. 00:04:24.180 |
The key thing they say, this is setting up the argument that we're going to explore, 00:04:29.060 |
is they say, look, Cal in his book has what they call prudential reasons for why you should be a digital minimalist. 00:04:39.340 |
I go through, like, hey, when you let technology get in the way of your values, there's like 00:04:45.140 |
all this stuff that you don't do that you would otherwise like to do, and I think you're worse off for it. 00:04:49.580 |
It's a sort of pragmatic, practical, direct argument to your experience and intuition. 00:04:54.380 |
They're saying, yes, that might all be true, but we are going to make an argument that 00:04:59.460 |
draws from Kant that says there is also a moral reason, that we can draw from moral 00:05:05.160 |
philosophy that says whether you want to or not, you are obligated to be a digital minimalist. 00:05:12.320 |
That now is the argument that is made in this paper that we are going to draw out, a moral argument. 00:05:20.340 |
So I'm going to jump back to the beginning here. 00:05:23.420 |
They set up a quick example, which they use to explore some of the issues with modern technology. 00:05:28.940 |
So they begin by drawing from a quote from a comedian. 00:05:35.620 |
"I really do," says comedian Esther Povitsky. 00:05:42.540 |
"I open books, and then I black out, and I'm on Instagram, and I don't know what happened." 00:05:47.260 |
To many of us, this is a familiar occurrence. 00:05:49.240 |
All too often, we set out to complete a task, but we are interrupted and subsequently derailed 00:05:59.380 |
Incidents of this kind might involve a moral failure, for insofar as we are morally required 00:06:05.860 |
to cultivate and protect our autonomy, we fail to meet this requirement by falling prey 00:06:15.140 |
So the first link in the chain they're going to make in this moral argument is that our 00:06:22.600 |
issues with smartphone usage, as captured by this anecdote of this comedian saying, 00:06:29.800 |
"I'm going to read this book, but I end up on Instagram," they say, could be seen 00:06:34.000 |
as impacting our autonomy, and if it impacts our autonomy, we might be able to find a moral argument here. 00:06:41.480 |
First, however, they have to establish, is the way we use things like smartphones actually 00:06:47.580 |
affecting our autonomy, and what they do here is they go through three different ways that 00:06:51.120 |
other thinkers have thought about autonomy, and for each of these they say it's basically 00:06:54.700 |
self-evident that the behavior we're thinking about, the behavior we observed in that comedian 00:06:58.640 |
Povitsky, is violating these definitions of autonomy. 00:07:05.940 |
They say, "In order to substantiate the claim that smartphone addiction undermines autonomy, 00:07:16.220 |
Personal autonomy has been defined in a variety of ways, but we believe that a minimal definition 00:07:20.200 |
of self-governance is sufficient for our purposes here." 00:07:24.260 |
So they're going to go through some examples here of definitions of autonomy. 00:07:28.180 |
They say, "Let's return to Povitsky's case from the beginning. 00:07:32.940 |
According to what they call the Frankfurt-Dworkin model, Povitsky's first-order desire 00:07:40.480 |
to check Instagram while reading is inconsistent with her higher-order desire, i.e., to want to read. 00:07:48.420 |
All right, so the Frankfurt-Dworkin model talks about first-order and higher-order desires: 00:07:54.100 |
the thing that is actually directing your activities right now versus what you want 00:07:57.940 |
to be the case, and if these are out of sync, you have an autonomy problem. 00:08:02.580 |
So like by that model, looking at Instagram when you want to read is an autonomy issue 00:08:06.940 |
because your higher order desire is, "I want to read," but your first order desire that's 00:08:10.660 |
actually directing your activities is looking at Instagram. 00:08:13.380 |
All right, here's another model due to Watson. 00:08:16.420 |
On Watson's characterization, what is distinctive about compulsive behavior is that the desires 00:08:22.840 |
and emotions in question are more or less radically independent of the evaluational system of the agent. 00:08:29.500 |
Povitsky's smartphone use is inconsistent with her evaluative judgments about what she 00:08:32.660 |
ought to be doing, and the behavior is compulsive. 00:08:34.900 |
All right, so Watson says, "If you're doing something that you would evaluate to be not 00:08:40.780 |
good or less good than something else, then the behavior must be compulsive." 00:08:45.300 |
By the way, that connects to the way psychologists think about behavioral addiction, the persistence 00:08:52.700 |
in an activity even though it's not valuable or it's in the way of things you value. 00:08:58.220 |
Finally, they have a model of autonomy due to Bratman. 00:09:02.180 |
Bratman defends a model of autonomy that requires harmony between what the agent does and her long-term plans. 00:09:11.780 |
Surely Povitsky's behavior fails on this count as well. 00:09:15.180 |
We can suppose that Povitsky, like many of us, would like to read many books over the 00:09:18.220 |
course of her life and to develop a disposition of being able to sit and enjoy reading for long stretches. 00:09:24.140 |
The action of looking at her phone compulsively is not consistent with her long-term plans. 00:09:28.300 |
All right, so to summarize, no matter which model one adopts, the result is likely to be the same: 00:09:34.020 |
Povitsky is not autonomous with respect to her smartphone usage. 00:09:38.340 |
All right, so we've established this first link in our argumentative chain. 00:09:43.860 |
The way we use smartphones today seems to be hurting our autonomy. 00:09:49.800 |
We can look at several official definitions of autonomy and see that smartphone usage 00:09:55.300 |
of the type that we think of, the type in that example, is breaking those models. 00:10:01.700 |
I wanted to interrupt briefly to say that if you're enjoying this video, then you need 00:10:06.540 |
to check out my new book, Slow Productivity: The Lost Art of Accomplishment Without Burnout. 00:10:14.020 |
This is like the Bible for most of the ideas we talk about here in these videos. 00:10:19.460 |
You can get a free excerpt at calnewport.com/slow. 00:10:33.780 |
This is the next link in their moral argument chain. 00:10:37.780 |
This is where we turn our attention to Immanuel Kant. 00:10:43.100 |
Although some ethicists reject the very notion of duties to oneself, Kant makes them a central part of his moral theory. 00:10:50.780 |
In fact, I'll turn to this in the article here. 00:10:55.220 |
For those who are watching along at home, I'm just scrolling. 00:11:03.000 |
He says that they take first place and are the most important of all. 00:11:07.780 |
He goes so far as to suggest that duties to oneself are the foundation of duties to others, 00:11:12.020 |
making them the precondition of all moral duties. 00:11:16.380 |
But he worries that they have not been properly understood and claims that no part of morals 00:11:20.740 |
has been more defectively treated than this of the duties to oneself. 00:11:24.980 |
He thinks that they have been misunderstood as a mere elevation of self-interest, a duty 00:11:28.560 |
to promote one's own happiness, which he dismisses as an absurdity. 00:11:32.740 |
Rather than grounding such duties in egoism, Kant argues that humanity, i.e. rational nature, 00:11:38.460 |
has an absolute inherent value, and this generates self-regarding obligations insofar as the agent 00:11:44.540 |
is morally required to respect humanity in her own person. 00:11:47.900 |
Thus duties to oneself are derived from the humanity formulation of the categorical imperative, 00:11:51.780 |
which tells us that we must always treat humanity, even in our own person, as an end, never merely as a means. 00:11:58.660 |
All right, so Kant is arguing we have a duty to ourselves as much as we have a duty to 00:12:05.740 |
other people, and we have a moral duty in particular to protect what is most valuable in us, our rational agency. 00:12:20.140 |
Kant famously claims that human beings, in virtue of their rational agency, have a uniquely elevated moral status, a dignity. 00:12:28.340 |
So the idea that we are rational beings and no other creatures or objects are gives us 00:12:33.220 |
this sort of special value, and preserving what he calls the dignity of ourselves as rational beings is essential. 00:12:44.180 |
We have an obligation to help protect this in ourselves and others. 00:12:49.860 |
Moving on here, on this view, our actions can either express or fail to express the 00:12:54.820 |
kind of respect that is becoming of human dignity. 00:13:01.020 |
So he's saying our actions need to be focused on respecting our human dignity, and our human 00:13:08.980 |
dignity is based on the idea that we are rational beings. 00:13:11.220 |
All right, so we're really in the weeds here, Jesse. 00:13:14.020 |
We're deep in moral reasoning, but out of this and a bunch of other words that I'm kind 00:13:18.420 |
of skipping, we get to an actual argument form here, all right? 00:13:26.100 |
So they end up with three propositions that lead to a conclusion. 00:13:33.980 |
Proposition one: humanity, i.e., rational agency, has an objective, unconditional, non-fungible value, which is dignity. 00:13:39.980 |
Proposition two, anything that has dignity ought to be respected as an end and never treated as a mere means. 00:13:48.780 |
Proposition three, if humanity ought to be respected as an end and never treated as a 00:13:53.020 |
mere means, then we have an imperfect duty to cultivate and protect our rational agency. 00:13:58.820 |
Therefore, the conclusion of these three propositions: we have an imperfect duty to cultivate and protect our rational agency. 00:14:06.700 |
All right, we're getting in the weeds here, logical philosophy, morality all pulled together. 00:14:14.820 |
They then put these together, they get to their core argument. 00:14:20.120 |
We have an imperfect duty to cultivate and protect our rational agency. 00:14:24.140 |
If we have an imperfect duty to cultivate and protect our rational agency, then we ought 00:14:27.340 |
to adopt the end of digital minimalism, therefore we ought to adopt the end of digital minimalism. 00:14:33.460 |
If you take my discrete mathematics course at Georgetown, where we study propositional 00:14:38.100 |
logic, you will actually recognize this argument and could probably turn it into the corresponding formal proof. 00:14:46.460 |
So basically we have just done a lot of reasoning based on Kant's ideas that end up with the 00:14:52.060 |
conclusion we ought to adopt the end of digital minimalism. 00:14:59.680 |
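The argument they reference here is modus ponens from propositional logic (P, P → Q, therefore Q). In the spirit of that discrete math aside, here is a small, illustrative Python sketch (my own, not from the episode or the paper) that checks an argument form's validity by brute-force truth table:

```python
from itertools import product

def implies(p, q):
    # Material implication: p -> q is false only when p is true and q is false.
    return (not p) or q

def argument_is_valid(premises, conclusion, n_vars):
    """An argument form is valid if, in every truth assignment where
    all premises hold, the conclusion holds too."""
    for assignment in product([True, False], repeat=n_vars):
        if all(p(*assignment) for p in premises) and not conclusion(*assignment):
            return False  # found a counterexample assignment
    return True

# D = "we have an imperfect duty to cultivate and protect our rational agency"
# M = "we ought to adopt the end of digital minimalism"
premises = [
    lambda D, M: D,              # premise 1: D
    lambda D, M: implies(D, M),  # premise 2: D -> M
]
conclusion = lambda D, M: M      # conclusion: M

print(argument_is_valid(premises, conclusion, n_vars=2))  # True: modus ponens is valid
```

Swapping in the fallacious form "affirming the consequent" (M, D → M, therefore D) makes the checker return False, which is exactly the distinction a propositional logic course would have you prove.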
If we simplify all of this, we're basically saying it is important to respect our own rationality. 00:15:11.660 |
The way that we use smartphones when we're unintentional robs us of our ability to do 00:15:18.940 |
this because it robs us of autonomy, and autonomy is at the core of respecting the dignity of 00:15:25.260 |
being a rational being because at the core of being a rational being is the ability to make autonomous decisions. 00:15:32.620 |
The Kantian framework sees a core tension: we need to respect the fact 00:15:40.020 |
that we are rational beings, but smartphones take away our ability to make rational decisions 00:15:44.420 |
about what we want to do with our life and our time. Therefore, an approach to life that 00:15:52.660 |
reduces smartphones' ability to take away our autonomy is justified. 00:15:59.160 |
Digital minimalism is sort of the definition of such a life. 00:16:05.660 |
It says we want to be intentional, not be robbed of our autonomy, therefore there's 00:16:09.240 |
a sort of fundamental Kantian moral argument that we should be very intentional about our 00:16:13.220 |
technology use using something like digital minimalism. 00:16:19.100 |
Let me read the conclusion of this paper because I think they sum this up very nicely. 00:16:24.100 |
They're all lit up here on the screen for those who are watching at home. 00:16:26.100 |
All right, so here's the conclusion of the authors in this paper. 00:16:30.260 |
We have argued that there is a moral obligation to be intentional about our use of smartphones 00:16:36.420 |
We have this duty because we are required to protect the most valuable commodity we possess: our autonomy. 00:16:42.820 |
Kant believes that the proper exercise of our autonomy is the only thing that is good 00:16:46.260 |
without qualification, something that shines like a jewel having its full worth in itself. 00:16:52.860 |
To wantonly forfeit some of our agency by falling prey to technological heteronomy 00:16:59.980 |
is to demonstrate a failure to respect this precious capacity as the treasure that it is. 00:17:04.980 |
All right, so why are we geeking out on this? It's a technical academic argument for 00:17:13.140 |
the type of things we talk about here on the show, and actually not just the type of things, 00:17:17.620 |
specifically what we talk about on the show because, of course, they're talking about digital minimalism. 00:17:23.940 |
It's because I think it is easy in thinking about technology and human flourishing to 00:17:31.060 |
fall back on arguments such as, look, kids these days, for example, they're always using 00:17:38.020 |
different technology, we get worried about it, but that's just the wheels of progress. 00:17:43.660 |
Or we fall back on an argument that says every new technology creates moral panics, right? 00:17:50.600 |
We worried about the car, but now we just drive cars. 00:17:52.660 |
We worried about TV, now we don't worry about TV as much. 00:17:56.300 |
It's easy to fall back on these arguments of status quo thinking. 00:18:01.180 |
What's critical about this particular argument is it says, no, there's justifications for 00:18:07.340 |
our concerns about these technologies that are much more fundamental than thinking about 00:18:13.140 |
Our uneasiness about these technologies is not just a naive reaction to the latest techno 00:18:18.180 |
disruption in a long line of techno disruptions that ultimately end up being not so bad. 00:18:24.360 |
We are actually reflecting a specific harm, denial of autonomy, and we can go back 00:18:32.060 |
to Kant, or before, to see that this is at the core of the human experience, at the core of moral philosophy. 00:18:39.900 |
And so yes, we're uneasy, not because we're naive; we're uneasy because something basic is being threatened. 00:18:48.280 |
So this Kantian argument is pointing towards the exceptional nature of the issue we face with smartphones. 00:18:55.500 |
We cannot, ontologically speaking, just put it in the same category as any other type of technology worry. 00:19:01.320 |
It is a specific technical fear which requires analysis on its own terms and when we do that, 00:19:06.420 |
we see there are specific harms here that cannot be ignored. 00:19:12.180 |
So if someone's giving you a lot of trouble about your digital minimalism, if they're 00:19:17.420 |
making fun of you or if they're trying to self-justify their own heavy phone use, you 00:19:21.820 |
can now throw a lot of sort of annoying technical philosophical terms at them. 00:19:27.500 |
You could say things like heteronomy and ontological. 00:19:34.240 |
You can mention the categorical imperative, keep dropping the word Kantian, and they will leave you alone. 00:19:49.660 |
So I actually found that article, Jesse, because there's a follow-up article that said, all 00:19:54.000 |
right, if that's true, there's also an obligation others have to protect your autonomy through 00:20:01.960 |
That there's like a moral imperative not to distract other people. 00:20:14.240 |
All right, let's talk about our longtime friends at Element. 00:20:21.560 |
Element helps anyone stay hydrated without the sugar and other dodgy ingredients found in typical sports drinks. 00:20:30.300 |
It's important to have the right electrolytes, but you don't want sugar. 00:20:36.340 |
It is a zero sugar electrolyte drink mix and sparkling electrolyte water. 00:20:43.360 |
The mix you add to your own water or the already canned sparkling water with the electrolyte 00:20:49.760 |
mix in it, which you can just grab out of the fridge. 00:20:52.720 |
They're born out of a growing body of research revealing that optimal health outcomes occur 00:20:56.000 |
at sodium levels two to three times government recommendations. 00:21:00.640 |
So each stick of powder or can of sparkling water delivers a meaningful dose of electrolytes 00:21:07.960 |
free of sugar, artificial colors, or other dodgy ingredients. 00:21:11.560 |
It's incredibly popular among many different communities. 00:21:17.880 |
Anytime that I feel like I'm dehydrated a little bit, so from like after exercising, 00:21:22.920 |
after podcasting, you lose a lot of moisture when you talk, after a long day of lecturing, 00:21:28.360 |
I add the drink mix to a big Nalgene full of water. 00:21:33.880 |
We've got a big box of it because I love that it replaces electrolytes, but it doesn't have sugar. 00:21:44.480 |
There's something else they're offering now to think about this winter. 00:21:51.400 |
So the limited time Element chocolate medley, including the flavors chocolate mint, chocolate 00:21:55.600 |
chai, and chocolate raspberry, you could serve those in hot liquid. 00:22:00.400 |
So get your electrolyte mix in a sort of refreshing hot liquid after you're running outside or 00:22:06.560 |
I think that's a fun addition to the Element family for the winter. 00:22:15.520 |
Members of my community can receive a free Element sample pack with any order when they go to drinkelement.com/deep. 00:22:28.300 |
Keep in mind you can try Element totally risk-free. 00:22:30.200 |
If you don't like it, give it away to a salty friend and we'll give you your money back. 00:22:33.960 |
They have a very low return rate and very high reorder rate because it is fantastic. 00:22:38.840 |
So go to drinkelement.com/deep and you will get a free sample pack with any order. 00:22:44.320 |
I also want to talk about our friends at Mint Mobile. 00:22:48.760 |
I love a great deal as much as the next guy, but I'm not going to crawl through a bed of 00:22:58.740 |
So when Mint Mobile said it was easy to get wireless for $15 a month with the purchase 00:23:02.340 |
of a three-month plan, I called them on it, looked into it. 00:23:06.400 |
Turns out it really is that easy to get wireless for $15 a month. 00:23:10.460 |
The hardest part of the process is just the time you're going to spend breaking up with your old provider. 00:23:17.760 |
I have been recommending Mint Mobile to a lot of people recently because my son is in middle school. 00:23:23.620 |
We know a lot of other middle school parents. 00:23:25.100 |
They're going through this thought process of my kid needs a phone for X, Y, or Z, taking 00:23:29.880 |
the public transportation, getting picked up from practice. 00:23:33.800 |
I've been telling them just, hey, buy a flip phone on Amazon and give them the $15-a-month Mint Mobile plan. 00:23:40.120 |
It's cheap, it's easy, and they'll have access. 00:23:43.600 |
So I've been telling more and more people: use Mint Mobile to quickly set up a dumb phone for their kids. 00:23:49.480 |
So if you want to get started, go to mintmobile.com/deep. 00:23:53.740 |
There you'll see that right now all three-month plans are only $15 a month, including the unlimited plan. 00:23:58.360 |
All plans come with high-speed data and unlimited talk and text delivered on the nation's largest 5G network. 00:24:04.640 |
You can use your own phone, or a phone you just buy on Amazon if you want a dumb phone, 00:24:08.960 |
whatever you want to do, with any Mint Mobile plan, and bring your phone number along with all your existing contacts. 00:24:13.920 |
Find out how easy it is to switch to Mint Mobile and get three months of premium wireless service for $15 a month. 00:24:21.160 |
To get this new customer offer and your new three-month premium wireless plan for just $15 a month, go to mintmobile.com/deep. 00:24:30.240 |
Cut your wireless bill to $15 a month at mintmobile.com/deep. 00:24:34.120 |
$45 upfront payment required, which is equivalent to $15 a month. 00:24:38.640 |
New customers on first three-month plan only. 00:24:40.840 |
Speeds slower above 40 gigabytes on unlimited plan. 00:24:43.080 |
Additional taxes, fees, or restrictions apply. 00:24:53.400 |
"I just ordered Digital Minimalism to help me stop wasting time. 00:24:57.040 |
I tried, without success, to use some distraction-free apps." 00:25:02.480 |
Well, let me back up a little bit, because it looks like you're thinking about digital 00:25:07.560 |
minimalism as a collection of tips for trying to stop wasting time or to be less distracted. 00:25:17.440 |
That's the standard cultural paradigm we often have for thinking about advice, especially 00:25:24.600 |
I want to read the five ways to save your attention in some sort of magazine article, 00:25:30.880 |
It looks like you heard about distraction-free apps. 00:25:37.240 |
The first thing I want you to understand is that digital minimalism is not a collection 00:25:43.040 |
It's a philosophy of technology use, a consistent way of thinking about the role of technology 00:25:48.440 |
in your life and how you curate and engage with technologies in your life. 00:25:54.800 |
You either need to adopt the philosophy or not. 00:25:59.620 |
You can't just pick and choose specific things to show up in the philosophy itself. 00:26:05.440 |
The metaphor I like to use is cleaning out a closet that's overstuffed with junk. 00:26:10.720 |
So if you had a closet that's overstuffed with junk, what you are doing—so the equivalent 00:26:15.800 |
in the closet metaphor of like, "Hey, I'm going to try to help my online distraction 00:26:19.480 |
because I heard apps might help"—that's the equivalent in our closet metaphor of going 00:26:23.480 |
to the container store and being like, "Hey, I bought some organizers." 00:26:27.560 |
And then you return to your overstuffed closet and you're like, "Okay, I have a few organizer bins." 00:26:38.160 |
Well, the Marie Kondo approach is, "I'm going to empty the whole thing out, empty it down to nothing. 00:26:43.560 |
I'm going to put everything that was in that closet in the piles. 00:26:45.880 |
I'm going to go through and say, 'Which of this stuff do I really, really need?' 00:26:49.400 |
And that stuff I will put back into the closet very carefully. 00:26:52.240 |
If I don't really need it, it doesn't go back in." 00:26:58.320 |
You start from zero and add back in the stuff you need carefully, not just trying to throw 00:27:03.700 |
organizational bins at the junk that's already in there. 00:27:09.640 |
Instead of just throwing a particular piece of advice at your digital distraction, you're 00:27:13.280 |
going to reinvent your digital life from scratch, taking everything out, reflecting for a month, 00:27:18.480 |
and then only adding back in what matters with rules about how you're going to use it. 00:27:23.400 |
To get to your specific point about distraction-free apps, in this process they can have a use, typically a temporary one. 00:27:35.320 |
If there's a particular technology that you need to use but only use in a limited way 00:27:39.760 |
and you have a very strong urge to keep going back to it, distraction-free apps can help 00:27:44.520 |
you train to resist that urge because it makes it very difficult to access those technologies 00:27:52.600 |
Typically what happens with people is after a few months of using distraction-free apps, 00:27:58.040 |
they lose the urge to go use that technology compulsively because they've gotten that groove 00:28:03.580 |
out of their mind, their reward circuit weakens, and then they don't use them anymore. 00:28:08.160 |
So you might use a distraction-free app as part of your efforts to recreate your digital 00:28:11.960 |
life, but typically their uses are more temporary, they're a training tool, not a permanent feature 00:28:20.280 |
I love the Kondo advice, how when you get rid of something you express gratitude toward it. 00:28:27.320 |
"TikTok, I express gratitude for the role you played in my life." 00:28:39.140 |
Next question is from JP: "I get stressed with my goals due to fear of failure. I keep 00:28:43.880 |
everything in my head and can't distinguish between urgent and non-urgent. I pretty much 00:28:48.820 |
Well, look, in the modern world, you can't organize your life just in your head. 00:28:56.640 |
Just trying to remember what you want to do, all the things you have to do, your priorities, 00:29:02.760 |
and somehow use all of this to make a decision about what to do next. 00:29:08.900 |
It's like trying to teach a bear to drive a car. 00:29:12.100 |
It might be funny or terrifying, but it's probably not going to work very well. 00:29:17.200 |
The human brain can't, on its own, organize a modern life. 00:29:22.880 |
Well, you're going to need something like multi-scale planning, which is a system that 00:29:28.960 |
can get all of the stuff you need to do and your plan for how you're going to tackle it 00:29:34.800 |
out of your head and into a sort of trusted permanent system that you can frequently access. 00:29:40.080 |
You basically have to extend your mind like a cyborg with other tools to make this complicated 00:29:46.000 |
task of organizing your life much more tractable. 00:29:49.700 |
So multi-scale planning, which I talk about a lot on this show, has you planning things 00:29:54.380 |
on multiple scales, each of which has its own system to go with it. 00:29:58.080 |
At the highest scale, you have a plan for the current season or quarter where you're laying out your major priorities. 00:30:04.460 |
You reference that quarterly or seasonal plan every week. 00:30:08.220 |
When you make a weekly plan, you physically write out your weekly plan. 00:30:11.820 |
Also when you do your weekly plan, you confront your calendar. 00:30:15.100 |
You add to your calendar appointments with yourself to work on particularly important 00:30:19.300 |
priorities from your quarterly or seasonal plans. 00:30:22.780 |
So now you have a written weekly plan and a calendar that's been updated and corrected. 00:30:27.660 |
And then every day, you look at your calendar and your weekly plan when you make a time 00:30:31.180 |
block plan for that day where you give every minute of your workday a job. 00:30:34.900 |
So you're not just trying to decide on the fly, "What do I want to work on next?" 00:30:40.760 |
You've made a plan for the time that remains in your day between meetings and other appointments, 00:30:44.680 |
what you want to do at that time, so you can balance your energy with your needs. 00:30:51.680 |
You can avoid excessive context switching, etc. 00:30:55.980 |
So each of these levels, you have different tools. 00:30:58.740 |
All of this is going to be supported by a task system where you're going to keep track of all your obligations. 00:31:03.620 |
You'll look at that task system when you're doing your weekly plans. 00:31:05.980 |
You'll reference it during admin blocks in your daily plans. 00:31:08.820 |
All of these are external systems with structure around them that you use so that your brain 00:31:16.340 |
doesn't have to be responsible on its own for keeping track of what you have to do and when you plan to do it. 00:31:22.180 |
So you need something like multi-scale planning if you're going to keep track of your life. 00:31:30.620 |
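To make the hierarchy described above concrete, here is a minimal, illustrative sketch of multi-scale planning as nested external systems. The Python class names and fields are my own inventions for illustration, not an official tool or the show's terminology:

```python
from dataclasses import dataclass, field

@dataclass
class QuarterlyPlan:
    objectives: list  # big-picture priorities for the current season or quarter

@dataclass
class WeeklyPlan:
    quarterly: QuarterlyPlan

    def schedule_priority_blocks(self):
        # Turn each quarterly objective into a calendar appointment with yourself.
        return [f"Deep work block: {obj}" for obj in self.quarterly.objectives]

@dataclass
class DailyTimeBlockPlan:
    weekly: WeeklyPlan
    blocks: dict = field(default_factory=dict)  # time slot -> job

    def assign(self, slot, job):
        # Time-block planning: give every slot of the workday a job.
        self.blocks[slot] = job

# Each scale feeds the one below it: quarter -> week -> day.
quarter = QuarterlyPlan(objectives=["finish book chapter"])
week = WeeklyPlan(quarterly=quarter)
day = DailyTimeBlockPlan(weekly=week)
for i, block in enumerate(week.schedule_priority_blocks()):
    day.assign(f"09:00+{i}h", block)
print(day.blocks)
```

The design point is simply that each level is an external, trusted store referencing the level above it, so your brain never has to hold the whole picture at once.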
So you shouldn't be worried, I would say, that you're struggling to do this in your head alone. 00:31:38.340 |
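To make the structure concrete, here's a toy sketch in Python of how these planning scales feed into each other. All the plan contents, function names, and time blocks are invented for illustration; this isn't an official implementation of multi-scale planning, just the shape of it.

```python
# Toy sketch of multi-scale planning: each scale consults the one
# above it. All plan contents and names here are invented examples.

quarterly_plan = {"objectives": ["finish paper draft", "revamp course site"]}

def make_weekly_plan(quarterly):
    """Weekly planning: reference the quarterly plan, confront the
    calendar, and add appointments with yourself for key priorities."""
    priority = quarterly["objectives"][0]
    return {
        "priorities": [priority],
        "calendar": [("Tue 09:00-11:00", priority)],  # appointment with yourself
    }

def make_daily_plan(weekly):
    """Daily planning: look at the calendar and weekly plan, then give
    every minute of the workday a job via time blocks."""
    return [
        ("09:00-11:00", weekly["priorities"][0]),
        ("11:00-12:00", "email / admin block (reference task system)"),
        ("13:00-15:00", "meetings"),
        ("15:00-17:00", "remaining weekly plan items / shutdown"),
    ]

weekly = make_weekly_plan(quarterly_plan)
daily = make_daily_plan(weekly)
```

The point is the direction of reference: the daily time-block plan is derived from the weekly plan, which is derived from the quarterly plan, so no single scale has to live in your head.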
Actually, a true point about bears driving cars, Jesse, really hard to get insured. 00:31:46.380 |
Especially if you live on the west coast of Florida. 00:31:49.460 |
If you live on the west coast of Florida, yeah, get a bear to drive a car, high insurance premiums. 00:31:58.900 |
I'm a teacher looking to improve my efficiency with admin tasks. 00:32:03.780 |
When I have a quiz in two different subjects to grade and record, is the task context switching 00:32:08.900 |
effect less if I were to grade and record one class and then the other, or if I were to grade both first and then record both? 00:32:15.860 |
Well, it's a good question because I'm glad you're thinking about context shifting as 00:32:19.660 |
more or less the number one productivity poison you want to be wary of. 00:32:25.340 |
It takes time to switch your attention from one target to another. 00:32:29.900 |
So if you're moving your attention back and forth rapidly, you're going to put your mind 00:32:32.980 |
into the state of continuous partial attention, which is a self-imposed cognitive deficit. 00:32:41.980 |
In this case, recording grades into a gradebook is mechanical and largely non-cognitive. 00:32:52.180 |
In other words, you don't have to do difficult thinking. 00:32:56.080 |
You're just taking a number, matching it to a name, and writing that number in there. 00:32:59.780 |
You don't have to load up complicated cognitive context. 00:33:03.300 |
You don't have to make decisions or pull from complicated memories. 00:33:08.380 |
So I'm not too worried about the context shift price when you go to just entering grades. 00:33:16.000 |
Because again, it's a mechanical thing. It's almost like when you're working hard on something 00:33:21.060 |
demanding: if you get up and go make a cup of tea and then come back, that's not going 00:33:27.640 |
to be a big hit on your primary task, because the tea-making is mechanical and undemanding. 00:33:33.700 |
So this is all to say, it doesn't really matter where you put the grade recording. 00:33:40.820 |
So you could grade one thing, then grade the other thing, and then have a long block of entering all the grades. 00:33:45.460 |
Or you could grade one thing, enter the grades, grade another thing, enter the grades. 00:33:49.520 |
It's not going to make a difference cognitively. 00:33:52.260 |
It's just going to be a matter of what's going to feel better for you. 00:33:55.820 |
I would suspect the difference would come down to how demanding the grading is. 00:34:00.220 |
So if the grading is really hard, and the subjects of the two quizzes you're grading 00:34:05.580 |
are separate, like it's not the same cognitive context, I would enter 00:34:11.180 |
the grades right after grading to give your mind a breather. 00:34:18.700 |
I would also consider, let's get advanced here, cognitively advanced. 00:34:25.220 |
After grading the first thing, before you enter those grades, go look at the second 00:34:31.300 |
thing, and maybe read one of the quizzes to try to start loading that cognitive context. 00:34:46.420 |
Because you now have colliding cognitive context as you look at the second quiz for the first 00:34:51.380 |
time, that's a separate context from the quiz you just graded, and so there's going to be 00:34:57.380 |
a collision as your brain is trying to shut down the context of the first grading block and load up the new one. 00:35:03.260 |
So grading that first quiz of the second type will be hard. To clarify: there are two 00:35:09.540 |
different quiz types here, each with multiple student quizzes to grade. 00:35:16.300 |
You're on to the second type of quiz, and you grade the first student quiz from that second type, 00:35:18.340 |
and that's very hard because it's a new context. 00:35:27.760 |
Go back and enter the grades from your first quiz, because here's what's happening. 00:35:33.500 |
While you are entering the grades from the first type of quiz, your brain is continuing 00:35:37.820 |
in the background the process of switching its context over to the second type of quiz. 00:35:41.540 |
You initiated that by looking at a single quiz of that second type, and now, as you go back 00:35:46.780 |
and just mechanically enter grades, your brain is going to continue making this switch. 00:35:50.520 |
So now when you're done entering those grades and you return to the second type of quiz, 00:35:54.540 |
your context has more thoroughly shifted, and your grading is going to get up to speed faster. 00:36:00.540 |
This is kind of an advanced way of thinking about it, but this is probably what I would try. 00:36:06.140 |
If instead you grade the first type of quiz, you enter the grades, then you turn to the 00:36:10.420 |
second type of quiz, it might take a while: you might have to grade five or six students' quizzes 00:36:16.040 |
before you get that momentum going of your brain completely shifting. 00:36:20.540 |
Whereas with my tactic, you grade the first type of quiz, grade one of the second type, 00:36:25.420 |
enter the grades, then return to the rest of the second type, and you'll probably get up to speed right away. 00:36:30.340 |
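One way to see why this ordering helps is with a toy model of context warm-up. All the numbers below are made up purely for illustration: grading in a context that isn't fully loaded is slow, entering grades is cheap and mechanical, and any time spent while a context is active, including mechanical time, counts toward warming it up.

```python
# Toy model of cognitive context warm-up. All numbers are invented
# for illustration only.

WARMUP_NEEDED = 6   # minutes of exposure before a context is fully loaded
SLOW, FAST = 5, 3   # minutes to grade one quiz cold vs. warmed up
ENTER = 2           # minutes to mechanically enter one class's grades

def total_minutes(steps):
    """steps: ("grade", ctx) grades one quiz in context ctx;
    ("enter",) is a mechanical grade-entry block."""
    minutes = 0
    exposure = {}      # accumulated warm-up time per context
    active = None      # context currently loading in the background
    for step in steps:
        if step[0] == "grade":
            ctx = step[1]
            active = ctx
            cost = FAST if exposure.get(ctx, 0) >= WARMUP_NEEDED else SLOW
            minutes += cost
            exposure[ctx] = exposure.get(ctx, 0) + cost
        else:
            # Entering grades is mechanical, so the active context
            # keeps loading in the background while you do it.
            minutes += ENTER
            if active is not None:
                exposure[active] = exposure.get(active, 0) + ENTER
    return minutes

# Naive order: grade all of quiz A, enter grades, grade all of quiz B, enter.
naive = total_minutes([("grade", "A")] * 3 + [("enter",)]
                      + [("grade", "B")] * 3 + [("enter",)])

# Suggested order: grade all of A, peek at one B quiz, enter A's
# grades (B keeps warming up), then finish B and enter its grades.
tactic = total_minutes([("grade", "A")] * 3 + [("grade", "B")]
                       + [("enter",)] + [("grade", "B")] * 2 + [("enter",)])
```

In this toy model the interleaved order comes out a few minutes ahead, because the mechanical grade entry overlaps with the switch into the second quiz's context instead of delaying it.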
This is just, at this point, like attention hacking. 00:36:34.740 |
But I do like the type of thinking this induces, which is to think about cognitive context. 00:36:38.700 |
It's like one of the most important properties of modern work, and it's the property that 00:36:44.940 |
we think almost nothing about in modern office productivity. 00:36:56.320 |
We put access to tools and data as a priority, and we completely disregard the cost of context switching. 00:37:01.980 |
So I love any discussion like this that gets us in the weeds on it. 00:37:05.100 |
This sort of psychologically aware productivity is really where we should be. 00:37:10.240 |
So I appreciate the chance to sort of nerd out on that. 00:37:12.760 |
We'll have to have Joseph respond and see how it goes. 00:37:17.380 |
So Joseph, if you hear this, let us know if that technique works. 00:37:25.020 |
I'm a professor and also have a French podcast. 00:37:27.860 |
My university has agreed to include it in my official duties. 00:37:32.820 |
However, if I do accept it, how should I think about organizing my podcast within the traditional 00:37:38.140 |
academic framework of research, teaching, and service? 00:37:41.440 |
Should this be included in my academic tasks? 00:37:52.420 |
I had a pipe and a hat, and I just did my French accent. 00:37:57.380 |
Long story short, I am no longer welcome in the Republic of France. 00:38:02.660 |
I have been banned from setting foot in France after they heard my awesome accent. 00:38:10.660 |
I don't know the French system super well, so I'm going to answer this from the perspective 00:38:15.480 |
of the American academic system, which I think is roughly congruent. 00:38:21.620 |
All right, so in the American academic system, by far the most important thing for promotion 00:38:36.340 |
is your research record. Teaching and service matter, but those alone can't get you promoted or recognized. 00:38:41.660 |
So if your university is going to allow you to count your podcast as an official academic 00:38:46.220 |
task, I would recommend that you are very clear about which of the three major tasks, 00:38:53.500 |
research, service, and teaching, it counts as, and I would try to make it count as service. 00:39:00.340 |
When you count it as service, what this means is you can reduce the amount of other service 00:39:05.260 |
you do, let the podcast take the place of other service obligations so you're not increasing 00:39:10.740 |
your time obligations, and critically, you're not reducing the time you spend on research. 00:39:16.500 |
Because when it comes to service and promotion, it's a little bit more binary. 00:39:21.780 |
Was this person a good citizen of the institution and his community? 00:39:25.900 |
Not how good of a service person were they, right? 00:39:28.500 |
So if you can use your podcast as a way to reduce other types of service so your overall service load stays the same, great. 00:39:34.580 |
Do not let it, however, impinge on the time you spend doing research. 00:39:44.540 |
I'm done with promotions now, but I went through both my promotions from assistant to associate 00:39:47.900 |
with tenure and from associate with tenure to full. 00:39:51.700 |
Both of those promotions, I had large portfolios of more public-facing work, and I had to deal with how to count that work. 00:40:00.100 |
When I was promoted to associate, I didn't mention my books. 00:40:07.100 |
When I went to full, I did mention them, because as we just saw in the deep dive, some of the 00:40:13.500 |
work, public-facing work I did also has a very big academic footprint, right? 00:40:18.300 |
We just did a whole paper from the Journal of Applied Philosophy in the deep dive that 00:40:21.660 |
was responding to my digital minimalism book. 00:40:24.540 |
My book, Deep Work, has been cited in academic articles close to 800 times now. 00:40:30.900 |
So I did sort of count that more, but I didn't lean into my podcast, even though I was past 00:40:37.020 |
a 10 million download point at that point, because it didn't quite fit clearly into it. 00:40:43.180 |
I didn't have an agreement like yours that this counted as service. 00:40:51.460 |
Are you doing work that's influencing the academic culture? 00:40:56.260 |
So that's what you've got to be careful about. 00:40:57.260 |
So yeah, if you can use your podcast to reduce other service loads, then I think that's great 00:41:04.060 |
because your podcast will probably have a higher impact than the other service you're 00:41:08.820 |
Just don't let it get in the way of research. 00:41:11.220 |
So now that you're a full professor, do you still have to write as many papers? 00:41:21.260 |
Right now I'm focusing more on technology and digital ethics than on computer science. 00:41:25.420 |
And that's the type of thing you can explore. 00:41:27.260 |
It's sort of the advantage of full professordom, ten years in. 00:41:31.500 |
And if you don't like the way it's going, you can switch back to something else. 00:41:35.000 |
One of the things that made a big difference for me is that-- so Google Scholar is a quick 00:41:40.880 |
way you can keep up on people's publications. 00:41:45.900 |
What are their statistics, like their h-index, their i10-index? 00:41:50.220 |
What are their total citation counts by years? 00:41:53.500 |
Once Google Scholar figured out-- because I write under two different names. 00:41:58.240 |
My academic computer science papers are typically written under Calvin Newport. 00:42:02.620 |
And of course, my public-facing writing is under Cal Newport. 00:42:05.400 |
When it figured out, oh, Calvin Newport and Cal Newport are the same person, it really changed my citation profile. 00:42:14.260 |
So where you saw a bit of a fall off in citations in recent years now shows a steady high level 00:42:21.900 |
of citations because the public-facing work on technology I was doing as Cal Newport gets 00:42:29.100 |
So it shows a sort of smooth transition from less computer science, more digital ethics. 00:42:34.220 |
And the impact, as measured by citations, has sort of stayed steady. 00:42:38.060 |
I thought you were saying the other name was going to be your French name. 00:42:48.900 |
Actually, I do have a French-- I do have French heritage. 00:42:54.060 |
My paternal grandmother was a Levelle, the Levelle family. 00:43:04.060 |
And the Levelle family goes all the way back to the French Huguenots that came over here 00:43:11.380 |
I have French Huguenot blood back in there. 00:43:27.500 |
So for those who are new, Slow Productivity Corner is the segment where 00:43:31.180 |
we do one question each week related to my most recent book, Slow Productivity: The Lost Art of Accomplishment Without Burnout. 00:43:40.260 |
It made Amazon's Best Business Books of 2024 and won Best Business Book of the 00:43:44.060 |
Year from the S-- something, something, something-- S-A-B-E-W. 00:43:54.020 |
This is an important award, somewhere between $60,000 and $600,000 award. 00:43:58.720 |
For my award-winning book, Slow Productivity, we try to do a question each week that comes from its ideas. 00:44:08.660 |
What's our Slow Productivity Corner question of the week? 00:44:13.780 |
What do you think about the slow living craze on the internet? 00:44:16.780 |
And do you think it's just a fad or will it be permanent? 00:44:32.940 |
We're going to be learning this together, what slow living is, and then I can answer the question. 00:44:41.980 |
One of the things I am most scared of in my life is looking back with regret. 00:44:48.280 |
Looking back at all the little moments that I missed in pursuit of more. 00:44:52.400 |
This year, I've been really working on slowing down and trying to be more present for the 00:44:57.140 |
little moments that make up most of our lives and making sure that I'm actually building 00:45:04.340 |
So these are some simple, tiny habits that I have implemented that have really helped 00:45:10.560 |
This is literally a block that I have on my calendar every day just to have five minutes 00:45:17.420 |
Doesn't sound like a lot, but when you don't have any music, no podcast, no work that you're 00:45:21.660 |
thinking of, no work that you're doing, no book that you're reading for just five minutes 00:45:25.180 |
and you sit there and you stare at a wall, not trying to meditate, but you just let your mind wander. 00:45:31.100 |
It's a beautiful time for me to just reset and make sure that there's actually breaks 00:45:35.640 |
in my life to remember that my life is not online. 00:45:38.380 |
It's not on a computer screen and sometimes I need physical breaks to make that happen. 00:45:48.540 |
Well, first of all, for those who are just listening instead of watching, the video had 00:45:54.340 |
like a faux graininess while they played really relaxing music and he washed eggs from his chickens. 00:46:07.760 |
Put to that treatment, I think almost anything seems profound. 00:46:13.000 |
I mean, almost anything will sound sort of important and meaningful and somber. 00:46:19.660 |
You could have a video of someone earnestly trying, and failing in 00:46:27.640 |
a life-threatening way, to get a bear into a car to drive it. 00:46:34.720 |
The bear's mauling him as he's trying to get him into a Chevy Impala. 00:46:39.160 |
Play it to that music and cut to some scenes of someone washing eggs. 00:46:42.000 |
You'd be like, "Yeah, life is like a bear trying to get into a car." 00:46:44.700 |
Drive to the insurance agency and get the insurance card. 00:46:46.760 |
Just him at the insurance agency, his face ripped up in bandages, trying to get the bear insured. 00:47:00.120 |
Slow living and slow productivity are different. 00:47:01.920 |
So let me tell you how, and then I'm going to tell you what slow living seems to be. 00:47:13.760 |
Slow productivity, my book, is about how we define what productivity means in knowledge work. 00:47:18.960 |
The core argument of that book is that we have a bad implicit definition that we tend 00:47:22.520 |
to fall back on, which is pseudo-productivity, which is to use visible effort as a proxy for useful effort. 00:47:31.440 |
Slow productivity says our goal in knowledge work should not be to be as busy as possible. 00:47:35.320 |
We should instead focus on not doing too many things at the same time, keeping our pace 00:47:39.760 |
of work varied and natural, but then really obsessing over quality. 00:47:43.440 |
This is a better, more sustainable definition of productivity. 00:47:47.280 |
Slow living seems to be about life outside of work, and a lot of it has to do with distraction and presence. 00:47:54.320 |
So it's probably closer to digital minimalism when it comes to the things I talk and write about. 00:48:01.280 |
I think if you're a digital minimalist, your life will seem slower in the way that's being 00:48:06.000 |
talked about in this video, because a digital minimalist works backwards from their values 00:48:10.500 |
to dictate their technology use, and so, you know, if you value your chickens and washing 00:48:17.240 |
your eggs or whatever, you are going to be careful about crafting your technological 00:48:20.920 |
use so that you're not always looking at your phone, unable to enjoy those things. 00:48:24.120 |
In general, digital minimalists do feel like their lives are slower and richer. 00:48:28.500 |
There is a neurological reason for this, right? 00:48:35.200 |
So if you're constantly paying attention to your phone, you perceive your life as very 00:48:38.200 |
sort of like fast-paced, emotionally activated, sort of this like really sort of shaky, jittery 00:48:44.400 |
world that's always rolling past, because when you're looking at your phone, everything's moving fast. 00:48:53.640 |
Time moves fast because you're moving fast on your phone. 00:48:56.320 |
Also time moves fast because you're doing this sort of homogenous behavior. 00:49:00.900 |
So when you're doing sort of the same thing, you don't have a really good sense of how much time is passing. 00:49:08.560 |
When you're not on your phone and engaging in specific behaviors, it is just by definition 00:49:13.160 |
slower because everything is slower than using your phone. 00:49:16.500 |
And because those behaviors are novel, they're different specific things in novel specific 00:49:20.820 |
locations, your perception of time is of it being much slower. 00:49:30.140 |
So I think digital minimalism will probably lead you to something like slower living. 00:49:36.220 |
Start with the digital minimalism and end up at the slower living. 00:49:38.980 |
It's sort of a consequence of getting intentional about your life and technology. 00:49:44.180 |
But it is quite separate from slow productivity. 00:49:45.960 |
They share the same word slow, but they're only connected by this idea of sort of intentionality. 00:49:51.200 |
Slow productivity is about your work at your desk. 00:49:53.680 |
Slow living is about your life outside of work. 00:50:03.760 |
We have kind of music, cooler music like that for the in-depth episodes. 00:50:09.600 |
That's a little more like a meditative music. 00:50:18.440 |
I've noticed I struggle with tiredness and a lack of focus in the afternoons, especially 00:50:25.600 |
After lunch, my mind doesn't feel as sharp and I often find myself drifting off and daydreaming. 00:50:30.040 |
Do you have any strategies to help maintain focus and mental clarity during these times? 00:50:36.280 |
Well, first of all, we have to keep in mind that there's a limited capacity to do deep work each day. 00:50:44.000 |
So if you're talking about like highly demanding focused activities, things that require you 00:50:48.800 |
to use your full cognitive capacity, probably do those in the morning. 00:50:53.200 |
Do those first thing before you've had a lot of context shifts so your mind is still clear. 00:51:02.040 |
Then be okay with not having to return to these cognitively demanding activities in the afternoon, because 00:51:05.960 |
you're just not going to have enough cognitive gas. 00:51:09.080 |
If it's more just, "No, I have administrative stuff to do. 00:51:15.720 |
It's not cognitively demanding, but I just sort of lose focus and drift and lose energy in the afternoon," 00:51:22.960 |
then a couple of things can help. One is time blocking. 00:51:26.020 |
So instead of having to constantly have an argument with yourself of, like, "What should I do next?" 00:51:33.280 |
Time block those afternoons and just make the single commitment to stick to your time 00:51:38.640 |
So you get rid of a lot of that decisional friction that comes from being more freeform with your time. 00:51:49.160 |
The other thing is batching: do a lot of similar tasks together, because even if the tasks are minor, sticking within the same cognitive context saves switching costs. 00:51:57.380 |
This can apply even to cleaning out your email inbox. 00:52:01.360 |
I recommend if you have like a super stuffed inbox and it's like three o'clock and you're 00:52:04.920 |
exhausted but you kind of have to get through it, create a folder or label for the current batch you're processing. 00:52:12.000 |
And then go through and grab a bunch of messages of the same type. 00:52:16.240 |
So they're all relevant to the same cognitive context, they're all scheduling messages. 00:52:19.940 |
They're all messages related to like an upcoming event. 00:52:22.960 |
Move those all to that label or folder and then tackle those just by themselves. 00:52:27.040 |
So now you're doing messages without having to change your context. 00:52:29.480 |
Then go and grab another type of messages and do the same. 00:52:33.400 |
What happens is if you follow the alternative of just sort of doing your emails in the order 00:52:37.500 |
they exist in your inbox, you're potentially switching your cognitive context from message to message. 00:52:48.880 |
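As a sketch, this batching idea looks something like the following in Python. The message list and category labels are invented for illustration; in practice the grouping is whatever folder or label your email client supports.

```python
# Sketch of batching an inbox by cognitive context instead of
# processing it in arrival order. Messages and types are invented.
from collections import defaultdict

inbox = [
    {"subject": "Re: Thursday meeting time",  "type": "scheduling"},
    {"subject": "Offsite agenda question",    "type": "offsite event"},
    {"subject": "Can we move our 1:1?",       "type": "scheduling"},
    {"subject": "Offsite catering headcount", "type": "offsite event"},
]

def batch_by_context(messages):
    """Group messages so each cognitive context is handled in one pass."""
    batches = defaultdict(list)
    for msg in messages:
        batches[msg["type"]].append(msg)
    return batches

batches = batch_by_context(inbox)

# Process one batch at a time: load the context once, clear the
# whole batch, then move on to the next context.
for ctx, msgs in batches.items():
    for msg in msgs:
        pass  # handle msg within the already-loaded context
```

Processing this inbox in arrival order would load a context four separate times; batching loads each context once.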
Finally, if you're exhausted by three or four, then maybe just stop your day between three and four. 00:52:54.520 |
If you're doing multi-scale planning and you have a good weekly plan, your weekly plan can account for those shorter days. 00:53:01.000 |
This is a key idea from my book Slow Productivity. 00:53:03.360 |
The second principle says work at a natural pace: this idea that the perfect 00:53:07.880 |
calibration for humans who do cognitive work is nine to five, all out, every day, is preposterous. 00:53:12.740 |
Why would that just happen to be optimal for everyone? 00:53:16.340 |
You might find that working till three or 3:30 is really what's optimal for you. 00:53:20.140 |
You're time blocked, you're on it, and then when you're done, be done. 00:53:23.980 |
Or maybe four, maybe two, it could be different for different people. 00:53:26.620 |
But don't feel stuck with the idea that it has to be this exact eight-hour day. 00:53:32.160 |
Some people just run out of gas earlier than others. 00:53:34.080 |
Your work might be harder than others, so you need to end earlier. 00:53:36.740 |
I talk about in, I don't know if this is in Slow Productivity, I think this is in my book 00:53:43.380 |
I talk about this type of programming called extreme programming. It's pair-based and 00:53:49.160 |
super intense, and it produces fantastic code. 00:53:54.500 |
I report that companies that do this type of coding, they produce really cool stuff, 00:53:59.380 |
but they have to let people go home by, like, 2:30 or 3:00. 00:54:05.900 |
People at first have to go home and take naps. 00:54:08.340 |
So don't assume that everyone is perfectly calibrated to work all out till five. 00:54:14.620 |
If you're organized and on the ball, you'll produce good work. 00:54:25.540 |
This is where we have people write in to talk about their personal experience 00:54:29.460 |
putting the types of things we talk about on the show into practice in their own lives. 00:54:32.540 |
All right, so today's case study comes from Zach. 00:54:36.980 |
Now here's what Zach says, "Recently, I've made a monumental life change for the better 00:54:44.180 |
in no small part due to Cal's books, podcast, and newsletter. 00:54:51.020 |
When my classes went online, I decided to get my real estate license and pursue my interest 00:54:54.740 |
in real estate investments because of the high autonomy and market activity due to the pandemic. 00:55:02.660 |
I specialized in commercial investment sales and became proficient in my field because 00:55:06.900 |
of my implementation of deep work principles. 00:55:09.800 |
The only problem was that I was miserable at work. 00:55:13.380 |
My days mostly consisted of cold calling and driving all over the state for client meetings. 00:55:17.340 |
So even though I was making decent money and had full autonomy, my lifestyle wasn't great 00:55:25.560 |
On top of that, I was working mostly solo while I'm a very team oriented person. 00:55:32.700 |
After listening to your podcast religiously on my long drives, my mindset began to shift. 00:55:37.420 |
I realized I was optimizing for autonomy and money without much thought to lifestyle and community. 00:55:48.660 |
Leveraging CALS principles got me far relatively quickly. 00:55:53.100 |
But when I looked at guys way further down the road, they had a lot of material success, but not a lifestyle I wanted. 00:55:58.580 |
After a few months of hunting and interviewing for jobs that align with my long-term lifestyle vision, 00:56:01.940 |
I successfully landed a job at a tech startup that provides me a much better lifestyle fit. 00:56:08.740 |
It's a short, beautiful commute to an office in my favorite part of town. 00:56:11.700 |
My work is varied, challenging and interesting. 00:56:15.040 |
And most importantly, I'm working with a like-minded team who are all just as obsessed as I am. 00:56:21.740 |
I just finished my first week and I'm blown away at what a difference this intentional change has made. 00:56:28.260 |
For the first time in years, I'm bursting with excitement to go to work. 00:56:31.900 |
Should I have applied for jobs while still working? Probably. 00:56:34.540 |
But I was so burnt out that I had a burn-the-ships mindset. 00:56:36.900 |
I'm eagerly awaiting your next book nearly as much as I am awaiting Brandon Sanderson's 00:56:51.740 |
I told you I have an invitation to go see his lair. 00:57:04.300 |
She's like, I'm going on a trip, I'm going to be spending a week in Brandon Sanderson's 00:57:14.020 |
And all I could think is like, if that was me going on that trip, I would be able to 00:57:21.420 |
It'd be interesting if I get there and it's just half of it's just a sex dungeon. 00:57:27.740 |
I think he's a pretty straight laced Mormon, but you never know. 00:57:35.340 |
The least popular pornographic video of all time is titled "Brandon Sanderson's Sex Dungeon." 00:57:49.500 |
Two things I want to point out about that case study. 00:57:56.900 |
First, the day-to-day texture of your life: it's one of the most important dials you have to turn in trying to construct your lifestyle. 00:58:03.340 |
What do you want the day-to-day of your life to be like? 00:58:07.140 |
That's how you help figure out what work to do or not do. 00:58:09.700 |
This is much more effective than either following your passion or just blindly following a clear 00:58:16.260 |
metric like money and just hoping by happenstance that will lead you being happy. 00:58:21.820 |
The other thing I want to point out about this example, though, is that, okay, Zach started with a hypothesis. 00:58:34.100 |
Figuring out the components of your ideal lifestyle is difficult and it evolves with 00:58:42.700 |
He had a hypothesis, I think, built on autonomy and financial security. 00:58:48.460 |
He had a hypothesis of a lifestyle vision that he thought would be ideal for him. 00:58:54.780 |
Zach pursued a job that matched that hypothesis and then learned through real-life experience, 00:59:00.220 |
"Oh, there's these other things I care about. 00:59:02.620 |
I didn't realize them until I had them not be present in my life. 00:59:07.100 |
I didn't realize autonomy without X, Y, and Z wasn't so good, and the money thing I don't care about as much as I thought." 00:59:13.260 |
Through life experience, he updated his priors. 00:59:16.420 |
His vision of the ideal lifestyle evolved and he said, "Great. 00:59:20.060 |
Let me now leverage my career capital and make a shift that's going to get me closer to this updated vision." 00:59:24.140 |
Now, in this case, the career capital he leveraged was literal capital. 00:59:28.380 |
He was making good money, so he saved up enough to buy himself time to make a switch. 00:59:34.340 |
He was early enough in his career that sort of skills-based career capital was less useful 00:59:38.420 |
or less important because he was still a pretty early-stage career. 00:59:42.100 |
Then he used that money to buy himself some time to find a job that focused on other things 00:59:46.460 |
he had discovered were important and now he's much happier. 00:59:48.900 |
That's lifestyle-centric career planning in action. 00:59:58.580 |
It's not Brandon-Sanderson's-sex-dungeon sexy, but it's what, over time, is going to get you where you want to be. 01:00:09.220 |
I've been wearing my Deep Life hat regularly. 01:00:12.860 |
No one has asked me yet or noticed what VBLCCP means. 01:00:16.900 |
I haven't got a reaction to it yet, but I'm still thinking we'll find our first. 01:00:20.740 |
You get any questions about Deep Life or people just assume it's a brand? 01:00:28.780 |
I'm going to keep wearing mine until I find a true believer, but I haven't found them 01:00:33.420 |
All right, we've got a cool final segment coming up, a Tech Corner segment, but first let me briefly talk about a sponsor. 01:00:43.580 |
The fact that Netflix hides thousands of shows and movies from you based on your location 01:00:48.580 |
and then has the nerve to just keep increasing their prices. 01:00:52.940 |
Now you could just cancel your subscription in protest, or you could be smart about it 01:00:56.980 |
and make sure you get your full money's worth, like I do, by using ExpressVPN. 01:01:10.340 |
The way it works very briefly is that instead of just directly accessing a website or a 01:01:14.660 |
service with a VPN, you instead connect to a VPN server. 01:01:19.220 |
You tell that server with an encrypted message the site and service you actually want to 01:01:24.420 |
That server talks to it on your behalf, encrypts the response, and sends it back. 01:01:28.420 |
So that means anyone monitoring your internet usage only learns that you're talking to a VPN server. 01:01:34.720 |
They don't learn what site you're talking to. 01:01:36.140 |
They don't learn what service you're talking to. 01:01:39.020 |
One of the advantages of doing this, beyond just the obvious privacy advantages, the hacking 01:01:43.340 |
advantages, the security advantages, is if you connect to a VPN server in a different 01:01:48.160 |
location and that server talks to Netflix on your behalf, Netflix thinks you're in that 01:01:54.140 |
So ExpressVPN has servers all around the world. 01:01:57.940 |
So you can select a server in like whatever geographic zone you care about, and then you'll 01:02:02.060 |
get that zone's Netflix content or whatever streaming service you're using when you use 01:02:08.900 |
That's an extra bonus thing you can get, a benefit of using a VPN on top of all the other 01:02:15.540 |
The reason why I like ExpressVPN is that it's easy. 01:02:20.140 |
You can change your location of the server with one click. 01:02:23.100 |
When it's on, which is easy to do, you just click to turn it on, and you just use all your apps and sites like normal. 01:02:29.180 |
And all this happens transparently in the background. 01:02:32.300 |
It works on phones, laptops, tablets, even smart TVs and more. 01:02:39.860 |
So there's probably a server nearby to give you the fastest speed. 01:02:42.740 |
You can stream in HD with zero buffering through it. 01:02:45.020 |
So it's got great sort of best in class speed. 01:02:48.460 |
It's rated number one by top tech reviewers like CNET and The Verge. 01:02:53.980 |
That's why of the VPNs that are out there, I recommend ExpressVPN. 01:02:58.860 |
Right now you can take advantage of ExpressVPN's Black Friday Cyber Monday offer to get the 01:03:07.460 |
Use my special link expressvpn.com/deep and you'll get four extra months with the 12-month 01:03:15.540 |
plan or six extra months with the 24-month plan totally free. 01:03:21.380 |
That's expressvpn.com/deep to get an extra four or even six months of ExpressVPN for 01:03:29.660 |
I also want to talk about our friends at Shopify. 01:03:35.780 |
When you think about businesses whose sales are rocketing like Feastables by Mr. Beast 01:03:42.260 |
or Thrive Cosmetics or Silicon Valley's Weekend Uniform supplier Cotopaxi, you think about 01:03:48.140 |
an innovative product or a progressive brand or buttoned down marketing. 01:03:54.860 |
But an often overlooked secret is how these brands actually do their selling. 01:04:01.700 |
The experience of buying from these brands online. 01:04:06.480 |
These brands, along with millions of others, use Shopify. 01:04:10.700 |
All right, nobody does selling better than Shopify. 01:04:16.100 |
It's home of the number one checkout on the planet, and the not-so-secret secret is ShopPay, which boosts conversions. 01:04:24.940 |
This means that way less carts go abandoned and way more sales get done. 01:04:31.460 |
So if you're growing your business, your commerce platform better be ready to sell wherever your customers are. 01:04:37.820 |
On the web, in your store, in their feed, and everywhere in between. 01:04:46.820 |
Upgrade your business and get the same checkout that Feastables or Thrive or Cotopaxi use 01:04:55.940 |
Sign up for your $1 per month trial period at shopify.com/deep. 01:05:03.340 |
Go to shopify.com/deep all lowercase to upgrade your selling today, shopify.com/deep. 01:05:08.980 |
All right, Jesse, let's go to our final segment. 01:05:13.220 |
All right, for our final segment today, we want to do a triumphant return to my tech 01:05:20.680 |
corner segment where we get into a technical topic that is relevant to the type of things we talk about on this show. 01:05:28.680 |
So I put on my computer science hat a little bit to help give us some more insight on topics 01:05:32.480 |
relevant to living a deep life in a distracted world. 01:05:35.200 |
All right, in today's tech corner, I want to talk about how do recommendation algorithms 01:05:41.360 |
Because, in part, this is really relevant to the ongoing discussion about social media 01:05:49.960 |
So if we look at some of the new child safety legislation like KOSA or COPPA 2.0 or California's 01:05:57.440 |
big law, we see that one of the things that they are pushing for is that when kids are 01:06:03.380 |
using social media, we have to be careful about what it does and does not recommend. 01:06:10.720 |
So you sort of see these arguments, okay, we're not talking about censoring information 01:06:16.440 |
that can exist on social media, but we want to be careful about what we recommend or don't recommend. 01:06:21.120 |
We also see this in discussions about things like Twitter or Twitter alternatives like 01:06:25.320 |
Threads or Bluesky, where there's often this notion that the recommendation algorithm can be changed or removed. 01:06:32.640 |
We saw this a lot with the discussions around Threads when it was released, that they were 01:06:35.920 |
tuning down, they claim, the political nature of content and tuning up recommendations for 01:06:41.800 |
There's this idea that there is an "algorithm" that is in charge of showing us stuff, and 01:06:48.080 |
this algorithm is really important, and we can change this algorithm to change the experience 01:06:53.640 |
or maybe strip it out altogether and have an experience without it. 01:06:56.480 |
It is at the core of many discussions around social media and its harm, so I thought we 01:06:59.920 |
would talk about, well, how do these algorithms actually work? 01:07:05.560 |
So what I'm going to do here is greatly simplify the idea of how a sort of machine learning-based 01:07:11.960 |
optimization recommendation algorithm actually works. 01:07:14.920 |
I want to start by saying there is a spectrum on which these algorithms exist. 01:07:21.040 |
So if we look in the social media ecosystem, on one end of the spectrum will be something 01:07:24.360 |
like Twitter, which actually is relatively non-algorithmic. 01:07:28.960 |
The way curation decisions are made on Twitter, a lot of it is actually cybernetic, which 01:07:34.440 |
means it's based on individual humans' decisions to retweet or not, and when those are combined 01:07:39.440 |
with the network structure of Twitter, which has power law dynamics, it's really good at 01:07:43.880 |
sort of selecting for certain content to have explosive growth and start trending. 01:07:50.680 |
It's actually just the aggregate of a lot of human decisions. 01:07:53.120 |
On the other end of the spectrum is TikTok, which is essentially entirely algorithmic. 01:08:00.400 |
It doesn't care who you follow or don't follow or what other people like. 01:08:03.920 |
It just uses an algorithm to select what to show you next, then what to show you next, 01:08:10.480 |
So we're going to be leaning more towards that TikTok side, where really it's like a 01:08:13.800 |
computer is deciding, not other humans, what it is you should see. 01:08:17.520 |
I'm going to give you a highly simplified way of thinking about this, then we can draw 01:08:24.720 |
So let's pretend, for the sake of this example, that we're building an algorithm to recommend 01:08:31.760 |
And I am going to do a lot of drawing here, God help us. 01:08:35.120 |
So if you're listening instead of watching, you might want to actually load up the YouTube version. 01:08:46.320 |
So you just go to, what, the deeplife.com/listen, look for episode 327. 01:08:51.920 |
It usually comes up within the same day or the day after the episode lands. 01:08:57.680 |
So we're TikTok, and we want to recommend videos to a user. 01:09:03.800 |
So we need ways, first of all, of describing the videos we have in our collection, and 01:09:10.920 |
we want to describe them in a way computers can understand. 01:09:17.880 |
Let's say we're going to assign a single number to every video that we're going to use to describe it. 01:09:37.880 |
So we're going to have videos, and we're going to describe each video with a single number. 01:09:43.800 |
Let's say, you know, this number, for example, is going to describe, for each video, how many cats are in it. 01:09:52.800 |
And so we have a single number on which we can categorize videos. 01:09:57.160 |
I drew a line here, and we can imagine this as this is our space in which videos can fall. 01:10:07.280 |
And in this very simple example, kind of numbered from, you know, zero up to eight. 01:10:13.920 |
We could use yellow dots in this example to be different videos, and we can just sort 01:10:16.800 |
of place them on this line, depending on how many cats they have in them. 01:10:21.520 |
So there's a couple videos with a lot of cats, some with five, a couple over here, and I don't 01:10:28.560 |
know, we have fractional numbers of cats, whatever. 01:10:31.520 |
So we're describing all these videos by a single number. 01:10:36.940 |
In this simple example, now let's let a user come along. 01:10:41.260 |
And what we do is we want to look at the videos that you are looking at. 01:10:48.840 |
And let's say we want to categorize them simply as a video you like or don't like. 01:10:53.700 |
So in like the Facebook days, there'd be an actual like button. 01:10:56.960 |
The way we think TikTok works is that it actually looks at how long you watch a video. 01:11:01.000 |
So if you quickly swipe to the next video, you don't like it. 01:11:03.820 |
If you watch it long enough, then we can consider that you do like it. 01:11:07.680 |
So we're gonna start showing you videos, and we are going to start, let's say, let's just 01:11:13.140 |
keep track of the videos you like, so the videos that you actually watch for a little 01:11:18.780 |
And I'm gonna plot those on this same one-dimensional line here with a purple dot, and so maybe 01:11:27.260 |
you like a couple videos with five cats, you like one with zero cats, six cats, there's 01:11:33.420 |
a three-cat one, another six-cat, maybe one eight, maybe another couple more zero ones. 01:11:38.540 |
So I'm just keeping track of, okay, these videos you liked, where did they fall in this 01:11:47.660 |
Now after we've done this for a while, what I can do, and this is how these sort of basic 01:11:51.300 |
algorithms work in a very simplistic way, I can say, okay, where on average, where on 01:11:58.820 |
average are these videos you like falling on this single value I care about? 01:12:07.180 |
You can think about what we're trying to do here is basically find the average point, 01:12:11.880 |
think about this as like we're trying to find a point that has like the best overall distance 01:12:16.460 |
to all of your points, it's interesting, my controller is weird. 01:12:24.180 |
In reality, the way this is typically done is actually trying to minimize the average 01:12:29.200 |
square of the distances, don't worry about that here. 01:12:31.660 |
What I'm trying to do here is sort of find a point on here that's sort of in the middle, 01:12:35.980 |
it's the average, it's minimizing distance to all of your likes. 01:12:39.500 |
So you have a bunch of zeros you like here, but you have like a bunch of fives, sixes 01:12:43.780 |
So, you know, maybe your average is like right where that X is, that's kind of the center 01:12:53.560 |
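To make that concrete, here is a minimal Python sketch of the "center of your preferences" idea. The like data is hypothetical; the key fact is that the point minimizing the average squared distance to a set of numbers is simply their mean.

```python
# Sketch: the preference "center" on one dimension (cat-count).
# Minimizing average squared distance to a set of numbers gives
# their mean. These like values are hypothetical.
likes = [5, 5, 0, 6, 3, 6, 8]  # cat-counts of videos the user liked

center = sum(likes) / len(likes)  # the preference center
```

With this made-up data the center lands between four and five cats, matching the intuition in the example.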
So now when it comes time for me as TikTok to show you another video, what I can do to 01:12:59.300 |
be smarter is say, great, I'm going to randomly select you a video from all the videos that 01:13:04.780 |
exist, but I'm going to weight the probability that I select a given video depending on how 01:13:10.460 |
close it is to this point that we said was kind of at the center of your preferences. 01:13:16.940 |
So in here, right, this point is like somewhere between four and five cats is kind of like 01:13:21.300 |
the center of your preferences when we measure videos by cats. 01:13:24.780 |
So you know, it's possible that I could select you a video out here, but I'm much more likely 01:13:31.220 |
So you're gonna get a lot of videos with like four or five cats and sometimes some videos 01:13:35.020 |
with less cats and sometimes some videos with more. 01:13:37.660 |
But pretty soon you're gonna be like, wow, this is eerie, TikTok has really figured out 01:13:40.820 |
that there's kind of, I like videos that have, you know, like a couple armfuls of cats in them. 01:13:45.700 |
Again, this is simplified, but it roughly gets to how these type of things work. 01:13:55.740 |
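That weighted selection step can be sketched like this. This is not TikTok's actual code; it just shows one simple way to make videos near the preference center more likely to be picked (the inverse-distance weighting is an arbitrary choice, and all the numbers are hypothetical).

```python
import random

# Sketch: every video can be picked, but videos whose cat-count is
# close to the preference center are much more likely.
center = 4.5                          # center of the user's likes
videos = [0, 1, 2, 3, 4, 5, 6, 7, 8]  # candidate videos, by cat-count

# Closer to the center -> larger weight; any decreasing function of
# distance would do, inverse distance is just simple.
weights = [1.0 / (1.0 + abs(v - center)) for v in videos]

pick = random.choices(videos, weights=weights, k=1)[0]
```

Over many picks, four- and five-cat videos dominate, with the occasional zero- or eight-cat video mixed in.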
Of course, these videos are going to have more dimensions on which we're going to want 01:14:01.820 |
But that's okay, because the same thing works even as we go to more dimensions. 01:14:07.940 |
So maybe we say, okay, there's two numbers, let me select this here, maybe there's two 01:14:13.420 |
numbers by which we want to describe all of our videos. 01:14:18.300 |
So one number is the number of cats, and then the other number is like the number of skeletons. 01:14:26.220 |
So we could just draw this if you're looking at the screen here as just like another axis, 01:14:38.500 |
You know, a video with seven cats and one, two, three skeletons would show up right here 01:14:44.040 |
A video with like zero cats and four skeletons might be over here. 01:14:49.460 |
And again, we see, we plot every time you like a video, we kind of plot it in this two 01:14:56.300 |
dimensional space, and we can do the exact same thing we did before, where we find like 01:15:00.980 |
roughly speaking where the center is by some sort of center metric, okay, roughly speaking, 01:15:06.100 |
this is the center of all the videos you've liked. 01:15:08.420 |
And so now when we randomly select videos, they're going to be kind of roughly in this 01:15:11.460 |
range, like you're gonna see a lot of stuff with a good number of cats and a fair number 01:15:16.500 |
And like, you're very rarely going to see something with like a bunch of cats and a 01:15:19.380 |
bunch of skeletons or no cats and you know, whatever, right? 01:15:25.500 |
If we added a third number, we would be in three dimensional space, and you could imagine there's regions where 01:15:29.300 |
you have lots of videos you like in that three dimensional space. 01:15:31.500 |
And when we randomly select videos, we select them near there, we can expand the number 01:15:36.100 |
of numbers we use to describe these videos, they can get much larger. 01:15:40.340 |
And something like TikTok is going to have probably thousands of different numbers, each describing some aspect of the videos. 01:15:48.300 |
Now we can't draw this, once we get past three numbers, we can't really draw these in a way 01:15:52.580 |
that makes sense to us, but the same mathematics works. 01:15:56.700 |
So the videos are described by a ton of numbers. 01:16:03.020 |
And then we can select for videos that are in some sense close to the clusters of videos you've liked. 01:16:14.020 |
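The same machinery works in more dimensions. As a sketch, each video becomes a vector of feature numbers (hypothetically [cats, skeletons]), the center is the component-wise mean of the liked videos, and closeness is Euclidean distance; nothing here changes when the vectors have thousands of components.

```python
import math

# Sketch: preference center and distance in any number of dimensions.
def distance(a, b):
    """Euclidean distance between two feature vectors."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def center(vectors):
    """Component-wise mean of a list of feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

likes = [[7, 3], [5, 2], [6, 4]]  # liked videos as [cats, skeletons]
c = center(likes)                 # -> [6.0, 3.0]
```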
What if you like multiple, there's, if we look in this region where you have a bunch 01:16:18.900 |
of clustered videos you like, what if there's like multiple types of videos you like? 01:16:23.860 |
This just shows up as like multiple clusters, kind of like multiple clusters in this multidimensional 01:16:32.060 |
We have ways of finding a bunch of different center points. 01:16:35.740 |
Okay, there's a bunch of different center points that each correspond to like a type 01:16:40.740 |
And so now when we randomly select a video to show you, we're giving extra probabilities 01:16:46.780 |
to anything near one of these clusters, and the bigger the cluster, the more likely we are to select from it. 01:16:52.380 |
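One standard way to find multiple center points is k-means clustering. Here is a toy two-center version in one dimension with made-up like data; a real system would cluster in thousands of dimensions, but the loop is the same idea: assign each point to its nearest center, then move each center to the mean of its group.

```python
# Sketch: a tiny k-means (k=2) on one dimension, finding one center
# per "type" of video the user likes. Like data is hypothetical.
def two_means(points, iters=10):
    centers = [min(points), max(points)]  # simple spread-out init
    for _ in range(iters):
        groups = [[], []]
        for p in points:
            i = 0 if abs(p - centers[0]) <= abs(p - centers[1]) else 1
            groups[i].append(p)  # assign point to its nearest center
        # Move each center to the mean of its group (keep it if empty).
        centers = [sum(g) / len(g) if g else centers[i]
                   for i, g in enumerate(groups)]
    return sorted(centers)

likes = [0, 0, 1, 5, 6, 6, 8]  # cat-counts; two rough clusters
centers = two_means(likes)
```

With this data the two centers settle near the zero-cat cluster and the five-to-eight-cat cluster, each of which then gets its own weight when selecting.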
The other complication, well, how do we know if you like something that you've never seen 01:16:57.300 |
TikTok answers this by alternating between just purely showing you something weighted 01:17:03.400 |
towards the things you like versus showing you something new. 01:17:06.940 |
So it will opportunistically show you new things just to see, give you a chance to show 01:17:12.840 |
a preference for things you've never seen before. 01:17:15.200 |
That's why when you use TikTok, at first it kind of drifts over time until it finally locks onto what you like. 01:17:20.900 |
It'll show you a lot more random stuff at first to try to see what you like. 01:17:26.180 |
It's like very roughly speaking, something like this is going on. 01:17:28.460 |
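That alternation between exploiting known preferences and showing you something new is often called an epsilon-greedy strategy. A sketch, with a hypothetical epsilon value; there's no claim that TikTok uses exactly this rule, only that something in this family is plausible.

```python
import random

# Sketch: epsilon-greedy selection. Some fraction of the time,
# explore uniformly so unseen content gets a chance; otherwise,
# exploit the weights learned from the user's likes.
def next_video(candidates, weights, epsilon=0.1):
    if random.random() < epsilon:
        return random.choice(candidates)  # exploration: uniform pick
    # Exploitation: weighted toward the user's preference centers.
    return random.choices(candidates, weights=weights, k=1)[0]
```

A new account would effectively run with a large epsilon (lots of random stuff), shrinking as the system learns what you like.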
All right, so here's some conclusions about this. 01:17:33.260 |
These algorithms are automatic and agnostic to content details, right? 01:17:37.500 |
It's not computer code where you can come in and it has in there political content, 01:17:46.040 |
unsafe for kids content, sports content, and you can turn a knob, let's turn down politics 01:17:52.040 |
and turn up sports content, or turn down controversial and turn up non-controversial. 01:17:59.660 |
It has all these numbers, most of which, by the way, are figured out using embedding tools 01:18:04.400 |
that are machine learning tools so that you don't know what they are in advance, right? 01:18:10.280 |
The software just figures out what numbers matter. 01:18:12.960 |
That's just automatically plotting your videos that you like or don't like and finding these 01:18:17.360 |
sort of center points in the space and randomly selecting. 01:18:20.280 |
The algorithm has no idea what these spaces are from a content point of view. 01:18:24.840 |
It's selecting vectors that are weighted to be near other vectors that you've expressed a preference for. 01:18:33.860 |
That's why when you rely purely on these algorithms, like TikTok does, it seems eerie, like how 01:18:37.760 |
did TikTok figure out that, you know, I like videos about, you know, bears working on crafts? 01:18:45.640 |
But this type of exploration of the space and weighted selection will pretty quickly 01:18:49.900 |
cluster these things together and the interest clusters will have a lot of weight. 01:18:54.240 |
It will just automatically find these things. 01:18:56.460 |
It seems very eerie, but it's actually quite simplistic mathematically what's happening 01:19:02.520 |
But because it's automatic, they're not nearly as controllable as we think. 01:19:09.760 |
Controlling these types of recommendation algorithms is difficult because of their automatic, content-agnostic nature. 01:19:16.520 |
What we end up needing to do is things like human-in-the-loop dead zone definitions. 01:19:23.760 |
So we show a lot of content to real people and we say, "Here's the type of stuff we're worried about." 01:19:28.920 |
And when they see the stuff that, to their human intuition, matches the things we're worried about, they flag it. 01:19:37.920 |
And they create what you can think of as like/dislike plots in this space. 01:19:41.560 |
And then you can find the sort of centers of these spaces of stuff that people or testers 01:19:45.360 |
said was bad, and you can reduce weight for videos near those. 01:19:49.360 |
It could give you sort of negative probability weight if you're near one of those zones, so those videos are less likely to be shown. 01:19:57.720 |
It's not just you coming in saying, "Don't do this type of content." 01:20:00.960 |
You have to have humans calling stuff bad, and that translates into this inscrutable 01:20:05.700 |
multidimensional space, and it sort of affects the weight. 01:20:08.680 |
So it's kind of an imperfect way of trying to tame this algorithm. 01:20:14.560 |
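One way to picture that down-weighting is a penalty applied to any candidate near a human-flagged center. Everything here is hypothetical: the one-dimensional features, the dead-zone radius, and the penalty factor are illustrative choices, not any platform's real parameters.

```python
# Sketch: human-in-the-loop "dead zones". Testers flag bad examples,
# and candidates near the flagged center get their selection weight
# sharply reduced. All numbers are hypothetical.
def adjusted_weight(video, like_center, bad_center, radius=1.0, penalty=0.9):
    # Base weight: closer to the user's preference center -> higher.
    weight = 1.0 / (1.0 + abs(video - like_center))
    # Dead zone: down-weight anything near content humans flagged.
    if abs(video - bad_center) < radius:
        weight *= (1.0 - penalty)
    return weight
```

Note the imperfection the discussion points out: content the testers never saw falls outside every dead zone and keeps its full weight.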
Stuff that the human testers haven't seen or clicked on is going to be treated like anything else. 01:20:20.480 |
And so these algorithms, we just, we have to keep this in mind. 01:20:24.160 |
Algorithms are automatic and mathematical and not easily tamable in a sort of human sense. 01:20:32.200 |
So when thinking about reforms of these technologies, do not think of them like a newspaper editor deciding what to run. 01:20:43.840 |
It can give you like eerily successful results in terms of honing in on your interest, but 01:20:50.320 |
it's also very hard to keep an algorithm like that successful. 01:20:54.040 |
And somehow have it avoid lots of stuff, because it doesn't know what stuff means. 01:20:58.280 |
Humans have to get in there, and it's messy at best. 01:21:03.920 |
They're often discussed as if they were these highly tunable, understandable things. 01:21:07.400 |
They're simple algorithmically, but complicated in their effect and complicated to tame. 01:21:14.400 |
We did philosophy and computer science in the same episode. 01:21:19.760 |
We just kind of got our nerd bona fides up here, probably also lost half our audience. 01:21:30.240 |
We'll be back next week with another episode. 01:21:38.080 |
Hey, if you liked today's discussion about the philosophical underpinnings of digital 01:21:42.320 |
minimalism, I think you'll also like episode 298 about intentional information in which 01:21:48.980 |
we go deep on understanding the role of information in human flourishing. 01:21:54.880 |
So today I'm going to argue that we misunderstand the impact of how we obtain information on