
Does Anyone Else Feel Exhausted? - The Hidden Forces Draining Your Focus, Energy & Happiness


Chapters

0:00 You Are Not a Cog
27:43 Should I break my large tasks into many small ones?
32:14 How will AI affect living the deep life?
37:39 How can I say “no” to more incoming requests?
41:27 Should an architect take on broader roles that don’t necessarily add to career capital?
44:00 Can a nurse implement time blocking?
47:30 Can a Kanban system work across all departments without being overly complex?
52:43 Organizing the details of a Trello board
57:25 Lifestyle-centric, value-based planning for a young family
65:02 AGI is not Superintelligence


00:00:00.400 | So one of the things I love to do as a computer science professor who also thinks more broadly
00:00:04.740 | about how we live and work in the modern digital environment, is to draw connections between
00:00:10.600 | these two worlds of mine, the computer science and the advice world.
00:00:15.160 | So I went to a talk the other day, it was given by a computer security researcher from
00:00:19.540 | around this area, and it sparked in my mind an interesting thought about one of the reasons
00:00:23.900 | why we often feel so exhausted and unhappy with contemporary knowledge work.
00:00:29.080 | So what I want to do here is try this out for size.
00:00:31.960 | I am going to connect a very narrow computer security issue with the very broad question
00:00:38.580 | of how do we make our work less exhausting?
00:00:41.320 | All right, so let's get into it.
00:00:43.760 | I pulled up on the screen here for people who are watching instead of just listening a meme
00:00:49.520 | that gets at a clear computer security issue.
00:00:51.620 | So here's what this meme is, there's someone at a computer, and here's the text, "Sorry,
00:00:56.980 | but your password must contain an uppercase letter, a number, a haiku, a gang sign, a
00:01:02.360 | hieroglyph, and the blood of a virgin."
00:01:04.480 | All right, so does this sound familiar?
00:01:08.480 | Starting in the early 2000s, and picking up with increasing urgency, there have been ever-escalating
00:01:14.860 | requests from software and security ops to make your password better and better from a hard-to-crack,
00:01:24.400 | or security, perspective.
00:01:27.200 | And the way this sort of process unfolded was like at first there were sort of suggestions,
00:01:31.580 | like, "Hey, a good password, you know, should have this."
00:01:34.900 | People ignored that.
00:01:35.900 | And so then they started educating, like, "Well, we're going to give you some like information
00:01:40.040 | about like why you want a better password or what makes a better password."
00:01:44.720 | That was largely ignored.
00:01:45.580 | And then the software and security operators finally just began forcing people, like, "Your
00:01:50.180 | password has to obey all these rules or we're not going to accept it."
00:01:53.240 | So you have to figure this out when you set up your password.
00:01:57.420 | There's other rules as well.
00:01:58.420 | It's not just what a new password has to do.
00:02:00.240 | They began adding rules about, like, "We looked at your last passwords as well, and it can't
00:02:03.880 | be too similar to your most recent password."
00:02:06.760 | Also rules about, like, "You have to change this password roughly like once every 18 minutes."
00:02:11.240 | I'm kidding, but that's roughly what they seem to request.
00:02:14.660 | So from a security operation perspective, it's as if their mindset is, "Why are users resisting
00:02:23.400 | these rules?
00:02:24.880 | Having more complicated passwords objectively makes these systems safer and harder to crack,
00:02:30.240 | and it's bad if these systems get hacked into and cracked."
00:02:34.020 | And from the security people's perspective, it's not like these rules are somehow super onerous,
00:02:41.160 | like, people don't know how to do them or it requires some sort of complex skill that people
00:02:45.540 | don't have.
00:02:46.540 | It's just coming up with a password that matches these various rules.
00:02:50.160 | Like, we're not making that big of an ask, and it's, like, important for passwords not
00:02:54.040 | to be cracked.
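To make the kind of policy being described concrete, here is a minimal sketch in Python of an escalating password check. The specific thresholds (a 12-character minimum, a 60 percent similarity cutoff) are hypothetical illustrations, not any real system's rules.

```python
import re
from difflib import SequenceMatcher

def password_problems(new_pw: str, previous_pw: str) -> list[str]:
    """Return the list of policy rules a proposed password violates."""
    problems = []
    if len(new_pw) < 12:  # hypothetical minimum length
        problems.append("must be at least 12 characters")
    if not re.search(r"[A-Z]", new_pw):
        problems.append("must contain an uppercase letter")
    if not re.search(r"\d", new_pw):
        problems.append("must contain a number")
    # The "can't be too similar to your most recent password" rule,
    # with a hypothetical 60% similarity cutoff.
    if SequenceMatcher(None, new_pw.lower(), previous_pw.lower()).ratio() > 0.6:
        problems.append("too similar to your previous password")
    return problems

# A small tweak to last year's password trips both the length rule
# and the similarity rule.
print(password_problems("Hunter2024!", "Hunter2023!"))
```

Running it on a small tweak of last year's password fails two rules at once, which is exactly the treadmill users end up on.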
00:02:55.040 | All right.
00:02:56.040 | So this is like a mindset in the security world.
00:02:59.660 | I'm going to give it a name.
00:03:02.760 | The mindset behind this approach to computer passwords, I'm going to call the isolated optimal
00:03:08.500 | mindset.
00:03:11.080 | I'm going to try to generalize this mindset, and then we're going to bring it out of computer
00:03:14.620 | security here in a second.
00:03:16.340 | But the isolated optimal mindset looks at specific behaviors in isolation and asks, "What's the
00:03:23.140 | optimal thing for a person to do in this situation?"
00:03:26.960 | So let's just look in isolation setting up a password for this IT system at our company.
00:03:32.220 | What is the optimal thing for a user to do here?
00:03:35.220 | Oh, to give a password that we know will be largely resistant to brute force cracking attempts.
00:03:41.500 | And the way that this isolated optimal mindset unfolds is, like, look, if the optimal thing
00:03:45.640 | you're going to do here is not crazy, like, okay, you need to go on a quest of ever increasing,
00:03:52.900 | you know, difficult obstacles.
00:03:53.900 | And when you make it back on the other end of the quest, you'll have your password.
00:03:57.700 | As long as it's not crazy like that, or something that people just won't know how to do, right?
00:04:01.960 | So you're not saying, "Yeah, we just need you to write, like, a quick C# script and let's
00:04:05.620 | just have it do an auto-generation of your password and make sure that it has, like, proper polymorphism
00:04:10.660 | on its objects."
00:04:11.660 | Actually, I don't know if C# is object-oriented, so that reference might have just upset Jesse.
00:04:16.440 | Jesse got really upset when I referenced a property of object-oriented programming when
00:04:22.060 | referencing C#, when famously you would use C++ more often than C# for object-oriented programming.
00:04:28.680 | And Jesse, he just, like, rolled his eyes and shook his head.
00:04:31.040 | He gets really mad.
00:04:32.040 | Would you say that's true when I mess up computer programming references?
00:04:35.540 | All I can think of is, like, Neal Stephenson's cryptography books when you're talking about
00:04:39.420 | all this, and I'm like, "I don't understand any of this."
00:04:41.600 | If there's one thing that upsets Jesse, it's when we're talking about polymorphism in objects
00:04:47.040 | and object-oriented programming, not correctly referencing the ingrained polymorphism support
00:04:53.040 | in various language classes.
00:04:54.820 | We fight about this all the time.
00:04:56.280 | But anyways, all right, so trust me, we're leaving the nerd world in a second.
00:05:00.120 | But we're starting in computer science.
00:05:01.120 | And we're going to move to the world that 99% of you care about.
00:05:04.620 | So this is the mindset that drives all that annoying password stuff, the isolated optimal
00:05:08.380 | mindset, which, again, is just, hey, what would be the optimal thing for someone to do
00:05:12.120 | here?
00:05:13.120 | And if that answer isn't crazily complicated or onerous, then, like, why won't they just
00:05:16.960 | do this?
00:05:18.960 | I think this mindset explains a lot of the expectations in the broader world of work that
00:05:26.960 | tend to exhaust us as well.
00:05:28.960 | So this mindset and security, which I heard and talked about in a talk the other day, got
00:05:35.020 | me thinking about, you know what?
00:05:36.460 | And this is the mindset in the world of work more generally that is causing some problems.
00:05:40.960 | I want to give you two concrete examples to try to make this more clear what I mean.
00:05:45.700 | Consider all the issues surrounding email.
00:05:47.460 | And let's apply the isolated optimal mindset to help explain these issues.
00:05:53.460 | Isolated in the moment, if I send you an email with like a question, the optimal behavior is
00:06:00.460 | for you to just respond right away.
00:06:02.460 | Right?
00:06:03.460 | Because think about it, if you would just respond to my message right away, it gives me a lot
00:06:09.640 | more flexibility and ease in how I do my work.
00:06:14.260 | Like when I need information, I can get it much in the same way that when, you know, I need
00:06:18.640 | information from the internet, Google will just give me that answer.
00:06:22.180 | And if I look at this behavior in isolation, you answering my email, it passes the test of
00:06:28.120 | this is not super onerous.
00:06:30.180 | I'm not asking you to go do something really hard or beyond your ability.
00:06:33.180 | In fact, it will probably just take you three minutes.
00:06:35.180 | Right?
00:06:36.180 | You just have to look this thing up and get me back an answer.
00:06:38.180 | So the isolated optimal mindset says, yeah, just respond to my email right away when I
00:06:44.680 | send it.
00:06:46.480 | But out of this comes that culture of responsiveness that we know creates a lot of problems.
00:06:53.260 | Let me give you another example of this at play in the world of work.
00:06:55.980 | Think about meetings.
00:06:58.500 | But in the moment, if you could just agree to a meeting when I need to get a group of
00:07:03.220 | people together to make a decision or to gather information or better yet, as they're trying
00:07:09.760 | to make the norm in certain parts of my university right now.
00:07:12.500 | Better yet, just have your calendar made public so that other people can just see all your free
00:07:18.860 | time and just choose a time that works on everyone's schedule and just have a meeting invite show up.
00:07:24.580 | So I don't even have to interact with you to pull you into a meeting.
00:07:29.300 | We don't even have to talk about when you're available.
00:07:30.300 | If you would just do this, it would make my life easier.
00:07:34.300 | It would be optimal in isolation because I have this thing and I need feedback from these
00:07:38.580 | three people on it.
00:07:39.580 | That would be a good way to make progress on it.
00:07:41.300 | And if I could just, without having to do much else, just have a meeting go on the books and
00:07:46.020 | we'll all get together at the next available time, we can all get together and talk about it.
00:07:49.540 | That makes life easier.
00:07:52.580 | It seems optimal in isolation and it's not super onerous.
00:07:56.420 | Like, what do you care if like some meetings show up on your calendar?
00:07:59.620 | It's work.
00:08:00.100 | Work has meetings and your time was free.
00:08:02.260 | And like, what's the problem?
00:08:02.900 | I'm not asking to do something onerous.
00:08:04.100 | So the isolated optimal mindset says, yeah, we should just be able to like auto schedule
00:08:07.700 | people in the meetings when we need them.
00:08:08.980 | This of course creates that culture of meeting availability, which itself leads to all sorts
00:08:14.900 | of problems in practice.
00:08:16.900 | So what is the alternative?
00:08:20.260 | Well, this is where I want to go back to the world of computer security,
00:08:23.780 | because that approach to passwords is now something that's getting a lot of pushback.
00:08:27.860 | And if we look at how the computer security world is beginning to push back
00:08:32.500 | on the "give me the super complicated password because it's going to make our system more secure" idea.
00:08:37.220 | If we look at how the computer security world is starting to push back on that narrow issue,
00:08:40.980 | we can see that that solution is going to generalize to our broad work issues as well.
00:08:44.420 | So we're going to get some insight about how to fix the world of work more broadly.
00:08:47.780 | So this was the talk I was hearing about computer security.
00:08:50.900 | there is a new subfield within this broader topic that's known as human-centric security.
00:08:57.700 | And this subfield does something interesting.
00:09:00.260 | They work with, talk to, and observe at their actual jobs, real people.
00:09:08.260 | So they're not just sitting back and asking, for example, what level of complexity of a password
00:09:16.100 | means that the cracking software we have is going to struggle. Like, technically,
00:09:23.540 | what are the thresholds we need in our standards that will make it hard for a hash
00:09:28.580 | attack to, you know, crack it? They're actually watching real people. Hey, what's going on in the day
00:09:35.860 | when you get a request to set up a password? What else are you doing? What do you do with this
00:09:40.260 | password? Why aren't you setting up this password? Like what else is happening? What are your concerns
00:09:44.740 | here? So they actually talk to real people and they figure out the context in which these individual
00:09:50.820 | decisions are being made. So instead of using the isolated mindset of just in isolation, this would be
00:09:56.580 | the optimal way to set up a password. They say, no, what's the whole context of this person's like life and
00:10:00.900 | day and IT situation when they're asked to do that. And what they realized when they did this type of
00:10:07.700 | human centric research was like, well, wait a second, users are dealing with all sorts of different IT
00:10:12.180 | systems, both in their professional life and their personal life. And all the time they have to set up
00:10:15.940 | accounts and all these accounts are making these demands. And the number one problem
00:10:20.500 | users have is not that they can't come up with a password that meets those demands. They worry about
00:10:25.540 | forgetting them. They're not memorable. And if you forget it, it's a problem. Now you've added a big
00:10:33.140 | time overhead of having to get your password recovered. And that can be stressful that like,
00:10:37.860 | what if the system doesn't even let me do that? Now, the IT professional might say like, well, there's
00:10:42.660 | these like password managers you can use, but that's not obvious. And people have different systems they're
00:10:48.900 | using. Like, well, I use this computer at work and this phone is not for work, but I also use it sometimes
00:10:55.220 | for work. And my computer at home is both. And this system, though, I might want to access it on both
00:11:00.180 | systems. It's not obvious if you're not much more in the weeds on these type of computer security systems.
00:11:07.140 | And I hate to say this, computer researchers, but these password managers you talk about are not so
00:11:11.540 | obvious, especially when you have many devices of different operating systems used and owned for many
00:11:15.620 | different types of purposes. People aren't that confident about how do I set up these passwords?
00:11:20.100 | They don't necessarily trust those things. They say, well, why is this any more secure? Like,
00:11:23.380 | what if that gets hacked? All my passwords are there. You might say instead, well, write it down somewhere,
00:11:27.860 | but that's really fraught as well. Where am I writing this down? What if someone gets access to that?
00:11:31.300 | Where am I storing it? Well, when I'm at a hotel, I don't have, you know, access to that, right?
00:11:37.300 | That might be at home in a filing cabinet. And how am I going to remember this? And so they're like,
00:11:42.340 | if there's any way we can resist the rules to try to just get something in here that I'm going to
00:11:46.820 | remember, maybe like a single password that's easy to remember, that passes these rules I can use for
00:11:52.100 | all my systems. People are falling back to that, not because they don't understand the rules, not because
00:11:57.940 | they don't understand that, yes, this makes a password more hackable, but they're doing a calculus and
00:12:02.820 | saying, this is not worth it for me. The overhead of trying to obey these rules in the right way is worse to
00:12:08.580 | me than the fear that, like, your system might be compromised. So a human-centric security researcher
00:12:13.220 | says, great, that's what we have to work with. The reality of the psychology and the life and the
00:12:18.340 | context in which these decisions are being made. Like, maybe we need to set up systems that don't
00:12:23.060 | require passwords for the security, because there's alternatives you can do here.
00:12:28.580 | Or maybe we as a company have to make the password manager standard and we've pre-installed it. And
00:12:36.180 | it's part of your training when you work here, and you learn about it and it's not so scary
00:12:40.980 | and it kind of makes sense how it works. And it's been explained to you and that's worth doing upfront
00:12:43.780 | or whatever it is, but you're meeting people where they actually are. You're not tackling problems in the
00:12:48.260 | abstract. Hey, it's Cal. I wanted to interrupt briefly to say that if you're enjoying this video,
00:12:54.020 | then you need to check out my new book, Slow Productivity, The Lost Art of Accomplishment
00:13:00.020 | Without Burnout. This is like the Bible for most of the ideas we talk about here in these videos.
00:13:07.300 | You can get a free excerpt at calnewport.com/slow. I know you're going to like it. Check it out.
00:13:15.300 | Now let's get back to the video. All right. So let's bring this mindset back to our work problems we had
00:13:20.980 | from before. So if we return to email, for example, we said the problem with the isolated optimal mindset
00:13:28.580 | is that, yeah, it's optimal for you to answer my email fast. But if we all are making that same decision,
00:13:33.540 | I get 300 emails a day and all I'm doing is trying to answer the emails and I get exhausted.
00:13:39.060 | You would bring a human-centric mindset to the email picture and you immediately see,
00:13:44.020 | wait a second, this is exhausting. The actual behavior I'm watching this user doing at their
00:13:51.540 | real desk in a real job, they are exhausted because they have 200 emails they have to answer.
00:13:56.500 | And they're all different contexts. They have to keep shifting their brain from one context to another.
00:14:00.660 | And when they're away from the email, they know more are piling up and that has its own sort of
00:14:04.260 | social psychological cost as well, which is also stressful. Wait a second, this is not good.
00:14:09.780 | This approach to communication makes people like miserable and cognitively fractured and not very
00:14:15.300 | effective. Oh, great. We need to think up other ways to deal with communication that doesn't cause this
00:14:22.100 | problem. I don't care what's optimal for you in this moment for this one question. I'm like,
00:14:26.100 | what's the best way to run this office? So understanding the context tells you like,
00:14:31.300 | okay, we need to get away from ad hoc unscheduled messaging as our primary
00:14:34.900 | vector for like information flows. Same thing when we apply the human-centric approach to meetings,
00:14:41.220 | right? As we talked about with the isolated optimal approach, look, it would just be optimal
00:14:47.540 | if you make it easy for me to grab you for a meeting. But then we get over-scheduled.
00:14:51.460 | We get a situation here where your schedule becomes so full of meetings, with
00:14:57.300 | these little gaps of time in between, that all you're doing is going from meeting to meeting
00:15:00.660 | with no real time to recover or do anything else. It can become deranging. You have no breathing
00:15:05.460 | room. You're exhausted. You're falling deeper in the task hole instead of trying to get out of it,
00:15:12.820 | because every meeting generates more things. But before you can even process those things and make sense
00:15:17.380 | of them and write them down, you're in the next meeting and more things are piling up. So it could be
00:15:20.580 | uniquely deranging. You can't actually get work done. It's exhausting. It also becomes super inequitable
00:15:27.460 | because the only way to succeed in these setups is to actually do your work outside of the work hours.
00:15:32.100 | And guess what? Not everyone is set up to be able to do that. Not everyone is like a 24 year old
00:15:37.060 | living with roommates and bored, who's like, yeah, let me just like crush it from eight to 12 at night.
00:15:41.460 | Like other people have things going on in their lives. The human-centric mindset would say, okay,
00:15:48.020 | let's look at the context of auto-scheduling meetings. We look at the context of a real person
00:15:52.980 | and their real day. They have a ton of meetings on their schedule. This looks really stressful.
00:15:57.060 | So I don't care that it's optimal for the person in the moment setting up this meeting.
00:16:01.860 | The whole context shows that this is a very stressful way to do it. So we need another way of having
00:16:06.980 | group interaction or collaboration that doesn't fracture the schedule so much. And then it leads to the
00:16:11.860 | other types of solutions we talk about like office hours and docket clearing meetings and pre-scheduled
00:16:16.180 | standing meetings, and things where you have regular opportunities to have real-time interaction
00:16:20.900 | with people, but the footprint is constrained. Right? Now, these alternatives to
00:16:28.740 | ad hoc communication, the alternatives to ad hoc meetings, they don't pass the test of, is this the
00:16:36.020 | optimal thing in the moment for what I need right now. They don't pass that test. They're more
00:16:39.700 | inconvenient. They're less flexible. Some bad things will happen. But when you look at them from the
00:16:44.500 | human-centric approach, they make the actual day-to-day experience of the human users involved
00:16:49.700 | significantly better. So this is basically what I'm calling for. This is the idea that I'm pulling from
00:16:56.020 | the security world and trying to bring to the world of work more generally. I think in a lot of different
00:17:00.820 | ways we think about productivity and digital era knowledge work, a lot of these ways we are acting
00:17:05.140 | like the computer security engineers from the early 2000s. We're just thinking in isolation,
00:17:11.220 | what's the most efficient way to do this thing I need to get done right now? Ooh, technology can make
00:17:14.980 | that really fast. Let's do that. We need to be thinking more like the human-centric security researchers
00:17:21.220 | of the 2020s. We're saying what matters is the actual experience of the humans, what they're thinking,
00:17:28.500 | how they're feeling, what's easy, what's hard for them. And we want work to be effective and sustainable
00:17:34.660 | for the humans, not for the task. We want the humans to feel energized and successful and do good work,
00:17:42.420 | not individual tasks in isolation, feeling like they got executed in the most efficient number of cycles.
00:17:49.540 | So this human-centric approach, I have found it to be a useful analogy for thinking
00:17:58.660 | about work. So there's a page we can take from the world of computer security
00:18:02.900 | and we can bring that over here. Let me tell you, Jesse, what was funny and
00:18:07.380 | awkward about that talk. Great talk. The professor had done this really cool research, but it was awkward
00:18:16.020 | for me because they were looking at the way that people online doing
00:18:23.140 | VPN ad reads were misinforming the public.
00:18:30.740 | And I'm like, "Oh, we do VPN ad reads." And I eventually raised my hand. I was like, "Look,
00:18:35.140 | let me give you like the insider view." Because it was interesting. I think her view was
00:18:39.300 | that it's like these YouTube personalities are just like riffing on VPNs. I was like,
00:18:44.980 | "Oh, let me tell you about scripts. And let me tell you about like how this happens." So it's
00:18:49.220 | interesting. I was like, "I'm in a very unusual situation where I'm a computer scientist who also
00:18:55.860 | does ad reads on technical stuff." What'd she say?
00:18:59.460 | I don't know. I think she was like, "Am I in trouble? Like, is he mad at me?"
00:19:03.540 | She thought it was interesting. I was just talking about like, and it was an interesting discussion.
00:19:09.940 | I was like, "Let me tell you what that world looks like on the other side." It was really cool research,
00:19:13.700 | actually. They randomly sampled YouTube and were able to calculate, like, how many people are actually
00:19:20.580 | seeing some of these ads by figuring out, like, how many people are doing these ad reads and their
00:19:27.780 | views. And the idea actually applies to basically anything, not just VPNs. If there's a brand
00:19:33.620 | that is spending a lot on advertising on like YouTube or something, you could be hitting a huge amount of
00:19:40.980 | people because actually the cost per person is pretty low on YouTube. So you could be reaching like a huge
00:19:47.780 | amount of people. So you have to care about the information that's reaching them. The other
00:19:52.020 | thing I thought about that was awkward when I was writing this deep dive is I thought about our password
00:19:56.740 | security here at the HQ, which I don't think passes muster. To give people a sense, without giving away our
00:20:05.780 | passwords, I would say the password I use on our machines here is like the second easiest possible
00:20:13.140 | password. Would you say if the first easiest possible password would be password, would you
00:20:17.940 | say without saying what ours is, that is probably the second most guessable easiest possible password
00:20:23.060 | that you would use? It reminds me of Spaceballs when he's like your luggage combination is one,
00:20:27.860 | two, three, four. It basically is like that. But my thought, and this is why I'm not a computer
00:20:33.460 | security researcher, is my password protection on these computers is the door lock. Like we've already lost,
00:20:40.500 | if someone is in here trying to log on to our computer, they're just going to grab all of our
00:20:43.380 | stuff and go. Also, it's like congratulations, you have just gained access to four years of local
00:20:51.300 | archive copies of the Deep Questions podcast. There you go. It's not exactly missile codes on these
00:20:57.220 | machines. The one other thing that I think about for YouTube is I can't believe more people don't
00:21:02.980 | pay the $12 a month for ad-free YouTube. You're talking about me, basically.
00:21:07.940 | That blows my mind. I just haven't got around to it. I was telling someone about this the other day.
00:21:13.620 | Because people are like, "Oh, I see ads." Like I never see ads on YouTube.
00:21:16.580 | Let me give the context. Jesse is on my back because...
00:21:20.500 | No, it wasn't necessarily you. I forgot you.
00:21:21.940 | I know, but he is rightfully on my back. Whenever I load up a YouTube thing on my computer, I get the ads.
00:21:28.100 | And we make, I don't know, we generate just on YouTube ads alone, probably like tens of thousands
00:21:33.300 | of dollars a year. And I don't pay the whatever, $12. What is it? $12?
00:21:38.180 | Yeah. It's like less than $20 a month.
00:21:40.020 | Yeah. I just don't know how to do it. This goes back to this question of like human-centric computing.
00:21:46.500 | So you can be like YouTube pro or something. I know it's something you sign up for, but I don't know
00:21:52.580 | what it is. So I just am constantly skipping ads and like watching ads. And I'm really plugged
00:21:57.860 | into the world of advertising on YouTube. I definitely, I'll tell you what we need
00:22:03.540 | is like Liberty Mutual Insurance. I'm seeing a lot of Liberty Mutual ads. And then also ads
00:22:10.020 | for like whatever I just was talking or thinking about. Somehow those always, those always show up.
00:22:14.740 | So what is it though? Pro?
00:22:17.620 | Yeah. Premium.
00:22:19.060 | Premium.
00:22:19.780 | Yeah.
00:22:20.260 | Okay. I guess it, I mean, we do...
00:22:22.100 | YouTube premium.
00:22:23.380 | We have like a 275,000 subscriber channel and I don't pay the $12.
00:22:27.380 | I should.
00:22:29.380 | I should.
00:22:30.820 | All right. Well, there we go. So we nerded out about as much as I think our audience can take.
00:22:34.900 | So we've got some good questions coming up, but first let's hear from some sponsors.
00:22:40.980 | So I want to talk in particular about our friends at Uplift. Look, muscles are vital
00:22:46.580 | for movement and they play a key role in supporting the vascular system. The calves,
00:22:52.020 | if you know this, Jesse, are often called the second heart. They help pump blood against gravity,
00:22:56.820 | and aid in circulation throughout the body. By using a standing desk and incorporating movement accessories,
00:23:01.300 | you are more likely to engage these muscles, promoting improved blood flow and overall
00:23:06.020 | health. This is where Uplift comes in. They have the Uplift desk, which I think is at the forefront
00:23:13.140 | of ergonomic solutions. These things are, they're really good technology. Now we know what a standing
00:23:18.740 | desk is. They go, if you haven't seen them in a while, they go up and down now, so you can adjust
00:23:22.980 | them and they are much more compact than they used to be. Somehow the lifting mechanisms are built into
00:23:27.700 | the legs themselves. They also hold a lot of weight. That's another ad I see a lot. Uplift desk ads on
00:23:35.380 | YouTube and they're compelling. Like the one that I keep seeing is they have all this crap on the desk
00:23:42.900 | and the desk can still raise up and down. See, I remembered that. So YouTube ads are effective.
00:23:48.500 | But Uplift also has these other accessories that are also meant to promote sort of healthy movement
00:23:53.620 | ergonomics during your day. The one I have and I've been messing around with is the Wobble
00:23:57.540 | stool. I'm going to bring it here, Jesse, so you can see it. It's like a stool that wobbles. It won't
00:24:01.540 | fall over, but it wobbles. So, like, you have to engage some core, not just, like,
00:24:07.300 | hold up your core, and it allows you to get movement. You're not just stuck in one position.
00:24:13.060 | I learned recovering from my back stuff that being stuck in a position can also be a problem. So I like
00:24:21.220 | these types of movement accessories. So Uplift is really a smart new way to think about the
00:24:27.620 | furniture you use in your office to promote healthy posture, ergonomics, and movement.
00:24:35.860 | The Uplift desk itself has over 200,000 configurations, which allows you to tailor your workspace to perfectly
00:24:43.220 | suit your needs. They even have a desk configurator that helps you build out a complete workstation with
00:24:48.260 | storage, seating, wire management, so you can build out like just the space you need. And again,
00:24:52.020 | the other movement accessories are excellent as well. So make this year yours by going to
00:24:58.820 | upliftdesk.com/deep and use our code DEEP to get four free accessories, free same-day shipping,
00:25:06.900 | free returns, and an industry-leading 15-year warranty that covers your entire desk.
00:25:11.540 | And they will give you an extra discount off your entire order. That's U-P-L-I-F-T-D-E-S-K.com/deep for
00:25:22.820 | our special offer. And it's only available at that link. So start 2025 right, stand, move, thrive with
00:25:30.340 | Uplift desk. This show is also sponsored by BetterHelp. All right, let's talk numbers here. Traditional in-person
00:25:39.940 | therapy can cost anywhere from $100 to $250 per session, which adds up fast. But with BetterHelp
00:25:48.020 | online therapy, you can save on average up to 50% per session. You pay a flat fee for weekly sessions,
00:25:56.340 | which saves you big on cost and on time. The thing is, therapy should be accessible. It should not be a
00:26:03.300 | luxury. Online therapy helps with that because you can get quality care at a price that makes sense, and it can
00:26:08.420 | help you with anything from anxiety to everyday stress. Your mental health is worth it. And now
00:26:15.380 | it is within reach. I really feel like I've been talking to people about this a lot more recently.
00:26:19.940 | Maybe it's because the next chapter I'm writing in my book (after the current one; I've started
00:26:25.380 | outlining the next one) is on reclaiming your brain. And I've been thinking a lot about the role of your
00:26:30.580 | brain and your relationship with your brain and the role it plays in cultivating a deep life.
00:26:35.540 | So this is really on my mind. This idea that your relationship with your brain is so vital. We think
00:26:41.780 | about all the external stuff that might be relevant for making your life better or more meaningful,
00:26:46.740 | how you manage your time, your goals, your plans. But if you have a bad relationship with your brain,
00:26:51.540 | all this is going to be difficult to put in place or implement. Therapy is one of the best ways to
00:26:57.300 | improve that relationship with your brain. And why I'm proud to be sponsored by BetterHelp is that I
00:27:02.500 | really think they make this accessible to more people. It's online. It's easy. You're not stuck having to be
00:27:06.740 | in a specific location or stuck with a particular therapist. And as mentioned, the price I think is
00:27:12.500 | right. There's over 30,000 therapists in the BetterHelp Network, making it the world's largest
00:27:16.580 | online therapy platform. They serve over 5 million people globally. Again, it's convenient. You join a
00:27:22.500 | session with a click of a button, so it's easier to fit this into your busy life. Your wellbeing is worth it.
00:27:28.580 | Visit betterhelp.com/deepquestions to get 10% off your first month. That's betterhelp.com/deepquestions.
00:27:38.660 | Speaking about questions, Jesse, let's get on with our listener questions for the show.
00:27:43.620 | First question's from Raphael. I struggle with context switching, especially with complex problems that
00:27:51.300 | take days to solve. How can I effectively switch to smaller tasks? Should I treat the larger tasks just like
00:27:56.980 | the smaller ones, externalizing things into Trello until I get back to them next?
00:28:00.500 | Well, it's a complicated question. There's two different possible things going on here. So one
00:28:07.380 | is approaching bigger projects using the David Allen approach, and this might be what you're suggesting.
00:28:13.860 | So let's deal with that first. The David Allen approach to big projects is there are no big projects.
00:28:18.180 | I mean, there are, but you don't work on big projects, is the way David Allen would say it. He would say,
00:28:25.220 | all you can do is next actions, actions that take a few minutes to do that are clearly defined and you
00:28:30.820 | know exactly how to execute them. So like in his approach, projects just get turned into next actions
00:28:37.860 | that go on a list with any other sorts of next actions, whether they're associated with projects or not.
00:28:42.180 | And work becomes churning through next-action lists. And the fact that some of these next actions are
00:28:47.700 | supporting a bigger project is great, but you don't actually treat it different in the moment.
00:28:53.860 | It's a computer processor paradigm, right? Like a computer processor just executes instructions
00:28:58.340 | from a limited instruction set. It doesn't care or know that this particular instruction is part of
00:29:04.900 | this big program that does this particular function, and that instruction is from another
00:29:08.580 | program doing this type of function. It doesn't care. It just says, give me the next thing to do.
00:29:12.660 | Increment register, done. Retrieve this value from memory, done. Right? So that's kind of the David Allen
00:29:18.420 | approach. If you can just be executing instructions that are very clear, you save yourself from having
00:29:24.740 | to constantly be trying to think about what you need to do and why and what that means. And when
00:29:28.820 | you're not negotiating with yourself all the time, work becomes less stressful.
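As a minimal sketch of that processor paradigm, assuming made-up actions and projects, the executor below just pulls the next clearly defined action and runs it, with no awareness of the project it supports.

```python
from collections import deque

# The next actions and projects here are hypothetical examples.
next_actions = deque([
    ("call vet to book appointment", "household"),
    ("email Sam the draft outline", "book project"),
    ("buy cat litter", "household"),
    ("list three candidate chapter titles", "book project"),
])

# Like a CPU, the executor runs each clearly defined instruction in turn;
# which "program" (project) it belongs to is irrelevant at execution time.
while next_actions:
    action, project = next_actions.popleft()
    print(f"done: {action} (project: {project})")
```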
00:29:32.740 | In the David Allen approach, I believe he calls them stakes in the ground. You have a list of projects,
00:29:36.900 | but you just sort of review that semi-regularly to say like, do I need to generate some more next
00:29:42.660 | actions from some of these projects to put on my next-action list? And then otherwise you're just
00:29:46.020 | executing those lists. I tend to think this approach doesn't work particularly well for most projects
00:29:51.540 | because a lot of big projects can't actually be decomposed into a sort of a sequence of isolated
00:29:58.180 | next actions that you can just interleave with other types of next actions.
00:30:02.260 | Most of these types of projects, especially in sort of non-entry-level knowledge work positions,
00:30:06.340 | require a non-trivial sustained engagement, right? You have to go through the time required to build up
00:30:14.820 | the cognitive context relevant to the project you're working on, swap in the right things, inhibit the
00:30:20.020 | things that are unrelated to it. And then once that cognitive context is loaded, really give some time
00:30:24.900 | to try to grapple with the project, make progress on it, learn from that progress, adjust how you understand
00:30:30.660 | it. When you're all done, sort of, like, update your notes and your understanding of what's going on.
00:30:34.260 | It requires sustained attention. You can't just break down that project into two-minute steps you
00:30:39.780 | can interleave with changing the cat litter and calling the credit card company to renew your card.
00:30:43.700 | So I don't tend to be a big believer in breaking down big projects into just small isolated things
00:30:49.140 | that you treat like anything else. I think projects have to be scheduled on multiple scales. This is why I
00:30:54.980 | recommend, with multi-scale planning, that you kind of have the open loops there in your quarterly
00:30:59.860 | plan, which you review every week. And you can look at your week and say, when am I going to make
00:31:04.580 | progress on these big projects this week? And you're moving things around and actually making time for
00:31:09.060 | them to make sure that time happens. And then on the daily schedule, you're making a time block plan
00:31:14.180 | for your day. That's based first and foremost on what's on your calendar. So the time gets preserved.
00:31:18.260 | And that's the way I like to think about big projects, right? Like, to be more concrete:
00:31:22.260 | Here's a big project I'm working on: writing a chapter of a new book. That doesn't break down into the
00:31:27.860 | small next actions I put on a Trello board. Instead, each week, one of the big things I keep
00:31:35.060 | in mind is I'm working on my book. This is one of my big things this quarter. And in fact,
00:31:39.380 | what am I trying to get done this quarter? I'm trying to get done these two chapters.
00:31:41.940 | So how can I make sufficient progress on this this week? And I'm looking at like, well, most of these
00:31:46.980 | mornings I can start each day with writing. Let me like protect them. This day I can't, I have a faculty
00:31:51.460 | meeting. So maybe I'm going to put together like an evening block. And then these are big blocks to make
00:31:55.780 | sustained effort on a hard project. So in general, I'm not a big fan or a big believer in treating all
00:32:01.780 | work the same, where it all gets knocked down into little tasks. I think big projects sometimes need big
00:32:07.060 | blocks of time. And those have to be treated differently than small tasks. All right, who we got next?
00:32:14.100 | Natalie is next. How do you think AI will affect living the deep life? Do we need to pivot to new skills
00:32:20.580 | because AI will be able to automate so much and deliver things like hard tasks and deep research
00:32:24.980 | better than humans? Are you making any adjustments yourself and your approach?
00:32:28.660 | Well, I mean, more generally, lifestyle centric planning says you should always
00:32:33.300 | be keeping up with what is my career capital that is the rare and valuable skills I offer to the market
00:32:40.260 | because that is your main source of leverage for continuing to shape your life in ways that resonate
00:32:45.620 | and that take it away from things that don't. So like in a broad sense, well, sure, you want to be
00:32:52.660 | aware of anything that might be reducing the value of your current career capital and/or give you an
00:32:58.980 | opportunity to build up new career capital. If we get more specific, I would say for most people,
00:33:05.380 | like 99% of people in the knowledge economy, AI is not that relevant in its current form. It's not that
00:33:12.260 | relevant yet to these questions. I mean, if you're a freelance photographer, sure. But if you're an
00:33:18.100 | executive, it's not there yet, right? So what I keep arguing about AI is you don't have to be a technology
00:33:25.220 | prognosticator. I don't think you need to be trying to guess, okay, where is this going to evolve towards?
00:33:33.940 | And let me try to preemptively start building up skills that will meet AI when it gets there so I
00:33:39.860 | can take advantage of that skill. I think right now these efforts to try to learn new skills to be AI
00:33:46.500 | ready are largely wasted effort because you're learning skills relevant to AI in its current form. And its
00:33:53.060 | current form is clearly not the form in which it's going to have the biggest economic impact. So AI, we argue this
00:33:58.420 | all the time on the show, so I won't belabor it, but AI right now is like a generative AI based on
00:34:03.860 | language models. I'll be more specific. It's largely interacted with right now within a chat bot paradigm:
00:34:09.380 | I type text into a box, and then an entity that sort of acts like an oracle answers back, kind of
00:34:16.180 | answering my request. There was this hope, OpenAI in particular had this hope, that if the AI oracle on the other
00:34:24.420 | side is sufficiently advanced and powerful, that just having this text box interface with an all-knowing
00:34:30.980 | oracle would just, people would find ways to make it useful for their work. And this by itself would
00:34:35.700 | be a killer app or lend itself to killer applications in many different fields. That didn't happen.
00:34:40.900 | I mean, this was the, in 2022, this was the thought. Yeah, we're like six months away from massive
00:34:47.860 | disruption. That's just going to start pouring like waves over niche after niche in the knowledge economy.
00:34:52.820 | But year after year passed and that didn't happen, even as the technology got better. So it's pretty
00:34:57.220 | clear now, like, oh, there's another evolution of sort of classic product market fit that's going to
00:35:02.340 | have to happen before we get the biggest professional disruptions from AI. Most people interacting with a
00:35:08.020 | chat bot is not actually, they're not building killer applications for their work. It's going to be
00:35:12.740 | some new integration into existing software, some sort of new way. This hasn't been invented yet,
00:35:18.660 | but clearly this current chat bot form is not causing the disruption that was foreseen.
00:35:22.420 | But a lot of people were still saying, well, I need to learn to be really good at using the chat bots.
00:35:26.980 | And so like a lot of people invested a lot of time into, for example, prompt engineering for
00:35:31.460 | the current generation of chat bots. That's going to be a worthless skill.
00:35:34.500 | Two years from now, if we're looking at industries being highly disrupted by AI, it's not going to be
00:35:41.060 | people typing these like carefully constructed prompt sequences into a chat bot. It's going to be something
00:35:46.580 | that's going to be way more intuitive and easier to use. So what I'm arguing is you have to wait until
00:35:52.420 | the disruption vector is visible before you can adapt to it. And we just don't know what that's going to
00:35:57.380 | be for most jobs. So if you can't point towards, in my job, AI is starting to disrupt it in this way.
00:36:05.380 | There's more and more people doing X. This company is doing it. This is going to make a lot of the things I do
00:36:09.300 | now less valuable. If you don't see that happening now or similar things happening in related industries,
00:36:14.820 | you don't really know what skill to build up. So I always say, let's have cautious watch and wait
00:36:19.300 | right now with AI for most jobs. We don't know how it's going to evolve into the vectors that are going
00:36:25.060 | to have disruption. But right now, if we think about the disruption, like a viral infection
00:36:30.820 | through the job market, the current form of this virus is not highly infectious. Like a lot of people,
00:36:37.860 | maybe a lot of people have been exposed to it. There's a lot of people who mess around with these
00:36:40.980 | chatbots. But really, it's still the enthusiasts who are using them most right now. So let's keep
00:36:47.060 | an eye on it as it evolves. But I don't know yet what skill to tell you to pick up or what skill of yours
00:36:52.660 | might become less relevant. This might be slower and messier and more bespoke than you realize.
00:36:59.700 | I mean, my big argument I've been making on the show is probably my best guess is the first wave
00:37:04.020 | of actual disruption will be unlocking advanced features that already exist in existing software.
00:37:10.100 | Like you could always do this advanced stuff in Excel. I just don't know how to do it. But with an AI
00:37:16.020 | natural language interface, now I can. So it's going to be unlocking productivity in terms of
00:37:20.260 | latent ability and existing skills. It's a very different vision from what people fear, which is that somehow
00:37:25.700 | ChatGPT is going to just start on its own doing parts of your job or something like
00:37:30.980 | that. So keep an eye on it. Cautious watch and wait. But it's unclear now where the disruption is going to
00:37:35.940 | happen or what skills it is you should be learning. All right. Who's next?
00:37:40.660 | How to say no is next. I work on a multi-year transformation project, but I'm also seen as
00:37:46.900 | one of the faces of the department. I use time blocking and Kanban, but the work still never stops.
00:37:52.100 | My "waiting for others" list is overwhelming. Is there a way to say no to certain requests so they don't derail
00:37:57.220 | our long-term goals? Well, you need to first face the productivity dragon here and actually, like, write
00:38:02.180 | down in one place all the different types of things that you find yourself responsible for right now.
00:38:06.420 | And I think you're going to find that you have yes'd your way into an overwhelming number of
00:38:12.660 | information flows or systems where you have to be involved. Like, to be the face of a department,
00:38:17.780 | that means like you're reasonable, you're reliable, people like you, you're personable.
00:38:24.100 | So of course people are going to come to you and say, can you do this? Can you do that? And there's
00:38:27.380 | like a little thrill you get when you say yes. But if you face your productivity dragon, you might just say,
00:38:31.060 | well, this is too many things. Like this fractures my time too much. It's more than I can service well.
00:38:36.180 | And then you need to simplify down from that overwhelming amount to an amount that
00:38:42.180 | is more reasonable. The key thing I can tell you is based on how you describe yourself,
00:38:49.140 | you're a face of your department. You're sophisticated in your use of things like time blocking
00:38:54.740 | and Kanban. People really like working with you. They don't want you to go. Their fear
00:39:01.060 | is not, ooh, are you going to say something unreasonable or make an unreasonable request?
00:39:06.340 | We're just waiting to drop the ax on you as soon as like you say something or show any sort of like
00:39:12.020 | lack of gratitude. No, no. Their fear is what if this person goes? This is a really good person.
00:39:16.260 | So for you to come in and say, look, I'm documenting all the different things I'm working on.
00:39:20.660 | This is too many. This is the amount that I think actually allows me to be effective on them.
00:39:24.420 | So I am going to reduce down to this. If you have clarity, you have numbers. It's clear,
00:39:29.620 | you know what you're doing. You're responsible. You're personable. People like you.
00:39:32.500 | You have a lot more latitude than you think because your leverage here is you going.
00:39:36.420 | People don't want good people to go. It is very hard to hire good people. So my instinct here is,
00:39:41.620 | yeah, you have to reduce. So starting with facing that productivity dragon is the right way to do it.
00:39:46.980 | So it's not just an ad hoc decision that you personalize and be like, wow,
00:39:49.860 | I'm being mean to this person. Instead you have a realistic assessment of your workload,
00:39:54.260 | a realistic assessment of what is a reasonable load for you. And then you're saying, hey, trust this
00:39:58.660 | assessment. I have to find one way or another to get there. I mean, it is hard.
00:40:01.300 | I've been saying, I say no to so many things, Jesse, so many things. I always think like my publicist
00:40:08.100 | and my speaking agent think I'm either crazy or like don't want to be successful, but I have to say no to
00:40:14.180 | so many things. I'm still doing too much.
00:40:17.700 | - Every week you say no to things? - Yeah.
00:40:20.500 | Cool stuff too. I don't know. It's just hard to, too many jobs.
00:40:25.540 | - You got to write chapter three. - I'm good at time management.
00:40:29.620 | That's why it's easy. That's why I know like this seems like it would be nice to say yes in the moment,
00:40:34.660 | but I know too much about my productivity dragon to be like, no, no, I know the impact of doing that
00:40:38.820 | and where I am and how much of this stuff I can do. And I just can't be doing that right now.
00:40:41.780 | I've found that people are actually pretty reasonable about it. If I'm like, look,
00:40:45.380 | here's the thing and I'll do things like this. Like I had a conversation with someone recently where
00:40:49.700 | there was like a, without giving like away details, a thing that was coming on,
00:40:54.980 | we need people to sign up to like do X, Y, and Z. And I just had to be like, look, I can't,
00:40:58.820 | I can't participate in any of this this month. I'm just sort of scheduled about two months out now
00:41:06.740 | and I just don't have give for this. And I know it'd be good if I'd be there. I normally would.
00:41:11.620 | I've done it this past year. I just can't do any of it this year. You know, if you're clear,
00:41:16.740 | I think people get it. Like, okay, yeah, must be busy. You know? So I get a lot of it. Saying no
00:41:21.700 | is hard. I'm excited. That's why I'm excited for Tim Ferriss's new book, which is just about saying no.
00:41:26.660 | That one's going to be good. All right. Who have we got next?
00:41:30.660 | David's next. I'm an architect that left a traditional practice to be an in-house design
00:41:35.540 | leader for a hospital system. An executive has encouraged me to take on diverse roles to broaden my skill set.
00:41:41.460 | How do I balance openness to opportunity while staying focused on a deliberate career trajectory?
00:41:47.780 | Well, just be deliberate about your openness to opportunity.
00:41:51.300 | So, okay. What they're really saying is like, don't just do one thing.
00:41:56.900 | You might want to pick up other skills. That's fine, but be very deliberate about that.
00:42:00.020 | Well, if I'm going to do that, what is my current workload? Let me face my productivity dragon. Let me
00:42:06.660 | just do one new skill at a time. That's what I'm doing this year is like, I'm going to take on this
00:42:10.820 | other role and I'm going to simplify this other one until I can master it. Then I'll put that aside and
00:42:14.500 | take on another one. Like being really clear. Again, your workload is your workload and be very careful
00:42:20.580 | about it. The other thing you could do here is just get more clarity on your career trajectory.
00:42:24.500 | So yes, this executive has a vision for what they want your trajectory to be. And maybe taking on these
00:42:29.780 | diverse roles is like a good path forward towards an executive position like his or hers. But maybe
00:42:36.420 | that's not what you want. Maybe that's not what you're looking for. You're like, no, no, I want to just
00:42:40.260 | like do this type of project and eventually get like more autonomy so I can like move over here
00:42:44.820 | and build my farmhouse in Door County up in Wisconsin, and walk among the trees.
00:42:53.620 | And I don't know, you could have just some different vision. Great. Be specific about that. And like,
00:42:57.540 | that's what I'm working towards. If you're working towards something specific, it's easier to resist the
00:43:02.420 | blandishments of people who are trying to push you over there, which is not where you actually want to go.
00:43:06.740 | So get clear about what you want to do. And if having experience in other roles will be key for what you
00:43:11.860 | want to do, be very deliberate about that. You can be servicing that general desire without having to,
00:43:18.020 | for example, overload yourself. You could be exposed to like other types of obligations in the office
00:43:23.380 | without taking on all the other obligations in the office. So be careful of traps where
00:43:28.420 | a good intention creates a bad scheduling situation. Oh, I was thinking about Door County.
00:43:37.380 | Door County, God, this is coming up from Deep Work. I think that's the, I think it's Deep Work where
00:43:45.460 | I talk about Ric Furrer making the Viking sword. And he works at a barn with the doors open,
00:43:53.540 | like overlooking one of the Great Lakes. And that was in Door County. That's what I was thinking about.
00:43:56.820 | It's cool up there. It's nice, nice country. All right. Who do we got?
00:44:00.660 | Next is Jay. Is it possible for a nurse to implement time blocking in a 12 hour shift?
00:44:06.340 | No, it's a different type of job. Time blocking presupposes a job more like a knowledge work position,
00:44:14.180 | where you have a relatively large amount of autonomy in terms of how you execute your work.
00:44:18.020 | So anything that's objective based, like, yeah, here's the things you've taken on to do. You need
00:44:23.460 | to make progress on these things and maybe attend some meetings, but how you fill your time between those
00:44:26.900 | meetings is up to you. Time blocking is very useful there, so that time is not wasted. A nursing shift, typically,
00:44:32.180 | no, no, it's way more structured than that. Like you're, you're seeing patients either as like
00:44:37.220 | assigned by the incoming appointment flow, if it's at a private practice or what's going on on the big
00:44:41.380 | board. If you're in, like, an emergency department type of situation, it's way more structured
00:44:46.980 | what you're doing throughout your shift. So time blocking is not that relevant. There's other things
00:44:52.420 | that are relevant in the medical scenario that could make work more sustainable or less exhausting.
00:44:56.500 | Like I'm a big believer in looking at places in the medical context where there's unnecessary friction
00:45:02.820 | that adds up over time to a lot of exhausting heat. Like the way that people have to wrangle with
00:45:08.180 | electronic medical records, for example, can sometimes be like a big source of friction
00:45:11.780 | that really makes things less sustainable. Being explicit about the patient-per-hour load
00:45:17.860 | and seeing what actually is a reasonable number there, as opposed to just like,
00:45:21.380 | let's push people as far as they can physically go. So there's a lot of things that could be done
00:45:25.380 | in healthcare to make these jobs more sustainable, but they typically aren't the type of things I talk
00:45:30.180 | about, which are more keyed to a more highly autonomous, knowledge-work-type role.
00:45:34.500 | All right, what we got?
00:45:37.220 | - So we have a bonus question from Bill that we're going to dedicate the theme music to.
00:45:44.740 | - Is that our excuse to still play the Slow Productivity theme music?
00:45:47.700 | - Yep.
00:45:48.500 | - All right. Let me show you, by the way, Bill sent me, not to encourage this behavior, but I kind of do.
00:45:53.940 | He sent me a first edition of The Good Shepherd, a book I praised on this show as what I think is
00:45:59.540 | one of the very first techno-thrillers. It takes place on the deck of a destroyer in World War II.
00:46:05.300 | And it's written in this sort of tight, I don't know if it's third person or first person, let me see.
00:46:10.820 | But it's tight perspective. All right. I think it's done in tight third person.
00:46:17.700 | So by tight perspective, I mean, it's third person, but it follows the captain. So the perspective never
00:46:24.100 | leaves where the captain is, what the captain sees. And it just follows them through this
00:46:29.540 | very stressful 24 hours on the destroyer. And it's written in what feels like a real-time
00:46:33.620 | type format. It just unfolds linearly, what's happening, in tight third-person perspective.
00:46:38.500 | So you're just from the perspective of a single person and it's impressionistic, like trying to
00:46:42.660 | build up what it's really like, but also tons of technical details of World War II-era anti-submarine
00:46:48.340 | operations on a destroyer. So a lot of technical details, which are presented, like in a good techno
00:46:53.140 | thriller, without much explanation. They just talk about the stuff like they would be
00:46:57.060 | talking about it, even though you, as the reader, don't understand what all this stuff means,
00:47:00.180 | like a good techno thriller. I just think it's a really cool, interesting book. It's from,
00:47:03.540 | I'm going to guess 1955. Let me see. I mean, it's post-war, but not super post-war.
00:47:10.740 | Why can't I find it on here? It's not on the copyright page. What if it was like 2019? Yeah, 1955.
00:47:21.060 | You called it. There we go. So thank you, Bill. You've earned yourself,
00:47:25.780 | whether you asked for it or not, the Slow Productivity Corner theme music.
00:47:29.700 | All right. What's this question? Can a Kanban system work across all departments in an
00:47:42.500 | organization without being overly complex? So for those who don't know, I mean, we talk about it a lot on this
00:47:48.100 | show, but a Kanban-style system is where you have columns and you have cards in the columns.
00:47:53.620 | So in Kanban, typically you have a waiting-to-be-done, a working-on, and a completed column.
00:48:00.740 | And if you're in a team, you might have a working on column for each team member.
00:48:06.020 | So you can clearly see who's working on what and how much they're working on. Kanban has clear
00:48:12.100 | limits, called WIP or work-in-progress limits, on how many cards can be in anyone's column. So it's
00:48:16.900 | great for workload management. What I also like about Kanban systems is that stuff that needs to be done is not all
00:48:24.180 | spread out on people's plates, but exists by default in a generic, team-level waiting-to-be-done column. And the
00:48:30.020 | only things you're responsible for are the things that are in your column. This is important because
00:48:34.820 | it's the things you're working on that generate administrative overhead. So sometimes people just say,
00:48:39.380 | hey, it's just convenient. If stuff comes in, I don't know who, let's just spread it out. You
00:48:44.260 | don't have to work on it all at once. But like, you know, hey, can you handle this? You can put it on
00:48:49.060 | your list. The problem with that approach is that once something is in your hands, it can generate
00:48:54.100 | administrative overhead you have to deal with: emails, meetings, and cognitive cycles. So by keeping things
00:48:59.060 | by default out of any individual's hands, it can't generate administrative overhead. There's no one that
00:49:04.820 | you can email about it. It doesn't belong to anyone yet. There's no meetings to have about it because
00:49:08.260 | it's not being worked on yet. And so each individual is not only just working on a reasonable number of
00:49:12.900 | things at the same time, they're dealing with a reasonable amount of overhead at the same time.
00:49:16.340 | So I really like those systems. Agile methodology systems like Scrum use Kanban style boards. This
00:49:23.460 | is why it's a little bit confusing. They also have boards with columns and cards representing things
00:49:28.100 | that have to be done. There's more of a variety in what those boards are, and different collections
00:49:33.460 | of rules and terminology that surround them. So agile methodologies and Kanban have similar metaphors
00:49:41.540 | for dealing with work, but they differ in the details. Can these apply to a lot of different
00:49:46.900 | type of work? Yes. Can they apply such that every team in a big organization uses things like this? Yes.
00:49:53.460 | Here are the two caveats. A, you want these to exist at the team scale. So six people? Sure,
00:50:02.180 | this works fine. 60 people? You can't have one big board for that. It's going to be too many things and too
00:50:07.540 | many people. You can't easily coordinate with all the people. So usually these systems have
00:50:12.980 | a very efficient approach to coordination. Like let's all just like stand up and talk to each other
00:50:17.380 | for 10 minutes. Like, how's your card doing? What else do you need? What should we do next?
00:50:20.820 | So they need to exist at the team level, not at larger scales. Each team should have their own board.
00:50:26.100 | And B, the key thing is to resist over-complication. I think the thing that bogged down these approaches in software dev,
00:50:32.580 | where they really got big, is that we nerded out too much on them. Software types,
00:50:37.780 | we just nerded out too much. And we began to obsess about the rules. There's all these rules
00:50:42.580 | and sub-rules, and it became about the rules themselves because, you know, I'm a computer
00:50:46.900 | scientist, so I can use the first person plural here: we love complicated rules. So we want our dev
00:50:52.900 | system not just to be, hey, here's a place to keep tasks and see who's working on what. We want to be
00:50:58.260 | rolling 2d10 to see if my attack number is above your hit point level, and the
00:51:05.780 | goblin got killed by the wizard. We want to have all these rules and all these complexities.
00:51:10.740 | And it can get pretty absurd, like agile in a software development environment. People have
00:51:14.660 | Scrum masters and secondary Scrum masters and dungeon master screens. And I don't know all of it,
00:51:20.660 | it just becomes super complicated. And everyone gets obsessed with doing it just right because we're
00:51:24.180 | all like slightly antisocial in these circles. If you're adopting these ideas outside of software,
00:51:29.060 | don't overburden it with rules. What matters is we have a centralized place to store what needs to be
00:51:34.500 | done, so that, by default, these things do not exist on individuals' plates.
00:51:39.060 | We have clarity about who's working on what, we have constraints on how much anyone is working on,
00:51:43.220 | and we have a clear way to check in with everyone about what they're working on, what they need,
00:51:46.740 | and when they're done, what they should work on next. You do those things, that is good. If you get into,
00:51:54.740 | you know, I'm going to read some complicated Scrum manual and have all the different roles and do
00:51:58.260 | all the different things, like the story requires this and that, it gets over the top. You don't need that.
00:52:02.660 | My books A World Without Email and Slow Productivity both talk about this. A World Without Email gives a
00:52:08.980 | particular case study of a healthcare group that uses this, which I think is a good example of a
00:52:13.700 | Kanban-style system outside of straight-up software dev. And I go into a lot more detail in Slow
00:52:18.420 | Productivity as well about the key ideas of these systems that matter.
00:52:22.500 | So yes, I do think these can exist across large organizations if they're integrated properly.
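To make the mechanics above concrete, here's a minimal sketch of a Kanban-style board in Python: a shared team-level intake queue plus per-person columns with WIP limits. This is an editorial illustration only; the class and method names are invented for this example and aren't from any specific tool or from the books mentioned.

```python
# Minimal Kanban-style board: a shared team "waiting to be done" queue
# plus per-person "working on" columns enforcing WIP limits.
# Illustrative sketch; all names are invented for this example.

class KanbanBoard:
    def __init__(self, team, wip_limit=3):
        self.queue = []                       # team-level intake: owned by no one
        self.working = {m: [] for m in team}  # one "working on" column per member
        self.done = []
        self.wip_limit = wip_limit

    def add(self, card):
        # New work lands in the shared queue. Until someone pulls it,
        # it belongs to nobody, so it generates no emails or meetings.
        self.queue.append(card)

    def pull(self, member):
        # A member pulls the next card only if they're under their WIP limit.
        if len(self.working[member]) >= self.wip_limit:
            raise ValueError(f"{member} is at the WIP limit; finish something first")
        if not self.queue:
            return None
        card = self.queue.pop(0)
        self.working[member].append(card)
        return card

    def finish(self, member, card):
        self.working[member].remove(card)
        self.done.append(card)

board = KanbanBoard(team=["ana", "ben"], wip_limit=2)
board.add("draft Q3 report")
board.add("review vendor contract")
card = board.pull("ana")    # the card lands on ana's plate only now
board.finish("ana", card)   # and comes off it, freeing WIP capacity
```

The design choice doing the work here is that `add` never assigns anything: work waits in the shared queue, and the WIP check in `pull` is what keeps any one person's administrative overhead bounded.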
00:52:28.580 | All right. Do we, do we play it twice if it's a bonus question?
00:52:33.220 | Yes. All right. Let's hear it.
00:52:42.020 | All right. Do we have a call this week?
00:52:44.900 | We do. All right. Let's hear it.
00:52:46.260 | Hey, Cal and Jesse. It's Derek from the case study in episode 340. Thank you very much for the advice.
00:52:53.380 | It was really validating hearing your thoughts. As you'll recall, I have two Trello boards right now,
00:52:59.220 | one for admin and one for grant application processing. I've been doing a lot of deep diving
00:53:04.660 | into Slow Productivity, A World Without Email, and the podcast on what else I can do to help keep my
00:53:09.540 | work sustainable. And to this end, I figured out how to create a task board within Microsoft Teams
00:53:14.740 | that I have shared with my coworkers. My vision is this will serve as one of those two status lists
00:53:20.660 | that you've written and spoken about. Right now, my columns are Queue, Active, Backburner and Done.
00:53:28.100 | My question is what granularity of obligations should live on this board? Do I put grant related
00:53:34.180 | activities in the queue like draft financial agreement for X? Should administrative tasks go in here too?
00:53:41.300 | Or just things that meet your definition of what a project is from Slow Productivity, which is any work-related initiative
00:53:46.980 | that cannot be completed in a single session? Lastly, how does this board interact with
00:53:51.460 | the existing ones that I have for admin and application processing? I'm really excited to
00:53:55.940 | take this for a spin and report back. I would just really appreciate clarity about what types of things
00:54:00.020 | go in such a task board, especially since this is shared with my team. Thank you very much.
00:54:04.500 | Usually I don't put projects on task boards. For the granularity of what's on a task board, it doesn't
00:54:13.380 | have to be a David Allen-style next action, but something you could work on in a single session
00:54:18.500 | is typically the way I like to think about that. So when there are projects you're working on,
00:54:23.140 | they can exist in your larger scale plans. And then you can decide on the smaller scale plans,
00:54:29.940 | what progress you want to make on that project that week or not. Whether or not that interacts
00:54:35.700 | with your task board, it depends, right? So sometimes if it's a project, like I'm writing
00:54:40.660 | a grant application and it's on your quarterly plan, when you make your weekly plan, what that really
00:54:46.180 | means is I want to like block off 10 hours of writing this week and it'll be on my calendar.
00:54:52.100 | And when I get to those days, I'll work on the writing, then I'll be making progress on it.
00:54:55.140 | There's not really a task you need to put on a task list somewhere. I mean, you could put "write 10
00:54:59.620 | hours" and at the end of the week take that off your task board, but that seems a little bit
00:55:04.100 | over the top or superfluous, right? On the other hand, a project might be kind of complicated,
00:55:10.820 | like that it generates different types of tasks. So maybe it's organizing a conference and you're like,
00:55:15.860 | okay, this week I need to work on this. There's really like six or seven different things I need to
00:55:20.340 | get done for this project this week that are all sort of tasky. There, I would put them on my task
00:55:25.620 | list, perhaps. What I might do in that situation is create a temporary column for that project and
00:55:29.860 | then have those tasks under it. Or, if I have a working-on-this-week column, I might label certain tasks
00:55:36.340 | related to this project with the project name, just in caps, and then have the task under
00:55:41.220 | it. And there, when I'm working off my task list, I sort of see those there. Oftentimes though, if it's
00:55:46.020 | something that I know I want to make progress on, I might have put aside time for working on that
00:55:49.940 | project. And so, you know, I'll know when I get to that time, oh, the details of what I should do
00:55:54.420 | right now are on my task list. So I really just think about the interaction between those task boards
00:55:58.660 | and projects about whether I need help knowing or remembering what about that project I need to work
00:56:05.380 | on this week. And if the answer is yes, you can put them as tasks on that board. And if the answer is no,
00:56:10.100 | like it's just writing, it doesn't have to interact with your task board. But I would keep the cards on the task
00:56:14.740 | board at the granularity of things you can do in a single session. It's why you need other stuff in
00:56:20.900 | your practice other than just a task board, right? That's why you need your "this is what I'm
00:56:25.140 | working on this quarter, and its deadlines, and my strategy for getting this done" document. You need somewhere
00:56:28.900 | for thinking like: in March, we really have to get out in front of this grant application,
00:56:34.180 | but not until late March should we really start ramping up the work on the website overhaul,
00:56:38.900 | so let's wait till then to do it. You need that type of thinking in some sort of quarterly
00:56:42.420 | plan or semester plan document. And then how that translates into actual work, again, just depends
00:56:48.980 | on do I need help remembering what it is specifically I need to do to work on this each week.
00:56:54.900 | So, you know, a lot of my project work just exists as projects. Like my task board is, it's more like
00:57:01.380 | one-off specific things, I would say, if I really looked at it. It's fine, by the way, I like that Derek had
00:57:08.660 | specific, he had some specific task boards for recurring obligations in his work that come up all the
00:57:16.100 | time. And like the application process, you know, this or that, like, okay, I get this stuff all the time.
00:57:20.980 | And like, here's my dedicated board. I kind of have a system going with that. I think that's good. That's fine.
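As a rough sketch of the granularity rule in this answer: cards on the board are single-session tasks, multi-session projects live in the larger-scale plans, and a project only touches the board when it spawns concrete tasks, optionally labeled with the project name. The names here are invented for illustration, not a prescription for Trello or Teams.

```python
# Sketch of the granularity rule: the board holds single-session tasks;
# multi-session projects stay in quarterly/weekly plans and only spawn
# board cards when they produce concrete tasks. Names are illustrative.
# Requires Python 3.10+ for the "str | None" annotation.

from dataclasses import dataclass, field

@dataclass
class Card:
    title: str
    project: str | None = None  # optional ALL-CAPS project label

@dataclass
class System:
    quarterly_plan: list[str] = field(default_factory=list)  # projects, strategy
    board: list[Card] = field(default_factory=list)          # single-session cards

    def capture(self, title, single_session, project=None):
        if single_session:
            self.board.append(Card(title, project))
        else:
            # Projects don't get cards; they live in the plan until a
            # weekly plan turns them into blocked time or real tasks.
            self.quarterly_plan.append(title)

s = System()
s.capture("Write grant application", single_session=False)
s.capture("Draft financial agreement for X", single_session=True, project="GRANT")
```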
00:57:25.140 | All right. We also have a case study here. This is where people write in
00:57:29.700 | to talk about ways they've put the advice we talk about on the show into action in their own lives, so we can see what it looks like out in the wild.
00:57:37.060 | Today's case study comes from Jake. Jake says, "The other day, I was beginning to explain to my wife
00:57:44.500 | the concepts regarding career capital and trading it in for more control of one's schedule versus
00:57:49.300 | trading it in for more responsibility and increased pay. While doing so, I realized that she has done
00:57:55.380 | exactly that with her career. She is a pediatric dentist who has worked at an office for about 10 years.
00:58:01.780 | While doing so, she has focused on doing great dental work and interacting with the patients in a way that
00:58:05.940 | leaves them happy with the visits, making her the company's top earner and most senior doctor.
00:58:12.100 | We recently had two boys, now two and a half and four years old. One thing that was really important
00:58:17.940 | to her was that she was able to pick up our boys after school every day. When her older son started
00:58:22.740 | school, she told her work that she was unable to work past 2:00 PM because she needed to pick up our son.
00:58:28.020 | Being the top earner, they created a new schedule for her to work seven to two,
00:58:32.420 | doing op only. She not only gets to pick up her son every single day from school, but because she
00:58:37.860 | is op only, she actually makes significantly more money. I have read all of Cal's materials,
00:58:43.540 | so it goes without saying, I also have tremendous work flexibility and I'm able to drop him off in the
00:58:47.620 | mornings every day. Us being able to drop off and pick up our son every day as working professionals is
00:58:52.980 | incredible. And do you know what he means by op? I was wondering that when I first read it.
00:58:57.860 | She is op only, O-P, capital O-P.
00:59:00.820 | Is that operation? That could make sense. Let's see, she's a pediatric dentist. Yeah,
00:59:07.860 | maybe she's only doing operations. Yeah, it's possible. Well, regardless, I appreciate the case
00:59:16.340 | study. What I like about it is that this gives you a realistic view of lifestyle-centric planning in the
00:59:24.100 | deep life. So when we think about living a deeper life, especially in a modern distracted world,
00:59:29.140 | again, we like to connect with the idea of the grand goal. So the traditional grand goal thinking
00:59:35.860 | would say, if you're in the situation where you're like, I'm unhappy with my work because I really want
00:59:41.780 | to be there to pick my boys up from school. And I kind of worked these longer hours. The grand goal
00:59:46.340 | thinking would say, you need to make a radical change. You need to like start your own, you know,
00:59:51.220 | store, open up a store in town where you can control the hours or become like a full-time novelist or
00:59:57.700 | some sort of grand change. And we need to make our life completely different. But what did Jake's
01:00:03.540 | wife do instead? She said, I have a lot of career capital. I'm very good at what I do. People don't want
01:00:09.380 | me to go. And so I'm going to leverage that capital and say, here's what I need,
01:00:14.500 | here's what I want: to create a situation in which I am done at two. And because she was very good at
01:00:20.580 | what she did, they said, okay, we'll make this work. You can start early and you'll just do this
01:00:23.940 | and not that type of work. And now she's done at two. And because she was working backwards from not a
01:00:30.340 | vague dissatisfaction with being busy, which again, would lead to the radical change,
01:00:33.780 | but with specificity about what would my ideal lifestyle look like. And a big part of that vision
01:00:39.380 | was very concrete. I'm there to pick up my kids from school. She's like, well, how could I get towards that
01:00:44.500 | vision? Oh, I see. I don't have to radically change my job. I could change the configuration
01:00:48.980 | of my job. And I'll probably get away with that because I'm pretty good. So that's like classic
01:00:53.540 | applying career capital theory in lifestyle centric planning. These are the type of things that can make
01:00:59.380 | like a really intentional life. The intentional life doesn't necessarily mean, uh-oh, I guess I need
01:01:04.180 | to quit this job and we're going to sail around the world with our kids on a sailing boat, right? It doesn't
01:01:08.740 | have to be that type of dramatic, uh, radical change. It just has to be figuring out what
01:01:14.980 | attributes you want in an ideal lifestyle and then working with what you have. What are my opportunities?
01:01:19.140 | What are my obstacles? How do I make that? How do I make that actually work? So there's a lot more of
01:01:24.180 | that possible than people realize. Once you understand the game, it's not this vague,
01:01:29.460 | radical change. It's more like, I'm trying to reconfigure and shift towards the ideal
01:01:34.900 | lifestyle, knowing that rare and valuable skills are what's going to help you actually
01:01:38.820 | get there. So I think it's a cool story. All right. We got a final segment coming up, another tech corner.
01:01:46.180 | If there's one thing you haven't heard from me enough, it's overly technical jargon. But first,
01:01:51.460 | we're going to hear from another one of our sponsors. We'll talk about, in particular, our friends at
01:01:58.020 | Shopify. If you sell things, you need to use Shopify. I would say most of the people I know who have some
01:02:04.340 | sort of business where they sell directly to consumers. Maybe like I know writers who do this,
01:02:10.020 | like with online stores, with merchandise, et cetera, uh, they use Shopify. And for good reason,
01:02:16.180 | like Shopify is the tool. If you want to be selling things, no one does selling better than Shopify.
01:02:23.220 | It's home of the number one checkout on the planet. They have their Shop Pay that boosts conversions up to
01:02:28.180 | 50%, meaning far fewer carts go abandoned and you get more sales going. Shopify works as your commerce
01:02:35.540 | platform, whether your customers are strolling or scrolling on the web, in your store, in their feed,
01:02:41.700 | and everywhere in between. That's just the way I talk now, Jesse. I rhyme because that's just,
01:02:46.660 | that's just smooth. Um, it just works. You have a point of service kiosk, like thing in your store,
01:02:52.820 | point of sale, whatever it's called. You have an e-commerce shop. You have whatever you're doing in
01:02:56.580 | between. Just don't think farther than Shopify. It really does get it done. So you can upgrade your
01:03:03.780 | business and get the same checkout that basically everyone I know who sells things online uses. Sign up
01:03:09.700 | for your $1 per month trial period at shopify.com/deep, all lowercase. You gotta type that in all lowercase.
01:03:16.020 | Go to shopify.com/deep to upgrade your selling today, shopify.com/deep.
01:03:21.860 | I also want to talk about our friends at My Body Tutor. I've known Adam Gilbert, My Body Tutor's founder,
01:03:28.100 | for many years. He used to be the fitness advice columnist in the very early configuration of my blog.
01:03:33.700 | More recently, he's been working on My Body Tutor. I say more recently, but he's been doing this forever.
01:03:38.980 | This company, I was talking to him recently. This company has been thriving for a long time.
01:03:43.300 | It's a 100% online coaching program that solves the biggest problem in health and fitness,
01:03:48.100 | which is lack of consistency. It's not hard to figure out what you should eat,
01:03:52.980 | what type of exercise you should do. What's hard is actually doing that.
01:03:55.700 | My Body Tutor hooks you up with an online coach who helps you figure out what you should
01:04:03.700 | be doing with your eating and your exercise. And then you check in with this coach every day.
01:04:07.700 | That accountability leads to consistency. It also leads to flexibility, because when you have
01:04:11.700 | something coming up, like a trip or the holidays, your coach, who knows you, can say, here is how
01:04:15.620 | we're modifying what you're doing for the days ahead. So it solves the problem. But because it's 100%
01:04:24.420 | online, it's not the same expense as having like a personal trainer meeting you at the gym or a nutritionist
01:04:28.980 | who's coming to your house and helping you do your food. So it's way more affordable than what it used
01:04:35.220 | to require to have that type of one-on-one accountability and consistency. So if you want to
01:04:41.140 | get healthier, don't look farther than My Body Tutor. Here's the good news. Adam will give deep
01:04:46.740 | questions listeners $50 off their first month. All you have to do is mention this podcast when you join.
01:04:52.020 | Go to MyBodyTutor.com. That's T-U-T-O-R, MyBodyTutor.com, and mention deep questions when you join.
01:04:59.540 | All right, Jesse, let's do our final segment. So I'm going to do a quick tech corner. I want to
01:05:04.660 | follow up on our recent tech corner. So I had talked, I believe it was on the last episode about
01:05:11.060 | the Ezra Klein podcast episode that was generating a lot of attention. It was an episode on how AGI,
01:05:19.940 | Artificial General Intelligence, was closer than people think. So he had on someone who knew a lot
01:05:25.380 | about it, who was saying, yeah, we will probably "reach AGI" at some point during the current
01:05:30.660 | presidential administration. And this generated a lot of energy and attention. And I came on the
01:05:37.700 | show and said, we have to be very careful about what AGI actually means. I think it gets misinterpreted
01:05:44.820 | and it's not unimportant, but it's not as scary as you think, but it gets misinterpreted with other
01:05:50.820 | types of things that people fear with AI. So this happened. And I wanted to bring up a particular
01:05:57.700 | example of this so that we could maybe be a little bit more reassuring when we're thinking about AI in
01:06:04.020 | our current moment. So up on the screen here for people who are watching, instead of just listening,
01:06:08.740 | is a clip from the Breaking Points TV show. It's with Sagar and Crystal. They did a segment on
01:06:17.780 | this article. And it was a very good segment. This is Sagar and Crystal, who's up here. But what caught
01:06:21.540 | my attention is how their YouTube guy labeled this video. So it's not them, but it's how their YouTube
01:06:28.340 | guy labeled the video. I actually met them when I wrote that New Yorker piece a few years ago.
01:06:33.140 | I went and hung out at their studio. And I remember Sagar telling me about their YouTube titles. And they
01:06:37.460 | have a person who does it. And they have caps and blah, blah, blah. They use caps locks and this or that.
01:06:42.980 | And they sort of had like someone who does this. Anyways, let me read you the title of the YouTube
01:06:47.940 | clip from this episode that was about that Ezra Klein interview. The title was Former AI Insider:
01:06:56.260 | AI Super Intelligence Coming Under Trump. All right. So here's what I want to emphasize.
01:07:02.500 | This is the type of conflating of issues that we need in our current moment to be very careful about.
01:07:09.700 | Superintelligence is a very different thing than AGI. All right. That Ezra Klein discussion had nothing
01:07:17.060 | to do with superintelligence. And certainly the person he was talking to was not claiming that
01:07:21.620 | superintelligence was coming under Trump. He was talking about AGI. So I want to just, again,
01:07:27.220 | briefly emphasize the differences and why the differences matter, right? So AGI, as we discussed
01:07:35.380 | last week, artificial general intelligence is a subjective threshold: the point at which we just kind
01:07:42.660 | of agree, more or less, that the types of things these AI systems do right now, the things we
01:07:49.300 | see them doing, generating text and conversations, searching data, photo
01:07:55.060 | generation, whatever, are being done at a level
01:08:00.260 | that we roughly agree is comparable to or better than
01:08:05.700 | the average human who does them. It's not a binary threshold where you cross
01:08:12.340 | it and then everything is different, because these systems are already doing things very well.
01:08:17.940 | If you look at the text they generate or the photos they generate, you're like, wow, that's as good as a person, or
01:08:21.620 | close to it. AGI is just where we agree, yeah, this is all as good as a person. And we're not that far from that right now.
01:08:28.260 | That's what that official was saying. So that is what AGI is. In general, it's an arbitrary threshold.
01:08:35.300 | Why it's important is just from like a general like economic and security disruption standpoint,
01:08:41.380 | the better these models get at the things they're already doing now, like the more we have to worry
01:08:46.660 | about various economic and security disruptions. And so certainly as they get better, we're going to
01:08:52.500 | have to care about that more, but there's not like something that happens post AGI that like, oh,
01:08:58.820 | we've crossed some Rubicon and now our relationship to technology is different because these systems
01:09:03.220 | already do things close to human level, right? So, I mean, we're not going to notice something different
01:09:08.420 | immediately when the systems that can already do pretty well on, say, a certain type of math exam
01:09:13.780 | can now do as well as a good human test taker. These are not necessarily
01:09:18.820 | major epsilons of change. So they matter, but they're not scary. Superintelligence is talking about something
01:09:23.220 | very different. So it's over in this different sort of tree here. If we're looking at the biology of AI,
01:09:29.300 | it's on this different tree where you first get some notion of artificial consciousness, where you have
01:09:36.100 | a system that is, in a sense, alive: it has autonomy and a sense of itself and can take autonomous actions.
01:09:41.780 | We talked about that in the last episode, and superintelligence is a step beyond that.
01:09:46.340 | It's where a system that is autonomous with some notion of self and consciousness
01:09:50.580 | begins creating ever better versions of itself. And the idea there is that this can somehow
01:09:56.500 | recursively speed up, so that it creates a better version of itself, which is now really smart.
01:10:02.340 | So it can create a better version of itself even faster. And you get some sort of exponential speed
01:10:06.260 | up until you have something that's not only like conscious and self-aware and autonomous,
01:10:09.940 | but is exponentially smarter than humans, and then it's game over, because it can outsmart us in
01:10:14.580 | all ways, because it's just much smarter than us. That's superintelligence. That's sci-fi stuff.
01:10:19.060 | That's really different than the moment when the research reports generated by AI,
01:10:27.140 | which right now are pretty good but kind of sloppy in some areas, become less sloppy in those areas.
01:10:31.300 | That's what AGI is. Hey, you know what?
01:10:34.580 | This memo the AI produced, which right now is like, okay,
01:10:40.820 | but there's a few things in here I'd be embarrassed about, is now good enough that
01:10:43.540 | I could use it without editing it. That's important. That's very different than superintelligence.
01:10:48.900 | And so what I'm trying to emphasize is we have to draw a clear line between this tree of discussion
01:10:57.380 | around like artificial intelligence coming alive and the existential implications. That is very different
01:11:04.820 | than these discussions that are happening like on Ezra's show about what happens when capabilities in
01:11:09.860 | certain things get comparable to people and its economic impacts and security impacts. It's a very
01:11:16.260 | different thing. Crossing AGI, we're still talking about using ChatGPT, doing the types of things we're
01:11:22.100 | doing now. It's just doing it X percent better. Those are two completely different
01:11:26.820 | things. I made that point last time. I'm trying to clarify it this time, because this is the type of
01:11:31.300 | thing I don't want people thinking. You know, when I talk to people about that article,
01:11:34.900 | their sense was that a Rubicon was being crossed: if we get to AGI, now systems will be able to do
01:11:41.380 | X, and now we have a new thing in our world. That's not the case at all. They won't do anything new
01:11:45.540 | that they can't do now. They'll just be doing it X percent better. As for superintelligence, I'm still
01:11:51.940 | of the school of thought, by the way, that we have no reason to believe that's even computationally
01:11:56.340 | possible. We're just making huge assumptions that, A, our level of intelligence can create a more
01:12:05.460 | intelligent version; B, that this is recursively true, that there are always new levels of
01:12:11.860 | intelligence that are possible and computable; and C, that the speed at which these intelligences
01:12:18.500 | can be created somehow also speeds up, so that going from intelligence level 10 to 11 is somehow going to
01:12:23.780 | be faster than going from intelligence level one to two. These are all just
01:12:28.260 | massive assumptions that Nick Bostrom made in a philosophy seminar at Oxford, right? It's not
01:12:34.420 | anything we actually have any reason to believe is true. It's also just as plausible that,
01:12:39.140 | when it comes to like general self-aware intelligence, like evolution got us about as good as it can get.
01:12:44.180 | This is it. There's not some higher plane of really complicated, you know,
01:12:50.580 | understanding out there that computers can achieve but humans can't reach.
01:12:54.420 | We just don't know. There's a lot of assumptions there.
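To make that last speed-up assumption concrete, here's a toy model. This is an editorial sketch, not anything from the episode or a faithful rendering of Bostrom's actual argument: let $t_n$ be the time the level-$n$ system needs to build level $n+1$, and suppose, as the takeoff story requires, that these times shrink geometrically.

```latex
% Toy "intelligence explosion" model; purely illustrative (needs amsmath).
% t_n = time for the level-n system to build level n+1.
% The takeoff assumption is t_n = t_0 * r^n for some r < 1.
\[
  \sum_{n=0}^{\infty} t_0\, r^{n} =
  \begin{cases}
    \dfrac{t_0}{1-r} < \infty, & r < 1 \quad \text{(all levels reached in bounded time: takeoff)}\\[1ex]
    \text{diverges}, & r \ge 1 \quad \text{(each jump as hard or harder: no explosion)}
  \end{cases}
\]
```

Nothing in the argument forces $r < 1$; that each successive jump gets easier is exactly the unproven assumption being pointed at here.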
01:12:59.460 | All right. So there we go. That's my PSA this week, Jesse, a continuation of last week. Superintelligence and artificial consciousness
01:13:04.500 | are different concepts than AGI. I don't know if that makes people feel better or worse. I think
01:13:08.580 | it should make you feel better though. AGI is an issue of economic and security
01:13:12.980 | disruptions, and the threshold itself is arbitrary. It is not like the thing is aware now and it wasn't
01:13:20.580 | yesterday and now we've crossed a line. It is not that. It is an arbitrary, subjective threshold
01:13:25.460 | in how we evaluate the things that these systems are doing, the types of things they're already doing:
01:13:29.620 | when they get sufficiently better, we sort of say that we passed that threshold. It's a big deal,
01:13:34.020 | but it's not a big deal in a sci-fi movie way. It's a big deal in a "the powered loom was
01:13:39.700 | bad for textile workers" type of way. So hopefully that makes sense. All right. Well, that's all the time
01:13:47.700 | we have for today. We'll be back next week with another episode. And until then, as always, stay deep.
01:13:53.300 | If you liked today's episode, you might also like episode 341 titled Drowning, Treading, Swimming,
01:14:00.740 | which takes a closer look at how to avoid the worst impacts of overload and create a work and personal
01:14:09.220 | life that's going to be much more sustainable. I think it complements well what we talked about today.
01:14:13.380 | Check it out. I think you'll like it. One of the major themes I talk about here is how to tame overload
01:14:18.420 | in your life and work, the type that can be supercharged by modern technology to the place where you
01:14:24.420 | really have no space left in your life to cultivate more depth.