
The Minimal Productivity System That Could Reinvent Your Life | Cal Newport


Chapters

0:00 A Minimal Protocol for Taking Control of Your Life
29:28 Is “finding purpose” and “following your passion” the same thing?
34:25 How can I capture key takeaways from podcasts if I’m always on the move?
36:26 Can you elaborate on project work with your PhD students?
38:43 How can I deal with my federal job with drastic priority changes due to political party shifts?
45:59 Is it possible for some managers to avoid pseudo-productivity?
49:08 Writing a book as a side hustle
56:22 An athletic trainer makes a career transition
64:30 A.G.I. versus SkyNet


00:00:00.000 | So one of the conflicts we've confronted here is the one between having too little productivity
00:00:06.140 | and too much. We know the problem with too little productivity in your life. This could be
00:00:11.640 | disorganization and stress, scrambling, job insecurity. People stop counting on you when
00:00:16.880 | it comes to things at work or in your personal life, and you feel like you're not making progress
00:00:21.140 | on any of these things that are non-urgent but important. As Emerson said, the crime which
00:00:26.420 | bankrupts men and nations is that of turning aside from one's main purpose to serve a job
00:00:30.340 | here and there. But we also know the problem with having too much productivity in your life.
00:00:36.320 | Your life can become re-centered on optimization itself as you fill more and more of your time
00:00:41.560 | with execution for execution's sake. You can lose a sense of wonder and appreciation for anything
00:00:46.500 | other than just mechanistic accomplishment. Here's Anne Helen Petersen summarizing some of these fears.
00:00:53.900 | This is the dystopian reality of productivity culture. Its mandate is never, you figured out
00:00:58.700 | how to do my tasks more efficiently, so you've got to spend less time working. It is always instead,
00:01:02.760 | you figured out how to do your tasks more efficiently, so now you must do more tasks.
00:01:07.480 | So this conflict got me thinking recently that there's a key question lurking underneath all this
00:01:12.660 | conflict that we really haven't talked about enough, and it's the following. What is the
00:01:18.600 | minimally viable productivity system? That is, what is the minimal set of rules and tools that will allow
00:01:27.560 | you to escape the problems of having too little productivity, but not jump all the way into becoming
00:01:34.260 | a task-juggling superhero? Just enough to find some breathing room, but not enough for your life to become
00:01:40.600 | all about execution. I think this would be useful to figure out because it would provide us with a common
00:01:46.680 | starting place where everyone should begin from, a sort of baseline, minimally viable productivity
00:01:51.720 | that you start from to get away from the hardship of being too disorganized, the hardship of being
00:01:56.280 | someone that no one can count on, the hardship of not being able to make progress on the things that you
00:01:59.800 | care about, but just enough that you have flexibility in there for you to figure out what fits you and your
00:02:05.720 | goals and your personality when it comes to how much more organized you want to be. That's what I want to
00:02:11.240 | try to do today. I've been thinking about this topic. I'm writing a chapter in my new book on this, so I'm
00:02:15.800 | beginning to bounce ideas around. And what I want to do today is start by first figuring out what are the
00:02:23.560 | actual problems that we would want a minimally viable productivity system to solve. Then two, identify what
00:02:30.840 | are the key components a system that solved those problems would need to feature. And then three, give some ideas
00:02:38.600 | about how you might concretely implement those components. Here I think the key is there is no one right
00:02:44.680 | answer. Once we know the minimal components of a minimally viable productivity system, I just want to talk about
00:02:50.520 | what might you look for in implementing these and how you actually do it can be a choose your own adventure.
00:02:56.680 | All right. So that's our goal. Let's get into it. I want to start by saying, what should the goals be
00:03:02.280 | for a minimally viable productivity system? I don't have this locked in yet, but I've been thinking about
00:03:08.840 | this for a couple of weeks now. And there's three goals that come up most commonly when I think about what
00:03:13.560 | I really want out of this. The first is going to be stress reduction. I need a productivity system to
00:03:21.480 | make my life less stressful. This is sort of table stakes for trying to bring more organization to your
00:03:27.240 | life in the first place. Second, I need this system to increase people's perception of my
00:03:37.720 | reliability, to make me more reliable. What I mean by this is professionally, I don't want to be dropping the ball.
00:03:43.480 | However, this system runs, I don't want to be seen as someone
00:03:47.000 | who may or may not get something done. If you give it to me, I want to be someone you can trust.
00:03:51.480 | I want the same thing to happen in my life outside of work. If a friend or someone in my community needs me to handle
00:03:55.640 | something for them or asks for my help, they trust I'm not going to forget about it. I'm not going to flake.
00:04:00.680 | So whatever the system does, it should give me that sense of reliability. I'm someone you can count on.
00:04:07.000 | Finally, I think whatever else the system does, it has to be able to help me make at least some progress
00:04:13.800 | on things that are important but non-urgent. I think that word some there does a lot of work because
00:04:21.640 | depending on what circumstance or stage of life you're in, how much progress you might be able to
00:04:26.840 | make on an important non-urgent matter could really vary. I mean, if you're like 24 years old and your
00:04:31.240 | job is not that demanding and you have some sort of big idea you want to make progress on,
00:04:35.720 | you could be spending a lot of time working on this. The other extreme, we can go all the way to like
00:04:41.400 | Viktor Frankl in Man's Search for Meaning, his memoir of his time during the Holocaust and then the
00:04:47.320 | psychotherapy field called logotherapy that he created in its wake. And there he was pointing out
00:04:54.040 | just having something, no matter how minor it could seem in a different context, having something that
00:05:00.760 | you are working on that you control because you chose to do it is critical, not just to human flourishing
00:05:06.360 | but human survival. So we've got this whole spectrum of what doing some work could mean. But the key is
00:05:13.160 | a system should make sure that there's always some room for you to make progress on stuff that's
00:05:17.160 | important that no one's asking you to do, so that you have some autonomy and control in your life.
00:05:20.840 | So I want those three things out of any system, but I want the minimal system that gets me those.
00:05:24.840 | That's my goal right now. All right. So then I started thinking, what components would a system
00:05:31.000 | have to have to satisfy those three things? This too is contentious, but after thinking about this
00:05:35.880 | for a few weeks, three things came to mind. Here's my best swing. The first is there's got to be some notion
00:05:45.160 | of task management, right? So there has to be some way of keeping track of the things that you have
00:05:51.640 | agreed to do that is not just in your brain. This is going to be vital for basically all three of the
00:05:58.600 | goals we have for the system. You're not going to drop the ball on things. It's going to reduce the
00:06:02.920 | stress of just trying to remember things in your brain or forgetting about things until the last minute and
00:06:07.000 | having deadline overload, which is like a huge source of acute stress in people's lives. It also works as a way to
00:06:13.640 | keep track of and remember the things you need to do to make progress on non-urgent tasks. That's kind
00:06:17.320 | of at the core of a lot of things. There has to be some basic way to keep track of what you've agreed
00:06:20.840 | to do that doesn't exist just in your brain. The second component any such system must have is some
00:06:26.760 | notion of workload management. So if task management is about keeping track of what you've agreed to do,
00:06:33.160 | workload management is about controlling the volume of things that you agree to.
00:06:36.520 | It's like the gate through which obligations come.
00:06:39.960 | This is a piece of my thinking that I think has developed, especially in the last few years. It became
00:06:46.120 | a big part of my most recent book, Slow Productivity. It's probably a part that had been neglected in some
00:06:51.000 | of my earlier thinking about organization and productivity. But the amount of stuff that you're actively
00:06:55.880 | working on plays a vital role in almost all aspects of your mental health as well as your
00:07:00.200 | effectiveness in both your professional and personal life. So a minimally viable system
00:07:03.720 | that satisfies those goals has to include some intentional thought about
00:07:09.800 | what your workload is and what you are trying to do to keep it within a range that is reasonable.
00:07:16.760 | If you don't have that there, you could do everything else in productivity and still end up executing a heck of
00:07:24.440 | a lot of stuff and being stressed out of your mind and not actually being able to work on the stuff you
00:07:29.800 | want to work on. All right. Final component I think any viable productivity system is going to have to
00:07:35.240 | have is some notion of time control. Some notion of like, I want to have a say in how my time is allocated
00:07:46.360 | on a day-to-day basis. You might say, well, how would you not have a say? I mean, you're always making
00:07:51.720 | decisions, but not really. Actually, the default that people do is much more of a reactive mode.
00:07:56.120 | I am reacting to things that come towards me, maybe an email, maybe a Slack, someone calling me and saying,
00:08:02.600 | hey, where's this thing? I'm reacting to things that people are asking for me. And in between reacting,
00:08:06.680 | I sort of just drift off into a daze of digital distraction and pacification until I get stressed about
00:08:11.800 | something else. And then I react to that stress by doing something else. Most people's days unfold
00:08:16.040 | haphazardly. If you pinned them down that morning and asked, what do you think is going to happen today,
00:08:21.000 | and then you saw what actually happened, there would be very little convergence between the two.
00:08:24.040 | They are being bounced around the sort of time schedule pinball machine,
00:08:29.720 | and they're not the ones hitting the flipper buttons. And so we want to have at least a resistance
00:08:34.680 | to that. It doesn't mean you're going to be able to control your day. Like, oh, I have exact control
00:08:38.760 | over what's going to go on, but you're going to have some intention that's going to be applied,
00:08:42.520 | or at least attempted to apply. I just think you have to have that in any viable productivity system,
00:08:46.760 | at least if you want to hit those three goals I talked about before. There's a lot of other stuff
00:08:50.200 | you might do with productivity, but here we're talking about a minimally viable system.
00:08:54.120 | Those are the components I think you have to have: task management, workload management,
00:08:58.600 | time control, some systems for each. So I want to go through those. Let's go through those three
00:09:05.480 | components. And I'm going to do two things for each. I'm going to give an idea of a
00:09:11.880 | bare bones thing you could do that would put some sort of system there in your life.
00:09:17.240 | And then I'll talk about more like what I do in that component, more of a moderately
00:09:20.920 | advanced option, so you can get a sense of the options. But the key thing I want to emphasize here is these
00:09:25.400 | are just samples of possibilities. So by giving you a bare bones way of implementing each of these ideas
00:09:30.280 | and a sort of moderately advanced way of implementing each of these ideas,
00:09:33.320 | what I'm trying to do is just give you a sense of there's a large landscape of possibilities here.
00:09:37.720 | You can choose your own adventure. You can choose your own adventure based on what the details are
00:09:42.520 | of your particular situation and also of your particular personality, what resonates or not.
00:09:46.520 | I mean, some people are going to want to go way more analog than this because they get a sort of
00:09:51.480 | aesthetic rush out of having something that looks beautiful. Other people really love high-tech
00:09:58.440 | systems. They're like, I really want an AI-powered Zettelkasten bot playing a big role in what I'm
00:10:04.040 | doing. And some people really enjoy that. There's nothing wrong with that either.
00:10:06.600 | So you can customize this. But I want to give a couple examples so that the point I'm making is
00:10:13.080 | there's no one way to do these things. Hey, it's Cal. I wanted to interrupt briefly to say that if
00:10:17.800 | you're enjoying this video, then you need to check out my new book, Slow Productivity: The Lost Art of
00:10:24.680 | Accomplishment Without Burnout. This is like the Bible for most of the ideas we talk about here in these
00:10:32.120 | videos. You can get a free excerpt at calnewport.com/slow. I know you're going to like it. Check it out. Now
00:10:41.080 | let's get back to the video. All right, so let's get specific. So let's go back to task management.
00:10:45.480 | What we need here, right? What are we trying to accomplish is some notion of what David Allen called
00:10:51.560 | a trusted system, a place where the things you need to do end up and you trust yourself to review it
00:10:57.000 | regularly. Those two things are what help prevent you from forgetting things and also reduce the distress
00:11:02.680 | of trying to keep track of things in your mind. As soon as your mind trusts where I wrote this down
00:11:07.240 | is a place I won't lose it and a place I won't forget it, then your mind releases. All right,
00:11:11.960 | that is classic David Allen, and David Allen himself was actually adapting that idea from Dean Acheson.
00:11:17.720 | So we can kind of follow this chain back if you want to go through productivity culture.
00:11:21.720 | All right, what's a bare-bones way of implementing this?
00:11:25.320 | The bare-bones way here is a text file and a calendar, right? You need a calendar. So stuff
00:11:31.800 | that is due on particular days or happens at particular times, just go straight on a calendar.
00:11:36.680 | It's a tool that we all use. It's a remarkably effective technology. It's probably one of the oldest
00:11:41.960 | productivity technologies. The oldest is probably counting tablets, cuneiform tablets
00:11:47.880 | going back to the Sumerian days. But tracking things with time was another one of the early
00:11:54.200 | applications of technology. So have a calendar. And then have a text file where you're just like,
00:11:58.040 | I'm writing down things that I have to do. That's the simplest implementation. When I was in graduate
00:12:02.600 | school, there was a period in which I was implementing this system where I used a calendar
00:12:06.440 | on my computer and I had a tablet, like a legal pad. I remember it very vividly because it was a CSAIL
00:12:12.760 | legal pad. CSAIL was the Computer Science and Artificial Intelligence Laboratory. So that's the,
00:12:18.360 | at the time, this was like the CS department at MIT. There's now a school of computing. It's a whole
00:12:23.320 | separate sort of thing. And it was swag. I don't know where I got this. It was just swag. It was
00:12:27.080 | like a white legal pad and it had the CSAIL logo, the Stata Center up in the corner. And I just kept
00:12:31.560 | a list there of things I needed to do. And I would cross things off as I would do them. And eventually,
00:12:35.880 | the page would have so many things crossed off that I would copy the un-crossed-off things to a new page
00:12:40.760 | that was clean. If you want a more modern or work through example of that very simple way,
00:12:46.520 | look at Ryder Carroll's bullet journal method that does something similar. You write things you have
00:12:50.680 | to do and just copy to a clean page when it gets too messy. But that's the bare bones here. I have a
00:12:54.840 | calendar for things that are time sensitive, and a file or a notepad where I keep everything else.
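To make that bare-bones setup concrete, here is a minimal sketch in Python of the copy-to-a-clean-page move. This is just one illustrative way to do it; the tasks.txt file name and the "x " done-marker are my own assumptions, not anything prescribed in the episode.

    # A minimal sketch of the text-file half of the bare-bones system.
    # Assumption: tasks live one per line in tasks.txt, and a finished
    # task is "crossed off" by prefixing it with "x ".
    from pathlib import Path

    TASKS = Path("tasks.txt")  # hypothetical file name

    def migrate_to_clean_page() -> None:
        """Copy every un-crossed-off task to a fresh page, bullet-journal style."""
        if not TASKS.exists():
            return
        remaining = [line for line in TASKS.read_text().splitlines()
                     if not line.startswith("x ")]
        TASKS.write_text("\n".join(remaining) + "\n")

    if __name__ == "__main__":
        migrate_to_clean_page()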
00:12:58.600 | That works. The minimal viable system, that's enough. If you want to get somewhat more advanced,
00:13:05.480 | you can combine a calendar with a status board. This is, of course, what I've talked about on the show,
00:13:10.360 | what I do. I use Trello. One board for each of the roles in my life. A different column on each
00:13:16.440 | board for different statuses of things I need to do. Each card on each column is a thing I have to do.
00:13:21.080 | So I might have a board for my role as director of undergraduate studies for the computer science
00:13:27.720 | department at Georgetown. And I might have a column in there that is something like major declaration
00:13:34.840 | requests that still need to be processed. And then I have a card for each student who has
00:13:39.960 | written me saying, "Hey, I want to declare a major." And I have kind of the information I have in there.
00:13:44.280 | I have another column in there that says to discuss with the chair at the next meeting I have with the
00:13:49.880 | chair. I have another thing in there that says to discuss with the associate director. And there we
00:13:55.080 | meet once a week and I store cards there with tasks that we're going to discuss at the next
00:13:59.480 | meeting. I have a waiting to hear back from column there that's very important.
00:14:03.480 | Oh, I had a question of this student and now I'm waiting to hear back from them. I put a card there
00:14:07.800 | so I won't forget that explains, "Here's the student. Here's what I asked them. I'm waiting to hear back
00:14:11.560 | this. Here's the next step." So that's what I do, status boards. But as long as you have some way of
00:14:19.080 | putting stuff down where you're not going to forget it and you'll review regularly, you have a minimally
00:14:24.200 | viable task management system.
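To make the status-board idea concrete before moving on, here is a minimal sketch in Python of one board held as plain data. The role, column, and card names are hypothetical examples for illustration, not Cal's actual boards, and any task manager (Trello or otherwise) would stand in for this structure.

    # A minimal sketch of a status board: one board per role, one column per
    # status, one card per thing to do. All names below are made up.
    board = {
        "Director of Undergraduate Studies": {
            "major declarations to process": [
                "Student A: wants to declare the CS major",
            ],
            "discuss with chair at next meeting": [
                "New course proposal",
            ],
            "waiting to hear back": [
                "Student B: asked about prerequisites; follow up if no reply by Friday",
            ],
        },
    }

    def review(board: dict) -> None:
        """Print every card so nothing gets forgotten during a regular review."""
        for role, columns in board.items():
            for status, cards in columns.items():
                for card in cards:
                    print(f"[{role}] ({status}) {card}")

    review(board)

What about workload management? Well, here, look, you need some way of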
00:14:31.720 | estimating your current workload. So what is on my plate? We sometimes use the phrase on this show,
00:14:38.280 | facing the productivity dragon, just facing it directly. This is the full magnitude of what is on my plate,
00:14:43.640 | of things I've agreed to do. You need some way of understanding what your personal maximum effective
00:14:49.160 | workload is. Like this is roughly how much I can do at once. That estimate may be broken down into
00:14:55.480 | different types of work. And so you should be able to compare those two things and have some sense of
00:14:59.240 | when your actual workload is going beyond what you estimate you can comfortably handle.
00:15:05.640 | And then workload management needs some collection of rules and tools that you use to try to
00:15:11.320 | keep those two things balanced. That's what a workload management system needs. Now, how you do that,
00:15:15.880 | there's a million different ways. If we're gonna go with just like minimum simple things you could do
00:15:20.520 | for workload management to satisfy those properties, a couple of simple ideas. And these come from my
00:15:25.320 | book, Slow Productivity. Prescheduling big commitments on your calendar is a simple thing to do. All right,
00:15:31.240 | I'm going to agree to do this big thing. I'm going to go and actually find the time on my calendar
00:15:35.720 | at the point of agreement. I will go at that point and go find time on my calendar to schedule,
00:15:40.760 | to work on this. If it takes 10 hours, I'll find three hours here, four hours here. I will find and
00:15:45.960 | protect that time. So now I'm actually allocating my actual hours to the agreement as opposed to just
00:15:51.880 | agreeing to it and having it abstractly added to my workload. And sometimes it gets hard to do this,
00:15:56.920 | like where am I going to find the time to do it? You get a nice little reality check from having to
00:16:01.800 | pre-schedule time for major commitments, which is if you don't have that time available in the near future,
00:16:07.000 | you can't avoid that reality. Because you're trying to find, hey, I got to schedule the 10
00:16:11.480 | hours for this hypothetical chore. And if I can't find those 10 hours in the next two weeks, I can't
00:16:15.000 | do it in the next two weeks. It gives you feedback on how crowded your schedule actually is.
00:16:20.680 | Or at first you're easily finding time for things, but then as your schedule fills, you have to start
00:16:25.880 | looking out farther into the future to schedule things. And eventually you have to tell people like,
00:16:30.360 | yeah, look, I'm scheduling a month out now. I've filled the next four weeks.
00:16:33.480 | If you're not pre-scheduling time and you're just saying yes to all of those things,
00:16:37.880 | it's still going to take up that much time. You just don't know it yet. You're going to have to
00:16:41.960 | pay that bill as those deadlines get due and the work is going to get done in a frenzy of stress.
00:16:47.080 | It's not going to get done that well. Another very simple thing you can do is have quotas for
00:16:50.760 | a particular type of work. Yes, I do these type of committees, but only one per quarter.
00:16:56.680 | Yes, I'm willing to do peer review, but I do four per semester. Once I hit my quota, I'm done.
00:17:03.480 | Yeah, I do calls. I think it's important in my role as an entrepreneur that I do calls,
00:17:08.040 | hop on calls with young entrepreneurs who want advice, but I can only do one per week.
00:17:11.400 | Quotas allow you to keep things that are important but potentially schedule-strangling. They keep those
00:17:18.760 | things in your life, but in a way that is reasonable. Another simple thing, if you're just doing bare
00:17:23.480 | bones here, would be project counts. You just figure out through experience, maybe through doing that
00:17:29.240 | pre-scheduling for a while, for the major types of projects you do in your job, how
00:17:34.360 | many you can usually handle before things become a little bit too stressful. And then you just have
00:17:39.960 | this very simple system. I only do three at a time. I'm at three. I got to wait till I finish something.
00:17:43.560 | Those are the bare-bones things you can do for workload management.
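To make those bare-bones rules concrete, here is a minimal sketch in Python of a quota check combined with a project-count check. The categories, limits, and counts are made-up examples; the point is only that saying yes becomes a mechanical comparison instead of a gut call.

    # A minimal sketch of quotas plus a project count. The categories, limits,
    # and current counts below are illustrative assumptions, not a prescription.
    QUOTAS = {
        "committees this quarter": 1,
        "peer reviews this semester": 4,
        "advice calls this week": 1,
    }
    MAX_ACTIVE_PROJECTS = 3  # learned from experience, e.g. by pre-scheduling

    def can_say_yes(category: str, done_so_far: int, active_projects: int) -> bool:
        """Agree to new work only if both the quota and the project count have room."""
        under_quota = done_so_far < QUOTAS.get(category, 0)
        under_limit = active_projects < MAX_ACTIVE_PROJECTS
        return under_quota and under_limit

    # Already did one committee this quarter, and three projects are active:
    print(can_say_yes("committees this quarter", 1, 3))  # False: both limits are hit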
00:17:48.040 | If you want to get more advanced, you can do, like I talked about in Slow Productivity,
00:17:52.920 | some sort of individual-scale or team-scale implementation of an agile, Kanban-style work
00:18:00.040 | tracking system, where you separate what you have agreed to work on or your team has agreed to work
00:18:05.880 | on from what is actually being worked on. And you have clear WIPs or work in progress limits for what
00:18:10.520 | is actually being worked on at any one time. If you have a team, you could actually have something like
00:18:15.720 | a Kanban board where there are digital cards. These used to be done with physical note cards on bulletin
00:18:22.760 | boards, but now there's any number of digital products for this, of all the things your team
00:18:27.400 | needs to work on. And those things exist in a sort of to be worked on section of your virtual board.
00:18:33.560 | No one is responsible for those things until they get moved to a column labeled with
00:18:40.040 | someone's particular name. And then I can see like, okay, here's the Cal column. The things under the
00:18:44.920 | column are what I am working on now. These are the only things you can talk to me about. The things over
00:18:49.640 | here, no one's working on yet. You can't email me about these things. Hey, what's going on with such
00:18:54.600 | and such? We talked about it. It's not been assigned yet. I'll talk to you when it is.
00:18:58.200 | Here's the two things I'm working on and you have a clear limit. Like, here's what's reasonable: work on
00:19:04.040 | two things at a time, three things at a time. You can give those things a lot of time. The fraction of
00:19:08.120 | your schedule dedicated to administrative overhead is reduced to a point where you can be very effective
00:19:12.520 | with applying your brain to accomplishment. The throughput with which you finish things goes up.
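As a concrete illustration of that work-in-progress idea, here is a minimal sketch in Python. The backlog items and the limit of two are illustrative assumptions; a real team would put this on a shared board rather than in a script.

    # A minimal sketch of the Kanban-style setup: obligations wait in a backlog
    # and are only pulled into "active" under a hard work-in-progress limit.
    WIP_LIMIT = 2  # illustrative; pick whatever matches your actual capacity

    backlog = ["write quarterly report", "plan course schedule", "review budget"]
    active = ["finish grant proposal"]

    def pull_next():
        """Move one item from backlog to active, never exceeding the WIP limit."""
        if len(active) >= WIP_LIMIT or not backlog:
            return None  # finish something before starting anything new
        item = backlog.pop(0)
        active.append(item)
        return item

    print(pull_next())  # "write quarterly report" becomes active
    print(pull_next())  # None: we are now at the WIP limit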
00:19:17.240 | So you could do this as a team. It really works if you have a sort of daily
00:19:23.400 | stand-up-style status check-in where it's like, who's working on what, what do you need from each
00:19:27.240 | other? Now let's go work. I talked in Slow Productivity about how you can even implement something
00:19:32.600 | like this for yourself, even if your team has no interest in this, where you personally differentiate
00:19:38.040 | among the things you've agreed to do, between the things you're actively working on and the things you're
00:19:41.960 | waiting to work on. And you don't take meetings or do emails or really dedicate any administrative
00:19:46.360 | overhead to the things you're waiting to work on. And you make this whole list transparent.
00:19:49.880 | And when someone who gave you something to do that you're waiting to work on bothers you,
00:19:52.920 | you can point them to that list. You can see where your thing is. And as soon as it moves to my active
00:19:56.360 | status, you'll hear from me. I'll tell you right away. But until then I'll say this nicely, you know,
00:20:01.480 | bug off. Of course you won't say it that way. You'll use all sorts of fancy language, but I'm a professor,
00:20:06.920 | so I don't really know how to interact like a normal human in a business environment, but I'm sure you all
00:20:10.600 | figure that out. Some fancy way you do that. So yeah, you can get as complicated here as you want,
00:20:16.200 | but you at least need some sort of minimal way of saying, how much am I
00:20:22.280 | working on? How much is too much? And what am I doing to try to keep those two things in balance?
00:20:25.560 | You've got to have something there. All right. The final thing is time control.
00:20:30.280 | As I talked about before, our goal here is to have some sort of proactive control or intention
00:20:36.200 | in how your day unfolds. Even if you can't control it beat by beat, have some intention embedded in your
00:20:41.000 | day as opposed to just reacting. So here's an example, just like a bare minimum thing you could do here.
00:20:48.120 | Some sort of morning review. First thing in the morning, maybe I'm keeping my tasks in a legal
00:20:53.080 | pad. Like we talked about, I have a calendar, look at the calendar, look at your legal pad.
00:20:58.360 | What's on my plate. Maybe grab a couple of things off that legal pad and say, okay,
00:21:03.480 | these are the things I want to try to get done today. Maybe kind of figure out here's the most
00:21:07.960 | important things I want to do. When will I do them? And I see I have an open time on my calendar.
00:21:12.040 | Let me just start. That's when I'll do them. Make a few decisions. It's putting minimal intention into
00:21:16.280 | your day. A more advanced thing would be what I do: multi-scale planning, where I plan on multiple
00:21:22.440 | timescales. So on like the semester timescale, I'm thinking about my big picture goals for that
00:21:27.640 | semester. I look at that plan every week when I make a weekly plan. Critically in my weekly plan,
00:21:31.880 | I'm looking at everything scheduled on my calendar. I'm going to move things around if it's going to
00:21:35.960 | really unlock my week. If I move this or cancel this or move this to this day, it opens up big time
00:21:41.400 | here. So you play chess with your calendar during a weekly plan. I also schedule progress on the big
00:21:48.680 | rocks for my semester plan. I'll schedule them right on my calendar during the weekly plan.
00:21:52.360 | You know, I really am trying to make progress on finishing chapter three of my book by the end of
00:21:58.200 | March. I saw that in my semester plan. So now when I'm doing my weekly plan, I want to get like five
00:22:03.560 | hours blocked off on my calendar for writing, just to make sure that writing gets done. And then the final
00:22:08.280 | scale is every day in the morning, I like to do a time-block plan, give every hour of my day a job,
00:22:12.920 | not reacting. I want to see the time I have and make the best possible plan for it. Yes,
00:22:17.240 | I'm going to get knocked off this time block plan within a couple hours and I'll have to adjust it a
00:22:20.920 | few times. And sometimes it'll be okay. And sometimes I'll never recover, but I'm going to try
00:22:25.240 | to have some say on how my day unfolds. So that's a more advanced way to do it. Multi-scale planning.
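To make the daily end of that planning concrete, here is a minimal sketch in Python of a time-block plan where every hour of the workday gets a job. The specific blocks are hypothetical, not a recommended schedule.

    # A minimal sketch of a time-block plan: give every hour a job, then
    # rebuild the list whenever the day knocks you off plan. Blocks are made up.
    from datetime import time

    plan = [
        (time(9, 0), time(11, 0), "write chapter three"),
        (time(11, 0), time(12, 0), "email and small tasks"),
        (time(12, 0), time(13, 0), "lunch and a walk"),
        (time(13, 0), time(15, 0), "course prep"),
        (time(15, 0), time(17, 0), "meetings and office hours"),
    ]

    for start, end, job in plan:
        print(f"{start:%H:%M}-{end:%H:%M}  {job}")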
00:22:31.480 | But again, you can start with something as minimal as just like five minutes every morning.
00:22:35.400 | Where's the list? Where's the calendar? What am I wanting to do today? Or what's something I want
00:22:40.280 | to remember to do today? Just like give it a little bit of thought before you open that email inbox,
00:22:44.760 | before you jump into Slack. All right. So to summarize, we can go on and on about the optimal
00:22:52.120 | productivity systems or the best productivity systems or the most modern productivity systems.
00:22:57.480 | But if you want just a minimally viable productivity system, that sort of bare bones that I think
00:23:01.640 | everyone in the modern knowledge economy needs to avoid stress or disillusionment or burnout,
00:23:08.440 | you got to have some sort of task management component. You need some sort of workload component.
00:23:13.160 | You need some sort of time control component. And even if they are super simple,
00:23:18.440 | you are going to save yourself from the worst deprivations of being not productive enough.
00:23:24.680 | And yet focusing on these three things, if you're reasonable about it,
00:23:28.200 | that also saves you from that sort of optimization mindset thing, right? It leaves that to the people
00:23:33.880 | who like think about productivity as a hobby. I think it puts you in a really good place. So there we go.
00:23:39.400 | The MVPS, minimally viable productivity system. That's my current take on this issue. I'm sure it'll
00:23:45.880 | evolve, but I think it's an important one to throw into the discussion. I can't call it MVP because
00:23:51.960 | minimally viable product is a real Silicon Valley piece of lingo. Okay. Have you heard that lingo?
00:23:57.400 | Just most valuable player. Well, then there's that as well. Yeah. But in Silicon Valley, it's like
00:24:04.600 | rapidly developing the simplest possible software product that does something useful as opposed to
00:24:10.760 | trying to build a fully featured piece of software before you release it. So minimally viable product.
00:24:16.040 | And they didn't think about MVP because Silicon Valley people are not playing a lot of sports.
00:24:20.360 | I don't think that crossed their mind. I don't think they were thinking about that. So MVPS is what
00:24:27.160 | we'll call it. Minimal viable productivity system. All right. So we've got some good questions coming up,
00:24:33.160 | but first let's hear from some sponsors. I want to talk about our friends at Cozy Earth. Cozy Earth
00:24:39.720 | products are designed to transform your five to nine, the time that matters most, into the coziest
00:24:46.440 | sanctuary. I'm a huge Cozy Earth booster. You all know this. Their bamboo sheet set my wife and I are
00:24:52.760 | obsessed with. We have multiple pairs so that when one pair is being washed, we can have another pair on
00:24:56.600 | the bed. I have the sweatshirt. My wife has pajamas. We have the towels. We have all sorts of Cozy Earth
00:25:01.800 | stuff because we just love the way their fabrics actually feel. The sheets are soft. They temperature
00:25:10.360 | regulate. We'll actually travel with them. Like if we're going to a vacation, like a vacation home,
00:25:16.920 | we bring our Cozy Earth sheets with us. We're kind of that addicted to them. You should have them. I like
00:25:21.960 | this inversion, by the way. Transform your five to nine, which is interestingly in my sleeping hours.
00:25:27.560 | I sleep from 5 PM to 9 AM. I don't know if that's healthy or not. Only with Cozy Earth sheets can I
00:25:33.640 | actually sleep that much. Here's the thing. You can transform your space risk-free with a 100-night
00:25:38.600 | sleep trial and a 10-year warranty on all Cozy Earth bedding and bath products. Love them or send them
00:25:42.600 | back, but trust me, you won't want to. All right. I've got a good code, because you should just get these. This is
00:25:46.680 | just my straight-from-the-gut personal endorsement. I love this stuff. It's very, very comfortable.
00:25:52.360 | So I'm going to give you a really good discount code so you can get these without having to pay full price.
00:25:57.000 | If you visit CozyEarth.com/deep and use my exclusive code DEEP, you will get up to 40% off Cozy Earth's
00:26:07.400 | best-selling sheets, towels, pajamas, and more. That's CozyEarth.com/deep. And if you get one of those
00:26:15.240 | purchase surveys, select our podcast when you say, "Here's how I heard about you," so that we get credit
00:26:21.240 | for that. Remember, CozyEarth.com/deep. Luxury shouldn't be out of reach, and sanctuary awaits
00:26:28.440 | at Cozy Earth. I also want to talk about our friends at Loftie, as long as we're continuing this
00:26:35.080 | theme of sleep. We use multiple Loftie products in our household, and particularly, we really like the
00:26:43.640 | Loftie Clock, a bedside essential engineered by sleep experts to transform both your bedtime and
00:26:49.080 | your mornings. It's a really beautiful-looking clock, but it fits with our themes here on the show,
00:26:55.960 | because what it does is, instead of you having to have your phone in your room to work as your alarm,
00:27:01.400 | or whatever, you're going to check your phone all the time now. You now have this beautifully designed
00:27:04.840 | alarm clock that is much more, I would say, natural than a regular alarm clock, because what it's going
00:27:10.680 | to do is, it has a two-phase alarm, so a soft wake-up sound that ease you in the consciousness,
00:27:17.000 | followed by a more energizing get-up sound. The sounds are beautiful. You can have part nature,
00:27:22.680 | part orchestrated music, so it's like a calm way to wake up. I really like this. It's an all-in-one
00:27:28.360 | bedside sound machine, so you can also use this to play sort of like white noise sounds or nature
00:27:32.600 | sounds. It looks beautiful. The one that I have in one of my kids' room has like these,
00:27:37.960 | it's like a, I don't know what you call it, like an oval. It has these like three lights in it in the
00:27:43.800 | front that can light up in the morning when it's time to wake up. My son likes to make a cave out of
00:27:49.240 | his pillows for stuffed animals, and he'll put his Loftie in there, and it like illuminates the cave,
00:27:53.880 | and he thinks it's really cool. Like, hey, this is my illuminated cave. His is set up right now to
00:27:59.000 | play monk bells. So like, boom, boom. It's cool. It's a great way to wake up. So you got to get
00:28:04.280 | the phone out of your bedroom, get something that's going to wake you up gently, that's going to mimic
00:28:07.560 | your natural rhythms, and it's going to look like a really cool piece of engineering. I recommend Loftie.
00:28:12.520 | We have several, and I think you will like it as well. So join over 150,000 blissful sleepers who have
00:28:19.880 | upgraded their rest and mornings with Loftie. Go to byloftie.com and use code DEEP20, and you will get
00:28:27.560 | 20% off orders over $100. That's B-Y-L-O-F-T-I-E.com and use that code DEEP20. I'll tell you, Loftie is better
00:28:39.000 | than my old method for waking up, which is I used to hire Lou Gossett Jr. to hit garbage can lids.
00:28:49.400 | See, that's a Lou Gossett Jr. reference. Officer and a, is that Officer and a Gentleman?
00:28:53.880 | What's the movie, oh man, what is the movie where Richard Gere is in Officer Candidate School,
00:28:59.720 | and Lou Gossett, I think it's an Officer and a Gentleman. I'll have to look it up. Anyways,
00:29:04.280 | he's a drill sergeant. But I saw, this is kind of a tribute because I didn't realize that Lou Gossett Jr.,
00:29:09.480 | he must have died this year because I saw him in the in memoriam at the Oscars. So that's kind of my
00:29:15.000 | tribute to Lou Gossett Jr. A terrible way to wake up though, banging garbage can lids.
00:29:20.600 | This is much better, Monk Bells. All right, let's move on to some questions.
00:29:27.320 | By the way, I didn't see anything in the Loftie notes saying, if possible, make a reference to Lou Gossett
00:29:32.520 | Jr. That would be funny if it was like, if you don't reference Lou Gossett Jr. in this ad,
00:29:37.560 | we're going to have to do a make good. We're great at ads, aren't we? All right, what do we got?
00:29:42.280 | First question's from Lair. I have a research volunteer position for a nonprofit. I found purpose
00:29:48.920 | in this organization, but it doesn't really match my long-term career prospects. Would this still be
00:29:53.800 | considered falling into the passion trap, or does purpose operate under a different set of rules?
00:29:58.200 | Well, okay, we have a couple different things going on here.
00:30:03.000 | So first of all, is it a trap to have your volunteer position, your research volunteer position,
00:30:10.120 | get in the way of developing a meaningful paid career? There, yes. The answer is yes, that's a trap.
00:30:16.600 | If you're volunteering, you need to see that as a volunteer position, like helping out at your kid's
00:30:20.840 | school or like your local church. It could be a very important part of your life, but you see it
00:30:25.480 | separately than what you see as your paid profession. So it certainly shouldn't get in the way of you
00:30:29.480 | developing a meaningful and sustainable paid employment. But there's a secondary question
00:30:36.440 | here that's worth getting at more generally, which is the difference between passion and purpose.
00:30:42.760 | So there's a passion trap, but there is also a purpose trap. And I think it's worth trying to
00:30:49.080 | figure out what the difference is between these two. So the passion trap, as I wrote about in my book,
00:30:54.120 | So Good They Can't Ignore You, is the assumption that the key to really liking your job
00:30:59.320 | is to match the content of your job to something you're really interested in.
00:31:03.240 | So you say, I have a passion for X. So if my job involves X, I will feel passionate about my work.
00:31:09.000 | This was the fundamental model for career satisfaction that was taught to like Jesse and I's
00:31:14.200 | generation. To follow your passion, you'll be passionate about your work.
00:31:17.560 | It doesn't work out that way. The factors that make a job meaningful and sustainable,
00:31:22.600 | the factors that can help you develop a source of passion for your job are complicated and multivariate,
00:31:27.000 | and it's much more involved than simply saying, I like this. So if my job is connected to that,
00:31:32.120 | I'll be happy, right? I really like baseball. It doesn't mean that if I take, you know, a back
00:31:38.840 | office job at the Nationals that I'm going to love my job. What makes you love your job is more
00:31:43.080 | complicated than just the content of it. The purpose trap is interesting. It's different. So the purpose trap
00:31:49.800 | is when the fact that your job provides some sort of sense of purpose, that the work you're doing
00:31:57.400 | feels important or is important, allows you to put up with lots of other factors about your
00:32:05.880 | job and its impact being negative. So the purpose trap is, yeah, this is kind of terrible, like for whatever
00:32:13.400 | reason, like the hours, the lack of money, the stress, but it's important, like the field I'm in.
00:32:22.360 | So I'm going to put up with those other things. So it's letting purpose blind you to other elements
00:32:27.880 | that make a good job good. Now, again, the reality is purpose can be a very important component in
00:32:33.880 | engineering your ideal lifestyle and in particular engineering what you want out of your job,
00:32:38.120 | but it shouldn't be the sole component. There's other things you want as well. It could be
00:32:41.640 | autonomy, connection to other people, sense of mastery, those matter. And then there's a financial reward.
00:32:47.880 | So it's able to fund other things that are important in your life. And also when you're doing
00:32:50.920 | lifestyle-centric planning, you care about how the job fits in and supports other things that are
00:32:54.360 | important to you in your life. All of those factors matter. So you can't let one factor in
00:32:59.640 | there. Like, does this job have a sense of purpose? Stomp over everything else. You have to be more
00:33:04.600 | intentional about it. So what should you do? Build career capital first and foremost, be so good they
00:33:09.800 | can't ignore you at whatever it is you're doing, and then leverage that capital to take control of your
00:33:14.120 | career. Shape it towards things that resonate and away from things that don't. Make it supportive of
00:33:19.800 | your overall vision, your lifestyle, and move away from things that destabilize your
00:33:24.840 | ideal lifestyle. This is complicated. It's iterative. We talked about this last week in
00:33:30.760 | the deep dive, the good life algorithm, the idea that you have to sort of experiment and discover
00:33:35.000 | what works and what doesn't and make corrections and adjustments in your life. You're sort of navigating
00:33:39.720 | this multidimensional landscape of possible lives towards something that resonates more and more.
00:33:43.800 | But it's what ultimately works. So don't look for the one fix. If I'm passionate about this,
00:33:48.440 | my life will be passionate. If I have purpose in my job, my whole life will feel good.
00:33:52.520 | It's always going to be more complicated. You always have to be sort of solving for the complex
00:33:58.040 | equation here. So I don't know. There's a lot hidden in this question, but I appreciate that
00:34:02.200 | because we actually got to get to a lot of interesting points. I didn't talk a lot about
00:34:06.520 | the purpose trap in So Good They Can't Ignore You, but it's come up a lot since that book came out.
00:34:10.040 | Where people are like, no, I mean, I'm not passionate about this, but this is an important cause.
00:34:14.200 | So because of that, I'm putting up with a lot of other negative things in my life.
00:34:17.640 | That's a real common trap that people get in. All right, here we got next.
00:34:21.800 | - Next is from Charity. I'm always on the go and listening to a podcast,
00:34:26.200 | so it's hard for me to stop and write stuff down. How can I capture these discovered tactics and tips?
00:34:31.160 | - Well, I think it depends on the podcast. I think for most podcasts, it doesn't matter.
00:34:36.200 | For this podcast, I think what Jesse and I recommend is probably you have a dedicated space
00:34:43.720 | for listening to it in. I would have this built custom if possible. Your inspiration and clearly
00:34:50.760 | doesn't have to be this large, but your inspiration when thinking about like a space that's appropriate
00:34:54.840 | for listening to our podcast, I'm thinking like the cathedral and charts, maybe Notre Dame.
00:35:00.360 | - Smaller, but something in a similar level of contemplation. And you really should be
00:35:05.720 | sitting in there, dedicating, I would say easy four or five hours to kind of go slowly and to
00:35:11.480 | re-listen and to take your notes. No. Okay. Ideas from podcasts. I don't know. Here's the two things I do.
00:35:17.320 | I will jot down timestamps in the notes on my phone. If there's something like, ooh, I want to remember
00:35:24.760 | that. It only comes up so often, right? I mean, some interviews are rich with these. If you're watching
00:35:28.440 | an interview show, some have none, but I put timestamps down temporarily in Apple notes,
00:35:33.960 | and then you can go later and write those down somewhere else. The other thing I sometimes do,
00:35:38.280 | this will happen a lot if I'm working out and I'm listening to a podcast and then I get an idea.
00:35:43.160 | Like, ooh, that like gives me an idea what they're just talking about.
00:35:46.760 | I voice dictate into Gmail, email to myself. So I'll just like... And they're weird because I have no
00:35:53.880 | punctuation and, you know, there's a lot of typos. But I'll just voice dictate a bunch of ideas, send.
00:36:00.280 | And then after my workout, I have a couple of those emails in there I might want to deal with as well.
00:36:04.360 | So you can kind of develop your tricks. The third thing you could do is if you have a single purpose
00:36:08.920 | notebook, you could dedicate this like a field notes notebook and a pin. Just have that with you in a pocket.
00:36:14.280 | You could just jot in there timestamps as things come up and then you could deal with that as well.
00:36:18.280 | So whatever works, but I typically am doing the notes and the emails to myself.
00:36:24.280 | All right, who do we got?
00:36:27.080 | Next question is from Andrew.
00:36:28.360 | In A World Without Email and on the podcast, you talk about communication protocols and office hours.
00:36:34.200 | Is there anything else? As a professor, I'd love to know how you work on projects with your PhD students.
00:36:39.560 | You know, in A World Without Email, I talk about a cool study that was done, I think at the University
00:36:44.440 | of Maryland. And they're looking at models for managing, for professors managing doctoral students.
00:36:50.760 | And the model they tried that worked really well was borrowing ideas from agile software development.
00:36:57.240 | So in particular, they worked on, okay, we keep track of clearly what each of the students is
00:37:02.440 | working on. Right? So there's no question about that. And we have, like they have in the agile
00:37:09.000 | methodology, these daily stand-ups that are like 10 minutes long. And this was actually hard to dial
00:37:13.560 | this in just right. This is not, let's all talk for a half hour about what's going on. It is 10 minutes
00:37:19.000 | of like, okay, I see you were working on this, whatever it is, writing up the data from this experiment,
00:37:24.200 | trying to understand this proof from this paper, working on it, correcting this mistake in this proof.
00:37:29.960 | How's it going? What do you need? What progress did you make? What do you need to make progress going
00:37:33.480 | forward? And it's quick and you go through. And in this way, students always know what they're working on
00:37:38.120 | and they can't get stuck that long. This actually worked really well. And it worked well in contrast
00:37:44.440 | to the standard approach, the way I was trained, which is you have a weekly check-in with each of your students.
00:37:50.120 | And you have this meeting and it's an hour long and it feels, sometimes it's packed because you have things
00:37:55.800 | you really have to work on, but sometimes it feels performative and students can be stuck for a whole week
00:37:59.720 | until they actually get to this meeting. And you might not have the right energy or time to help them
00:38:04.040 | make progress or not. They found this daily stand-up worked much better. So students really
00:38:08.040 | know what they were working on. And then you set up additional one-on-one meetings as there's
00:38:12.280 | very focused work to be done. Oh, now you're really stuck on a proof. Let's put aside time now for you
00:38:18.520 | and I to work on this proof and I'll help you get unstuck. So like the longer meetings are being
00:38:23.160 | dedicated for actual, we've identified an actual problem where we can make real progress. But the
00:38:29.960 | daily check-ins really quick and the keeping track of who's working on what keeps people from
00:38:34.200 | getting stuck. So that feels like a good idea. I'm a
00:38:37.960 | theoretician. I also work on digital ethics. I don't have large research teams, but I've heard
00:38:41.240 | this works pretty well. All right. What do we got next?
00:38:43.800 | From Chris, I'm a federal worker with recent concerns over incompatible values pitched by
00:38:50.520 | leadership. Do I just focus on surviving the day-to-day despite a values gap?
00:38:54.600 | Yeah. Well, it's a timely question, but it's a complicated one. So in your lifestyle-centric
00:39:01.880 | planning, I took some notes on this. Your job is probably supporting many aspects of your ideal
00:39:07.720 | lifestyle. I mean, even just having a job, the income, where you live, like what it's supporting,
00:39:12.200 | the hours, etc. So what you were working on and how you value what you're working on is one aspect of
00:39:22.760 | the ideal lifestyle that your job is supporting, but it's not everything. So we want to be careful here
00:39:29.560 | about making an immediate drastic change because what could happen is you could be saying there's
00:39:36.360 | something I don't like about currently how my job is set up. And so I'm going to leave that job because
00:39:41.000 | it's important to me that I'm, I like the people I'm working for the mission, but then there's like six
00:39:45.560 | or seven other things that are important to you in your life to take a hit. And you're thinking like,
00:39:48.520 | maybe that wasn't a fair trade, right? So we want to go with some care. So let's talk about how to be,
00:39:54.360 | how to be more discerning here. In my book, So Good They Can't Ignore You, I talk about
00:40:00.680 | working on things that are directly against your values. I say that that's a disqualifier for a job
00:40:05.800 | being something that you can get long-term value out of. So what we want to be, if we're going to get
00:40:09.800 | more fine-grained about that here, so does this apply or not? You want to be careful here,
00:40:14.600 | especially in this government context about there's a difference between
00:40:17.640 | what my job is directly is pushing something that's against my values versus there are things in my
00:40:26.920 | job or the place I work for that I value that are being like blocked or stopped.
00:40:31.800 | It's like, this is really relevant in the government right now where it might be like, maybe,
00:40:36.040 | you know, you're, you're working on clean water or something like this. And it's not that someone is
00:40:45.000 | coming in and saying, you have to like actively work on something that's going to make water less clean,
00:40:49.880 | but that like funding is being taken away from clean water. And so there's a bit of a difference there.
00:40:56.360 | Or the, the boss of my boss of my boss is someone who doesn't care about, I'm in, you know, regulatory
00:41:04.120 | oversight and like has a different way of thinking about it, but it's not actually changing what I'm doing
00:41:09.320 | day to day to day yet. So there is a difference between I'm selling cigarettes and I really don't
00:41:14.840 | want to be selling cigarettes versus I am helping people quit smoking. And the amount of programs we're
00:41:23.320 | working on this has been cut in half. There's a difference between those two things. It's frustrating
00:41:29.880 | to have resources or projects towards what you care about be reduced,
00:41:34.600 | but it's soul-deadening to be actively working on the opposite of what you care about. So that, I think,
00:41:40.600 | is an important distinction. A lot of what happens or what's happening now in the government
00:41:45.400 | or when new administrations take over is it often tends to be more of the resources or focus is being
00:41:52.920 | reduced on the things I care about versus I am actively being forced to work on something that
00:41:58.440 | I directly dislike. That happens as well, but make that distinction. That's gonna be a key distinction
00:42:02.680 | in figuring out how to act here. You also want to be careful not to personalize. This is very common in
00:42:07.800 | jobs in general. This is not just for this government scenario where you, you personalize, like you can,
00:42:13.800 | because our mind is good at this. We're used to dealing with individuals. So you personalize this
00:42:19.320 | individual, this person who's coming in and messing with me. My organization is, you know,
00:42:23.720 | 20,000 people, but this person who's coming in and messing with it. I hate that person. I've just,
00:42:28.760 | I could just imagine that person. I dislike them. And, um, I, this is about a battle between me and them
00:42:36.120 | and I'm not gonna let them win. You know how they're not going to win. I am going to, you know,
00:42:40.680 | I'm going to quit. And in your mind, you imagine it like in the paleolithic tribe where there's 12 of
00:42:44.840 | you and you're making the big display in the group of 12, and the new person who's
00:42:50.360 | trying to take over the tribe can't ignore what you did. And it's a big showy thing. But what really
00:42:54.200 | happens in a large organization is no one notices. That's what they're hoping. They're like, yeah,
00:42:58.200 | we want people to quit anyways. They don't care. They don't notice. You can't personalize. You have
00:43:02.920 | to see it abstractly. What am I getting from the job? How does it fit into my ideal lifestyle? And then
00:43:07.800 | that just gives you some breathing room. Then you can take your time to figure out what to do.
00:43:11.480 | And the answer might very well be no, no, no. We've changed our mission. Like in my government
00:43:17.720 | position, I'm now doing something I actively dislike. I need to get out of there. And we've seen
00:43:21.320 | this actually happen recently in the government where people are saying, I'm being asked. I'm literally
00:43:25.160 | being asked to do something that I don't want to do. And they're leaving, or they're being asked to
00:43:31.320 | leave. But if it's not that now, you can take your time. Hey, am I still able to make progress
00:43:37.640 | amidst the hardship, make progress on things I care about to the best that I can? Oh,
00:43:41.720 | it's super frustrating, but it's important that the work goes on. Maybe that's the answer. Or maybe
00:43:46.520 | it's this whole thing has been gutted and there's no reason to be here anymore, but my job and its
00:43:51.400 | benefits and its flexibility is, you know, supporting my family. The hours are
00:43:56.040 | reasonable. I can go coach my kid's Little League team. And there's these other things
00:44:00.920 | this is making possible that are very important in my life. So I don't want to throw that away.
00:44:04.120 | So yeah, I'm going to explore. I probably need to make a change, but I'm going to take my time
00:44:09.160 | finding it. And maybe I'm going to be very checked out mentally and, you know, phantom part-time job
00:44:15.880 | in all these sorts of situations. But I'm going to take my time finding this because no one's
00:44:21.320 | going to notice if I make a big showy thing, unless I'm, you know, already famous or something like
00:44:25.160 | this as well. So I guess what I'm really pushing here is some is caution in this scenario. Understand
00:44:31.000 | the way your job fits into your larger lifestyle-centric vision. Separate actually working against
00:44:36.760 | your values versus working in a place where someone against your values is monkeying around with it.
00:44:41.640 | Frustration that you can't do more is different than frustration at what you are doing. And then if you
00:44:46.680 | do make a change, take your time to make that change right in a way that's going to support your
00:44:52.360 | full lifestyle. Don't get caught in the trap of thinking that, you know, Elon Musk or Big Balls is going to notice
00:44:59.720 | and be like, oh my God, I'm going to change my ways. They quit and they sent this email to their boss
00:45:05.000 | and like, this is, you know... You're kind of winning if you're staying in control of your own life.
00:45:10.040 | I guess that's the way I would think about it. All right. What do we have next, Jesse?
00:45:13.880 | We might have our final corner.
00:45:15.880 | Oh, Slow Productivity Corner. We're recording this on the fourth.
00:45:21.720 | Yeah. And the fifth of March is the one-year anniversary of my book, Slow Productivity.
00:45:25.720 | So over the last year, in every episode, we've had one question dedicated
00:45:30.520 | to ideas from my book, Slow Productivity, which was just our excuse to play theme music.
00:45:35.320 | We should have people write in. Why don't we have people write in to Jesse at CalNewport.com?
00:45:40.600 | Is this the end of the corner? Should we find another way to use the theme music or should we just like
00:45:45.240 | make a clean break and move on from the segment right now? Our default is to move on, but send your vote,
00:45:51.400 | if you have one, to Jesse at CalNewport.com. But either way, we know for sure right now we can hear that
00:45:56.840 | theme music at least one more time.
00:46:05.720 | No, it's just hitting me what that theme music sounds like.
00:46:07.960 | What's that?
00:46:09.400 | "The More You Know." Remember the NBC "The More You Know" spots?
00:46:15.240 | That's why it's hitting our millennial memory banks right there.
00:46:17.880 | All right. What's our question of the week?
00:46:19.480 | It's from Daniel. In your episode "Let Brandon Cook,"
00:46:22.440 | you argued that letting the Brandons of your organization cook will help move the rest of the
00:46:26.360 | work culture away from pseudo-productivity.
00:46:29.400 | What if your organization has a fairly entrenched pseudo-productive managerial work culture
00:46:34.760 | that trickles down to knowledge workers?
00:46:37.000 | Are there specific recommendations you'd make for moving managerial work culture in a deeper direction?
00:46:44.600 | Well, with the let Brandon cook idea, the group I have in mind that this is beginning to affect is the managerial class.
00:46:51.240 | Right. So the idea from that episode titled let Brandon cook was if an organization starts deciding,
00:46:58.360 | okay, at least we have some people who have a highly specialized skill and we're going to prioritize them
00:47:04.520 | applying that skill, right? We're not going to, even if it's like less convenient for us or other people,
00:47:12.120 | it's not going to be about responsiveness or having the most low friction back and forth conversations
00:47:17.560 | or what's going to make our lives easier. It's like, let that person do what they do best because
00:47:20.760 | it moves the bottom line. And I said, if you have a few people doing that, that'll put cracks into the
00:47:25.400 | pseudo-productivity firmament. So this idea that busyness is what matters, that idea itself is
00:47:31.160 | destabilized when you have some people who are mattering not for being busy. And who do I think that's going
00:47:36.680 | to really affect psychologically? I think managers. You're a manager and you realize, well, some of these
00:47:42.600 | people, they're helping our bottom line more, not by answering my emails really quickly, but because
00:47:49.000 | like we're letting them actually spend time doing what they do really well. That makes it hard for you to
00:47:56.280 | remain fully committed to the pseudo productivity ideal that activity is all that matters. So actually,
00:48:01.480 | it's the managerial class itself where I want to start effecting some changes. And I think that is
00:48:06.360 | where letting Brandon Cook can begin to help make progress. Because once you start thinking another
00:48:15.160 | way of measuring productivity is how much actual valuable stuff did you produce? That sounds so
00:48:20.440 | obvious because every other measure of productivity in every other sector does that, but we don't do that
00:48:24.200 | in knowledge work. So as soon as you bring that into knowledge work, this many lines of good code,
00:48:28.920 | these articles that got this many, you know, readers or won this many awards, when you're thinking about
00:48:34.600 | results, suddenly email response time doesn't matter. Suddenly being on Slack doesn't matter.
00:48:40.760 | Suddenly the idea that it's really important that we know exactly what days you're in the office doesn't
00:48:45.320 | matter, because results are what move the needle. So I don't know. I think
00:48:51.080 | the let Brandon Cook idea starts with superstar performers, but begins to change the mindset of
00:48:56.680 | managers. And over time, you're going to get more flexibility for a lot of other people as well.
00:49:01.480 | All right. Do we have a call this week? We do. All right. Let's hear it.
00:49:09.880 | Hi, Cal. My name's Danny. Long time reader. Big fan. I am a programmer full time. I also have a part time gig
00:49:20.040 | teaching math. The programming relates to math education as well. And although I don't want to, quote,
00:49:27.480 | become a writer, I have a book in me about math education. And I kind of wanted to ask you how
00:49:35.800 | one goes about sort of writing a book on the side. You know, I've read little things about
00:49:42.040 | the king who wrote a book while he waited for his wife to come to dinner in just those little bits of
00:49:48.520 | time. So I'm just curious if you had any thoughts or tips or had heard things about how people manage the
00:49:56.360 | process of just sort of putting a book together, uh, not to make a living at it and not on any really
00:50:04.760 | strict schedule, but just wanting to, you know, get a book together. Thanks Cal. Appreciate everything.
00:50:11.880 | Well, it's a good question. Uh, and when it comes to nonfiction, Danny,
00:50:16.200 | people want the picture you're painting there. They want that to be true.
00:50:22.680 | Like what's interesting to people is this idea of like, I have this habit where I'm kind of
00:50:26.360 | writing a little bit every day or in these certain types of little windows of time. And over time,
00:50:31.240 | this book comes together and then like, Hey, this book's pretty cool. And then it gets published and
00:50:35.160 | kind of finds an audience. It's actually not how nonfiction is written, right? There's a
00:50:40.840 | reality to how nonfiction is written. There's that story that people like to tell
00:50:44.600 | because it's fun, low-stakes writing when you get time. It's fun, but it's not how nonfiction
00:50:51.240 | gets written. In nonfiction, you sell the book first; you sell the book based off a proposal.
00:50:56.840 | There's a little bit of exceptions. If you're talking about an academic press,
00:51:00.840 | it's a little bit different, right? You might have the book together for like,
00:51:05.320 | it gets a little bit more complicated, but for the most part of nonfiction, you're selling the book first
00:51:09.240 | and then you're writing it. So actually in nonfiction, unlike fiction, the motivation to
00:51:14.760 | write is not a problem. The motivation is I have a contract. They've already given me the first half
00:51:19.240 | of the advance. I'm going to have to give it back. If I don't deliver them a book by this date,
00:51:22.840 | it's due in eight months. It's not "fun, I'm just kind of writing when I get a chance in the
00:51:27.800 | shed." This is a job now. I've been paid to do a job; I've got to execute it. It actually feels much more
00:51:31.960 | commercial and workmanlike than you would imagine. So writing nonfiction is not something
00:51:38.520 | where motivation really matters because you sell the book first. So how do you sell the book?
00:51:42.760 | Well, first you get an agent. And here, the process of getting an agent doesn't have
00:51:48.760 | a lot of hoops you're jumping through, right? I mean, there's something called a querying process
00:51:51.960 | where you're sending letters of a certain format. You're emailing them these days to agents who are
00:51:57.480 | saying, I want to find authors. Query me here. Here's the type of authors I support. And you're
00:52:01.400 | sending them and it's a page long. There's a format to it. And if they're interested, they're like,
00:52:04.600 | let's talk. And if you get an agent, they'll help you write a proposal.
00:52:07.800 | They'll be the ones to sell it to a publisher. Then you'll go write the book.
00:52:10.200 | So that's how that actually works. People don't like that story because it front loads
00:52:16.040 | the evaluation. You're like, shoot, I like the idea of writing this book. But the reality of the story
00:52:23.640 | is that like, I could start querying agents this week and by next week, know that none of them are
00:52:27.640 | interested. And that could be the end of that dream, right? Because your mind kind of knows like,
00:52:32.840 | I don't really have this fully worked out. Like, are we the right people to write this book? Is the topic,
00:52:37.240 | something that people really need to see? I'm not quite ready yet for someone to like evaluate this,
00:52:42.040 | but that's really how nonfiction actually starts. So it's like good news, bad news. The good news
00:52:47.960 | is you don't have to worry about tricking yourself to write, or motivation, or willpower, or procrastination.
00:52:52.520 | The bad news is getting to step two of nonfiction book writing is like really hard. Step one is
00:52:58.680 | actually selling the book first, but don't run away from the reality, right? Confront the reality.
00:53:05.080 | If you can't get an agent for your book idea, then use that as a forcing function to figure out, well,
00:53:08.920 | why not? And that could help you find a better idea. I wrote a, like a kind of a well-known blog post
00:53:14.200 | about this years ago, back when I was still writing my student books. And if you
00:53:18.360 | search for "calnewport.com how to get a nonfiction book deal," I wrote about everything
00:53:24.520 | I learned selling my first three books, which are just like straight up student focused nonfiction
00:53:30.120 | advice books. I lay out all these ideas. Here's how it works. Here's what I learned to do. It's funny,
00:53:35.400 | Jesse, what I talked about in that article. I reread it recently, and in it I'm like, well, look, I'm not
00:53:39.880 | a New York Times bestselling author who has sold millions of books, but I've still had a pretty
00:53:43.640 | good run. And now you fast forward, you know, 10 years after that, I have done all those things.
00:53:48.440 | But back when I wrote that post, I was, that was like this impossible future that I, I had not
00:53:53.560 | achieved and was never going to achieve. So I didn't really believe in myself back then. But that post
00:53:57.400 | was written when I was right in the thick of just starting out as a writer, right after I sold my
00:54:01.000 | third book. So go find that article and don't ignore the reality. And the thing that trips up most
00:54:08.120 | people with this reality is what I identify early on in that article: to sell a book, as I learned from
00:54:14.600 | my agent 20 years ago, you've got to have an idea that people are going to feel like they have to read,
00:54:19.000 | there's got to be a sizable audience that's going to feel that way, and you have to be the right person
00:54:24.200 | to write it. It's actually really hard to find something that satisfies all three. Like you can come
00:54:30.360 | up with a killer idea, but like, if you're not a writer with any sort of connection to that idea,
00:54:34.680 | then like, you're not the right person to write it. Or you could have an idea that you were the
00:54:38.040 | right person to write, but it's so niche that no one cares. It's not a big enough audience.
00:54:42.280 | Or you have an idea that like, in theory, a lot of people would be interested in this.
00:54:47.160 | Like it's relevant to a lot of people and you're the right person to write it, but no one is going to
00:54:50.280 | feel like, Ooh, I have to read that book. It's not motivating me. Like, Ooh, I got to see that.
00:54:54.520 | It's hard to get all three, but if you get all three, it's hard not to sell it because again,
00:54:58.440 | agents are desperate for clients. Publishers are desperate for books. They just have to be viable
00:55:04.360 | ideas being written by viable people, but it is not a world of people trying to reject stuff that
00:55:10.440 | you're trying to slip past. It's a world where people want to accept stuff. So your job is to
00:55:15.560 | get rid of the rough edges that make it impossible for them to accept what you're doing. So read that
00:55:19.400 | article, confront the reality of how the industry works instead of the story you want to be true,
00:55:24.280 | or you could write for fun. But if you really want to publish a book, you got to confront the
00:55:28.040 | reality of how that industry actually works. And if you take the approach of writing for fun,
00:55:32.600 | you can just self-publish, and essentially nobody's ever going to read the book though, right?
00:55:36.440 | Yeah, no, probably no one will read it, but you could write for fun. You could write on, like,
00:55:39.640 | Medium. No one will read it. You could have a Substack, but again, if you don't already have
00:55:44.520 | an established reputation, no one's going to read that either, but you could do that. Or you could write
00:55:49.160 | for family or friends like that would be fun as well. Fiction is better for this because in
00:55:54.120 | fiction, you're supposed to write the book first. So you can just write fiction because in theory,
00:55:59.800 | you're doing the same thing as Brandon Sanderson, right? You're like, yeah, we all just write the
00:56:03.640 | thing first and see if it works. So it's a little bit more fulfilling. Nonfiction though, you shouldn't
00:56:07.800 | delude yourself. If you're 300 pages into your self-help manifesto that you think is going to be
00:56:14.040 | brilliant, you're not really following the path of the world of professional writers. That's just not
00:56:19.960 | the way that typically works. All right. So we've got a case study here where people write in to
00:56:26.680 | share their personal experiences, implementing the type of advice we talk about on this show.
00:56:30.920 | This week's case study comes from Kelly. Kelly says, I am 26 years old and an athletic trainer by trade.
00:56:38.680 | I recently transitioned away from working in collegiate athletics and into a new pursuit of
00:56:42.680 | K-12 substitute teaching. At my old job, I became burned out after dealing with high administrative
00:56:48.440 | overhead, expectations for having an online presence outside of working hours, and limited free time to
00:56:53.000 | explore deep pursuits. I had trouble finding meaning outside of work. Despite your prudent advice,
00:56:58.360 | I resigned from my old position with absolutely no plan for my next move. I did some lifestyle-centric
00:57:02.600 | career planning, however, and found that substitute teaching could help me build career capital in the
00:57:08.040 | educational realm, provide enough consistent structure around which I could time block,
00:57:13.000 | give me enough time off to explore new hobbies, and pursue deep, meaningful things.
00:57:17.960 | So I took your advice to make my phone less appealing, bring a book everywhere I go,
00:57:22.040 | and limit my consumption of algorithmically curated articles, posts, and content.
00:57:26.840 | I was doing great with my more analog lifestyle until a medical crisis happened in my family last week,
00:57:31.400 | and then Kelly goes on about how a lot of this stuff got difficult over this last week.
00:57:35.560 | But I want to pause there to focus on the things that Kelly did do that I really want to emphasize
00:57:41.880 | here, and then I'll talk about briefly how to make sense of what's going on with her right now with this
00:57:48.360 | medical issue. So what I like about this, lifestyle-centric planning for the win. Now, I would have done,
00:57:54.280 | as she pointed out, I would have done the lifestyle-centric career planning before I left the
00:57:57.800 | old job, but okay, it still worked out. Substitute teaching, it's not the thing that like pops to your
00:58:03.560 | mind necessarily as like, "Oh God, that's the dream job. That's what I want to do." But it's the type of
00:58:07.960 | thing that can come up when you're doing lifestyle-centric career planning. What do I want my life to be like
00:58:11.800 | day-to-day? What are the things that matter? Doing that analysis and then looking at your specific
00:58:17.400 | opportunities and obstacles led Kelly to like, "If I did substitute teaching, I'm working on this
00:58:23.000 | capital here, and I feel the structure I need. I don't have this extra stuff I have to do outside
00:58:27.320 | of my work, and I can pursue these other things that are important to me." And I like how then Kelly also
00:58:31.800 | did a rebuilding of her digital life. I'm going to bring books, I'm going to make my phone less
00:58:36.520 | interesting, and how that freed up a lot more time and focus and contemplation for pursuing things that are
00:58:41.000 | interesting and probably reconnecting with herself more. That's a fantastic example of the way I talk
00:58:47.080 | about transforming your life, which is not as exciting as like, "I just got this giant goal and I pursued it
00:58:51.480 | and everything was better," but it made a real difference. Then Kelly talks about, and we'll mention
00:58:56.360 | this briefly, "Hey, things got tough recently. There's like a big medical emergency in my family." And
00:59:00.440 | I won't read all the details, but basically she's saying that put her on her phone all the time:
00:59:05.400 | first to be in contact with family members, for obvious reasons, but then once she was on her phone
00:59:11.160 | all the time, she began using it more, again, understandably as digital pacification because
00:59:16.920 | it was very stressful what was happening in her family. And as I've talked about, I learned
00:59:21.240 | this during my own medical things recently: there's a numbing effect, a distracting effect
00:59:25.320 | of just algorithmically curated content. So the question she's asking finally is like, "Well,
00:59:31.560 | how do I deal with that?" And I say, "Well, you're going through an emergency."
00:59:34.440 | Like now is not the time to nitpick about like your digital habits. Now is also not the time
00:59:40.680 | to hold yourself to the same standards that you were holding yourself to the day before
00:59:43.800 | the emergency happened. You will get back to that. But like what you're doing now basically
00:59:48.040 | is think about this as crisis lifestyle-centric career planning. What do I want during this crisis
00:59:53.800 | period? What is my ideal of how I want to look back and say I got through this? And it's going to be
00:59:58.280 | less about, "I spend four hours a day on self-improvement," right? Because no, this is not,
01:00:03.640 | your head space is not there and your time is distracted. It's going to be more like I want
01:00:06.520 | to be the person people count on. I want to be the person that people look back afterwards and say
01:00:10.360 | like, "Kelly was a rock during this and it was really useful." It's going to be about leadership.
01:00:13.560 | You're going to have to have a lot more self-care. Maybe you want to start replacing
01:00:18.760 | phone self-care with other self-care. But you're not replacing it with "I want to go be productive."
01:00:23.480 | Maybe I'm going to spend more time, whatever, going for like long runs or hot baths or going to
01:00:29.400 | movies, stuff that you feel like is maybe a little bit more uplifting or less numbing that still
01:00:34.440 | helps distract you. But it's like you have to do a whole separate lifestyle career planning for this
01:00:39.560 | period that's based on like, "How do I get through this in a way that I'm proud of?" And then when this
01:00:43.880 | crisis is over and it will be over, go back to what you're doing before because it sounds great.
01:00:49.000 | You're envisioning what matters to you and you're pursuing that and you're building your own idiosyncratic
01:00:53.400 | path to a life that is in the moment something that you're proud of and in long term is opening
01:00:57.560 | up cool options. So you're doing the right things, Kelly. Don't be so worried about what's happening now.
01:01:02.280 | Keep focusing on who you want to be and then get back to your bigger vision when the emergency is over.
01:01:07.000 | All right, we got a off the rails tech corner coming up, all things AI, but first let's hear from another sponsor.
01:01:17.560 | Hiring the right people quickly is important. This is something I need to get better at.
01:01:25.800 | Jesse, I don't know if I remember this entirely correctly, but the way I remember we started
01:01:33.000 | working together was it involved me lurking around a CrossFit gym with a butterfly net.
01:01:39.800 | And then I think I caught you, like jumped out from behind the bushes. That's not the most efficient
01:01:45.160 | way to hire. That took me a while to do. There's a better way to do it. And that way is Indeed.
01:01:52.440 | Indeed is the best way to hire. When it comes to hiring, Indeed is all you need.
01:01:58.600 | Stop struggling to get your job posts seen on other job sites. Indeed's Sponsored Jobs will help you stand out
01:02:03.400 | and hire fast. With sponsored jobs, your post jumps to the top of the page for your relevant candidates,
01:02:09.000 | so you will reach the people you want faster. This makes a huge difference. According to Indeed data,
01:02:14.360 | sponsored jobs posted directly on Indeed have 45% more applications than non-sponsored jobs.
01:02:22.040 | We know hiring is very important for any type of business you might run, and Indeed really has to be
01:02:28.120 | your co-pilot here. Plus, with Indeed sponsored jobs, there's no monthly subscriptions, no long-term
01:02:32.600 | contracts. You only pay for results. How fast is Indeed? In the minute I've been talking to you,
01:02:38.360 | 23 hires were made on Indeed, according to Indeed data worldwide. So there's no need to wait any longer.
01:02:46.520 | Speed up your hiring right now with Indeed. Listeners of this show will get a $75 sponsored job credit.
01:02:53.080 | This is a good deal. $75 sponsored job credit to get your jobs more visibility if you go to
01:02:58.360 | indeed.com/deep. Just go to indeed.com/deep right now and support our show by saying you heard about
01:03:07.000 | Indeed on this podcast. Indeed.com/deep. Terms and conditions apply. Hiring. Indeed is all you need.
01:03:17.400 | Also want to talk about our friends at Oracle. Even if you think it's a bit overhyped, AI is suddenly
01:03:24.200 | everywhere from self-driving cars to molecular medicine to business efficiency. If it's not in
01:03:29.960 | your industry yet, it is coming, and it's coming fast. But AI needs a lot of speed and computing power,
01:03:35.960 | so how do you compete without costs spiraling out of control? It is time to upgrade to the next generation of
01:03:41.400 | the cloud. Oracle Cloud Infrastructure, or as I like to call it, OCI. OCI is a blazing fast and secure
01:03:49.480 | platform for your infrastructure, database, and application development, plus all your AI and machine learning
01:03:54.680 | workloads. OCI costs 50% less for compute and 80% less for networking. So you're saving a pile of money.
01:04:03.640 | Thousands of businesses have already upgraded to OCI, including Vodafone, Thomson Reuters, and Suno AI.
01:04:09.240 | Right now, Oracle is offering to cut your current cloud bill in half if you move to OCI. For new US
01:04:15.720 | customers with minimum financial commitment, offer ends March 31st. See if your company qualifies for
01:04:20.680 | this special offer at oracle.com/deepquestions. That's oracle.com/deepquestions. Speaking of deep
01:04:30.680 | questions and AI, Jesse, let's move on to our final segment. All right, so I want to talk a little bit
01:04:36.360 | about AI here and jam way too many ideas into a short segment. The thing I want to react to
01:04:42.840 | to kick things off, is Ezra Klein's recent podcast. I have this up here on the screen for those who
01:04:50.040 | are watching instead of just listening. This is Ezra, and he is talking here with Ben Buchanan, a Biden
01:04:58.120 | administration AI official. There's a quote from this podcast I want to read and then I want to
01:05:04.760 | use that to riff off of. All right, so here is the quote. This is Ezra talking at the beginning of
01:05:09.080 | this recent episode of his podcast. He said, "For the last couple of months, I have had this strange
01:05:15.640 | experience. Person after person from artificial intelligence labs from governments have been
01:05:21.160 | coming up to me and saying, 'It's really about to happen. We're going to get artificial general
01:05:26.600 | intelligence.' What they mean is that they have believed for a long time that we are on a path to
01:05:31.160 | creating transformational artificial intelligence capable of doing basically anything a human being
01:05:36.120 | could do behind a computer, but better. They thought it would take somewhere from five to 15 years to
01:05:41.640 | develop, but now they believe it's coming in two to three years during Donald Trump's second term.
01:05:45.880 | They believe it because of the products they're releasing right now and what they're seeing inside
01:05:50.120 | the places they work and I think they're right." All right, so not surprisingly, this podcast has many of my New York Times
01:05:57.800 | reading coastal friends worried about AI. Like if you just kind of hear this wording casually and you're not in the AI industry
01:06:06.600 | and you're not a computer scientist, this does kind of sound like we're a couple years away from Skynet.
01:06:12.120 | Right? Machines getting out of control. Machines making us unnerved about turning them off. Machines
01:06:20.120 | that are causing consequences that we didn't anticipate. So existential challenges. But are we?
01:06:27.560 | Well, no. And I want to start by saying there's a meaningful distinction between AGI and that other
01:06:34.600 | scenario which we should better call superintelligence. And I think it is worth reviewing
01:06:44.280 | this distinction, if for nothing else so that you'll have a more accurate view of what's coming
01:06:44.280 | and also have a more accurate view of other things that maybe are farther in the future that's worth
01:06:49.320 | keeping an eye on. All right, so AGI in the way Ezra is talking about here is actually, you know,
01:06:54.280 | it sounds like a big thing. We achieve AGI, something has happened. When that happens,
01:06:59.000 | we cross that threshold, something is worrisome. It is really, you can think of it as an
01:07:02.920 | arbitrary quality threshold for the things that these language models are already doing.
01:07:10.440 | This is actually what experts mean when they talk about AGI right now. It's like right now,
01:07:14.840 | I can ask ChatGPT to create a memo summarizing whatever historical factors relating to like the
01:07:21.960 | adoption of a certain type of technology. And it will do this like pretty well. It'll like pull on
01:07:27.400 | information. It'll be written properly. It'll be on the topic. Probably not as good as like having a
01:07:33.480 | researcher do it, but it'll do it pretty well. It's going to get better at this. And at some point,
01:07:38.280 | it'll get good enough at writing these memos that we'll say it's crossed the AGI threshold. Like
01:07:41.800 | that's as good as like a good researcher would do. It's like this arbitrary threshold.
01:07:45.320 | So it's not that when we cross that threshold, there'll be suddenly new things that artificial
01:07:51.080 | intelligence can do that it hasn't been able to do before. It's the things you already know it can do
01:07:55.000 | have gotten past the sort of arbitrary subjective threshold of like that seems as good as like humans
01:08:00.280 | are doing it. It's like kind of around there right now. Like it can write a pretty good memo. It can write a
01:08:04.120 | pretty good joke. It can write pretty good computer code, but not quite as good as a person. Like soon
01:08:08.280 | it will be as good as a person. That's AGI. Now this has real economic and security consequences. The
01:08:15.960 | better these models get at these things, there are these real world consequences that get worse.
01:08:22.200 | And Ezra and Ben get into these in the interview. It really is about both security and economic
01:08:27.160 | consequences of this. But when you recognize that what AGI is, is an arbitrary quality threshold on
01:08:31.800 | stuff AI is already doing pretty well, and not some sort of new capability, we realize it is serious,
01:08:38.600 | but it's not the type of thing that James Cameron's going to make a movie about. Let's talk though about
01:08:44.280 | the things that James Cameron did make a movie about. And that is this idea of super intelligence of AI
01:08:50.120 | getting increasingly autonomous and smart until it's smarter than us as designers. And then all sorts of
01:08:55.560 | sci-fi chaos happens. This is something that people are also worried about. It is separate as we just
01:09:02.600 | established from AGI, but it's not impossible. So why don't we talk about that? So here's the,
01:09:09.560 | here's a conversation Ezra did not have, which is like, what, what would be needed for super intelligence?
01:09:14.440 | And are we on track to that or not? So let's have that conversation. Now I begin to wrap up
01:09:21.480 | my conversation or consolidate my conversation of like, what would be needed to create the quote,
01:09:27.240 | like an original article from the fifties about Rosenblatt's very first self-learning
01:09:33.080 | machine learning model, the, the perceptron electromagnetic perceptron was called a
01:09:38.360 | Frankenstein machine. What would be needed for AI to become something that was James Cameron ask like,
01:09:46.520 | Ooh, either I feel uncomfortable turning it off or it's doing stuff I didn't intend for it. And it's scary.
01:09:51.800 | There's four things you need. And I call these the Frankenstein factors.
01:09:56.920 | One, understanding. So the machine has to be able to understand complex concepts,
01:10:03.080 | know the reasoning behind them, be able to like apply these understandings to create new information,
01:10:09.720 | just have like an understanding of stuff and ideas and concepts. Number two is world modeling. A system
01:10:15.320 | needs, to do this, some sort of a model of the world around it that it's in. Three, it needs
01:10:22.200 | some sort of an incentive system: here's what matters to me and what doesn't. And four, it needs
01:10:27.240 | some sort of actuation, a way of actually impacting the world around it.
01:10:32.600 | And so what you need, then, is a system that has a state of the world and, using its
01:10:39.240 | understanding and its incentives, can explore actions it can take through actuation that will change
01:10:45.800 | its world state to something that scores better against its incentives, whatever its incentives
01:10:50.360 | and values are. And that is the action loop that makes things interesting. So if you don't have a world,
01:10:57.560 | like an updated understanding of yourself and the world around you and where you are and what's happened to
01:11:01.080 | you, there's no way for you to have any sort of like a sentience because there's no memory,
01:11:04.840 | there's no state, nothing changes, right? And if you don't have incentives, then
01:11:09.560 | your system's not doing anything. If you don't have actuation, there's no way for you to act on
01:11:12.680 | your incentives. But you put those four Frankenstein factors together, now you have the possibility of
01:11:17.480 | a system that goes awry. And I think this is actually what the lay person has in mind
01:11:23.640 | when they get worried about artificial intelligence, not the economic or security consequences of
01:11:29.560 | machines continuing to get better at things they're already doing, even though that's really where
01:11:32.600 | the practical attention is right now. All right. So I have some good news about it. And then
01:11:37.400 | a piece of bad news. So there's some good news about these Frankenstein factors.
01:11:43.480 | One, all of the energy is just focused on one of those, which is building models with understanding.
01:11:51.240 | That's what these language models are right now: they have complicated understanding of concepts built
01:11:56.600 | through training. World modeling, incentives, actuation, most of this right now is just
01:12:01.640 | left to the people using the models, because there isn't any real direct, immediate economic incentive
01:12:06.680 | to building those types of machines. Having a model with lots of understanding,
01:12:11.000 | used by a human with their own incentives and model of the world, "here's what I want to do,
01:12:16.440 | and I can take the results and apply them over here," is actually the most efficient use
01:12:20.680 | of this technology right now. So there is no push to build systems with complicated versions of the
01:12:25.400 | Frankenstein factors. All of the focus right now is just on understanding, modulo some sort of
01:12:30.280 | relatively minor scripted agent systems.
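To make that concrete, here is a minimal sketch, in Python, of how the four factors could wire together into the action loop described a moment ago. Every name and interface here (`model.propose_actions`, `world.simulate`, and so on) is hypothetical, invented purely for illustration; today, the engineered parts are mostly played by the human using the model.

```python
# A minimal sketch of the four Frankenstein factors wired into one action
# loop. All object interfaces here are hypothetical, for illustration only.

def agent_loop(model, world, score, actuators, steps=100):
    state = world.observe()                       # 2. world modeling: current state
    for _ in range(steps):
        # 1. understanding: the trained model proposes candidate actions
        candidates = model.propose_actions(state)
        # 3. incentives: a hand-engineered score ranks simulated outcomes
        best = max(candidates, key=lambda a: score(world.simulate(state, a)))
        # 4. actuation: a hand-engineered interface applies the chosen action
        actuators.apply(best)
        state = world.observe()                   # refresh the world model
```

Note that in this sketch only step 1 is a trained black box; the other three are ordinary engineered code, which is exactly the next point.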
01:12:34.280 | Another thing that should make us feel better is that the
01:12:38.520 | Frankenstein factors two through four, world modeling, incentives, and actuation,
01:12:42.840 | these aren't trained, they're engineered. I think it's a really important point I mentioned before,
01:12:46.760 | but I just want to underscore it here. The language models we're building for understanding are trained,
01:12:52.760 | meaning that we have these large transformer-based neural networks, we give them a lot of training data,
01:12:56.920 | and they somehow adjust their internal wiring until they do what we ask them to do really well,
01:13:02.680 | and we don't really know how they're doing it. So they have this sense of an alien mind,
01:13:08.520 | as I talked about in my New Yorker piece from a couple of years ago, or sort of like emergent
01:13:11.880 | abilities that can catch us off guard or surprise us. That's very unsettling. We don't know how this
01:13:16.120 | thing works, but it just starts working and we kind of watch what it can do. These other factors,
01:13:22.520 | world modeling, incentives, actuation, any reasonable way we have of thinking about building these,
01:13:27.320 | these aren't systems we train and don't know how they operate. They're just going to be hand
01:13:30.440 | engineered by people. So we choose the incentives. That gives us a lot of power about what these
01:13:36.760 | machines can do. We choose like how the actuation works, what actuation it can and can't do.
01:13:41.560 | That gives us a lot of control over what these things can do. Remember the understanding that we
01:13:45.880 | have encoded in language models. Language models are inert. They're giant matrices full of numbers
01:13:51.960 | that we can run through GPUs to create probability distribution tables on tokens or token sequences.
01:13:56.760 | They have no state, no recurrence, no loops, no autonomy, right? It's just a
01:14:01.640 | large Play-Doh machine. We turn the crank and out of the other side come tokens.
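As a toy sketch of that inertness: the model itself is a pure, stateless function from a token sequence to a probability distribution, and any "memory" lives in the loop around it. The `llm` stand-in below is obviously fake; the real thing is the giant trained matrices just described.

```python
import random

VOCAB = ["the", "cat", "sat", "on", "mat", "."]

def llm(tokens):
    # Stand-in for the inert model: in reality, giant matrices run through
    # GPUs. Stateless: the same input tokens always yield the same distribution.
    return {tok: 1.0 / len(VOCAB) for tok in VOCAB}  # toy uniform distribution

def generate(prompt, n_steps):
    tokens = list(prompt)
    for _ in range(n_steps):
        dist = llm(tokens)  # turn the crank: tokens in, distribution out
        tokens.append(random.choices(list(dist), weights=list(dist.values()))[0])
    return tokens  # the harness, not the model, held the growing state

print(generate(["the"], 5))
```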
01:14:05.800 | So a language model has no ideas, has no memory, has no state, can't do anything. It is these other
01:14:13.080 | engineered systems that could work with a language model if you wanted to have a fully autonomous sort of
01:14:17.560 | digital intelligence. And those are hand-engineered, and they can be what we want them to be. And I point
01:14:21.720 | often here to the example of Cicero, the multi-model system that played the board game Diplomacy very
01:14:29.960 | well. Noam Brown worked on this before he got hired away to OpenAI. This is probably one of the closest
01:14:36.520 | systems we've seen to having all the Frankenstein factors. It used language models and it had incentives
01:14:41.880 | and it had actuation. It could actually like communicate with people over the internet to play
01:14:45.960 | this game. But because everything was engineered except for the model that like evaluated moves and
01:14:51.640 | created language, they could say, for example, we don't want you to lie. And because they could
01:14:57.880 | control the actuation and the incentives and the simulation of the world, they could just program
01:15:02.760 | in "don't consider options with lies." So there's this kind of niceness: the Frankenstein factors allow a lot
01:15:08.680 | of control outside of the understanding component. I call this IAI or intentional AI, the idea that if
01:15:14.360 | we're going to build an autonomous type of system, if and when we get to that, we'll have more control
01:15:19.080 | over that than we imagine when we think about the unsupervised training of language models.
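Here's a sketch of what that kind of engineered control can look like. This is not how Cicero was actually implemented, and the names are hypothetical; the point is just that a constraint like "no lying" can live as an ordinary filter in the hand-built planner, entirely outside the trained model.

```python
# Sketch of "intentional AI": an engineered veto over model proposals.
# All interfaces are hypothetical; the real Cicero pipeline was more involved.

def choose_action(model, world, state, score, is_allowed):
    candidates = model.propose_actions(state)         # trained component proposes
    legal = [a for a in candidates if is_allowed(a)]  # engineered constraint, e.g. "no lies"
    if not legal:
        raise RuntimeError("no permissible action")   # fail closed rather than improvise
    return max(legal, key=lambda a: score(world.simulate(state, a)))
```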
01:15:25.320 | The other thing we should feel good about is another idea. I'm actually writing a paper about this right now.
01:15:31.560 | We don't have any reason to believe that our colloquial notion of superintelligence
01:15:37.240 | is actually computationally tractable. So we just have this idea of like, well,
01:15:42.040 | computers are doing these things pretty well. So we can just imagine a sufficiently powerful computer
01:15:48.360 | doing thinking at a level that is significantly more complicated than what humans could ever do,
01:15:54.280 | and then that computer would have power over us. But this sort of cognition that computers are doing,
01:15:59.960 | you can think of these as actual problems being solved by computers. And here's
01:16:04.200 | something that every theoretical computer scientist knows, right? This is just ingrained into us in
01:16:08.120 | like every theory class we ever take, and this goes back to Turing. Most things can't be done by
01:16:12.840 | computers. Most problems are unsolvable, or if they're solvable, they're computationally intractable.
01:16:16.840 | So we don't actually know that it is possible for there to be a computer program that is somehow
01:16:24.360 | representing something like a supercharged human intelligence. That is a common
01:16:31.400 | philosophical fallacy: to imagine that as we make computers more powerful, the things computers can do
01:16:36.280 | will become increasingly more sophisticated. Complexity theorists know
01:16:41.640 | our main frustration is like most things can't be solved. Most problems are just impossible.
01:16:45.240 | Or if it's possible, it's computationally infeasible. It's actually the rare problem
01:16:51.400 | that a computer can efficiently solve. So we have no reason to believe that superintelligence
01:16:56.840 | is actually computationally feasible. We might be hitting the limits of what cognition a computer
01:17:05.480 | can do in any sort of reasonable computational efficiency, right? I mean, think about it. It might
01:17:09.880 | be true. We're building the biggest possible computer systems that are feasible right now. They might have
01:17:14.920 | tens of thousands of GPUs and these giant custom-built data centers. We can't really build these systems any
01:17:20.120 | larger. We just could be pretty close to some limit of like, this is it, right? So it's a logical fallacy
01:17:26.280 | to extend a curve. Things got smarter, they'll keep getting smarter. We don't know that's the case. Most
01:17:33.560 | problems are unsolvable. Now I'll throw in my final thing, which is like, yeah, but you don't need a super
01:17:43.240 | intelligence for one of these autonomous systems built on the four Frankenstein factors to create a lot of
01:17:49.240 | problems. In fact, some of the common scenarios we imagine of these things creating a lot of problems
01:18:00.520 | are more about a recurrence that spirals out of control. I have some understanding, which means I can do a lot
01:18:00.520 | of stuff. So I'm a system that has a language model I can use, which means, for example, I can analyze
01:18:06.520 | code and produce computer code, and I'm pretty good at producing computer code. And I have an incentive
01:18:10.360 | over here that says I want to spread somehow, or it thinks that's what I want to do. And I have a world
01:18:15.960 | model that's like trying to figure out, it's evaluating different things we could do against
01:18:20.120 | those incentives. And then like that model might simulate and figure out like, oh, the right thing
01:18:24.120 | to do is to like, try to break through the security of this network and copy myself over here. And then
01:18:30.440 | the program is going to do the same thing. And then we have actuation, I can actually like communicate on
01:18:34.280 | a network and copy code. And it might not be anything super intelligent, but you turn this thing on,
01:18:38.600 | you come back the next day, and there's 100,000 copies of this bringing down computer networks around the world.
01:18:44.200 | So like, actually, the real first concern with an autonomous-style AI system, I think, is not going to be
01:18:51.400 | that we get something alive that we feel bad about turning off. It's not going to be Skynet superintelligence.
01:18:56.360 | It's going to be like a supercharged computer virus. It's going to be like the Morris worm from hell.
01:19:03.480 | And that's a reference, Jesse, to Robert Morris. In the early days of the internet,
01:19:07.640 | when it was still really just something among universities, a young graduate student at Cornell,
01:19:12.760 | Robert Morris, wrote this very simple program. And he was like, oh, it's going to spread
01:19:17.720 | itself by exploiting, I think it was a flaw in the Unix sendmail program, like an early email program,
01:19:23.480 | it could use to copy itself onto other computers. It was kind of just an experiment. And it ended
01:19:27.320 | up like taking over and crashing half the internet. That was the Morris worm.
01:19:32.440 | And interestingly, I ended up taking distributed systems with Robert Morris at MIT during my graduate
01:19:36.840 | days. And there was always this story that he wasn't allowed to join, there's a big pistol shooting
01:19:43.320 | team at MIT, they have a world-class team. And it was always like, Robert Morris tried to sign up
01:19:47.960 | to do pistol shooting and couldn't because he has a federal record from the Morris worm. The story
01:19:52.680 | doesn't check out if you really think about it, the pieces don't check out, but it was a big story.
01:19:57.880 | His class was hard, distributed systems with Robert Morris. Anyways, that type of thing
01:20:02.600 | can happen. So this is where I think the Frankenstein factors get out of control. But okay, now I'm going
01:20:07.880 | to flip that again into a positive, which is, that's not a bad way to bring attention to
01:20:14.280 | the potential danger of the Frankenstein factors all coming together: you have some supercharged
01:20:18.440 | Morris worm type thing happen that catches everyone's attention, like, whoa, we've got to be much
01:20:23.880 | more careful about when we attach a world model and actuation to understanding; maybe actuation needs
01:20:30.680 | to remain under human control. And you know, this is going to be some of the really interesting sort of
01:20:36.440 | AI safety talks of a few years from now. Right now, the focus is on AGI, which again, that's a scary term,
01:20:41.720 | we should really just think about a quality threshold for stuff that language models already
01:20:46.120 | do, past which their potential economic and security impacts become harder to ignore. That's what we're
01:20:51.000 | focusing on now. I think that's hard for a lot of people in their day to day life to get their arms
01:20:54.360 | around. But if we want to talk about autonomous AIs doing stuff, it's the Frankenstein factors that'll
01:21:01.800 | matter. And that's a whole other complicated story. But it's one that I don't think is as scary.
01:21:07.160 | Because, again, that "I don't want to turn this off" or Skynet scenario, there's
01:21:11.160 | so many things between us and that, and so many other things are going to happen first. So basically,
01:21:15.240 | Jesse, I'm just ranting on AI here. I'm covering a lot of ground. This is like multiple different
01:21:19.240 | papers I could write right now. But I just listened to Ezra's podcast and figured, let's rock and roll.
01:21:25.160 | So is the paper for Georgetown? Oh, I don't know. I'd write it for an academic journal.
01:21:28.920 | Yeah, yeah, it'd be a Georgetown thing. By the way, I'm looking at this video from Ezra's podcast.
01:21:34.120 | He now wears like a blazer, and I didn't even know that was him.
01:21:38.680 | Well, that's not him. That's Ben. Oh, yeah.
01:21:40.360 | Yeah. Yeah. He has a beard now. I've done his show a couple of times. It used to be more casual.
01:21:45.480 | Uh, yeah, look at that. He's got like, he, I should wear a blazer. Look, they have a fireplace.
01:21:50.840 | Oh man, we got to up our game. New York Times is up there again. When I first did Ezra's show back
01:21:54.920 | when he was still at Vox, they used Slate Studios here in downtown DC. And it was just like a nondescript
01:22:01.960 | studio with sound crates, and there was no video back then. So we were just in a t-shirt or whatever.
01:22:08.520 | And then during the pandemic, I did the show and it was just at his house. And yeah, he would just Zoom
01:22:13.080 | in or whatever. So he's upped his game. So what's an example of a problem that can't be solved?
01:22:19.480 | Like, is there a God? No. So, okay. Not to go down this rabbit hole too much,
01:22:24.600 | but you have to formalize what we mean by a problem. So one way you can formalize
01:22:30.680 | a problem is you can imagine your abstract computer algorithm is given an input,
01:22:37.560 | like some sort of data or like a string. And all you have to do is accept it or reject it. Is this good
01:22:43.880 | or bad? And so in this formulation, this is Mike Sipser's formulation of computability
01:22:48.600 | from his famous textbook. I also studied with Mike Sipser. In this formulation,
01:22:53.160 | a problem is just a collection of inputs that you should accept. So to solve a problem in this
01:22:58.360 | particular formulation is just to have the ability of accepting any input that's from
01:23:04.440 | that set and rejecting anything that's not. And because these sets can be infinite,
01:23:09.480 | like you can't just have it all hard coded, right? If you just think of a problem that way,
01:23:14.600 | the number of problems is what's known as uncountably infinite. Whereas the number of
01:23:19.720 | programs is countably infinite. And these are two different sizes of infinity. They're vastly
01:23:23.560 | different. It's the difference between whole numbers and numbers with like infinite decimal places.
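For the record, here is that counting argument in standard notation (my gloss, not from the episode): programs are finite strings over a finite alphabet, while problems are arbitrary sets of strings.

```latex
\[
|\text{Programs}| = |\Sigma^*| = \aleph_0
\qquad\text{but}\qquad
|\text{Problems}| = |\mathcal{P}(\Sigma^*)| = 2^{\aleph_0} > \aleph_0 \quad \text{(Cantor)}
\]
```

So almost every problem has no program that decides it.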
01:23:27.240 | And Turing did all this, by the way, in his original paper, "On Computable Numbers,
01:23:32.040 | with an Application to the Entscheidungsproblem." It's the famous 1930s paper that laid out
01:23:36.520 | all these issues pre-computer. He then said, look, we can identify specific
01:23:41.320 | problems that can't be solved. Now, most of these
01:23:45.080 | uncountably infinite problems are just abstract problems;
01:23:49.720 | most of them don't have a short summary. But there's a lot
01:23:53.800 | you can identify that are specific problems that can't be solved. The very first unsolvable problem
01:23:58.280 | identified by Turing was the halting problem. So he said, like, imagine here's your challenge.
01:24:03.560 | You have a program and the input we're going to give your program is two things, another program
01:24:10.040 | and an input to that other program, right? So like, here's some source code and here's like a file
01:24:15.960 | you're going to run the program on. And your job as the program is to say, if I run this input program
01:24:22.040 | on this input, will it halt or will it loop forever? Like if I run this on this, will it eventually halt
01:24:29.080 | or will it loop forever? That's called the halting problem. This turns out to be
01:24:33.080 | a mathematically significant problem, because they were trying to understand
01:24:39.000 | whether there exists a machine that can tell if there's a proof for every statement; there's this whole
01:24:43.960 | thing about mechanically computing proofs for math problems, but that's from before computers,
01:24:49.480 | so put that aside. He pretty easily proved the halting problem can't be solved, and he did it by contradiction. He said,
01:24:54.680 | like, let's assume you had a program that could always solve that. I'm going to use that as a
01:24:58.600 | subroutine in another program that contradicts itself. So that was the very first problem
01:25:03.400 | we identified that couldn't be solved. There is no universal "does this program halt" program. And now,
01:25:10.120 | you know, there's whole textbooks on this, but the upshot is that most problems can't be solved.
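The contradiction is short enough to sketch in code. Assume, hypothetically, that someone hands us a perfect `halts(program, data)` subroutine; the names below are invented here, and the sketch is deliberately not runnable, since the whole point is that `halts` cannot exist.

```python
# Sketch of Turing's proof by contradiction. `halts` is the assumed oracle:
# halts(program, data) returns True iff program(data) eventually halts.

def contrarian(program):
    if halts(program, program):  # oracle says "halts on its own source"...
        while True:              # ...so loop forever instead
            pass
    else:
        return                   # oracle says "loops forever", so halt immediately

# Now consider contrarian(contrarian):
#   * If halts(contrarian, contrarian) is True, contrarian loops forever: the oracle was wrong.
#   * If it is False, contrarian halts immediately: the oracle was wrong again.
# Either way, no correct `halts` can exist.
```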
01:25:14.440 | And we don't know where superintelligence falls into this universe of problems. Then, of the problems
01:25:19.320 | that can be solved, most can't be solved computationally efficiently. So there might be
01:25:24.600 | problems where, in theory, yes, you can solve this problem, but if we can't solve it in a number of
01:25:29.560 | computing steps that can be expressed as a polynomial of the input size, it's something that
01:25:34.360 | the most powerful computers in the world will run on until the heat death of the universe and never solve.
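Some back-of-the-envelope arithmetic makes that vivid. Assume, generously, an exascale machine doing 10^18 operations per second and an algorithm that needs 2^n steps on inputs of size n:

```python
# Exponential-time algorithms vs. a hypothetical 10^18 ops/sec machine.
OPS_PER_SEC = 10**18
SECONDS_PER_YEAR = 3.15 * 10**7

for n in (50, 100, 200):
    years = 2**n / OPS_PER_SEC / SECONDS_PER_YEAR
    print(f"n = {n}: 2^n steps take about {years:.1e} years")

# n = 50 finishes in a blink; n = 100 already needs ~4e4 years;
# n = 200 needs ~5e34 years, against a universe age of ~1.4e10 years.
```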
01:25:38.120 | So like most problems that can be solved are computationally intractable. So like what we're
01:25:42.680 | really talking about when we talk about solving problems is problems that can be solved and can happen to
01:25:47.000 | be solved in like a relatively small amount of time. And that's like a pretty small collection
01:25:51.480 | of things. And those are the things we do with computers, but superintelligence, we don't know.
01:25:56.600 | If you ask me, it's not. I took theory with Mike Sipser. He was the chair of the math department at MIT.
01:26:03.080 | He's a cool guy. All right. Well, anyways, that's enough geeking out. I don't know what's going to,
01:26:08.360 | what's losing more viewers, all the basketball references from the in-depth segment last week or the Mike Sipser
01:26:13.560 | references about computability and complexity theory this week. But between those two, I think we're down
01:26:18.360 | to like three listeners, and they are like MIT graduates who like basketball. So there we go.
01:26:24.840 | So we'll be back next week with another episode. Thank you for listening. And until then, as always,
01:26:29.800 | stay deep. Hey, if you liked today's discussion of minimally viable productivity systems, you might also
01:26:35.240 | like episode 342, in which I talk about the good life algorithm, a mechanical way of trying to figure
01:26:42.840 | out what's going to make your life more meaningful. Check it out. I think you'll like it. So I want to
01:26:46.760 | talk today about the desire to build a good life, one that's focused on what you care about, one that feels