#1 Rule That Made Sam Altman Insanely Productive (No One Talks About This) | Cal Newport

Chapters
0:00 Sam Altman On Productivity
20:07 How do I prevent administrative sprawl?
29:22 When is the best time to schedule Deep Work?
30:54 How can a gardener gain career capital?
34:33 How can one go from good performance to exceptional performance?
36:43 Is it better to write with a single monitor?
41:31 Slow Productivity in a changing world
46:10 Rory McIlroy and his phone
55:02 Will AI Destroy Humanity by 2027?
00:00:00.000 |
So as someone who writes and talks a lot about producing meaningful stuff in a distracted world, 00:00:05.180 |
I always get excited when prominent individuals give us insight into their own processes 00:00:11.000 |
for achieving this goal. So you can imagine how happy I was when I saw Tim Ferriss recently linked 00:00:16.940 |
to a blog post that was titled simply Productivity that was published in 2018 by OpenAI's Sam Altman. 00:00:27.040 |
Here's the opening two sentences of this essay. 00:00:30.560 |
I think I am at least somewhat more productive than average and people sometimes ask me for 00:00:35.580 |
productivity tips. So I decided to just write them all down in one place. So I'm excited about 00:00:42.560 |
getting to this essay because when Sam wrote it, 2018, think about it, OpenAI was at a crucial 00:00:48.080 |
turning point. They had just released GPT-1, but they were operating as a nonprofit. Elon Musk had 00:00:55.100 |
just left the board of directors after failing to convince the board that they should merge OpenAI 00:00:59.360 |
with his Tesla company to help out its financial situation. Instead, Sam led a move from a nonprofit 00:01:05.720 |
the next year to a capped profit status that opened up venture capital funding. They could hire a bunch 00:01:10.480 |
more talent and that's really where the OpenAI story we know today really took off. So he was pretty 00:01:16.280 |
productive during this period. So it's useful to look back and say, how was he thinking about getting 00:01:22.420 |
important work done on the eve of OpenAI making all of these important leaps? So what I'm going to do is 00:01:30.440 |
pull out, I think I have five, let me look at my notes here. I have five ideas from his essay and for 00:01:37.040 |
each we'll get into it. I agree with a lot. I disagree with others. I think there's some big 00:01:40.840 |
important ideas he highlights in some. So we will get into Sam Altman's productivity essay. Let me just 00:01:45.360 |
load it on the screen now for those who are watching instead of listening. This is what the essay looks 00:01:50.520 |
like. I miss that classic blog format, Jesse, from back at the end of Web 2.0, where you kept things simple and just 00:01:56.600 |
wrote long essays. But here it is published in April of 2018. So seven years ago, basically to the day. 00:02:02.380 |
That's what it looks like. So I'm going to jump through here and I'm going to pull out some quotes. All right, 00:02:07.100 |
Jesse, we can bring it back to full screen. All right, here's idea number one. I'm reading now from Sam's 00:02:12.540 |
essay. Compound growth gets discussed as a financial concept, but it works in careers as 00:02:18.920 |
well and it is magic. A small productivity gain compounded over 50 years is worth a lot. So it's 00:02:24.780 |
worth figuring out how to optimize productivity. If you get 10% more done and 1% better every day 00:02:30.600 |
compared to someone else, the compound difference is massive. All right, so I have mixed feelings about 00:02:37.860 |
this first idea of applying the compound growth idea to productivity. I think there's some piece 00:02:42.880 |
of this that's right and some piece of this that's not right. That getting 1% better, sure. If you could 00:02:49.260 |
get 1% better at something every day, you are compounding. So your current skill level grows that 00:02:56.060 |
your 1% is being applied to. And if you put that out over a certain number of years, you would be the 00:03:01.820 |
world's very best expert in like five years or however the math works out. That's not really how getting 00:03:07.360 |
better works though. It typically takes long periods of practice to reach new levels of skill, and 00:03:12.560 |
each level of skill is harder to get to than the last. So it's almost more of a linear, or like a slow 00:03:19.760 |
linear, function than an exponential function like compound interest would give you. You're kind of 00:03:23.920 |
getting better fast at first and then it really slows down. It just gets harder to get to each new level. 00:03:28.820 |
Whereas with compound interest, you get a curve that picks up speed as it's going. I'm also concerned about 00:03:34.580 |
him making a slight shift. He's conflating getting 1% better, as an example of compound 00:03:40.280 |
growth, with doing 10% more each day. This doesn't really jibe with my study of long-term productivity. 00:03:46.820 |
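To put rough numbers on the difference between those two curves, here's a minimal sketch in Python; both growth models are toy assumptions for illustration, not claims about how skill actually accrues:

```python
import math

# Toy comparison of the two curves being contrasted here: true daily
# compounding ("1% better every day") versus growth that slows down
# because each new level of skill is harder to reach than the last.
# Both formulas are illustrative assumptions, not models of real skill.

def compound_skill(days: int, rate: float = 0.01) -> float:
    """Skill multiplier if you improve by `rate` of your current level each day."""
    return (1 + rate) ** days

def diminishing_skill(days: int) -> float:
    """Skill multiplier if improvement slows as levels get harder (log-like)."""
    return 1 + math.log(1 + days)

for days in (30, 365, 5 * 365):
    print(f"{days:>4} days: compounding {compound_skill(days):>14,.1f}x | "
          f"diminishing returns {diminishing_skill(days):.1f}x")
```

After five years the compounding model says you would be tens of millions of times more skilled, which is exactly the absurdity being flagged here; the diminishing-returns curve never leaves single digits.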
If you looked at my book, Slow Productivity, which came out last year, one of the things I do is study 00:03:51.780 |
some of the most historically productive people, meaning like what they produce, historical figures 00:03:57.380 |
that produce things that we look back at now and say that was super important. What a productive 00:04:01.260 |
intellectual life they had. So Galileo or Newton or Marie Curie or Jane Austen, et cetera. 00:04:06.200 |
And what you find in these stories is their key was not getting more done each day. 00:04:12.880 |
In fact, what I highlighted was there's a certain notable slowness or even lack of urgency 00:04:19.460 |
behind their biggest achievements. They thought about it. They had long digressions into other 00:04:25.300 |
interests. They would come back to them. They would let it marinate. And ultimately, 00:04:29.260 |
it didn't really matter. The busyness or freneticism of any given day wasn't important. 00:04:34.680 |
It was more about the consistent application of thought over a long period of time that eventually 00:04:38.640 |
led to really big breakthroughs. So when it comes to producing really meaningful stuff, 00:04:43.160 |
I don't know that this idea of I get 10% more done each day really matters. I think where that 00:04:48.780 |
will lead to more often than not is just busyness. That's the easiest thing to get 10% more done of. 00:04:53.180 |
And busyness doesn't necessarily transmute into results. All right. So that's an idea where I'm not 00:04:57.500 |
completely on board with the way Altman is summarizing things. Idea two, however, I'm very 00:05:03.360 |
much on board with. Let me read now from Altman's essay. It doesn't matter how fast you move if it's 00:05:09.760 |
in a worthless direction. Picking the right thing to work on is the most important element of productivity 00:05:15.280 |
and usually almost ignored. So think about it more. I make sure to leave enough time in my schedule to 00:05:21.380 |
think about what to work on. The best ways for me to do this are reading books, hanging out with 00:05:25.320 |
interesting people, and spending time in nature. Interestingly, this is somewhat contradictory to 00:05:32.020 |
idea number one, which is like, hey, get 10% more done each day. It'll add up. And you say, no, 00:05:35.600 |
take your time. Don't get started. Think more. Hang out in nature. Hang out with interesting people. 00:05:39.340 |
Read, wait to get started. Really make sure that you have the right thing to work on. I'm a big believer 00:05:44.500 |
in that idea. I remember I wrote an essay about this way back early in my writing career. I wrote an 00:05:50.980 |
essay for Ramit Sethi's blog, and it was called Don't Get Started. My argument was, it is basically 00:05:59.200 |
Sam's argument. It is really hard to figure out the right thing to work on. The thing that's going to 00:06:06.400 |
matter, and that uses the rare and valuable skills that you currently possess. And it's probably 00:06:12.640 |
going to take a long time once you choose the right thing to get really good results. So you want to be 00:06:16.320 |
really wary of just diving into things, because when you dive into things, you're basically flooding your 00:06:20.000 |
circuits with activity, and you are taking them out of the game for working on other more important 00:06:25.080 |
things. So resist working on things. I often say with big projects, resist working on them. Think 00:06:30.260 |
about them, read about them, get excited about them, but resist working on them until you can't help it 00:06:34.860 |
anymore. Sort of like me with book writing, me with this podcast. Man, I resisted podcasting for a long 00:06:42.780 |
time. I learned a lot about it. No, it's not quite right. I don't want to just do it for the sake of 00:06:48.200 |
doing it. I don't like activity for the sake of activity. It took me years before I said, okay, 00:06:52.240 |
I can't avoid this any longer. That's really what you should be looking for. Fewer things, 00:06:56.880 |
done better. That has been a theme through my work for so long that one of the original 00:07:03.320 |
mottos of my Study Hacks blog, back when it was still focused on students, was do less, do better, 00:07:07.720 |
know why. Do less, do better. It's a key idea. So I think Sam is absolutely onto it there. 00:07:13.440 |
That's probably reflected in OpenAI. They kind of chose their points. They chose their battles where 00:07:18.420 |
they thought there could be big work, for example, working on very large scale language models. 00:07:22.120 |
And then they went down that road year after year. So they carefully chose what they were working on 00:07:27.740 |
and then really gave that a lot of attention over time. That's where the big breakthroughs came from. 00:07:31.820 |
All right, idea number three. Now we're going to get into the weeds of actual time management. 00:07:38.420 |
I highly recommend using lists. I make lists of what I want to accomplish each year, each month, 00:07:44.300 |
and each day. Lists are very focusing and they help me with multitasking because I don't have to 00:07:48.940 |
keep as much in my head. If I'm not in the mood for some particular task, I can always find something 00:07:53.360 |
else I'm excited to do. Later, he says, I don't bother with categorization or trying to size tasks 00:07:58.980 |
or anything like that. The most I do is put a star next to really important items. I try to prioritize 00:08:03.340 |
in a way that generates momentum. The more I get done, the better I feel. And then the more I get 00:08:07.600 |
done. I like to start and end each day with something I can really make progress on. A couple of 00:08:13.560 |
interesting points about his approach here. One, we do see him preaching a principle that I talk about a lot 00:08:20.620 |
on this show, which comes from David Allen, who himself took it from Dean Acheson, which is the 00:08:26.080 |
notion of full capture. Having things written down and not being kept track of just in your head is 00:08:30.680 |
critical for avoiding unnecessary stress and forgotten deadlines and scrambles. Do not use your brain as a 00:08:36.740 |
task storage device or a calendar. Use task storage devices or calendars for that role. And he makes that 00:08:41.740 |
clear here. He says, look, they help me with multitasking so I don't have to keep as much in my head. 00:08:47.360 |
By multitasking, he means just having multiple projects going on at the same time. 00:08:50.740 |
So that's useful. I also notice, however, the simplicity of his systems. He just like writes things down on a 00:08:57.420 |
list on paper. He doesn't break it up into categories. He doesn't do any sort of prioritization. 00:09:01.380 |
He just sort of looks at the list and says, what do I want to work on next? 00:09:04.600 |
Maybe he'll put a star next to something to really remind him that it's important to get it done. 00:09:10.820 |
That type of basic system, sort of what's known as an MIT system, 00:09:15.300 |
most important task system, it's been around. It's an idea that's been around for a while. We hear 00:09:22.960 |
Julie Morgenstern talking about this. We hear Brian Tracy talking about this. 00:09:26.900 |
We hear Leo Babauta of Zen Habits talking about this. 00:09:29.220 |
Even more recently, when Oliver Burkeman came on my show last fall, this was basically 00:09:33.220 |
what he was pitching. Get the important thing done first and then kind of do your best with the rest. 00:09:37.980 |
Hey, it's Cal. I wanted to interrupt briefly to say that if you're enjoying this video, 00:09:43.200 |
then you need to check out my new book, Slow Productivity, The Lost Art of Accomplishment 00:09:49.500 |
Without Burnout. This is like the Bible for most of the ideas we talk about here in these videos. 00:09:56.660 |
You can get a free excerpt at calnewport.com slash slow. I know you're going to like it. 00:10:03.800 |
Check it out. Now let's get back to the video. 00:10:06.140 |
It works and it doesn't. So, I mean, it works in the sense of if your goal is to make progress on 00:10:14.620 |
what's important, this is emphasizing that really just means doing the important things and making 00:10:19.660 |
progress. So I think the fact that Sam had this sort of simple system, I just make sure the important 00:10:26.100 |
stuff gets done, shows how, when it comes to long-term productivity, this is really different 00:10:32.140 |
than busyness. This is really different than I'm quick on my emails and Slack, I'm jumping on a 00:10:35.900 |
bunch of calls. For Sam, his productivity was dependent on doing a small number of things 00:10:41.140 |
consistently well. The issue is for a lot of us, there's a lot of other stuff too that we have to 00:10:46.680 |
do that is not just, hey, here's the project I want to work on. It is, I have to get back to this 00:10:52.220 |
person. This dean wants to know this. My students need me to sign this. The parking office needs me to 00:10:57.300 |
update my license plate for the new license plate readers they installed in the Leavey garage. 00:11:00.900 |
Just making these up off the top of my head here. And we can't say no to those things. So that's the 00:11:07.140 |
context where you actually probably need a more complicated task storage system because you can't 00:11:12.080 |
just, if you have many unignorable demands on your time, so you have the big, like Sam's focusing on, 00:11:18.460 |
and the small. If you just have a big list and you're just trying to choose from there, hey, 00:11:22.520 |
what's the big thing I want to work on today? That small stuff is going to eat away at you 00:11:25.640 |
because you're going to miss things. People are going to yell at you. Small things will get missed. 00:11:28.940 |
People are like, where's this? Where's that? Your car is going to get a ticket because you didn't 00:11:32.260 |
update your license plate information. And that's going to become a source of stress and a problem. 00:11:35.500 |
So I like the point that Sam is making here. It's like ultimately just doing something important 00:11:41.160 |
every day is what matters for the stuff that people will remember you for. The busyness doesn't 00:11:45.340 |
produce stuff that matters. But if you have a lot of that other stuff, smarter task storage might be 00:11:51.840 |
important, right? This is why I like to store stuff in cognitive context. So I could say I'm going to 00:11:56.120 |
spend time on like my professor role and just see tasks for that divided into statuses. So it's very 00:12:00.780 |
easy to sort of see what's what and what needs to get done. So smarter task storage, I think it's 00:12:05.760 |
necessary if you have a lot to do, but don't forget Sam's big lesson here, which is, yeah, but the small 00:12:09.840 |
stuff is secondary. Do the best you can with that. Organize it in a way that's going to save you from 00:12:15.280 |
stress, but it's really working on the important things each day that's going to matter. 00:12:18.500 |
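For anyone who thinks in code, here's a minimal sketch of the two storage styles being contrasted here, Sam's flat starred list versus storage by cognitive context and status; the roles, statuses, and example tasks are all hypothetical:

```python
from dataclasses import dataclass, field

# Minimal sketch of the two styles contrasted above: Sam's flat list
# with stars versus storing tasks by cognitive context (role) and
# status. The roles, statuses, and tasks are hypothetical examples.

@dataclass
class Task:
    title: str
    role: str = "general"    # cognitive context, e.g. "professor"
    status: str = "waiting"  # e.g. "waiting", "in progress", "done"
    starred: bool = False    # Sam-style marker for really important items

@dataclass
class TaskList:
    tasks: list = field(default_factory=list)

    def add(self, task: Task) -> None:
        self.tasks.append(task)

    def next_task(self):
        """Sam-style pick: starred items first, otherwise anything open."""
        open_tasks = [t for t in self.tasks if t.status != "done"]
        starred = [t for t in open_tasks if t.starred]
        return (starred or open_tasks or [None])[0]

    def by_context(self, role: str) -> dict:
        """Cal-style view: one role's tasks, grouped by status."""
        view: dict = {}
        for t in self.tasks:
            if t.role == role:
                view.setdefault(t.status, []).append(t.title)
        return view

todo = TaskList()
todo.add(Task("Draft the chapter outline", role="writer", starred=True))
todo.add(Task("Sign transfer-credit form", role="professor"))
todo.add(Task("Update plate with the parking office", role="professor"))

print(todo.next_task().title)        # the starred, important thing
print(todo.by_context("professor"))  # the small stuff, out of your head
```

The point of the split is the one just made: the starred pick keeps the important thing in front of you, while the per-role, per-status view keeps the small, unignorable stuff out of your head without letting it get missed.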
All right. Idea number four from Sam Altman. Here's Sam. 00:12:22.540 |
I try to be ruthless about saying no to stuff and doing non-critical things in the quickest way 00:12:27.960 |
possible. I probably take this too far. For example, I am almost sure I am terse to the point of rudeness 00:12:33.140 |
when replying to emails. I generally try to avoid meetings and conferences as I find the time cost to 00:12:37.480 |
be huge. Again, I think there's a critical point here. Whether or not you have the power to say no to 00:12:42.820 |
everything, it emphasizes how almost everything doesn't matter. 00:12:47.120 |
This is a theme, I think, that goes through Sam's essay here. 00:12:51.940 |
The things that mattered, that took OpenAI from a struggling nonprofit to a company with a massive 00:12:59.020 |
valuation and huge impact on the world technological and economic scene, 00:13:02.980 |
the things that mattered were small and hard, and he was pretty ruthless about coming back to them. 00:13:09.160 |
So if you worry about saying no to stuff, thinking that this somehow makes you less productive, 00:13:13.720 |
keep in mind this uber productive individual's productivity was built on his default of, 00:13:19.360 |
I really don't want to do stuff. Most stuff is just getting in the way, 00:13:24.020 |
taking my time away from the stuff I know for sure is going to be really valuable. 00:13:30.000 |
More executives should probably follow that advice as well, right? I mean, I've talked to more than a few 00:13:36.360 |
executives who would have been in Sam's position at different companies who say, 00:13:38.940 |
no, that's my job, right? I have to be in meetings. How can I say no to them? 00:13:41.540 |
And he's saying, well, because if you want to be good at what you do, 00:13:44.300 |
the meetings aren't really what matters so much as like you 00:13:47.700 |
understanding, pushing, and developing the ideas that are going to make the biggest difference. 00:13:54.280 |
All right, idea number five. Here's Sam again: I have different times of day I try to use for different kinds of work. The first few hours of 00:13:58.580 |
the morning are definitely my most productive time of the day. So I don't let anyone schedule 00:14:02.340 |
anything. Then I try to do meetings in the afternoon. Another great idea, the morning is a 00:14:07.160 |
good time for deep work for most people. So just having a simple rule, I don't do meetings until this 00:14:11.680 |
point makes a big difference. I was chatting recently with a former president of 00:14:18.040 |
a large company. And he was saying a huge change for him was that his staff was just filling his 00:14:23.100 |
days with meetings. And at some point they said, okay, you know, we're going to protect 00:14:28.740 |
time in the morning for just working on your own stuff. And he was worried this would make him a worse 00:14:34.200 |
executive. Instead, it made him way better. There's endless people that want your time. There's endless 00:14:38.560 |
meetings you could take. You're already constraining the meetings you can take because your day is only 00:14:43.740 |
so long. So why not just change those constraints even more so that you have more time to work on 00:14:48.140 |
the big thoughts that are going to matter as well. If Sam Altman can do it, you can probably do it 00:14:52.100 |
as well. So ultimately I think Sam has a lot of non-surprising advice here. I mean, 00:14:58.960 |
these ideas would have fit well in my book, Slow Productivity. These ideas would 00:15:04.200 |
have fit well in my book, Deep Work. And perhaps this is not surprising because he ended up being very 00:15:08.520 |
successful at what he did. He worked on deep stuff in a distracted world. I want to end with a 00:15:13.680 |
quote from Sam's essay that I think summarizes well the gist of his whole philosophy here. 00:15:18.960 |
Don't fall into the trap of productivity porn. Chasing productivity for its own sake isn't helpful. 00:15:25.100 |
Many people spend too much time thinking about how to perfectly optimize their system and not nearly 00:15:30.480 |
enough asking if they're working on the right problems. It doesn't matter what system you use or 00:15:35.080 |
if you squeeze out every second if you're working on the wrong thing. And I think that gets to the heart 00:15:39.460 |
of it. Spend a lot of time figuring out what matters. That's not an easy question. But once 00:15:43.740 |
you've answered it, spend a lot of time working on that thing. And do your best with whatever else. 00:15:48.040 |
And that'll work itself out. Don't stress yourself out too much. It's impossible to do it all anyways. 00:15:52.080 |
But working hard on the right thing consistently, that's what matters. Everything else is just 00:15:57.280 |
taking care of the details that try to get in the way of that. 00:16:01.000 |
Great way of thinking about it. So you need some systems and some rules, but mainly you 00:16:04.960 |
just have to do the work of finding what to work on and putting in the right time. 00:16:07.640 |
I wonder, his rules probably have changed by now just because that company's grown so much. 00:16:19.760 |
That's true. But I'm just thinking like the size of the company now versus then. 00:16:23.460 |
It'd be interesting to check in. I think now he probably is all meetings. I've never met Sam 00:16:28.240 |
Altman, but busy guy. All right. Well, we have some more, speaking of Sam, we're going to do 00:16:33.840 |
some AI stuff at the end of the show, but now we want to move on to some questions. But first, 00:16:38.060 |
as always, let's briefly hear from a sponsor. 00:16:41.160 |
Trust isn't just earned, it's demanded. Whether you're a startup founder navigating your first 00:16:47.480 |
audit or a seasoned security professional scaling your GRC program, proving your commitment to 00:16:52.360 |
security has never been more critical or more complex. That's where Vanta comes in. 00:16:58.080 |
Businesses use Vanta to establish trust by automating compliance needs across over 35 00:17:03.400 |
frameworks like SOC 2 and ISO 27001. Centralize security workflows, complete questionnaires up 00:17:10.040 |
to five times faster, and proactively manage vendor risk. Vanta can help you start or scale your 00:17:16.020 |
security program by connecting you with auditors and experts to conduct your audit and set up your 00:17:20.080 |
security program quickly. Plus with automation and AI throughout the platform, Vanta gives you time 00:17:25.400 |
back so you can focus on building your company. That makes sense with our AI conversation coming up later in the show. 00:17:31.160 |
Let Vanta help you on the security side so you can focus on the deep thoughts that matter. 00:17:34.660 |
Join over 9,000 global companies like Atlassian, Quora, and Factory that use Vanta to manage risk 00:17:40.360 |
and prove security in real time. Here's the good news. My listeners can get $1,000 off Vanta 00:17:46.360 |
if you go to Vanta.com slash deep questions. That's V-A-N-T-A dot com slash deep questions 00:17:52.440 |
for $1,000 off. I also want to talk about our longtime sponsors at ZocDoc. This is a big problem 00:18:00.780 |
a lot of people face. There's some sort of medical thing that shows up. That shoulder's barking, 00:18:05.980 |
your eye is itchy, the tooth is hurting, and you realize I don't have a doctor I can call for this 00:18:11.620 |
right now. The question becomes, how do I find that doctor? What most people do is maybe they 00:18:17.300 |
ask around their friends. Their friends give them a recommendation. They call that person up, 00:18:21.800 |
and that person laughs. They say, we take no insurance. Our next appointment is available in 00:18:27.980 |
2032. Quite frankly, you should be ashamed for even thinking that we could see you in our doctor's 00:18:33.620 |
office. It's like a whole socially fraught thing, especially for introverts like me. This is where ZocDoc 00:18:38.680 |
enters the scene. It's the right way to find health care providers. ZocDoc is a free app and website 00:18:43.980 |
where you can search and compare high-quality in-network doctors and click to instantly book 00:18:48.140 |
an appointment. We're talking about booking in-network appointments with more than 100,000 doctors across 00:18:53.120 |
every specialty, from mental health to dental health, primary care to urgent care, and more. 00:18:57.960 |
You can filter for doctors who take your insurance, are located nearby, and are a good fit for what you're looking for. 00:19:03.720 |
There's verified patient reviews. You can see, do I like this doctor or not? And then once you find 00:19:08.400 |
that doctor, you can see their actual appointment openings right there from ZocDoc. Choose a time 00:19:13.280 |
slot that works for you, and boom, you've booked a visit. Appointments made through ZocDoc also happen 00:19:18.780 |
fast, typically within just 24 to 72 hours of booking. You can even sometimes score same-day appointments. 00:19:25.820 |
I have multiple, I mean, I use ZocDoc. I mean, I've used it to look for doctors, but I have multiple 00:19:31.020 |
practices that use ZocDoc as well. They even use it for paperwork in advance and stuff like this. 00:19:37.700 |
So I've been in the ZocDoc ecosystem. It's definitely a go-to for me if I'm looking for 00:19:42.320 |
a new healthcare provider. It's one of these ideas that, of course, it should exist. Of course, 00:19:46.460 |
there should be a good way to use the internet to find healthcare providers and set up appointments. 00:19:50.560 |
So great service. Stop putting off those doctor's appointments and go to 00:19:54.160 |
ZocDoc.com slash deep to find and instantly book a top-rated doctor today. That's Z-O-C-D-O-C.com slash 00:20:01.460 |
deep. ZocDoc.com slash deep. All right, Jesse, let's do some questions. 00:20:08.160 |
First question is from Kelsey. I oversee the admin for over 150 students. This creates a flood of emails 00:20:14.840 |
and urgent tasks. Every form still requires my signature. How did you set up Trello and your email 00:20:20.020 |
to manage this kind of administrative sprawl without letting it dictate your day? Do your students have 00:20:24.880 |
a pipeline to follow, forms to fill out? Well, it's a good question. I mean, you have my sympathies. 00:20:30.960 |
You're overseeing admin for 150 students. I'm overseeing the computer science major for 100-something 00:20:37.300 |
students. So I'm with you, Kelsey. The first thing I would say, you talk about not letting 00:20:43.220 |
administrative sprawl dictate your day. Well, it will. Be realistic. If you're overseeing admin for that 00:20:48.200 |
many people, it's going to be a major part of your day. So we want to make sure that the expectations here 00:20:52.380 |
are realistic. We're not going to automate this work out of existence. We're not going to be doing six hours 00:20:56.260 |
of deep work a day. But it shouldn't be controlling your day like a puppet master, right? So it shouldn't 00:21:02.320 |
be you reactively bouncing from one thing to another and feeling like you're always behind or always exhausted 00:21:08.360 |
or frustrated. That I think we can do better on. The story I like to tell when trying to get at the right solutions 00:21:14.140 |
here is a story from when I was in graduate school and I was TAing for my advisor's distributed algorithms 00:21:20.300 |
graduate class. Pretty big class. I don't remember how many kids. Maybe it was like 50 or 60 kids or something like this. 00:21:25.380 |
It's a theory class, right? A lot of problem sets. I remember early on. So as a TA, I was in charge of 00:21:31.540 |
collecting those problem sets, writing the problem sets or some of the problems, getting the sample 00:21:37.300 |
solutions together. We had graders, but I would have to organize and work with the graders to do the 00:21:41.980 |
grading, then get the grades entered and those problem sets back to the students. And I remember early on, 00:21:46.120 |
it was very time consuming. I'd have all these different problem sets. My advisor and the teacher of the 00:21:53.480 |
class wanted Xerox copies as a backup. They were stapled or dog-eared together. And I would 00:22:00.360 |
like be pulling out staples and trying to run them through the Xerox machine. And this would take 00:22:03.940 |
forever. And then try to, you know, get them to the graders, then get them back to the students. Like 00:22:09.840 |
the whole thing was very time consuming. And at some point I had this epiphany. I could just ask the 00:22:15.200 |
students to do a little more, a little more that's going to make my life easier that for them, 00:22:21.200 |
who cares? It's like a small extra step, but for me could add up to a lot of savings. And that's when 00:22:25.220 |
I started saying, okay, here's how you have to hand these things in. They have to be single-sided, 00:22:30.840 |
no double-sided, that messes up the photocopying, no staples, just hand them in as a stack with your name 00:22:37.120 |
on the top of every page. And I believe I said, when you hand them in, you have to alphabetize 00:22:42.080 |
them. So as you come up, you start adding these things to the desk up here, find where your name 00:22:47.840 |
goes. So it'll be alphabetized when you're done. And then I could just take this whole stack and run 00:22:53.660 |
it through the Xerox copy all at once, the document feeder, have a copy of all of them I could just put 00:22:58.240 |
aside. And then I have all of them alphabetized. And the graders could then, when they're done grading, 00:23:02.580 |
putting them back alphabetical is easy to hand them back to the class. Like these small things, 00:23:06.060 |
a little bit of extra work I made the students do made my life much easier. Now as a professor, 00:23:12.020 |
I took this even farther, right? I mean, when it comes to like problem sets and exams, I have these 00:23:17.740 |
various administrative things that students can do that just makes my life much easier. 00:23:21.720 |
I switched, for example, to digital submission of problem sets. Do your problem set, take good images of it, 00:23:27.960 |
upload them directly to our class portal. So me and the graders can see them digitally. We can note them 00:23:33.480 |
digitally. We can enter the grades digitally. You can get them back digitally. I don't have to deal 00:23:37.980 |
with papers. I started doing this with my exams. They're taken in class, but I actually have the 00:23:43.160 |
students, when they're done taking the exam right there before they leave, take photos with their 00:23:47.640 |
phone of every page, which they can then later on my behalf upload into the online system. So we have 00:23:52.560 |
digital copies, the simplified grading and comments and returning entering grades. And I can just keep the 00:23:57.640 |
physical copies as a backup, just pick them up and put them in a drawer in my office, right? So I learned 00:24:02.520 |
this rule: having someone do 10% more on their end can make your life 100% easier. A small addition to 00:24:11.320 |
their work can make your life bigly easier, that's not a real word, but much easier, because it aggregates 00:24:18.000 |
over a lot of different people. All right. So with that in mind, Kelsey, my lesson here is I would have 00:24:22.200 |
some sort of procedure page, you know, FAQ or systems or whatever. It's a website that you can 00:24:27.500 |
easily update, where you start building out your instructions for all the common things you do 00:24:32.480 |
and handle as an administrator. And then you can just keep pointing students to this page. 00:24:36.060 |
Oh, you need an external credit approval form. Look at this page. You need transfer credit approval. 00:24:42.320 |
Look at this page. You need, like, approval for a major declaration. Look at this page. 00:24:47.300 |
So you're preventing everyone from just informally emailing you, where you have to kind of explain or 00:24:52.300 |
work with them informally to get done whatever thing they need to get done. You can then experiment on 00:24:56.880 |
this page with giving them more work to do to make your life somewhat easier. Like for example, if there's 00:25:02.220 |
a common type of form that you need to sign, which is like real easy for you to verify, but it's sort of 00:25:07.500 |
annoying. People email you, you sign them, you email them back, you have to keep track and they're always 00:25:11.480 |
coming in. You could have a procedure that says, here's a shared folder. What you need to do, 00:25:17.220 |
it's called forms to be signed. You need to set up your whole form like in the PDF and you need to put 00:25:22.560 |
it in that folder. And on Friday afternoons, I go through and I sign all of those forms in a row 00:25:29.560 |
and move them to a nearby folder called signed forms. After Friday afternoon, you can find your signed 00:25:36.340 |
form there whenever you want and move forward and submit it. You're giving them a little bit more work, 00:25:40.920 |
but you've taken every one of those forms out of your inbox as something to react to. 00:25:44.220 |
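As a purely illustrative sketch of that weekly sweep, assuming the shared-folder setup just described (folder names and paths are invented for the example):

```python
from pathlib import Path
import shutil

# Illustrative sketch of the Friday batch described above: gather
# everything from a shared "forms to be signed" folder and, once each
# form is signed, move it to "signed forms". The folder names and
# paths are made-up assumptions for the example.

INBOX = Path("shared/forms_to_be_signed")
DONE = Path("shared/signed_forms")

def friday_batch() -> None:
    DONE.mkdir(parents=True, exist_ok=True)
    pending = sorted(INBOX.glob("*.pdf"))
    if not pending:
        print("No forms waiting this week.")
        return
    for form in pending:
        # The signing itself happens outside this script, by hand or
        # with whatever PDF tool you already use.
        input(f"Sign {form.name}, then press Enter...")
        shutil.move(str(form), str(DONE / form.name))
    print(f"Batch done: {len(pending)} forms moved to {DONE}.")

if __name__ == "__main__":
    friday_batch()
```

The signing still happens however you normally sign; the script just enforces the once-a-week rhythm and the move into the signed folder.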
And instead you have 30 minutes on Friday where you do it all at once. Another type of thing you can do 00:25:51.180 |
is if there's anything where information needs to be looked up, see if the student can look the 00:25:55.740 |
information up for you and then they submit that along with what's going on. Like in my role as 00:26:00.460 |
director of undergraduate studies, for example, one of the things I'm responsible for is approving 00:26:04.580 |
requests of students who want to declare a major in computer science. When our department 00:26:10.700 |
was smaller, the way the DUS would handle this is someone would say, hey, I'm thinking about 00:26:14.460 |
declaring a major, and he would load up, using the backend systems, their schedule and see where 00:26:19.820 |
they are and what they've taken already and kind of see like, is it reasonable? Is there a way they 00:26:24.800 |
have enough, you know, enough semesters left to take the courses? Like, is there a path, 00:26:29.240 |
a reasonable path forward for them to actually finish this major? But our department has grown and 00:26:33.720 |
that takes a lot of time. And so now I just, the instructions for the students are, if you want to declare a 00:26:38.600 |
major, that's great. You can send me a note, but here's what you need to send me. You tell me all 00:26:42.520 |
the courses you've taken and then you need to give me at least a sample schedule for one way you plan 00:26:48.020 |
to finish this major, what courses you're going to take in what semester. Now I can just see at a 00:26:51.720 |
quick glance, is this reasonable or not? And this adds up, this saves me five or 10 minutes over 20 00:26:59.400 |
or 30 students. That makes a big difference. The final thing I might argue here, Kelsey, if you have 00:27:04.860 |
150 students, is having daily or every-other-day office hours for one hour. And basically when 00:27:09.280 |
something comes in, that's one off, like you don't have a procedure for it, but it's going to require 00:27:13.320 |
a more complicated discussion than just a one email message response, just keep telling people, 00:27:19.160 |
here's my office hour schedule, come to the next one you can, and we'll get into it. And just have this 00:27:24.120 |
hour, do it at different times. So it doesn't overlap, you know, if someone has a course that conflicts, 00:27:29.280 |
one of the times will work and just have people come to your office hours. So you just have this like 00:27:33.360 |
hour most days where like a lot of this work gets done. So these type of procedures plus office hours 00:27:39.680 |
can really reduce the amount of time you're reacting and get you a much more like efficient 00:27:44.020 |
batching here. But the bigger picture thing here is my idea of make a lot of people do a little bit more 00:27:49.480 |
work. It makes your life a lot easier. All right, who we got next? 00:27:53.480 |
Do you always have a line at your office hours? 00:27:59.400 |
Yeah. I mean, so I'm teaching a class there. It depends if there's a problem set due. So the way 00:28:07.260 |
I run my class, I have about a hundred kids I'm teaching. I have four TAs. And so we've set things 00:28:13.100 |
up. So there's an office hours five days a week. So it just kind of matters like when, if there's 00:28:18.380 |
something due and I'm the nearest office hours to that thing being due, I'll get a lot of students. 00:28:22.640 |
If it's not near me, whatever TA happens to have their office hours closest to that, they'll get 00:28:27.740 |
more students. I think for like homework questions, they go to the TAs more than me. 00:28:31.860 |
And I kind of set things up that way. But I get students who aren't in my classes coming to talk 00:28:36.280 |
about my books or this or that. So it's not like a long line. But you know how I am. Like there's a lot 00:28:42.020 |
of things where I've organized the amount of office hours. Like I've said, if you have questions about 00:28:45.460 |
grading, I want to hear them. But here's how I, this is another example. Here's how I want to hear 00:28:50.340 |
them. I want you to send me a screenshot from the online grading system. Here's your answer. And here's 00:28:56.120 |
the TA's comment about here's the grade you got and why. Send me that. Because I can just look at that 00:29:01.560 |
and I'll know immediately, like, is this a mistake or not? Most likely. Much more efficient than you 00:29:05.700 |
coming to my office, loading up the thing, showing it to me. I can just go through those really quickly. 00:29:11.380 |
But that takes another common sort of request, you know, out of office hours. 00:29:17.140 |
All right. Next question is from Kent. When I time block, I'm often unsure when to do deep work 00:29:23.560 |
on days without meetings. Should I dive straight into deep work? I've been experimenting with delaying email until after my first deep work block. 00:29:30.260 |
Right. Do that. Yeah. First thing is better for almost everyone. That's what Sam Altman talked 00:29:37.080 |
about in our deep dive. Yes. Don't check email before. If you're going to do a first thing, 00:29:40.960 |
deep work block, don't check your email until it's done. Checking your email is just going to load up 00:29:45.000 |
multiple different cognitive contexts in your mind that you're going to have to completely clear out 00:29:49.360 |
again to do your deep work. It's going to take you a half hour. So just get right into the work 00:29:52.480 |
and then move on to your email. If you're stressed about it, put aside more time at the end of the day 00:29:58.740 |
to kind of get on top of anything urgent in your email. Because anything that arrives after that 00:30:02.960 |
is people messaging you in the evening. So they don't have an expectation of an immediate response. 00:30:07.320 |
So I would say, try it. I would, I mean, look, if I was in charge of the knowledge work world, 00:30:12.640 |
the rule for most knowledge workers would be: no email, no meetings before 11. 00:30:17.440 |
Just take the most important thing you're working on and work on it each morning. 00:30:21.400 |
I think that'd make a big difference. Or the other model I'd pitch is the hybrid attention model. 00:30:26.040 |
Two days or three days a week, you're at home. Those days there's no meetings, 00:30:29.620 |
no expectations of emails. Three days you're at the office, you answer emails and have meetings. 00:30:34.440 |
So on the, at home days, all you're doing is deep work, no interruption. It really makes a difference. 00:30:41.880 |
Deep work without interruption versus trying to do hard work with interruption. It's night and day. 00:30:46.120 |
It's just two completely different cognitive states. All right. What do we got next? 00:30:50.140 |
Next question is from Bradley. I live in Scotland and am a gardener. In my field, 00:30:57.960 |
Well, I don't know the answer to that, Bradley, but I would say there are specific answers. 00:31:02.700 |
And what you want to avoid is guessing or writing your own story about what you want the answer to be. 00:31:09.020 |
So most of my listeners are not gardeners, but I think the specificity of your field 00:31:13.420 |
allows us to point to a more general point here, which is, 00:31:17.440 |
it's not always obvious what skills are valuable in a given field. 00:31:22.380 |
And it is worth actually going out there and doing some research to figure this out. 00:31:25.640 |
And by research, I mean actually talking to real people and actually observing who in my field is at a place I would like to be at, 00:31:31.820 |
has some attributes in their job that seem really appealing to me, be it just like status or income or flexibility or autonomy 00:31:37.960 |
or specific types of projects they get to work on, and figure out the reality of how they got there. 00:31:43.640 |
Treat it like you're a business journalist or a how-to book writer. 00:31:54.040 |
When you went from this step to that step, this impressive jump, 00:31:56.900 |
what was the key thing you were doing that other people who would want to make that jump weren't doing? 00:32:01.520 |
So you want to really isolate the things that really matter. 00:32:03.580 |
The answers aren't always what you want to hear, 00:32:06.300 |
but it's always better to deal with reality than to guess or to write your own stories. 00:32:11.340 |
Writing your own stories, I think, is the more dangerous possibility here because it's the more tempting. 00:32:15.180 |
People like to create a story about what they think matters because it matches what they want to do. 00:32:21.100 |
It has some vision of them working like an hour each day extra on whatever. 00:32:26.340 |
Like, they have some vision for what they want the answer to be because it gives them a plan that's going to be fun to do 00:32:30.680 |
that's like kind of hard but not too hard and everything works out well in the end. 00:32:34.800 |
But the problem about writing your own stories is they rarely match reality. 00:32:37.860 |
You write a fantasy, your goals are fantastical as well. 00:32:41.820 |
If you instead study the reality, you might come away and say, 00:32:47.080 |
I already see I'm not in the right place to do it. 00:32:48.940 |
Okay, I'm going to need a different type of goal. 00:32:51.200 |
But it's better you figure that out than tell yourself a story. 00:32:55.100 |
And maybe what you find out is, oh, there's an option here I wasn't thinking about, 00:32:59.580 |
and now I see how to get there, and I never would have done this type of work, 00:33:02.120 |
but now I know if I do this work, I have a good shot. 00:33:04.440 |
But this reality check-in will happen a lot with people. More in my world, it would be with aspiring academics. 00:33:12.160 |
And they might be, you know, in grad school somewhere like, yeah, I want to be like a professor 00:33:19.180 |
at like a top 25 school and, you know, be on sabbatical and writing books and like, 00:33:24.900 |
But the reality might be like, well, wait a second. 00:33:26.620 |
Like, you didn't do that great in your undergraduate. 00:33:30.520 |
You are not on that trajectory, and it's probably too late to fix that trajectory. 00:33:35.340 |
And there's not like some easy thing you can do. 00:33:37.460 |
The other place where people often tell stories is about book writing. 00:33:39.880 |
I want to be a successful writer, and I want to write my own story about how that works. 00:33:44.560 |
And it's going to be some clever scheme I have for like marketing or building an audience. 00:33:48.160 |
And then, you know, as opposed to just, here's the real story about how you become an author. 00:33:51.980 |
Because they might not want to hear that story because it's not going to go well for them. 00:33:54.580 |
So you got to be careful about writing your own story. 00:33:58.020 |
The flip side is if you have real evidence about what matters, your return on investment in 00:34:04.360 |
terms of your effort is way bigger than other people's. 00:34:06.180 |
You have a much higher chance of actually getting to cool places than if you're out there just 00:34:10.460 |
trying to like hustle or follow some sort of like highly inventive plan. 00:34:18.020 |
Yeah, he probably works around some golf courses. 00:34:29.580 |
How can I go from good performance to truly exceptional performance in my field of work? 00:34:34.500 |
Well, the only way people become exceptional is with expert guided deliberate practice. 00:34:40.180 |
That is, having a real expert on how the field works coaching you in the strategic stretching 00:34:49.040 |
of your abilities where they're weak to improve them. 00:34:51.160 |
So this sort of very systematic, expert-guided, difficult-to-do training is a necessary condition for becoming exceptional. 00:35:00.280 |
This is more clear in things like chess or baseball. 00:35:03.420 |
Chess players now, they're much better than they used to be because they have these very rigorous 00:35:07.740 |
training programs built on playing against computer chess programs where they can really push at exactly, 00:35:12.940 |
you know, a particular type of mid-game situation they're struggling in. 00:35:16.320 |
They can just do those type of exact puzzles all day. 00:35:19.040 |
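To make the chess version of this concrete, here's an illustrative drill loop using the python-chess library; the position and its target move are arbitrary assumptions chosen for the sketch:

```python
import chess  # pip install python-chess

# Illustrative drill loop in the spirit of the chess example: repeat
# one exact position until your reply matches the drill's target move.
# The position (after 1.e4 e5 2.Nf3 Nc6) and the "best move" are
# arbitrary assumptions chosen for the sketch.

FEN = "r1bqkbnr/pppp1ppp/2n5/4p3/4P3/5N2/PPPP1PPP/RNBQKB1R w KQkq - 2 3"
TARGET_SAN = "Bb5"  # assumed answer for this particular drill

def drill(attempts: int = 3) -> None:
    for _ in range(attempts):
        board = chess.Board(FEN)
        print(board, "\n")
        answer = input("Your move (SAN): ").strip()
        try:
            move = board.parse_san(answer)
        except ValueError:
            print("Not a legal move in this position. Try again.\n")
            continue
        if board.san(move) == TARGET_SAN:
            print("Correct. Same position again tomorrow.\n")
        else:
            print(f"The drill's answer is {TARGET_SAN}. Repeat it.\n")

if __name__ == "__main__":
    drill()
```

Swap in whatever position type you're actually struggling with; the value is in repeating the exact same strategic stretch, not in the code.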
If you're like a baseball player, you're working with hitting coaches that are very specific: 00:35:23.380 |
Here is exactly what you need to improve on your swing. 00:35:29.960 |
The problem is it's not necessarily a sufficient condition. 00:35:32.980 |
You might not ever be able to become exceptional in your field, right? 00:35:37.400 |
This is where other things enter the picture: circumstances, natural abilities, the capacity 00:35:43.200 |
for drive, ability to go through hardship, physical capacities, genes, like all these other things 00:35:48.720 |
come into play where you're going to sort of ultimately hit your limit. 00:35:53.660 |
So, you know, if you want to get better, do expert guided deliberate practice, figure out 00:35:57.000 |
how to do that in your field, but have the reality check that like most of us aren't going to get there. 00:36:02.860 |
So you don't necessarily want your plans to be built around, 00:36:05.440 |
I'm going to be the very best person in the world doing this. 00:36:07.980 |
Not a bad thing to make a run at that because even falling short of that is going to open 00:36:12.060 |
up a lot of opportunities, but everyone can get good. 00:36:16.920 |
With most skills, especially professional skills, 00:36:19.400 |
basically everyone can get good enough at those skills to really gain some career capital. 00:36:25.300 |
But, as with most things, most people can't get great. 00:36:28.880 |
And when you're planning for, you know, your life, that's like something, that's something 00:36:41.400 |
I'm interested in your take on multiple monitors. 00:36:44.460 |
I'm an academic and there's definitely a move towards people having multiple monitors on 00:36:48.920 |
Personally, I have two, but I've noticed that I write a lot better if I unplug my laptop from 00:36:53.120 |
my multi-monitor docking station and go and just write with one screen open. 00:36:59.120 |
You now have people with like these giant curved ones and then multiple other monitors on top 00:37:04.920 |
It's unclear if like what they're doing is trying to debug their C++ code or launch a ballistic 00:37:12.120 |
missile attack against Russia because they have a setup that would allow them really to do both 00:37:18.500 |
The multiple monitor movement really came out of computer programming. 00:37:22.200 |
I think developers, they have, you know, here's my code window. 00:37:27.740 |
And here is like where I'm Googling Stack Overflow. 00:37:36.420 |
In terms of like its utility for most other people, I think it is useful in a lot of cases 00:37:42.240 |
to have what I think of as like a double window width available. 00:37:46.840 |
So to me, a double window width is you could have two windows open large enough that you can read both easily. 00:37:53.400 |
That's useful because there are a lot of circumstances where you're moving back and forth between two things. 00:37:58.620 |
So if you're an academic, like a mathematician writing a mathematical paper in something like 00:38:02.540 |
LaTeX, you have the editor where you have the markup language and then you have the PDF preview. 00:38:08.300 |
So you can just edit here and see the changes over there without having to switch back and 00:38:12.680 |
Or if you're a writer, you know, when I'm using something like Scrivener, I have two panes. 00:38:19.580 |
I have the pane where the main thing I'm writing lives and then, next to it, a pane with the research I'm using. 00:38:26.280 |
I'm pulling up different research sources that I'm copying quotes over. 00:38:30.340 |
And then there's a narrow column that has like my footnotes. 00:38:40.160 |
So like while I'm answering emails, I have my schedule up, I can look through that without 00:38:43.860 |
having to switch back and forth with my email. 00:38:45.640 |
So I think being able to have two window widths open concurrently, easily readable, that's a win. 00:38:52.280 |
Having like four or five, I think for most people doesn't matter. 00:38:55.340 |
I also agree that when you're doing deep work, you don't need a lot of windows open because you're focusing on one thing. 00:39:02.220 |
Like sometimes when I'm writing, there's parts where I am copying over information. 00:39:07.660 |
I want a big monitor because I have like my research here. 00:39:09.920 |
There's other parts where I'm wrangling the language, especially if I'm doing like a New Yorker piece. 00:39:15.320 |
And there, like you, I'm happier on my laptop because I can put Scrivener, which I use, into 00:39:20.960 |
composition mode, where it just goes full screen, just the words, not even any menus. 00:39:29.660 |
I've been wanting to experiment with some of these. 00:39:32.040 |
I don't think they would quite work for me, but there's these cool writer tools. 00:39:35.700 |
And I forgot what they're called, but it's a keyboard and then an 00:39:41.480 |
ink screen like you'd have on a Kindle, kind of mounted on the end of the keyboard. 00:39:46.940 |
And all it is, is for writing and all you can do is write. 00:39:49.560 |
And it's just like, you could put words on it. 00:39:54.040 |
And the idea is you just bring this thing somewhere and all you can do on it is, like, write. 00:40:01.340 |
I've heard people like novelists swear by that. 00:40:03.600 |
They can just like disappear with this thing. 00:40:05.100 |
And all you can do, it's like a typewriter with a memory. 00:40:08.460 |
So yeah, different modes for different times, but one big monitor, I think for like 99% of people is plenty. 00:40:21.440 |
But you do like video editing and stuff on there. 00:40:27.960 |
Usually just like one is enough, but sometimes I'll use both. 00:40:30.720 |
What, how are things going with your keyboard and your Remarkable? 00:40:33.820 |
I've been loving the new Remarkable, the Paper Pro. 00:40:47.300 |
I got a quieter mechanical keyboard for my office at Georgetown because the walls are thin 00:40:51.820 |
and I didn't want to clickety-clack at everyone. 00:41:01.480 |
I did a lot of research on it, but I really feel like I can fly out there. 00:41:05.360 |
Like the springiness of the keys, like they really, they bounce up. 00:41:09.360 |
I can ride at like the maximum speed of my hands on that thing. 00:41:15.160 |
So I'm probably going to get another one of the ones we have here for my house. 00:41:26.860 |
Spotify co-president Gustav Soderstrom argues in an interview that 99.9% of evolution took place in an environment where little changed within a single lifetime. 00:41:41.740 |
And now in the 21st century with lots of macro changes in tech and culture, the first to accept and adapt wins. 00:41:53.340 |
Can this worldview exist within a slow productivity framework that is predicated on minimizing reactive action? 00:42:04.620 |
I hear this a lot, especially from those who are tech adjacent. 00:42:08.360 |
This idea that you need to be like aggressively up to speed on like the latest tools and experimenting with them in your personal life and probably therefore need to be like up to speed with chatter about tools and be on like social media and YouTube and trying to keep up with things because you are going to get left behind otherwise. 00:42:30.180 |
I think on a macro scale, it can be true in the sense that, if you're running a business, there are major business trends you need to keep an eye on, right? 00:42:39.240 |
Like the rise of the web was a business trend that a lot of businesses needed to keep an eye on, and it caused a lot of disruption. 00:42:47.100 |
The smartphone culture, that's a business trend that made a big difference. 00:42:54.760 |
But in terms of individuals, I think it's more common, especially with technology that ends up playing a big role in people's lives or playing a big role in businesses, 00:43:06.160 |
that these types of technologies, these transformative technologies, make themselves unavoidable. There's like a first adopter period. 00:43:16.900 |
People are keeping an eye on it, and then it becomes kind of unavoidable, and how to use it and why you should use it become so obvious it feels inevitable. 00:43:24.620 |
It's so easy to use it, and then they spread really quickly. 00:43:36.740 |
Like when email really took off and I've documented this in my book, World Without Email, it was self-evident. 00:43:51.200 |
You put the address in the To field and you write it like a letter and press send. 00:44:00.440 |
But there wasn't like a giant advantage of, well, these people knew about it and jumped on it and it really helped them, 00:44:06.140 |
and those people didn't know about it, because it was inevitable. 00:44:11.340 |
When it became available, everyone started using it and it swept through really fast. 00:44:16.480 |
There wasn't like a giant advantage to people who like were keeping up with Google and they knew about it before other people. 00:44:28.520 |
And there was like a one-year period where it was, oh, that's so cool you have one of those. 00:44:32.140 |
And then it started spreading really rapidly because it was inevitable. 00:44:40.500 |
So often like the most transformative technologies, they become inevitable and spread really fast and don't require a huge amount of learning on behalf of the consumer. 00:44:49.520 |
So I'm not typically a big believer that like everyone needs to be really up on the technology. 00:44:53.320 |
It's why I'm telling people I'm not as big on this idea that you should really be learning like the specific way to prompt the current like large language models right now. 00:45:01.820 |
With AI, the big transformational impacts are going to be inevitable, and they're going to be easy to pick up, and they're going to spread everywhere, and they're going to disrupt the economy like those other technologies did. 00:45:16.020 |
It's not going to be I learned how to do it and other people didn't. 00:45:19.960 |
So I think like right now, if you really are messing around a lot with language models and have found very specific ways, somewhat complicated ways to use them in your own work, it's kind of like the early adherents of the web. 00:45:30.860 |
They were right that this thing is going to be really big, but they were also hacking HTML code and knew how to follow like blog rings and go on Usenet news groups and IRC forums to find links to what's going on. 00:45:42.660 |
And when the web took off, you didn't have to know how to do any of that. 00:45:46.740 |
I go to Google and I find the websites and they're pretty nice and easy to use. 00:45:49.640 |
So I agree adaptation happens, the world changes, but I often think those changes are easier for the people involved when it comes to technological revolutions than we sometimes imagine, at least the way that tech first adopters imagine. 00:46:14.560 |
I was watching the Masters Golf Tournament, and at the end of the day, Saturday, before the final round, Rory McIlroy is winning, and he did a post-round interview, and he said, 00:46:34.360 |
I'm putting my phone away, and I won't look at my phone again until tomorrow night. 00:46:40.700 |
I thought it was cool, and I thought of you guys and what you're doing there. 00:46:49.100 |
I have to say, Jesse, there's a moment of disappointment when I see in my script Rory McIlroy and his phone. 00:46:58.840 |
A little disappointed it wasn't Rory who called in. 00:47:02.360 |
It was like a little bit of a hope that he was going to. 00:47:10.360 |
He's much better at like actually getting in touch with these people. 00:47:14.080 |
I wrote an essay about a similar example back during the pandemic, I think. 00:47:17.880 |
It was Alex Honnold who did the free solo climb of El Capitan. 00:47:21.200 |
He also will stop using his phone, but he'll stop using his phone months in advance of one of those life-threatening climbs because he doesn't want to be distracted. 00:47:31.060 |
To me, the point is that when these high-performance athletes, people who know how to focus, say I have to get away from my phone in order to use my brain at a high level the next day, it indicates the cognitive drag that is being generated by a life mediated through these screens. 00:47:56.240 |
We don't realize the sand that's being poured into the gears of our minds and of how we perceive the world. 00:48:00.340 |
It's like whatever your equivalent is of playing a really good golf round. 00:48:03.820 |
It might be like being really present with your kids or having like a really good idea at work or just enjoying a day. 00:48:09.000 |
Whatever your equivalent is of that is also getting gunked up by this online world that you're constantly also involved in. 00:48:18.120 |
So to me, I think there's a lesson we can all take from this. 00:48:22.520 |
Like we can all do better at our own masters, I guess is one way of saying that. 00:48:39.640 |
But at the end, I knew he was doing well, and then he was struggling a little bit. 00:48:48.660 |
And as I looked it up, they said it was going to a playoff right now. 00:49:05.760 |
Because God, I don't know what I was watching on CBS. 00:49:09.200 |
Oh, because I was watching the Blue Origin live feed of Katy Perry and Gayle King and all of them going into space. 00:49:21.080 |
Anyways, I turned it on and it was like just as the second shots were being made. 00:49:26.720 |
So for those who didn't watch it: the playoff was on a par four, and it came down to the second shots. 00:49:34.080 |
This is almost more interesting to people than my baseball talk. 00:49:37.360 |
Justin Rose hit like a really good second shot. 00:49:43.420 |
He hit it up higher on the hill and it rolled to like a foot and a half from the hole. 00:49:47.280 |
19 million people watched on Sunday, which is like higher than an NFL game. 00:49:52.200 |
Mad Dog said it was the most entertaining Masters he's ever seen in his life. 00:49:55.520 |
There were so many ups and downs in the final round. 00:50:00.500 |
I don't watch a lot of golf, but I like when there's like a storyline like that. 00:50:04.760 |
I was telling my kids like, yeah, he reads my books or whatever. 00:50:07.300 |
And so they were like, so do you get credit if he wins? 00:50:18.420 |
So nevermind that I was excited for a second. 00:50:32.320 |
I got dystopian AI news to jump into, but before we get there, let's hear from another sponsor. 00:50:40.140 |
Did you know that there are over 18,000 streaming titles on Netflix worldwide, but if you live 00:50:46.500 |
in the U.S. you're only seeing about 7,000 of those? That's like paying full price for 00:50:51.260 |
a gym membership, but only getting access to the treadmill. 00:50:54.460 |
There's a way to get access to the full experience. 00:51:02.400 |
Because if you use a VPN, you can connect to the internet through a server in any one of 00:51:09.480 |
many locations around the world. So if you want to access, for example, the Netflix content in the UK, you can connect to an ExpressVPN 00:51:15.780 |
server located there. And now, as far as Netflix is concerned, you're coming from the UK, and you can do that with 00:51:20.420 |
any of the many countries where ExpressVPN has servers. 00:51:23.640 |
So it's kind of like a cool bonus you get for using a VPN. 00:51:27.160 |
The big reason you want to use the VPN, of course, is for privacy and security, because not only does it allow you to connect to a server anywhere in the world, it also means that the parties near where you're connecting, your internet service provider, or the person listening in on your wireless packets at the coffee shop, can't see what you're doing. 00:51:45.140 |
All they learn about your traffic is that you're talking to a VPN server. 00:51:48.580 |
So they don't know what sites or services you're using. 00:51:52.120 |
So you get cool geolocation shifts, plus privacy and security from the people right where you're using the internet. 00:51:58.660 |
If you're going to use a VPN, use the one I recommend, which is ExpressVPN. 00:52:03.540 |
You can fire up the app and it's just working for all of your sites and services you're accessing from that device. 00:52:10.360 |
With one click, you can change the location it's connected through, 00:52:13.100 |
whether you want to access content from somewhere else or choose a location near where you are to get faster speeds. And it really works on all of your devices: phones, laptops, tablets, smart TVs, and more. 00:52:21.700 |
It's also very fast: you can stream HD through ExpressVPN with zero buffering. 00:52:30.320 |
So anyways, I'm a fan of ExpressVPN, industry standard, servers all around the world, a lot of bandwidth, really easy to use. 00:52:40.240 |
So be smart, stop paying full price for streaming services and only getting access to a fraction of their content. 00:52:48.060 |
Start gaining privacy and security for your internet traffic. 00:52:51.040 |
Get your money's worth at expressvpn.com slash deep. 00:52:54.660 |
Don't forget to use my link at expressvpn.com slash deep to get an extra four months of ExpressVPN for free. 00:53:03.340 |
I just want to talk about our longtime friends at Shopify. 00:53:06.520 |
If you run a small business, you know that there's nothing small about it. 00:53:11.420 |
I own a small business built around our media company here, so I get it. 00:53:15.100 |
My business really does take up a lot of my attention. 00:53:18.860 |
There's a lot of details in trying to run a business right. 00:53:21.420 |
New decisions to make, such as what weird accent to use. 00:53:28.180 |
How can I integrate a skeleton into what I'm doing? 00:53:31.240 |
How can I make superfluous sports references that are going to scare away as many of our listeners as possible? 00:53:38.640 |
So where I don't want to waste my time is figuring out, if I'm selling something, how do I sell it in a way that's going to work? That's where Shopify comes in. 00:53:49.760 |
It's the commerce platform behind millions of businesses around the world. 00:53:52.940 |
10% of all e-commerce in the U.S., from household names like Mattel and Gymshark to brands just getting started, use Shopify. 00:54:00.160 |
You can do all of your important commerce-related tasks from Shopify inventory to payments to analytics to more. 00:54:08.060 |
It also makes the marketing minefield easy with built-in tools for running social media and email campaigns so you can find new customers and keep them. 00:54:16.300 |
And if you're looking to grow your business internationally, keep in mind, Shopify has tools that work in over 150 countries. 00:54:27.000 |
They have award-winning point-of-sale products that connect your online and offline sales all in one place. 00:54:34.680 |
99.99% uptime and the best converting checkout on the planet. 00:54:38.620 |
You'll never miss a sale again when you use Shopify. 00:54:41.580 |
So get all the big stuff for your small business right with Shopify. 00:54:44.900 |
Sign up for your $1 per month trial and start selling today at shopify.com slash deep. 00:54:57.640 |
With that, let's move on to our final segment. 00:55:07.860 |
Are we getting good or bad feedback about all this AI stuff I'm doing? 00:55:12.240 |
The people that email me are, you know, bringing it up and sending good points and articles and stuff. 00:55:18.360 |
So for this final segment, I basically feel like, I don't know, I'm a computer scientist. 00:55:21.160 |
I got to geek out a little bit and people care a lot about AI. 00:55:23.740 |
So I like to have an excuse to keep up with it. 00:55:26.820 |
So today I have a relatively dystopian thing to share. 00:55:30.100 |
I'm going to bring this up on the screen for people who are watching instead of just listening. 00:55:33.780 |
So this is a report called AI 2027 that was produced by five people from the space: Daniel Kokotajlo, 00:55:44.720 |
Scott Alexander, Thomas Larsen, Eli Lifland, and Romeo Dean. 00:55:53.200 |
They're saying we want to walk through a potential case study, like what might happen in the next few years with AI. 00:56:00.680 |
And we're going to try to be more specific than just vague descriptions like, hey, maybe in the future this will happen. 00:56:13.140 |
So they start roughly where we are now, the next couple of months, and they get pretty detailed: okay, late 2025 this happens, early 2026 this happens. 00:56:25.580 |
So they're quick to say, this is speculative, right? 00:56:27.420 |
It's not "this is exactly what will happen," but it is, they say, a realistic scenario for what might happen with AI. 00:56:39.000 |
When you get to 2027 in the scenario, you get a choice about whether you want the slowdown ending or the race ending. 00:56:48.840 |
They think the race ending, the race to AI supremacy, is the more likely one. 00:56:51.820 |
And that ending, I believe, culminates in the destruction of humankind. 00:56:55.660 |
So not, not the most positive ending, and this is not that far in the future. 00:57:04.440 |
Uh, so you can take this off the screen, Jesse. 00:57:06.920 |
So I won't go into the technical details here, but the basic version of their story is they imagine a fictional company they call OpenBrain. 00:57:19.200 |
In the next year or so, what OpenBrain does is massively increase its compute. 00:57:23.180 |
They don't get super detailed about the actual type of AI technology, but you can think of it as a giant language model with some reinforcement learning in there. 00:57:36.840 |
And this thing starts to get more capable; they get really specific about the stages. 00:57:39.900 |
Essentially, the storyline follows a Nick Bostrom-style RSI, recursive self-improvement, superintelligence arc. 00:57:49.360 |
It's the same thing these philosophers have been talking about abstractly, just with specifics put on it. 00:57:54.320 |
So then this company in the story is like, we're going to start tuning our AI towards like helping us improve the AI. 00:58:00.620 |
And then pretty quickly, they somehow have like 200,000 copies of this thing running, all working on trying to make it better. 00:58:09.440 |
And there's like a race with China and, you know, long story short, at some point, the thing becomes self-aware and sentient and convinces the government to sign a treaty with it and takes over the world. 00:58:22.660 |
Makes the concerns we have today about students plagiarizing with AI seem a little bit quaint. 00:58:32.380 |
Well, first of all, I'll say these people know about AI. 00:58:34.720 |
They certainly are from the tech world, and you would probably categorize them on the doomer side of things. 00:58:40.020 |
So they're from the more singularity-oriented school of thought, 00:58:47.020 |
the one worried about things really taking off. 00:58:50.700 |
So we do need to take these types of concerns seriously. 00:58:55.460 |
In terms of pushback, though, instead of just giving my own thoughts, I was looking up some responses online. 00:59:07.420 |
And one critic I found has 15 responses to this from the AI safety community. 00:59:12.640 |
I won't read them all, but basically he lists some bad takes in this report, and in similar predictions of doom, that he thinks are not really accurate. 00:59:26.440 |
So for example, the idea that we can accurately predict the nature of non-existent future technology. 00:59:31.180 |
And he says, look, if you could predict what was going to happen with future technology, you would just invent it, right? 00:59:39.280 |
So any exercise in saying this is what's going to happen next, and then next, for technologies that don't exist is, he says, fraught. 00:59:45.860 |
Another bad take: quickly developed AI is intrinsically unsafe AI. 00:59:51.860 |
We've had a lot of rapid developments in AI recently, and they haven't shown that. 00:59:55.700 |
And, you know, he goes on with like some other critiques. 00:59:59.960 |
So it gives you a little bit of, okay, maybe we're not definitely headed to the end of all humankind by the end of 2027. 01:00:06.560 |
I also read around, like Kevin Roose's reaction and some other reactions. 01:00:10.740 |
Uh, I would say if I'm going to think about what makes me feel better, I say there's a couple things here. 01:00:17.840 |
So I'll, I'll add in some, some other thoughts here. 01:00:20.240 |
One: multiple people, including David Shapiro, pointed out that this storyline is the same one that's been around since Nick Bostrom's Superintelligence, right? 01:00:31.980 |
They put technical-seeming details on it, like how many FLOPS of computation are involved, but it's that same recursive self-improvement story. 01:00:40.580 |
AI starts improving itself until it gets way more intelligent than us, outsmarts us, and kills us all. 01:00:45.440 |
Like it's the same storyline we've seen since James Cameron, right? 01:00:48.760 |
And, uh, the argument is, well, we have actual evidence. 01:00:53.420 |
We've had big advances, especially in the last three years in like language model-based AI, for example. 01:00:58.380 |
And the storyline implies predictions, things we should already be seeing, that we're not seeing. 01:01:04.900 |
They're saying instead what we're seeing is that as AI models get more powerful, 01:01:09.420 |
they're not getting out of control. 01:01:11.560 |
Actually, language models are very amenable to fine-tuning. 01:01:19.340 |
All we've seen so far is that as we make models bigger, they remain very amenable to us fine-tuning them: do this, don't do that. 01:01:32.100 |
Because all they're trying to do is learn a distribution and then produce from that distribution. 01:01:36.660 |
And fine-tuning with reinforcement learning changes what that distribution is, based on what you want it to be. 01:01:44.080 |
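To make that concrete, here's a minimal sketch of the idea: a toy "model" that is just a distribution over three outputs, nudged by a reward signal in the REINFORCE style. The vocabulary and rewards are made up for illustration; this is not any lab's actual training loop.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab = ["helpful", "neutral", "harmful"]
logits = np.zeros(3)  # stand-in for the pretrained model: a uniform distribution

def probs(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Hypothetical reward signal: "what you want the distribution to be."
reward = {"helpful": 1.0, "neutral": 0.0, "harmful": -1.0}

lr = 0.1
for _ in range(500):
    p = probs(logits)
    i = rng.choice(3, p=p)          # sample from the current distribution
    r = reward[vocab[i]]
    grad = -p
    grad[i] += 1.0                  # gradient of log p_i w.r.t. the logits
    logits += lr * r * grad         # REINFORCE: reward reshapes the distribution

print(dict(zip(vocab, probs(logits).round(3))))
# Mass shifts toward "helpful": the model is still a distribution-sampler,
# the fine-tuning has just re-tuned which distribution it samples from.
```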
I'm not being super technical about it, but there's this sense that in the two years since the AI pause letter organized by Max Tegmark's Future of Life Institute came out, a lot of the things they thought were going to happen in the next couple of years didn't. 01:01:53.800 |
Actually, it hasn't been that hard to control, in some sense, the output of language models. 01:02:01.780 |
Okay, that's one point that I've seen out there, which I think is an interesting one. 01:02:04.480 |
More generally, I think the better approach, instead of working through scenarios where you invent seven new generations of technology and speculatively see how they unfold, is this: if you're worried about AI doomer scenarios, lay out short-term milestones, the next developments that should worry us. 01:02:23.100 |
If we see this start to happen, that should be something we're worried about. 01:02:27.800 |
So we need near-term milestones to look for, and we should get concerned when we actually see them happen. 01:02:34.280 |
My argument, what makes me feel better, is that this speculative story is really following an Isaac Asimov, I, Robot type of model, one that imagines we're going to build a small number of mega AIs with so much compute we can't even imagine it. 01:02:52.280 |
So if you're prone to thinking about exponentials, like a lot of people in that world are, you just want to see like the compute keep getting bigger. 01:03:03.460 |
We went from 20 million parameters in a language model to 100 million, to a billion, to a hundred billion, to close to a trillion, and we got these improvements. 01:03:12.620 |
So we just want to keep drawing out that curve. 01:03:14.700 |
So if we go to 10 trillion or 100 trillion, it's going to become an ever more powerful thing, until it eventually becomes so smart it can do all these other types of things. 01:03:26.100 |
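For a sense of the curve-extrapolation logic being described, here's a quick back-of-the-envelope sketch using the rough progression mentioned above (illustrative numbers, not lab data):

```python
# Rough parameter-count progression from the discussion above.
param_counts = [20e6, 100e6, 1e9, 100e9, 1e12]
for a, b in zip(param_counts, param_counts[1:]):
    print(f"{a:.0e} -> {b:.0e} parameters: a {b / a:.0f}x jump")
# The "keep drawing out the curve" argument assumes the next 10x or 100x jump
# pays off like the previous ones did; nothing in the data guarantees that.
```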
But we don't actually know that that's going to be the right business model and that trying to make these things bigger and bigger is the right thing to do. 01:03:33.220 |
In fact, I'm seeing a lot of pressures out there for a different type of business model. 01:03:38.220 |
Smaller, more efficient, bespoke AIs to do specific things. 01:03:44.540 |
What's happening with language models now is we're running out of text. 01:03:49.580 |
We've trained them on basically all the text that exists. 01:03:51.680 |
So the way they're getting better now is with human-in-the-loop reinforcement learning. 01:03:56.640 |
So basically we're generating new data for them by having humans, or reward models built from human feedback, teach the model: do more of this, do less of that, we like this answer, we don't like that answer. 01:04:07.780 |
We like when you think or don't think, right? 01:04:09.400 |
We're kind of down now to having humans tweak and push these models to do different things better. 01:04:15.400 |
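Here's a minimal sketch of that human-feedback loop: a toy Bradley-Terry-style reward model fitted to hypothetical preference pairs. The features and data are invented purely for illustration.

```python
import numpy as np

# Toy features for an answer; the feature names are made up for illustration.
w = np.zeros(2)  # the reward model's weights over [verbosity, helpfulness]

def score(f):
    return w @ f

# Hypothetical human preference data: (features of preferred, features of rejected).
prefs = [(np.array([0.2, 0.9]), np.array([0.8, 0.1]))] * 50

lr = 0.5
for fp, fr in prefs:
    # Bradley-Terry objective: make the preferred answer score higher.
    p = 1.0 / (1.0 + np.exp(-(score(fp) - score(fr))))
    w += lr * (1.0 - p) * (fp - fr)   # gradient ascent on the log-likelihood

print(w.round(2))  # the learned scorer now rewards helpfulness over verbosity
# This learned signal is what then "tweaks and pushes" the base model.
```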
But that curve is actually kind of flattening out. 01:04:18.400 |
Now, in the 2027 scenario, they somewhat obliquely suggest that these hundreds of thousands of agents are going to generate synthetic data the model can train on. 01:04:27.280 |
But there are big limitations to that as well, because the synthetic data is going to be based on the distributions you've already learned. 01:04:33.820 |
So is it really giving you something beyond those distributions? 01:04:36.280 |
I also think reinforcement learning is really going to be the future. 01:04:42.500 |
This scenario seems built around an OpenAI vision of the future, 01:04:45.900 |
where building the biggest possible language models is what matters, because that's where OpenAI is leading. 01:04:50.220 |
But I saw a very convincing talk, for example, at an RL conference this fall, from a real expert in the field who came from DeepMind, arguing that language models are kind of a red herring. 01:05:04.200 |
That if you want above human intelligence, it's got to be reinforcement learning models. 01:05:07.900 |
And reinforcement learning models, by definition, you aim at particular tasks. 01:05:15.220 |
I want to make AlphaProof really good at doing math-Olympiad-style problems. 01:05:20.560 |
And it's going to build a model and policies and get creative and be able to do stuff better than a human. 01:05:24.460 |
I'm going to use DreamerV3 to learn how to play Minecraft. 01:05:28.560 |
And we're going to build a model here that's like very good at playing Minecraft, right? 01:05:33.120 |
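As a toy illustration of that bespoke-RL picture, here's classic tabular Q-learning aimed at one narrow task. It's nothing like AlphaProof or DreamerV3 in scale, but it has the same shape: a task-specific agent learning a policy from direct experience.

```python
import numpy as np

# One narrow task: walk along a 6-cell track to reach the goal on the right.
n_states, goal = 6, 5
Q = np.zeros((n_states, 2))          # action 0 = left, action 1 = right
rng = np.random.default_rng(0)
alpha, gamma, eps = 0.5, 0.9, 0.1    # learning rate, discount, exploration

for _ in range(200):                 # episodes of direct trial and error
    s = 0
    while s != goal:
        a = rng.integers(2) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(n_states - 1, s + 1)
        r = 1.0 if s2 == goal else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

print(Q.argmax(axis=1))  # learned policy: move right toward the goal
```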
I think we're just as likely to see a world with RL style models bespoke for very specific tasks that we think are important or useful. 01:05:42.860 |
Not a world where we're trying to build this like mega brain that like ultimately comes alive and starts tricking us. 01:05:51.360 |
So, you know, it's possible the world's going to end in 2027. 01:05:57.300 |
But probably not just from building these giant things, and it's unclear what these giant things even are. 01:06:01.420 |
A language model can't think, so it would have to be some sort of multimodal model that's not really super specified here. 01:06:09.460 |
And I think if you interviewed the writers of this AI 2027 report, they'd say, yeah, look, we're not trying to pinpoint exactly what's going to happen. 01:06:19.600 |
Like, this is a type of thing that could happen. 01:06:21.260 |
You've got to be thinking about AI safety now. 01:06:25.100 |
And they're smart and this accomplishes that well. 01:06:28.100 |
But I don't see the signs and a lot of other critics of this don't see the signs that that particular storyline is particularly plausible. 01:06:35.600 |
But we should worry that bad storylines are possible. 01:06:39.700 |
And now is the time to worry when nothing really bad is happening yet so that when things start going off the rails, we'll be a little bit more ready for it. 01:06:48.060 |
A little doomer-y, Jesse, but I don't know that we should build the bunker, I guess, quite yet. 01:06:53.360 |
I didn't know that all the text is already in all the models. 01:06:58.040 |
So, like, a lot of the improvement happening now is in this fine-tuning step at the end. 01:07:03.120 |
So you'll come in and say, we want it to be better at doing this. 01:07:06.800 |
And then you do a lot of, like, extra reinforcement learning to sort of push it towards being better at reasoning or being better at doing, like, these type of math problems. 01:07:19.860 |
So, it's sort of zapping it with these reinforcement signals. 01:07:22.020 |
I'm a big believer that RL, perhaps coordinated with the knowledge in language models, is probably going to be the future. 01:07:29.680 |
Because reinforcement learning models learn, they build their own understanding of a novel world and come up with their own policies based off of direct experience for how to navigate that world in an effective fashion. 01:07:40.480 |
Everything that AI does that's better than a human can do, it's all from reinforcement learning. 01:07:45.480 |
We can play chess better than people, that's reinforcement learning. 01:07:47.980 |
We can play go better than people, that's reinforcement learning. 01:07:51.100 |
We can do protein folding better than people, that's all reinforcement learning. 01:07:54.260 |
We're getting better at math than most people, that's all reinforcement learning. 01:07:59.120 |
All the video games they can play really well, that's all reinforcement learning. 01:08:02.620 |
Language models are good at trying to reproduce how a human would respond to something, the distributions that happen in a human's mind. 01:08:12.660 |
That's why they're very compliant in some sense. 01:08:15.400 |
You worry about an RL model because it's just trying to accomplish a goal. 01:08:19.220 |
And you don't know how it has figured out to accomplish that goal. 01:08:23.040 |
So that's where you can have ideas like, if I can trick a person, that helps me accomplish my goal. 01:08:30.060 |
I'm going to start tricking people because I want to accomplish my goal. 01:08:33.140 |
A language model, by contrast, just tries to produce text that it thinks a human would produce. 01:08:44.400 |
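A minimal sketch of that contrast: at bottom, a language model just samples the next token from a learned distribution. Here's a toy bigram version (made-up probabilities, obviously not a real model) with no goal it is "trying" to accomplish.

```python
import numpy as np

rng = np.random.default_rng(0)
# Hypothetical bigram probabilities, standing in for a distribution
# learned from human text.
next_word = {
    "the":   (["cat", "dog"],   [0.6, 0.4]),
    "cat":   (["sat", "slept"], [0.5, 0.5]),
    "dog":   (["sat", "slept"], [0.3, 0.7]),
    "sat":   (["<end>"],        [1.0]),
    "slept": (["<end>"],        [1.0]),
}

word, out = "the", ["the"]
while word != "<end>":
    options, p = next_word[word]
    word = str(rng.choice(options, p=p))  # sample the next token
    if word != "<end>":
        out.append(word)

print(" ".join(out))  # e.g. "the cat sat": plausible text, no agenda behind it
```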
Hey, if you like Sam Altman's discussion of his productivity ideas, you might like episode 312. 01:08:57.500 |
So I thought this would be a good moment to revisit some of the biggest ideas about the biggest topic that we cover on this show, productivity.