Erik Brynjolfsson: Economics of AI, Social Networks, and Technology | Lex Fridman Podcast #141
Chapters
0:00 Introduction
2:56 Exponential growth
7:24 Elon Musk exponential thinking
9:41 Moore's law is a series of revolutions
15:03 GPT-3
16:42 Autonomous vehicles
23:43 Electricity
28:12 Productivity
33:19 Why are Twitter and Facebook free?
43:36 Dismantling the nature of truth
46:56 Nutpicking and Cancel Culture
53:11 How will AI change our world
59:12 Existential threats
61:05 AI and the nature of work
67:11 Thoughts on Andrew Yang and UBI
73:03 Economics of innovation
79:09 Effect of COVID on the economy
88:22 MIT and Stanford
92:56 Book recommendations
96:01 Meaning of life
00:00:00.000 |
The following is a conversation with Erik Brynjolfsson. 00:00:05.880 |
and the director of Stanford's Digital Economy Lab. 00:00:09.440 |
Previously, he was a long, long time professor at MIT 00:00:29.200 |
followed by some thoughts related to the episode. 00:00:33.040 |
the maker of classy, well-performing watches. 00:00:49.080 |
Please check out these sponsors in the description 00:00:51.040 |
to get a discount and to support this podcast. 00:00:55.640 |
let me say that the impact of artificial intelligence 00:01:06.440 |
to predicting the future evolution of technology, 00:01:09.200 |
it is often too easy to fall into one of two camps, 00:01:18.320 |
As always, the future will land us somewhere in between. 00:01:21.640 |
I prefer to wear two hats in these discussions 00:01:32.000 |
This is probably a good time to mention Andrew Yang, 00:01:36.720 |
who has been one of the high-profile thinkers on this topic. 00:01:44.840 |
A conversation with Andrew has been on the table many times. 00:01:50.640 |
especially because I have a strongly held preference 00:01:54.520 |
for long form, two, three, four hours or more, 00:02:12.680 |
and uncomfortable things that minimize risk for the guest. 00:02:20.960 |
That's something, that magic, I think is worth the effort, 00:02:25.360 |
even if it ultimately leads to a failed conversation. 00:02:31.560 |
treasuring the possibility of a rare moment of magic. 00:02:39.880 |
If you enjoy this thing, subscribe on YouTube, 00:02:51.280 |
And now, here's my conversation with Erik Brynjolfsson. 00:02:55.320 |
You posted a quote on Twitter by Albert Bartlett, 00:03:00.000 |
saying that the greatest shortcoming of the human race 00:03:03.440 |
is our inability to understand the exponential function. 00:03:16.840 |
Andy McAfee and I said in "The Second Machine Age," 00:03:21.440 |
when COVID was really just beginning to take off, 00:03:25.840 |
There were actually only a couple dozen cases, 00:03:30.000 |
but they were doubling every two or three days, 00:03:32.240 |
and I could see, oh my God, this is gonna be a catastrophe, 00:03:38.960 |
or not a lot of people were taking it very seriously. 00:03:47.880 |
and I was the only person on the plane wearing a mask, 00:03:54.440 |
She was touching me all over, which I wasn't thrilled about, 00:03:56.640 |
and she goes, "Do you have some kind of anxiety disorder? 00:04:05.240 |
But I was worried because I knew I could see, 00:04:08.600 |
or I suspected, I guess, that that doubling would continue, 00:04:18.720 |
I try to, it's motivated by more optimistic things 00:04:25.640 |
But in either case, it can be very counterintuitive. 00:04:38.280 |
But if something doubles for 10 times as long, 00:04:47.280 |
After 30, it's a, no, sorry, after 20, it's a million. 00:04:53.600 |
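A quick arithmetic sketch of the doubling intuition described here (illustrative only, not part of the conversation): ten doublings multiply a quantity by roughly a thousand, twenty by roughly a million, thirty by roughly a billion.

```python
# Doubling arithmetic: ten doublings ~ a thousandfold,
# twenty ~ a millionfold, thirty ~ a billionfold.
for doublings in (10, 20, 30):
    print(f"{doublings} doublings -> roughly {2 ** doublings:,}x")
```

That is why a couple dozen cases doubling every two or three days turns into a catastrophe within weeks.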
And pretty soon after that, it just gets to these numbers 00:04:57.840 |
Our world is becoming more and more exponential, 00:05:03.640 |
So more and more often, our intuitions are out of whack, 00:05:06.520 |
and that can be good in the case of things creating wonders, 00:05:10.960 |
but it can be dangerous in the case of viruses 00:05:35.520 |
So we just start seeing it more and more often. 00:05:45.000 |
And I'm getting used to it, but I still make mistakes. 00:05:51.120 |
and how rapidly natural language has improved. 00:05:54.240 |
But I think that as the world becomes more exponential, 00:05:58.440 |
we'll all start experiencing it more frequently. 00:06:04.840 |
in the meantime using our old kind of caveman intuitions 00:06:09.480 |
- Well, the weird thing is it always kind of looks linear 00:06:17.120 |
it's hard to introspect and really acknowledge 00:06:22.120 |
how much has changed in just a couple of years 00:06:39.640 |
- A lot of stuff, I think there are parts of the world, 00:06:44.440 |
The way humans learn, the way organizations change, 00:06:49.400 |
the way our whole institutions adapt and evolve, 00:06:56.520 |
between these exponentially improving technologies 00:07:00.640 |
'cause some of them are exponentially more dangerous 00:07:06.880 |
and our institutions that just don't change very fast at all 00:07:15.960 |
the growing inequality and other dysfunctions 00:07:24.400 |
- So one guy that talks about exponential functions 00:07:52.360 |
And also using that kind of thinking to estimate 00:07:56.600 |
create deadlines and estimate when you'll be able to deliver 00:08:13.200 |
like he doesn't meet the initial estimates of the deadlines 00:08:25.040 |
Like what are your thoughts about this whole thing? 00:08:30.120 |
I talked about two ways of getting more of a grip 00:08:34.560 |
And one of them just comes from first principles. 00:08:42.960 |
could become thousands or tens or hundreds of thousands 00:08:53.680 |
You know, in fairness, I think he also benefits 00:08:56.960 |
and he's experienced it in a lot of different applications. 00:09:00.600 |
So, you know, it's not as much of a shock to him anymore 00:09:07.160 |
In my own life, I remember one of my first experiences 00:09:10.440 |
really seeing it was when I was a grad student 00:09:12.960 |
and my advisor asked me to plot the growth of computer power 00:09:20.000 |
And there are all these exponentially growing curves. 00:09:35.880 |
we're gonna have orders of magnitude more computer power 00:09:41.320 |
- So, you know, when people look at Moore's Law, 00:09:46.840 |
so the exponential function is actually a stack of S-curves. 00:09:56.360 |
take the most advantage of a particular little revolution 00:10:06.720 |
Do you have any intuition about how the heck humans 00:10:12.280 |
- Well, first, let me just unpack that first point 00:10:20.280 |
It's been said that if anything can't go on forever, 00:10:28.120 |
- It's very profound, but it seems that a lot of people 00:10:31.920 |
don't appreciate that half of it as well either. 00:10:38.240 |
or stop in some other way, maybe catastrophically. 00:10:42.800 |
I mean, it was, it went up and then it sort of, 00:10:52.960 |
And it's beginning to happen with Moore's Law 00:11:02.400 |
is that we've been able to come up with a new S-curve 00:11:07.440 |
generation after generation with new materials, 00:11:10.680 |
new processes, and just extend it further and further. 00:11:14.560 |
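A toy numerical sketch of the "stack of S-curves" picture (an illustrative model with made-up generations and ceilings, not anything from the episode): each generation follows a logistic curve that saturates, and a new curve with a roughly tenfold higher ceiling takes over, so the envelope keeps growing approximately exponentially.

```python
import math

def logistic(t, ceiling, midpoint, rate=1.0):
    """One S-curve: growth that eventually saturates at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

def capability(t, generations=5):
    """Best available technology = the highest of the overlapping S-curves.
    Each made-up generation starts ~10 time units later with a ~10x ceiling."""
    return max(logistic(t, ceiling=10 ** g, midpoint=10 * g + 5)
               for g in range(generations))

for t in range(0, 50, 10):
    print(t, round(capability(t), 1))
# The envelope grows ~10x every 10 time units even though
# every individual S-curve flattens out.
```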
I don't think anyone has a really good theory 00:11:17.440 |
about why we've been so successful in doing that. 00:11:26.440 |
but it's, you know, one beginning of a theory 00:11:33.680 |
when other parts of the system are going on that clock speed 00:11:39.320 |
If there's one component of it that's not keeping up, 00:11:42.160 |
then the economic incentives become really large 00:11:47.520 |
and anyone who can do improvements in that part 00:11:56.600 |
- Do you think some version of the Moore's Law will continue? 00:12:01.480 |
I mean, one version that has become more important 00:12:08.440 |
who I should mention was also my college roommate, 00:12:10.280 |
but he identified the fact that energy consumption 00:12:18.960 |
The new iPhones came out today as we're recording this. 00:12:21.320 |
I'm not sure when you're gonna make it available. 00:12:24.920 |
- And for most of us, having the iPhone be twice as fast, 00:12:30.000 |
that's nice, but having the battery life last longer, 00:12:35.480 |
And the fact that a lot of the progress in chips now 00:12:38.240 |
is reducing energy consumption is probably more important 00:12:42.840 |
for many applications than just the raw speed. 00:12:50.200 |
Those tend to be very parallelizable functions, 00:13:02.640 |
or you can have a GPU, a graphic processing unit, 00:13:13.080 |
or 100x improvement above and beyond Moore's Law. 00:13:20.840 |
but these other dimensions are becoming important, 00:13:24.000 |
more important, and we're seeing progress in them. 00:13:26.360 |
- I don't know if you've seen the work by OpenAI 00:13:44.440 |
more and more tricks on how to train networks faster? 00:13:49.560 |
As you look at image recognition, as you mentioned, 00:13:51.760 |
I think it's a function of at least three things 00:14:10.040 |
it used to be chemical, and now everything is digital. 00:14:16.480 |
I wouldn't have done that if it was chemical, 00:14:24.160 |
it's just broadcasting a huge amount of digital data 00:14:34.280 |
there have been significant improvements in the techniques. 00:14:41.200 |
but the advances in making it work more efficiently 00:14:44.240 |
have also improved a couple of orders of magnitude or more. 00:14:48.200 |
So you multiply together a hundredfold improvement 00:14:50.760 |
in computer power, a hundredfold or more improvement 00:15:00.640 |
and soon you're getting into millionfold improvements. 00:15:13.960 |
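The compounding described here is just multiplication of the separate gains across the three ingredients mentioned above (compute, data, and algorithmic improvements); a short sketch with round illustrative factors, not measurements:

```python
# Illustrative factors only: independent ~100x gains multiply together.
hardware_gain  = 100   # assumed: raw compute / specialized chips
algorithm_gain = 100   # assumed: more efficient training methods
data_gain      = 100   # assumed: far more digital training data
print(hardware_gain * algorithm_gain * data_gain)  # 1,000,000x combined
```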
And that's one of the, I've seen arguments made, 00:15:25.640 |
which is a fascinating idea that it literally 00:15:29.160 |
will just run out of human-generated data to train on. 00:15:34.880 |
where it's consumed basically all of human knowledge, 00:15:47.400 |
as a way to argue against exponential growth. 00:15:49.960 |
They say, well, there's no way you can overcome 00:15:58.000 |
whatever bottlenecks the critics come up with, 00:16:02.000 |
I don't know how you overcome the data bottleneck, 00:16:04.640 |
but probably more efficient training algorithms. 00:16:08.240 |
that these training algorithms are getting much better 00:16:12.440 |
We also are just capturing a lot more data than we used to, 00:16:20.480 |
In some applications, you can simulate the data, 00:16:24.160 |
video games, some of the self-driving car systems 00:16:38.080 |
all the different ways you could beat a video game, 00:16:44.920 |
I'm talking to the CTO of Waymo tomorrow, 00:16:48.640 |
and obviously, I'm talking to Elon again in a couple weeks. 00:17:01.840 |
that has the potential of revolutionizing the world? 00:17:06.800 |
but it's become much clearer that the original way 00:17:09.960 |
that I thought about it, most people thought about it, 00:17:11.640 |
will we have a self-driving car or not, is way too simple. 00:17:19.440 |
of how much driving and assisting the car can do. 00:17:35.040 |
- So there's kind of the, I think it's a good counterpart 00:17:37.040 |
to, say, what Elon is doing, and hopefully they can be frank 00:17:41.520 |
'cause I've heard both of them talk about it. 00:17:47.520 |
that watches over you as opposed to try to do everything. 00:17:50.520 |
I think there's some things like driving on a highway, 00:17:55.400 |
where it's mostly good weather, straight roads. 00:17:58.200 |
That's close to a solved problem, let's face it. 00:18:01.320 |
In other situations, you know, driving through the snow 00:18:06.320 |
and most importantly, you have to make a lot of judgments 00:18:09.520 |
at these intersections that aren't really right angles 00:18:17.160 |
and requires understanding human motivations. 00:18:27.600 |
and others where it could probably take decades. 00:18:40.720 |
we're not going to release anything until it's perfect, 00:18:55.080 |
I know there's very bright people on both sides 00:19:04.040 |
Some people thought that when the first few people died 00:19:07.480 |
from self-driving cars, that would shut down the industry, 00:19:16.080 |
that there could be setbacks if we do this wrong. 00:19:28.800 |
that this idea of really focusing on level four, 00:19:32.600 |
where you only go in areas that are well-mapped 00:19:45.040 |
and eventually they all become kind of interconnected. 00:19:47.960 |
And that could be a kind of another way of progressing 00:19:55.280 |
- I mean, that's kind of like the Waymo approach, 00:20:01.520 |
anyone from the public in the Phoenix, Arizona, 00:20:09.960 |
you can get a ride in a Waymo car with no person, no driver. 00:20:17.640 |
- Oh yeah, for a while now, there's been no safety driver. 00:20:23.200 |
but I thought it was kind of funny about a year ago 00:20:28.400 |
'cause the first safety driver would fall asleep. 00:20:39.520 |
They actually have a very interesting infrastructure 00:20:47.520 |
So they're not controlling the vehicles remotely, 00:20:49.880 |
but they're able to, it's like a customer service. 00:20:55.600 |
I bet they can probably remotely control it as well, 00:20:58.200 |
but that's officially not the function that they- 00:21:02.800 |
because I think the thing that's proven harder 00:21:10.880 |
So you can deal with 90, 99, 99.99% of the cases, 00:21:15.440 |
but then there's something that just never been seen before 00:21:18.880 |
And humans, more or less can work around that, 00:21:25.680 |
just in the United States and maybe a million worldwide. 00:21:30.040 |
But I think people have higher expectations of machines. 00:21:50.520 |
where people don't talk enough about creating products 00:22:11.440 |
Before he came along, electric cars were all kind of 00:22:15.680 |
and they were sort of wimpy cars that weren't fun. 00:22:22.600 |
he made a Roadster that went zero to 60 faster 00:22:25.640 |
than just about any other car and went the other end. 00:22:28.480 |
And I think that was a really wise marketing move 00:22:34.000 |
It's difficult to figure out what the right marketing move 00:22:46.360 |
I mean, to the chagrin of perhaps investors or whatever, 00:22:52.240 |
It also requires, you know, rethinking what you're doing. 00:22:54.360 |
I think way too many people are unimaginative, 00:23:04.040 |
Maybe we'll save some costs, we'll have less labor." 00:23:07.600 |
it's not necessarily the worst thing in the world to do, 00:23:09.480 |
but it's really not leading to a quantum change 00:23:20.160 |
"Okay, let's put a robot cashier where the human cashier is 00:23:39.680 |
of these general purpose technologies all through history. 00:23:43.560 |
And in my books, I write about like electricity 00:23:50.320 |
from the electrification of factories a century ago. 00:24:09.920 |
with the new business models, new business organization. 00:24:14.000 |
and it takes more creativity than most people have. 00:24:20.040 |
- Yeah, well, sure, I'll tell you what happened. 00:24:22.480 |
Before electricity, there were basically steam engines 00:24:42.360 |
all the equipment clustered around it, multi-story. 00:24:44.320 |
They'd have it vertical to minimize the distance 00:24:50.080 |
they got the biggest electric motor they could buy 00:24:57.400 |
It took until a generation of managers retired or died, 00:25:01.520 |
30 years later, that people started thinking, 00:25:05.320 |
"You can make electric motors, big, small, medium. 00:25:09.000 |
"You can put one with each piece of equipment." 00:25:12.920 |
between what they called group drive versus unit drive, 00:25:15.720 |
where every machine would have its own motor. 00:25:18.520 |
Well, once they did that, once they went to unit drive, 00:25:22.600 |
Then you started having a new kind of factory, 00:25:24.640 |
which is sometimes spread out over acres, single story. 00:25:29.000 |
And each piece of equipment has its own motor. 00:25:31.960 |
they weren't laid out based on who needed the most power. 00:25:39.600 |
Assembly line, let's have it go from this machine 00:26:13.040 |
I mean, one other interesting point about all that 00:26:26.600 |
I just wrote a paper with Chad Syverson and Daniel Rock 00:26:31.560 |
which basically shows that in a lot of these cases, 00:26:36.560 |
And that downward dip is when everyone's trying to like, 00:26:40.440 |
And you could say that they're creating knowledge 00:26:44.680 |
but that doesn't show up on anyone's balance sheet. 00:26:50.120 |
Like take self-driving cars, we were just talking about it. 00:26:52.520 |
There have been hundreds of billions of dollars 00:27:13.280 |
that they will get the upward part of the J-Curve 00:27:19.360 |
That's happening with a lot of other AI technologies, 00:27:25.920 |
we're having relatively low productivity growth lately. 00:27:29.280 |
As an economist, one of the things that disappoints me 00:27:31.440 |
is that as eye popping as these technologies are, 00:27:34.440 |
you and I are both excited about some things they can do. 00:27:36.920 |
The economic productivity statistics are kind of dismal. 00:27:51.360 |
And so that's not what you would have expected 00:27:55.520 |
But I think we're in kind of a long J-Curve there. 00:28:04.520 |
But the past decade has been a bit disappointing 00:28:08.520 |
if you thought there's a one-to-one relationship 00:28:10.000 |
between cool technology and higher productivity. 00:28:22.880 |
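A toy illustration of the productivity J-curve argument (all numbers invented for illustration; this is not the model in the Brynjolfsson, Rock, and Syverson paper): while firms pour effort into intangibles that no balance sheet records, measured output understates true output, so measured productivity dips before it rises.

```python
# Toy J-curve: productivity as measured output / inputs.
# Early years: 10 units of effort go into unmeasured intangibles (the dip).
# Later years: those intangibles pay off as measured output (the rise).
for year in range(8):
    intangible_effort = 10 if year < 4 else 0
    intangible_payoff = 0 if year < 4 else 30
    measured_output = 100 - intangible_effort + intangible_payoff
    inputs = 100
    print(year, round(measured_output / inputs, 2))
# Prints ~0.90 for the first four years, ~1.30 afterwards: a J shape.
```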
what has been so revolutionary in the last 10 years, 00:28:28.200 |
I would, 15 years, and thinking about the internet, 00:28:46.080 |
but I would expect to see some kind of big productivity 00:28:49.760 |
increases from just the connectivity between people 00:29:00.040 |
I've done quite a bit of research on actually, 00:29:01.840 |
is these free goods like Wikipedia, Facebook, Twitter, Zoom. 00:29:08.120 |
but almost everything else I do these days is online. 00:29:24.040 |
- Take a small pause and say, I donate to Wikipedia. 00:29:44.480 |
So, these digital goods that we're getting more and more, 00:29:59.920 |
I get a lot of value from watching cat videos 00:30:03.320 |
and reading Wikipedia articles and listening to podcasts, 00:30:12.440 |
since Simon Kuznets invented GDP and productivity, 00:30:50.420 |
GDP-B measures the benefits you get, not the cost. 00:30:55.400 |
If you get benefit from Zoom or Wikipedia or Facebook, 00:31:07.340 |
I think there is a lot of gain over the past decade 00:31:10.440 |
in these digital goods that doesn't show up in GDP, 00:31:22.040 |
you mismeasure productivity by the exact same amount. 00:31:30.160 |
And over the coming years, I think we'll see. 00:31:33.220 |
We're not gonna do away with GDP, it's very useful, 00:31:38.360 |
- How difficult is it to get that B in the GDP-B? 00:31:41.880 |
I mean, one of the reasons it hasn't been done before 00:31:48.960 |
but how do you measure what they would have paid, 00:31:57.440 |
And to do that, what we do is we can use online experiments. 00:32:03.040 |
We ask hundreds of thousands, now millions of people, 00:32:10.800 |
How much would I have to pay you to stop using your phone? 00:32:18.880 |
Like we pay somebody $30 to stop using Facebook, 00:32:26.240 |
Some people won't give it up even if you give them $100. 00:32:31.040 |
You get to see what all the different prices are 00:32:35.960 |
And not surprisingly, different people have different values. 00:32:38.240 |
We find that women tend to value Facebook more than men. 00:32:41.540 |
Old people tend to value it a little bit more 00:32:44.860 |
I think young people maybe know about other networks 00:32:46.600 |
that I don't know the name of that are better than Facebook. 00:33:01.260 |
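A minimal sketch of how that kind of massive online choice experiment can be summarized (simulated valuations and hypothetical dollar offers, not the actual study data): offer people different amounts to give up a service for a month, record who accepts, and read the median valuation off the resulting acceptance curve.

```python
import random
random.seed(0)

# Simulated, not real: each person has a hidden dollar value for a month
# of some free online service and accepts an offer only if it exceeds it.
valuations = [random.lognormvariate(3.5, 1.0) for _ in range(100_000)]

for offer in (10, 30, 100, 300):
    accept = sum(v <= offer for v in valuations) / len(valuations)
    print(f"offered ${offer}: {accept:.0%} would give the service up")

# The offer at which about half accept approximates the median consumer
# surplus, the kind of benefit a GDP-B style measure tries to capture.
```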
Is this work that will soon eventually be published? 00:33:07.020 |
in the Proceedings of the National Academy of Sciences 00:33:09.520 |
about, I think we call it Massive Online Choice Experiments. 00:33:11.860 |
I should remember the title, but it's on my website. 00:33:14.900 |
So yeah, we have some more papers coming out on it, 00:33:20.140 |
- You know, it's kind of a fascinating mystery 00:33:26.780 |
And it seems like almost none of them, except for YouTube, 00:33:31.420 |
have experimented with removing ads for money. 00:33:37.140 |
from both economics and the product perspective? 00:33:41.020 |
so I teach a course on digital business models. 00:33:43.260 |
So I used to at MIT, at Stanford, I'm not quite sure. 00:33:47.340 |
I'm still thinking what my course is gonna be. 00:33:50.020 |
But there are a lot of different business models. 00:33:52.220 |
And when you have something that has zero marginal cost, 00:33:56.420 |
especially if there's any kind of competition 00:34:05.540 |
you can have volunteer, you mentioned Wikipedia, 00:34:16.060 |
Actually, how do you, this podcast, how is this, 00:34:19.460 |
- There's sponsors at the beginning, and then, and people. 00:34:27.860 |
So if you wanna skip the sponsors, you're free. 00:34:42.940 |
those are people who are actually interested, you know? 00:34:48.340 |
and all of a sudden, all the car ads were like, 00:35:01.340 |
So there are a lot of these different revenue models. 00:35:09.500 |
when it's better to monetize it with charging people, 00:35:13.180 |
versus when you're better off doing advertising. 00:35:32.780 |
then you, advertising isn't gonna work as well, 00:35:47.580 |
there's just a lot of experimentation that's needed, 00:35:49.620 |
because sometimes things are a little counterintuitive, 00:36:06.100 |
and then they harvest the revenue from advertising. 00:36:08.980 |
So that's another way of kind of thinking about it. 00:36:12.060 |
- Is it strange to you that they haven't experimented? 00:36:17.660 |
about what the willingness is for people to pay. 00:36:23.580 |
it's gonna work out that they still are better off 00:36:36.380 |
the customer to decide exactly which model they prefer. 00:36:55.220 |
and everybody can kind of roll their own mix. 00:37:00.300 |
about how much advertising you want or are willing to take. 00:37:05.060 |
And if it's done right, and it's incentive compatible, 00:37:07.380 |
it could be a win-win where both the content provider 00:37:14.460 |
- Yeah, you know, the done right part is a really good point. 00:37:17.940 |
Like with Jeff Bezos and the single-click purchase on Amazon, 00:37:29.260 |
is I have to click so many times to subscribe to them 00:37:37.380 |
just because of the number of times I have to click. 00:37:44.580 |
I mean, another example is when you buy a new iPhone 00:37:48.900 |
I feel like, okay, I'm gonna lose an afternoon 00:37:51.420 |
just loading up and getting all my stuff back. 00:38:06.460 |
on making it more painless for us to buy your products. 00:38:30.420 |
and social networks are at the core, arguably, 00:38:37.460 |
and some of the most important things happening in society. 00:38:39.660 |
So it feels like it's important to get this right, 00:38:54.820 |
to try different business models, like really try. 00:39:01.300 |
I'm also worried that some of the business models 00:39:06.380 |
And Danny Kahneman talks about system one and system two, 00:39:20.860 |
our frontal cortex that's supposed to be more careful 00:39:27.860 |
I think there's a tendency for a lot of these 00:39:31.700 |
social networks to really exploit system one, 00:39:37.740 |
Make it so we just click on stuff and pass it on. 00:39:42.340 |
And that system, it tends to be driven by sex, violence, 00:39:47.340 |
disgust, anger, fear, these relatively primitive 00:39:53.860 |
Maybe they're important for a lot of purposes, 00:39:55.980 |
but they're not a great way to organize a society. 00:40:01.260 |
this huge, amazing information infrastructure we've had 00:40:04.380 |
that's connected billions of brains across the globe, 00:40:11.740 |
Arguably the most important thing that that network 00:40:21.660 |
not necessarily intentionally, is exactly the opposite. 00:40:24.700 |
My MIT colleagues, Sinan Aral and Deb Roy, and others at MIT 00:40:29.460 |
did a terrific paper on the cover of Science. 00:40:37.780 |
on social networks, they looked at a bunch of tweets 00:40:41.540 |
and retweets, and they found that false information 00:40:44.580 |
was more likely to spread further, faster to more people. 00:40:53.420 |
It's because people like things that are shocking, amazing. 00:41:00.340 |
Not that something everybody else already knew. 00:41:07.380 |
And so if you wanna find something unbelievable, 00:41:24.020 |
that wasn't necessarily driven by the algorithms. 00:41:29.460 |
you know, Zeynep Tufekci and others have pointed out 00:41:31.580 |
in YouTube, some of the algorithms unintentionally 00:41:34.180 |
were tuned to amplify more extremist content. 00:41:37.900 |
But in the study of Twitter that Sinan and Deb 00:41:41.140 |
and others did, they found that even if you took out 00:41:47.860 |
you still had lies spreading significantly faster. 00:41:52.580 |
that we just can't resist passing on this salacious content. 00:41:57.060 |
But I also blame the platforms because, you know, 00:42:01.260 |
there's different ways you can design a platform. 00:42:03.140 |
You can design a platform in a way that makes it easy 00:42:06.340 |
to spread lies and to retweet and spread things on, 00:42:15.380 |
you know, the guy who helped found Wikipedia. 00:42:32.340 |
you're more likely or less likely to have false news. 00:42:35.060 |
- Create a little bit of friction, like you said. 00:42:38.820 |
- It could be friction or it could be speeding the truth, 00:42:41.300 |
you know, either way, but, and I don't totally understand-- 00:42:49.460 |
like in academia, which is far, far from perfect, 00:42:52.500 |
but, you know, when someone has an important discovery, 00:42:55.620 |
it tends to get more cited and people kind of look to it 00:42:57.860 |
more and sort of, it tends to get amplified a little bit. 00:43:07.420 |
thinking about it, we can amplify truth over falsehoods. 00:43:10.780 |
And I'm disappointed in the heads of these social networks 00:43:16.660 |
or maybe haven't tried as hard to amplify truth. 00:43:19.500 |
And part of it, going back to what we said earlier, 00:43:21.540 |
is, you know, these revenue models may push them 00:43:25.100 |
more towards growing fast, spreading information rapidly, 00:43:34.580 |
- Yeah, I mean, implicit in what you're saying now 00:43:42.260 |
we can take a step towards greater and greater 00:44:05.020 |
And as a thought experiment, I've been thinking about 00:44:11.260 |
like the idea of truth is something we won't even have. 00:44:14.420 |
Do you think it's possible, like in the future, 00:44:17.820 |
that everything is on the table in terms of truth 00:44:21.020 |
and we're just swimming in this kind of digital economy 00:44:40.460 |
I don't think it's inevitable that it doesn't happen. 00:44:45.460 |
and I emphasize it in my books and elsewhere, 00:44:47.360 |
is that technology doesn't shape our destiny, 00:44:54.740 |
I hope that your audience is gonna take it upon themselves 00:44:58.540 |
as they design their products and they think about it, 00:45:10.980 |
and not abdicate and say, well, we just build the tools. 00:45:18.380 |
when they were working on the missiles in late World War II, 00:45:23.100 |
they said, well, our job is to make the missiles go up, 00:45:25.780 |
where they come down, that's someone else's department. 00:45:28.760 |
And that's obviously not the, I think it's obvious, 00:45:31.820 |
that's not the right attitude that technologists should have 00:45:40.580 |
we can avoid the kind of world that you just described 00:45:47.100 |
from a world of where people don't check facts 00:45:51.260 |
and where truth is relative and popularity or fame or money 00:46:01.860 |
that we've had so much progress over the past few hundred 00:46:04.500 |
years is the invention of the scientific method, 00:46:10.140 |
for finding truth and favoring things that are true 00:46:27.740 |
have done a lot better than the ones who haven't. 00:46:30.500 |
And so I'm hoping that people keep that in mind 00:46:32.780 |
and continue to try to embrace not just the truth, 00:46:40.480 |
if one were to try to build a competitor to Twitter, 00:46:47.300 |
Is there, I mean, the bigger, the meta question, 00:46:55.680 |
- Yeah, no, I think that the underlying premise 00:46:59.380 |
behind Twitter and all these networks is amazing 00:47:03.980 |
There's a sub part of Twitter called econ Twitter, 00:47:16.860 |
'cause it's really sped up the scientific process, 00:47:36.780 |
or the not so old days when you'd see it posted 00:47:43.960 |
and it's a real art to getting to the essence of things. 00:47:57.340 |
unfortunately, misinformation tends to spread 00:48:11.660 |
that explicitly create efforts at misinformation 00:48:15.680 |
and efforts at getting people to hate each other. 00:48:24.320 |
Nut picking is when you find like an extreme nut case 00:48:30.700 |
and make it seem like that's typical of the other side. 00:48:39.920 |
whether they're in the KKK or Antifa or whatever, 00:48:46.040 |
Like 12 people would see him and it'd be the end. 00:48:59.720 |
Let me tell all my friends about this terrible person. 00:49:11.560 |
and they're very clever about literally being 00:49:15.880 |
They would have some people pretend to be part of BLM, 00:49:18.640 |
some people pretend to be white nationalists, 00:49:21.040 |
and they would be throwing epithets at each other, 00:49:25.100 |
And they're literally playing both sides of it, 00:49:26.600 |
but their goal wasn't for one or the other to win. 00:49:32.000 |
So these tools can definitely be used for that. 00:49:36.580 |
It's been super destructive for our democracy 00:49:39.680 |
and our society, and the people who run these platforms, 00:49:48.700 |
to do a better job and to shut that stuff down. 00:49:52.960 |
but to design them in a way that, as I said earlier, 00:49:56.480 |
favors truth over falsehoods and favors positive types 00:50:21.800 |
is nut picking applied to individual statements 00:50:28.420 |
So basically, worst case analysis in computer science 00:50:45.400 |
I often talk about Hitler on this podcast with folks, 00:50:59.040 |
and history in long form is actually very fascinating 00:51:04.160 |
to think about, but I could see how that could be taken 00:51:09.160 |
totally out of context, and it's very worrying. 00:51:11.360 |
- Right, 'cause you think about these digital infrastructures, 00:51:12.840 |
they're not just ephemeral things, but they're sort of permanent. 00:51:16.560 |
someone can go back and find something you said 00:51:18.200 |
three years ago, perhaps jokingly, perhaps not. 00:51:21.120 |
Maybe you're just wrong, and you made a, you know? 00:51:22.840 |
And that becomes, they can use that to define you 00:51:26.880 |
And we all need to be a little more forgiving. 00:51:32.280 |
I was going through all my different friends, 00:51:41.040 |
- And I was like, there's nobody who's completely, 00:51:45.680 |
And so you just kinda learn to be a little bit tolerant 00:51:51.120 |
- Yeah, I wonder who the responsibility lays on there. 00:52:23.600 |
and spreading this kind of nut picking in all its forms. 00:52:27.280 |
No, and your point that we have to learn over time 00:52:35.200 |
nobody can design a platform that withstands that. 00:52:37.840 |
And every new technology people learn, it's dangerous. 00:52:48.200 |
maybe somebody invented a fire extinguisher later. 00:53:08.680 |
that are more likely to be successful than dangerous. 00:53:12.600 |
about how artificial intelligence might change our world. 00:53:21.280 |
again, it's impossible to predict the future, 00:53:44.600 |
for some significantly improved living standards, 00:53:53.120 |
You know, when I talked earlier about the J-curve, 00:53:54.920 |
it could take 10, 20, 30 years for an existing technology 00:54:03.800 |
vision systems, voice recognition, problem-solving systems, 00:54:13.480 |
And I think that's gonna lead to us being wealthier, 00:54:16.880 |
healthier, I mean, the healthcare is probably 00:54:18.800 |
one of the applications I'm most excited about. 00:54:23.780 |
I don't think we're gonna have the end of work anytime soon. 00:54:27.640 |
There's just too many things that machines still can't do. 00:54:31.000 |
When I look around the world and think of whether 00:54:32.880 |
it's childcare or healthcare, cleaning the environment, 00:54:39.080 |
artistic creativity, these are things that for now, 00:54:42.600 |
machines aren't able to do nearly as well as humans, 00:54:48.760 |
And many of these, I think, are gonna be years 00:54:54.760 |
You know, I may be surprised on some of them, 00:55:04.200 |
We'll have to, as machines are able to do some tasks, 00:55:07.880 |
people are gonna have to reskill and move into other areas. 00:55:12.760 |
for the next, you know, 10, 20, 30 years or more, 00:55:18.960 |
We'll get wealthier and people will have to do new skills. 00:55:22.440 |
Now, if you turn the dial further, I don't know, 00:55:29.640 |
Then it's possible that machines will be able to do 00:55:34.240 |
You know, say one or 200 years, I think it's even likely. 00:55:37.360 |
And at that point, then we're more in the sort of 00:55:41.040 |
Then we're in a world where there's really little 00:55:44.040 |
that humans can do economically better than machines, 00:55:49.900 |
And, you know, that will take a transition as well, 00:55:53.640 |
kind of more of a transition of how we get meaning in life 00:56:00.440 |
I mean, that should be like great, great news. 00:56:02.760 |
And it kind of saddens me that some people see that 00:56:05.560 |
I think it should be wonderful if people have all the health 00:56:16.840 |
and doing all the other things that don't require work. 00:56:19.480 |
- Do you think you'll be surprised to see what the 20, 00:56:29.600 |
like if I gave you a month to like talk to people, 00:56:34.160 |
would you be able to understand what the hell is going on? 00:56:43.040 |
So like, so I'll give you one thought experiment is like, 00:56:52.480 |
You know, I've played around with some of those VR headsets 00:57:04.440 |
compared to what they could be in 30 or 50 years, 00:57:16.360 |
and we'd all, you know, in some ways be as rich as we wanted 00:57:29.980 |
you've had Elon Musk on and others, you know, 00:57:34.120 |
makes the simulation argument that maybe we're already there. 00:57:37.720 |
So, but in general, or do you not even think about 00:57:41.200 |
in this kind of way, you're self-critically thinking 00:57:52.000 |
I feel reasonably comfortable next, you know, 00:58:01.720 |
artificial intelligence, kind of by definition, 00:58:08.280 |
and create a world that I couldn't even imagine. 00:58:28.000 |
We really hope that their values are aligned with our values 00:58:32.440 |
and it's even tricky to finding what our values are. 00:58:34.480 |
I mean, first off, we all have different values. 00:58:41.640 |
Like, you know, I like to think that we have better values 00:58:53.360 |
And it may be that if I thought about it more deeply, 00:59:24.320 |
ferociously destructive, whether it's in nanotechnology 00:59:39.240 |
that could be devastating or even existential 00:59:45.200 |
So that's a branch that I think is pretty significant. 00:59:50.200 |
And there are those who think that one of the reasons 00:59:54.240 |
we haven't been contacted by other civilizations, right, 01:00:19.040 |
out there in the universe, or at least in our galaxy, 01:00:24.880 |
one of the great filters or some of the great filters 01:00:30.040 |
- Yeah, no, I think it's Robin Hanson has a good way of, 01:00:32.240 |
maybe others, they have a good way of thinking about this, 01:00:33.920 |
that if there are no other intelligence creatures out there 01:00:40.640 |
one possibility is that there's a filter ahead of us. 01:00:50.600 |
The other one is the great filter's behind us. 01:00:56.400 |
don't even evolve life, or if they don't evolve life, 01:01:02.080 |
and so now maybe we're on the good side of the great filter. 01:01:05.720 |
- So, if we sort of rewind back and look at the thing 01:01:10.520 |
where we could say something a little bit more comfortably 01:01:24.800 |
in terms of artificial intelligence that it might have. 01:01:28.320 |
It's a fascinating question of what kind of jobs are safe, 01:01:45.760 |
which is AI that can just do the full breadth 01:01:49.560 |
but we do have human level or superhuman level 01:02:04.480 |
We actually set out to address that question. 01:02:13.440 |
And we went and interviewed a whole bunch of AI experts 01:02:19.920 |
machine learning was good at and wasn't good at. 01:02:25.560 |
basically a set of questions you can ask about any task 01:02:28.160 |
that will tell you whether it's likely to score high or low 01:02:39.080 |
in the US economy, believe it or not, it's called O*NET. 01:02:45.040 |
They divide the economy into about 970 occupations, 01:02:57.640 |
Like for radiologists, there are 27 distinct tasks. 01:03:02.220 |
to see whether or not a machine could do them. 01:03:10.280 |
So, what we found was that there was no occupation 01:03:13.800 |
in our data set where machine learning just ran the table 01:03:22.160 |
Like take radiology, a lot of people I hear saying, 01:03:26.720 |
And one of the 27 tasks is read medical images, 01:03:29.920 |
really important one, like it's kind of a core job. 01:03:35.920 |
There was just an article in Nature last week, 01:03:40.880 |
showing that machine learning can do as well as humans 01:03:51.120 |
they sometimes administer conscious sedation, 01:03:57.320 |
and explain to the other doctors or to the patients. 01:04:02.520 |
machine learning isn't really up to snuff yet. 01:04:05.560 |
So, that job, we're gonna see a lot of restructuring. 01:04:09.300 |
Parts of the job, they'll hand over to machines, 01:04:13.160 |
That's been more or less the pattern all of them. 01:04:17.080 |
we see a lot of restructuring, reorganization of work. 01:04:22.280 |
it is a great time for smart entrepreneurs and managers 01:04:33.120 |
the kinds of tasks that machines tend to be good at 01:04:42.540 |
If you have a lot of data on the Xs and the Ys, 01:04:45.700 |
you can do that kind of mapping and find the relationships. 01:04:53.660 |
emotional intelligence and human interactions, 01:05:05.020 |
But even asking the right questions, that's hard. 01:05:12.940 |
Apparently, Pablo Picasso was shown an early computer 01:05:22.740 |
And to him, the interesting thing was asking the questions. 01:05:26.780 |
- Yeah, try to replace me, GPT-3, I dare you. 01:05:46.280 |
aggregating the information of how replaceable the job is. 01:05:50.280 |
- That's the suitability for machine learning index, exactly. 01:05:52.480 |
So we have all 970 occupations on that chart. 01:05:59.240 |
have some occupations, but there is a definite pattern, 01:06:02.760 |
which is the lower wage occupations tend to have more tasks 01:06:05.720 |
that are suitable for machine learning, like cashiers. 01:06:08.000 |
I mean, anyone who's gone to a supermarket or CVS 01:06:12.440 |
but they can recognize an apple and an orange 01:06:23.560 |
that are among the highest paid in our economy, 01:06:26.640 |
but also a lot of them are suitable for machine learning. 01:06:36.100 |
maybe not as much as some of us think they should be. 01:06:48.500 |
And I should say, I didn't like create that data. 01:06:54.440 |
And over time, that scatter plot will be updated 01:06:59.920 |
But it was just interesting to see the pattern there. 01:07:05.120 |
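A rough sketch of how a task-level rubric rolls up into an occupation-level suitability-for-machine-learning score, following the structure described above; the rubric scores and task lists below are hypothetical placeholders, not the real O*NET data or the published index.

```python
# Hypothetical rubric scores per task (1 = poorly suited to ML, 5 = well suited).
occupation_tasks = {
    "radiologist": {
        "read medical images": 4.6,          # pattern recognition: ML-friendly
        "administer conscious sedation": 1.5,
        "explain findings to patients": 1.8,
    },
    "cashier": {
        "recognize and scan items": 4.8,
        "process payments": 4.5,
        "resolve customer disputes": 2.0,
    },
}

def sml_score(task_scores):
    """Occupation-level score = average of its task-level rubric scores."""
    return sum(task_scores.values()) / len(task_scores)

for occupation, tasks in occupation_tasks.items():
    print(f"{occupation}: SML ~ {sml_score(tasks):.2f}")
# No occupation here is wall-to-wall automatable; the scores differ task by
# task, which is why the conclusion is reorganization rather than replacement.
```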
as if you just take the technology as it is today, 01:07:21.040 |
one of the people that have been speaking about it 01:08:03.800 |
And the thing that he really made the centerpiece 01:08:15.920 |
although I'm beginning to come back a little bit. 01:08:17.360 |
So let me tell you a little bit of my evolution. 01:08:43.320 |
And you know, you can deal with the need thing 01:09:17.200 |
So it's not as simple as just writing people a check. 01:09:32.240 |
that there's just like not enough work to be done. 01:09:46.480 |
There's so much work that require at least partly, 01:09:52.280 |
So rather than like write all these people off, 01:09:58.240 |
Now that said, I would like to see more buying power 01:10:20.680 |
while some other people who have been super smart 01:10:30.840 |
And I don't think they need those hundreds of billions 01:10:33.720 |
to have the right incentives to invent things. 01:10:35.680 |
I think if you talk to almost any of them, as I have, 01:10:38.480 |
they don't think that they need an extra $10 billion 01:10:50.720 |
- I mean, you know, an interesting point to make is, 01:10:55.160 |
would have founded Microsoft if tax rates were 70%? 01:10:59.480 |
because there were tax rates of 70% when he founded it. 01:11:03.000 |
You know, so I don't think that's as big a deterrent 01:11:06.040 |
and we could provide more buying power to people. 01:11:08.960 |
My own favorite tool is the earned income tax credit, 01:11:12.640 |
which is basically a way of supplementing income 01:11:22.240 |
but the earned income tax credit encourages employment 01:11:51.680 |
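For readers unfamiliar with the mechanism just mentioned, here is a schematic of how an earned-income-style credit phases in with earnings, plateaus, and then phases out; the rates and thresholds are made-up placeholders, not the actual EITC schedule.

```python
def earned_income_credit(earnings,
                         phase_in_rate=0.34,     # hypothetical parameters,
                         max_credit=3_500,       # not the real EITC schedule
                         phase_out_start=20_000,
                         phase_out_rate=0.16):
    """Credit rises with earnings, plateaus at max_credit, then phases out."""
    credit = min(phase_in_rate * earnings, max_credit)
    if earnings > phase_out_start:
        credit -= phase_out_rate * (earnings - phase_out_start)
    return max(credit, 0.0)

for earnings in (0, 5_000, 15_000, 25_000, 45_000):
    print(earnings, round(earned_income_credit(earnings)))
# Because the credit only grows when you earn more (up to the cap),
# it supplements low wages while still rewarding work.
```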
where he suggested instead of a universal basic income, 01:11:55.760 |
he suggested, or instead of an unconditional basic income, 01:12:00.400 |
where the condition is you learn some new skills, 01:12:04.880 |
so let's make it easier for people to find ways 01:12:08.920 |
to get those skills and get rewarded for doing them. 01:12:25.920 |
But I think, I guess you're speaking to the intuition 01:12:31.160 |
like there needs to be some incentive to reskill, 01:12:35.960 |
I mean, there are lots of self-motivated people, 01:12:37.840 |
but there are also people that maybe need a little guidance 01:12:47.120 |
to know what is the new area I should be learning skills in. 01:12:50.480 |
And we could provide a much better set of tools 01:12:54.360 |
okay, here's a set of skills you already have. 01:12:58.000 |
Let's create a path for you to go from where you are 01:13:02.120 |
- So I'm a total, how do I put it nicely about myself? 01:13:09.520 |
It's not totally true, but pretty good approximation. 01:13:26.680 |
or some fundamental problems about our economy, 01:13:33.440 |
- You know, I definitely think our whole tax system, 01:13:47.320 |
I don't think we need to totally reinvent stuff. 01:13:56.360 |
that have worked really well in the 20th century 01:14:03.920 |
investing in infrastructure, welcoming immigrants, 01:14:07.480 |
having a tax system that was more progressive and fair 01:14:18.000 |
And they've come down a lot to the point where 01:14:23.360 |
So, and we could do things like earned income tax credit 01:14:31.400 |
What that means is you tax things that are bad 01:14:42.200 |
because one of the basic principles of economics, 01:14:44.040 |
if you tax something, you tend to get less of it. 01:14:46.360 |
So, you know, right now we still tax things like work, 01:14:51.160 |
but instead we should be taxing things like pollution 01:14:55.920 |
And if we did that, we would have less pollution. 01:15:02.080 |
almost every economist would say it's a no brainer, 01:15:16.000 |
and of course a lot of Democratic economists agree as well. 01:15:22.720 |
we could raise hundreds of billions of dollars. 01:15:28.520 |
through an earned income tax credit or other things 01:15:31.120 |
so that overall our tax system would become more progressive. 01:15:36.920 |
One of the things that kills me as an economist 01:15:44.760 |
- You could just visualize the cost and productivity. 01:15:47.000 |
- Exactly, because they are taking costs from me 01:16:00.680 |
the traffic just flows 'cause they have a congestion tax. 01:16:03.600 |
They invited me and others to go talk to them. 01:16:09.200 |
I'd be paying a congestion tax instead of paying my time, 01:16:11.680 |
but that money would now be available for healthcare, 01:16:18.600 |
So it saddens me when you're sitting in a traffic jam, 01:16:23.280 |
it's like taxing me and then taking that money 01:16:25.000 |
and dumping it in the ocean, just like destroying it. 01:16:27.760 |
So there are a lot of things like that that economists, 01:16:33.880 |
most good economists, would probably agree with me on, 01:16:40.960 |
and our whole economy would become much more efficient. 01:16:43.720 |
It'd become fair, invest in R&D and research, 01:16:46.960 |
which is close to a free lunch is what we have. 01:16:53.120 |
got the Nobel Prize, not yesterday, but 30 years ago, 01:16:57.320 |
for describing that most improvements in living standards 01:17:04.560 |
for noting that investments in R&D and human capital 01:17:11.040 |
So if we do that, then we'll be healthier and wealthier. 01:17:20.360 |
It seemed from all the plots I saw that R&D is an obvious, 01:17:29.040 |
It seemed like obvious that we should do more research. 01:17:39.480 |
It'd be great if everybody did more research. 01:17:41.440 |
And I would make a distinction here between applied development, 01:17:57.120 |
If they make a better self-driving car system, 01:18:21.640 |
it's very hard for them to capture the benefits from it. 01:18:23.880 |
It's shared by everybody, which is great in a way, 01:18:26.720 |
but it means that they're not gonna have the incentives 01:18:32.960 |
There you need the government to be involved in it. 01:18:35.120 |
And the US government used to be investing much more in R&D, 01:18:39.360 |
but we have slashed that part of the government 01:18:50.000 |
We're not having the kind of scientific progress 01:18:55.840 |
eating the seed corn, whatever metaphor you wanna use, 01:19:00.240 |
where people grab some money, put it in their pockets today, 01:19:07.120 |
they're a lot poorer than they otherwise would have been. 01:19:10.160 |
- So we're living through a pandemic right now 01:19:18.840 |
how do you think this pandemic will change the world? 01:19:26.520 |
how many people have suffered, the amount of death, 01:19:31.280 |
It's also striking just the amount of change in work 01:19:47.080 |
The most obvious one is the shift to remote work. 01:19:50.200 |
And I and many other people stopped going into the office 01:19:56.160 |
I did a study on this with a bunch of colleagues 01:19:59.200 |
And what we found was that before the pandemic 01:20:05.400 |
a little over 15% of Americans were working remotely. 01:20:08.640 |
When the pandemic hit, that grew steadily and hit 50%, 01:20:20.480 |
If you're an information worker, a professional, 01:20:24.360 |
then you're much more likely to work at home. 01:20:28.760 |
working with other people or physical things, 01:20:34.480 |
And instead, those people were much more likely 01:20:39.240 |
So it's been something that's had very disparate effects 01:21:03.440 |
I personally, for instance, I moved my seminars, 01:21:11.600 |
- Yeah, I mean, obviously we were able to reach 01:21:18.480 |
just all over the United States for that matter. 01:21:27.160 |
who might've been a little shy about speaking up, 01:21:29.360 |
we now kind of have more of ability for lots of voices 01:21:32.680 |
and they're answering each other's questions. 01:21:35.920 |
Like if someone had some question about, you know, 01:21:40.640 |
then someone else in the chat would answer it. 01:21:42.600 |
And the whole thing just became like a higher bandwidth, 01:21:58.000 |
I mean, all the terrible things we've seen with COVID 01:22:00.200 |
and the real failure of many of our institutions 01:22:05.040 |
One area that's been a bright spot is our technologies. 01:22:11.920 |
and all of our email and other tools have just, 01:22:29.080 |
And I think that we're going to have a fair amount 01:22:40.800 |
is that the people with lots of followers on Twitter 01:22:51.440 |
people that can, voices that can be magnified by, 01:22:55.280 |
you know, reporters and all that kind of stuff 01:23:14.800 |
whose jobs are disturbed profoundly by this pandemic, 01:23:21.200 |
but they don't have many followers on Twitter. 01:23:31.840 |
but I've been reading "The Rise and Fall of the Third Reich" 01:23:31.840 |
is like what is this suffering going to materialize itself 01:23:58.080 |
Is that something you worry about, think about? 01:24:01.040 |
- It's like the center of what I worry about. 01:24:05.400 |
You know, there's a moral and ethical aspect to it 01:24:09.340 |
I mean, I share the values of, I think, most Americans, 01:24:16.620 |
And we would like to see people not falling behind 01:24:20.220 |
and they have fallen behind, not just due to COVID, 01:24:29.880 |
And the incomes of the top 1% have skyrocketed. 01:24:33.360 |
And part of that is due to the ways technology 01:24:37.920 |
our political system has continually shifted more wealth 01:24:42.480 |
into those people who have the powerful interest. 01:24:55.060 |
But the second thing is that there's a real political risk. 01:25:11.420 |
they wanna smash the system in different ways. 01:25:13.680 |
In 2016 and 2018, and now, I think there are a lot of people 01:25:23.740 |
Unfortunately, demagogues have harnessed that 01:25:28.160 |
in a way that is pretty destructive to the country. 01:25:32.040 |
And an analogy I see is what happened with trade. 01:25:42.440 |
almost by definition, they're both better off 01:25:56.240 |
in some of the people who didn't have the skills 01:26:13.480 |
but then you compensate the people who were hurt. 01:26:23.240 |
but you can also compensate those who don't win. 01:26:28.500 |
What happened was that we didn't fulfill that promise. 01:26:38.000 |
but we didn't compensate the people who were hurt. 01:26:43.840 |
reneged on the bargain, and I think they did. 01:26:45.960 |
And so then there's a backlash against trade. 01:27:07.140 |
Technology has a lot of similar characteristics. 01:27:16.180 |
It creates wealth and health, but it can also be uneven. 01:27:22.940 |
even a majority of people, to get left behind 01:27:41.020 |
Mathematically, everyone could be better off. 01:27:45.460 |
And again, people are saying, this isn't working for us. 01:27:48.980 |
And again, instead of fixing the distribution, 01:28:01.300 |
And there were the Luddites almost exactly 200 years ago 01:28:04.780 |
who smashed the looms and the spinning machines 01:28:08.060 |
because they felt like those machines weren't helping them. 01:28:14.740 |
but to do the thing that is gonna save the country, 01:28:17.580 |
which is make sure that we create not just prosperity, 01:28:21.680 |
- So you've been at MIT for over 30 years, I think. 01:29:01.980 |
was not just the weather, but also Silicon Valley, 01:29:04.940 |
let's face it, is really more of the epicenter 01:29:12.340 |
A lot of it is being invented at MIT, for that matter, 01:29:12.340 |
But being a little closer to some of the key technologists 01:29:36.660 |
and my eyes were burning, the sky was orange, 01:29:41.340 |
And so it wasn't exactly what I'd been promised, 01:29:44.460 |
but fingers crossed it'll get back to better. 01:29:50.700 |
there's been some criticism of academia and universities 01:29:55.740 |
And I, as a person who's gotten to enjoy universities 01:30:00.780 |
from the pure playground of ideas that it can be, 01:30:04.380 |
always kind of try to find the words to tell people 01:30:17.020 |
that is beautiful or powerful about universities? 01:30:24.540 |
economists have this concept called revealed preference. 01:30:33.140 |
I'm out here, I could be doing lots of other things, 01:30:37.700 |
And I think the word magical is exactly right, 01:30:50.260 |
I love to have conversations like this with you 01:30:51.940 |
and with my students, with my fellow colleagues. 01:30:53.860 |
I love being around the smartest people I can find 01:31:02.580 |
And every day I find something new and exciting to work on. 01:31:05.620 |
And a university environment is really filled 01:31:09.940 |
And so I feel very fortunate to be part of it. 01:31:22.900 |
And I appreciate that it's not necessarily easy 01:31:25.460 |
for everybody to have a job that they both love. 01:31:29.500 |
So there are things that don't go well in academia, 01:31:36.140 |
kinder, gentler version of a lot of the world. 01:31:46.100 |
Of course there's harsh debates and discussions about things 01:31:55.700 |
It's not my thing and so it doesn't affect me 01:31:57.660 |
most of the time, sometimes a little bit maybe. 01:32:09.300 |
that are just doing stuff that I learned from. 01:32:19.340 |
And that's really, to me, actually I really enjoy that, 01:32:23.060 |
being in a room with lots of other smart people. 01:32:25.140 |
And Stanford has made it very easy to attract those people. 01:32:30.140 |
I just say I'm gonna do a seminar or whatever 01:32:33.700 |
and the people come, they come and wanna work with me. 01:32:45.060 |
And we feel like we're working on important problems 01:32:47.820 |
and we're doing things that I think are first order 01:32:58.060 |
What three books, technical, fiction, philosophical, 01:33:02.220 |
you've enjoyed, had a big impact in your life? 01:33:10.020 |
and I read "Sidartha" which is a philosophical book 01:33:17.380 |
Don't get too wrapped up in material things or other things 01:33:21.220 |
and just sort of try to find peace on things. 01:33:27.620 |
"The Worldly Philosophers" by Robert Heilbroner. 01:33:31.700 |
It goes through a series of different economists 01:33:34.980 |
It probably sounds boring, but it did describe 01:33:37.860 |
whether it's Adam Smith or Karl Marx or John Maynard Keynes 01:33:40.900 |
and each of them sort of what their key insights were 01:33:50.660 |
how they grappled with the big questions of the world. 01:33:53.140 |
- So would you recommend it as a good whirlwind overview 01:33:59.100 |
It kinda takes you through the different things 01:34:04.060 |
thinking some of the strengths and weaknesses. 01:34:06.420 |
I mean, probably it's a little out of date now. 01:34:07.900 |
It needs to be updated a bit, but you could at least look 01:34:10.220 |
through the first couple hundred years of economics 01:34:21.340 |
You should have him on your podcast if you haven't already. 01:34:33.460 |
and he makes you think about profound things. 01:34:38.560 |
and so that's been a great book and I learn a lot from it. 01:34:44.380 |
even though he's so brilliant, that everyone can understand, 01:34:50.020 |
That's three, but let me mention maybe one or two others. 01:34:58.580 |
It made me optimistic about how we can continue 01:35:07.820 |
because of technology, because of digitization 01:35:21.360 |
that I found kind of useful, people, is "Atomic Habits." 01:35:30.420 |
and most of the sentences I read in that book, 01:35:34.460 |
but it just really helps to have somebody remind you 01:35:52.540 |
You know, one, atomic means they're really small 01:35:55.540 |
but also like atomic power, it can have big impact. 01:36:04.220 |
especially to ask an economist, but also a human being, 01:36:09.300 |
- I hope you've gotten the answer to that from somebody. 01:36:14.780 |
You know, I actually learned a lot from my son, Luke, 01:36:18.140 |
and he's 19 now, but he's always loved philosophy, 01:36:22.180 |
and he reads way more sophisticated philosophy than I do. 01:36:24.940 |
I once took him to Oxford and he spent the whole time 01:36:26.820 |
pulling all these obscure books down and reading them. 01:36:29.020 |
And a couple years ago, we had this argument, 01:36:32.600 |
and he was trying to convince me that hedonism 01:36:34.520 |
was the ultimate meaning of life, just pleasure seeking. 01:36:44.420 |
But he made a really good intellectual argument for it too. 01:36:50.180 |
And I think that while I am kind of a utilitarian, 01:36:54.520 |
like I do think we should do the greatest good 01:36:56.060 |
for the greatest number, that's just too shallow. 01:36:58.740 |
And I think I've convinced myself that real happiness 01:37:17.620 |
if you look right at it, it kind of disappears, 01:37:23.180 |
that are better at absorbing light can pick it up better. 01:37:27.440 |
I think you need to sort of find something other goal, 01:37:46.160 |
and I'm not like an evolutionary psychologist, 01:37:50.900 |
not just for pleasure, but we're social animals, 01:38:10.500 |
than if we just do something selfish and shallow. 01:38:14.480 |
I don't think there's a better way to end it. 01:38:30.960 |
and people should definitely read your other books. 01:38:33.140 |
And I think we're all part of the invisible college, 01:38:36.460 |
We're all part of this intellectual and human community 01:38:49.460 |
with Erik Brynjolfsson, and thank you to our sponsors. 01:38:56.860 |
Four Sigmatic, the maker of delicious mushroom coffee. 01:39:05.240 |
And Cash App, the app I use to send money to friends. 01:39:09.140 |
Please check out these sponsors in the description 01:39:11.180 |
to get a discount and to support this podcast. 01:39:14.900 |
If you enjoy this thing, subscribe on YouTube, 01:39:32.820 |
"that our technology has exceeded our humanity." 01:39:37.500 |
Thank you for listening, and hope to see you next time.