
Erik Brynjolfsson: Economics of AI, Social Networks, and Technology | Lex Fridman Podcast #141


Chapters

0:00 Introduction
2:56 Exponential growth
7:24 Elon Musk exponential thinking
9:41 Moore's law is a series of revolutions
15:03 GPT-3
16:42 Autonomous vehicles
23:43 Electricity
28:12 Productivity
33:19 Why are Twitter and Facebook free?
43:36 Dismantling the nature of truth
46:56 Nutpicking and Cancel Culture
53:11 How will AI change our world
59:12 Existential threats
61:05 AI and the nature of work
67:11 Thoughts on Andrew Yang and UBI
73:03 Economics of innovation
79:09 Effect of COVID on the economy
88:22 MIT and Stanford
92:56 Book recommendations
96:01 Meaning of life

Whisper Transcript

00:00:00.000 | The following is a conversation with Erik Brynjolfsson.
00:00:03.280 | He's an economics professor at Stanford
00:00:05.880 | and the director of Stanford's Digital Economy Lab.
00:00:09.440 | Previously, he was a long, long time professor at MIT
00:00:13.520 | where he did groundbreaking work
00:00:15.240 | on the economics of information.
00:00:17.800 | He's the author of many books,
00:00:19.840 | including "The Second Machine Age"
00:00:22.040 | and "Machine, Platform, Crowd,"
00:00:24.600 | co-authored with Andrew McAfee.
00:00:27.600 | Quick mention of each sponsor,
00:00:29.200 | followed by some thoughts related to the episode.
00:00:31.720 | Vincero Watches,
00:00:33.040 | the maker of classy, well-performing watches.
00:00:36.120 | Four Sigmatic,
00:00:37.280 | the maker of delicious mushroom coffee.
00:00:39.880 | ExpressVPN,
00:00:41.240 | the VPN I've used for many years
00:00:42.960 | to protect my privacy on the internet.
00:00:45.080 | And Cash App,
00:00:46.680 | the app I use to send money to friends.
00:00:49.080 | Please check out these sponsors in the description
00:00:51.040 | to get a discount and to support this podcast.
00:00:54.640 | As a side note,
00:00:55.640 | let me say that the impact of artificial intelligence
00:00:58.120 | and automation on our economy and our world
00:01:01.840 | is something worth thinking deeply about.
00:01:04.520 | Like with many topics that are linked
00:01:06.440 | to predicting the future evolution of technology,
00:01:09.200 | it is often too easy to fall into one of two camps,
00:01:12.720 | the fear-mongering camp
00:01:14.840 | or the technological utopianism camp.
00:01:18.320 | As always, the future will land us somewhere in between.
00:01:21.640 | I prefer to wear two hats in these discussions
00:01:24.400 | and alternate between them often.
00:01:26.600 | The hat of a pragmatic engineer
00:01:29.600 | and the hat of a futurist.
00:01:32.000 | This is probably a good time to mention Andrew Yang,
00:01:35.160 | the presidential candidate
00:01:36.720 | who has been one of the high-profile thinkers on this topic.
00:01:41.280 | And I'm sure I will speak with him
00:01:42.880 | on this podcast eventually.
00:01:44.840 | A conversation with Andrew has been on the table many times.
00:01:48.760 | Our schedules just haven't aligned,
00:01:50.640 | especially because I have a strongly held preference
00:01:54.520 | for long form, two, three, four hours or more,
00:01:58.240 | and in person.
00:02:00.200 | I work hard to not compromise on this.
00:02:03.120 | Trust me, it's not easy.
00:02:05.000 | Even more so in the times of COVID,
00:02:07.280 | which requires getting tested nonstop,
00:02:09.840 | staying isolated and doing a lot of costly
00:02:12.680 | and uncomfortable things that minimize risk for the guest.
00:02:15.960 | The reason I do this is because to me,
00:02:18.000 | something is lost in remote conversation.
00:02:20.960 | That's something, that magic, I think is worth the effort,
00:02:25.360 | even if it ultimately leads to a failed conversation.
00:02:29.640 | This is how I approach life,
00:02:31.560 | treasuring the possibility of a rare moment of magic.
00:02:36.080 | I'm willing to go to the ends of the world
00:02:38.480 | for just such a moment.
00:02:39.880 | If you enjoy this thing, subscribe on YouTube,
00:02:43.320 | review it with 5 Stars on Apple Podcasts,
00:02:45.560 | follow on Spotify, support on Patreon,
00:02:48.200 | connect with me on Twitter @lexfridman.
00:02:51.280 | And now, here's my conversation with Erik Brynjolfsson.
00:02:55.320 | You posted a quote on Twitter by Albert Bartlett,
00:03:00.000 | saying that the greatest shortcoming of the human race
00:03:03.440 | is our inability to understand the exponential function.
00:03:06.520 | Why would you say the exponential growth
00:03:09.880 | is important to understand?
00:03:11.240 | - Yeah, that quote, I remember posting that.
00:03:15.200 | It's actually a reprise of something
00:03:16.840 | Andy McAfee and I said in "The Second Machine Age,"
00:03:19.440 | but I posted it in early March
00:03:21.440 | when COVID was really just beginning to take off,
00:03:23.920 | and I was really scared.
00:03:25.840 | There were actually only a couple dozen cases,
00:03:28.320 | maybe less at that time,
00:03:30.000 | but they were doubling every two or three days,
00:03:32.240 | and I could see, oh my God, this is gonna be a catastrophe,
00:03:35.480 | and it's gonna happen soon.
00:03:37.040 | But nobody was taking it very seriously,
00:03:38.960 | or not a lot of people were taking it very seriously.
00:03:41.000 | In fact, I remember I did my last
00:03:42.640 | in-person conference that week.
00:03:45.800 | I was flying back from Las Vegas,
00:03:47.880 | and I was the only person on the plane wearing a mask,
00:03:50.880 | and the flight attendant came over to me.
00:03:52.360 | She looked very concerned.
00:03:53.280 | She kinda put her hands on my shoulder.
00:03:54.440 | She was touching me all over, which I wasn't thrilled about,
00:03:56.640 | and she goes, "Do you have some kind of anxiety disorder?
00:03:59.440 | "Are you okay?"
00:04:00.560 | And I was like, "No, it's 'cause of COVID."
00:04:03.000 | - This is early March.
00:04:04.120 | - Early March.
00:04:05.240 | But I was worried because I knew I could see,
00:04:08.600 | or I suspected, I guess, that that doubling would continue,
00:04:12.960 | and it did, and pretty soon
00:04:14.120 | we had thousands of times more cases.
00:04:17.240 | Most of the time when I use that quote,
00:04:18.720 | I try to, it's motivated by more optimistic things
00:04:21.360 | like Moore's Law and the wonders
00:04:23.280 | of having more computer power.
00:04:25.640 | But in either case, it can be very counterintuitive.
00:04:28.720 | I mean, if you walk for 10 minutes,
00:04:31.680 | you get about 10 times as far away
00:04:33.160 | as if you walk for one minute.
00:04:34.960 | That's the way our physical world works.
00:04:36.160 | That's the way our brains are wired.
00:04:38.280 | But if something doubles for 10 times as long,
00:04:41.720 | you don't get 10 times as much.
00:04:43.320 | You get 1,000 times as much.
00:04:45.480 | And after 20, it's a billion.
00:04:47.280 | After 30, it's a, no, sorry, after 20, it's a million.
00:04:50.920 | After 30, it's a billion.
00:04:53.600 | And pretty soon after that, it just gets to these numbers
00:04:55.680 | that you can barely grasp.
00:04:57.840 | Our world is becoming more and more exponential,
00:05:00.840 | mainly because of digital technologies.
00:05:03.640 | So more and more often, our intuitions are out of whack,
00:05:06.520 | and that can be good in the case of things creating wonders,
00:05:10.960 | but it can be dangerous in the case of viruses
00:05:13.720 | and other things.
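
To make the doubling arithmetic above concrete, here is a minimal sketch (not from the conversation; the step counts are arbitrary) comparing linear growth with repeated doubling:

```python
# Linear growth vs. repeated doubling: the intuition gap described above.
for steps in (1, 10, 20, 30):
    linear = steps        # walking 10x as long gets you about 10x as far
    doubled = 2 ** steps  # 10 doublings ~ 1,000x; 20 ~ 1,000,000x; 30 ~ 1,000,000,000x
    print(f"{steps:2d} steps -> linear {linear:>2}x, doubling {doubled:>13,}x")
```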
00:05:14.640 | - Do you think it generally applies?
00:05:16.440 | Like, is there spaces where it does apply
00:05:18.320 | and where it doesn't?
00:05:19.520 | How are we supposed to build an intuition
00:05:21.800 | about in which aspects of our society
00:05:25.400 | does exponential growth apply?
00:05:27.520 | - Well, you can learn the math,
00:05:29.680 | but the truth is our brains, I think,
00:05:32.240 | tend to learn more from experiences.
00:05:35.520 | So we just start seeing it more and more often.
00:05:37.640 | So hanging around Silicon Valley,
00:05:39.560 | hanging around AI and computer researchers,
00:05:41.760 | I see this kind of exponential growth
00:05:43.720 | a lot more frequently.
00:05:45.000 | And I'm getting used to it, but I still make mistakes.
00:05:46.960 | I still underestimate some of the progress
00:05:48.880 | in just talking to someone about GPT-3
00:05:51.120 | and how rapidly natural language has improved.
00:05:54.240 | But I think that as the world becomes more exponential,
00:05:58.440 | we'll all start experiencing it more frequently.
00:06:01.960 | The danger is that we may make some mistakes
00:06:04.840 | in the meantime using our old kind of caveman intuitions
00:06:07.880 | about how the world works.
00:06:09.480 | - Well, the weird thing is it always kind of looks linear
00:06:11.960 | in the moment.
00:06:12.880 | Like it's hard to feel,
00:06:17.120 | it's hard to introspect and really acknowledge
00:06:22.120 | how much has changed in just a couple of years
00:06:24.200 | or five years or 10 years with the internet.
00:06:27.440 | If we just look at advancements of AI
00:06:29.840 | or even just social media,
00:06:31.880 | all the various technologies
00:06:33.880 | that go under the digital umbrella.
00:06:35.680 | It feels pretty calm and normal and gradual.
00:06:39.640 | - A lot of stuff, I think there are parts of the world,
00:06:42.160 | most of the world is not exponential.
00:06:44.440 | The way humans learn, the way organizations change,
00:06:49.400 | the way our whole institutions adapt and evolve,
00:06:52.200 | those don't improve at exponential paces.
00:06:54.720 | And that leads to a mismatch oftentimes
00:06:56.520 | between these exponentially improving technologies
00:06:59.120 | or let's say changing technologies
00:07:00.640 | 'cause some of them are exponentially more dangerous
00:07:03.240 | and our intuitions and our human skills
00:07:06.880 | and our institutions that just don't change very fast at all
00:07:11.880 | and that mismatch I think is at the root
00:07:13.920 | of a lot of the problems in our society,
00:07:15.960 | the growing inequality and other dysfunctions
00:07:20.960 | in our political and economic systems.
00:07:24.400 | - So one guy that talks about exponential functions
00:07:28.400 | a lot is Elon Musk.
00:07:29.640 | He seems to internalize this kind of way
00:07:33.000 | of exponential thinking.
00:07:34.840 | He calls it first principles thinking,
00:07:37.400 | sort of the kind of going to the basics,
00:07:40.600 | asking the question,
00:07:42.480 | like what were the assumptions of the past?
00:07:44.840 | How can we throw them out the window?
00:07:47.760 | How can we do this 10X much more efficiently
00:07:50.280 | and constantly practicing that process?
00:07:52.360 | And also using that kind of thinking to estimate
00:07:56.600 | create deadlines and estimate when you'll be able to deliver
00:08:04.720 | on some of these technologies.
00:08:07.160 | Now, it often gets them in trouble
00:08:10.200 | because he overestimates,
00:08:13.200 | like he doesn't meet the initial estimates of the deadlines
00:08:18.040 | but he seems to deliver late but deliver.
00:08:23.040 | - Right.
00:08:23.920 | - And which is kind of interesting.
00:08:25.040 | Like what are your thoughts about this whole thing?
00:08:26.880 | - I think we can all learn from Elon.
00:08:28.840 | I think going to first principles,
00:08:30.120 | I talked about two ways of getting more of a grip
00:08:32.840 | on the exponential function.
00:08:34.560 | And one of them just comes from first principles.
00:08:36.480 | If you understand the math of it,
00:08:37.800 | you can see what's gonna happen.
00:08:39.040 | And even if it seems counterintuitive
00:08:41.040 | that a couple of dozen of COVID cases
00:08:42.960 | could become thousands or tens or hundreds of thousands
00:08:46.200 | of them in a month,
00:08:48.080 | it makes sense once you just do the math.
00:08:50.360 | And I think Elon tries to do that a lot.
00:08:53.680 | You know, in fairness, I think he also benefits
00:08:55.120 | from hanging out in Silicon Valley
00:08:56.960 | and he's experienced it in a lot of different applications.
00:09:00.600 | So, you know, it's not as much of a shock to him anymore
00:09:04.080 | but that's something we can all learn from.
00:09:07.160 | In my own life, I remember one of my first experiences
00:09:10.440 | really seeing it was when I was a grad student
00:09:12.960 | and my advisor asked me to plot the growth of computer power
00:09:17.560 | in the US economy in different industries.
00:09:20.000 | And there are all these exponentially growing curves.
00:09:23.000 | And I was like, holy shit, look at this.
00:09:24.560 | In each industry, it was just taking off.
00:09:26.880 | And you didn't have to be a rocket scientist
00:09:29.240 | to extend that and say, wow, this means that
00:09:31.880 | this was in the late '80s and early '90s
00:09:33.600 | that if it goes anything like that,
00:09:35.880 | we're gonna have orders of magnitude more computer power
00:09:38.160 | than we did at that time.
00:09:39.440 | And of course we do.
00:09:41.320 | - So, you know, when people look at Moore's Law,
00:09:43.720 | they often talk about it as just,
00:09:46.840 | so the exponential function is actually a stack of S-curves.
00:09:51.360 | So basically, it's you milk or whatever,
00:09:56.360 | take the most advantage of a particular little revolution
00:10:01.240 | and then you search for another revolution.
00:10:03.000 | And it's basically-- - Yes.
00:10:04.400 | - Revolutions stack on top of revolutions.
00:10:06.720 | Do you have any intuition about how the heck humans
00:10:08.960 | keep finding ways to revolutionize things?
00:10:12.280 | - Well, first, let me just unpack that first point
00:10:14.200 | that I talked about exponential curves,
00:10:17.160 | but no exponential curve continues forever.
00:10:20.280 | It's been said that if anything can't go on forever,
00:10:24.960 | eventually it will stop. (laughs)
00:10:26.680 | And-- - That's very profound.
00:10:28.120 | - It's very profound, but it seems that a lot of people
00:10:31.920 | don't appreciate that half of it as well either.
00:10:33.960 | And that's why all exponential functions
00:10:35.800 | eventually turn into some kind of S-curve
00:10:38.240 | or stop in some other way, maybe catastrophically.
00:10:41.240 | And that's happened with COVID as well.
00:10:42.800 | I mean, it was, it went up and then it sort of,
00:10:44.600 | at some point it starts saturating the pool
00:10:47.160 | of people to be infected.
00:10:49.040 | There's a standard epidemiological model
00:10:51.320 | that's based on that.
00:10:52.960 | And it's beginning to happen with Moore's Law
00:10:55.040 | or different generations of computer power.
00:10:56.920 | It happens with all exponential curves.
00:10:59.280 | The remarkable thing, as you allude,
00:11:01.000 | in the second part of your question,
00:11:02.400 | is that we've been able to come up with a new S-curve
00:11:05.320 | on top of the previous one and do that
00:11:07.440 | generation after generation with new materials,
00:11:10.680 | new processes, and just extend it further and further.
00:11:14.560 | I don't think anyone has a really good theory
00:11:17.440 | about why we've been so successful in doing that.
00:11:21.440 | It's great that we have been,
00:11:23.200 | and I hope it continues for some time,
00:11:26.440 | but it's, you know, one beginning of a theory
00:11:31.520 | is that there's huge incentives
00:11:33.680 | when other parts of the system are going on that clock speed
00:11:36.920 | of doubling every two to three years.
00:11:39.320 | If there's one component of it that's not keeping up,
00:11:42.160 | then the economic incentives become really large
00:11:44.840 | to improve that one part.
00:11:46.240 | It becomes a bottleneck,
00:11:47.520 | and anyone who can do improvements in that part
00:11:50.280 | can reap huge returns so that the resources
00:11:52.680 | automatically get focused on whatever part
00:11:54.800 | of the system isn't keeping up.
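
As a rough numerical sketch of the "stack of S-curves" idea, the snippet below sums a few logistic (S-shaped) curves whose ceilings each rise tenfold; all parameters are invented purely for illustration, and epidemic curves saturate in a similar S-shaped way:

```python
import math

def logistic(t, midpoint, ceiling, rate=1.0):
    """One S-curve: near-exponential growth at first, then saturation at `ceiling`."""
    return ceiling / (1.0 + math.exp(-rate * (t - midpoint)))

# Three successive "revolutions"; each new curve has a 10x higher ceiling.
# All parameters here are arbitrary, chosen only to illustrate the stacking idea.
def stacked(t):
    return sum(logistic(t, midpoint=m, ceiling=10 ** (i + 1))
               for i, m in enumerate((5, 15, 25)))

for t in range(0, 31, 5):
    print(f"t={t:2d}  capability ~ {stacked(t):8.1f}")
```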
00:11:56.600 | - Do you think some version of the Moore's Law will continue?
00:11:59.720 | - Some version, yes.
00:12:00.640 | It is.
00:12:01.480 | I mean, one version that has become more important
00:12:04.600 | is something I call Koomey's Law,
00:12:06.280 | which is named after Jon Koomey,
00:12:08.440 | who I should mention was also my college roommate,
00:12:10.280 | but he identified the fact that energy consumption
00:12:14.360 | per computation has been declining by a factor of two.
00:12:17.280 | And for most of us, that's more important.
00:12:18.960 | The new iPhones came out today as we're recording this.
00:12:21.320 | I'm not sure when you're gonna make it available.
00:12:23.040 | - Very soon after this, yeah.
00:12:24.920 | - And for most of us, having the iPhone be twice as fast,
00:12:30.000 | that's nice, but having it the battery life longer,
00:12:33.200 | that would be much more valuable.
00:12:35.480 | And the fact that a lot of the progress in chips now
00:12:38.240 | is reducing energy consumption is probably more important
00:12:42.840 | for many applications than just the raw speed.
00:12:46.080 | Other dimensions of Moore's Law are in AI,
00:12:49.200 | machine learning.
00:12:50.200 | Those tend to be very parallelizable functions,
00:12:55.200 | especially deep neural nets.
00:12:58.480 | And so instead of having one chip,
00:13:01.320 | you can have multiple chips,
00:13:02.640 | or you can have a GPU, a graphic processing unit,
00:13:06.160 | that goes faster, and now special chips
00:13:08.520 | designed for machine learning,
00:13:09.680 | like tensor processing units.
00:13:11.280 | Each time you switch, there's another 10x
00:13:13.080 | or 100x improvement above and beyond Moore's Law.
00:13:16.800 | So I think that the raw silicon
00:13:18.920 | isn't improving as much as it used to,
00:13:20.840 | but these other dimensions are becoming important,
00:13:24.000 | more important, and we're seeing progress in them.
00:13:26.360 | - I don't know if you've seen the work by OpenAI
00:13:28.280 | where they show the exponential improvement
00:13:31.960 | of the training of neural networks,
00:13:34.400 | just literally in the techniques used.
00:13:37.000 | So that's almost like the algorithm.
00:13:40.560 | It's fascinating to think, like,
00:13:42.000 | can we actually continue figuring out
00:13:44.440 | more and more tricks on how to train networks faster?
00:13:47.040 | - The progress has been staggering.
00:13:49.560 | As you look at image recognition, as you mentioned,
00:13:51.760 | I think it's a function of at least three things
00:13:53.480 | that are coming together.
00:13:54.560 | One, we just talked about faster chips,
00:13:56.600 | not just Moore's Law, but GPUs, TPUs,
00:13:59.160 | and other technologies.
00:14:00.480 | The second is just a lot more data.
00:14:02.840 | I mean, we are awash in digital data today
00:14:05.640 | in a way we weren't 20 years ago.
00:14:08.160 | Photography, I'm old enough to remember,
00:14:10.040 | it used to be chemical, and now everything is digital.
00:14:12.840 | I took probably 50 digital photos yesterday.
00:14:16.480 | I wouldn't have done that if it was chemical,
00:14:17.960 | and we have the Internet of Things
00:14:20.720 | and all sorts of other types of data.
00:14:22.960 | When we walk around with our phone,
00:14:24.160 | it's just broadcasting a huge amount of digital data
00:14:27.280 | that can be used as training sets.
00:14:29.240 | And then last but not least,
00:14:30.840 | as they mentioned at OpenAI,
00:14:34.280 | there have been significant improvements in the techniques.
00:14:37.200 | The core idea of deep neural nets
00:14:39.280 | has been around for a few decades,
00:14:41.200 | but the advances in making it work more efficiently
00:14:44.240 | have also improved a couple of orders of magnitude or more.
00:14:48.200 | So you multiply together a hundredfold improvement
00:14:50.760 | in computer power, a hundredfold or more improvement
00:14:53.800 | in data, a hundredfold improvement
00:14:56.280 | in techniques of software and algorithms,
00:15:00.640 | and soon you're getting into millionfold improvements.
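
The compounding he describes is just multiplication of independent gains; written out (a standard back-of-the-envelope identity, not a quote from the conversation):

```latex
\underbrace{10^{2}}_{\text{compute}} \times \underbrace{10^{2}}_{\text{data}} \times \underbrace{10^{2}}_{\text{algorithms}} = 10^{6} \quad \text{(a millionfold combined improvement)}
```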
00:15:03.480 | - You know, somebody brought this up,
00:15:05.200 | this idea with GPT-3 that it's,
00:15:09.920 | so it's training in a self-supervised way
00:15:11.960 | on basically Internet data.
00:15:13.960 | And that's one of the, I've seen arguments made,
00:15:18.920 | and they seem to be pretty convincing,
00:15:21.120 | that the bottleneck there is going to be
00:15:23.400 | how much data there is on the Internet,
00:15:25.640 | which is a fascinating idea that it literally
00:15:29.160 | will just run out of human-generated data to train on.
00:15:32.920 | - I know, we're making it to the point
00:15:34.880 | where it's consumed basically all of human knowledge,
00:15:37.520 | or all digitized human knowledge, yeah.
00:15:39.160 | - And that will be the bottleneck.
00:15:40.920 | But the interesting thing with bottlenecks
00:15:44.600 | is people often use bottlenecks
00:15:47.400 | as a way to argue against exponential growth.
00:15:49.960 | They say, well, there's no way you can overcome
00:15:53.000 | this bottleneck, but we seem to somehow
00:15:55.040 | keep coming up in new ways to overcome
00:15:58.000 | whatever bottlenecks the critics come up with,
00:16:01.160 | which is fascinating.
00:16:02.000 | I don't know how you overcome the data bottleneck,
00:16:04.640 | but probably more efficient training algorithms.
00:16:07.040 | - Yeah, well, you already mentioned that,
00:16:08.240 | that these training algorithms are getting much better
00:16:10.280 | at using smaller amounts of data.
00:16:12.440 | We also are just capturing a lot more data than we used to,
00:16:15.880 | especially in China, but all around us.
00:16:18.880 | So those are both important.
00:16:20.480 | In some applications, you can simulate the data,
00:16:24.160 | video games, some of the self-driving car systems
00:16:28.920 | are simulating driving, and of course,
00:16:32.520 | that has some risks and weaknesses,
00:16:34.280 | but you can also, if you want to exhaust
00:16:38.080 | all the different ways you could beat a video game,
00:16:39.880 | you could just simulate all the options.
00:16:42.320 | - Can we take a step in that direction
00:16:43.640 | of autonomous vehicles?
00:16:44.920 | I'm actually talking to the CTO of Waymo tomorrow,
00:16:48.640 | and obviously, I'm talking to Elon again in a couple weeks.
00:16:52.420 | What's your thoughts on autonomous vehicles?
00:16:57.280 | Where do we stand as a problem
00:17:01.840 | that has the potential of revolutionizing the world?
00:17:04.520 | - Well, I'm really excited about that,
00:17:06.800 | but it's become much clearer that the original way
00:17:09.960 | that I thought about it, most people thought about it,
00:17:11.640 | will we have a self-driving car or not, is way too simple.
00:17:15.680 | The better way to think about it
00:17:17.440 | is that there's a whole continuum
00:17:19.440 | of how much driving and assisting the car can do.
00:17:22.400 | I noticed that you're right next door
00:17:24.840 | to Toyota Research Institute.
00:17:26.080 | - That's a total accident.
00:17:27.680 | I love the TRI folks, but yeah.
00:17:29.400 | - Have you talked to Gil Pratt?
00:17:30.880 | - Yeah, we're supposed to talk.
00:17:33.840 | It's kind of hilarious.
00:17:35.040 | - So there's kind of the, I think it's a good counterpart
00:17:37.040 | to say what Elon is doing, and hopefully they can be frank
00:17:40.120 | in what they think about each other,
00:17:41.520 | 'cause I've heard both of them talk about it.
00:17:44.000 | But they're much more, you know,
00:17:45.480 | this is an assistive, a guardian angel
00:17:47.520 | that watches over you as opposed to try to do everything.
00:17:50.520 | I think there's some things like driving on a highway,
00:17:53.400 | you know, from LA to Phoenix,
00:17:55.400 | where it's mostly good weather, straight roads.
00:17:58.200 | That's close to a solved problem, let's face it.
00:18:01.320 | In other situations, you know, driving through the snow
00:18:03.760 | in Boston where the roads are kind of crazy,
00:18:06.320 | and most importantly, you have to make a lot of judgments
00:18:08.280 | about what the other driver's gonna do
00:18:09.520 | at these intersections that aren't really right angles
00:18:11.760 | and aren't very well described.
00:18:13.480 | It's more like game theory.
00:18:15.400 | That's a much harder problem
00:18:17.160 | and requires understanding human motivations.
00:18:20.640 | So there's a continuum there of some places
00:18:24.400 | where the cars will work very well,
00:18:27.600 | and others where it could probably take decades.
00:18:30.960 | - What do you think about the Waymo?
00:18:33.520 | So you mentioned two companies
00:18:36.080 | that actually have cars on the road.
00:18:38.080 | There's the Waymo approach that's more like,
00:18:40.720 | we're not going to release anything until it's perfect,
00:18:42.840 | and we're gonna be very strict
00:18:45.360 | about the streets that we travel on,
00:18:47.720 | but it'd be perfect.
00:18:49.200 | - Yeah.
00:18:50.240 | Well, I'm smart enough to be humble
00:18:53.960 | and not try to get between.
00:18:55.080 | I know there's very bright people on both sides
00:18:57.080 | of the argument.
00:18:57.920 | I've talked to them,
00:18:58.760 | and they make convincing arguments to me
00:19:00.040 | about how careful they need to be
00:19:01.720 | and the social acceptance.
00:19:04.040 | Some people thought that when the first few people died
00:19:07.480 | from self-driving cars, that would shut down the industry,
00:19:09.880 | but it was more of a blip, actually.
00:19:11.840 | So that was interesting.
00:19:14.480 | Of course, there's still a concern
00:19:16.080 | that there could be setbacks if we do this wrong.
00:19:20.360 | Your listeners may be familiar
00:19:22.640 | with the different levels of self-driving,
00:19:24.320 | level one, two, three, four, five.
00:19:26.800 | I think Andrew Ng has convinced me
00:19:28.800 | that this idea of really focusing on level four,
00:19:32.600 | where you only go in areas that are well-mapped
00:19:35.000 | rather than just going out in the wild,
00:19:37.480 | is the way things are gonna evolve.
00:19:39.720 | But you can just keep expanding those areas
00:19:42.600 | where you've mapped things really well,
00:19:44.040 | where you really understand them,
00:19:45.040 | and eventually they all become kind of interconnected.
00:19:47.960 | And that could be a kind of another way of progressing
00:19:51.160 | to make it more feasible over time.
00:19:55.280 | - I mean, that's kind of like the Waymo approach,
00:19:57.440 | which is they just now released,
00:19:59.560 | I think just like a day or two ago,
00:20:01.520 | anyone from the public in the Phoenix, Arizona,
00:20:09.960 | you can get a ride in a Waymo car with no person, no driver.
00:20:14.960 | - Oh, they've taken away the safety driver?
00:20:17.640 | - Oh yeah, for a while now, there's been no safety driver.
00:20:21.040 | - Okay, 'cause I mean,
00:20:21.880 | I've been following that one in particular,
00:20:23.200 | but I thought it was kind of funny about a year ago
00:20:25.880 | when they had the safety driver,
00:20:26.960 | and then they added a second safety driver
00:20:28.400 | 'cause the first safety driver would fall asleep.
00:20:30.800 | That's like, I'm not sure they're going
00:20:32.120 | in the right direction with that.
00:20:33.360 | - No, they've Waymo in particular
00:20:38.160 | have done a really good job of that.
00:20:39.520 | They actually have a very interesting infrastructure
00:20:44.400 | of remote observation.
00:20:47.520 | So they're not controlling the vehicles remotely,
00:20:49.880 | but they're able to, it's like a customer service.
00:20:52.480 | They can anytime tune into the car.
00:20:55.600 | I bet they can probably remotely control it as well,
00:20:58.200 | but that's officially not the function that they-
00:21:00.920 | - Yeah, I can see that being really,
00:21:02.800 | because I think the thing that's proven harder
00:21:06.320 | than maybe some of the early people expected
00:21:08.080 | is there's a long tail of weird exceptions.
00:21:10.880 | So you can deal with 90, 99, 99.99% of the cases,
00:21:15.440 | but then there's something that just never been seen before
00:21:17.720 | in the training data.
00:21:18.880 | And humans, more or less can work around that,
00:21:21.120 | although let me be clear and note,
00:21:22.880 | there are about 30,000 human fatalities
00:21:25.680 | just in the United States and maybe a million worldwide.
00:21:28.400 | So they're far from perfect.
00:21:30.040 | But I think people have higher expectations of machines.
00:21:33.480 | They wouldn't tolerate that level of death
00:21:36.320 | and damage from a machine.
00:21:40.000 | And so we have to do a lot better
00:21:41.800 | at dealing with those edge cases.
00:21:43.480 | - And also the tricky thing that,
00:21:45.840 | if I have a criticism for the Waymo folks,
00:21:47.880 | there's such a huge focus on safety
00:21:50.520 | where people don't talk enough about creating products
00:21:55.160 | that people, that customers love,
00:21:57.240 | that human beings love using.
00:22:00.680 | It's very easy to create a thing that's safe
00:22:03.320 | at the extremes,
00:22:04.960 | but then nobody wants to get into it.
00:22:06.880 | - Yeah.
00:22:07.720 | Well, back to Elon,
00:22:08.760 | I think one of, part of his genius
00:22:10.520 | was with the electric cars.
00:22:11.440 | Before he came along, electric cars were all kind of
00:22:14.000 | underpowered, really light,
00:22:15.680 | and they were sort of wimpy cars that weren't fun.
00:22:20.680 | And the first thing he did was,
00:22:22.600 | he made a Roadster that went zero to 60 faster
00:22:25.640 | than just about any other car and went the other end.
00:22:28.480 | And I think that was a really wise marketing move
00:22:30.800 | as well as a wise technology move.
00:22:33.160 | - Yeah.
00:22:34.000 | It's difficult to figure out what the right marketing move
00:22:36.400 | is for AI systems.
00:22:37.960 | That's always been,
00:22:38.960 | I think it requires guts and risk-taking,
00:22:43.040 | which is what Elon practices.
00:22:46.360 | I mean, to the chagrin of perhaps investors or whatever,
00:22:51.400 | - It requires guts and risk-taking.
00:22:52.240 | It also requires, you know, rethinking what you're doing.
00:22:54.360 | I think way too many people are unimaginative,
00:22:57.560 | intellectually lazy.
00:22:58.560 | And when they take AI, they basically say,
00:23:00.560 | "What are we doing now?
00:23:01.680 | How can we make a machine do the same thing?
00:23:04.040 | Maybe we'll save some costs, we'll have less labor."
00:23:06.760 | And yeah, you know,
00:23:07.600 | it's not necessarily the worst thing in the world to do,
00:23:09.480 | but it's really not leading to a quantum change
00:23:11.680 | in the way you do things.
00:23:12.880 | You know, when Jeff Bezos said,
00:23:15.240 | "Hey, we're gonna use the internet
00:23:16.600 | to change how bookstores work,
00:23:18.080 | and we're gonna use technology."
00:23:19.320 | He didn't go and say,
00:23:20.160 | "Okay, let's put a robot cashier where the human cashier is
00:23:24.000 | and leave everything else alone."
00:23:25.640 | That would have been a very lame way
00:23:27.040 | to automate a bookstore.
00:23:28.320 | He's like, went from soup to nuts,
00:23:29.840 | said, "Let's just rethink it.
00:23:31.400 | We get rid of the physical bookstore.
00:23:33.000 | We have a warehouse, we have delivery,
00:23:34.520 | we have people order on a screen."
00:23:36.360 | And everything was reinvented.
00:23:38.440 | And that's been the story
00:23:39.680 | of these general purpose technologies all through history.
00:23:43.560 | And in my books, I write about like electricity
00:23:46.280 | and how for 30 years,
00:23:48.600 | there was almost no productivity gain
00:23:50.320 | from the electrification of factories a century ago.
00:23:53.000 | Now, it's not because electricity
00:23:54.120 | is a wimpy, useless technology.
00:23:55.800 | We all know how awesome electricity is.
00:23:57.560 | It's 'cause at first,
00:23:58.400 | they really didn't rethink the factories.
00:24:00.560 | It was only after they reinvented them,
00:24:02.160 | and we describe how in the book,
00:24:04.040 | then you suddenly got a doubling
00:24:05.440 | and tripling of productivity growth.
00:24:07.560 | But it's the combination of the technology
00:24:09.920 | with the new business models, new business organization.
00:24:12.960 | That just takes a long time
00:24:14.000 | and it takes more creativity than most people have.
00:24:16.920 | - Can you maybe linger on electricity?
00:24:19.000 | 'Cause that's the fun one.
00:24:20.040 | - Yeah, well, sure, I'll tell you what happened.
00:24:22.480 | Before electricity, there were basically steam engines
00:24:25.760 | or sometimes water wheels.
00:24:27.040 | And to power the machinery,
00:24:28.360 | you had to have pulleys and crankshafts.
00:24:30.560 | And you really can't make them too long
00:24:32.120 | 'cause they'll break the torsion.
00:24:34.040 | So all the equipment was kind of clustered
00:24:35.960 | around this one giant steam engine.
00:24:37.920 | You can't make small steam engines either
00:24:39.480 | 'cause of thermodynamics.
00:24:40.800 | So you have one giant steam engine,
00:24:42.360 | all the equipment clustered around it, multi-story.
00:24:44.320 | They'd have it vertical to minimize the distance
00:24:46.080 | as well as horizontal.
00:24:47.760 | And then when they did electricity,
00:24:48.960 | they took out the steam engine,
00:24:50.080 | they got the biggest electric motor they could buy
00:24:51.800 | from General Electric or someone like that.
00:24:54.200 | And nothing much else changed.
00:24:57.400 | It took until a generation of managers retired or died,
00:25:01.520 | 30 years later, that people started thinking,
00:25:03.920 | "Wait, we don't have to do it that way.
00:25:05.320 | "You can make electric motors, big, small, medium.
00:25:09.000 | "You can put one with each piece of equipment."
00:25:11.040 | There was this big debate,
00:25:11.880 | if you read the management literature
00:25:12.920 | between what they called group drive versus unit drive,
00:25:15.720 | where every machine would have its own motor.
00:25:18.520 | Well, once they did that, once they went to unit drive,
00:25:20.600 | those guys won the debate.
00:25:22.600 | Then you started having a new kind of factory,
00:25:24.640 | which is sometimes spread out over acres, single story.
00:25:29.000 | And each piece of equipment has its own motor.
00:25:31.120 | And most importantly,
00:25:31.960 | they weren't laid out based on who needed the most power.
00:25:34.600 | They were laid out based on
00:25:37.200 | what is the workflow of materials?
00:25:39.600 | Assembly line, let's have it go from this machine
00:25:41.320 | to that machine, to that machine.
00:25:42.840 | Once they rethought the factory that way,
00:25:45.640 | huge increases in productivity.
00:25:47.280 | It was just staggering.
00:25:48.120 | People like Paul David have documented this
00:25:49.680 | in their research papers.
00:25:51.800 | And I think that there's a,
00:25:54.280 | that is a lesson you see over and over.
00:25:55.920 | It happened when the steam engine changed
00:25:57.440 | manual production.
00:25:58.560 | It's happened with the computerization.
00:26:00.280 | People like Michael Hammer said,
00:26:01.520 | "Don't automate, obliterate."
00:26:03.600 | In each case, the big gains only came
00:26:07.480 | once smart entrepreneurs and managers
00:26:10.440 | basically reinvented their industries.
00:26:13.040 | I mean, one other interesting point about all that
00:26:14.680 | is that during that reinvention period,
00:26:17.920 | you often actually,
00:26:20.680 | not only you don't see productivity growth,
00:26:22.240 | you can actually see a slipping back.
00:26:24.360 | Measured productivity actually falls.
00:26:26.600 | I just wrote a paper with Chad Syverson and Daniel Rock
00:26:29.080 | called the Productivity J-Curve,
00:26:31.560 | which basically shows that in a lot of these cases,
00:26:33.880 | you have a downward dip before it goes up.
00:26:36.560 | And that downward dip is when everyone's trying to like,
00:26:39.000 | reinvent things.
00:26:40.440 | And you could say that they're creating knowledge
00:26:43.160 | and intangible assets,
00:26:44.680 | but that doesn't show up on anyone's balance sheet.
00:26:46.800 | It doesn't show up in GDP.
00:26:48.360 | So it's as if they're doing nothing.
00:26:50.120 | Like take self-driving cars, we were just talking about it.
00:26:52.520 | There have been hundreds of billions of dollars
00:26:55.080 | spent developing self-driving cars.
00:26:57.960 | And basically no chauffeur has lost his job,
00:27:01.600 | no taxi driver.
00:27:02.440 | I guess I got to check on the ones that-
00:27:03.280 | - Big J-Curve.
00:27:04.480 | - Yeah, so there's a bunch of spending
00:27:06.120 | and no real consumer benefit.
00:27:08.160 | Now, they're doing that in the belief,
00:27:11.480 | I think the justified belief,
00:27:13.280 | that they will get the upward part of the J-Curve
00:27:15.200 | and there will be some big returns,
00:27:16.960 | but in the short run, you're not seeing it.
00:27:19.360 | That's happening with a lot of other AI technologies,
00:27:21.880 | just as it happened
00:27:22.720 | with earlier general purpose technologies.
00:27:25.080 | And it's one of the reasons
00:27:25.920 | we're having relatively low productivity growth lately.
00:27:29.280 | As an economist, one of the things that disappoints me
00:27:31.440 | is that as eye popping as these technologies are,
00:27:34.440 | you and I are both excited about some things they can do.
00:27:36.920 | The economic productivity statistics are kind of dismal.
00:27:40.320 | We actually, believe it or not,
00:27:42.200 | have had lower productivity growth
00:27:44.560 | in the past about 15 years,
00:27:47.000 | than we did in the previous 15 years,
00:27:48.840 | in the '90s and early 2000s.
00:27:51.360 | And so that's not what you would have expected
00:27:53.200 | if these technologies were that much better.
00:27:55.520 | But I think we're in kind of a long J-Curve there.
00:27:59.400 | Personally, I'm optimistic.
00:28:00.560 | We'll start seeing the upward tick,
00:28:02.080 | maybe as soon as next year.
00:28:04.520 | But the past decade has been a bit disappointing
00:28:08.520 | if you thought there's a one-to-one relationship
00:28:10.000 | between cool technology and higher productivity.
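
A toy illustration of the productivity J-curve logic described above, with entirely invented numbers (the actual model is in the Brynjolfsson, Rock, and Syverson paper): a firm that diverts measured output into building an intangible asset looks less productive while investing and more productive once the asset pays off.

```python
# Toy productivity J-curve: intangible investment depresses measured output first,
# then lifts it. All numbers are made up for illustration only.
BASELINE_OUTPUT = 100.0   # measured output without any intangible project
INPUTS = 100.0            # labor and capital inputs, held fixed

for year in range(1, 11):
    investing = year <= 5
    intangible_spend = 10.0 if investing else 0.0   # builds an uncounted asset
    payoff = 0.0 if investing else 30.0             # returns once the asset exists
    measured_output = BASELINE_OUTPUT - intangible_spend + payoff
    print(f"year {year:2d}: measured productivity = {measured_output / INPUTS:.2f}")
```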
00:28:12.760 | - What would you place your biggest hope
00:28:15.240 | for productivity increases on?
00:28:17.240 | Because you kind of said at a high level AI,
00:28:19.880 | but if I were to think about
00:28:22.880 | what has been so revolutionary in the last 10 years,
00:28:28.200 | I would, 15 years, and thinking about the internet,
00:28:32.240 | I would say things like,
00:28:34.280 | hope I'm not saying anything ridiculous,
00:28:37.160 | but everything from Wikipedia to Twitter.
00:28:41.040 | So like these kind of websites,
00:28:43.600 | not so much AI,
00:28:46.080 | but I would expect to see some kind of big productivity
00:28:49.760 | increases from just the connectivity between people
00:28:54.160 | and the access to more information.
00:28:58.040 | - Yeah, well, so that's another area
00:29:00.040 | I've done quite a bit of research on actually,
00:29:01.840 | is these free goods like Wikipedia, Facebook, Twitter, Zoom.
00:29:06.800 | We're actually doing this in person,
00:29:08.120 | but almost everything else I do these days is online.
00:29:12.360 | The interesting thing about all those is
00:29:14.680 | most of them have a price of zero.
00:29:17.720 | What do you pay for Wikipedia?
00:29:19.920 | Maybe like a little bit for the electrons
00:29:21.640 | to come to your house.
00:29:22.920 | Basically zero, right?
00:29:24.040 | - Take a small pause and say, I donate to Wikipedia.
00:29:28.000 | Often you should too, 'cause that makes it--
00:29:28.840 | - It's good for you, yeah.
00:29:30.040 | So, but what does that do, mean for GDP?
00:29:32.440 | GDP is based on the price and quantity
00:29:36.080 | of all the goods, things bought and sold.
00:29:37.720 | If something has zero price,
00:29:39.520 | you know how much it contributes to GDP?
00:29:42.240 | To a first approximation, zero.
00:29:44.480 | So, these digital goods that we're getting more and more,
00:29:47.480 | we're spending more and more hours a day
00:29:50.160 | consuming stuff off of screens,
00:29:52.000 | little screens, big screens,
00:29:53.400 | that doesn't get priced into GDP.
00:29:56.080 | It's like they don't exist.
00:29:58.600 | That doesn't mean they don't create value.
00:29:59.920 | I get a lot of value from watching cat videos
00:30:03.320 | and reading Wikipedia articles and listening to podcasts,
00:30:06.100 | even if I don't pay for them.
00:30:08.400 | So, we've got a mismatch there.
00:30:10.380 | Now, in fairness, economists,
00:30:12.440 | since Simon Kuznets invented GDP and productivity,
00:30:15.260 | all those statistics back in the 1930s,
00:30:17.720 | he recognized, he in fact said,
00:30:19.660 | this is not a measure of well-being,
00:30:21.480 | this is not a measure of welfare,
00:30:23.160 | it's a measure of production.
00:30:25.080 | But almost everybody has kind of forgotten
00:30:28.960 | that he said that, and they just use it.
00:30:30.920 | It's like, how well off are we?
00:30:32.160 | What was GDP last year?
00:30:33.200 | It was 2.3% growth or whatever.
00:30:35.760 | That is how much physical production,
00:30:39.440 | but it's not the value we're getting.
00:30:42.340 | We need a new set of statistics,
00:30:43.820 | and I'm working with some colleagues,
00:30:45.260 | Avi Collis and others,
00:30:46.580 | to develop something we call GDP-B.
00:30:50.420 | GDP-B measures the benefits you get, not the cost.
00:30:55.400 | If you get benefit from Zoom or Wikipedia or Facebook,
00:31:00.340 | then that gets counted in GDP-B,
00:31:02.500 | even if you pay zero for it.
00:31:04.540 | So, back to your original point,
00:31:07.340 | I think there is a lot of gain over the past decade
00:31:10.440 | in these digital goods that doesn't show up in GDP,
00:31:15.240 | doesn't show up in productivity.
00:31:16.400 | By the way, productivity is just defined
00:31:17.880 | as GDP divided by hours worked.
00:31:20.040 | So, if you mismeasure GDP,
00:31:22.040 | you mismeasure productivity by the exact same amount.
00:31:25.300 | That's something we need to fix.
00:31:26.440 | I'm working with the statistical agencies
00:31:28.380 | to come up with a new set of metrics.
00:31:30.160 | And over the coming years, I think we'll see.
00:31:33.220 | We're not gonna do away with GDP, it's very useful,
00:31:35.200 | but we'll see a parallel set of accounts
00:31:37.200 | that measure the benefits.
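
The mismeasurement point can be written down directly (a standard identity, not a quote from the conversation): if measured GDP misses a fraction of the value actually created, measured productivity is understated by the same fraction.

```latex
\text{Productivity} = \frac{\text{GDP}}{\text{Hours worked}}, \qquad
\widehat{\text{GDP}} = (1-\varepsilon)\,\text{GDP}
\;\Longrightarrow\;
\widehat{\text{Productivity}} = (1-\varepsilon)\,\text{Productivity}
```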
00:31:38.360 | - How difficult is it to get that B in the GDP-B?
00:31:41.040 | - It's pretty hard.
00:31:41.880 | I mean, one of the reasons it hasn't been done before
00:31:44.320 | is that you can measure at the cash register
00:31:47.320 | what people pay for stuff,
00:31:48.960 | but how do you measure what they would have paid,
00:31:50.980 | like what the value is?
00:31:51.980 | That's a lot harder.
00:31:53.960 | How much is Wikipedia worth to you?
00:31:55.960 | That's what we have to answer.
00:31:57.440 | And to do that, what we do is we can use online experiments.
00:32:00.660 | We do massive online choice experiments.
00:32:03.040 | We ask hundreds of thousands, now millions of people,
00:32:05.660 | to do lots of sort of A/B tests.
00:32:07.680 | How much would I have to pay you
00:32:09.040 | to give up Wikipedia for a month?
00:32:10.800 | How much would I have to pay you to stop using your phone?
00:32:14.080 | And in some cases, it's hypothetical.
00:32:15.920 | In other cases, we actually enforce it,
00:32:17.480 | which is kind of expensive.
00:32:18.880 | Like we pay somebody $30 to stop using Facebook,
00:32:22.380 | and we see if they'll do it.
00:32:23.880 | And some people will give it up for $10.
00:32:26.240 | Some people won't give it up even if you give them $100.
00:32:28.880 | And then you get a whole demand curve.
00:32:31.040 | You get to see what all the different prices are
00:32:33.560 | and how much value different people get.
00:32:35.960 | And not surprisingly, different people have different values.
00:32:38.240 | We find that women tend to value Facebook more than men.
00:32:41.540 | Old people tend to value it a little bit more
00:32:43.200 | than young people.
00:32:44.040 | That was interesting.
00:32:44.860 | I think young people maybe know about other networks
00:32:46.600 | that I don't know the name of that are better than Facebook.
00:32:49.600 | And so you get to see these patterns,
00:32:53.460 | but every person's individual.
00:32:55.440 | And then if you add up all those numbers,
00:32:57.260 | you start getting an estimate of the value.
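
Here is a minimal sketch of how such choice-experiment answers could be aggregated; the distribution, sample size, and dollar figures are all made up and are not from the GDP-B papers:

```python
import random

random.seed(0)

# Hypothetical willingness-to-accept (WTA): the smallest monthly payment each
# person would take to give up a zero-price service. Numbers are invented.
wta = [random.lognormvariate(3.0, 1.0) for _ in range(100_000)]

# Sorting responses traces out a demand-curve-like schedule: at an offer of $p,
# everyone whose WTA is at or below p would accept and give the service up.
for offer in (10, 30, 100):
    takers = sum(1 for v in wta if v <= offer)
    print(f"offer ${offer:>3}: {takers / len(wta):5.1%} would give it up for a month")

# Adding up WTA across users gives a rough estimate of total consumer value,
# none of which appears in GDP because the price actually paid is zero.
print(f"estimated total value: ${sum(wta):,.0f} per month")
```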
00:33:00.040 | - Okay, first of all, that's brilliant.
00:33:01.260 | Is this work that will soon eventually be published?
00:33:05.740 | - Yeah, well, there's a version of it
00:33:07.020 | in the Proceedings of the National Academy of Sciences
00:33:09.520 | about, I think we call it Massive Online Choice Experiments.
00:33:11.860 | I should remember the title, but it's on my website.
00:33:14.900 | So yeah, we have some more papers coming out on it,
00:33:17.500 | but the first one is already out.
00:33:20.140 | - You know, it's kind of a fascinating mystery
00:33:22.300 | that Twitter, Facebook,
00:33:24.340 | like all these social networks are free.
00:33:26.780 | And it seems like almost none of them, except for YouTube,
00:33:31.420 | have experimented with removing ads for money.
00:33:35.180 | Can you like, do you understand that
00:33:37.140 | from both economics and the product perspective?
00:33:39.820 | - Yeah, it's something that, you know,
00:33:41.020 | so I teach a course on digital business models.
00:33:43.260 | So I used to at MIT, at Stanford, I'm not quite sure.
00:33:45.780 | I'm not teaching until next spring.
00:33:47.340 | I'm still thinking what my course is gonna be.
00:33:50.020 | But there are a lot of different business models.
00:33:52.220 | And when you have something that has zero marginal cost,
00:33:54.860 | there's a lot of forces,
00:33:56.420 | especially if there's any kind of competition
00:33:57.900 | that push prices down to zero.
00:33:59.940 | But you can have ad supported systems,
00:34:03.340 | you can bundle things together,
00:34:05.540 | you can have volunteer, you mentioned Wikipedia,
00:34:07.340 | there's donations.
00:34:08.740 | And I think economists underestimate
00:34:11.100 | the power of volunteerism and donations.
00:34:14.580 | You know, National Public Radio.
00:34:16.060 | Actually, how do you, this podcast, how is this,
00:34:18.540 | what's the revenue model?
00:34:19.460 | - There's sponsors at the beginning, and then, and people.
00:34:23.780 | The funny thing is, I tell people they can,
00:34:25.980 | it's very, I tell them the timestamp.
00:34:27.860 | So if you wanna skip the sponsors, you're free.
00:34:30.980 | But it's funny that a bunch of people,
00:34:33.580 | so I read the advertisement,
00:34:36.220 | and a bunch of people enjoy reading it.
00:34:38.420 | - Well, they may learn something from it.
00:34:39.780 | And also, from the advertiser's perspective,
00:34:42.940 | those are people who are actually interested, you know?
00:34:45.100 | Like, I mean, the example I sometimes give,
00:34:46.700 | like, I bought a car recently,
00:34:48.340 | and all of a sudden, all the car ads were like,
00:34:50.740 | interesting to me, I was paying attention.
00:34:53.180 | And then, like, now that I have the car,
00:34:54.340 | like, I sort of zone out on it.
00:34:55.820 | But that's fine, the car companies,
00:34:57.460 | they don't really wanna be advertising to me
00:34:58.940 | if I'm not gonna buy their product.
00:35:01.340 | So there are a lot of these different revenue models.
00:35:03.580 | And, you know, it's a little complicated,
00:35:06.900 | but the economic theory has to do
00:35:08.020 | with what the shape of the demand curve is,
00:35:09.500 | when it's better to monetize it with charging people,
00:35:13.180 | versus when you're better off doing advertising.
00:35:15.620 | I mean, in short, when the demand curve
00:35:18.300 | is relatively flat and wide,
00:35:20.580 | like generic news and things like that,
00:35:22.780 | then you tend to do better with advertising.
00:35:25.940 | If it's a good that's only useful
00:35:28.100 | to a small number of people,
00:35:29.260 | but they're willing to pay a lot,
00:35:30.380 | they have a very high value for it,
00:35:32.780 | then you, advertising isn't gonna work as well,
00:35:34.580 | and you're better off charging for it.
00:35:36.140 | Both of them have some inefficiencies.
00:35:38.140 | And then when you get into targeting,
00:35:39.500 | and you get into these other revenue models,
00:35:40.620 | it gets more complicated.
00:35:41.980 | But there's some economic theory on it.
00:35:45.340 | I also think, to be frank,
00:35:47.580 | there's just a lot of experimentation that's needed,
00:35:49.620 | because sometimes things are a little counterintuitive,
00:35:53.220 | especially when you get into what are called
00:35:55.180 | two-sided networks or platform effects,
00:35:57.660 | where you may grow the market on one side,
00:36:01.860 | and harvest the revenue on the other side.
00:36:04.140 | Facebook tries to get more and more users,
00:36:06.100 | and then they harvest the revenue from advertising.
00:36:08.980 | So that's another way of kind of thinking about it.
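
A toy comparison of the demand-curve argument above, with entirely invented user counts, valuations, and ad rates: a flat, wide demand curve tends to favor an ad-supported (zero-price) model, while a narrow, high-value demand curve tends to favor charging.

```python
# Toy comparison: ad-supported vs. subscription revenue for two demand shapes.
# All numbers (user counts, valuations, ad revenue per user) are invented.
AD_REVENUE_PER_USER = 2.0   # assumed dollars of ad revenue per free user

def best_subscription_revenue(valuations):
    """Best single price: charge p and keep everyone whose valuation is >= p."""
    return max(p * sum(1 for v in valuations if v >= p)
               for p in sorted(set(valuations)))

# Flat, wide demand: a million users who each value the service at only $1-$3.
wide = [1.0] * 400_000 + [2.0] * 400_000 + [3.0] * 200_000
# Narrow, tall demand: 10,000 users who value a specialist tool at $500-$1,000.
narrow = [500.0] * 8_000 + [1000.0] * 2_000

for name, demand in (("wide/flat", wide), ("narrow/tall", narrow)):
    ads = AD_REVENUE_PER_USER * len(demand)   # price of zero, everyone uses it
    subs = best_subscription_revenue(demand)
    print(f"{name:11s}: ads ${ads:,.0f}  vs  best subscription ${subs:,.0f}")
```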
00:36:12.060 | - Is it strange to you that they haven't experimented?
00:36:14.420 | - Well, they are experimenting.
00:36:15.380 | So they are doing some experiments
00:36:17.660 | about what the willingness is for people to pay.
00:36:20.420 | I think that when they do the math,
00:36:23.580 | it's gonna work out that they still are better off
00:36:26.420 | with an advertising-driven model.
00:36:28.500 | - What about a mix?
00:36:30.420 | Like this is what YouTube is, right?
00:36:32.500 | You allow the person to decide,
00:36:36.380 | the customer to decide exactly which model they prefer.
00:36:39.340 | - No, that can work really well.
00:36:40.900 | And newspapers, of course,
00:36:41.780 | have known this for a long time.
00:36:42.780 | The Wall Street Journal, The New York Times,
00:36:44.580 | they have subscription revenue,
00:36:45.860 | they also have advertising revenue.
00:36:48.060 | And that can definitely work.
00:36:52.180 | Online is a lot easier to have a dial
00:36:54.060 | that's much more personalized,
00:36:55.220 | and everybody can kind of roll their own mix.
00:36:57.700 | And I could imagine having a little slider
00:37:00.300 | about how much advertising you want or are willing to take.
00:37:05.060 | And if it's done right, and it's incentive compatible,
00:37:07.380 | it could be a win-win where both the content provider
00:37:10.940 | and the consumer are better off
00:37:12.540 | than they would have been before.
00:37:14.460 | - Yeah, you know, the done right part is a really good point.
00:37:17.940 | Like with Jeff Bezos and the single-click purchase on Amazon,
00:37:21.980 | the frictionless effort there.
00:37:23.900 | If I could just rant for a second
00:37:25.780 | about The Wall Street Journal,
00:37:27.220 | all the newspapers you mentioned,
00:37:29.260 | is I have to click so many times to subscribe to them
00:37:34.260 | that I literally don't subscribe
00:37:37.380 | just because of the number of times I have to click.
00:37:39.500 | - I'm totally with you.
00:37:40.340 | I don't understand why so many companies
00:37:43.060 | make it so hard to subscribe.
00:37:44.580 | I mean, another example is when you buy a new iPhone
00:37:47.260 | or a new computer, whatever,
00:37:48.900 | I feel like, okay, I'm gonna lose an afternoon
00:37:51.420 | just loading up and getting all my stuff back.
00:37:54.260 | And for a lot of us,
00:37:56.100 | that's more of a deterrent than the price.
00:37:58.660 | And if they could make it painless,
00:38:01.860 | we'd give them a lot more money.
00:38:03.740 | So I'm hoping somebody listening is working
00:38:06.460 | on making it more painless for us to buy your products.
00:38:10.060 | - If we could just linger a little bit
00:38:12.340 | on the social network thing,
00:38:13.740 | because there's this Netflix social dilemma.
00:38:18.220 | - Yeah, no, I saw that.
00:38:19.420 | Tristan Harris and company, yeah.
00:38:22.660 | - And people's data,
00:38:27.140 | it's really sensitive,
00:38:30.420 | and social networks are at the core, arguably,
00:38:33.100 | of many of societal tension
00:38:37.460 | and some of the most important things happening in society.
00:38:39.660 | So it feels like it's important to get this right,
00:38:42.060 | both from a business model perspective
00:38:44.020 | and just like a trust perspective.
00:38:45.780 | I mean, it just still feels like,
00:38:49.900 | I know there's experimentation going on,
00:38:52.220 | it still feels like everyone is afraid
00:38:54.820 | to try different business models, like really try.
00:38:57.580 | - Well, I'm worried that people are afraid
00:38:59.660 | to try different business models.
00:39:01.300 | I'm also worried that some of the business models
00:39:03.540 | may lead them to bad choices.
00:39:06.380 | And Danny Kahneman talks about system one and system two,
00:39:11.060 | sort of like our reptilian brain
00:39:12.340 | that reacts quickly to what we see.
00:39:14.420 | See something interesting, we click on it,
00:39:16.220 | we retweet it, versus our system two,
00:39:20.860 | our frontal cortex that's supposed to be more careful
00:39:24.140 | and rational, that really doesn't make
00:39:26.300 | as many decisions as it should.
00:39:27.860 | I think there's a tendency for a lot of these
00:39:31.700 | social networks to really exploit system one,
00:39:35.380 | our quick, instant reaction.
00:39:37.740 | Make it so we just click on stuff and pass it on.
00:39:41.020 | And not really think carefully about it.
00:39:42.340 | And that system, it tends to be driven by sex, violence,
00:39:47.340 | disgust, anger, fear, these relatively primitive
00:39:52.500 | kinds of emotions.
00:39:53.860 | Maybe they're important for a lot of purposes,
00:39:55.980 | but they're not a great way to organize a society.
00:39:58.980 | And most importantly, when you think about
00:40:01.260 | this huge, amazing information infrastructure we've had
00:40:04.380 | that's connected billions of brains across the globe,
00:40:08.060 | not just we can all access information,
00:40:09.660 | we can all contribute to it and share it.
00:40:11.740 | Arguably the most important thing that that network
00:40:14.820 | should do is favor truth over falsehoods.
00:40:19.380 | And the way it's been designed,
00:40:21.660 | not necessarily intentionally, is exactly the opposite.
00:40:24.700 | My MIT colleagues, Sinan Aral and Deb Roy, and others at MIT
00:40:29.460 | did a terrific paper on the cover of Science,
00:40:31.780 | where they documented what we all feared,
00:40:33.500 | which is that lies spread faster than truth
00:40:37.780 | on social networks, they looked at a bunch of tweets
00:40:41.540 | and retweets, and they found that false information
00:40:44.580 | was more likely to spread further, faster to more people.
00:40:49.020 | And why was that?
00:40:49.980 | It's not because people like lies.
00:40:53.420 | It's because people like things that are shocking, amazing.
00:40:56.580 | Can you believe this?
00:40:57.900 | Something that is not mundane.
00:41:00.340 | Not that something everybody else already knew.
00:41:02.500 | And what are the most unbelievable things?
00:41:05.420 | Well, lies.
00:41:07.380 | And so if you wanna find something unbelievable,
00:41:09.860 | it's a lot easier to do that
00:41:10.700 | if you're not constrained by the truth.
00:41:12.460 | So they found that the emotional valence
00:41:15.700 | of false information was just much higher.
00:41:18.020 | It was more likely to be shocking,
00:41:19.700 | and therefore more likely to be spread.
00:41:21.660 | Another interesting thing was that
00:41:24.020 | that wasn't necessarily driven by the algorithms.
00:41:26.620 | I know that there is some evidence,
00:41:29.460 | you know, Zeynep Tufekci and others have pointed out
00:41:31.580 | in YouTube, some of the algorithms unintentionally
00:41:34.180 | were tuned to amplify more extremist content.
00:41:37.900 | But in the study of Twitter that Sinan and Deb
00:41:41.140 | and others did, they found that even if you took out
00:41:43.780 | all the bots and all the automated tweets,
00:41:47.860 | you still had lies spreading significantly faster.
00:41:50.740 | It's just a problem with ourselves,
00:41:52.580 | that we just can't resist passing on this salacious content.
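(To make the comparison he's describing concrete, here is a minimal illustrative sketch, not the paper's actual methodology or data: given retweet cascades labeled true or false, compare how far and how fast each kind spreads. The records and numbers below are made-up placeholders.)

```python
# Toy sketch of comparing spread of fact-checked-true vs fact-checked-false items.
# The cascade records and numbers are hypothetical placeholders, not real data.
from statistics import mean

cascades = [
    {"label": "false", "unique_users": 12000, "hours_to_1000_retweets": 4.0},
    {"label": "false", "unique_users": 8000,  "hours_to_1000_retweets": 6.5},
    {"label": "true",  "unique_users": 1500,  "hours_to_1000_retweets": 20.0},
    {"label": "true",  "unique_users": 900,   "hours_to_1000_retweets": 35.0},
]

def summarize(label):
    group = [c for c in cascades if c["label"] == label]
    return {
        "avg_reach": mean(c["unique_users"] for c in group),
        "avg_hours_to_1000": mean(c["hours_to_1000_retweets"] for c in group),
    }

# In this toy data, false items reach more people, faster -- the pattern described above.
print("false:", summarize("false"))
print("true: ", summarize("true"))
```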
00:41:57.060 | But I also blame the platforms because, you know,
00:42:01.260 | there's different ways you can design a platform.
00:42:03.140 | You can design a platform in a way that makes it easy
00:42:06.340 | to spread lies and to retweet and spread things on,
00:42:09.540 | or you can kind of put some friction on that
00:42:11.540 | and try to favor truth.
00:42:13.980 | I had dinner with Jimmy Wales once,
00:42:15.380 | you know, the guy who helped found Wikipedia.
00:42:19.740 | And he convinced me that, look, you know,
00:42:22.580 | you can make some design choices,
00:42:24.500 | whether it's at Facebook, at Twitter,
00:42:26.380 | at Wikipedia or Reddit, whatever,
00:42:29.260 | and depending on how you make those choices,
00:42:32.340 | you're more likely or less likely to have false news.
00:42:35.060 | - Create a little bit of friction, like you said.
00:42:37.140 | - Yeah.
00:42:37.980 | - You know, that's the--
00:42:38.820 | - It could be friction or it could be speeding the truth,
00:42:41.300 | you know, either way, but, and I don't totally understand--
00:42:44.420 | - Speeding the truth, I love it.
00:42:45.460 | - Yeah, yeah, you know, amplifying it
00:42:47.820 | and giving it more credit and, you know,
00:42:49.460 | like in academia, which is far, far from perfect,
00:42:52.500 | but, you know, when someone has an important discovery,
00:42:55.620 | it tends to get more cited and people kind of look to it
00:42:57.860 | more and sort of, it tends to get amplified a little bit.
00:43:00.740 | So you could try to do that too.
00:43:03.300 | I don't know what the silver bullet is,
00:43:04.660 | but the meta point is that if we spend time
00:43:07.420 | thinking about it, we can amplify truth over falsehoods.
00:43:10.780 | And I'm disappointed in the heads of these social networks
00:43:14.860 | that they haven't been as successful
00:43:16.660 | or maybe haven't tried as hard to amplify truth.
00:43:19.500 | And part of it, going back to what we said earlier,
00:43:21.540 | is, you know, these revenue models may push them
00:43:25.100 | more towards growing fast, spreading information rapidly,
00:43:29.860 | getting lots of users,
00:43:31.460 | which isn't the same thing as finding truth.
00:43:34.580 | - Yeah, I mean, implicit in what you're saying now
00:43:38.860 | is a hopeful message that with platforms,
00:43:42.260 | we can take a step towards greater and greater
00:43:47.260 | popularity of truth.
00:43:51.160 | But the more cynical view is that
00:43:54.060 | what the last few years have revealed
00:43:56.860 | is that there's a lot of money to be made
00:43:59.060 | in dismantling even the idea of truth,
00:44:03.140 | that nothing is true.
00:44:05.020 | And as a thought experiment, I've been thinking about
00:44:08.660 | whether it's possible that in our future,
00:44:11.260 | the idea of truth is something we won't even have.
00:44:14.420 | Do you think it's possible, like in the future,
00:44:17.820 | that everything is on the table in terms of truth
00:44:21.020 | and we're just swimming in this kind of digital economy
00:44:24.900 | where ideas are just little toys
00:44:29.900 | that are not at all connected to reality?
00:44:33.220 | - Yeah, I think that's definitely possible.
00:44:35.920 | I'm not a technological determinist,
00:44:38.140 | so I don't think that's inevitable.
00:44:40.460 | I don't think it's inevitable that it doesn't happen.
00:44:42.460 | I mean, the thing that I've come away with
00:44:44.100 | every time I do these studies,
00:44:45.460 | and I emphasize it in my books and elsewhere,
00:44:47.360 | is that technology doesn't shape our destiny,
00:44:50.180 | we shape our destiny.
00:44:51.820 | So just by us having this conversation,
00:44:54.740 | I hope that your audience is gonna take it upon themselves
00:44:58.540 | as they design their products, as they think about how
00:45:00.580 | they use products, as they manage companies,
00:45:02.860 | how can they make conscious decisions
00:45:05.380 | to favor truth over falsehoods,
00:45:08.940 | favor the better kinds of societies,
00:45:10.980 | and not abdicate and say, well, we just build the tools.
00:45:13.820 | I think there was a saying that,
00:45:15.660 | was it the German scientists,
00:45:18.380 | when they were working on the missiles in late World War II,
00:45:23.100 | they said, well, our job is to make the missiles go up,
00:45:25.780 | where they come down, that's someone else's department.
00:45:28.760 | And I think it's obvious
00:45:31.820 | that's not the right attitude that technologists,
00:45:33.940 | that engineers, should have.
00:45:35.660 | They should be very conscious
00:45:36.860 | about what the implications are.
00:45:38.780 | And if we think carefully about it,
00:45:40.580 | we can avoid the kind of world that you just described
00:45:42.860 | where the truth is all relative.
00:45:45.020 | There are going to be people who benefit
00:45:47.100 | from a world where people don't check facts
00:45:51.260 | and where truth is relative and popularity or fame or money
00:45:56.260 | is orthogonal to truth.
00:45:59.840 | But one of the reasons I suspect
00:46:01.860 | that we've had so much progress over the past few hundred
00:46:04.500 | years is the invention of the scientific method,
00:46:07.560 | which is a really powerful tool or meta tool
00:46:10.140 | for finding truth and favoring things that are true
00:46:15.140 | versus things that are false.
00:46:16.560 | If they don't pass the scientific method,
00:46:18.540 | they're less likely to be true.
00:46:20.620 | And the societies and the people
00:46:25.540 | and the organizations that embrace that
00:46:27.740 | have done a lot better than the ones who haven't.
00:46:30.500 | And so I'm hoping that people keep that in mind
00:46:32.780 | and continue to try to embrace not just the truth,
00:46:35.420 | but methods that lead to the truth.
00:46:37.600 | - So maybe on a more personal question,
00:46:40.480 | if one were to try to build a competitor to Twitter,
00:46:45.700 | what would you advise?
00:46:53.340 | Is there, I mean, the bigger meta question:
00:46:53.340 | is that the right way to improve systems?
00:46:55.680 | - Yeah, no, I think that the underlying premise
00:46:59.380 | behind Twitter and all these networks is amazing
00:47:01.380 | that we can communicate with each other.
00:47:02.800 | And I use it a lot.
00:47:03.980 | There's a sub part of Twitter called econ Twitter,
00:47:05.900 | where we economists tweet to each other
00:47:08.660 | and talk about new papers.
00:47:10.580 | Something came out in NBER,
00:47:11.980 | the National Bureau of Economic Research,
00:47:13.300 | and we share it.
00:47:14.140 | People critique it.
00:47:15.380 | I think it's been a godsend
00:47:16.860 | 'cause it's really sped up the scientific process,
00:47:20.000 | if you can call economics scientific.
00:47:21.820 | - Does it get divisive in that little--
00:47:23.540 | - Sometimes, yeah, sure.
00:47:24.460 | Sometimes it does.
00:47:25.300 | It can also be done in nasty ways,
00:47:26.980 | and there's the bad parts.
00:47:28.320 | But the good parts are great
00:47:29.660 | because you just speed up that clock speed
00:47:31.620 | of learning about things.
00:47:33.300 | Instead of like in the old, old days,
00:47:35.460 | waiting to read in a journal,
00:47:36.780 | or the not so old days when you'd see it posted
00:47:39.660 | on a website and you'd read it.
00:47:41.560 | Now on Twitter, people will distill it down,
00:47:43.960 | and it's a real art to getting to the essence of things.
00:47:47.140 | So that's been great.
00:47:49.060 | But it certainly, we all know that Twitter
00:47:52.340 | can be a cesspool of misinformation.
00:47:55.540 | And like I just said,
00:47:57.340 | unfortunately, misinformation tends to spread
00:47:59.780 | faster on Twitter than truth.
00:48:02.320 | And there are a lot of people
00:48:03.160 | who are very vulnerable to it.
00:48:04.220 | I'm sure I've been fooled at times.
00:48:05.980 | There are agents, whether from Russia
00:48:09.140 | or from political groups or others,
00:48:11.660 | that explicitly create efforts at misinformation
00:48:15.680 | and efforts at getting people to hate each other.
00:48:17.920 | Or even more important lately,
00:48:19.000 | what I've discovered is nut picking.
00:48:21.200 | You know the idea of nut picking?
00:48:22.320 | - No, what's that?
00:48:23.160 | - It's a good term.
00:48:24.320 | Nut picking is when you find like an extreme nut case
00:48:27.800 | on the other side, and then you amplify them
00:48:30.700 | and make it seem like that's typical of the other side.
00:48:34.000 | So you're not literally lying.
00:48:35.480 | You're taking some idiot, you know,
00:48:37.760 | ranting on the subway, or just, you know,
00:48:39.920 | whether they're in the KKK or Antifa or whatever,
00:48:42.820 | and normally,
00:48:44.800 | nobody would pay attention to this guy.
00:48:46.040 | Like 12 people would see him and it'd be the end.
00:48:48.100 | Instead, with video or whatever,
00:48:51.120 | you get tens of millions of people seeing it.
00:48:54.460 | And I've seen this, you know, I look at it,
00:48:56.320 | I'm like, I get angry.
00:48:57.160 | I'm like, I can't believe that person
00:48:58.280 | did such things, it's so terrible.
00:48:59.720 | Let me tell all my friends about this terrible person.
00:49:01.880 | (laughing)
00:49:02.920 | And it's a great way to generate division.
00:49:06.640 | I talked to a friend who studied
00:49:09.440 | Russian misinformation campaigns,
00:49:11.560 | and they're very clever about literally being
00:49:14.000 | on both sides of some of these debates.
00:49:15.880 | They would have some people pretend to be part of BLM,
00:49:18.640 | some people pretend to be white nationalists,
00:49:21.040 | and they would be throwing epithets at each other,
00:49:22.960 | saying crazy things at each other.
00:49:25.100 | And they're literally playing both sides of it,
00:49:26.600 | but their goal wasn't for one or the other to win.
00:49:28.600 | It was for everybody to get to be hating
00:49:30.120 | and distrusting everyone else.
00:49:32.000 | So these tools can definitely be used for that.
00:49:34.500 | And they are being used for that.
00:49:36.580 | It's been super destructive for our democracy
00:49:39.680 | and our society, and the people who run these platforms,
00:49:43.560 | I think, have a social responsibility,
00:49:46.120 | a moral and ethical personal responsibility
00:49:48.700 | to do a better job and to shut that stuff down.
00:49:51.800 | Well, I don't know if you can shut it down,
00:49:52.960 | but to design them in a way that, as I said earlier,
00:49:56.480 | favors truth over falsehoods and favors positive types
00:50:00.960 | of communication versus destructive ones.
00:50:06.080 | - And just like you said, it's also on us.
00:50:09.840 | I try to be all about love and compassion,
00:50:12.400 | empathy on Twitter.
00:50:13.240 | I mean, one of the things,
00:50:14.820 | nut picking is a fascinating term.
00:50:16.600 | One of the things that people do
00:50:18.940 | that's, I think, even more dangerous
00:50:21.800 | is nut picking applied to individual statements
00:50:26.760 | of good people.
00:50:28.420 | So basically, worst case analysis in computer science
00:50:32.160 | is taking, sometimes out of context,
00:50:35.360 | but sometimes in context,
00:50:37.060 | a statement, one statement by a person.
00:50:42.360 | Like I've been, because I've been reading
00:50:43.800 | The Rise and Fall of the Third Reich,
00:50:45.400 | I often talk about Hitler on this podcast with folks,
00:50:49.000 | and it is so easy.
00:50:50.560 | - That's really dangerous.
00:50:52.080 | - But I'm all leaning in, I'm 100%.
00:50:54.600 | 'Cause, well, it's actually a safer place
00:50:57.000 | than people realize 'cause it's history,
00:50:59.040 | and history in long form is actually very fascinating
00:51:04.160 | to think about, but I could see how that could be taken
00:51:09.160 | totally out of context, and it's very worrying.
00:51:11.360 | - Right, 'cause you think about these digital infrastructures,
00:51:12.840 | they're not just ephemeral things, they're sort of permanent.
00:51:14.840 | So anything you say, at some point,
00:51:16.560 | someone can go back and find something you said
00:51:18.200 | three years ago, perhaps jokingly, perhaps not.
00:51:21.120 | Maybe you're just wrong, and you made a, you know?
00:51:22.840 | And that becomes, they can use that to define you
00:51:25.640 | if they have ill intent.
00:51:26.880 | And we all need to be a little more forgiving.
00:51:29.120 | I mean, somewhere in my 20s, I told myself,
00:51:32.280 | I was going through all my different friends,
00:51:33.860 | and I was like, you know, every one of them
00:51:37.320 | has at least one nutty opinion.
00:51:39.360 | (both laughing)
00:51:40.200 | - Yeah, exactly.
00:51:41.040 | - And I was like, there's nobody who's completely,
00:51:43.080 | except me, of course.
00:51:43.920 | - Yeah, exactly. - But I'm sure they thought
00:51:44.840 | that about me, too.
00:51:45.680 | And so you just kinda learn to be a little bit tolerant
00:51:48.720 | that, okay, there's just, you know.
00:51:51.120 | - Yeah, I wonder where the responsibility lies there.
00:51:56.120 | I think ultimately it's about leadership.
00:51:59.680 | Like the previous president, Barack Obama,
00:52:02.760 | has been, I think, quite eloquent
00:52:06.020 | at walking this very difficult line
00:52:07.680 | of talking about cancel culture.
00:52:09.860 | But it's difficult.
00:52:10.700 | It takes skill.
00:52:12.160 | 'Cause you say the wrong thing,
00:52:13.760 | and you piss off a lot of people.
00:52:15.360 | And so you have to do it well.
00:52:17.440 | But then also the platform, the technology,
00:52:19.480 | should slow down, create friction,
00:52:23.600 | in spreading this kind of nut picking in all its forms.
00:52:26.440 | - Absolutely.
00:52:27.280 | No, and your point that we have to learn over time
00:52:29.800 | how to manage it.
00:52:30.640 | I mean, we can't put it all on the platform
00:52:31.800 | and say, you guys design it.
00:52:33.200 | 'Cause if we're idiots about using it,
00:52:35.200 | nobody can design a platform that withstands that.
00:52:37.840 | And every new technology people learn, it's dangerous.
00:52:41.720 | You know, when someone invented fire,
00:52:43.960 | it's great cooking and everything,
00:52:44.960 | but then somebody burned himself.
00:52:46.160 | And then you had to learn how to avoid,
00:52:48.200 | maybe somebody invented a fire extinguisher later.
00:52:50.640 | So you kinda figure out ways
00:52:52.840 | of working around these technologies.
00:52:54.680 | Someone invented seat belts, et cetera.
00:52:57.440 | And that's certainly true
00:52:58.640 | with all the new digital technologies
00:53:00.620 | that we have to figure out,
00:53:02.360 | not just technologies that protect us,
00:53:05.320 | but ways of using them
00:53:08.680 | that are more likely to be successful than dangerous.
00:53:11.560 | - So you've written quite a bit
00:53:12.600 | about how artificial intelligence might change our world.
00:53:16.080 | How do you think, if we look forward,
00:53:21.280 | again, it's impossible to predict the future,
00:53:23.240 | but if we look at trends from the past
00:53:26.480 | and we try to predict what's gonna happen
00:53:28.240 | in the rest of the 21st century,
00:53:29.760 | how do you think AI will change our world?
00:53:31.860 | - That's a big question.
00:53:34.240 | You know, I'm mostly a techno-optimist.
00:53:37.480 | I'm not at the extreme, you know,
00:53:38.680 | the "Singularity Is Near" end of the spectrum,
00:53:41.120 | but I do think that we're likely in
00:53:44.600 | for some significantly improved living standards,
00:53:47.520 | some really important progress,
00:53:49.280 | even just the technologies that are already
00:53:50.840 | kinda like in the can that haven't diffused.
00:53:53.120 | You know, when I talked earlier about the J-curve,
00:53:54.920 | it could take 10, 20, 30 years for an existing technology
00:53:58.800 | to have the kind of profound effects.
00:54:00.800 | And when I look at whether it's, you know,
00:54:03.800 | vision systems, voice recognition, problem-solving systems,
00:54:07.880 | even if nothing new got invented,
00:54:09.440 | we would have a few decades of progress.
00:54:11.840 | So I'm excited about that.
00:54:13.480 | And I think that's gonna lead to us being wealthier,
00:54:16.880 | healthier, I mean, the healthcare is probably
00:54:18.800 | one of the applications I'm most excited about.
00:54:21.380 | So that's good news.
00:54:23.780 | I don't think we're gonna have the end of work anytime soon.
00:54:27.640 | There's just too many things that machines still can't do.
00:54:31.000 | When I look around the world and think of whether
00:54:32.880 | it's childcare or healthcare, cleaning the environment,
00:54:36.260 | interacting with people, scientific work,
00:54:39.080 | artistic creativity, these are things that for now,
00:54:42.600 | machines aren't able to do nearly as well as humans,
00:54:45.680 | even just something as mundane as, you know,
00:54:47.200 | folding laundry or whatever.
00:54:48.760 | And many of these, I think, are gonna be years
00:54:52.400 | or decades before machines catch up.
00:54:54.760 | You know, I may be surprised on some of them,
00:54:56.160 | but overall, I think there's plenty of work
00:54:58.800 | for humans to do.
00:54:59.780 | There's plenty of problems in society
00:55:01.360 | that need the human touch.
00:55:02.600 | So we'll have to repurpose.
00:55:04.200 | We'll have to, as machines are able to do some tasks,
00:55:07.880 | people are gonna have to reskill and move into other areas.
00:55:11.080 | And that's probably what's gonna be going on
00:55:12.760 | for the next, you know, 10, 20, 30 years or more,
00:55:16.280 | kind of big restructuring of society.
00:55:18.960 | We'll get wealthier and people will have to do new skills.
00:55:22.440 | Now, if you turn the dial further, I don't know,
00:55:24.400 | 50 or 100 years into the future,
00:55:26.960 | then, you know, maybe all bets are off.
00:55:29.640 | Then it's possible that machines will be able to do
00:55:32.880 | most of what people do.
00:55:34.240 | You know, say 100 or 200 years, I think it's even likely.
00:55:37.360 | And at that point, then we're more in the sort of
00:55:39.880 | abundance economy.
00:55:41.040 | Then we're in a world where there's really little
00:55:44.040 | that humans can do economically better than machines,
00:55:48.000 | other than be human.
00:55:49.900 | And, you know, that will take a transition as well,
00:55:53.640 | kind of more of a transition of how we get meaning in life
00:55:56.520 | and what our values are.
00:55:58.240 | But shame on us if we screw that up.
00:56:00.440 | I mean, that should be like great, great news.
00:56:02.760 | And it kind of saddens me that some people see that
00:56:04.560 | as like a big problem.
00:56:05.560 | I think it should be wonderful if people have all the health
00:56:09.120 | and material things that they need
00:56:11.080 | and can focus on loving each other
00:56:14.200 | and discussing philosophy and playing
00:56:16.840 | and doing all the other things that don't require work.
00:56:19.480 | - Do you think you'll be surprised to see what the world looks like,
00:56:23.480 | like if we were to travel in time,
00:56:25.400 | a hundred years into the future,
00:56:27.440 | do you think you'll be able to,
00:56:29.600 | like if I gave you a month to like talk to people,
00:56:32.320 | no, like let's say a week,
00:56:34.160 | would you be able to understand what the hell is going on?
00:56:37.800 | - You mean if I was there for a week?
00:56:39.220 | - Yeah, if you were there for a week.
00:56:40.880 | - A hundred years in the future?
00:56:42.160 | - Yeah.
00:56:43.040 | So like, so I'll give you one thought experiment is like,
00:56:46.640 | isn't it possible that we're all living
00:56:48.760 | in virtual reality by then?
00:56:50.360 | - Yeah, no, I think that's very possible.
00:56:52.480 | You know, I've played around with some of those VR headsets
00:56:54.680 | and they're not great, but I mean,
00:56:57.000 | the average person spends many waking hours
00:57:01.000 | staring at screens right now.
00:57:03.200 | You know, they're kind of low res
00:57:04.440 | compared to what they could be in 30 or 50 years,
00:57:07.720 | but certainly games and why not,
00:57:11.740 | any other interactions could be done with VR
00:57:15.400 | and that would be a pretty different world
00:57:16.360 | and we'd all, you know, in some ways be as rich as we wanted
00:57:19.400 | you know, we could have castles
00:57:20.680 | we could be traveling anywhere we want
00:57:22.560 | and it could obviously be multisensory.
00:57:25.960 | So that would be possible.
00:57:27.920 | And you know, there's people, you know,
00:57:29.980 | you've had Elon Musk on and others, you know,
00:57:32.720 | there are people, Nick Bostrom, you know,
00:57:34.120 | makes the simulation argument that maybe we're already there.
00:57:36.880 | - We're already there.
00:57:37.720 | So, but in general, do you ever think about it
00:57:41.200 | in this kind of way, self-critically thinking,
00:57:45.080 | how good are you as an economist
00:57:47.920 | at predicting what the future looks like?
00:57:50.360 | - Well, it starts getting, I mean,
00:57:52.000 | I feel reasonably comfortable next, you know,
00:57:54.160 | five, 10, 20 years in terms of that path.
00:57:58.720 | When you start getting truly superhuman
00:58:01.720 | artificial intelligence, kind of by definition,
00:58:06.000 | I'll be able to think of a lot of things
00:58:07.080 | that I couldn't have thought of
00:58:08.280 | and create a world that I couldn't even imagine.
00:58:10.980 | And so I'm not sure I can predict
00:58:14.680 | what that world is going to be like.
00:58:16.520 | One thing that AI researchers,
00:58:18.840 | AI safety researchers worry about
00:58:20.400 | is what's called the alignment problem.
00:58:22.560 | When an AI is that powerful,
00:58:25.080 | then they can do all sorts of things.
00:58:28.000 | We really hope that their values are aligned with our values
00:58:32.440 | and it's even tricky defining what our values are.
00:58:34.480 | I mean, first off, we all have different values.
00:58:37.240 | And secondly, maybe if we were smarter,
00:58:40.440 | we would have better values.
00:58:41.640 | Like, you know, I like to think that we have better values
00:58:44.220 | than we did in 1860, or in, you know,
00:58:48.820 | the year 200 BC on a lot of dimensions,
00:58:51.280 | things that we consider barbaric today.
00:58:53.360 | And it may be that if I thought about it more deeply,
00:58:56.000 | I would also be morally evolved.
00:58:57.320 | Maybe I'd be a vegetarian or do other things
00:59:00.040 | that I do right now, that my future self
00:59:02.920 | would consider kind of immoral.
00:59:05.160 | So that's a tricky problem,
00:59:07.680 | getting the AI to do what we want,
00:59:11.040 | assuming it's even a friendly AI.
00:59:12.880 | I mean, I should probably mention
00:59:14.720 | there's a non-trivial other branch
00:59:17.040 | where we destroy ourselves, right?
00:59:18.680 | I mean, there's a lot of exponentially
00:59:21.600 | improving technologies that could be
00:59:24.320 | ferociously destructive, whether it's in nanotechnology
00:59:28.320 | or biotech and weaponized viruses, AI,
00:59:33.200 | and other things that--
00:59:34.240 | - Nuclear weapons.
00:59:35.080 | - Nuclear weapons, of course.
00:59:36.200 | - The old school technology.
00:59:37.280 | - Yeah, good old nuclear weapons
00:59:39.240 | that could be devastating or even existential
00:59:43.480 | and new things yet to be invented.
00:59:45.200 | So that's a branch that I think is pretty significant.
00:59:50.200 | And there are those who think that one of the reasons
00:59:54.240 | we haven't been contacted by other civilizations, right,
00:59:57.480 | is that once you get to a certain level
01:00:00.600 | of complexity in technology,
01:00:03.040 | there's just too many ways to go wrong.
01:00:04.600 | There's a lot of ways to blow yourself up
01:00:06.200 | and people, or I should say species,
01:00:09.640 | end up falling into one of those traps.
01:00:12.520 | The great filter.
01:00:13.600 | - The great filter.
01:00:15.000 | I mean, there's an optimistic view of that.
01:00:16.720 | If there is literally no intelligent life
01:00:19.040 | out there in the universe, or at least in our galaxy,
01:00:22.360 | that means that we've passed at least
01:00:24.880 | one of the great filters or some of the great filters
01:00:27.840 | that we survived.
01:00:30.040 | - Yeah, no, I think it's Robin Hanson, and maybe others,
01:00:32.240 | who have a good way of thinking about this,
01:00:33.920 | that if there are no other intelligent creatures out there
01:00:38.920 | that we've been able to detect,
01:00:40.640 | one possibility is that there's a filter ahead of us.
01:00:43.440 | And when you get a little more advanced,
01:00:44.800 | maybe in 100 or 1,000 or 10,000 years,
01:00:47.640 | things just get destroyed for some reason.
01:00:50.600 | The other one is the great filter's behind us.
01:00:53.040 | That'll be good, is that most planets
01:00:56.400 | don't even evolve life, or if they do evolve life,
01:00:58.960 | they don't evolve intelligent life.
01:01:00.320 | Maybe we've gotten past that,
01:01:02.080 | and so now maybe we're on the good side of the great filter.
01:01:05.720 | - So, if we sort of rewind back and look at the thing
01:01:10.520 | where we could say something a little bit more comfortably
01:01:12.800 | at five years and 10 years out,
01:01:14.560 | you've written about jobs and the impact
01:01:20.960 | on sort of our economy and the jobs
01:01:24.800 | in terms of artificial intelligence that it might have.
01:01:28.320 | It's a fascinating question of what kind of jobs are safe,
01:01:30.680 | what kind of jobs are not.
01:01:32.640 | Can you maybe speak to your intuition
01:01:34.680 | about how we should think about AI
01:01:37.800 | changing the landscape of work?
01:01:40.040 | - Sure, absolutely.
01:01:40.960 | Well, this is a really important question
01:01:42.720 | because I think we're very far
01:01:43.960 | from artificial general intelligence,
01:01:45.760 | which is AI that can just do the full breadth
01:01:48.160 | of what humans can do,
01:01:49.560 | but we do have human level or superhuman level
01:01:53.040 | narrow artificial intelligence.
01:01:56.840 | And obviously my calculator can do math
01:01:59.520 | a lot better than I can,
01:02:00.520 | and there's a lot of other things
01:02:01.360 | that machines can do better than I can.
01:02:03.200 | So, which is which?
01:02:04.480 | We actually set out to address that question.
01:02:06.880 | With Tom Mitchell, I wrote a paper
01:02:10.400 | called "What Can Machine Learning Do?"
01:02:12.200 | that was in Science.
01:02:13.440 | And we went and interviewed a whole bunch of AI experts
01:02:16.840 | and kind of synthesized what they thought
01:02:19.920 | machine learning was good at and wasn't good at.
01:02:22.200 | And we came up with what we called a rubric,
01:02:25.560 | basically a set of questions you can ask about any task
01:02:28.160 | that will tell you whether it's likely to score high or low
01:02:30.960 | on suitability for machine learning.
01:02:33.840 | And then we've applied that
01:02:34.760 | to a bunch of tasks in the economy.
01:02:36.960 | In fact, there's a data set of all the tasks
01:02:39.080 | in the US economy, believe it or not, it's called O*NET.
01:02:41.600 | The US government put it together,
01:02:43.160 | part of Bureau of Labor Statistics.
01:02:45.040 | They divide the economy into about 970 occupations,
01:02:48.720 | like bus driver, economist,
01:02:50.880 | primary school teacher, radiologist.
01:02:53.440 | And then for each one of them,
01:02:54.840 | they describe which tasks need to be done.
01:02:57.640 | Like for radiologists, there are 27 distinct tasks.
01:03:00.760 | So, we went through all those tasks
01:03:02.220 | to see whether or not a machine could do them.
01:03:05.000 | And what we found, interestingly, was--
01:03:06.720 | - Brilliant study, by the way.
01:03:07.680 | That's so awesome.
01:03:08.920 | - Yeah, thank you.
01:03:10.280 | So, what we found was that there was no occupation
01:03:13.800 | in our data set where machine learning just ran the table
01:03:16.280 | and did everything.
01:03:17.560 | And there was almost no occupation
01:03:19.040 | where machine learning didn't have
01:03:19.920 | like a significant ability to do things.
01:03:22.160 | Like take radiology, a lot of people I hear saying,
01:03:24.400 | you know, it's the end of radiology.
01:03:26.720 | And one of the 27 tasks is read medical images,
01:03:29.920 | really important one, like it's kind of a core job.
01:03:32.000 | And machines have basically gotten as good
01:03:34.680 | or better than radiologists.
01:03:35.920 | There was just an article in Nature last week,
01:03:38.400 | but they've been publishing them
01:03:39.640 | for the past few years,
01:03:40.880 | showing that machine learning can do as well as humans
01:03:46.480 | on many kinds of diagnostic imaging tasks.
01:03:49.600 | But other things radiologists do, you know,
01:03:51.120 | they sometimes administer conscious sedation,
01:03:54.440 | they sometimes do physical exams,
01:03:55.940 | they have to synthesize the results
01:03:57.320 | and explain to the other doctors or to the patients.
01:04:01.680 | In all those categories,
01:04:02.520 | machine learning isn't really up to snuff yet.
01:04:05.560 | So, that job, we're gonna see a lot of restructuring.
01:04:09.300 | Parts of the job, they'll hand over to machines,
01:04:11.400 | others, humans will do more of.
01:04:13.160 | That's been more or less the pattern for all of them.
01:04:15.080 | So, you know, to oversimplify a bit,
01:04:17.080 | we see a lot of restructuring, reorganization of work.
01:04:20.400 | And it's really gonna be a great time,
01:04:22.280 | it is a great time for smart entrepreneurs and managers
01:04:24.720 | to do that reinvention of work.
01:04:27.280 | Not gonna see mass unemployment.
01:04:29.420 | To get more specifically to your question,
01:04:33.120 | the kinds of tasks that machines tend to be good at
01:04:36.560 | are a lot of routine problem solving,
01:04:39.040 | mapping inputs X into outputs Y.
01:04:42.540 | If you have a lot of data on the Xs and the Ys,
01:04:44.860 | the inputs and the outputs,
01:04:45.700 | you can do that kind of mapping and find the relationships.
01:04:48.520 | They tend to not be very good at,
01:04:50.680 | even now, fine motor control and dexterity,
01:04:53.660 | emotional intelligence and human interactions,
01:04:58.920 | and thinking outside the box, creative work.
01:05:01.660 | If you give it a well-structured task,
01:05:03.180 | machines can be very good at it.
01:05:05.020 | But even asking the right questions, that's hard.
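(A minimal sketch of the "mapping inputs X into outputs Y" framing he describes, assuming scikit-learn is available; the synthetic data stands in for whatever labeled examples a real task would provide.)

```python
# Supervised learning as X -> Y mapping: given many labeled examples,
# fit a model that predicts the output for new inputs.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical data: X = inputs, y = observed outcomes.
X, y = make_classification(n_samples=1000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0)
model.fit(X_train, y_train)                      # learn the mapping from examples
print("held-out accuracy:", model.score(X_test, y_test))
```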
01:05:08.640 | There's a quote that Andrew McAfee and I use
01:05:10.660 | in our book, "Second Machine Age."
01:05:12.940 | Apparently, Pablo Picasso was shown an early computer
01:05:16.820 | and he came away kind of unimpressed.
01:05:18.420 | He goes, "Well, I don't see all the fusses.
01:05:20.580 | "All that does is answer questions."
01:05:22.740 | And to him, the interesting thing was asking the questions.
01:05:26.780 | - Yeah, try to replace me, GPT-3, I dare you.
01:05:31.300 | Although some people think I'm a robot.
01:05:33.220 | You have this cool plot that shows,
01:05:37.060 | I just remember where economists land,
01:05:39.680 | where I think the x-axis is the income,
01:05:43.440 | and then the y-axis is, I guess,
01:05:46.280 | aggregating the information of how replaceable the job is.
01:05:49.440 | Or I think there's an index.
01:05:50.280 | - That's the suitability for machine learning index, exactly.
01:05:52.480 | So we have all 970 occupations on that chart.
01:05:56.560 | And there's scatters in all four corners
01:05:59.240 | have some occupations, but there is a definite pattern,
01:06:02.760 | which is the lower wage occupations tend to have more tasks
01:06:05.720 | that are suitable for machine learning, like cashiers.
01:06:08.000 | I mean, anyone who's gone to a supermarket or CVS
01:06:10.440 | knows that they not only read barcodes,
01:06:12.440 | but they can recognize an apple and an orange
01:06:14.560 | and a lot of things that cashiers,
01:06:17.120 | humans used to be needed for.
01:06:18.620 | At the other end of the spectrum,
01:06:21.040 | there are some jobs like airline pilot
01:06:23.560 | that are among the highest paid in our economy,
01:06:26.640 | but also a lot of them are suitable for machine learning.
01:06:28.760 | A lot of those tasks are.
01:06:30.920 | And then, yeah, you mentioned economists.
01:06:32.520 | I couldn't help peeking at those.
01:06:33.800 | And they're paid a fair amount,
01:06:36.100 | maybe not as much as some of us think they should be.
01:06:39.120 | But they have some tasks
01:06:43.320 | that are suitable for machine learning,
01:06:44.160 | but for now, at least,
01:06:45.520 | most of the tasks that economists do
01:06:47.120 | didn't end up being in that category.
01:06:48.500 | And I should say, I didn't like create that data.
01:06:50.600 | We just took the analysis
01:06:52.440 | and that's what came out of it.
01:06:54.440 | And over time, that scatter plot will be updated
01:06:57.280 | as the technology improves.
01:06:59.920 | But it was just interesting to see the pattern there.
01:07:02.840 | And it is a little troubling in so far
01:07:05.120 | as if you just take the technology as it is today,
01:07:08.080 | it's likely to worsen income inequality
01:07:10.480 | on a lot of dimensions.
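(A toy sketch of the kind of aggregation described here: score each occupation's tasks with a rubric, average them into an occupation-level index, and pair it with wages. O*NET is real, but the occupations, scores, and wages below are hypothetical placeholders, not the authors' actual data or code.)

```python
# Toy aggregation of task-level "suitability for machine learning" (SML) scores
# into an occupation-level index. All numbers are hypothetical placeholders.
from statistics import mean

occupations = {
    "cashier":     {"annual_wage": 25_000,  "task_sml_scores": [0.9, 0.8, 0.7, 0.6]},
    "radiologist": {"annual_wage": 400_000, "task_sml_scores": [0.9, 0.3, 0.2, 0.4]},
    "economist":   {"annual_wage": 110_000, "task_sml_scores": [0.5, 0.3, 0.2]},
}

for name, occ in occupations.items():
    sml_index = mean(occ["task_sml_scores"])     # average over the occupation's tasks
    print(f"{name:12s} wage=${occ['annual_wage']:>7,}  SML index={sml_index:.2f}")
```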
01:07:12.200 | - So on this topic of the effect of AI
01:07:16.240 | on the landscape of work,
01:07:21.040 | one of the people that have been speaking about it
01:07:23.640 | in the public domain, public discourse,
01:07:25.760 | is the presidential candidate, Andrew Yang.
01:07:28.080 | - Yeah.
01:07:29.000 | - What are your thoughts about Andrew?
01:07:31.880 | What are your thoughts about UBI,
01:07:34.280 | that universal basic income,
01:07:36.680 | that he made one of the core ideas.
01:07:39.040 | By the way, he has like hundreds of ideas
01:07:40.760 | about like everything.
01:07:41.840 | - He does.
01:07:42.680 | - It's kind of interesting.
01:07:43.600 | - Yeah.
01:07:44.440 | - But what are your thoughts about him
01:07:45.360 | and what are your thoughts about UBI?
01:07:46.720 | - Let me answer the question
01:07:49.600 | about his broader approach first.
01:07:52.040 | I mean, I just love that.
01:07:52.880 | He's really thoughtful, analytical.
01:07:56.420 | I agree with his values.
01:07:58.200 | So that's awesome.
01:07:59.360 | And he read my book
01:08:00.720 | and mentions it sometimes,
01:08:02.160 | so it makes me even more excited.
01:08:03.800 | And the thing that he really made the centerpiece
01:08:07.600 | of his campaign was UBI.
01:08:09.880 | And I was originally kind of a fan of it.
01:08:13.200 | And then as I studied it more,
01:08:14.560 | I became less of a fan,
01:08:15.920 | although I'm beginning to come back a little bit.
01:08:17.360 | So let me tell you a little bit of my evolution.
01:08:19.080 | You know, as an economist,
01:08:20.120 | we start by looking at the problem
01:08:23.000 | of people not having enough income,
01:08:24.280 | and the simplest thing is,
01:08:25.200 | well, why don't we write them a check?
01:08:26.800 | Problem solved.
01:08:27.960 | But then I talked to my sociologist friends
01:08:30.360 | and they really convinced me
01:08:32.720 | that just writing a check
01:08:34.320 | doesn't really get at the core values.
01:08:36.640 | You know, Voltaire once said
01:08:38.240 | that work solves three great ills,
01:08:40.600 | boredom, vice, and need.
01:08:43.320 | And you know, you can deal with the need thing
01:08:45.440 | by writing a check,
01:08:46.600 | but people need a sense of meaning,
01:08:49.240 | they need something to do.
01:08:50.760 | And when, you know, say steel workers
01:08:54.760 | or coal miners lost their jobs
01:08:57.920 | and were just given checks,
01:09:00.360 | alcoholism, depression, divorce,
01:09:03.760 | all those social indicators,
01:09:05.000 | drug use all went way up.
01:09:06.520 | People just weren't happy
01:09:08.000 | just sitting around collecting a check.
01:09:10.440 | Maybe it's part of the way they were raised,
01:09:13.240 | maybe it's something innate in people
01:09:14.720 | that they need to feel wanted and needed.
01:09:17.200 | So it's not as simple as just writing people a check.
01:09:19.560 | You need to also give them a way
01:09:22.600 | to have a sense of purpose.
01:09:23.960 | And that was important to me.
01:09:25.400 | And the second thing is that,
01:09:27.040 | as I mentioned earlier,
01:09:28.560 | you know, we are far from the end of work.
01:09:30.800 | You know, I don't buy the idea
01:09:32.240 | that there's just like not enough work to be done.
01:09:34.160 | I see like our cities need to be cleaned up.
01:09:37.120 | And robots can't do most of that.
01:09:39.000 | You know, we need to have better childcare,
01:09:40.760 | we need better healthcare,
01:09:41.640 | we need to take care of people
01:09:43.040 | who are mentally ill or older.
01:09:44.920 | We need to repair our roads.
01:09:46.480 | There's so much work that require at least partly,
01:09:49.960 | maybe entirely a human component.
01:09:52.280 | So rather than like write all these people off,
01:09:54.680 | let's find a way to repurpose them
01:09:56.840 | and keep them engaged.
01:09:58.240 | Now that said, I would like to see more buying power
01:10:03.680 | from people who are sort of
01:10:05.720 | at the bottom end of the spectrum.
01:10:07.240 | The economy has been designed and evolved
01:10:12.160 | in a way that's I think very unfair
01:10:14.040 | to a lot of hardworking people.
01:10:15.520 | I see super hardworking people
01:10:16.920 | who aren't really seeing their wages grow
01:10:18.960 | over the past 20, 30 years,
01:10:20.680 | while some other people who have been super smart
01:10:24.000 | and or super lucky have made billions
01:10:29.000 | or hundreds of billions.
01:10:30.840 | And I don't think they need those hundreds of billions
01:10:33.720 | to have the right incentives to invent things.
01:10:35.680 | I think if you talk to almost any of them, as I have,
01:10:38.480 | they don't think that they need an extra $10 billion
01:10:42.360 | to do what they're doing.
01:10:43.480 | Most of them probably would love to do it
01:10:46.440 | for only a billion or maybe for nothing.
01:10:48.920 | - For nothing, many of them, yeah.
01:10:50.720 | - I mean, you know, an interesting point to make is,
01:10:53.440 | you know, like, do we think that Bill Gates
01:10:55.160 | would have founded Microsoft if tax rates were 70%?
01:10:58.560 | Well, we know he would have
01:10:59.480 | because tax rates were 70% when he founded it.
01:11:03.000 | You know, so I don't think that's as big a deterrent
01:11:06.040 | and we could provide more buying power to people.
01:11:08.960 | My own favorite tool is the earned income tax credit,
01:11:12.640 | which is basically a way of supplementing income
01:11:16.080 | of people who have jobs and giving employers
01:11:18.000 | an incentive to hire even more people.
01:11:20.160 | The minimum wage can discourage employment,
01:11:22.240 | but the earned income tax credit encourages employment
01:11:25.000 | by supplementing people's wages.
01:11:27.560 | You know, if the employer can only afford
01:11:29.840 | to pay him $10 for a task,
01:11:31.600 | the rest of us kick in another five or $10
01:11:35.040 | and bring their wages up to 15 or 20 total.
01:11:37.480 | And then they have more buying power.
01:11:39.200 | Then entrepreneurs are thinking,
01:11:41.000 | how can we cater to them?
01:11:42.160 | How can we make products for them?
01:11:43.920 | And it becomes a self-reinforcing system
01:11:47.040 | where people are better off.
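(A worked toy version of the arithmetic he just described: the employer pays the market wage and a credit tops it up. The 50% phase-in rate and $10 cap here are hypothetical illustration values, not the real EITC schedule.)

```python
# Toy earned-income-style wage supplement. The phase-in rate and cap are
# hypothetical illustration values, not the actual EITC parameters.
def take_home_hourly(employer_wage, phase_in_rate=0.5, max_credit=10.0):
    credit = min(employer_wage * phase_in_rate, max_credit)
    return employer_wage + credit

print(take_home_hourly(10.0))   # employer pays $10, credit adds $5 -> $15 total
print(take_home_hourly(20.0))   # employer pays $20, credit adds $10 -> $30 total
```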
01:11:48.840 | Andrew Ng and I had a good discussion
01:11:51.680 | where he suggested instead of a universal basic income,
01:11:55.760 | he suggested, or instead of an unconditional basic income,
01:11:58.880 | how about a conditional basic income
01:12:00.400 | where the condition is you learn some new skills,
01:12:02.840 | we need to reskill our workforce,
01:12:04.880 | so let's make it easier for people to find ways
01:12:08.920 | to get those skills and get rewarded for doing them.
01:12:11.120 | And that's kind of a neat idea as well.
01:12:12.880 | - That's really interesting.
01:12:13.720 | So, I mean, one of the questions,
01:12:15.960 | one of the dreams of UBI is that you provide
01:12:19.520 | some little safety net while you retrain,
01:12:24.160 | while you learn a new skill.
01:12:25.920 | But I think, I guess you're speaking to the intuition
01:12:28.960 | that that doesn't always,
01:12:31.160 | like there needs to be some incentive to reskill,
01:12:33.640 | to train, to learn a new thing.
01:12:35.120 | - Well, I think it helps.
01:12:35.960 | I mean, there are lots of self-motivated people,
01:12:37.840 | but there are also people that maybe need a little guidance
01:12:40.480 | or help.
01:12:41.320 | And I think it's a really hard question
01:12:44.840 | for someone who is losing a job in one area
01:12:47.120 | to know what is the new area I should be learning skills in.
01:12:50.480 | And we could provide a much better set of tools
01:12:52.480 | and platforms that map to,
01:12:54.360 | okay, here's a set of skills you already have.
01:12:56.280 | Here's something that's in demand.
01:12:58.000 | Let's create a path for you to go from where you are
01:13:00.320 | to where you need to be.
01:13:02.120 | - So I'm a total, how do I put it nicely about myself?
01:13:06.960 | I'm totally clueless about the economy.
01:13:09.520 | It's not totally true, but pretty good approximation.
01:13:14.320 | If you were to try to fix our tax system,
01:13:17.240 | and, or maybe from another side,
01:13:23.240 | if there are fundamental problems in taxation
01:13:26.680 | or some fundamental problems about our economy,
01:13:29.720 | what would you try to fix?
01:13:31.320 | What would you try to speak to?
01:13:33.440 | - You know, I definitely think our whole tax system,
01:13:36.320 | our political and economic system
01:13:39.240 | has gotten more and more screwed up
01:13:40.800 | over the past 20, 30 years.
01:13:43.480 | I don't think it's that hard to make headway
01:13:46.440 | in improving it.
01:13:47.320 | I don't think we need to totally reinvent stuff.
01:13:49.800 | A lot of it is what
01:13:51.000 | I've elsewhere, with Andy and others,
01:13:52.720 | called economics 101.
01:13:54.600 | You know, there's just some basic principles
01:13:56.360 | that have worked really well in the 20th century
01:14:00.600 | that we sort of forgot, you know,
01:14:01.800 | in terms of investing in education,
01:14:03.920 | investing in infrastructure, welcoming immigrants,
01:14:07.480 | having a tax system that was more progressive and fair.
01:14:13.240 | At one point, tax rates on top incomes
01:14:16.520 | were significantly higher.
01:14:18.000 | And they've come down a lot to the point where
01:14:19.840 | in many cases they're lower now
01:14:21.400 | than they are for poorer people.
01:14:23.360 | So, and we could do things like earned income tax credit
01:14:27.920 | to get a little more wonky.
01:14:29.200 | I'd like to see more Pigouvian taxes.
01:14:31.400 | What that means is you tax things that are bad
01:14:35.680 | instead of things that are good.
01:14:36.920 | So right now we tax labor, we tax capital,
01:14:42.200 | which is unfortunate,
01:14:44.040 | because one of the basic principles of economics is,
01:14:44.040 | if you tax something, you tend to get less of it.
01:14:46.360 | So, you know, right now there's still work to be done
01:14:48.760 | and still capital to be invested in,
01:14:51.160 | but instead we should be taxing things like pollution
01:14:54.560 | and congestion.
01:14:55.920 | And if we did that, we would have less pollution.
01:14:59.960 | So a carbon tax is, you know,
01:15:02.080 | almost every economist would say it's a no brainer,
01:15:04.080 | whether they're Republican or Democrat,
01:15:07.480 | Greg Mankiw, who was head of George Bush's
01:15:09.600 | Council of Economic Advisers,
01:15:10.680 | or Dick Schmalensee,
01:15:12.920 | who is another Republican economist, agrees,
01:15:16.000 | and of course a lot of Democratic economists agree as well.
01:15:21.000 | If we taxed carbon,
01:15:22.720 | we could raise hundreds of billions of dollars.
01:15:25.960 | We could take that money and redistribute it
01:15:28.520 | through an earned income tax credit or other things
01:15:31.120 | so that overall our tax system would become more progressive.
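(A back-of-the-envelope sketch of the "hundreds of billions" figure: tax rate times emissions, with the revenue recycled per capita. The tax rate, emissions, and population below are rough round numbers chosen only for illustration, not official estimates.)

```python
# Rough illustration of a revenue-recycled carbon tax. All inputs are
# round placeholder numbers, not official figures.
carbon_tax_per_tonne = 40        # dollars per tonne of CO2 (assumed rate)
annual_emissions_tonnes = 5e9    # roughly US-scale annual CO2 emissions (assumed)
population = 330e6               # roughly US population (assumed)

revenue = carbon_tax_per_tonne * annual_emissions_tonnes
per_person_rebate = revenue / population

print(f"revenue: ${revenue / 1e9:,.0f} billion per year")   # ~ $200 billion
print(f"rebate:  ${per_person_rebate:,.0f} per person")     # ~ $600 per person
```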
01:15:35.200 | We could tax congestion.
01:15:36.920 | One of the things that kills me as an economist
01:15:38.960 | is every time I sit in a traffic jam,
01:15:41.000 | I know that it's completely unnecessary.
01:15:43.200 | This is complete wasted time.
01:15:44.760 | - You could just visualize the cost in lost productivity.
01:15:47.000 | - Exactly, because they are taking costs from me
01:15:51.200 | and all the people around me,
01:15:52.640 | and if they charged a congestion tax,
01:15:54.800 | they would take that same amount of money
01:15:57.000 | and it would streamline the roads.
01:15:59.680 | Like when you're in Singapore,
01:16:00.680 | the traffic just flows 'cause they have a congestion tax.
01:16:02.600 | They listen to economists.
01:16:03.600 | They invited me and others to go talk to them.
01:16:06.440 | And then I'd still be paying.
01:16:09.200 | I'd be paying a congestion tax instead of paying my time,
01:16:11.680 | but that money would now be available for healthcare,
01:16:14.200 | be available for infrastructure,
01:16:15.480 | or be available just to give to people
01:16:16.840 | so they could buy food or whatever.
01:16:18.600 | So it saddens me when you're sitting in a traffic jam,
01:16:23.280 | it's like taxing me and then taking that money
01:16:25.000 | and dumping it in the ocean, just like destroying it.
01:16:27.760 | So there are a lot of things like that that economists,
01:16:31.640 | and I'm not doing anything radical here.
01:16:33.880 | Most good economists would probably agree with me
01:16:37.720 | point by point on these things.
01:16:39.360 | And we could do those things
01:16:40.960 | and our whole economy would become much more efficient.
01:16:43.720 | It'd become fairer. Investing in R&D and research
01:16:46.960 | is as close to a free lunch as we have.
01:16:50.000 | My erstwhile MIT colleague, Bob Solow,
01:16:53.120 | got the Nobel Prize, not yesterday, but 30 years ago,
01:16:57.320 | for describing that most improvements in living standards
01:17:01.320 | come from tech progress.
01:17:02.840 | And Paul Romer later got a Nobel Prize
01:17:04.560 | for noting that investments in R&D and human capital
01:17:08.040 | can speed the rate of tech progress.
01:17:11.040 | So if we do that, then we'll be healthier and wealthier.
01:17:14.680 | - Yeah, from an economics perspective,
01:17:16.200 | I remember taking an undergrad econ,
01:17:18.440 | you mentioned econ 101.
01:17:20.360 | It seemed from all the plots I saw that R&D is an obvious one,
01:17:25.360 | that it's as close to a free lunch as we have.
01:17:29.040 | It seemed like obvious that we should do more research.
01:17:32.360 | - It is.
01:17:33.200 | - Like what?
01:17:34.040 | Like, there's no--
01:17:36.640 | - Well, we should do basic research.
01:17:38.040 | I mean, so let me just be clear.
01:17:39.480 | It'd be great if everybody did more research.
01:17:41.440 | And I would make this distinction between applied development
01:17:44.400 | versus basic research.
01:17:46.120 | So applied development, like,
01:17:48.160 | how do we get this self-driving car feature
01:17:52.640 | to work better in the Tesla?
01:17:54.000 | That's great for private companies
01:17:55.280 | 'cause they can capture the value from that.
01:17:57.120 | If they make a better self-driving car system,
01:17:59.720 | they can sell cars that are more valuable
01:18:02.280 | and they'll make money.
01:18:03.120 | So there's an incentive.
01:18:03.960 | So there's not a big problem there.
01:18:05.720 | And smart companies, Amazon, Tesla,
01:18:08.200 | and others are investing in it.
01:18:09.440 | The problem is with basic research,
01:18:11.240 | like coming up with core basic ideas,
01:18:14.400 | whether it's in nuclear fusion
01:18:16.120 | or artificial intelligence or biotech.
01:18:18.960 | There, if someone invents something,
01:18:21.640 | it's very hard for them to capture the benefits from it.
01:18:23.880 | It's shared by everybody, which is great in a way,
01:18:26.720 | but it means that they're not gonna have the incentives
01:18:28.640 | to put as much effort into it.
01:18:30.680 | There you need, it's a classic public good.
01:18:32.960 | There you need the government to be involved in it.
01:18:35.120 | And the US government used to be investing much more in R&D,
01:18:39.360 | but we have slashed that part of the government
01:18:42.960 | really foolishly and we're all poorer,
01:18:46.920 | significantly poorer as a result.
01:18:48.440 | Growth rates are down.
01:18:50.000 | We're not having the kind of scientific progress
01:18:51.680 | we used to have.
01:18:53.240 | It's been sort of a short-term,
01:18:55.840 | eating the seed corn, whatever metaphor you wanna use,
01:19:00.240 | where people grab some money, put it in their pockets today,
01:19:03.320 | but five, 10, 20 years later,
01:19:07.120 | they're a lot poorer than they otherwise would have been.
01:19:10.160 | - So we're living through a pandemic right now
01:19:12.320 | globally and in the United States.
01:19:14.760 | From an economics perspective,
01:19:18.840 | how do you think this pandemic will change the world?
01:19:23.080 | - It's been remarkable and it's horrible
01:19:26.520 | how many people have suffered, the amount of death,
01:19:29.040 | the economic destruction.
01:19:31.280 | It's also striking just the amount of change in work
01:19:34.320 | that I've seen.
01:19:35.880 | In the last 20 weeks, I've seen more change
01:19:38.480 | than there were in the previous 20 years.
01:19:41.200 | There's been nothing like it
01:19:42.440 | since probably the World War II mobilization
01:19:44.720 | in terms of reorganizing our economy.
01:19:47.080 | The most obvious one is the shift to remote work.
01:19:50.200 | And I and many other people stopped going into the office
01:19:54.280 | and teaching my students in person.
01:19:56.160 | I did a study on this with a bunch of colleagues
01:19:57.760 | at MIT and elsewhere.
01:19:59.200 | And what we found was that before the pandemic
01:20:02.440 | in the beginning of 2020, about one in six,
01:20:05.400 | a little over 15% of Americans were working remotely.
01:20:08.640 | When the pandemic hit, that grew steadily and hit 50%,
01:20:13.560 | roughly half of Americans working at home.
01:20:16.080 | So a complete transformation.
01:20:17.720 | And of course it wasn't even,
01:20:19.120 | it wasn't like everybody did it.
01:20:20.480 | If you're an information worker, a professional,
01:20:22.720 | if you work mainly with data,
01:20:24.360 | then you're much more likely to work at home.
01:20:26.840 | If you're a manufacturing worker,
01:20:28.760 | working with other people or physical things,
01:20:32.280 | then it wasn't so easy to work at home.
01:20:34.480 | And instead, those people were much more likely
01:20:36.440 | to become laid off or unemployed.
01:20:39.240 | So it's been something that's had very disparate effects
01:20:41.800 | on different parts of the workforce.
01:20:44.480 | - Do you think it's gonna be sticky
01:20:46.120 | in a sense that after vaccine comes out
01:20:49.280 | and the economy reopens,
01:20:51.000 | do you think remote work will continue?
01:20:54.240 | - That's a great question.
01:20:56.680 | I, my hypothesis is yes, a lot of it will.
01:20:59.320 | Of course, some of it will go back,
01:21:00.760 | but a surprising amount of it will stay.
01:21:03.440 | I personally, for instance, I moved my seminars,
01:21:06.560 | my academic seminars to Zoom
01:21:08.800 | and I was surprised how well it worked.
01:21:10.760 | - That's how it works?
01:21:11.600 | - Yeah, I mean, obviously we were able to reach
01:21:13.560 | a much broader audience.
01:21:14.760 | So we have people tuning in from Europe
01:21:16.560 | and other countries,
01:21:18.480 | just all over the United States for that matter.
01:21:21.720 | I also actually found that,
01:21:21.720 | in many ways, it's more egalitarian.
01:21:23.480 | We use the chat feature and other tools
01:21:25.880 | and grad students and others
01:21:27.160 | who might've been a little shy about speaking up,
01:21:29.360 | we now kind of have more of an ability for lots of voices
01:21:32.680 | and they're answering each other's questions.
01:21:34.320 | So you kind of get parallel.
01:21:35.920 | Like if someone had some question about, you know,
01:21:38.160 | some of the data or a reference or whatever,
01:21:40.640 | then someone else in the chat would answer it.
01:21:42.600 | And the whole thing just became like a higher bandwidth,
01:21:44.520 | higher quality thing.
01:21:46.600 | So I thought that was kind of interesting.
01:21:48.480 | I think a lot of people are discovering
01:21:50.320 | that these tools that, you know,
01:21:52.040 | thanks to technologists have been developed
01:21:54.520 | over the past decade,
01:21:56.520 | they're a lot more powerful than we thought.
01:21:58.000 | I mean, all the terrible things we've seen with COVID
01:22:00.200 | and the real failure of many of our institutions
01:22:03.440 | that I thought would work better.
01:22:05.040 | One area that's been a bright spot is our technologies.
01:22:08.880 | You know, bandwidth has held up pretty well
01:22:11.920 | and all of our email and other tools have just,
01:22:14.640 | you know, scaled up kind of gracefully.
01:22:18.040 | So that's been a plus.
01:22:20.320 | Economists call this question
01:22:21.720 | of whether it'll go back hysteresis.
01:22:23.960 | The question is like when you boil an egg,
01:22:25.960 | after it gets cold again, it stays hard.
01:22:29.080 | And I think that we're going to have a fair amount
01:22:30.920 | of hysteresis in the economy.
01:22:32.240 | We're going to move to this new,
01:22:33.520 | we have moved to a new remote work system
01:22:35.720 | and it's not going to snap all the way back
01:22:37.320 | to where it was before.
01:22:38.800 | - One of the things that worries me
01:22:40.800 | is that the people with lots of followers on Twitter
01:22:47.160 | and people with voices,
01:22:51.440 | people that can, voices that can be magnified by,
01:22:55.280 | you know, reporters and all that kind of stuff
01:22:57.400 | are the people that fall into this category
01:22:59.280 | that we were referring to just now
01:23:01.640 | where they can still function
01:23:03.040 | and be successful with remote work.
01:23:06.280 | And then there is a kind of quiet suffering
01:23:11.280 | of what feels like millions of people
01:23:14.800 | whose jobs are disturbed profoundly by this pandemic,
01:23:21.200 | but they don't have many followers on Twitter.
01:23:23.500 | What do we,
01:23:27.160 | and again, I apologize,
01:23:31.840 | but I've been reading "The Rise and Fall of the Third Reich"
01:23:35.840 | and there's a connection to the depression
01:23:38.120 | on the American side.
01:23:39.620 | There's a deep, complicated connection
01:23:42.360 | to how suffering can turn into forces
01:23:46.440 | that potentially change the world
01:23:48.160 | in destructive ways.
01:23:51.960 | So like it's something I worry about,
01:23:53.560 | is like what is this suffering going to materialize itself
01:23:56.600 | in five, 10 years?
01:23:58.080 | Is that something you worry about, think about?
01:24:01.040 | - It's like the center of what I worry about.
01:24:03.320 | And let me break it down to two parts.
01:24:05.400 | You know, there's a moral and ethical aspect to it
01:24:07.240 | that we need to relieve this suffering.
01:24:09.340 | I mean, I share the values of, I think, most Americans,
01:24:13.280 | or most people on the planet:
01:24:15.000 | we like to see shared prosperity.
01:24:16.620 | And we would like to see people not falling behind
01:24:20.220 | and they have fallen behind, not just due to COVID,
01:24:23.080 | but in the previous couple of decades,
01:24:25.640 | the median income has barely moved,
01:24:27.920 | depending on how you measure it.
01:24:29.880 | And the incomes of the top 1% have skyrocketed.
01:24:33.360 | And part of that is due to the ways technology
01:24:35.920 | has been used.
01:24:36.760 | Part of this has been due to, frankly,
01:24:37.920 | the fact that our political system has continually shifted more wealth
01:24:42.480 | toward those people who have powerful interests.
01:24:45.120 | So there's just, I think, a moral imperative
01:24:48.720 | to do a better job.
01:24:49.800 | And ultimately, we're all gonna be wealthier
01:24:51.900 | if more people can contribute,
01:24:53.320 | more people have the wherewithal.
01:24:55.060 | But the second thing is that there's a real political risk.
01:24:58.640 | I'm not a political scientist,
01:24:59.940 | but you don't have to be one, I think,
01:25:02.560 | to see how a lot of people are really upset
01:25:05.660 | with their getting a raw deal.
01:25:07.380 | And they are going to, you know,
01:25:11.420 | they wanna smash the system in different ways.
01:25:13.680 | In 2016 and 2018, and now, I think there are a lot of people
01:25:18.280 | who are looking at the political system
01:25:19.600 | and they feel like it's not working for them
01:25:21.120 | and they just wanna do something radical.
01:25:23.740 | Unfortunately, demagogues have harnessed that
01:25:28.160 | in a way that is pretty destructive to the country.
01:25:32.040 | And an analogy I see is what happened with trade.
01:25:36.560 | You know, almost every economist thinks
01:25:38.680 | that free trade is a good thing,
01:25:40.320 | that when two people voluntarily exchange,
01:25:42.440 | almost by definition, they're both better off
01:25:44.940 | if it's voluntary.
01:25:45.920 | And so generally trade is a good thing,
01:25:49.800 | but they also recognize that trade
01:25:51.320 | can lead to uneven effects,
01:25:54.080 | that there can be winners and losers,
01:25:56.240 | and some of the people who didn't have the skills
01:25:59.280 | to compete with somebody else,
01:26:00.420 | or didn't have other assets, can lose out.
01:26:02.880 | And so trade can shift prices
01:26:04.920 | in ways that are adverse to some people.
01:26:07.500 | So there's a formula that economists have,
01:26:11.340 | which is that you have free trade,
01:26:13.480 | but then you compensate the people who were hurt.
01:26:15.960 | And free trade makes the pie bigger.
01:26:18.440 | And since the pie is bigger,
01:26:19.480 | it's possible for everyone to be better off.
01:26:21.960 | You can make the winners better off,
01:26:23.240 | but you can also compensate those who don't win.
01:26:25.440 | And so they end up being better off as well.
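A minimal numeric sketch of the compensation logic described above, using purely illustrative figures that are not from the conversation: when the pie grows, a transfer from winners to losers can, in principle, leave both groups better off than before.

```python
# Illustrative numbers only: a two-group economy before and after a
# pie-growing change (freer trade or new technology), with compensation.
winners_before, losers_before = 60, 40   # initial incomes; total pie = 100
winners_after, losers_after = 90, 30     # pie grows to 120, but unevenly

transfer = 15                            # compensation paid by winners to losers
winners_net = winners_after - transfer   # 90 - 15 = 75 > 60
losers_net = losers_after + transfer     # 30 + 15 = 45 > 40

# Because the total grew (120 > 100), a transfer exists that makes
# both groups better off than before the change.
assert winners_net > winners_before and losers_net > losers_before
```

The point made in the conversation is that, in practice, this compensation largely never happened, which is where the backlash comes from.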
01:26:28.500 | What happened was that we didn't fulfill that promise.
01:26:33.200 | We did have some more increased free trade
01:26:36.080 | in the '80s and '90s,
01:26:38.000 | but we didn't compensate the people who were hurt.
01:26:40.680 | And so they felt like the people in power
01:26:43.840 | reneged on the bargain, and I think they did.
01:26:45.960 | And so then there's a backlash against trade.
01:26:48.800 | And now both political parties,
01:26:50.880 | but especially Trump and company,
01:26:53.680 | have really pushed back against free trade.
01:26:58.260 | Ultimately, that's bad for the country.
01:27:00.720 | Ultimately, that's bad for living standards.
01:27:02.760 | But in a way I can understand
01:27:04.440 | that people felt they were betrayed.
01:27:07.140 | Technology has a lot of similar characteristics.
01:27:10.740 | Technology can make us all better off.
01:27:14.980 | It makes the pie bigger.
01:27:16.180 | It creates wealth and health, but it can also be uneven.
01:27:18.980 | Not everyone automatically benefits.
01:27:21.340 | It's possible for some people,
01:27:22.940 | even a majority of people, to get left behind
01:27:25.140 | while a small group benefits.
01:27:27.240 | What most economists would say is,
01:27:29.620 | well, let's make the pie bigger,
01:27:30.940 | but let's make sure we adjust the system
01:27:33.020 | so we compensate the people who are hurt.
01:27:35.260 | And since the pie is bigger,
01:27:36.960 | we can make the rich richer,
01:27:38.040 | we can make the middle class richer,
01:27:39.220 | we can make the poor richer.
01:27:41.020 | Mathematically, everyone could be better off.
01:27:43.700 | But again, we're not doing that.
01:27:45.460 | And again, people are saying, this isn't working for us.
01:27:48.980 | And again, instead of fixing the distribution,
01:27:52.580 | a lot of people are beginning to say,
01:27:54.340 | hey, technology sucks, we've got to stop it.
01:27:57.300 | Let's throw rocks at the Google bus.
01:27:59.060 | - Let's blow it up.
01:28:00.020 | - Let's blow it up.
01:28:01.300 | And there were the Luddites almost exactly 200 years ago
01:28:04.780 | who smashed the looms and the spinning machines
01:28:08.060 | because they felt like those machines weren't helping them.
01:28:11.340 | We have a real imperative,
01:28:12.740 | not just to do the morally right thing,
01:28:14.740 | but to do the thing that is gonna save the country,
01:28:17.580 | which is make sure that we create not just prosperity,
01:28:20.460 | but shared prosperity.
01:28:21.680 | - So you've been at MIT for over 30 years, I think.
01:28:27.620 | - Don't tell anyone how old I am.
01:28:28.460 | Yeah, that's true, that's true.
01:28:30.300 | - And you've now moved to Stanford.
01:28:33.980 | I'm gonna try not to say anything
01:28:35.700 | about how great MIT is.
01:28:38.880 | What's that move like?
01:28:41.580 | It's East Coast to West Coast.
01:28:44.940 | - Well, MIT is great.
01:28:46.180 | MIT has been very good to me,
01:28:48.100 | it continues to be very good to me.
01:28:49.580 | It's an amazing place.
01:28:51.460 | I continue to have so many amazing friends
01:28:53.220 | and colleagues there.
01:28:54.580 | I'm very fortunate to have been able
01:28:56.140 | to spend a lot of time at MIT.
01:28:58.500 | Stanford's also amazing.
01:29:00.220 | And part of what attracted me out here
01:29:01.980 | was not just the weather, but also Silicon Valley,
01:29:04.940 | let's face it, is really more of the epicenter
01:29:07.380 | of the technological revolution.
01:29:08.980 | And I wanna be close to the people
01:29:10.380 | who are inventing AI and elsewhere.
01:29:12.340 | A lot of it is being invented at MIT, for that matter,
01:29:14.940 | in Europe and China and elsewhere, India.
01:29:18.940 | But being a little closer to some of the key technologists
01:29:23.780 | was something that was important to me.
01:29:25.940 | And it may be shallow,
01:29:28.600 | but I also do enjoy the good weather.
01:29:31.900 | I felt a little ripped off when I came here
01:29:34.220 | a couple of months ago,
01:29:35.060 | and immediately there are the fires,
01:29:36.660 | and my eyes were burning, the sky was orange,
01:29:39.860 | and there's the heat waves.
01:29:41.340 | And so it wasn't exactly what I'd been promised,
01:29:44.460 | but fingers crossed it'll get back to better.
01:29:47.940 | - Then maybe on a brief aside,
01:29:50.700 | there's been some criticism of academia and universities
01:29:53.860 | from different avenues.
01:29:55.740 | And I, as a person who's gotten to enjoy universities
01:30:00.780 | as the pure playground of ideas that they can be,
01:30:04.380 | always kind of try to find the words to tell people
01:30:09.980 | that these are magical places.
01:30:13.180 | Is there something that you can speak to
01:30:17.020 | that is beautiful or powerful about universities?
01:30:22.020 | - Well, sure.
01:30:23.300 | I mean, first off, I mean,
01:30:24.540 | economists have this concept called revealed preference.
01:30:26.700 | You can ask people what they say,
01:30:28.340 | or you can watch what they do.
01:30:29.980 | And so obviously by revealed preferences,
01:30:32.300 | I love academia.
01:30:33.140 | I'm out here, I could be doing lots of other things,
01:30:35.660 | but it's something I enjoy a lot.
01:30:37.700 | And I think the word magical is exactly right,
01:30:39.740 | at least it is for me.
01:30:41.580 | I do what I love.
01:30:43.220 | Hopefully my dean won't be listening,
01:30:44.420 | but I would do this for free.
01:30:45.820 | It's just what I like to do.
01:30:49.180 | I like to do research.
01:30:50.260 | I love to have conversations like this with you
01:30:51.940 | and with my students, with my fellow colleagues.
01:30:53.860 | I love being around the smartest people I can find
01:30:55.860 | and learning something from them
01:30:57.340 | and having them challenge me.
01:30:58.740 | And that just gives me joy.
01:31:02.580 | And every day I find something new and exciting to work on.
01:31:05.620 | And a university environment is really filled
01:31:08.140 | with other people who feel that way.
01:31:09.940 | And so I feel very fortunate to be part of it.
01:31:13.060 | And I'm lucky that I'm in a society
01:31:14.940 | where I can actually get paid for it
01:31:16.300 | and put food on the table
01:31:17.340 | while doing the stuff that I really love.
01:31:19.380 | And I hope someday everybody can have jobs
01:31:21.700 | that are like that.
01:31:22.900 | And I appreciate that it's not necessarily easy
01:31:25.460 | for everybody to have a job that they both love
01:31:27.500 | and also get paid for.
01:31:29.500 | So there are things that don't go well in academia,
01:31:34.140 | but by and large, I think it's a
01:31:36.140 | kinder, gentler version of a lot of the world.
01:31:38.180 | - Yes, that's true, exactly.
01:31:39.340 | - We sort of cut each other a little slack
01:31:41.380 | on things like, on just a lot of things.
01:31:46.100 | Of course there's harsh debates and discussions about things
01:31:49.980 | and some petty politics here and there.
01:31:52.140 | I personally, I try to stay away
01:31:53.620 | from most of that sort of politics.
01:31:55.700 | It's not my thing and so it doesn't affect me
01:31:57.660 | most of the time, sometimes a little bit maybe.
01:32:00.500 | But being able to pull something together,
01:32:03.180 | like we have with the Digital Economy Lab,
01:32:04.860 | where we get all these brilliant grad students
01:32:07.460 | and undergraduates and postdocs
01:32:09.300 | that are just doing stuff that I learn from.
01:32:12.340 | And every one of them has some aspect
01:32:14.780 | of what they're doing that's just,
01:32:16.660 | I couldn't even understand.
01:32:17.620 | It's like way, way more brilliant.
01:32:19.340 | And that's really, to me, actually I really enjoy that,
01:32:23.060 | being in a room with lots of other smart people.
01:32:25.140 | And Stanford has made it very easy to attract those people.
01:32:30.140 | I just say I'm gonna do a seminar or whatever
01:32:33.700 | and the people come, they come and wanna work with me.
01:32:36.820 | We get funding, we get data sets
01:32:38.900 | and it's come together real nicely.
01:32:41.460 | - And the rest is just fun.
01:32:44.220 | - It's fun, yeah.
01:32:45.060 | And we feel like we're working on important problems
01:32:47.820 | and we're doing things that I think are first order
01:32:51.860 | in terms of what's important in the world
01:32:54.140 | and that's very satisfying to me.
01:32:56.300 | - Maybe a bit of a fun question.
01:32:58.060 | What three books, technical, fiction, philosophical,
01:33:02.220 | you've enjoyed, had a big impact in your life?
01:33:07.420 | - Well, I guess I go back to my teen years
01:33:10.020 | and I read "Sidartha" which is a philosophical book
01:33:13.460 | and kinda helps keep me centered.
01:33:15.420 | - Hermann Hesse.
01:33:16.260 | - Yeah, by Hermann Hesse, exactly.
01:33:17.380 | Don't get too wrapped up in material things or other things
01:33:21.220 | and just sort of try to find peace on things.
01:33:24.860 | A book that actually influenced me a lot
01:33:26.380 | in terms of my career was called
01:33:27.620 | "The Worldly Philosophers" by Robert Heilbroner.
01:33:30.500 | It's actually about economists.
01:33:31.700 | It goes through a series of different economists
01:33:33.620 | written in a very lively form.
01:33:34.980 | It probably sounds boring, but it did describe
01:33:37.860 | whether it's Adam Smith or Karl Marx or John Maynard Keynes
01:33:40.900 | and each of them sort of what their key insights were
01:33:43.380 | but also kind of their personalities
01:33:45.380 | and I think that's one of the reasons
01:33:46.580 | I became an economist was just understanding
01:33:50.660 | how they grappled with the big questions of the world.
01:33:53.140 | - So would you recommend it as a good whirlwind overview
01:33:56.380 | of the history of economics?
01:33:57.580 | - Yeah, yeah, I think that's exactly right.
01:33:59.100 | It kinda takes you through the different things
01:34:00.900 | so you can understand how they reached their thinking,
01:34:04.060 | and some of the strengths and weaknesses.
01:34:06.420 | I mean, probably it's a little out of date now.
01:34:07.900 | It needs to be updated a bit, but you could at least look
01:34:10.220 | through the first couple hundred years of economics
01:34:12.940 | which is not a bad place to start.
01:34:15.020 | More recently, a book I really enjoyed
01:34:17.580 | is by my friend and colleague Max Tegmark
01:34:20.260 | called "Life 3.0."
01:34:21.340 | You should have him on your podcast if you haven't already.
01:34:23.300 | - It was episode number one.
01:34:25.460 | - Oh my God.
01:34:26.540 | - And he's back, he'll be back soon.
01:34:30.220 | - Yeah, no, he's terrific.
01:34:31.460 | I love the way his brain works
01:34:33.460 | and he makes you think about profound things.
01:34:35.780 | He's got such a joyful approach to life
01:34:38.560 | and so that's been a great book and I learn a lot from it.
01:34:42.700 | I think he explains it in a way,
01:34:44.380 | even though he's so brilliant, that everyone can understand,
01:34:47.340 | that I can understand.
01:34:50.020 | That's three, but let me mention maybe one or two others.
01:34:52.900 | I mean, I recently read "More from Less"
01:34:55.300 | by my sometimes co-author Andrew McAfee.
01:34:58.580 | It made me optimistic about how we can continue
01:35:01.900 | to have rising living standards
01:35:04.540 | while living more lightly on the planet.
01:35:06.100 | In fact, because of higher living standards,
01:35:07.820 | because of technology, because of digitization
01:35:10.260 | that I mentioned, we don't have to have
01:35:12.220 | as big an impact on the planet
01:35:13.540 | and that's a great story to tell
01:35:15.700 | and he documents it very carefully.
01:35:19.720 | You know, a personal kind of self-help book
01:35:21.360 | that I found kind of useful is "Atomic Habits."
01:35:24.100 | I think it's, what's his name, James Clear.
01:35:26.180 | - Yeah, James Clear.
01:35:27.500 | - He's just, yeah, it's a good name
01:35:29.100 | 'cause he writes very clearly
01:35:30.420 | and most of the sentences I read in that book,
01:35:33.600 | I was like, yeah, I know that,
01:35:34.460 | but it just really helps to have somebody remind you
01:35:37.200 | and tell you and kind of just reinforce it.
01:35:39.740 | - So build habits in your life
01:35:42.500 | that you hope will have a positive impact,
01:35:46.100 | and they don't have to be big things.
01:35:48.040 | It could be just tiny little--
01:35:49.180 | - Exactly, I mean, the word atomic,
01:35:50.740 | it's a little bit of a pun, I think he says.
01:35:52.540 | You know, one, atomic means they're really small
01:35:54.020 | and you take these little things,
01:35:55.540 | but also like atomic power, it can have big impact.
01:35:59.500 | - That's funny, yeah.
01:36:00.600 | The biggest ridiculous question,
01:36:04.220 | especially to ask an economist, but also a human being,
01:36:07.140 | what's the meaning of life?
01:36:08.460 | (laughing)
01:36:09.300 | - I hope you've gotten the answer to that from somebody.
01:36:11.540 | I think we're all still working on that one,
01:36:13.560 | but what is it?
01:36:14.780 | You know, I actually learned a lot from my son, Luke,
01:36:18.140 | and he's 19 now, but he's always loved philosophy,
01:36:22.180 | and he reads way more sophisticated philosophy than I do.
01:36:24.940 | I once took him to Oxford and he spent the whole time
01:36:26.820 | pulling all these obscure books down and reading them.
01:36:29.020 | And a couple years ago, we had this argument,
01:36:32.600 | and he was trying to convince me that hedonism
01:36:34.520 | was the ultimate meaning of life, just pleasure seeking.
01:36:39.520 | (laughing)
01:36:40.580 | - Well, how old was he at the time?
01:36:41.580 | - 17.
01:36:42.420 | (laughing)
01:36:44.420 | But he made a really good intellectual argument for it too.
01:36:47.860 | - Of course.
01:36:48.700 | - But it just didn't strike me as right.
01:36:50.180 | And I think that while I am kind of a utilitarian,
01:36:54.520 | like I do think we should do the greatest good
01:36:56.060 | for the greatest number, that's just too shallow.
01:36:58.740 | And I think I've convinced myself that real happiness
01:37:02.820 | doesn't come from seeking pleasure.
01:37:04.260 | It's kind of a little, it's ironic.
01:37:05.700 | Like if you really focus on being happy,
01:37:07.700 | I think it doesn't work.
01:37:09.740 | You gotta be doing something bigger.
01:37:12.440 | I think the analogy I sometimes use is
01:37:14.900 | when you look at a dim star in the sky,
01:37:17.620 | if you look right at it, it kind of disappears,
01:37:19.460 | but you have to look a little to the side,
01:37:20.760 | and then the parts of your retina
01:37:23.180 | that are better at absorbing light can pick it up better.
01:37:26.360 | It's the same thing with happiness.
01:37:27.440 | I think you need to sort of find some other goal,
01:37:32.440 | something, some meaning in life,
01:37:34.020 | and that ultimately makes you happier
01:37:36.220 | than if you go squarely at just pleasure.
01:37:39.100 | And so for me, the kind of research I do
01:37:42.300 | that I think is trying to change the world,
01:37:44.260 | make the world a better place,
01:37:46.160 | and I'm not like an evolutionary psychologist,
01:37:48.020 | but my guess is that our brains are wired
01:37:50.900 | not just for pleasure, but we're social animals,
01:37:53.900 | and we're wired to help others.
01:37:57.260 | And ultimately, that's something
01:37:59.460 | that's really deeply rooted in our psyche.
01:38:02.100 | And if we do help others, if we do,
01:38:04.540 | or at least feel like we're helping others,
01:38:06.900 | our reward systems kick in,
01:38:08.280 | and we end up being more deeply satisfied
01:38:10.500 | than if we just do something selfish and shallow.
01:38:13.660 | - Beautifully put.
01:38:14.480 | I don't think there's a better way to end it.
01:38:16.220 | Eric, you were one of the people
01:38:17.900 | when I first showed up at MIT
01:38:20.540 | that made me proud to be at MIT.
01:38:22.460 | So it's so sad that you're now at Stanford,
01:38:24.580 | but I'm sure you'll do wonderful things
01:38:27.820 | at Stanford as well.
01:38:28.780 | I can't wait for your future books,
01:38:30.960 | and people should definitely read your other books.
01:38:32.300 | - Well, thank you so much.
01:38:33.140 | And I think we're all part of the invisible college,
01:38:35.620 | as we call it.
01:38:36.460 | We're all part of this intellectual and human community
01:38:40.220 | where we all can learn from each other.
01:38:41.740 | It doesn't really matter physically
01:38:43.140 | where we are so much anymore.
01:38:44.860 | - Beautiful.
01:38:45.700 | Thanks for talking today.
01:38:46.580 | - My pleasure.
01:38:48.060 | - Thanks for listening to this conversation
01:38:49.460 | with Erik Brynjolfsson, and thank you to our sponsors.
01:38:52.620 | Vincero Watches, the maker of classy,
01:38:55.060 | well-performing watches.
01:38:56.860 | Four Sigmatic, the maker of delicious mushroom coffee.
01:39:00.060 | ExpressVPN, the VPN I've used for many years
01:39:03.100 | to protect my privacy on the internet.
01:39:05.240 | And Cash App, the app I use to send money to friends.
01:39:09.140 | Please check out these sponsors in the description
01:39:11.180 | to get a discount and to support this podcast.
01:39:14.900 | If you enjoy this thing, subscribe on YouTube,
01:39:17.280 | review it with five stars on Apple Podcasts,
01:39:19.500 | follow on Spotify, support on Patreon,
01:39:22.100 | or connect with me on Twitter @LexFriedman.
01:39:25.380 | And now, let me leave you with some words
01:39:27.700 | from Albert Einstein.
01:39:29.940 | "It has become appallingly obvious
01:39:32.820 | "that our technology has exceeded our humanity."
01:39:37.500 | Thank you for listening, and hope to see you next time.
01:39:40.340 | (upbeat music)
01:39:42.920 | (upbeat music)