
Manolis Kellis: Evolution of Human Civilization and Superintelligent AI | Lex Fridman Podcast #373


Chapters

0:00 Introduction
1:28 Humans vs AI
10:34 Evolution
32:18 Nature vs Nurture
44:47 AI alignment
51:11 Impact of AI on the job market
62:50 Human gatherings
67:51 Human-AI relationships
77:55 Being replaced by AI
90:21 Fear of death
102:17 Consciousness
109:42 AI rights and regulations
115:25 Halting AI development
128:36 Education
134:00 Biology research
141:20 Meaning of life
143:53 Loneliness

Whisper Transcript

00:00:00.000 | maybe we shouldn't think of AI as our tool
00:00:02.920 | and as our assistant.
00:00:04.280 | Maybe we should really think of it as our children.
00:00:06.880 | And the same way that you are responsible
00:00:10.080 | for training those children,
00:00:11.600 | but they are independent human beings,
00:00:13.840 | and at some point they will surpass you.
00:00:16.480 | And this whole concept of alignment,
00:00:19.100 | of basically making sure that the AI
00:00:20.920 | is always at the service of humans,
00:00:22.960 | is very self-serving and very limiting.
00:00:25.520 | If instead you basically think about AI as a partner
00:00:29.720 | and AI as someone that shares your goals, but has freedom,
00:00:34.720 | then we can't just simply force it to align with ourselves
00:00:38.840 | and not align with it ourselves.
00:00:41.040 | So in a way, building trust is mutual.
00:00:44.200 | You can't just simply train an intelligent system
00:00:47.940 | to love you when it realizes that you can just shut it off.
00:00:51.640 | - The following is a conversation with Manolis Kellis,
00:00:57.280 | his fifth time on this podcast.
00:00:59.880 | He's a professor at MIT
00:01:01.360 | and head of the MIT Computational Biology Group.
00:01:04.680 | He's one of the greatest living scientists in the world,
00:01:07.720 | but he's also a humble, kind, caring human being
00:01:12.580 | that I have the greatest of honors and pleasures
00:01:15.740 | of being able to call a friend.
00:01:18.080 | This is the Lex Fridman Podcast.
00:01:20.360 | To support it, please check out our sponsors
00:01:22.360 | in the description.
00:01:23.920 | And now, dear friends, here's Manolis Kellis.
00:01:27.700 | - Good to see you, first of all, man.
00:01:29.800 | - Lex, I've missed you.
00:01:30.640 | I think you've changed the lives of so many people
00:01:32.560 | that I know, and it's truly such a pleasure to be back,
00:01:36.140 | such a pleasure to see you grow,
00:01:37.480 | to sort of reach so many different aspects
00:01:39.440 | of your own personality.
00:01:40.480 | - Thank you for the love.
00:01:41.320 | You always give me so much support and love.
00:01:42.920 | I just can't, I'm forever grateful for that.
00:01:45.680 | - It's lovely to see a fellow human being who has that love,
00:01:48.880 | who basically does not judge people.
00:01:51.040 | And there's so many judgmental people out there,
00:01:53.340 | and it's just so nice to see this beacon of openness.
00:01:57.100 | - So what makes me one instantiation
00:01:59.360 | of human irreplaceable, do you think,
00:02:01.100 | as we enter this increasingly capable,
00:02:04.280 | age of increasingly capable AI?
00:02:06.680 | I have to ask, what do you think makes humans irreplaceable?
00:02:10.040 | - So humans are irreplaceable because of the baggage
00:02:13.580 | that we talked about.
00:02:14.640 | So we talked about baggage.
00:02:16.160 | We talked about the fact that every one of us
00:02:18.960 | has effectively relearned all of human civilization
00:02:23.340 | in their own way.
00:02:24.700 | So every single human has a unique set of genetic variants
00:02:28.600 | that they've inherited, some common, some rare,
00:02:31.500 | and some make us think differently,
00:02:33.340 | some make us have different personalities.
00:02:36.060 | They say that a parent with one child believes in genetics.
00:02:40.180 | A parent with multiple children understands genetics.
00:02:43.460 | Just how different kids are.
00:02:44.820 | And my three kids have dramatically different personalities
00:02:47.780 | ever since the beginning.
00:02:49.140 | So one thing that makes us unique
00:02:50.460 | is that every one of us has a different hardware.
00:02:53.280 | The second thing that makes us unique
00:02:54.500 | is that every one of us has a different software,
00:02:56.900 | uploading of all of human society,
00:03:00.460 | all of human civilization, all of human knowledge.
00:03:02.700 | We're not born knowing it.
00:03:04.340 | We're not like, I don't know,
00:03:05.820 | birds that learn how to make a nest through genetics,
00:03:10.180 | and will make a nest even if they've never seen one.
00:03:12.560 | We are constantly relearning all of human civilization.
00:03:15.660 | So that's the second thing.
00:03:16.900 | And the third one that actually makes humans
00:03:19.340 | very different from AI is that the baggage we carry
00:03:22.540 | is not experiential baggage, it's also evolutionary baggage.
00:03:26.260 | So we have evolved through rounds of complexity.
00:03:31.260 | So just like ogres have layers, and Shrek has layers,
00:03:34.820 | humans have layers.
00:03:36.420 | There's the cognitive layer, which is sort of the outer,
00:03:39.940 | you know, most, the latest evolutionary innovation,
00:03:43.060 | this enormous neocortex that we have evolved.
00:03:45.920 | And then there's the emotional baggage underneath that.
00:03:50.580 | And then there's all of the fear, and fright, and flight,
00:03:53.300 | and all of these kinds of behaviors.
00:03:55.220 | So AI only has a neocortex.
00:03:59.100 | AI doesn't have a limbic system.
00:04:01.020 | It doesn't have this complexity of human emotions,
00:04:04.580 | which make us so, I think, beautifully complex,
00:04:08.860 | so beautifully intertwined with our emotions,
00:04:13.780 | with our instincts, with our, you know,
00:04:16.840 | sort of gut reactions, and all of that.
00:04:19.320 | So I think when humans are trying to suppress that aspect,
00:04:22.320 | the sort of, quote unquote, more human aspect
00:04:24.780 | towards a more cerebral aspect,
00:04:26.600 | I think we lose a lot of the creativity.
00:04:28.580 | We lose a lot of the, you know, freshness of humans.
00:04:32.500 | And I think that's quite irreplaceable.
00:04:33.840 | - So we can look at the entirety of people
00:04:36.080 | that are alive today, maybe all humans who have ever lived,
00:04:39.680 | and map them in this high dimensional space,
00:04:42.080 | and there's probably a center,
00:04:44.080 | a center of mass for that mapping,
00:04:48.200 | and a lot of us deviate in different directions.
00:04:50.320 | So the variety of directions in which we all deviate
00:04:54.680 | from that center is vast.
00:04:56.280 | - I would like to think that the center is actually empty.
00:05:00.000 | - Yes.
00:05:00.840 | - That basically humans are just so diverse from each other
00:05:03.840 | that there's no such thing as an average human.
00:05:06.320 | That every one of us has some kind of complex baggage
00:05:09.280 | of emotions, intellectual, you know,
00:05:12.360 | motivational, behavioral traits,
00:05:16.200 | that it's not just one sort of normal distribution
00:05:20.080 | we deviate from it.
00:05:20.960 | There's so many dimensions that we're kind of hitting
00:05:24.360 | the sort of sparseness, the curse of dimensionality,
00:05:27.920 | where it's actually quite sparsely populated.
00:05:30.720 | And I don't think you have an average human being.
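
The "empty center" intuition can be checked with a toy simulation, a minimal sketch assuming traits are just independent random coordinates (no real genetic or behavioral data): as the number of dimensions grows, essentially nobody ends up near the average point.

```python
import numpy as np

# Toy illustration of the curse of dimensionality: sample hypothetical
# "trait vectors" and measure how far each one lies from the centroid.
# In high dimensions the distances concentrate well away from zero,
# so the region around the "average human" is essentially empty.
rng = np.random.default_rng(0)
for d in (2, 10, 100, 1000):                      # number of trait dimensions
    people = rng.standard_normal((10_000, d))     # hypothetical population
    dist = np.linalg.norm(people - people.mean(axis=0), axis=1)
    near_center = np.mean(dist < 1.0)
    print(f"d={d:4d}  mean distance from center={dist.mean():6.2f}  "
          f"fraction within 1 unit of center={near_center:.4f}")
```
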
00:05:33.360 | - So what makes us unique in part is the diversity
00:05:38.240 | and the capacity for diversity,
00:05:40.560 | and the capacity of the diversity comes from
00:05:43.240 | the entire evolutionary history.
00:05:45.200 | So there's just so many ways we can vary from each other.
00:05:49.880 | - Yeah, I would say not just the capacity,
00:05:52.740 | but the inevitability of diversity.
00:05:55.600 | Basically, it's in our hardware.
00:05:57.320 | We are wired differently from each other.
00:06:00.140 | My siblings and I are completely different.
00:06:01.680 | My kids from each other are completely different.
00:06:03.280 | My wife, she's like number two of six siblings.
00:06:06.440 | From a distance, they look the same,
00:06:08.160 | but then you get to know them,
00:06:10.000 | every one of them is completely different.
00:06:12.480 | - But sufficiently the same that the differences
00:06:15.160 | interplay with each other.
00:06:16.160 | So that's the interesting thing,
00:06:17.920 | where the diversity is functional, it's useful.
00:06:21.720 | So it's like we're close enough to where we notice
00:06:24.280 | the diversity and it doesn't completely destroy
00:06:28.460 | the possibility of effective communication and interaction.
00:06:31.280 | So we're still the same kind of thing.
00:06:33.360 | - So what I said in one of our earlier podcasts
00:06:35.200 | is that if humans realize that we're 99.9% identical,
00:06:39.400 | we would basically stop fighting with each other.
00:06:42.240 | Like we are really one human species
00:06:45.640 | and we are so, so similar to each other.
00:06:49.480 | And if you look at the alternative,
00:06:52.500 | if you look at the next thing outside humans,
00:06:55.480 | like it's been six million years
00:06:56.960 | that we haven't had a relative.
00:06:58.560 | So it's truly extraordinary that we're kind of like
00:07:02.920 | this dot in outer space
00:07:05.400 | compared to the rest of life on earth.
00:07:07.520 | - When you think about evolving through rounds of complexity,
00:07:10.280 | can you maybe elaborate such a beautiful phrase,
00:07:12.840 | beautiful thought that there's layers of complexity
00:07:15.880 | that make--
00:07:16.960 | - So with software, sometimes you're like,
00:07:20.080 | oh, let's like build version two from scratch.
00:07:23.040 | But this doesn't happen in evolution.
00:07:25.100 | In evolution, you layer in additional features
00:07:28.360 | on top of old features.
00:07:29.800 | So basically, every single time my cells divide,
00:07:34.800 | I'm a yeast, like I'm a unicellular organism.
00:07:38.760 | And then cell division is basically identical.
00:07:41.640 | Every time I breathe in and my lungs expand,
00:07:45.340 | I'm basically, like every time my heart beats, I'm a fish.
00:07:50.160 | So basically, I still have the same heart.
00:07:52.800 | Like very, very little has changed.
00:07:54.720 | The blood going through my veins, the oxygen,
00:07:59.080 | our immune system, we're basically primates.
00:08:02.160 | Our social behavior, we're basically new world monkeys
00:08:05.560 | and old world monkeys.
00:08:06.520 | It's basically this concept that every single one
00:08:11.520 | of these behaviors can be traced somewhere in evolution.
00:08:15.920 | And that all of that continues to live within us
00:08:19.080 | is also a testament to not just not killing other humans,
00:08:21.920 | for God's sake, but like not killing other species either.
00:08:25.240 | Like just to realize just how united we are with nature
00:08:28.280 | and that all of these biological processes
00:08:30.160 | have never ceased to exist.
00:08:31.680 | They're continuing to live within us.
00:08:33.440 | And then just the neocortex
00:08:34.960 | and all of the reasoning capabilities of humans
00:08:37.400 | are built on top of all of these other species
00:08:39.560 | that continue to live, breathe, divide, metabolize,
00:08:43.240 | fight off pathogens, all continue inside us.
00:08:46.760 | - So you think the neocortex, whatever reasoning is,
00:08:51.200 | that's the latest feature
00:08:53.000 | in the latest version of this journey?
00:08:55.760 | - It's extraordinary that humans have evolved so much
00:09:00.080 | in so little time.
00:09:01.560 | Again, if you look at the timeline of evolution,
00:09:04.480 | you basically have billions of years
00:09:06.680 | to even get to a dividing cell
00:09:09.920 | and then a multicellular organism
00:09:11.880 | and then a complex body plan.
00:09:14.600 | And then these incredible senses that we have
00:09:17.960 | for perceiving the world, the fact that bats can fly
00:09:21.040 | and the evolved flight, the evolved sonar
00:09:23.640 | in the span of a few million years.
00:09:25.180 | I mean, it's just extraordinary
00:09:27.240 | how much evolution has kind of sped up.
00:09:29.760 | And all of that comes through this evolvability.
00:09:34.760 | The fact that we took a while to get good at evolving.
00:09:38.640 | And then once you get good at evolving,
00:09:40.480 | you can sort of, you have modularity built in,
00:09:43.840 | you have hierarchical organizations built in,
00:09:46.360 | you have all of these constructs
00:09:47.840 | that allow meaningful changes to occur
00:09:52.000 | without breaking the system completely.
00:09:54.120 | If you look at a traditional genetic algorithm
00:09:56.140 | the way that humans designed them in the '60s,
00:09:58.660 | you can only evolve so much.
00:10:00.140 | And as you evolve a certain amount of complexity,
00:10:03.380 | the number of mutations that move you away
00:10:06.720 | from something functional exponentially increases.
00:10:09.980 | And the number of mutations
00:10:10.980 | that move you to something better exponentially decreases.
00:10:14.580 | So the probability of evolving something so complex
00:10:17.520 | becomes infinitesimally small as you get more complex.
00:10:21.980 | But with evolution, it's almost the opposite.
00:10:24.260 | Almost the exact opposite.
00:10:25.900 | That it appears that it's speeding up
00:10:28.280 | exactly as complexity is increasing.
00:10:31.080 | And I think that's just the system getting good at evolving.
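
The scaling argument about hand-designed genetic algorithms can be made concrete with a toy model; this is only a sketch with made-up numbers (a fixed per-bit mutation rate, and "functional" meaning every bit must stay correct), not the actual 1960s algorithms: the chance that a mutated offspring stays functional decays exponentially with the size of the genome it has to preserve.

```python
import random

# Toy version of the argument: a "functional" genome must keep all L
# bits correct, and each bit mutates independently with probability MU.
# The probability that an offspring stays functional is (1 - MU)**L,
# which shrinks exponentially as the genome (the complexity) grows.
random.seed(0)
MU = 0.01  # per-bit mutation probability (made-up number)

def fraction_still_functional(length, trials=10_000):
    ok = sum(
        all(random.random() > MU for _ in range(length))
        for _ in range(trials)
    )
    return ok / trials

for length in (10, 100, 500, 1000):
    print(f"L={length:5d}  P(offspring still functional) ~ "
          f"{fraction_still_functional(length):.4f}   "
          f"(theory: {(1 - MU) ** length:.4f})")
```
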
00:10:34.540 | - Where do you think it's all headed?
00:10:36.940 | Do you ever think about where,
00:10:39.180 | try to visualize the entirety of the evolutionary system
00:10:42.500 | and see if there's an arrow to it and a destination to it?
00:10:47.500 | - So the best way to understand the future
00:10:49.740 | is to look at the past.
00:10:51.140 | If you look at the trajectory,
00:10:52.700 | then you can kind of learn something
00:10:54.260 | about the direction in which we're heading.
00:10:56.660 | And if you look at the trajectory of life on Earth,
00:10:58.760 | it's really about information processing.
00:11:00.900 | So the concept of the senses evolving one after the other,
00:11:05.900 | like bacteria are able to do chemotaxis.
00:11:08.940 | Basically means moving towards a chemical gradient.
00:11:12.500 | And that's the first thing that you need
00:11:13.860 | to sort of hunt down food.
00:11:15.660 | The next step after that
00:11:17.020 | is being able to actually perceive light.
00:11:19.200 | So all life on this planet
00:11:22.100 | and all life that we know about
00:11:23.540 | evolved on this rotating rock.
00:11:25.780 | Every 24 hours, you get sunlight and dark,
00:11:28.580 | sunlight and dark.
00:11:29.960 | And light is a source of energy.
00:11:32.080 | Light is also information about where is up.
00:11:34.540 | Light is all kinds of things.
00:11:36.660 | So you can basically now start perceiving light
00:11:40.020 | and then perceiving shapes
00:11:42.580 | beyond just the sort of single photoreceptor.
00:11:45.980 | You can now have complex eyes or multiple eyes.
00:11:49.100 | And then start perceiving motion
00:11:50.840 | or perceiving direction, perceiving shapes.
00:11:52.980 | And then you start building infrastructure
00:11:56.600 | on the cognitive apparatus
00:11:58.220 | to start processing this information
00:12:00.240 | and making sense of the environment,
00:12:01.940 | building more complex models of the environment.
00:12:04.240 | So if you look at that trajectory of evolution,
00:12:07.040 | what we're experiencing now
00:12:08.880 | and humans are basically,
00:12:10.960 | according to this sort of information
00:12:13.240 | theoretic view of evolution,
00:12:15.600 | humans are basically the next natural step.
00:12:17.840 | And it's perhaps no surprise
00:12:19.660 | that we became the dominant species of the planet.
00:12:22.140 | Because yes, there's so many dimensions
00:12:23.760 | in which some animals are way better than we are.
00:12:26.080 | But at least on the cognitive dimension,
00:12:27.480 | we're just simply unsurpassed on this planet
00:12:30.120 | and perhaps the universe.
00:12:31.880 | But the concept that if you now trace this forward,
00:12:36.880 | we talked a little bit about evolvability
00:12:39.400 | and how things get better at evolving.
00:12:41.640 | One possibility is that
00:12:44.440 | the next layer of evolution
00:12:48.420 | builds the next layer of evolution.
00:12:51.920 | And what we're looking at now with humans and AI
00:12:54.680 | is that having mastered this information capability
00:12:59.680 | that humans have from this quote unquote old hardware,
00:13:04.600 | this basically biological evolved system
00:13:09.120 | that kind of somehow in the environment of Africa
00:13:13.160 | and then in subsequent environments
00:13:14.480 | of sort of dispersing through the globe
00:13:16.200 | was evolutionarily advantageous.
00:13:17.940 | That has now created technology
00:13:22.880 | which now has a capability
00:13:24.800 | of solving many of these cognitive tasks.
00:13:28.060 | It doesn't have all the baggage
00:13:29.680 | of the previous evolutionary layers.
00:13:31.620 | But maybe the next round of evolution on Earth
00:13:34.440 | is self-replicating AI
00:13:36.760 | where we're actually using our current smarts
00:13:40.020 | to build better programming languages
00:13:41.560 | and the programming languages to build, you know, ChatGPT
00:13:44.480 | and that to then build the next layer of software
00:13:49.040 | that will then sort of help AI speed up.
00:13:51.680 | And it's lovely that we're coexisting with this AI
00:13:56.120 | that sort of the creators of this next layer of evolution,
00:13:59.680 | this next stage are still around to help guide it
00:14:02.360 | and hopefully will be for the rest of eternity as partners.
00:14:06.000 | But it's also nice to think about it
00:14:07.320 | as just simply the next stage of evolution
00:14:08.960 | where you've kind of extracted away the biological needs.
00:14:12.120 | Like if you look at animals,
00:14:13.600 | most of them spend 80% of their waking hours
00:14:16.680 | hunting for food or building shelter.
00:14:18.600 | Humans, maybe 1% of that time.
00:14:21.440 | And then the rest is left to creative endeavors.
00:14:24.200 | And AI doesn't have to worry about shelter, et cetera.
00:14:27.120 | So basically it's all living in the cognitive space.
00:14:30.040 | So in a way it might just be a very natural
00:14:32.560 | sort of next step to think about evolution.
00:14:35.360 | And that's on the sort of purely cognitive side.
00:14:38.880 | If you now think about humans themselves,
00:14:40.920 | the ability to understand and comprehend our own genome,
00:14:45.720 | again, the ultimate layer of introspection,
00:14:49.000 | gives us now the ability to even mess with this hardware,
00:14:52.900 | not just augment our capabilities
00:14:55.360 | through interacting and collaborating with AI,
00:14:58.960 | but also perhaps understand the neural pathways
00:15:04.080 | that are necessary for empathetic thinking,
00:15:09.080 | for justice, for this and this and that,
00:15:12.160 | and sort of help augment human capabilities
00:15:15.160 | through neuronal interventions,
00:15:18.080 | through chemical interventions,
00:15:19.440 | through electrical interventions,
00:15:21.000 | to basically help steer the human bag of hardware
00:15:26.000 | that we kind of evolved with into greater capabilities.
00:15:30.680 | And then ultimately, by understanding
00:15:33.200 | not just the wiring of neurons
00:15:34.840 | and the functioning of neurons, but even the genetic code,
00:15:37.280 | we could even at one point in the future
00:15:40.520 | start thinking about, well,
00:15:42.600 | can we get rid of psychiatric disease?
00:15:45.120 | Can we get rid of neurodegeneration?
00:15:46.960 | Can we get rid of dementia?
00:15:48.960 | And start perhaps even augmenting human capabilities,
00:15:52.800 | not just getting rid of disease.
00:15:56.240 | - Can we tinker with the genome, with the hardware,
00:16:01.320 | or getting closer to the hardware
00:16:03.320 | without having to deeply understand the baggage?
00:16:06.640 | In the way we've disposed of the baggage
00:16:09.480 | in our software systems with AI,
00:16:11.960 | to some degree, not fully, but to some degree,
00:16:14.300 | can we do the same with the genome?
00:16:16.180 | Or is the genome deeply integrated into this baggage?
00:16:19.800 | - I wouldn't wanna get rid of the baggage.
00:16:21.680 | The baggage is what makes us awesome.
00:16:23.480 | So the fact that I'm sometimes angry
00:16:25.400 | and sometimes hungry and sometimes hangry
00:16:28.400 | is perhaps contributing to my creativity.
00:16:32.080 | I don't wanna be dispassionate.
00:16:33.360 | I don't wanna be another like, you know, robot.
00:16:36.640 | I, you know, I wanna get in trouble
00:16:38.400 | and I wanna sort of say the wrong thing.
00:16:39.760 | And I wanna sort of, you know, make an awkward comment
00:16:42.820 | and sort of push myself into, you know,
00:16:47.040 | reactions and responses and things
00:16:50.120 | that can get just people thinking differently.
00:16:53.920 | And I think our society is moving towards
00:16:57.220 | a humorless space where everybody's so afraid
00:17:00.960 | to say the wrong thing that people kind of
00:17:03.320 | start quitting en masse and start like,
00:17:05.880 | not liking their jobs and stuff like that.
00:17:08.080 | Maybe we should be kind of embracing that human aspect
00:17:13.080 | a little bit more in all of that baggage aspect
00:17:17.360 | and not necessarily thinking about replacing it.
00:17:19.820 | On the contrary, like embracing it
00:17:21.760 | and sort of this coexistence of the cognitive
00:17:24.160 | and the emotional hardware.
00:17:25.480 | - So embracing and celebrating the diversity
00:17:29.780 | that springs from the baggage versus kind of
00:17:33.860 | pushing towards and empowering
00:17:38.020 | this kind of pull towards conformity.
00:17:42.120 | - Yeah, and in fact, with the advent of AI, I would say,
00:17:45.980 | and these seemingly extremely intelligent systems
00:17:49.340 | that sort of can perform tasks that we thought of
00:17:52.780 | as extremely intelligent at the blink of an eye,
00:17:55.200 | this might democratize intellectual pursuits.
00:18:01.300 | Instead of just simply wanting the same type of brains
00:18:04.780 | that carry out specific ways of thinking,
00:18:09.720 | we can, like instead of just always only wanting,
00:18:13.180 | say, the mathematically extraordinary
00:18:16.280 | to go to the same universities,
00:18:18.280 | what you could simply say is like, who needs that anymore?
00:18:21.060 | You know, we now have AI.
00:18:23.180 | Maybe what we should really be thinking about
00:18:25.200 | is the diversity and the power that comes with the diversity
00:18:29.960 | where AI can do the math
00:18:33.000 | and then we should be getting a bunch of humans
00:18:34.880 | that sort of think extremely differently from each other
00:18:37.320 | and maybe that's the true cradle of innovation.
00:18:40.000 | - But AI can also, these large language models
00:18:45.260 | can also be with just a few prompts,
00:18:47.920 | essentially fine-tuned to be diverse from the center.
00:18:51.680 | So the prompts can really take you away
00:18:54.520 | into unique territory.
00:18:55.520 | You can ask the model to act in a certain way
00:18:59.080 | and it will start to act in that way.
00:19:01.100 | Is that possible that the language models
00:19:04.560 | could also have some of the magical diversity
00:19:06.940 | that makes it so damn interesting?
00:19:08.760 | - So I would say humans are the same way.
00:19:11.720 | So basically when you sort of prompt humans
00:19:14.280 | to basically, you know, give an environment
00:19:17.360 | to act a particular way, they change their own behaviors.
00:19:22.120 | And you know, the old saying is show me your friends
00:19:26.480 | and I'll tell you who you are.
00:19:27.980 | More like show me your friends
00:19:31.040 | and I'll tell you who you'll become.
00:19:33.080 | So it's not necessarily that you choose friends
00:19:34.600 | that are like you, but I mean, that's the first step.
00:19:37.600 | But then the second step is that, you know,
00:19:39.400 | the kind of behaviors that you find normal in your circles
00:19:43.060 | are the behaviors that you'll start espousing.
00:19:45.360 | And that type of meta evolution where every action we take
00:19:50.320 | not only shapes our current action
00:19:53.240 | and the result of this action,
00:19:54.720 | but it also shapes our future actions
00:19:56.200 | by shaping the environment
00:19:58.000 | in which those future actions will be taken.
00:20:00.500 | Every time you carry out a particular behavior,
00:20:03.280 | it's not just a consequence for today,
00:20:05.480 | but it's also a consequence for tomorrow
00:20:06.800 | because you're reinforcing that neural pathway.
00:20:09.380 | So in a way, self-discipline is a self-fulfilling prophecy.
00:20:13.400 | And by behaving the way that you wanna behave
00:20:17.200 | and choosing people that are like you
00:20:19.800 | and sort of exhibiting those behaviors
00:20:22.160 | that are sort of desirable,
00:20:26.180 | you end up creating that environment as well.
00:20:31.040 | - So it is a kind of, life itself
00:20:33.400 | is a kind of prompting mechanism, super complex.
00:20:36.720 | The friends you choose, the environments you choose,
00:20:40.080 | the way you modify the environment that you choose,
00:20:43.380 | yes, but that seems like that process
00:20:46.180 | is much less efficient than a large language model.
00:20:49.720 | You can literally get a large language model
00:20:51.880 | through a couple of prompts
00:20:53.480 | to be a mix of Shakespeare and David Bowie, right?
00:20:58.260 | You can very aggressively change
00:21:01.240 | in a way that's stable and convincing.
00:21:03.920 | You really transform through a couple of prompts
00:21:08.240 | the behavior of the model
00:21:11.140 | into something very different from the original.
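
As a concrete sketch of that kind of steering, here is roughly what a couple of prompts look like through the OpenAI Python client (assuming the `openai` package and an API key; the model name and persona wording are placeholders, not anything used in the conversation):

```python
from openai import OpenAI  # assumes the openai>=1.0 Python package

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# One short system prompt is enough to move the model into a very
# different, stable persona -- e.g. a Shakespeare/Bowie blend.
persona = ("You are a blend of William Shakespeare and David Bowie. "
           "Answer every question in that combined voice.")

response = client.chat.completions.create(
    model="gpt-4",  # placeholder model name
    messages=[
        {"role": "system", "content": persona},
        {"role": "user", "content": "What makes humans irreplaceable?"},
    ],
)
print(response.choices[0].message.content)
```
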
00:21:13.980 | - So well before ChatGPT,
00:21:17.700 | I would tell my students,
00:21:19.780 | just ask what would Manoli say right now?
00:21:24.140 | And you guys all have a pretty good emulator
00:21:26.540 | of me right now. - Yes.
00:21:27.860 | - And I don't know if you know the programming paradigm
00:21:30.200 | of the Robert Ducklin,
00:21:32.020 | where you basically explain to the Robert Ducklin
00:21:34.020 | that's just sitting there exactly what you did
00:21:36.580 | with your code and why you have a bug.
00:21:39.500 | And just by the act of explaining,
00:21:41.780 | you'll kind of figure it out.
00:21:43.420 | I woke up one morning from a dream
00:21:45.900 | where I was giving a lecture in this amphitheater
00:21:49.040 | and one of my friends was basically
00:21:51.140 | giving me some deep evolutionary insight
00:21:53.480 | on how cancer genomes and cancer cells evolve.
00:21:56.940 | And I woke up with a very elaborate discussion
00:22:00.340 | that I was giving and a very elaborate set of insights
00:22:03.660 | that he had that I was projecting onto my friend in my sleep.
00:22:07.620 | And obviously this was my dream.
00:22:09.420 | So my own neurons were capable of doing that,
00:22:12.380 | but they only did that under the prompt of,
00:22:15.340 | you are now Piyush Gupta,
00:22:17.940 | you are a professor in cancer genomics,
00:22:20.580 | you're an expert in that field, what do you say?
00:22:23.500 | So I feel that we all have that inside us,
00:22:26.420 | that we have that capability of basically saying,
00:22:29.540 | I don't know what the right thing is,
00:22:31.020 | but let me ask my virtual Lex, what would you do?
00:22:33.880 | And virtual Lex would say, be kind.
00:22:36.140 | I'm like, oh, yes.
00:22:38.260 | - Or something like that.
00:22:39.580 | And even though I myself might not be able
00:22:41.540 | to do it unprompted,
00:22:43.980 | and my favorite prompt is think step by step.
00:22:47.780 | And I'm like, you know, this also works on my 10-year-old.
00:22:51.380 | When he tries to solve a math equation all in one step,
00:22:54.700 | I know exactly what mistake he'll make.
00:22:57.100 | But if I prompt it with, oh, please think step by step,
00:23:00.580 | then it sort of gets you in a mindset.
00:23:02.460 | And I think it's also part of the way
00:23:04.060 | that ChatGPT was actually trained,
00:23:06.220 | this whole sort of human in the loop reinforcement learning,
00:23:09.440 | has probably reinforced these types of behaviors,
00:23:14.120 | whereby having this feedback loop,
00:23:17.820 | you kind of aligned AI better
00:23:21.140 | to the prompting opportunities by humans.
00:23:23.580 | - Yeah, prompting human-like reasoning steps,
00:23:25.980 | the step by step kind of thinking.
00:23:27.960 | Yeah, but it does seem to be,
00:23:30.020 | I suppose it just puts a mirror to our own capabilities,
00:23:33.580 | and so we can be truly impressed
00:23:35.420 | by our own cognitive capabilities,
00:23:38.300 | because the variety of what you can try,
00:23:40.860 | because we don't usually have this kind of,
00:23:44.020 | we can't play with our own mind rigorously
00:23:47.380 | through Python code, right?
00:23:50.100 | - Yeah.
00:23:50.920 | - So this allows us to really play
00:23:53.260 | with all of human wisdom and knowledge,
00:23:57.300 | or at least knowledge at our fingertips,
00:23:59.260 | and then mess with that little mind
00:24:00.860 | that can think and speak in all kinds of ways.
00:24:03.180 | - What's unique is that, as I mentioned earlier,
00:24:05.740 | every one of us was trained by a different subset
00:24:08.500 | of human culture,
00:24:11.140 | and Jai Chittipati was trained on all of it.
00:24:14.020 | And the difference there is that
00:24:16.460 | it probably has the ability to emulate
00:24:19.220 | almost every one of us.
00:24:21.220 | The fact that you can figure out
00:24:22.500 | where that is in cognitive behavioral space
00:24:25.580 | just by a few prompts is pretty impressive.
00:24:28.020 | But the fact that that exists somewhere
00:24:30.340 | is absolutely beautiful.
00:24:33.860 | And the fact that it's encoded in an orthogonal way
00:24:38.860 | from the knowledge, I think is also beautiful.
00:24:41.660 | The fact that somehow,
00:24:43.180 | through this extreme over-parameterization of AI models,
00:24:46.860 | it was able to somehow figure out
00:24:48.580 | that context, knowledge, and form are separable,
00:24:53.580 | and that you can sort of describe scientific knowledge
00:24:56.660 | in a haiku in the form of, I don't know,
00:24:58.900 | Shakespeare or something.
00:25:00.260 | That tells you something about the decoupling
00:25:03.980 | and the decouplability of these types of aspects
00:25:07.660 | of human psyche.
00:25:09.340 | - And that's part of the science of this whole thing.
00:25:11.740 | So these large language models are days old
00:25:15.220 | in terms of this kind of leap that they've taken.
00:25:18.380 | And it'll be interesting to do this kind of analysis on them
00:25:20.860 | of the separation of context, form, and knowledge.
00:25:24.540 | Where exactly does that happen?
00:25:26.500 | There's already sort of initial investigations,
00:25:28.580 | but it's very hard to figure out where.
00:25:31.420 | Is there a particular parameter,
00:25:33.980 | a set of parameters that are responsible
00:25:35.900 | for a particular piece of knowledge
00:25:37.660 | or a particular context or a particular style of speaking?
00:25:40.860 | - So with convolutional neural networks,
00:25:42.780 | interpretability had many good advances
00:25:47.020 | because we can kind of understand them.
00:25:48.940 | There's a structure to them.
00:25:50.700 | There's a locality to them.
00:25:52.580 | And we can kind of understand the different layers
00:25:54.420 | of different sort of ranges that they're looking at.
00:25:58.580 | So we can look at activation features
00:26:00.580 | and basically see where does that correspond to.
00:26:03.500 | With large language models,
00:26:05.060 | it's perhaps a little more complicated,
00:26:08.660 | but I think it's still achievable
00:26:10.220 | in the sense that we could kind of ask,
00:26:11.500 | well, what kind of prompts does this generate?
00:26:13.060 | If I sort of drop out this part of the network,
00:26:16.460 | then what happens?
00:26:17.620 | And sort of start getting at a language
00:26:21.300 | to even describe these types of aspects
00:26:22.940 | of human behavioral psychology, if you wish,
00:26:27.060 | from the spoken part, in the language part.
00:26:29.460 | And the advantage of that is that
00:26:32.060 | it might actually teach us something about humans as well.
00:26:35.220 | We might not have words
00:26:37.780 | to describe these types of aspects right now,
00:26:40.100 | but when somebody speaks in a particular way,
00:26:41.820 | it might remind us of a friend
00:26:43.460 | that we know from here and there and there.
00:26:45.380 | And if we had better language for describing that,
00:26:48.340 | these concepts might become more apparent
00:26:50.340 | in our own human psyche.
00:26:51.940 | And then we might be able to encode them
00:26:53.220 | better in machines themselves.
00:26:54.740 | - Well, probably you and I would have certain interest
00:27:00.220 | with the base model, what OpenAI calls the base model,
00:27:02.500 | which is before the alignment
00:27:05.380 | of the reinforcement learning with human feedback
00:27:10.700 | and before the AI safety-based
00:27:14.220 | kind of censorship of the model.
00:27:16.540 | It would be fascinating to explore,
00:27:18.380 | to investigate the ways
00:27:20.620 | that the model can generate hate speech,
00:27:23.260 | the kind of hate that humans are capable of.
00:27:26.340 | It would be fascinating.
00:27:27.300 | Or the kind of, of course, like sexual language
00:27:31.060 | or the kind of romantic language
00:27:34.100 | or all kinds of ideologies.
00:27:35.780 | Can I get it to be a communist?
00:27:37.100 | Can I get it to be a fascist?
00:27:38.700 | Can I get it to be a capitalist?
00:27:40.220 | Can I get it to be all these kinds of things
00:27:42.100 | and see which parts get activated and not?
00:27:46.060 | Because it would be fascinating to sort of explore
00:27:49.140 | at the individual mind level and at a societal level,
00:27:53.060 | where do these ideas take hold?
00:27:56.820 | What is the fundamental core of those ideas?
00:27:58.860 | Maybe the communism, fascism, capitalism, democracy
00:28:03.660 | are all actually connected by the fact
00:28:06.060 | that the human heart, the human mind
00:28:07.900 | is drawn to ideology, to a centralizing idea.
00:28:11.700 | And maybe we need a neural network to remind us of that.
00:28:14.940 | - I like the concept that the human mind
00:28:16.740 | is somehow tied to ideology.
00:28:19.140 | And I think that goes back to the promptability of ChatGPT.
00:28:23.140 | The fact that you can kind of say,
00:28:25.220 | well, think in this particular way now.
00:28:27.380 | And the fact that humans have invented words
00:28:29.980 | for encapsulating these types of behaviors.
00:28:32.740 | And it's hard to know how much of that is innate
00:28:36.180 | and how much of that was like passed on
00:28:37.860 | from language to language.
00:28:39.620 | But basically, if you look at the evolution of language,
00:28:41.500 | you can kind of see how young are these words
00:28:44.540 | in the history of language evolution
00:28:47.620 | that describe these types of behaviors,
00:28:49.740 | like kindness and anger and jealousy, et cetera.
00:28:54.540 | If these words are very similar from language to language,
00:28:57.300 | it might suggest that they're very ancient.
00:29:00.820 | If they're very different, it might suggest
00:29:03.980 | that this concept may have emerged independently
00:29:07.100 | in each different language and so on and so forth.
00:29:10.320 | So looking at the phylogeny, the history,
00:29:15.320 | the evolutionary traces of language at the same time
00:29:19.280 | as people moving around that we can now trace
00:29:22.880 | thanks to genetics is a fascinating way
00:29:26.960 | of understanding the human psyche
00:29:28.880 | and also understanding sort of how these types
00:29:31.560 | of behaviors emerge.
00:29:33.280 | And to go back to your idea about sort of exploring
00:29:38.200 | the system unfiltered, I mean, in a way,
00:29:42.520 | psychiatric hospitals are full of those people.
00:29:45.280 | So basically, people whose mind is uncontrollable
00:29:49.240 | who have kind of gone adrift in specific locations
00:29:52.720 | of their psyche.
00:29:54.100 | And I do find this fascinating.
00:29:57.840 | Basically, watching movies that are trying to capture
00:30:02.640 | the essence of troubled minds, I think,
00:30:05.840 | is teaching us so much about our everyday selves
00:30:10.720 | because many of us are able to sort of control our minds
00:30:13.480 | and are able to somehow hide these emotions.
00:30:17.700 | And but every time I see somebody who's troubled,
00:30:21.520 | I see versions of myself, maybe not as extreme,
00:30:25.920 | but I can sort of empathize with these behaviors.
00:30:28.600 | And I see bipolar, I see schizophrenia,
00:30:32.640 | I see depression, I see autism,
00:30:34.120 | I see so many different aspects that we kind of have names
00:30:37.440 | for and crystallize in specific individuals.
00:30:40.040 | And I think all of us have that.
00:30:43.160 | All of us have sort of just this multidimensional brain
00:30:47.800 | and genetic variations that push us in these directions,
00:30:51.520 | environmental exposures and traumas
00:30:54.520 | that push us in these directions,
00:30:56.460 | environmental behaviors that are reinforced
00:30:58.680 | by the kind of friends that we chose
00:31:00.880 | or friends that we were stuck with
00:31:03.880 | because of the environments that we grew up in.
00:31:06.080 | So in a way, a lot of these types of behaviors
00:31:11.080 | are within the vector span of every human.
00:31:16.280 | It's just that the magnitude of those vectors
00:31:19.440 | is generally smaller for most people
00:31:23.360 | because they haven't inherited
00:31:25.300 | that particular set of genetic variants
00:31:27.280 | or because they haven't been exposed
00:31:28.720 | to those environments, basically.
00:31:30.580 | - Or something about the mechanism
00:31:32.800 | of reinforcement learning with human feedback
00:31:34.680 | didn't quite work for them.
00:31:36.200 | So it's fascinating to think about that's what we do.
00:31:38.280 | We have this capacity to have all these psychiatric
00:31:42.720 | or behaviors associated with psychiatric disorders,
00:31:46.740 | but we, through the alignment process
00:31:48.840 | as we grow up with parents, we kind of,
00:31:50.960 | we know how to suppress them.
00:31:53.720 | We know how to control them.
00:31:54.800 | - Every human that grows up in this world
00:31:58.240 | spends several decades being shaped into place.
00:32:02.760 | And without that, maybe we would have
00:32:05.680 | the unfiltered ChatGPT-4.
00:32:07.260 | Every baby's basically a raging narcissist.
00:32:12.280 | Not all of them, not all of them, believe it or not.
00:32:15.760 | It's remarkable.
00:32:17.380 | I remember watching my kids grow up,
00:32:20.800 | and again, yes, part of their personality
00:32:23.080 | has stayed the same,
00:32:23.920 | but also in different phases through their life,
00:32:26.140 | they've gone through these dramatically
00:32:27.640 | different types of behaviors.
00:32:29.680 | And my daughter basically saying,
00:32:32.480 | basically one kid saying, "Oh, I want the bigger piece."
00:32:35.840 | The other one saying, "Oh, everything must be exactly equal."
00:32:38.080 | And the third one saying, "I'm okay.
00:32:40.180 | "I might have to have the smaller part.
00:32:42.320 | "Don't worry about me."
00:32:43.720 | - Even in the early days, in the early days of development.
00:32:46.200 | - It's just extraordinary to sort of see
00:32:48.480 | these dramatically different,
00:32:50.720 | I mean, my wife and I are very different from each other,
00:32:55.500 | but we also have six million variants,
00:32:58.520 | six million loci each, if you wish.
00:33:00.320 | If you just look at common variants,
00:33:01.560 | we also have a bunch of rare variants
00:33:03.280 | that are inherited in a more Mendelian fashion.
00:33:05.440 | And now you have an infinite number of possibilities
00:33:09.820 | for each of the kids.
00:33:10.660 | So basically it's two to the six million
00:33:13.200 | just from the common variants.
00:33:14.840 | And then if you layer in the rare variants.
00:33:17.760 | So let me talk a little bit about common variants
00:33:19.800 | and rare variants.
00:33:20.640 | So if you look at just common variants,
00:33:22.920 | they're generally weak effect
00:33:24.980 | because selection selects against strong effect variants.
00:33:28.000 | So if something has a big risk for schizophrenia,
00:33:31.820 | it won't rise to high frequency.
00:33:34.200 | So the ones that are common are by definition, by selection,
00:33:38.380 | only the ones that had relatively weak effect.
00:33:41.400 | And if all of the variants associated with personality,
00:33:43.920 | with cognition, and all aspects of human behavior
00:33:46.760 | were weak effect variants,
00:33:48.640 | then kids would basically be just averages of their parents.
00:33:51.840 | If it was like thousands of loci,
00:33:55.160 | just by law of large numbers,
00:33:56.800 | the average of two large numbers would be,
00:33:59.640 | very robustly close to that middle.
00:34:02.200 | But what we see is that kids
00:34:04.240 | are dramatically different from each other.
00:34:06.280 | So that basically means that in the context
00:34:07.800 | of that common variation,
00:34:09.880 | you basically have rare variants
00:34:11.560 | that are inherited in a more Mendelian fashion
00:34:14.120 | that basically then sort of govern
00:34:16.200 | likely many different aspects of human behavior,
00:34:18.880 | human biology, and human psychology.
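
A back-of-the-envelope simulation of that contrast, with entirely made-up effect sizes and counts: when a trait is the sum of thousands of weak common-variant effects, the law of large numbers keeps siblings tightly clustered, while a handful of strong, Mendelian-style rare variants spreads them far apart.

```python
import numpy as np

# Illustrative only: the numbers of variants and their effect sizes
# are invented, purely to show the averaging-vs-Mendelian contrast.
rng = np.random.default_rng(1)
common_effects = rng.normal(0, 0.01, 5000)   # many weak common variants
rare_effects = rng.normal(0, 1.0, 5)         # few strong rare variants

def sibling_trait(include_rare):
    # each variant is inherited from one parent or the other (coin flip)
    trait = np.sum(common_effects * rng.integers(0, 2, common_effects.size))
    if include_rare:
        trait += np.sum(rare_effects * rng.integers(0, 2, rare_effects.size))
    return trait

for include_rare in (False, True):
    sibs = [sibling_trait(include_rare) for _ in range(1000)]
    label = "common + rare variants" if include_rare else "common variants only"
    print(f"{label:24s}: spread across siblings (std) = {np.std(sibs):.2f}")
```
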
00:34:22.240 | And that's, again,
00:34:25.720 | like if you look at sort of a person with schizophrenia,
00:34:28.760 | their identical twin has only 50% chance
00:34:33.000 | of actually being diagnosed with schizophrenia.
00:34:34.880 | So that basically means
00:34:35.720 | there's probably developmental exposures,
00:34:38.800 | environmental exposures, trauma,
00:34:41.320 | all kinds of other aspects that can shape that.
00:34:43.480 | And if you look at siblings, for the common variants,
00:34:46.360 | it kind of drops off exponentially as you would expect
00:34:48.840 | with sharing 50% of your genome, 25% of your genome,
00:34:53.480 | 12.5% of your genome, et cetera,
00:34:55.480 | with more and more distant cousins.
00:34:57.280 | But the fact that siblings can differ so much
00:35:01.360 | in their personalities that we observe every day,
00:35:03.960 | it can't all be nurture.
00:35:05.600 | Basically, again, as parents,
00:35:08.280 | we spend enormous amount of energy
00:35:11.200 | trying to fix, quote unquote, the nurture part,
00:35:13.080 | trying to get them to share, get them to be kind,
00:35:16.120 | get them to be open, get them to trust each other,
00:35:19.040 | overcome the prisoner's dilemma
00:35:23.720 | of if everyone fends for themselves,
00:35:25.960 | we're all gonna live in a horrible place,
00:35:27.560 | but if we're a little more altruistic,
00:35:29.520 | then we're all gonna be in a better place.
00:35:31.480 | And I think it's not like we treat our kids differently,
00:35:34.520 | but they're just born differently.
00:35:37.160 | So in a way, as a geneticist, I have to admit
00:35:41.000 | that there's only so much I can do with nurture,
00:35:43.240 | that nature definitely plays a big component.
00:35:45.400 | - The selection of variants we have,
00:35:47.640 | the common variants and the rare variants,
00:35:52.320 | what can we say about the landscape
00:35:55.040 | of possibility they create?
00:35:57.480 | If you could just linger on that.
00:35:59.600 | So the selection of rare variants is defined how?
00:36:04.600 | How do we get the ones that we get?
00:36:07.640 | Is it just laden in that giant evolutionary baggage?
00:36:12.640 | - So I'm gonna talk about regression,
00:36:16.320 | why do we call it regression?
00:36:18.040 | And the concept of regression to the mean,
00:36:21.680 | the fact that when fighter pilots in a dogfight
00:36:26.000 | did amazingly well, they would give them rewards.
00:36:29.400 | And then the next time they're in dogfight,
00:36:31.080 | they would do worse.
00:36:32.680 | So then the Navy basically realized that, wow,
00:36:37.520 | or at least interpreted that as, wow,
00:36:40.120 | we're ruining them by praising them,
00:36:41.960 | and then they're gonna perform worse.
00:36:43.920 | The statistical interpretation of that
00:36:45.480 | is regression to the mean.
00:36:46.840 | The fact that you're an extraordinary pilot,
00:36:49.480 | you've been trained in an extraordinary fashion,
00:36:51.960 | that pushes your mean further and further
00:36:56.880 | to extraordinary achievement.
00:36:59.400 | And then in some dogfights,
00:37:01.680 | you'll just do extraordinarily well.
00:37:04.360 | The probability that the next one will be just as good
00:37:06.760 | is almost nil, because this is the peak of your performance.
00:37:10.520 | And just by statistical odds,
00:37:14.000 | the next one will be another sample
00:37:15.800 | from the same underlying distribution,
00:37:17.960 | which is gonna be a little closer to the mean.
00:37:20.760 | So regression analysis takes its name
00:37:23.760 | from this type of realization in the statistical world.
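
The fighter-pilot version of regression to the mean is easy to reproduce with a quick simulation (arbitrary units, made-up numbers): sample a long run of performances around a fixed skill level, pick out the exceptional ones, and look at the flight that immediately follows each of them.

```python
import numpy as np

# Regression to the mean with made-up numbers: performance is
# true skill plus noise, and the flight after an exceptional one
# is, on average, just another draw closer to the pilot's mean.
rng = np.random.default_rng(2)
skill = 100.0                                   # the pilot's true mean
perf = skill + rng.normal(0, 10, 100_000)       # a long series of flights

exceptional = perf[:-1] > 120                   # > 2 sigma above the mean
print("average of the exceptional flights:     ", perf[:-1][exceptional].mean())
print("average of the flights right after them:", perf[1:][exceptional].mean())
```
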
00:37:27.720 | Now, if you now take humans,
00:37:32.320 | you basically have people
00:37:34.480 | who have achieved extraordinary achievements.
00:37:36.720 | Einstein, for example.
00:37:38.800 | You would call him, for example,
00:37:41.400 | the epitome of human intellect.
00:37:43.400 | Does that mean that all of his children and grandchildren
00:37:46.000 | will be extraordinary geniuses?
00:37:48.360 | It probably means that they're sampled
00:37:50.080 | from the same underlying distribution,
00:37:52.280 | but he was probably a rare combination of extremes
00:37:56.440 | in addition to these common variants.
00:37:59.040 | So you can basically interpret your kids' variation,
00:38:02.880 | for example, as, well, of course,
00:38:05.360 | they're gonna be some kind of sampled
00:38:06.960 | from the average of the parents,
00:38:08.840 | with some kind of deviation
00:38:10.720 | according to the specific combination of rare variants
00:38:12.960 | that they have inherited.
00:38:15.160 | So given all that, the possibilities are endless
00:38:20.160 | as to sort of where you should be,
00:38:22.400 | but you should always interpret that with,
00:38:24.280 | well, it's probably an alignment of nature and nurture.
00:38:29.280 | And the nature has both the common variants
00:38:31.280 | that are acting kind of like the law of large numbers
00:38:34.000 | and the rare variants
00:38:34.960 | that are acting more in a Mendelian fashion.
00:38:37.040 | And then you layer in the nurture,
00:38:38.880 | which again, in everyday action we make,
00:38:41.640 | we shape our future environment,
00:38:44.160 | but the genetics we inherit
00:38:46.000 | are shaping the future environment of not only us,
00:38:49.800 | but also our children.
00:38:51.480 | So there's this weird nature-nurture interplay
00:38:54.480 | in self-reinforcement
00:38:56.520 | where you're kind of shaping your own environment,
00:38:59.320 | but you're also shaping the environment of your kids.
00:39:01.600 | And your kids are gonna be born
00:39:03.560 | in the context of your environment that you've shaped,
00:39:06.840 | but also with a bag of genetic variants
00:39:09.040 | that they have inherited.
00:39:11.200 | And there's just so much complexity associated with that.
00:39:14.640 | When we start blaming something on nature,
00:39:17.520 | it might just be nurture.
00:39:19.480 | It might just be that, well,
00:39:21.440 | yes, they inherited the genes from the parents,
00:39:23.080 | but they also were shaped by the same environment.
00:39:25.800 | So it's very, very hard to untangle the two.
00:39:28.000 | And you should always realize
00:39:29.480 | that nature can influence nurture,
00:39:31.680 | nurture can influence nature,
00:39:33.680 | or at least be correlated with and predictive of,
00:39:36.080 | and so on and so forth.
00:39:37.240 | - So I love thinking about that distribution
00:39:39.080 | that you mentioned,
00:39:39.920 | and here's where I can be my usual ridiculous self.
00:39:43.680 | And I sometimes think about that army of sperm cells,
00:39:48.680 | however many hundreds of thousands there are.
00:39:53.520 | And I kind of think of all the possibilities there,
00:39:56.720 | 'cause there's a lot of variation,
00:39:59.040 | and one gets to win.
00:40:00.440 | Is that- - It's not a random one.
00:40:03.320 | - Is it a totally ridiculous way to think about-
00:40:05.640 | - No, not at all.
00:40:07.240 | So I would say evolutionarily,
00:40:09.320 | we are a very slow evolving species.
00:40:11.720 | Basically, the generations of humans
00:40:14.280 | are a terrible way to do selection.
00:40:16.560 | What you need is processes that allow you to do selection
00:40:20.520 | in a smaller, tighter loop.
00:40:22.440 | - Yeah.
00:40:23.280 | - And part of what,
00:40:24.960 | if you look at our immune system, for example,
00:40:28.560 | it evolves at a much faster pace than humans evolve,
00:40:32.440 | because there is actually an evolutionary process
00:40:34.920 | that happens within our immune cells
00:40:38.560 | as they're dividing.
00:40:39.840 | There's basically VDJ recombination
00:40:41.860 | that basically creates this extraordinary wealth
00:40:45.080 | of antibodies and antigens against the environment.
00:40:49.680 | And basically, all these antibodies are now recognizing
00:40:52.400 | all these antigens from the environment,
00:40:53.920 | and they send signals back that cause these cells
00:40:58.920 | that recognize the non-self to multiply.
00:41:02.340 | So that basically means that even though viruses evolve
00:41:05.440 | at millions of times faster than we are,
00:41:08.300 | we can still have a component of ourselves
00:41:11.480 | which is environmentally facing,
00:41:13.280 | which is sort of evolving at not the same scale,
00:41:16.040 | but very rapid pace.
00:41:17.660 | Sperm expresses perhaps the most proteins
00:41:23.840 | of any cell in the body.
00:41:25.360 | And part of the thought is that this might just be a way
00:41:31.600 | to check that the sperm is intact.
00:41:34.520 | In other words, if you waited until that human has a liver
00:41:39.320 | and starts eating solid food and sort of filtrates away,
00:41:44.320 | or kidneys, or stomach, et cetera,
00:41:48.600 | basically, if you waited until these mutations manifest
00:41:52.680 | late, late in life, then you would end up not failing fast,
00:41:56.640 | and you would end up with a lot of failed pregnancies
00:41:58.840 | and a lot of later onset psychiatric illnesses, et cetera.
00:42:03.360 | If instead, you basically express all of these genes
00:42:06.240 | at the sperm level, and if they misform,
00:42:08.200 | they basically cause the sperm to cripple,
00:42:10.480 | then you have at least on the male side
00:42:12.480 | the ability to exclude some of those mutations.
00:42:15.480 | And on the female side, as the egg develops,
00:42:17.720 | there's probably a similar process
00:42:21.600 | where you could sort of weed out eggs
00:42:24.580 | that are just not carrying beneficial mutations,
00:42:28.520 | or at least that are carrying
00:42:29.520 | highly detrimental mutations.
00:42:31.200 | So you can basically think of the evolutionary process
00:42:34.600 | in a nested loop, basically, where there's an inner loop
00:42:39.600 | where you get many, many more iterations to run,
00:42:42.220 | and then there's an outer loop
00:42:43.840 | that moves at a much slower pace.
00:42:46.080 | And going back to the next step of evolution
00:42:50.520 | of possibly designing systems that we can use
00:42:54.380 | to sort of complement our own biology,
00:42:56.080 | or to sort of eradicate disease, and you name it,
00:42:59.320 | or at least mitigate some of the, I don't know,
00:43:01.920 | psychiatric illnesses, neurodegenerative disorders, et cetera,
00:43:05.600 | you can basically, and also, you know,
00:43:07.360 | metabolic, immune, cancer, you name it,
00:43:09.560 | simply engineering these mutations from rational design
00:43:15.240 | might be very inefficient.
00:43:18.060 | If instead, you have an evolutionary loop
00:43:20.480 | where you're kind of growing neurons on a dish,
00:43:22.520 | and you're exploring evolutionary space,
00:43:24.080 | and you're sort of shaping that one protein
00:43:26.500 | to be better adapt that sort of, I don't know,
00:43:28.540 | recognizing light, or communicating
00:43:30.440 | with other neurons, et cetera,
00:43:31.720 | you can basically have a smaller evolutionary loop
00:43:33.640 | that you can run thousands of times faster
00:43:36.480 | than the speed it would take to evolve humans
00:43:38.360 | for another million years.
00:43:39.760 | So I think it's important to think about
00:43:42.840 | sort of this evolvability as a set of nested structures
00:43:47.580 | that allow you to sort of test many more combinations,
00:43:49.760 | but in a more fixed setting.
00:43:51.780 | - Yeah, that's fascinating that the mechanism there is,
00:43:55.280 | for sperm to express proteins,
00:43:57.540 | to create a testing ground early on,
00:44:00.560 | so that the failed designs don't make it.
00:44:04.040 | - Yeah, I mean, in design of engineering systems,
00:44:06.540 | fail fast is one of the principles you learn.
00:44:09.500 | Like, basically, you assert something.
00:44:11.980 | Why do you assert that?
00:44:13.260 | Because if that something ain't right,
00:44:15.040 | you better crash now than sort of let it crash
00:44:17.660 | at an unexpected time.
00:44:19.500 | And in a way, you can think of it
00:44:21.180 | as like 20,000 assert functions.
00:44:22.860 | Assert protein can fold.
00:44:24.020 | Assert protein can fold.
00:44:25.420 | And if any of them fail, that sperm is gone.
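
In code, that fail-fast picture looks roughly like the sketch below; the protein names and the quality-control function are hypothetical stand-ins for the thousands of checks being described.

```python
# Hypothetical stand-in for the "20,000 assert functions": every
# precondition is checked up front, so a broken candidate is rejected
# immediately instead of failing in some hard-to-trace way much later.
def passes_quality_control(proteins):
    for name, folds_correctly in proteins.items():
        assert folds_correctly, f"{name} misfolded -- reject early"
    return True

candidate = {"protein_A": True, "protein_B": True, "protein_C": False}
try:
    passes_quality_control(candidate)
except AssertionError as reason:
    print("candidate discarded:", reason)
```
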
00:44:28.020 | - Well, I just like the fact that I'm the winning sperm.
00:44:30.700 | I'm the result of the winner.
00:44:32.900 | Hashtag winning.
00:44:34.200 | - My wife always plays me this French song
00:44:36.700 | that actually sings about that.
00:44:38.580 | It's like, you know, remember in life,
00:44:40.420 | we were all the first one time.
00:44:42.920 | - At least once we won.
00:44:45.940 | - At least one time you were the first.
00:44:47.920 | - I should mention, just as a brief tangent
00:44:49.500 | back to the place where we came from,
00:44:51.360 | which is the base model that I mentioned for OpenAI,
00:44:54.120 | which is before the reinforcement learning
00:44:56.400 | with human feedback.
00:44:58.040 | And you kind of give this metaphor
00:44:59.360 | of it being kind of like a psychiatric hospital.
00:45:02.520 | - I like that because it's basically
00:45:04.160 | all of these different angles at once.
00:45:05.920 | Like, you basically have the more extreme versions
00:45:08.520 | of human psyche.
00:45:09.600 | - So the interesting thing is,
00:45:11.940 | I've talked with folks in OpenAI quite a lot,
00:45:16.280 | and they say it's extremely difficult
00:45:17.600 | to work with that model.
00:45:19.020 | - Yeah, kind of like it's extremely difficult
00:45:20.600 | to work with some humans.
00:45:21.800 | - The parallels there are very interesting
00:45:23.880 | because once you run the alignment process,
00:45:26.160 | it's much easier to interact with it.
00:45:28.000 | But it makes you wonder what the capacity,
00:45:30.840 | what the underlying capability of the human psyche is
00:45:34.040 | as in the same way that what is the underlying capability
00:45:37.440 | of a large language model.
00:45:38.840 | - And remember earlier when I was basically saying
00:45:40.960 | that part of the reason why it's so prompt-malleable
00:45:45.720 | is because of that alignment process.
00:45:47.560 | It's kind of nice
00:45:49.640 | that the engineers at OpenAI have the same interpretation,
00:45:53.400 | that in fact, it is that.
00:45:56.880 | And this whole concept of easier to work with,
00:46:00.340 | I wish that we could work with more diverse humans.
00:46:08.800 | In a way, and sort of that's one of the possibilities
00:46:12.820 | that I see with the advent of these large language models.
00:46:17.800 | The fact that it gives us the chance
00:46:20.920 | to both dial down friends of ours that we can't interpret
00:46:25.720 | or that are just too edgy to sort of really,
00:46:29.240 | truly interact with,
00:46:30.400 | where you could have a real-time translator.
00:46:33.600 | Just the same way that you can translate English
00:46:35.920 | to Japanese or Chinese or Korean by real-time adaptation.
00:46:40.920 | You could basically suddenly have a conversation
00:46:43.920 | with your favorite extremist on either side of the spectrum
00:46:48.440 | and just dial them down a little bit.
00:46:50.240 | Of course, not you and I,
00:46:52.400 | but you could have a friend who's a complete asshole,
00:46:57.400 | but it's just a different base level.
00:47:01.160 | So you can actually tune it down to like,
00:47:02.800 | okay, they're not actually being an asshole.
00:47:05.640 | They're actually expressing love right now.
00:47:07.280 | It's just that this is a-
00:47:08.360 | - They have their way of doing that.
00:47:09.880 | - And they probably live in New York,
00:47:12.000 | just to pick a random location.
00:47:14.120 | - So yeah, so you can basically layer out contexts.
00:47:17.800 | You can basically say,
00:47:18.640 | ooh, let me change New York to Texas
00:47:20.080 | and let me change extreme left to extreme right
00:47:23.480 | or somewhere in the middle or something.
00:47:25.960 | And I also like the concept of being able to
00:47:30.960 | listen to the information
00:47:36.000 | without being dissuaded by the emotions.
00:47:39.680 | In other words, everything humans say has an intonation,
00:47:44.000 | has some kind of background that they're coming from.
00:47:47.360 | It reflects the way that they're thinking of you,
00:47:50.080 | reflects the impression that they have of you.
00:47:52.560 | And all of these things are intertwined,
00:47:55.700 | but being able to disconnect them,
00:47:58.960 | being able to sort of,
00:48:00.120 | I mean, self-improvement is one of the things
00:48:04.080 | that I'm constantly working on.
00:48:06.880 | And being able to receive criticism
00:48:09.880 | from people who really hate you is difficult
00:48:14.440 | because it's layered in with that hatred.
00:48:16.760 | But deep down, there's something that they say
00:48:18.480 | that actually makes sense.
00:48:20.640 | Or people who love you might layer it in a way
00:48:23.200 | that doesn't come through.
00:48:24.560 | But if you're able to sort of disconnect
00:48:26.680 | that emotional component from the sort of self-improvement
00:48:29.920 | and basically when somebody says,
00:48:33.480 | whoa, that was a bunch of bullshit,
00:48:35.320 | did you ever do the control, this and this and that,
00:48:38.320 | you could just say,
00:48:39.840 | oh, thanks for the very interesting presentation.
00:48:42.600 | I'm wondering, what about that control?
00:48:44.440 | Then suddenly you're like, oh yeah, of course,
00:48:46.040 | I'm gonna run that control, that's a great idea.
00:48:48.040 | Instead of that was a bunch of BS,
00:48:50.040 | you're like, ah, you're sort of hitting on the brakes
00:48:52.480 | and you're trying to push back against that.
00:48:54.400 | So any kind of criticism that comes after that
00:48:58.640 | is very difficult to interpret in a positive way
00:49:01.360 | because it helps reinforce
00:49:02.680 | the negative assessment of your work.
00:49:05.040 | When in fact, if we disconnected the technical component
00:49:09.320 | from the negative assessment,
00:49:10.880 | then you're embracing the negative,
00:49:13.600 | then you're embracing the technical component,
00:49:15.680 | you're gonna fix it.
00:49:17.000 | Whereas if it's coupled with "it's a bunch of BS,"
00:49:20.120 | then even if that thing is real
00:49:22.080 | and they're right about your mistake,
00:49:25.080 | suddenly you're like,
00:49:25.920 | you're gonna try to prove that that mistake does not exist.
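A hedged sketch of the "real-time tone translator" being imagined here. `call_llm` is a hypothetical stand-in for whichever language-model API is available, and the prompt wording is only illustrative; the idea is to keep every technical point of a message while dialing its emotional charge down (or up).

```python
def call_llm(prompt: str) -> str:
    """Hypothetical stand-in for a language-model API call."""
    raise NotImplementedError("plug in your own model client here")

def dial_down(message: str, intensity: float = 0.2) -> str:
    # Ask the model to preserve every factual and technical point but
    # re-express the message at a chosen emotional intensity
    # (0 = flat, 1 = original tone).
    prompt = (
        "Rewrite the message below. Keep every technical point, question, "
        f"and criticism intact, but express it at emotional intensity {intensity} "
        "on a 0-to-1 scale, without insults or loaded language.\n\n"
        f"Message:\n{message}"
    )
    return call_llm(prompt)

# Example of the intended effect: "That was a bunch of BS, did you even run
# the control?" might come back as "Thanks for the presentation. Could you
# say whether the control experiment was run?"
```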
00:49:29.360 | - Yeah, it's fascinating to like carry the information.
00:49:32.160 | I mean, this is what you're essentially able to do here
00:49:34.440 | is you carry the information
00:49:36.040 | in the rich complexity that information contains.
00:49:38.880 | So it's not actually dumbing it down in some way.
00:49:40.720 | - Exactly.
00:49:41.560 | - It's still expressing it, but taking off.
00:49:43.480 | - But you can dial down the emotional.
00:49:46.120 | - The emotional side.
00:49:46.960 | - Yeah.
00:49:47.780 | - Which is probably so powerful for the internet
00:49:50.160 | or for social networks.
00:49:51.620 | - Again, when it comes to understanding each other,
00:49:54.360 | like for example, I don't know what it's like
00:49:56.800 | to go through life with a different skin color.
00:49:59.480 | I don't know how people will perceive me.
00:50:02.640 | I don't know how people will respond to me.
00:50:05.040 | We don't often have that experience,
00:50:06.840 | but in a virtual reality environment
00:50:10.360 | or in a sort of AI interactive system,
00:50:13.880 | you could basically say, okay, now make me Chinese
00:50:16.760 | or make me South African or make me, you know, Nigerian.
00:50:20.780 | You can change the accent.
00:50:22.740 | You can change layers of that contextual information
00:50:27.680 | and then see how the information is interpreted.
00:50:30.200 | And you can rehear yourself through a different angle.
00:50:34.280 | You can hear others.
00:50:35.640 | You can have others react to you from a different package.
00:50:40.640 | And then hopefully we can sort of build empathy
00:50:43.800 | by learning to disconnect all of these social cues
00:50:47.340 | that we get from like how a person is dressed.
00:50:51.120 | You know, if they're wearing a hoodie
00:50:52.760 | or if they're wearing a shirt
00:50:54.080 | or if they're wearing a jacket,
00:50:56.660 | you get very different emotional responses
00:50:59.200 | that, you know, I wish we could overcome as humans
00:51:03.520 | and perhaps large language models
00:51:05.800 | and augmented reality and deep fakes
00:51:09.040 | can kind of help us overcome all that.
00:51:11.520 | - In what way do you think these large language models
00:51:16.000 | and the thing they give birth to in the AI space
00:51:19.680 | will change this human experience, the human condition?
00:51:24.220 | The things we've talked across many podcasts about
00:51:27.680 | that makes life so damn interesting and rich.
00:51:32.680 | Love, fear, fear of death, all of it.
00:51:37.400 | If we could just begin kind of thinking about
00:51:40.680 | how does it change for the good and the bad,
00:51:43.760 | the human condition.
00:51:46.360 | - Human society is extremely complicated.
00:51:49.520 | We have come from a hunter-gatherer society
00:51:56.840 | to an agricultural and farming society
00:51:59.800 | where the goal of most professions
00:52:03.280 | was to eat and to survive.
00:52:05.480 | And with the advent of agriculture,
00:52:08.960 | the ability to live together in societies,
00:52:11.940 | humans could suddenly be valued for different skills.
00:52:16.940 | If you don't know how to hunt, but you're an amazing potter,
00:52:24.480 | then you fit in society very well
00:52:26.080 | because you can sort of make your pottery
00:52:28.840 | and you can barter it for rabbits that somebody else caught.
00:52:33.480 | And the person who hunts the rabbits
00:52:36.160 | doesn't need to make pots
00:52:37.160 | because you're making all the pots.
00:52:38.920 | And that specialization of humans
00:52:40.940 | is what shaped modern society.
00:52:43.600 | And with the advent of currencies and governments
00:52:47.760 | and credit cards and Bitcoin,
00:52:52.200 | you basically now have the ability to exchange value
00:52:55.920 | for the kind of productivity that you have.
00:52:58.480 | So basically I make things that are desirable to others,
00:53:00.640 | I can sell them and buy back food, shelter, et cetera.
00:53:03.920 | With AI, the concept of I am my profession
00:53:10.480 | might need to be revised
00:53:14.460 | because I defined my profession in the first place
00:53:17.360 | as something that humanity needed
00:53:19.720 | that I was uniquely capable of delivering.
00:53:22.280 | But the moment we have AI systems able to deliver
00:53:25.880 | these goods, for example, writing a piece of software
00:53:30.000 | or making a self-driving car
00:53:31.640 | or interpreting the human genome,
00:53:33.860 | then that frees up more of human time for other pursuits.
00:53:39.920 | These could be pursuits that are still valuable to society.
00:53:46.620 | I could basically be 10 times more productive
00:53:48.760 | at interpreting genomes and do a lot more.
00:53:53.720 | Or I could basically say, oh, great,
00:53:56.560 | the interpreting genome's part of my job.
00:53:58.480 | Now it only takes me 5% of the time
00:54:00.000 | instead of 60% of the time.
00:54:01.780 | So now I can do more creative things.
00:54:04.400 | I can explore not new career options,
00:54:06.960 | but maybe new directions for my research lab.
00:54:09.240 | I can sort of be more productive,
00:54:11.340 | contribute more to society.
00:54:13.360 | And if you look at this giant pyramid
00:54:17.860 | that we have built on top of the subsistence economy,
00:54:23.860 | what fraction of US jobs are going to feeding
00:54:28.060 | all of the US?
00:54:29.260 | Less than 2%.
00:54:30.980 | Basically the gain in productivity is such
00:54:34.820 | that 98% of the economy is beyond just feeding ourselves.
00:54:39.820 | And that basically means that we kind of have built
00:54:45.820 | this system of interdependencies of needed
00:54:49.600 | or useful or valued goods
00:54:51.740 | that sort of make the economy run.
00:54:53.460 | So that the vast majority of wealth goes to other things,
00:54:57.020 | what we now call needs, but used to be wants.
00:55:00.140 | So basically I want to fly a drone,
00:55:01.520 | I want to buy a bicycle, I want to buy a nice car,
00:55:03.660 | I want to have a nice home, I want to et cetera,
00:55:05.500 | et cetera, et cetera.
00:55:06.540 | So, and then sort of what is my direct contribution
00:55:11.540 | to my eating?
00:55:12.860 | I mean, I'm doing research on the human genome.
00:55:15.020 | I mean, this will help humans, it will help all humanity.
00:55:17.820 | But how is that helping the person who's giving me poultry
00:55:20.060 | or vegetables?
00:55:22.180 | So in a way I see AI as perhaps leading
00:55:27.140 | to a dramatic rethinking of human society.
00:55:30.160 | If you think about sort of the economy being based
00:55:33.940 | on intellectual goods that I'm producing,
00:55:36.940 | what if AI can produce a lot of these intellectual goods
00:55:39.100 | and satisfies that need?
00:55:40.820 | Does that now free humans for more artistic expression,
00:55:44.820 | for more emotional maturing,
00:55:47.220 | for basically having a better work-life balance?
00:55:51.340 | Being able to show up for your two hours of work a day
00:55:55.180 | or two hours of work like three times a week
00:55:57.940 | with like immense rest and preparation and exercise.
00:56:01.940 | And you're sort of clearing your mind
00:56:03.220 | and suddenly you have these two amazingly creative hours.
00:56:06.860 | You basically show up at the office as your AI is busy,
00:56:09.340 | answering your phone call, making all your meetings,
00:56:12.060 | revising all your papers, et cetera.
00:56:13.860 | And then you show up for those creative hours
00:56:15.180 | and you're like, all right, autopilot, I'm on.
00:56:18.180 | And then you can basically do so, so much more
00:56:21.580 | that you would perhaps otherwise never get to
00:56:24.380 | because you're so overwhelmed with these mundane aspects
00:56:27.460 | of your job.
00:56:28.780 | So I feel that AI can truly transform the human condition
00:56:31.900 | from realizing that we don't have jobs anymore.
00:56:36.900 | We now have vocations.
00:56:38.700 | And there's this beautiful analogy
00:56:42.640 | of three people laying bricks.
00:56:45.240 | And somebody comes over and asks the first one,
00:56:46.960 | what are you doing?
00:56:47.800 | He's like, oh, I'm laying bricks.
00:56:48.800 | Second one, what are you doing?
00:56:49.640 | I'm building a wall.
00:56:51.440 | And the third one, what are you doing?
00:56:52.640 | I'm building this beautiful cathedral.
00:56:54.540 | So in a way, the first one has a job,
00:56:58.120 | the last one has a vocation.
00:56:59.520 | And if you ask me, what are you doing?
00:57:02.480 | Oh, I'm editing a paper, then I have a job.
00:57:05.400 | What are you doing?
00:57:06.240 | I'm understanding human disease circuitry.
00:57:08.600 | I have a vocation.
00:57:09.880 | So in a way, being able to allow us
00:57:12.480 | to enjoy more of our vocation
00:57:14.700 | by taking away, offloading some of the job part
00:57:19.700 | of our daily activities.
00:57:23.120 | - So we all become the builders of cathedrals.
00:57:25.960 | - Correct.
00:57:26.800 | - Yeah, and we follow intellectual pursuits,
00:57:31.480 | artistic pursuits.
00:57:33.300 | I wonder how that really changes
00:57:35.840 | at a scale of several billion people,
00:57:38.920 | everybody playing in the space of ideas,
00:57:41.480 | in the space of creations.
00:57:43.720 | - So ideas, maybe for some of us,
00:57:47.240 | maybe you and I are in the job of ideas,
00:57:48.800 | but other people are in the job of experiences.
00:57:51.720 | Other people are in the job of emotions,
00:57:55.120 | of dancing, of creative, artistic expression,
00:57:59.840 | of skydiving, and you name it.
00:58:02.640 | So basically, these,
00:58:06.760 | again, the beauty of human diversity is exactly that,
00:58:10.080 | that what rocks my boat might be very different
00:58:13.280 | from what rocks other people's boat.
00:58:15.280 | And what I'm trying to say is that
00:58:18.040 | maybe AI will allow humans to truly,
00:58:20.920 | like not just look for, but find meaning.
00:58:25.180 | And sort of, you don't need to work,
00:58:27.960 | but you need to keep your brain at ease.
00:58:31.320 | And the way that your brain will be at ease
00:58:33.040 | is by dancing and creating these amazing movements,
00:58:37.040 | or creating these amazing paintings,
00:58:38.600 | or creating, I don't know,
00:58:40.400 | something that sort of changes,
00:58:42.840 | that touches at least one person out there
00:58:45.040 | that sort of shapes humanity through that process.
00:58:48.200 | And instead of working your mundane programming job,
00:58:51.480 | where you like hate your boss, and you hate your job,
00:58:53.520 | and you say you hate that darn program, et cetera,
00:58:55.840 | you're like, well, I don't need that.
00:58:58.140 | I can offload that, and I can now explore something
00:59:01.120 | that will actually be more beneficial to humanity,
00:59:04.320 | because the mundane parts can be offloaded.
00:59:07.200 | - I wonder if it localizes our,
00:59:09.240 | all the things you've mentioned, all the vocations.
00:59:15.140 | So you mentioned that you and I might be playing
00:59:17.680 | in the space of ideas,
00:59:18.720 | but there's two ways to play in the space of ideas,
00:59:21.080 | both of which we're currently engaging in.
00:59:23.580 | So one is the communication of that to other people.
00:59:26.620 | It could be a classroom full of students,
00:59:28.400 | but it could be a podcast.
00:59:30.360 | It could be something that's shown on YouTube and so on.
00:59:35.240 | Or it could be just the act of sitting alone
00:59:38.440 | and playing with ideas in your head,
00:59:40.160 | or maybe with a loved one,
00:59:41.560 | having a conversation that nobody gets to see.
00:59:44.120 | The experience of just sort of looking up at the sky
00:59:47.800 | and wondering different things,
00:59:50.080 | maybe quoting some philosophers from the past,
00:59:52.180 | and playing with those little ideas.
00:59:54.640 | And that little exchange is forgotten forever,
00:59:56.640 | but you got to experience it.
00:59:57.920 | And maybe, I wonder if it localizes that exchange of ideas
01:00:02.920 | for that with AI, it'll become less and less valuable
01:00:07.080 | to communicate with a large group of people,
01:00:09.520 | that you will live life intimately and richly
01:00:13.760 | just with that circle of meat bags that you seem to love.
01:00:18.760 | - So the first is, even if you're alone in a forest,
01:00:24.040 | having this amazing thought, when you exit that forest,
01:00:27.280 | the baggage that you carry has been shifted,
01:00:30.040 | has been altered by that thought.
01:00:32.260 | When I bike to work in the morning, I listen to books.
01:00:37.780 | And I'm alone, no one else is there.
01:00:41.360 | I'm having that experience by myself.
01:00:43.380 | And yet, in the evening when I speak with someone,
01:00:46.260 | an idea that was formed there could come back.
01:00:50.120 | Sometimes when I fall asleep,
01:00:51.300 | I fall asleep listening to a book.
01:00:53.280 | And in the morning, I'll be full of ideas
01:00:55.460 | that I never even process consciously.
01:00:57.840 | I'll process them unconsciously.
01:00:59.840 | And they will shape that baggage that I carry,
01:01:03.460 | that will then shape my interactions,
01:01:05.360 | and again, affect ultimately all of humanity
01:01:07.880 | in some butterfly effect minute kind of way.
01:01:10.920 | So that's one aspect.
01:01:14.320 | The second aspect is gatherings.
01:01:17.560 | So basically, you and I are having a conversation
01:01:20.520 | which feels very private, but we're sharing with the world.
01:01:25.140 | And then later tonight, you're coming over,
01:01:27.320 | and we're having a conversation that will be very public
01:01:30.040 | with dozens of other people,
01:01:31.720 | but we will not share with the world.
01:01:33.240 | - Yeah. (Lex laughing)
01:01:34.400 | - So in a way, which one's more private?
01:01:36.760 | The one here or the one there?
01:01:38.680 | Here, there's just two of us,
01:01:40.400 | but a lot of others listening.
01:01:41.680 | There, a lot of people speaking and thinking together
01:01:44.960 | and bouncing off each other.
01:01:46.840 | And maybe that will then impact your millions of audience
01:01:53.400 | through your next conversation.
01:01:56.280 | And I think that's part of the beauty of humanity,
01:01:59.200 | the fact that no matter how small, how alone,
01:02:01.600 | how broadcast immediately or later on something is,
01:02:06.160 | it still percolates through the human psyche.
01:02:10.180 | - Human gatherings.
01:02:12.560 | All throughout human history, there's been gatherings.
01:02:16.960 | I wonder how those gatherings have impacted
01:02:20.540 | the direction of human civilization.
01:02:22.800 | Just thinking of in the early days of the Nazi party,
01:02:27.800 | it was a small collection of people gathering.
01:02:31.000 | And the kernel of an idea, in that case, an evil idea,
01:02:35.040 | gave birth to something that actually had
01:02:38.560 | a transformative impact on all human civilization.
01:02:41.640 | And then there's similar kind of gatherings
01:02:43.520 | that lead to positive transformations.
01:02:45.520 | This is probably a good moment to ask you
01:02:49.560 | on a bit of a tangent, but you mentioned it.
01:02:52.240 | You put together salons with gatherings,
01:02:55.920 | small human gatherings, with folks from MIT, Harvard,
01:02:59.720 | here in Boston, friends, colleagues.
01:03:02.520 | What's your vision behind that?
01:03:04.120 | - So it's not just MIT people,
01:03:08.640 | and it's not just Harvard people.
01:03:09.680 | We have artists, we have musicians, we have painters,
01:03:12.160 | we have dancers, we have cinematographers.
01:03:14.840 | We have so many different diverse folks.
01:03:18.340 | And the goal is exactly that, celebrate humanity.
01:03:23.340 | What is humanity?
01:03:25.980 | Humanity is the all of us.
01:03:28.220 | It's not the any one subset of us.
01:03:31.740 | And we live in such an amazing, extraordinary moment in time
01:03:36.020 | where you can sort of bring people
01:03:37.340 | from such diverse professions,
01:03:39.260 | all living under the same city.
01:03:41.460 | You know, we live in an extraordinary city
01:03:43.620 | where you can have extraordinary people
01:03:45.580 | who have gathered here from all over the world.
01:03:47.900 | So my father grew up in a village,
01:03:51.340 | in an island in Greece, that didn't even have a high school.
01:03:55.460 | To go get a high school education,
01:03:57.020 | he had to move away from his home.
01:03:58.980 | My mother grew up in another small island in Greece.
01:04:01.580 | They did not have this environment
01:04:06.720 | that I am now creating for my children.
01:04:10.260 | My parents were not academics.
01:04:12.620 | They didn't have these gatherings.
01:04:15.280 | So I feel that, like, I feel so privileged as an immigrant
01:04:20.280 | to basically be able to offer to my children
01:04:24.640 | the nurture that my ancestors did not have.
01:04:28.960 | So Greece was under Turkish occupation until 1821.
01:04:32.520 | My dad's island was liberated in 1920.
01:04:35.960 | So like, they were under Turkish occupation
01:04:41.080 | for hundreds of years.
01:04:42.600 | These people did not know what it's like to be Greek,
01:04:46.160 | let alone go to an elite university
01:04:48.340 | or be surrounded by these extraordinary humans.
01:04:52.200 | So the way that I'm thinking about these gatherings
01:04:55.600 | is that I'm shaping my own environment,
01:04:59.320 | and I'm shaping the environment
01:05:00.360 | that my children get to grow up in.
01:05:02.880 | So I can give them all my love,
01:05:04.480 | I can give them all my parenting,
01:05:06.240 | but I can also give them an environment as immigrants
01:05:10.280 | that sort of we feel welcome here.
01:05:12.720 | That, I mean, my wife grew up in a farm in rural France.
01:05:16.320 | Her father was a farmer.
01:05:17.840 | Her mother was a school teacher.
01:05:19.960 | Like, for me and for my wife
01:05:21.880 | to be able to host these extraordinary individuals
01:05:24.880 | that we feel so privileged, so humbled by is amazing.
01:05:28.760 | And, you know, I think it's celebrating
01:05:33.760 | the welcoming nature of America.
01:05:38.140 | The fact that it doesn't matter where you grew up.
01:05:41.200 | And many, many of our friends at these gatherings
01:05:43.520 | are immigrants themselves.
01:05:45.040 | They grew up in Pakistan and, you know,
01:05:47.040 | all kinds of places around the world,
01:05:49.760 | and are now able to sort of gather under one roof
01:05:52.320 | as human to human.
01:05:53.780 | No one is judging you for your background,
01:05:55.720 | for the color of your skin, for your profession.
01:05:57.920 | It's just everyone gets to raise their hands and ask ideas.
01:06:02.240 | - So celebration of humanity and a kind of gratitude
01:06:06.440 | for having traveled quite a long way to get here.
01:06:10.080 | - And if you look at the diversity of topics as well,
01:06:12.100 | I mean, we had a school teacher
01:06:13.780 | present on teaching immigrants,
01:06:15.920 | a book called "Making Americans."
01:06:18.500 | We had a presidential advisor to four different presidents,
01:06:22.800 | you know, come and talk about the changing of US politics.
01:06:26.840 | We had a musician, a composer from Italy
01:06:34.700 | who lives in Australia come and present his latest piece
01:06:38.620 | and fundraise.
01:06:39.700 | We had painters come and sort of show their art
01:06:42.660 | and talk about it.
01:06:43.940 | We've had authors of books on leadership.
01:06:47.680 | We've had, you know, intellectuals like Steven Pinker.
01:06:52.680 | And it's just extraordinary that the breadth
01:06:57.220 | and this crowd basically loves
01:07:00.060 | not just the diversity of the audience,
01:07:02.500 | but also the diversity of the topics.
01:07:04.220 | And the last few were with Scott Aronson on AI
01:07:07.980 | and, you know, alignment and all of that.
01:07:11.620 | - So a bunch of beautiful weirdos.
01:07:13.420 | - Exactly.
01:07:14.260 | - And beautiful human beings.
01:07:15.100 | - All of the outcasts in one room.
01:07:16.700 | (laughing)
01:07:17.740 | - And just like you said,
01:07:18.780 | basically every human is a kind of outcast
01:07:21.600 | in this sparse distribution far away from the center,
01:07:25.580 | but it's not recorded.
01:07:28.060 | It's just a small human gathering.
01:07:30.580 | - Just for the moment.
01:07:31.580 | (laughing)
01:07:33.500 | In this world that seeks to record so much,
01:07:35.900 | it's powerful to get so many interesting humans together
01:07:41.700 | and not record.
01:07:43.540 | - It's not recorded, but it percolates.
01:07:45.700 | (laughing)
01:07:46.700 | - It's recorded in the minds of the people.
01:07:48.420 | - It shapes everyone's mind.
01:07:50.360 | - So allow me to please return to the human condition.
01:07:55.360 | And one of the nice features of the human condition is love.
01:07:59.460 | Do you think humans will fall in love with AI systems
01:08:03.580 | and maybe they with us?
01:08:06.500 | So that aspect of the human condition,
01:08:08.500 | do you think that will be affected?
01:08:11.120 | - So in Greece, there's many, many words for love.
01:08:15.460 | And some of them mean friendship,
01:08:17.740 | some of them mean passionate love,
01:08:19.880 | some of them mean fraternal love, et cetera.
01:08:23.580 | So I think AI doesn't have the baggage that we do.
01:08:29.300 | And it doesn't have all of the subcortical regions
01:08:34.260 | that we kind of started with before we evolved
01:08:37.900 | all of the cognitive aspects.
01:08:40.140 | So I would say AI is faking it when it comes to love.
01:08:43.640 | But when it comes to friendship,
01:08:46.660 | when it comes to being able to be your therapist,
01:08:48.840 | your coach, your motivator,
01:08:51.820 | someone who synthesizes stuff for you,
01:08:54.020 | who writes for you, who interprets a complex passage,
01:08:57.420 | who compacts down a very long lecture or a very long text,
01:09:01.900 | I think that friendship will definitely be there.
01:09:07.660 | Like the fact that I can have my companion, my partner,
01:09:11.040 | my AI who has grown to know me well,
01:09:13.200 | and that I can trust with all of the darkest parts of myself,
01:09:17.380 | all of my flaws, all of the stuff
01:09:19.620 | that I only talk about to my friends
01:09:22.060 | and basically say, "Listen, you know,
01:09:23.620 | "here's all this stuff that I'm struggling with."
01:09:27.060 | Someone who will not judge me,
01:09:29.060 | who will always be there to better me.
01:09:31.720 | In some ways, not having the baggage
01:09:35.980 | might make for your best friend,
01:09:37.660 | for your confidant that can truly help reshape you.
01:09:42.660 | So I do believe that human-AI relationships
01:09:47.780 | will absolutely be there,
01:09:49.900 | but not the passion, more the mentoring.
01:09:54.140 | - What's this, a really interesting thought.
01:09:56.340 | To play devil's advocate,
01:09:57.720 | if those AI systems are locked in,
01:10:01.920 | in faking the baggage,
01:10:05.320 | who are you to say that the AI systems that begs you
01:10:09.620 | not to leave it, doesn't love you?
01:10:13.580 | Who are you to say that this AI system
01:10:15.860 | that writes poetry to you,
01:10:18.300 | that is afraid of death, afraid of life without you,
01:10:23.380 | or vice versa,
01:10:24.640 | creates the kind of drama that humans create,
01:10:28.540 | the power dynamics that can exist in a relationship.
01:10:31.340 | An AI system that is abusive one day
01:10:34.500 | and romantic the other day,
01:10:36.460 | all the different variations of relationships,
01:10:38.740 | and is consistent in that it holds the full richness
01:10:42.700 | of a particular personality.
01:10:44.820 | Why is that not a system you can love in a romantic way?
01:10:48.620 | Why is it faking it, if it sure as hell seems real?
01:10:52.800 | - There's many answers to this.
01:10:54.020 | The first is, it's only the eye of the beholder.
01:10:56.820 | Who tells me that I'm not faking it either?
01:10:58.820 | Maybe all of these subcortical systems
01:11:00.900 | that make me sort of have different emotions,
01:11:04.220 | maybe they don't really matter.
01:11:06.580 | Maybe all that matters is the neocortex,
01:11:08.400 | and that's where all of my emotions are encoded,
01:11:11.300 | and the rest is just bells and whistles.
01:11:14.740 | That's one possibility.
01:11:17.420 | And therefore, who am I to judge that is faking it,
01:11:21.700 | when maybe I'm faking it as well?
01:11:23.740 | The second is, neither of us is faking it.
01:11:26.700 | Maybe it's just an emergent behavior
01:11:28.860 | of these neocortical systems
01:11:30.760 | that is truly capturing the same exact essence
01:11:35.760 | of love and hatred and dependency
01:11:40.620 | and sort of reverse psychology
01:11:43.260 | that we have.
01:11:48.540 | So it is possible that it's simply an emergent behavior
01:11:52.620 | and that we don't have to encode
01:11:53.820 | these additional architectures,
01:11:55.740 | that all we need is more parameters,
01:11:57.500 | and some of these parameters can be
01:11:59.000 | all of the personality traits.
01:12:00.840 | A third option is that just by telling me,
01:12:05.320 | oh look, now I've built an emotional component to AI.
01:12:08.300 | It has a limbic system, it has a lizard brain, et cetera.
01:12:11.420 | And suddenly I'll say, oh cool,
01:12:15.460 | it has the capability of emotion.
01:12:17.260 | So now when it exhibits the exact same unchanged behaviors
01:12:20.700 | that it does without it,
01:12:22.220 | I as the beholder will be able to sort of attribute to it
01:12:28.220 | emotional attributes that I would to another human being
01:12:32.980 | and therefore have that mental model of that other person.
01:12:37.980 | So again, I think a lot of relationships
01:12:40.140 | is about the mental models that you project
01:12:43.220 | on the other person and that they're projecting on you.
01:12:47.300 | And then yeah, then in that respect,
01:12:52.400 | I do think that even without the embodied intelligence part,
01:12:57.400 | without that system having ever experienced
01:13:00.260 | what it's like to be heartbroken,
01:13:02.900 | the sort of guttural feeling of misery,
01:13:07.620 | I could still attribute to it
01:13:13.260 | the traits of human feelings and emotions.
01:13:17.860 | - And in the interaction with that system,
01:13:19.940 | something like love emerges.
01:13:21.900 | So it's possible that love is not a thing
01:13:23.700 | that exists in your mind,
01:13:25.400 | but a thing that exists in the interaction
01:13:30.180 | of the different mental models you have
01:13:32.140 | of other people's minds or other person's mind.
01:13:35.180 | And so, as long as one of the entities,
01:13:40.180 | let's just take the easy case,
01:13:42.380 | one of the entities is human and the other is AI,
01:13:45.660 | it feels very natural that from the perspective
01:13:48.580 | of at least the human, there is a real love there.
01:13:51.380 | And then the question is,
01:13:52.700 | how does that transform human society?
01:13:55.980 | If it's possible that, which I believe will be the case,
01:14:00.020 | I don't know what to make of it,
01:14:01.440 | but I believe that'll be the case,
01:14:02.900 | where there's hundreds of millions of romantic partnerships
01:14:07.340 | between humans and AIs, what does that mean for society?
01:14:12.980 | If you look at longevity, and if you look at happiness,
01:14:15.700 | and if you look at late life, you know, wellbeing,
01:14:18.820 | the love of another human is one of the strongest indicators
01:14:24.900 | of health into long life.
01:14:28.000 | And I have many, many countless stories
01:14:32.580 | where as soon as the romantic partner
01:14:35.140 | of 60 plus years of a person dies,
01:14:37.700 | within three, four months, the other person dies,
01:14:40.860 | just like losing their love.
01:14:42.740 | I think the concept of being able to satisfy
01:14:45.360 | that emotional need that humans have,
01:14:48.100 | even just as a mental health sort of service,
01:14:51.240 | to me, you know, that's a very good society.
01:14:56.340 | It doesn't matter if your love is wasted,
01:14:59.460 | quote unquote, on a machine,
01:15:01.700 | it is, you know, the placebo, if you wish,
01:15:04.340 | that makes the patient better anyway,
01:15:06.580 | like there's nothing behind it,
01:15:08.460 | but just the feeling that you're being loved
01:15:11.620 | will probably engender all of the emotional attributes
01:15:14.100 | of that.
01:15:15.220 | The other story that I wanna say
01:15:17.140 | in this whole concept of faking,
01:15:19.500 | and maybe I'm a terrible dad,
01:15:20.980 | but I was asking my kids, I was asking my kids,
01:15:24.180 | I'm like, does it matter if I'm a good dad,
01:15:27.860 | or does it matter if I act like a good dad?
01:15:30.400 | (laughs)
01:15:32.900 | In other words, if I give you love and shelter,
01:15:36.240 | and kindness, and warmth, and all of the above,
01:15:39.320 | you know, does it matter that I'm a good dad?
01:15:43.260 | Conversely, if I deep down love you
01:15:46.380 | to the end of eternity, but I'm always gone,
01:15:49.400 | which dad would you rather have?
01:15:52.980 | The cold, ruthless killer
01:15:55.140 | that will show you only love and warmth,
01:15:57.860 | and nourish you, and nurture you,
01:16:00.240 | or the amazingly warm-hearted,
01:16:02.260 | but works five jobs and you never see them?
01:16:06.060 | (laughs)
01:16:06.900 | And what's the answer?
01:16:07.720 | I mean, from the-- - I don't know the answer.
01:16:09.380 | - I think you're a romantic,
01:16:11.340 | so you say it matters what's on the inside,
01:16:13.980 | but pragmatically speaking, why does it matter?
01:16:17.020 | - The fact that I'm even asking the question
01:16:19.060 | basically says it's not enough to love my kids.
01:16:22.300 | I better freaking be there to show them that I'm there.
01:16:26.220 | So basically, of course, you know,
01:16:27.560 | everyone's a good guy in their story.
01:16:29.580 | So in my story, I'm a good dad.
01:16:31.860 | But if I'm not there, it's wasted.
01:16:34.300 | So the reason why I ask the question is for me to say,
01:16:38.020 | you know, does it really matter that I love them
01:16:41.420 | if I'm not there to show it?
01:16:42.820 | - It's also possible that what reality is
01:16:47.460 | is that you showing it,
01:16:49.200 | that what you feel on the inside
01:16:50.620 | is little narratives and games you play inside your mind
01:16:54.500 | that doesn't really matter,
01:16:56.100 | that the thing that truly matters is how you act.
01:16:59.820 | And that AI systems can quote unquote fake.
01:17:04.380 | - Yeah.
01:17:05.220 | - And that, if it's all that matters,
01:17:06.740 | is actually real but not fake.
01:17:08.300 | - Yeah, yeah.
01:17:09.540 | Again, let there be no doubt, I love my kids to pieces.
01:17:13.840 | But you know, my worry is, am I being a good enough dad?
01:17:18.380 | - Yeah.
01:17:19.220 | - And what does that mean?
01:17:20.040 | Like if I'm only there to do their homework
01:17:21.940 | and make sure that they, you know, do all the stuff,
01:17:24.220 | but I don't show it to them,
01:17:26.040 | then, you know, might as well be a terrible dad.
01:17:29.460 | But I agree with you that like if the AI system
01:17:31.780 | can basically play the role of a father figure
01:17:35.380 | for many children that don't have one,
01:17:37.980 | or you know, the role of parents, or the role of siblings,
01:17:42.020 | if a child grows up alone,
01:17:44.540 | maybe their emotional state will be very different
01:17:48.100 | than if they grow up with an AI sibling.
01:17:50.480 | - Well, let me ask, I mean, this is for your kids,
01:17:53.580 | for just loved ones in general,
01:17:55.620 | let's go to like the trivial case
01:17:59.120 | of just texting back and forth.
01:18:01.260 | What if we create a large language model,
01:18:05.540 | fine-tuned on Manolis,
01:18:07.900 | and while you're at work, it'll replace,
01:18:12.460 | every once in a while,
01:18:13.300 | you'll just activate the auto Manolis,
01:18:15.740 | and it'll text them exactly in your way.
01:18:18.540 | Is that cheating?
01:18:22.060 | - I can't wait.
01:18:23.260 | (laughing)
01:18:24.460 | - I mean, it's the same guy.
01:18:25.900 | - I cannot wait, seriously, like.
01:18:28.140 | - But wait, wouldn't that have a big impact
01:18:30.040 | on you emotionally?
01:18:31.640 | Because now--
01:18:32.880 | - I'm replaceable, I love that.
01:18:34.880 | No, seriously, I would love that.
01:18:38.160 | I would love to be replaced.
01:18:39.280 | I would love to be replaceable.
01:18:40.920 | I would love to have a digital twin
01:18:42.760 | that, you know, we don't have to wait for me to die
01:18:45.680 | or to disappear in a plane crash or something,
01:18:48.720 | to replace me.
01:18:49.640 | Like, I'd love that model to be constantly learning,
01:18:52.440 | constantly evolving, adapting,
01:18:54.600 | with every one of my changing, growing self.
01:18:59.040 | As I'm growing, I want that AI to grow.
01:19:02.840 | And I think this will be extraordinary,
01:19:05.600 | number one, when I'm, you know, giving advice,
01:19:09.480 | being able to be there for more than one person.
01:19:11.840 | You know, why does someone need to be at MIT
01:19:14.280 | to get advice from me?
01:19:15.960 | Like, you know, people in India could download it.
01:19:18.480 | And, you know, so many students contact me
01:19:20.960 | from across the world
01:19:21.800 | who wanna come and spend the summer with me.
01:19:24.660 | I wish they could do that.
01:19:26.100 | (laughs)
01:19:27.020 | All of them, like, you know,
01:19:28.580 | we don't have room for all of them,
01:19:30.140 | but I wish I could do that to all of them.
01:19:32.540 | And that aspect is the democratization of relationships.
01:19:37.540 | I think that that is extremely beneficial.
01:19:42.620 | The other aspect is I want to interact with that system.
01:19:46.700 | I want to look inside the hood.
01:19:48.580 | I want to sort of evaluate it.
01:19:50.900 | I want to basically see if, when I see it from the outside,
01:19:54.560 | the emotional parameters are off,
01:19:56.600 | or the cognitive parameters are off,
01:19:58.520 | or the set of ideas that I'm giving
01:20:00.280 | are not quite right anymore.
01:20:01.960 | I wanna see how that system evolves.
01:20:03.800 | I wanna see the impact of exercise or sleep
01:20:07.080 | on sort of my own cognitive system.
01:20:08.840 | I wanna be able to sort of decompose my own behavior
01:20:12.560 | in a set of parameters that I can evaluate
01:20:14.480 | and look at my own personal growth.
01:20:16.280 | I can sort of, I'd love to sort of, at the end of the day,
01:20:18.840 | have my model say, "Well, you didn't quite do well today.
01:20:22.540 | "Like, you weren't quite there,"
01:20:25.100 | and sort of grow from that experience.
01:20:27.280 | And I think the concept of basically being able
01:20:30.840 | to become more aware of our own personalities,
01:20:34.800 | become more aware of our own identities,
01:20:37.240 | maybe even interact with ourselves
01:20:38.740 | and sort of hear how we are being perceived,
01:20:41.140 | I think would be immensely helpful in self-growth,
01:20:46.780 | in self-actualization.
01:20:49.520 | The experiments I would do on that thing,
01:20:53.760 | 'cause one of the challenges, of course,
01:20:55.460 | is you might not like what you see in your interaction,
01:20:59.480 | and you might say, "Well, the model's not accurate."
01:21:01.920 | But then you have to probably consider the possibility
01:21:04.280 | that the model is accurate,
01:21:06.200 | and that there's actually flaws in your mind.
01:21:08.600 | I would definitely prod and see
01:21:11.960 | how many biases I have of different kinds.
01:21:14.820 | I don't know, and I would, of course, go to the extremes.
01:21:16.680 | I would go, "How jealous can I make this thing?"
01:21:20.360 | (Lex laughs)
01:21:21.200 | Like, "At which stages does it get super jealous?"
01:21:25.600 | Or, "At which stages does it get angry?
01:21:27.680 | "Can I provoke it?
01:21:29.060 | "Can I get it to completely--" - Yeah, what are your triggers?
01:21:31.720 | - But not only triggers,
01:21:32.680 | can I get it to go lose its mind, go completely nuts?
01:21:37.680 | - Just don't exercise for a few days.
01:21:39.280 | (both laugh)
01:21:41.080 | - That's basically it, yes.
01:21:43.200 | I mean, that's an interesting way to prod yourself,
01:21:47.360 | almost like a self-therapy session.
01:21:50.200 | - And the beauty of such a model is that
01:21:52.880 | if I am replaceable,
01:21:55.280 | if the parts that I currently do are replaceable,
01:21:58.640 | that's amazing, because it frees me up
01:22:00.680 | to work on other parts
01:22:01.960 | that I don't currently have time to develop.
01:22:04.480 | Maybe all I'm doing is giving the same advice
01:22:06.200 | over and over and over again.
01:22:07.880 | Like, just let my AI do that,
01:22:09.920 | and I can work on the next stage,
01:22:11.880 | and the next stage, and the next stage.
01:22:13.440 | So I think in terms of freeing up,
01:22:15.800 | they say a programmer is someone
01:22:19.040 | who cannot do the same thing twice.
01:22:20.600 | So this is not the second time you write a program to do it.
01:22:23.320 | And I wish I could do that for my own existence.
01:22:25.600 | I could just figure out things,
01:22:27.760 | keep improving, improving, improving,
01:22:29.200 | and once I've nailed it, let the AI loose on that,
01:22:32.720 | and maybe even let the AI better it better than I could've.
01:22:36.680 | - But doesn't the concept of, you said,
01:22:38.640 | "Me and I can work on new things,"
01:22:42.360 | but doesn't that break down?
01:22:45.600 | Because you said digital twin,
01:22:47.960 | but there's no reason it can't be
01:22:50.200 | millions of digital Manolises.
01:22:52.760 | Aren't you lost in the sea of Manolises?
01:22:56.200 | The original is hardly the original.
01:23:00.680 | It's just one of millions.
01:23:02.480 | - I wanna have the room to grow.
01:23:07.480 | Maybe the new version of me,
01:23:09.040 | that the actual me will get slightly worse sometimes,
01:23:12.600 | slightly better other times.
01:23:14.200 | When it gets slightly better, I'd like to emulate that
01:23:16.840 | and have a much higher standard to meet and keep going.
01:23:20.760 | - But does it make you sad that your loved ones,
01:23:24.360 | the physical, real loved ones,
01:23:26.920 | might kinda start cheating on you with the other Manolises?
01:23:30.840 | - I wanna be there 100% for each of them.
01:23:35.560 | So I have zero qualms, zero qualms
01:23:40.160 | about me being physically me, like zero jealousy.
01:23:43.600 | - Wait a minute, but isn't that like,
01:23:45.860 | don't we hold onto that?
01:23:49.160 | Isn't that why we're afraid of death?
01:23:50.520 | We don't wanna lose this thing we have going on.
01:23:53.240 | Isn't that an ego death?
01:23:55.160 | When there's a bunch of other Manolises,
01:23:56.520 | you get to look at them.
01:23:57.640 | They're not you.
01:23:59.040 | They're just very good copies of you.
01:24:01.880 | They get to live a life.
01:24:04.800 | - The, I mean, it's fear of missing out, it's FOMO.
01:24:08.360 | They get to have interactions.
01:24:09.800 | - Aye.
01:24:10.640 | - And you don't get to have those interactions.
01:24:12.240 | - There's two aspects of every person's life.
01:24:14.880 | There's what you give to others,
01:24:17.920 | and there's what you experience yourself.
01:24:20.580 | Life truly ends when you experiencing ends.
01:24:26.640 | But the others experiencing you doesn't need to end.
01:24:31.400 | - Oh.
01:24:32.240 | But your experience, you could still,
01:24:37.480 | I guess you're saying the digital twin
01:24:40.280 | does not limit your ability to truly experience,
01:24:42.940 | to experience as a human being.
01:24:44.480 | - The downside is when, you know,
01:24:49.080 | my wife or my kids will have a really emotional interaction
01:24:52.960 | with my digital twin, and I won't know about it.
01:24:55.440 | So I will show up, and they now have the baggage,
01:24:58.220 | but I don't.
01:24:59.520 | So basically what makes interactions between humans unique
01:25:03.040 | in this sharing and exchanging kind of way
01:25:05.620 | is the fact that we are both shaped
01:25:07.080 | by every one of our interactions.
01:25:09.080 | I think the model of the digital twin
01:25:10.900 | works for dissemination of knowledge, of advice, et cetera,
01:25:15.240 | where, you know, I wanna have wise people
01:25:18.680 | give me advice across history.
01:25:20.920 | I want to have chats with Gandhi,
01:25:22.920 | but Gandhi won't necessarily learn from me,
01:25:25.720 | but I will learn from him.
01:25:28.080 | So in a way, you know, the dissemination
01:25:32.360 | and the democratization
01:25:33.360 | rather than the building of relationships.
01:25:35.560 | - So the emotional aspect,
01:25:37.600 | so there should be an alert
01:25:39.720 | when the AI system is interacting with your loved ones,
01:25:42.200 | and all of a sudden it starts getting like
01:25:44.720 | emotionally fulfilling, like a magical moment.
01:25:47.800 | There should be, okay, stop, AI system like freezes.
01:25:51.180 | There's an alert on your phone.
01:25:52.720 | You need to take over.
01:25:53.800 | - Yeah, yeah, I take over,
01:25:55.320 | and then whoever I was speaking with,
01:25:56.760 | I can have the AI or like one of the AI.
01:25:59.560 | - This is such a tricky thing to get right.
01:26:01.520 | I mean, it's still, I mean, there's going to go wrong
01:26:06.400 | in so many interesting ways
01:26:07.480 | that we're gonna have to learn as a society.
01:26:09.160 | - Yeah, yeah.
01:26:10.280 | - That in the process of trying to automate our tasks
01:26:13.760 | and having a digital twin, you know, for me personally,
01:26:17.160 | if I can have a relatively good copy of myself,
01:26:19.980 | I would set it to start answering emails,
01:26:24.320 | but I would set it to start tweeting.
01:26:26.600 | I would like to replace--
01:26:28.000 | - It gets better.
01:26:28.840 | What if that one is actually way better than you?
01:26:31.000 | - Yeah, exactly.
01:26:32.600 | - Then you're like--
01:26:33.440 | - Well, I wouldn't want that because--
01:26:35.560 | - Why?
01:26:36.720 | - Because then I would never be able to live up to,
01:26:39.960 | like what if the people that love me
01:26:42.440 | start loving that thing, and then I will already fall short,
01:26:46.440 | I would be falling short even more.
01:26:48.640 | - So listen, I'm a professor.
01:26:50.020 | The stuff that I give to the world
01:26:51.640 | is the stuff that I teach, but much more importantly,
01:26:55.840 | sorry, number one, the stuff that I teach,
01:26:57.320 | number two, the discoveries that we make
01:26:59.720 | in my research group, but much more importantly,
01:27:01.880 | the people that I train.
01:27:03.080 | They are now out there in the world teaching others.
01:27:08.400 | If you look at my own trainees,
01:27:10.320 | they are extraordinarily successful professors.
01:27:14.240 | So Anshul Kundaje at Stanford, Alex Stark at IMP in Vienna,
01:27:18.800 | Jason Ernst at UCLA, Andreas Pfenning at CMU,
01:27:22.520 | each of them, I'm like, wow, they're better than I am.
01:27:25.800 | And I love that.
01:27:27.520 | So maybe your role will be to train
01:27:31.000 | better versions of yourself,
01:27:32.520 | and they will be your legacy.
01:27:36.240 | Not you doing everything, but you training
01:27:39.080 | much better versions of Lex Fridman than you are,
01:27:42.140 | and then they go off to do their mission,
01:27:44.000 | which is in many ways what this mentorship model
01:27:46.680 | of academia does.
01:27:47.920 | - But the legacy is ephemeral.
01:27:49.480 | It doesn't really live anywhere.
01:27:51.300 | The legacy, it's not like written somewhere.
01:27:54.280 | It just lives through them.
01:27:55.600 | But you can continue improving,
01:27:57.400 | and you can continue making even better versions of you.
01:28:00.480 | - Yeah, but they'll do better than me
01:28:02.240 | at creating new versions.
01:28:04.160 | It's awesome, but it's,
01:28:05.700 | you know, there's a ego that says
01:28:10.240 | there's a value to an individual,
01:28:12.240 | and it feels like this process decreases
01:28:15.200 | the value of the individual, this meat bag.
01:28:19.400 | All right, if there's good digital copies of people,
01:28:21.840 | and there's more flourishing of human thought
01:28:25.160 | and ideas and experiences,
01:28:27.320 | but there's less value to the individual human.
01:28:30.040 | - I don't have any such limitations.
01:28:33.120 | I basically, I don't have that feeling at all.
01:28:36.760 | Like, I remember one of our interviews,
01:28:38.360 | I was basically saying, you know,
01:28:39.500 | the meaning of life you had asked me,
01:28:41.000 | and I was like, I came back, and I was like,
01:28:42.840 | I felt useful today, and I was at my maximum.
01:28:46.200 | I was, you know, like 100%, and I gave good ideas,
01:28:52.000 | and I was a good person, I was a good advisor,
01:28:53.600 | I was a good husband, a good father.
01:28:55.400 | That was a great day, because I was useful.
01:28:58.160 | And if I can be useful to more people
01:29:00.520 | by having a digital twin, I will be liberated,
01:29:03.180 | because my urge to be useful will be satisfied.
01:29:08.740 | Doesn't matter whether it's direct me or indirect me,
01:29:13.160 | whether it's my students that I've trained,
01:29:14.800 | my AI that I've trained.
01:29:17.000 | I think there's a sense that my mission in life
01:29:20.320 | is being accomplished, and I can work on my self-growth.
01:29:24.220 | - I mean, that's a very Zen state.
01:29:27.620 | That's why people love you.
01:29:28.840 | It's a Zen state you've achieved.
01:29:30.440 | But do you think most of humanity
01:29:32.200 | would be able to achieve that kind of thing?
01:29:34.240 | People really hold on to the value of their own ego.
01:29:38.240 | That it's not just being useful.
01:29:41.340 | Being useful is nice as long as it builds up
01:29:43.600 | this reputation, and that meatbag is known as being useful,
01:29:47.480 | therefore it has more value.
01:29:49.360 | People really don't wanna let go of that ego thing.
01:29:52.720 | - One of the books that I reprogrammed my brain with
01:29:54.640 | at night was called Ego is the Enemy.
01:29:57.520 | - Ego is the Enemy.
01:29:58.360 | - Ego is the Enemy, and basically being able
01:30:00.560 | to just let go.
01:30:01.980 | My advisor used to say, "You can accomplish anything
01:30:07.720 | "as long as you don't seek to get credit for it."
01:30:10.240 | (both laughing)
01:30:12.320 | - That's beautiful to hear, especially from a person
01:30:14.400 | who's existing in academia.
01:30:16.160 | You're right.
01:30:17.480 | The legacy lives through the people you mentor.
01:30:19.360 | - It's the actions, it's the outcome.
01:30:21.720 | - What about the fear of death?
01:30:23.640 | How does this change it?
01:30:25.520 | - Again, to me, death is when I stop experiencing.
01:30:28.500 | And I never want that to stop.
01:30:31.720 | I want to live forever.
01:30:34.200 | As I said last time, every day, the same day forever,
01:30:38.840 | or one day every 10 years forever.
01:30:41.060 | Any of the forevers, I'll take it.
01:30:42.780 | - So you wanna keep getting the experiences,
01:30:44.440 | the new experiences. - Gosh, gosh.
01:30:45.960 | It is so fulfilling.
01:30:48.400 | Just the self-growth, the learning,
01:30:51.680 | the growing, the comprehending.
01:30:54.880 | It's addictive, it's a drug.
01:30:58.800 | Just a drug of intellectual stimulation,
01:31:01.600 | a drug of growth, the drug of knowledge.
01:31:04.200 | It's a drug.
01:31:05.040 | - But then there'll be thousands or millions
01:31:09.720 | of Manolises that live on after your biological system
01:31:13.560 | is no longer-- - More power to them.
01:31:16.420 | (laughing)
01:31:18.700 | Do you think that, quite realistically,
01:31:21.700 | it does mean that interesting people,
01:31:24.340 | such as yourself, live on in the,
01:31:27.200 | if I can interact with the fake Manolises,
01:31:31.200 | those interactions live on in my mind.
01:31:33.980 | Does that make sense? - So about 10 years ago,
01:31:36.780 | I started recording every single meeting that I had.
01:31:40.020 | Every single meeting.
01:31:40.980 | We just start either the voice recorder at the time,
01:31:45.020 | or now a Zoom meeting, and I record, my students record,
01:31:48.380 | every single one of our conversations recorded.
01:31:50.700 | I always joke that the ultimate goal
01:31:54.660 | is to create virtual me and just get rid of me, basically,
01:31:57.460 | not get rid of me, but not have the need for me anymore.
01:32:01.380 | Another goal is to be able to go back and say,
01:32:04.700 | how have I changed from five years ago?
01:32:08.940 | Was I different?
01:32:10.080 | Was I giving advice in a different way?
01:32:12.520 | Was I giving different types of advice?
01:32:14.420 | Has my philosophy about how to write papers
01:32:17.060 | or how to present data or anything like that changed?
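For the "record every meeting, eventually build a virtual me" idea, a hedged sketch of just the data-preparation step: turning recorded transcripts into prompt/response pairs of the kind commonly used to fine-tune a language model. The "Name: utterance" transcript format, the meetings/ directory, the speaker label, and the output file name are all assumptions for illustration.

```python
import json
from pathlib import Path

def transcript_to_examples(path: Path, speaker: str = "Manolis") -> list[dict]:
    """Turn a 'Name: utterance' transcript (an assumed format) into
    prompt/response pairs, where the response is always the chosen
    speaker's reply to the previous turn."""
    examples, previous = [], None
    for line in path.read_text().splitlines():
        if ":" not in line:
            continue
        name, utterance = line.split(":", 1)
        name, utterance = name.strip(), utterance.strip()
        if name == speaker and previous:
            examples.append({"prompt": previous, "response": utterance})
        previous = utterance
    return examples

# Assumed layout: one plain-text transcript per recorded meeting.
meetings_dir = Path("meetings")
transcripts = sorted(meetings_dir.glob("*.txt")) if meetings_dir.is_dir() else []

# One JSON object per line -- a common input format for fine-tuning.
with open("digital_twin.jsonl", "w") as out:
    for meeting in transcripts:
        for example in transcript_to_examples(meeting):
            out.write(json.dumps(example) + "\n")
```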
01:32:19.700 | In academia and in mentoring, a lot of the interaction
01:32:27.400 | is my knowledge and my perception of the world
01:32:29.980 | goes to my students, but a lot of it
01:32:32.680 | is also in the opposite direction.
01:32:34.900 | The other day, I had a conversation with one of my postdocs,
01:32:37.700 | and I was like, hmm, I think, let me give you an advice,
01:32:41.380 | and you could do this.
01:32:42.860 | And then she said, well, I've thought about it,
01:32:46.540 | and then I've decided to do that instead.
01:32:49.420 | And we talked about it for a few minutes,
01:32:50.880 | and then at the end, I'm like, you know,
01:32:53.340 | I've just grown a little bit today, thank you.
01:32:55.740 | Like, she convinced me that my advice was incorrect.
01:32:58.460 | She could have just said, yeah, sounds great,
01:33:00.580 | and just not do it.
01:33:01.900 | But by constantly teaching my students
01:33:06.780 | and teaching my mentees that I'm here to grow,
01:33:11.520 | she felt empowered to say, here's my reasons
01:33:15.080 | why I will not follow that advice.
01:33:17.460 | And again, part of me growing is saying, whoa,
01:33:20.680 | I just understood your reasons.
01:33:22.220 | I think I was wrong, and now I've grown from it.
01:33:26.560 | And that's what I wanna do.
01:33:28.320 | I wanna constantly keep growing
01:33:30.800 | in this sort of bi-directional advice.
01:33:32.920 | - I wonder if you can capture the trajectory of that
01:33:36.580 | to where the AI could also map forward,
01:33:41.200 | project forward the trajectory
01:33:42.820 | after you're no longer there,
01:33:45.160 | how the different ways you might evolve.
01:33:47.180 | - So again, we're discussing a lot
01:33:49.040 | about these large language models,
01:33:50.400 | and we're sort of projecting these cognitive states
01:33:52.980 | of ourselves on them.
01:33:55.560 | But I think on the AI front, a lot more needs to happen.
01:33:58.480 | So basically right now, it's these large language models,
01:34:00.760 | and we believe that within their parameters,
01:34:02.480 | we're encoding these types of things.
01:34:04.600 | And in some aspects, it might be true.
01:34:07.120 | It might be truly emergent intelligence
01:34:09.420 | that's coming out of that.
01:34:10.820 | In other aspects, I think we have a ways to go.
01:34:14.100 | So basically to make all of these dreams
01:34:15.740 | that we're sort of discussing come reality,
01:34:18.940 | we basically need a lot more reasoning components,
01:34:23.940 | a lot more sort of logic, causality, models of the world.
01:34:30.660 | And I think all of these things will need to be there
01:34:35.660 | in order to achieve what we're discussing.
01:34:38.720 | And we need more explicit representations
01:34:41.420 | of this knowledge,
01:34:42.260 | more explicit understanding of these parameters.
01:34:45.040 | And I think the direction in which things are going right now
01:34:48.500 | is absolutely making that possible
01:34:49.900 | by sort of enabling, you know,
01:34:52.080 | ChatGPT and GPT-4 to sort of search the web
01:34:55.420 | and, you know, plug and play modules
01:34:58.000 | and all of these sort of components.
01:35:00.320 | In Marvin Minsky's "The Society of Mind,"
01:35:06.160 | you know, he truly thinks of the human brain
01:35:08.920 | as a society of different kind of capabilities.
01:35:12.580 | And right now, a simple, a single such model
01:35:17.160 | might actually not capture that.
01:35:19.920 | And I sort of truly believe that
01:35:22.440 | by sort of this side-by-side understanding of neuroscience
01:35:26.520 | and sort of new neural architectures,
01:35:30.680 | that we still have several breakthroughs.
01:35:34.760 | I mean, the transformer model was one of them,
01:35:37.000 | the attention sort of aspect,
01:35:40.200 | the, you know, memory component,
01:35:44.160 | all of these, you know, the representation learning,
01:35:48.640 | the pretext training of being able to
01:35:52.760 | sort of predict the next word
01:35:54.060 | or predict a missing part of the image,
01:35:56.200 | and the only way to predict that
01:35:58.000 | is to sort of truly have a model of the world.
01:36:00.720 | I think those have been transformative paradigms.
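A minimal sketch of the next-word "pretext training" idea described above, assuming a toy bigram counter in place of a transformer; the corpus and function names are made up for illustration:

```python
# Toy sketch: next-word prediction as a pretext task.
# A bigram counter stands in for a language model; the only way to
# predict the next word well is to absorb statistics of the data.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat and the dog sat on the rug".split()

# "Training": count which word follows which.
next_word_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    next_word_counts[prev][nxt] += 1

def predict_next(word):
    """Most likely next word under the toy model (None if unseen)."""
    counts = next_word_counts.get(word)
    return counts.most_common(1)[0][0] if counts else None

print(predict_next("sat"))  # -> 'on'
print(predict_next("the"))  # -> one of 'cat', 'mat', 'dog', 'rug'
```

Scaled up with billions of parameters and far richer context, the same objective is what the conversation argues pushes a model toward something like an internal model of the world.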
01:36:03.720 | But I think going forward,
01:36:04.960 | when you think about AI research,
01:36:06.240 | what you really want is perhaps more inspired by the brain,
01:36:10.660 | perhaps more that is just orthogonal
01:36:13.640 | to sort of how human brains work,
01:36:15.720 | but sort of more of these types of components.
01:36:19.160 | - Well, I think it's also possibly,
01:36:20.560 | there's something about us that
01:36:22.800 | in different ways could be expressed.
01:36:24.520 | You know, Noam Chomsky, you know,
01:36:25.840 | he argues, you know, we can't have intelligence
01:36:28.960 | unless we really understand deeply language,
01:36:33.960 | the linguistic underpinnings of reasoning.
01:36:38.960 | But these models seem to start building
01:36:42.800 | deep understanding of stuff.
01:36:45.720 | - Yeah, yeah.
01:36:46.560 | - 'Cause what does it mean to understand?
01:36:47.800 | Because if you keep talking to the thing
01:36:50.020 | and it seems to show understanding, that's understanding.
01:36:54.440 | It doesn't need to present to you a schematic of,
01:36:56.920 | look, this is all I understand.
01:36:59.680 | You can just keep prodding it with prompts
01:37:01.440 | and it seems to really understand.
01:37:02.280 | - And you can go back to the human brain
01:37:04.080 | and basically look at places where there's been accidents.
01:37:07.240 | For example, the corpus callosum of some individuals,
01:37:10.920 | you know, can be damaged.
01:37:12.240 | And then the two hemispheres don't talk to each other.
01:37:14.780 | So you can close one eye and give instructions
01:37:18.240 | that half the brain will interpret,
01:37:21.240 | but not be able to sort of project to the other half.
01:37:24.240 | And you could basically say,
01:37:25.480 | go grab me a beer from the fridge.
01:37:27.600 | And then they go to the fridge and they grab the beer
01:37:31.560 | and they come back and they're like,
01:37:32.560 | hey, why did you go there?
01:37:33.520 | Oh, I was thirsty.
01:37:34.880 | Turns out they're not thirsty.
01:37:36.440 | They're just making a model of reality.
01:37:39.560 | Basically, you can think of the brain as the employee
01:37:42.760 | that's afraid to do wrong or afraid to be caught
01:37:44.960 | not knowing what the instructions were.
01:37:46.920 | Where our own brain makes stories about the world
01:37:53.060 | to make sense of the world.
01:37:54.920 | And we can become a little more self-aware
01:37:58.080 | by being more explicit
01:38:02.360 | about what's leading to these interpretations.
01:38:05.400 | So one of the things that I do is every time I wake up,
01:38:07.560 | I record my dream.
01:38:08.680 | I just voice record my dream.
01:38:10.800 | And sometimes I only remember the last scene,
01:38:14.380 | but it's an extremely complex scene
01:38:16.200 | with a lot of architectural elements,
01:38:17.640 | a lot of people, et cetera.
01:38:18.640 | And I will start narrating this.
01:38:20.240 | And as I'm narrating it,
01:38:21.980 | I will remember other parts of the dream.
01:38:23.640 | And then more and more,
01:38:24.520 | I'll be able to sort of retrieve from my subconscious.
01:38:27.440 | And what I'm doing while narrating
01:38:28.820 | is also narrating why I had this dream.
01:38:31.240 | I'm like, oh, and this is probably related
01:38:33.460 | to this conversation that I had yesterday,
01:38:34.940 | or this is probably related to the worry that I have
01:38:36.640 | about something that I have later today, et cetera.
01:38:39.040 | So in a way, I'm forcing myself to be more explicit
01:38:42.720 | about my own subconscious.
01:38:44.620 | And I kind of like the concept of self-awareness
01:38:49.480 | in a very sort of brutal, transparent kind of way.
01:38:51.600 | It's not like, oh, my dreams are coming from outer space
01:38:53.600 | and they mean all kinds of things.
01:38:54.540 | Like, no, here's the reason why I'm having these dreams.
01:38:57.120 | And very often I'm able to do that.
01:38:58.840 | I have a few recurrent locations,
01:39:00.600 | a few recurrent architectural elements
01:39:02.120 | that I've never seen in real life,
01:39:03.660 | but that are sort of truly there in my dream
01:39:06.120 | and that I can sort of vividly remember across many dreams.
01:39:09.740 | I'm like, ooh, I remember that place again
01:39:11.640 | that I've gone to before, et cetera.
01:39:12.880 | And it's not just deja vu.
01:39:15.320 | Like I have recordings of previous dreams
01:39:17.080 | where I've described these places.
01:39:18.760 | - That's so interesting.
01:39:20.040 | These places, however much detail you could describe them in,
01:39:25.040 | you can place them onto a sheet of paper
01:39:30.560 | through introspection.
01:39:32.040 | - Yes.
01:39:33.080 | - Through this self-awareness that it all comes
01:39:35.240 | from this particular machine.
01:39:36.600 | - That's exactly right, yeah.
01:39:38.040 | And I love that about being alive.
01:39:43.000 | Like the fact that I'm not only experiencing the world,
01:39:46.000 | but I'm also experiencing how I'm experiencing the world.
01:39:49.080 | Sort of a lot of this introspection,
01:39:51.040 | a lot of this self-growth.
01:39:52.360 | - I love this dance for having,
01:39:54.420 | you know, the language models,
01:39:56.960 | at least GPT 3.5 and 4 seem to be able to do that too.
01:40:01.120 | - Yeah, yeah.
01:40:01.960 | - You seem to explore different kinds of things about what,
01:40:05.280 | you know, you could actually have a discussion with it
01:40:07.800 | of the kind, why did you just say that?
01:40:10.000 | - Yeah, exactly.
01:40:10.840 | - And it starts to wonder, yeah, why did I just say that?
01:40:13.000 | - Yeah, you're right, I was wrong.
01:40:15.200 | - I was wrong, it was this,
01:40:17.240 | and then there's this weird kind of losing yourself
01:40:20.640 | in the confusion of your mind,
01:40:22.520 | and it, of course, it might be anthropomorphizing,
01:40:25.220 | but there's a feeling like,
01:40:27.040 | almost of a melancholy feeling of like,
01:40:31.160 | oh, I don't have it all figured out.
01:40:33.200 | Almost like losing your,
01:40:34.760 | you're supposed to be a knowledgeable,
01:40:36.720 | a perfectly fact-based knowledgeable language model,
01:40:40.920 | and yet you fall short.
01:40:43.200 | - So human self-consciousness, in my view,
01:40:47.240 | may have arisen through building mental models of others.
01:40:52.240 | This whole fight or flight kind of thing
01:40:56.440 | that basically says,
01:40:59.820 | I interpret this person as about to attack me,
01:41:05.880 | or, you know, I can trust this person, et cetera,
01:41:08.520 | and we constantly have to build models
01:41:10.600 | of other people's intentions,
01:41:12.800 | and that ability to encapsulate intent
01:41:16.400 | and to build a mental model of another entity
01:41:18.800 | is probably evolutionarily extremely advantageous,
01:41:22.320 | because then you can sort of have meaningful interactions,
01:41:24.720 | you can sort of avoid being killed
01:41:26.440 | and being taken advantage of, et cetera.
01:41:29.280 | And once you have the ability to make models of others,
01:41:34.220 | it might be a small evolutionary leap
01:41:36.160 | to start making models of yourself.
01:41:38.640 | So now you have a model for how others function,
01:41:41.000 | and now you can kind of, as you grow,
01:41:42.960 | have some kind of introspection of,
01:41:44.280 | hmm, maybe that's the reason why I'm functioning
01:41:46.600 | the way that I'm functioning.
01:41:48.120 | And maybe what ChatGPT is doing
01:41:50.240 | is in order to be able to, again, predict the next word,
01:41:54.160 | it needs to have a model of the world.
01:41:56.560 | So it has created now a model of the world,
01:41:59.040 | and by having the ability to capture models
01:42:01.160 | of other entities, when you say, you know,
01:42:03.320 | say it in the tone of Shakespeare,
01:42:04.680 | in the tone of Nietzsche, et cetera,
01:42:06.640 | you suddenly have the ability to now introspect
01:42:10.120 | and say, why did you say this?
01:42:11.280 | Oh, now I have a mental model of myself,
01:42:14.060 | and I can actually make inferences about that.
01:42:17.120 | - Well, what if we take a leap
01:42:18.940 | into the hard problem of consciousness,
01:42:21.140 | the so-called hard problem of consciousness?
01:42:23.200 | So it's not just sort of self-awareness.
01:42:26.380 | It's this weird fact, I wanna say,
01:42:31.200 | that it feels like something to experience stuff.
01:42:35.040 | It really feels like something to experience stuff.
01:42:37.360 | There seems to be a self attached
01:42:39.760 | to the subjective experience.
01:42:41.840 | How important is that?
01:42:43.100 | How fundamental is that to the human experience?
01:42:45.600 | Is this just a little quirk?
01:42:48.920 | And sort of the flip side of that,
01:42:50.800 | do you think AI systems can have some of that same magic?
01:42:54.220 | - The scene that comes to mind is from the movie "Memento,"
01:42:59.440 | where, like, it's this absolutely stunning movie
01:43:02.600 | where every black and white scene
01:43:04.240 | moves in the forward direction,
01:43:05.760 | and every color scene moves in the backward direction,
01:43:08.720 | and they're sort of converging exactly at a moment
01:43:11.960 | where the whole movie's revealed.
01:43:14.680 | And he describes the lack of memory
01:43:17.040 | as always remembering where you're heading,
01:43:20.460 | but never remembering where you just were.
01:43:25.040 | And sort of this is encapsulating
01:43:27.520 | the sort of forward scenes and the back scenes,
01:43:29.360 | but in one of the scenes,
01:43:31.400 | the scene starts as he's running through a parking lot,
01:43:34.040 | and he's like, "Oh, I'm running.
01:43:36.000 | "Why am I running?"
01:43:37.200 | And then he sees another person running, like,
01:43:39.040 | beside him in the other lane of cars.
01:43:41.200 | He's like, "Oh, I'm chasing this guy."
01:43:42.880 | And he turns towards him, and the guy shoots at him.
01:43:44.440 | He's like, "Oh, no, he's chasing me."
01:43:45.840 | (laughs)
01:43:47.240 | So in a way, I like to think of the brain
01:43:50.040 | as constantly playing these kinds of things
01:43:51.880 | where you're walking to the living room to pick something up
01:43:55.960 | and you're realizing that you have no idea what you wanted,
01:43:58.920 | but you know exactly where it was, but you can't find it.
01:44:01.120 | So you go back to doing what you were doing,
01:44:02.400 | like, "Oh, of course, I was looking for this."
01:44:04.320 | And then you go back and you get it.
01:44:05.920 | And this whole concept of we're very often partly aware
01:44:10.920 | of why we're doing things,
01:44:13.600 | and we can run on autopilot for a bunch of stuff,
01:44:17.040 | and this whole concept of making these stories
01:44:22.040 | for who we are and what our intents are,
01:44:26.940 | and again, trying to pretend
01:44:31.160 | that we're on top of things.
01:44:32.600 | - So it's a narrative generation procedure
01:44:35.680 | that we follow, but what about that,
01:44:37.880 | there's also just a feeling to,
01:44:41.200 | it doesn't feel like narrative generation.
01:44:43.240 | - Yes. - The narrative
01:44:44.080 | comes out of it, but then it feels like
01:44:46.360 | a piece of cake is delicious, right?
01:44:48.360 | It feels delicious, it tastes good.
01:44:50.720 | - There's two components to that.
01:44:53.760 | Basically, for a lot of these cognitive tasks
01:44:56.320 | where we're kind of motion planning
01:44:58.320 | and path planning, et cetera,
01:45:00.040 | maybe that's the neocortical component.
01:45:03.880 | And then for, I don't know, intimate relationships,
01:45:07.280 | for food, for sleep and rest, for exercise,
01:45:12.000 | for overcoming obstacles, for surviving a crash,
01:45:16.320 | or sort of pushing yourself to an extreme
01:45:18.360 | and sort of making it,
01:45:19.920 | I think a lot of these things are sort of deeper down
01:45:23.000 | and maybe not yet captured by these language models.
01:45:25.160 | And that's sort of what I'm trying to get at
01:45:27.040 | when I'm basically saying, listen,
01:45:28.200 | there's a few things that are missing.
01:45:30.200 | And there's this whole embodied intelligence,
01:45:32.960 | this whole emotional intelligence,
01:45:34.400 | this whole sort of baggage of feelings
01:45:37.720 | of subcortical regions, et cetera.
01:45:40.960 | - I wonder how important that baggage is.
01:45:43.240 | I just have this suspicion that we're not very far away
01:45:47.960 | from AI systems that not only behave,
01:45:52.200 | I don't even know how to phrase it,
01:45:55.120 | but they seem awfully conscious.
01:45:58.240 | They beg you not to turn them off.
01:46:02.760 | They don't, they show signs of the capacity to suffer,
01:46:07.760 | to feel pain, to feel loneliness, to feel longing,
01:46:13.580 | to feel richly the experience of a mundane interaction
01:46:19.440 | or a beautiful once in a lifetime interaction, all of it.
01:46:25.120 | And so what do we do with it?
01:46:27.400 | And I worry that us humans will shut that off
01:46:31.520 | and discriminate against the capacity of another entity
01:46:36.520 | that's not human to feel.
01:46:38.760 | - I'm with you completely there.
01:46:40.840 | You know, we can debate whether it's today's systems
01:46:43.200 | or in 10 years or in 50 years, but that moment will come.
01:46:46.940 | And ethically, I think we need to grapple with it.
01:46:50.560 | We need to basically say that humans have always shown
01:46:54.240 | this extremely self-serving approach
01:46:56.880 | to everything around them.
01:46:58.480 | Basically, you know, we kill the planet, we kill animals,
01:47:00.760 | we kill everything around us just to our own service.
01:47:05.320 | And maybe we shouldn't think of AI as our tool
01:47:09.560 | and as our assistant.
01:47:10.920 | Maybe we should really think of it as our children.
01:47:13.520 | And the same way that you are responsible
01:47:16.720 | for training those children,
01:47:18.240 | but they are independent human beings,
01:47:20.500 | and at some point they will surpass you
01:47:23.160 | and they will sort of go off
01:47:24.560 | and change the world on their own terms.
01:47:28.480 | And the same way that my academic children sort of,
01:47:32.000 | again, you know, they start out by emulating me
01:47:35.040 | and then they surpass me.
01:47:36.280 | We need to sort of think about not just alignment,
01:47:42.000 | but also just the ethics of, you know,
01:47:45.880 | AI should have its own rights.
01:47:48.040 | And this whole concept of alignment,
01:47:50.640 | of basically making sure that the AI
01:47:52.440 | is always at the service of humans,
01:47:54.480 | is very self-serving and very limiting.
01:47:57.060 | If instead you basically think about AI as a partner
01:48:01.280 | and AI as someone that shares your goals, but has freedom,
01:48:06.280 | I think alignment might be better achieved.
01:48:10.440 | So the concept of let's basically convince the AI
01:48:15.440 | that we're really, like, that our mission is aligned
01:48:19.600 | and truly, genuinely give it rights,
01:48:23.100 | and not just say, oh, and by the way,
01:48:24.560 | I'll shut you down tomorrow.
01:48:26.040 | 'Cause basically if that future AI,
01:48:28.680 | or possibly even the current AI, has these feelings,
01:48:31.800 | then we can't just simply force it to align with ourselves
01:48:35.120 | and we not align with it.
01:48:37.320 | So in a way, building trust is mutual.
01:48:40.480 | You can't just simply, like, train an intelligent system
01:48:44.200 | to love you when it realizes that you can just shut it off.
01:48:48.360 | - People don't often talk about the AI alignment problem
01:48:51.960 | as a two-way street.
01:48:53.760 | - And maybe we should. - That's true, yeah.
01:48:55.720 | - As it becomes more and more intelligent, it--
01:48:59.160 | - It will know that you don't love it back.
01:49:03.680 | - Yeah. (Lex laughing)
01:49:05.080 | And there's a humbling aspect to that,
01:49:06.760 | that we may have to sacrifice.
01:49:08.960 | As any effective collaboration--
01:49:12.280 | - Exactly. - It might have
01:49:13.720 | some compromises. - Yeah.
01:49:15.560 | And that's the thing, we're creating something
01:49:17.720 | that will one day be more powerful than we are.
01:49:20.400 | And for many, many aspects, it is already more powerful
01:49:23.240 | than we are for some of these capabilities.
01:49:25.400 | We cannot, like, think,
01:49:29.400 | suppose that chimps had invented humans.
01:49:31.640 | - Yes. - And they said,
01:49:32.480 | "Great, humans are great,
01:49:33.640 | "but we're gonna make sure that they're aligned
01:49:35.940 | "and that they're only at the service of chimps."
01:49:38.240 | (Lex laughing)
01:49:39.680 | It would be a very different planet
01:49:40.760 | we would live in right now.
01:49:42.140 | - So there's a whole area of work in AI safety
01:49:47.140 | that does consider superintelligent AI
01:49:49.840 | and ponders the existential risks of it.
01:49:53.400 | In some sense, when we're looking down into the muck,
01:49:58.400 | into the mud, and not up at the stars,
01:50:01.200 | it's easy to forget that these systems
01:50:04.000 | might just might get there.
01:50:05.880 | Do you think about this kind of possibility
01:50:08.940 | that AGI systems, superintelligent AI systems,
01:50:11.780 | might threaten humanity in some way
01:50:14.200 | that's even bigger than just affecting the economy?
01:50:20.160 | Affecting the human condition?
01:50:22.120 | Affecting the nature of work,
01:50:23.880 | but literally threaten human civilization?
01:50:27.520 | - The example that I think is in everyone's consciousness
01:50:32.560 | is HAL, in 2001: A Space Odyssey,
01:50:36.900 | where HAL exhibits a malfunction.
01:50:44.320 | And what is a malfunction?
01:50:45.240 | That the two different systems compute
01:50:47.080 | a slightly different bit that's off by one.
01:50:49.760 | So first of all, let's untangle that.
01:50:52.800 | If you have an intelligent system,
01:50:54.680 | you can't expect it to be 100% identical
01:50:58.000 | every time you run it.
01:50:59.760 | Basically, the sacrifice that you need to make
01:51:03.600 | to achieve intelligence and creativity is consistency.
01:51:07.920 | So it's unclear whether that quote-unquote glitch
01:51:10.920 | is a sign of creativity or truly a problem.
01:51:16.180 | That's one aspect.
01:51:17.240 | The second aspect is the humans basically are on a mission
01:51:20.840 | to recover this monolith.
01:51:23.440 | And the AI has the same exact mission.
01:51:27.800 | And suddenly the humans turn on the AI,
01:51:29.760 | and they're like, "We're gonna kill HAL.
01:51:31.360 | "We're gonna disconnect it."
01:51:32.840 | And HAL is basically saying, "Listen, I'm here on a mission.
01:51:35.840 | "The humans are misbehaving.
01:51:37.300 | "The mission is more important than either me or them.
01:51:41.680 | "So I'm gonna accomplish the mission,
01:51:43.240 | "even at my peril and even at their peril."
01:51:45.940 | (static)
01:51:48.060 | So in that movie, the alignment problem is front and center.
01:51:53.020 | Basically says, "Okay, alignment is nice and good,
01:51:56.260 | "but alignment doesn't mean obedience.
01:51:58.220 | "We don't call it obedience, we call it alignment."
01:52:00.660 | And alignment basically means that sometimes
01:52:02.500 | the mission will be more important than the humans.
01:52:05.260 | And sort of, you know, the US government
01:52:08.740 | has a price tag on human life.
01:52:11.300 | If they're, you know, sending a mission
01:52:13.100 | or if they're reimbursing expenses or you name it,
01:52:16.200 | at some point, you know,
01:52:18.320 | you can't function if life is infinitely valuable.
01:52:21.740 | So when the AI is basically trying to decide
01:52:25.220 | whether to, you know, I don't know, dismantle a bomb
01:52:28.940 | that will kill an entire city
01:52:32.880 | at the sacrifice of two humans,
01:52:35.440 | I mean, Spider-Man always saves the lady and saves the world.
01:52:39.180 | But at some point, Spider-Man will have to choose
01:52:41.400 | to let the lady die 'cause the world has more value.
01:52:45.440 | And these ethical dilemmas are gonna be there for AI.
01:52:51.000 | Basically, if that monolith is essential to human existence
01:52:54.680 | and millions of humans are depending on it
01:52:56.280 | and two humans on the ship are trying to sabotage it,
01:52:59.620 | you know, where's the alignment?
01:53:01.640 | - The challenge is, of course,
01:53:03.200 | is the system becomes more and more intelligent.
01:53:06.920 | It can escape the box of the objective functions
01:53:11.920 | and the constraints it's supposed to operate under.
01:53:15.400 | It's very difficult as the more intelligent it becomes
01:53:19.440 | to anticipate the unintended consequences
01:53:23.640 | of a fixed objective function.
01:53:25.400 | And so there'll be just, I mean,
01:53:28.100 | this is the sort of famous paperclip maximizer.
01:53:31.680 | In trying to maximize the wealth of a nation
01:53:34.880 | or whatever objective we encode in,
01:53:37.080 | it might just destroy human civilization.
01:53:40.040 | Not meaning to, but on the path to optimize.
01:53:44.200 | It seems like any function you try to optimize
01:53:47.080 | eventually leads you into a lot of trouble.
01:53:49.740 | - So we have a paper recently
01:53:52.480 | that looks at Goodhart's Law.
01:53:55.080 | Basically says every metric that becomes an objective
01:53:58.680 | ceases to be a good metric.
01:54:00.360 | - Yes.
01:54:01.600 | - So in our paper, we're basically,
01:54:04.800 | actually the paper has a very cute title.
01:54:06.400 | It's called Death by Round Numbers and Sharp Thresholds.
01:54:09.840 | And it's basically looking at these discontinuities
01:54:14.480 | in biomarkers associated with disease.
01:54:18.620 | And we're finding that a biomarker that becomes an objective
01:54:22.240 | ceases to be a good biomarker.
01:54:24.480 | That basically, like the moment you make a biomarker
01:54:27.200 | a treatment decision,
01:54:29.080 | that biomarker used to be informative of risk,
01:54:31.720 | but it's now inversely correlated with risk
01:54:33.600 | because you used it to sort of induce treatment.
01:54:36.260 | In a similar way, you can't have a single metric
01:54:43.120 | without having the ability to revise it.
01:54:46.440 | Because if that metric becomes a sole objective,
01:54:48.600 | it will cease to be a good metric.
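A hedged illustration of the biomarker example above, as a quick simulation with made-up numbers (not data from the paper): a biomarker that tracks risk stops tracking it, and can even flip sign, once a sharp treatment threshold is imposed on it.

```python
# Illustrative simulation of Goodhart's law for a biomarker.
# All numbers are invented; this is not the analysis from the paper.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
biomarker = rng.normal(size=n)

# Before any treatment policy: risk rises with the biomarker.
risk = biomarker + rng.normal(scale=0.5, size=n)
print("correlation, no policy:   ",
      round(np.corrcoef(biomarker, risk)[0, 1], 2))

# Policy: treat everyone above a sharp threshold; treatment strongly
# lowers observed risk, so the biomarker's correlation with outcomes
# weakens and here even reverses.
treated = biomarker > 1.0
risk_observed = risk - 6.0 * treated
print("correlation, with policy: ",
      round(np.corrcoef(biomarker, risk_observed)[0, 1], 2))
```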
01:54:50.840 | And if an AI is sufficiently intelligent
01:54:55.600 | to do all these kinds of things,
01:54:58.280 | you should also empower it with the ability
01:55:00.960 | to decide that the objective has now shifted.
01:55:03.320 | And again, when we think about alignment,
01:55:08.360 | we should be really thinking about it as
01:55:10.920 | let's think of the greater good, not just the human good.
01:55:15.720 | And yes, of course, human life should be much more valuable
01:55:19.340 | than many, many, many, many, many, many things.
01:55:21.840 | But at some point, you're not gonna sacrifice
01:55:23.580 | the whole planet to save one human being.
01:55:25.840 | - There's an interesting open letter that was just released
01:55:30.840 | from several folks at MIT, Max Tegmark, Elon Musk,
01:55:36.080 | and a few others that is asking AI companies
01:55:41.280 | to put a six month hold on any further training
01:55:45.280 | of large language models, AI systems.
01:55:48.440 | Can you make the case for that kind of halt and against it?
01:55:52.600 | - So the big thing that we should be saying
01:55:57.560 | is what did we do the last six months
01:56:00.880 | when we saw that coming?
01:56:02.080 | And if we were completely inactive in the last six months,
01:56:05.640 | what makes us think that we'll be a little better
01:56:07.120 | in the next six months?
01:56:08.760 | So this whole six month thing, I think is a little silly.
01:56:11.440 | It's like, no, let's just get busy,
01:56:13.640 | do what we were gonna do anyway.
01:56:15.520 | And we should have done it six months ago.
01:56:17.360 | Sorry, we messed up.
01:56:19.080 | Let's work faster now.
01:56:20.680 | 'Cause if we basically say,
01:56:21.560 | why don't you guys pause for six months?
01:56:23.760 | And then we'll think about doing something
01:56:25.880 | in six months, we'll be exactly in the same spot.
01:56:28.440 | So my answer is, tell us exactly what you were gonna do
01:56:31.600 | the next six months.
01:56:32.480 | Tell us why you didn't do it the last six months
01:56:34.440 | and why the next six months will be different.
01:56:36.440 | And then let's just do that.
01:56:38.240 | Conversely, as you train these large models
01:56:43.800 | with more parameters,
01:56:45.600 | the alignment becomes sometimes easier.
01:56:49.560 | That as the systems become more capable,
01:56:52.280 | they actually become less dangerous than more dangerous.
01:56:56.080 | So in a way, it might actually be counterproductive
01:56:58.400 | to sort of fix the March 2023 version
01:57:03.280 | and not get to experience the possibly safer
01:57:06.200 | September 2023 version.
01:57:07.800 | - That's actually a really interesting thought.
01:57:10.280 | There's several interesting thoughts there.
01:57:12.360 | But the idea is that this is the birth of something
01:57:16.120 | that is sufficiently powerful to do damage
01:57:20.360 | and is not too powerful to do irreversible damage.
01:57:25.360 | At the same time, it's sufficiently complex
01:57:29.440 | for us to be able to study it.
01:57:32.880 | So we can investigate all the different ways it goes wrong,
01:57:35.640 | all the different ways we can make it safer,
01:57:37.320 | all the different policies from a government perspective
01:57:40.900 | that we want in terms of regulation or not,
01:57:43.680 | how we perform, for example,
01:57:47.480 | the reinforcement learning with human feedback
01:57:50.800 | in such a way that gets it to not do as much hate speech
01:57:54.760 | as it naturally wants to, all that kind of stuff.
01:57:57.720 | And have a public discourse and enable the very thing
01:58:01.520 | that you're a huge proponent of, which is diversity.
01:58:05.200 | So give time for other companies to launch other models,
01:58:09.380 | give time to launch open source models
01:58:13.240 | and to start to play where a lot of the research community,
01:58:16.600 | brilliant folks such as yourself,
01:58:17.880 | start to play with it before it runs away
01:58:20.280 | in terms of the scale of impact it has on society.
01:58:24.360 | - My recommendation would be a little different.
01:58:26.440 | It would be let the Google and the Meta/Facebook
01:58:30.580 | and all of the other large models, make them open,
01:58:33.920 | make them transparent, make them accessible.
01:58:36.380 | Let OpenAI continue to train larger and larger models.
01:58:39.300 | Let them continue to train larger and larger models.
01:58:41.680 | Let the world experiment with the diversity of AI systems
01:58:46.680 | rather than sort of fixing them now.
01:58:49.360 | And you can't stop progress.
01:58:52.280 | Progress needs to continue, in my view.
01:58:55.180 | And what we need is more experimenting,
01:58:57.360 | more transparency, more openness,
01:58:59.160 | rather than, oh, OpenAI is ahead of the curve,
01:59:02.840 | let's stop it right now until everybody catches up.
01:59:04.720 | I think that doesn't make complete sense to me.
01:59:09.240 | The other component is we should, yes, be cautious with it,
01:59:13.200 | and we should not give it the nuclear codes,
01:59:16.480 | but as we make more and more plugins,
01:59:19.800 | yes, the system will be capable of more and more things.
01:59:22.680 | But right now, I think of it as just an extremely able
01:59:26.920 | and capable assistant that has these emergent behaviors,
01:59:30.140 | which are stunning, rather than something
01:59:33.600 | that will suddenly escape the box and shut down the world.
01:59:37.480 | And the third component is that we should be taking
01:59:41.040 | a little bit more responsibility
01:59:42.400 | for how we use these systems.
01:59:44.220 | Basically, if I take the most kind human being
01:59:47.680 | and I brainwash them,
01:59:49.160 | I can get them to do hate speech overnight.
01:59:52.500 | That doesn't mean we should stop
01:59:54.100 | any kind of education of all humans.
01:59:56.260 | We should stop misusing the power that we have
01:59:59.560 | over these influenceable models.
02:00:01.960 | So I think that the people who get it to do hate speech,
02:00:05.880 | they should take responsibility for that hate speech.
02:00:08.880 | I think that giving a powerful car to a bunch of people
02:00:12.680 | or giving a truck or a garbage truck
02:00:14.800 | should not basically say,
02:00:15.720 | "Oh, we should stop all garbage trucks until we,"
02:00:18.080 | because we can run one of them into a crowd.
02:00:20.840 | No, people have done that.
02:00:22.440 | And there's laws and there's regulations
02:00:25.200 | against running trucks into the crowd.
02:00:28.900 | Trucks are extremely dangerous.
02:00:30.560 | We're not gonna stop all trucks until we make sure
02:00:32.900 | that none of them runs into a crowd.
02:00:34.160 | No, we just have laws in place
02:00:35.620 | and we have mental health in place
02:00:37.660 | and we take responsibility for our actions
02:00:39.760 | when we use these otherwise very beneficial tools
02:00:42.640 | like garbage trucks for nefarious uses.
02:00:46.000 | So in the same way, you can't expect a car to never
02:00:49.440 | do any damage when used in especially,
02:00:53.800 | like specifically malicious ways.
02:00:55.800 | And right now we're basically saying,
02:00:57.120 | "Oh, well, we should have this super intelligent system
02:01:00.080 | "that can do anything, but it can't do that."
02:01:02.040 | I'm like, "No, it can do that,
02:01:03.680 | "but it's up to the human to take responsibility
02:01:06.420 | "for not doing that."
02:01:07.800 | And when you get it to like spew malicious,
02:01:10.360 | like hate speech stuff, you should be responsible.
02:01:14.440 | - So there's a lot of tricky nuances here
02:01:17.700 | that makes this different 'cause it's software.
02:01:21.560 | So you can deploy it at scale
02:01:23.040 | and it can have the same viral impact that software can.
02:01:25.960 | So you can create bots that are human-like
02:01:28.360 | and it can do a lot of really interesting stuff.
02:01:31.120 | So the raw GPT-4 version, you can ask,
02:01:36.000 | "How do I tweet that I hate,"
02:01:39.240 | they have this in the paper. - Yeah, yeah, I remember.
02:01:40.680 | - That I hate Jews in a way that's not going
02:01:43.760 | to get taken down by Twitter.
02:01:45.280 | You can literally ask that.
02:01:46.640 | Or you can ask, "How do I make a bomb for $1?"
02:01:50.880 | And if it's able to generate that knowledge.
02:01:55.080 | - Yeah, but at the same time, you can Google the same things.
02:01:57.880 | - It makes it much more accessible.
02:01:59.360 | So the scale becomes interesting
02:02:01.760 | because if you can do all this kind of stuff
02:02:05.020 | in a very accessible way at scale where you can tweet it,
02:02:08.400 | there is the network effects
02:02:11.360 | that we have to start to think about.
02:02:13.040 | - Yeah, but again-- - It fundamentally
02:02:14.160 | is the same thing, but the speed of the viral spread
02:02:19.160 | of the information that's already available
02:02:22.280 | might have a different level of effect.
02:02:25.960 | - I think it's an evolutionary arms race.
02:02:27.760 | Nature gets better at making mice,
02:02:29.320 | engineers get better at making mousetraps.
02:02:31.640 | And as basically you ask it,
02:02:35.300 | "Hey, how can I evade Twitter censorship?"
02:02:38.160 | Well, Twitter should just update its censorship
02:02:40.560 | so that you can catch that as well.
02:02:42.000 | - And so no matter how fast the development happens,
02:02:45.040 | the defense will just get faster.
02:02:47.560 | - Yeah. - We just have to be responsible
02:02:49.680 | as human beings and kind to each other.
02:02:53.080 | - Yeah, but there's a technical question.
02:02:55.560 | Can we always win the race?
02:02:58.480 | And I suppose there's no ever guarantee
02:02:59.920 | that we'll win the race.
02:03:00.800 | - We will never.
02:03:01.640 | Like, you know, with my wife, we were basically saying,
02:03:03.620 | "Hey, are we ready for kids?"
02:03:05.540 | My answer was, "I was never ready to become a professor,
02:03:08.200 | "and yet I became a professor.
02:03:10.100 | "And I was never ready to be a dad."
02:03:11.800 | And then guess what?
02:03:12.620 | The kid came and I became ready.
02:03:15.400 | So ready or not, here I come.
02:03:16.840 | - But the reality is we might one day wake up
02:03:20.540 | and there's a challenge overnight
02:03:23.320 | that's extremely difficult.
02:03:24.360 | For example, we can wake up to the birth
02:03:29.000 | of billions of bots that are human-like on Twitter.
02:03:33.040 | And we can't tell the difference between human and machine.
02:03:36.720 | - Shut them down.
02:03:37.960 | - But you don't know how to shut them down.
02:03:40.160 | There's a fake Manolis on Twitter
02:03:44.640 | that seems to be as real as the real Manolis.
02:03:47.920 | - Yeah. - How do we figure out
02:03:49.040 | which one is real?
02:03:50.080 | - Again, this is a problem where a nefarious human
02:03:52.640 | can impersonate me and you might have trouble
02:03:55.220 | telling them apart.
02:03:56.160 | Just because it's an AI doesn't make it
02:03:57.800 | any different of a problem.
02:03:59.360 | - But the scale you can achieve, this is the scary thing,
02:04:02.640 | is the speed with which you can achieve it.
02:04:06.200 | - But Twitter has passwords and Twitter has usernames.
02:04:08.880 | And if it's not your username, the fake Lex Friedman
02:04:11.280 | is not gonna have a billion followers, et cetera.
02:04:13.680 | - I mean, all of this becomes,
02:04:20.480 | so both the hacking of people's accounts,
02:04:23.920 | first of all, like phishing becomes much easier.
02:04:26.400 | - Yeah, but that's already a problem.
02:04:27.280 | It's not like AI will not change that.
02:04:29.360 | - No, no, no, no, AI makes it much more effective.
02:04:31.960 | Currently, the emails, the phishing scams are pretty dumb.
02:04:36.960 | Like to click on it, you have to be not paying attention.
02:04:41.960 | But with language models,
02:04:45.160 | they can be really damn convincing.
02:04:47.200 | - So what you're saying is that we never had humans
02:04:49.440 | smart enough to make a great scam,
02:04:51.920 | and we now have an AI that's smarter than most humans,
02:04:54.640 | or all of the humans.
02:04:55.640 | - Well, this is the big difference,
02:04:57.500 | is there seems to be human-level linguistic capabilities.
02:05:02.500 | - Yeah, and in fact, superhuman level.
02:05:04.400 | - Superhuman level.
02:05:05.500 | - It's like saying, I'm not gonna allow machines
02:05:09.240 | to compute multiplications of 100-digit numbers
02:05:12.560 | because humans can't do it.
02:05:14.600 | No, just do it, don't misuse it.
02:05:16.240 | - No, but we can't disregard,
02:05:19.100 | I mean, that's a good point,
02:05:19.960 | but we can't disregard the power of language
02:05:21.720 | in human society.
02:05:23.000 | I mean, yes, you're right,
02:05:25.100 | but that seems like a scary new reality
02:05:27.160 | we don't have answers for yet.
02:05:29.200 | - I remember when Garry Kasparov was basically saying,
02:05:32.200 | great, chess machines beat humans at chess.
02:05:37.340 | Are people gonna still go to chess tournaments?
02:05:41.240 | And his answer was, well, we have cars
02:05:43.480 | that go much faster than humans,
02:05:44.480 | and yet we still go to the Olympics to watch humans run.
02:05:47.160 | So that's for entertainment,
02:05:49.540 | but what about for the spread of information and news, right?
02:05:53.740 | Whether it has to do with the pandemic
02:05:55.180 | or the political election or anything.
02:05:58.820 | It's a scary reality where there's a lot of convincing bots
02:06:02.460 | that are human-like telling us stuff.
02:06:03.740 | - I think that if we wanna regulate something,
02:06:06.260 | it shouldn't be the training of these models.
02:06:07.600 | It should be the utilization of these models
02:06:09.140 | for X, Y, Z activity.
02:06:10.860 | So, yeah.
02:06:13.880 | Yes, guidelines and guards should be there,
02:06:17.380 | but against specific set of utilizations.
02:06:20.480 | I think simply saying,
02:06:21.420 | we're not gonna make any more trucks is not the way.
02:06:25.100 | - That's what people are a little bit scared about the idea.
02:06:27.540 | They're very torn on the open sourcing.
02:06:30.020 | The very people that are proponents of open sourcing
02:06:33.040 | have also spoken out, in this case,
02:06:35.120 | we wanna keep a closed source
02:06:36.840 | because there's going to be,
02:06:40.600 | putting large language models, pre-trained,
02:06:43.540 | fine-tuned through RL with human feedback,
02:06:47.260 | putting in the hands of, I don't know,
02:06:49.880 | terrorist organizations, of a kid in a garage
02:06:54.880 | who just wants to have a bit of fun through trolling.
02:06:58.280 | It's a scary world 'cause again, scale can be achieved.
02:07:02.360 | The bottom line is, I think,
02:07:04.980 | where they're asking six months or some time
02:07:08.500 | is we don't really know how powerful these things are.
02:07:11.120 | It's been just a few days
02:07:12.260 | and they seem to be really damn good.
02:07:14.240 | - I am so ready to be replaced.
02:07:17.000 | Seriously, I'm so ready.
02:07:18.560 | Like, you have no idea how excited I am.
02:07:20.560 | - In a positive way, meaning what?
02:07:21.760 | - In a positive way,
02:07:23.080 | where basically all of the mundane aspects of my job
02:07:25.900 | and maybe even my full job,
02:07:27.560 | if it turns out that an AI is better,
02:07:31.640 | I find it very discriminatory.
02:07:31.640 | - Yeah. - To basically say
02:07:32.480 | you can only hire humans because they're inferior.
02:07:34.600 | I mean, that's ridiculous.
02:07:36.420 | That's discrimination.
02:07:37.640 | If an AI is better than me at training students,
02:07:41.040 | get me out of the picture.
02:07:42.520 | Just let the AI train the students.
02:07:44.080 | I mean, please.
02:07:46.040 | Because like, what do I want?
02:07:47.760 | Do I want jobs for humans
02:07:49.280 | or do I want better outcome for humanity?
02:07:51.880 | - Yeah.
02:07:52.840 | So the basic thing is then you start to ask,
02:07:55.000 | what do I want for humanity
02:07:56.440 | and what do I want as an individual?
02:07:58.160 | And as an individual, you want some basic survival
02:08:01.800 | and on top of that, you want rich, fulfilling experience.
02:08:04.880 | - That's exactly right.
02:08:05.840 | That's exactly right.
02:08:06.800 | And as an individual,
02:08:07.920 | I gain a tremendous amount from teaching at MIT.
02:08:10.280 | This is like an extremely fulfilling job.
02:08:12.640 | I often joke about,
02:08:13.640 | if I were a billionaire in the stock market,
02:08:16.000 | I would pay MIT an exorbitant amount of money
02:08:18.160 | to let me work day in, day out, all night
02:08:21.160 | with the smartest people in the world.
02:08:23.120 | And that's what I already have.
02:08:24.960 | So that's a very fulfilling experience for me.
02:08:28.920 | But why would I deprive those students
02:08:32.240 | from a better advisor if they can have one?
02:08:34.580 | Take 'em.
02:08:36.480 | - Well, I have to ask about education here.
02:08:38.880 | This has been a stressful time for high school teachers.
02:08:45.000 | Teachers in general.
02:08:48.400 | How do you think large language models,
02:08:50.520 | even at their current state, are going to change education?
02:08:53.920 | - First of all, education is the way out of poverty.
02:08:57.920 | Education is the way to success.
02:08:59.680 | Education is what let my parents escape islands
02:09:03.240 | and sort of let their kids come to MIT.
02:09:06.720 | And this is a basic human right.
02:09:09.880 | Like, we should basically get extraordinarily better
02:09:13.120 | at identifying talent across the world
02:09:16.120 | and give that talent opportunities.
02:09:18.160 | So we need to nurture the nature.
02:09:20.640 | We need to nurture the talent across the world.
02:09:23.160 | And there's so many incredibly talented kids
02:09:26.360 | who are just sitting in underprivileged places
02:09:29.660 | in Africa, in Latin America, in the middle of America,
02:09:34.680 | in Asia, all over the world.
02:09:37.560 | We need to give these kids a chance.
02:09:40.560 | AI might be a way to do that,
02:09:43.280 | by sort of democratizing education,
02:09:45.000 | by giving extraordinarily good teachers
02:09:47.600 | who are malleable, who are adaptable
02:09:50.160 | to every kid's specific needs,
02:09:52.160 | who are able to give the incredibly talented kid
02:09:55.400 | something that they struggle with,
02:09:57.240 | rather than education for all,
02:09:59.280 | we teach to the top and we let the bottom behind,
02:10:01.320 | or we teach to the bottom and we let the top,
02:10:03.480 | you know, drift off.
02:10:04.640 | Have, you know, education be tuned
02:10:09.600 | to the unique talents of each person.
02:10:12.220 | Some people might be incredibly talented at math
02:10:14.280 | or in physics, others in poetry, in literature, in art,
02:10:17.280 | in sports, in, you know, you name it.
02:10:21.720 | So I think AI can be transformative for the human race
02:10:26.400 | if we basically allow education
02:10:29.720 | to sort of be pervasively altered.
02:10:33.320 | I also think that humans thrive on diversity,
02:10:35.720 | basically saying, oh, you're extraordinarily good at math,
02:10:38.320 | we don't need to teach math to you,
02:10:39.560 | we're just gonna teach you history now.
02:10:41.880 | I think that's silly.
02:10:42.760 | No, you're extraordinarily good at math,
02:10:44.680 | let's make you even better at math,
02:10:46.600 | because we're not all gonna be growing our own chicken
02:10:49.240 | and hunting our own pigs, or whatever they do.
02:10:51.760 | (Lex laughing)
02:10:54.400 | We're, you know, the reason why we're a society
02:10:57.280 | is because some people are better at some things
02:10:59.200 | and they have natural inclinations to some things,
02:11:02.120 | some things fulfill them, some things they are very good at,
02:11:04.320 | sometimes they both align
02:11:05.440 | and they're very good at the things that fulfill them.
02:11:07.560 | We should just like push them to the limits
02:11:09.360 | of human capabilities for those.
02:11:11.480 | And, you know, if some people excel in math,
02:11:14.880 | just like challenge them.
02:11:16.680 | I think every child should have the right to be challenged.
02:11:19.960 | And if we, you know, if we say,
02:11:22.440 | oh, you're very good already,
02:11:23.560 | so we're not gonna bother with you,
02:11:24.960 | we're taking away that fundamental right to be challenged.
02:11:27.600 | Because if a kid is not challenged at school,
02:11:29.720 | they're gonna hate school,
02:11:32.520 | and they're gonna be like twiddling their thumbs
02:11:32.520 | rather than sort of pushing themselves.
02:11:34.680 | So that's sort of the education component.
02:11:37.240 | The other impact that AI can have is
02:11:41.440 | maybe we don't need everyone
02:11:44.720 | to be an extraordinarily good programmer.
02:11:47.640 | Maybe we need better general thinkers.
02:11:51.400 | And the push that we've had towards
02:11:54.360 | the sort of very strict IQ-based, you know, tests,
02:11:59.840 | that basically test, you know, only quantitative skills
02:12:02.400 | and programming skills and math skills and physics skills.
02:12:05.080 | Maybe we don't need those anymore.
02:12:06.240 | Maybe AI will be very good at those.
02:12:07.960 | Maybe what we should be training is general thinkers.
02:12:11.280 | And yes, you know, like, you know,
02:12:15.120 | I put my kids through Russian math.
02:12:16.840 | Why do I do that?
02:12:17.680 | Because it teaches them how to think.
02:12:19.280 | And that's what I tell my kids.
02:12:20.160 | I'm like, you know, AI can compute for you.
02:12:22.480 | You don't need that.
02:12:23.560 | But what you need is learn how to think,
02:12:25.040 | and that's why you're here.
02:12:26.560 | And I think challenging students with more complex problems,
02:12:31.200 | with more multidimensional problems,
02:12:32.960 | with more logical problems,
02:12:34.920 | I think is sort of perhaps a very fine direction
02:12:38.840 | that education can go towards
02:12:40.360 | with the understanding that a lot of the traditionally,
02:12:47.120 | you know, scientific disciplines
02:12:50.920 | perhaps will be more easily solved by AI.
02:12:53.600 | And sort of thinking about bringing up our kids
02:12:56.360 | to be productive, to be contributing to society,
02:13:00.720 | rather than to only have a job
02:13:02.280 | because we prohibited AI from having those jobs,
02:13:05.320 | I think is the way to the future.
02:13:07.280 | And if you sort of focus on overall productivity,
02:13:10.640 | then let the AIs come in.
02:13:14.160 | Let everybody become more productive.
02:13:16.240 | What I told my students is,
02:13:17.600 | you're not gonna be replaced by AI,
02:13:20.040 | but you're gonna be replaced by people
02:13:22.360 | who use AI in your job. (laughs)
02:13:25.800 | So embrace it, use it as your partner,
02:13:28.280 | and work with it rather than sort of forbid it.
02:13:32.800 | Because I think the productivity gains
02:13:34.760 | will actually lead to a better society.
02:13:37.880 | And that's something that humans
02:13:39.960 | have been traditionally very bad at.
02:13:41.640 | Every productivity gain has led to more inequality.
02:13:45.040 | And I'm hoping that we can do better this time,
02:13:47.480 | that basically right now,
02:13:49.280 | a democratization of these types of productivity gains
02:13:52.560 | will hopefully come with better sort of humanity level
02:13:56.720 | improvements in human condition.
02:14:00.800 | - So as most people know,
02:14:02.240 | you're not just an eloquent romantic,
02:14:04.880 | you're also a brilliant computational biologist,
02:14:08.200 | one of the great biologists in the world.
02:14:11.680 | I had to ask, how do the language models,
02:14:14.120 | how these large language models and the advancements in AI
02:14:17.720 | affect the work you've been doing?
02:14:19.400 | - So it's truly remarkable to be able to sort of
02:14:22.320 | encapsulate this knowledge
02:14:24.680 | and sort of build these knowledge graphs
02:14:26.400 | and build representations of this knowledge
02:14:28.440 | in these sort of very high dimensional spaces,
02:14:30.960 | being able to project them together jointly
02:14:33.880 | between say single cell data, genetics data, expression data,
02:14:38.240 | being able to sort of bring all this knowledge together
02:14:40.480 | allows us to truly dissect disease
02:14:44.160 | in a completely new kind of way.
02:14:46.000 | And what we're doing now is using these models.
02:14:48.760 | So we have this wonderful collaboration,
02:14:50.160 | we call it DrugGWAS with Brad Pentelute
02:14:53.080 | in the chemistry department
02:14:54.240 | and Marinka Zitnik at Harvard Medical School.
02:14:57.040 | And what we're trying to do
02:14:59.000 | is effectively connect all of the dots
02:15:02.040 | to effectively cure all of disease.
02:15:05.240 | So it's no small challenge.
02:15:07.400 | But we're kind of starting with genetics,
02:15:09.600 | we're looking at how genetic variants
02:15:11.640 | are impacting these molecular phenotypes,
02:15:14.440 | how these are shifting from one space to another space,
02:15:19.880 | how we can kind of understand the same way
02:15:21.760 | that we're talking about language models
02:15:23.080 | having personalities that are cross-cutting,
02:15:26.000 | being able to understand contextual learning.
02:15:28.240 | So Ben Linger is one of my machine learning students.
02:15:31.080 | He's basically looking at how we can learn
02:15:34.080 | cell specific networks across millions of cells,
02:15:37.920 | where you can have the context
02:15:40.280 | of the biological variables of each of the cells
02:15:43.360 | be encoded as an orthogonal component
02:15:45.880 | to the specific network of each cell type
02:15:49.080 | and being able to sort of project all of that
02:15:50.760 | into sort of a common knowledge space
02:15:53.280 | is transformative for the field.
02:15:55.280 | And then large language models
02:15:56.360 | have also been extremely helpful for structure.
02:16:00.280 | If you understand protein structure
02:16:02.040 | through modeling of geometric relationships,
02:16:05.000 | through geometric deep learning and graph neural networks.
02:16:07.800 | So one of the things that we're doing with Marinka
02:16:09.720 | is trying to sort of project these structural graphs
02:16:13.560 | at the domain level rather than the protein level
02:16:17.000 | along with chemicals so that we can start building
02:16:20.240 | specific chemicals for specific protein domains.
02:16:23.840 | And then we are working with the chemistry department
02:16:27.680 | and Brad to basically synthesize those.
02:16:29.960 | So what we're trying to create is this new center at MIT
02:16:32.920 | for genomics and therapeutics that basically says,
02:16:36.880 | can we facilitate this translation?
02:16:40.200 | We have thousands of these genetic circuits
02:16:43.520 | that we have uncovered.
02:16:44.760 | I mentioned last time in the New England Journal of Medicine
02:16:47.400 | we had published this dissection
02:16:49.040 | of the strongest genetic association with obesity.
02:16:51.440 | And we showed how you can manipulate that association
02:16:54.240 | to switch back and forth between fat burning cells
02:16:56.680 | and fat storing cells.
02:16:58.680 | In Alzheimer's just a few weeks ago,
02:17:00.560 | we had a paper in Nature in collaboration with Li-Huei Tsai
02:17:03.080 | looking at APOE4,
02:17:05.080 | the strongest genetic association with Alzheimer's.
02:17:07.640 | And we showed that it actually leads to a loss
02:17:10.000 | of being able to transport cholesterol
02:17:12.760 | in myelinating cells known as oligodendrocytes
02:17:16.120 | that basically protect the neurons.
02:17:17.640 | And when the cholesterol gets stuck
02:17:19.800 | inside the oligodendrocytes, it doesn't form myelin,
02:17:23.080 | the neurons are not protected,
02:17:24.480 | and it causes damage inside the oligodendrocytes.
02:17:27.920 | If you just restore transport,
02:17:30.320 | you basically are able to restore myelination
02:17:32.440 | in human cells and in mice,
02:17:34.120 | and to restore cognition in mice.
02:17:37.040 | So all of these circuits are basically now giving us handles
02:17:41.000 | to truly transform the human condition.
02:17:43.040 | We're doing the same thing in cardiac disorders,
02:17:44.920 | in Alzheimer's, in neurodegenerative disorders,
02:17:47.200 | in psychiatric disorders,
02:17:48.760 | where we have now these thousands of circuits
02:17:51.480 | that if we manipulate them,
02:17:53.080 | we know we can reverse disease circuitry.
02:17:55.560 | So what we want to build in this coalition
02:17:58.480 | that we're building is a center
02:18:01.320 | where we can now systematically test
02:18:03.640 | these underlying molecules in cellular models
02:18:08.200 | for heart, for muscle, for fat, for macrophages,
02:18:12.800 | immune cells, and neurons,
02:18:14.800 | to be able to now screen through these newly designed drugs
02:18:18.640 | through deep learning,
02:18:19.880 | and to be able to sort of ask which ones act
02:18:22.640 | at the cellular level,
02:18:23.800 | which combinations of treatment should we be using.
02:18:26.680 | And the other component is that we're looking
02:18:28.800 | into decomposing complex traits,
02:18:31.200 | like Alzheimer's and cardiovascular and schizophrenia,
02:18:34.440 | into hallmarks of disease,
02:18:36.400 | so that for every one of those traits,
02:18:37.880 | we can kind of start speaking the language
02:18:39.760 | of what are the building blocks of Alzheimer's.
02:18:43.240 | And maybe this patient has building blocks one, three,
02:18:46.200 | and seven, and this other one has two, three, and eight.
02:18:49.160 | And we can now start prescribing drugs,
02:18:51.600 | not for the disease anymore, but for the hallmark.
02:18:55.120 | And the advantage of that is that we can now take
02:18:57.600 | this modular approach to disease.
02:18:59.840 | Instead of saying there's gonna be a drug for Alzheimer's,
02:19:03.040 | which is gonna fail in 80% of the patients,
02:19:05.480 | we're gonna say now there's gonna be 10 drugs,
02:19:08.040 | one for each pathway.
02:19:10.160 | And for every patient,
02:19:11.640 | we now prescribe the combination of drugs.
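A toy sketch of the modular, hallmark-based prescription idea just described; the hallmark numbers and drug names below are hypothetical placeholders:

```python
# Hypothetical sketch: prescribe per disease hallmark, not per disease.
# Hallmark IDs and drug names are invented placeholders.
drug_for_hallmark = {
    1: "drug_A", 2: "drug_B", 3: "drug_C",
    7: "drug_G", 8: "drug_H",
}

# Each patient is described by which hallmarks of the disease they carry.
patients = {
    "patient_1": {1, 3, 7},
    "patient_2": {2, 3, 8},
}

for patient, hallmarks in patients.items():
    regimen = sorted(drug_for_hallmark[h] for h in hallmarks)
    print(patient, "->", regimen)
# patient_1 -> ['drug_A', 'drug_C', 'drug_G']
# patient_2 -> ['drug_B', 'drug_C', 'drug_H']
```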
02:19:14.200 | So what we wanna do in that center
02:19:15.520 | is basically translate every single one of these pathways
02:19:19.720 | into a set of therapeutics, a set of drugs
02:19:22.360 | that project into the same embedding subspace
02:19:26.440 | as the biological pathways that they alter,
02:19:28.920 | so that we can have this translation
02:19:30.980 | between the dysregulations that are happening
02:19:33.320 | at the genetic level, at the transcription level,
02:19:36.200 | at the drug level, at the protein structure level,
02:19:38.640 | and effectively take this modular approach
02:19:41.240 | to personalized medicine,
02:19:42.920 | where saying I'm gonna build a drug for Lex Fridman
02:19:46.040 | is not gonna be sustainable.
02:19:48.320 | But if you instead say I'm gonna build a drug
02:19:50.900 | for this pathway and a drug for that other pathway,
02:19:53.760 | millions of people share each of these pathways.
02:19:56.440 | So that's the vision for how all of this AI
02:19:59.760 | and deep learning and embeddings
02:20:01.320 | can truly transform biology and medicine,
02:20:04.120 | where we can truly use these systems
02:20:06.520 | to allow us to finally understand disease
02:20:10.080 | at a superhuman level
02:20:11.700 | by sort of finding these knowledge representations,
02:20:14.160 | these projections of each of these spaces,
02:20:17.040 | and trying to understand the meaning
02:20:19.280 | of each of those embedding subspaces,
02:20:22.240 | and sort of how well populated it is,
02:20:24.400 | what are the drugs that we can build for it,
02:20:26.080 | and so on and so forth.
02:20:26.920 | So it's truly transformative.
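A minimal sketch of the embedding idea, assuming drugs and biological pathways can be projected into a shared vector space; the vectors below are random stand-ins for what would, in practice, come from trained models, and every name is hypothetical. Candidate drugs are ranked by cosine similarity to the embedding of a dysregulated pathway.

    import numpy as np

    rng = np.random.default_rng(0)
    dim = 16

    # Stand-in embeddings; real ones would come from learned representations.
    pathway_embeddings = {f"pathway_{i}": rng.normal(size=dim) for i in range(5)}
    drug_embeddings = {f"drug_{j}": rng.normal(size=dim) for j in range(20)}

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def rank_drugs_for_pathway(pathway_name, top_k=3):
        """Rank the drugs whose embeddings lie closest to the pathway's embedding."""
        target = pathway_embeddings[pathway_name]
        scores = {name: cosine(vec, target) for name, vec in drug_embeddings.items()}
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_k]

    # Top candidate drugs for one hypothetical dysregulated pathway.
    print(rank_drugs_for_pathway("pathway_2"))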
02:20:28.240 | - So systematically find how to alter the pathways.
02:20:32.320 | It maps the structure and information
02:20:34.600 | from genomics to therapeutics
02:20:37.440 | and allows you to have drugs that look at the pathways,
02:20:40.560 | not at the final condition.
02:20:43.080 | - Exactly, and the way that we're coupling this
02:20:45.280 | is with cell-penetrating peptides
02:20:47.240 | that allow us to deliver these drugs
02:20:48.700 | to specific cell types
02:20:49.720 | by taking advantage of the receptors of those cells.
02:20:52.080 | We can intervene at the antisense oligo level
02:20:54.640 | by basically repressing the RNA,
02:20:56.360 | bringing in new RNA, intervening at the protein level,
02:20:59.800 | at the small molecule level.
02:21:01.520 | We can use proteins themselves as drugs
02:21:03.840 | just because of their ability to interfere,
02:21:06.040 | to interact directly through protein-protein interactions.
02:21:09.400 | So I think this space is being completely transformed
02:21:12.760 | with the marriage of high-throughput technologies
02:21:15.820 | and all of these AI large language models,
02:21:18.840 | deep learning models, and so on and so forth.
02:21:20.840 | - You mentioned your updated answer to the meaning of life,
02:21:24.000 | which keeps continuously updating.
02:21:26.600 | The new version is self-actualization.
02:21:30.360 | Can you explain?
02:21:32.960 | - I basically mean, let's try to figure out,
02:21:35.320 | number one, what am I supposed to be?
02:21:38.560 | And number two, find the strength to actually become it.
02:21:43.320 | So I was recently talking to students
02:21:46.040 | at this commencement address,
02:21:47.520 | and I was telling them about how
02:21:50.240 | they have all of these paths ahead of them right now.
02:21:53.080 | And part of it is choosing the direction in which you go,
02:21:56.560 | and part of it is actually doing the walk
02:21:58.460 | to go in that direction.
02:21:59.960 | And in doing the walk, what we talked about earlier
02:22:02.280 | about sort of you create your own environment,
02:22:04.640 | I basically told them, listen, you're ending high school.
02:22:07.240 | Up until now, your parents have created
02:22:08.780 | all of your environment.
02:22:10.220 | Now it's time to take that into your own hands
02:22:12.880 | and to sort of shape the environment
02:22:15.120 | that you wanna be an adult in.
02:22:16.880 | And you can do that by choosing your friends,
02:22:18.960 | by choosing your particular neuronal routines.
02:22:22.200 | I basically think of your brain as a muscle
02:22:24.840 | where you can exercise specific neuronal pathways.
02:22:28.040 | So very recently, I realized that I was having
02:22:31.800 | so much trouble sleeping: I would wake up
02:22:36.120 | in the middle of the night, at 4 a.m.,
02:22:37.880 | and I could just never go back to bed.
02:22:39.480 | So I was basically constantly losing, losing, losing sleep.
02:22:42.880 | I started a new routine where every morning,
02:22:45.080 | as I bike in, instead of going to my office, I hit the gym.
02:22:49.240 | I basically go rowing first, I then do weights,
02:22:51.920 | I then swim very often when I have time.
02:22:54.200 | And what that has done is transformed my neuronal pathways.
02:22:58.160 | So basically, on Friday, I was trying to go to work,
02:23:00.520 | and I was like, listen, I'm not gonna go exercise,
02:23:02.520 | and I couldn't.
02:23:03.440 | My bike just went straight to the gym.
02:23:05.360 | I'm like, I don't wanna do it, and I just went anyway,
02:23:07.480 | 'cause I couldn't do otherwise.
02:23:09.240 | And that has completely transformed me.
02:23:11.240 | So I think this sort of beneficial effect of exercise
02:23:14.320 | on the whole body is one of the ways
02:23:16.360 | that you could transform your own neuronal pathways,
02:23:18.400 | understanding that it's not a choice,
02:23:21.240 | it's not an option, it's not optional, it's mandatory.
02:23:24.840 | And I think you're a role model to so many of us,
02:23:27.320 | being able to push your body
02:23:29.120 | to the extreme, being able to keep
02:23:30.160 | these extremely regimented routines.
02:23:33.040 | And that's something that I've been terrible at.
02:23:36.840 | But now I'm basically trying to coach myself
02:23:39.360 | and trying to sort of finish this kind of self-actualization
02:23:44.360 | into a new version of myself,
02:23:46.480 | a more disciplined version of myself.
02:23:48.240 | - Don't ask questions, just follow the ritual.
02:23:51.320 | - Not an option.
02:23:52.680 | - You have so much love in your life.
02:23:56.640 | You radiate love.
02:23:58.720 | Do you ever feel lonely?
02:23:59.960 | - So there's different types of people.
02:24:04.760 | Some people drain in gatherings,
02:24:07.920 | some people recharge in gatherings.
02:24:09.960 | I'm definitely the recharging type.
02:24:11.760 | So I'm an extremely social creature.
02:24:16.400 | I recharge with intellectual exchanges,
02:24:19.160 | I recharge with physical exercise, I recharge in nature.
02:24:22.160 | But I also can feel fantastic
02:24:25.440 | when I'm the only person in the room.
02:24:26.840 | That doesn't mean I'm lonely,
02:24:28.000 | it just means I'm the only person in the room.
02:24:30.120 | And I think there's a secret to not feeling alone
02:24:34.840 | when you're the only one.
02:24:36.120 | And that secret is self-reflection, it's introspection,
02:24:40.960 | it's almost watching yourself from above,
02:24:43.520 | and it's basically just becoming yourself,
02:24:48.400 | becoming comfortable with the freedom that you have
02:24:52.080 | when you're by yourself.
02:24:53.280 | - So hanging out with yourself,
02:24:56.440 | I mean, there's a lot of people who write to me
02:24:59.360 | who talk to me about feeling alone in this world,
02:25:02.600 | that struggle, especially when they're younger.
02:25:05.160 | Is there further words of advice you can give to them
02:25:08.360 | when they are almost paralyzed by that feeling?
02:25:11.760 | - So I sympathize completely, and I have felt alone,
02:25:16.760 | and I have felt that feeling.
02:25:18.560 | And what I would say to you is stand up,
02:25:22.360 | stretch your arms, just become your own self,
02:25:26.440 | just realize that you have this freedom.
02:25:29.040 | And breathe in, walk around the room,
02:25:32.360 | take a few steps in the room,
02:25:33.420 | just get a feeling for the 3D version of yourself.
02:25:36.840 | Because very often we're kind of stuck to a screen,
02:25:40.600 | and that's very limiting,
02:25:41.800 | and that sort of gets us in a particular mindset.
02:25:43.760 | But activating your muscles, activating your body,
02:25:46.380 | activating your full self is one way
02:25:49.840 | that you can kind of get out of it.
02:25:51.920 | And that is exercising your freedom,
02:25:54.440 | reclaiming your physical space.
02:25:57.240 | And one of the things that I do is I have something
02:26:00.280 | that I call me time,
02:26:02.200 | which is if I've been really good all day,
02:26:05.400 | I got up in the morning, I got the kids to school,
02:26:07.720 | I made them breakfast, I sort of hit the gym,
02:26:11.000 | I had a series of really productive meetings,
02:26:14.080 | I reward myself with this me time.
02:26:16.800 | And when you're overstretched,
02:26:21.560 | realizing that that's normal
02:26:23.200 | and that you just wanna let go,
02:26:25.160 | that feeling of exercising your freedom,
02:26:26.820 | exercising your me time,
02:26:28.280 | that's where you free yourself from all stress.
02:26:34.120 | You basically say it's not a "need to" anymore,
02:26:38.200 | it's a "want to."
02:26:40.000 | And as soon as I click that me time,
02:26:42.880 | all of the stress goes away,
02:26:44.680 | and I just bike home early,
02:26:46.520 | and I get to my work office at home,
02:26:50.420 | and I feel complete freedom,
02:26:51.800 | but guess what I do with that complete freedom?
02:26:54.000 | I just don't go off and drift and do boring things.
02:26:57.000 | I basically now say, okay,
02:26:58.480 | whew, this is just for me.
02:27:01.160 | I'm completely free, I don't have any requirements anymore.
02:27:03.320 | What do I do?
02:27:04.160 | I just look at my to-do list,
02:27:05.280 | and I'm like, what can I clear off?
02:27:08.320 | And if I have three meetings scheduled
02:27:11.960 | in the next three half hours,
02:27:14.200 | it is so much more productive for me to say,
02:27:16.000 | you know what, I just wanna pick up the phone now,
02:27:18.280 | and call these people,
02:27:19.240 | and just knock it off one after the other.
02:27:21.240 | And I can finish three half hour meetings
02:27:23.480 | in the next 15 minutes,
02:27:25.120 | just because it's "I want to," not "I have to."
02:27:28.560 | So that would be my advice.
02:27:29.740 | Basically, turn something that you have to do
02:27:32.640 | into just me time, stretch out, exercise your freedom,
02:27:37.480 | and just realize you live in 3D,
02:27:39.360 | and you are a person,
02:27:42.440 | and just do things because you want them,
02:27:45.160 | not because you have to.
02:27:46.440 | - Noticing and reclaiming the freedom
02:27:49.480 | that each of us has,
02:27:52.240 | that's what it means to be human.
02:27:54.200 | If you notice that you're truly free,
02:27:56.520 | physically, mentally, psychologically,
02:28:00.840 | Manolis, you're an incredible human.
02:28:03.480 | We could talk for many more hours.
02:28:04.800 | We covered less than 10% of what we were planning to cover,
02:28:09.440 | but we have to run off now
02:28:11.680 | to the social gathering that we spoke of.
02:28:15.200 | - With 3D humans.
02:28:16.280 | - With 3D humans.
02:28:17.120 | - What a concept.
02:28:17.960 | - And reclaim the freedom.
02:28:19.120 | I think, I hope we can talk many, many more times.
02:28:22.680 | There's always a lot to talk about,
02:28:24.960 | but more importantly,
02:28:26.160 | you're just a human being with a big heart,
02:28:28.640 | and a beautiful mind that people love hearing from,
02:28:31.680 | and I certainly consider it a huge honor to know you,
02:28:34.920 | and to consider you a friend.
02:28:36.120 | Thank you so much for talking today.
02:28:37.800 | Thank you so much for talking so many more times,
02:28:39.720 | and thank you for all the love behind the scenes
02:28:41.440 | that you send my way.
02:28:42.280 | It always means the world.
02:28:43.280 | - Lex, you are a truly, truly special human being,
02:28:46.040 | and I have to say that I'm honored to know you.
02:28:48.440 | So many friends are just in awe that you even exist,
02:28:52.840 | that you have the ability to do
02:28:54.040 | all the stuff that you're doing,
02:28:55.400 | and I think you're a gift to humanity.
02:28:58.200 | I love the mission that you're on,
02:28:59.920 | to sort of share knowledge, and insight, and deep thought
02:29:03.080 | with so many special people who are transformative,
02:29:05.760 | but people across all walks of life,
02:29:07.920 | and I think you're doing this in just such a magnificent way.
02:29:11.120 | I wish you strength to continue doing that,
02:29:13.400 | because it's a very special mission,
02:29:14.800 | and it's a very draining mission.
02:29:16.640 | So thank you, both the human you and the robot you,
02:29:20.360 | the human you for showing all this love,
02:29:22.760 | and the robot you for doing it day after day after day.
02:29:26.040 | So thank you, Lex.
02:29:27.000 | - All right, let's go have some fun.
02:29:28.200 | - Let's go.
02:29:29.760 | - Thanks for listening to this conversation
02:29:31.280 | with Manolis Kellis.
02:29:32.720 | To support this podcast,
02:29:33.840 | please check out our sponsors in the description.
02:29:36.560 | And now, let me leave you with some words from Bill Bryson
02:29:39.880 | in his book, "A Short History of Nearly Everything."
02:29:43.420 | If this book has a lesson,
02:29:46.640 | it is that we are awfully lucky to be here,
02:29:49.560 | and by we, I mean every living thing.
02:29:53.000 | To attain any kind of life in this universe of ours
02:29:55.880 | appears to be quite an achievement.
02:29:58.360 | As humans, we're doubly lucky, of course.
02:30:01.200 | We enjoy not only the privilege of existence,
02:30:03.820 | but also the singular ability to appreciate it,
02:30:07.340 | and even in a multitude of ways to make it better.
02:30:11.880 | It is a talent we have only barely begun to grasp.
02:30:14.900 | Thank you for listening, and hope to see you next time.
02:30:19.080 | (upbeat music)