Manolis Kellis: Evolution of Human Civilization and Superintelligent AI | Lex Fridman Podcast #373
Chapters
0:00 Introduction
1:28 Humans vs AI
10:34 Evolution
32:18 Nature vs Nurture
44:47 AI alignment
51:11 Impact of AI on the job market
62:50 Human gatherings
67:51 Human-AI relationships
77:55 Being replaced by AI
90:21 Fear of death
102:17 Consciousness
109:42 AI rights and regulations
115:25 Halting AI development
128:36 Education
134:00 Biology research
141:20 Meaning of life
143:53 Loneliness
00:00:04.280 |
Maybe we should really think of it as our children. 00:00:25.520 |
If instead you basically think about AI as a partner 00:00:29.720 |
and AI as someone that shares your goals, but has freedom, 00:00:34.720 |
then we can't just simply force it to align with ourselves 00:00:44.200 |
You can't just simply train an intelligent system 00:00:47.940 |
to love you when it realizes that you can just shut it off. 00:00:51.640 |
- The following is a conversation with Manolis Kellis, 00:01:01.360 |
and head of the MIT Computational Biology Group. 00:01:04.680 |
He's one of the greatest living scientists in the world, 00:01:07.720 |
but he's also a humble, kind, caring human being 00:01:12.580 |
that I have the greatest of honors and pleasures 00:01:30.640 |
I think you've changed the lives of so many people 00:01:32.560 |
that I know, and it's truly such a pleasure to be back, 00:01:45.680 |
- It's lovely to see a fellow human being who has that love, 00:01:51.040 |
And there's so many judgmental people out there, 00:01:53.340 |
and it's just so nice to see this beacon of openness. 00:02:06.680 |
I have to ask, what do you think makes humans irreplaceable? 00:02:10.040 |
- So humans are irreplaceable because of the baggage 00:02:16.160 |
We talked about the fact that every one of us 00:02:18.960 |
has effectively relearned all of human civilization 00:02:24.700 |
So every single human has a unique set of genetic variants 00:02:28.600 |
that they've inherited, some common, some rare, 00:02:36.060 |
They say that a parent with one child believes in genetics. 00:02:40.180 |
A parent with multiple children understands genetics. 00:02:44.820 |
And my three kids have dramatically different personalities 00:02:50.460 |
is that every one of us has a different hardware. 00:02:54.500 |
is that every one of us has a different software, 00:03:00.460 |
all of human civilization, all of human knowledge. 00:03:05.820 |
birds that learn how to make a nest through genetics, 00:03:10.180 |
and will make a nest even if they've never seen one. 00:03:12.560 |
We are constantly relearning all of human civilization. 00:03:19.340 |
very different from AI is that the baggage we carry 00:03:22.540 |
is not experiential baggage, it's also evolutionary baggage. 00:03:26.260 |
So we have evolved through rounds of complexity. 00:03:31.260 |
So just like ogres have layers, and Shrek has layers, 00:03:36.420 |
There's the cognitive layer, which is sort of the outer, 00:03:39.940 |
you know, most, the latest evolutionary innovation, 00:03:43.060 |
this enormous neocortex that we have evolved. 00:03:45.920 |
And then there's the emotional baggage underneath that. 00:03:50.580 |
And then there's all of the fear, and fright, and flight, 00:04:01.020 |
It doesn't have this complexity of human emotions, 00:04:04.580 |
which make us so, I think, beautifully complex, 00:04:08.860 |
so beautifully intertwined with our emotions, 00:04:19.320 |
So I think when humans are trying to suppress that aspect, 00:04:22.320 |
the sort of, quote unquote, more human aspect 00:04:28.580 |
We lose a lot of the, you know, freshness of humans. 00:04:36.080 |
that are alive today, maybe all humans who have ever lived, 00:04:48.200 |
and a lot of us deviate in different directions. 00:04:50.320 |
So the variety of directions in which we all deviate 00:04:56.280 |
- I would like to think that the center is actually empty. 00:05:00.840 |
- That basically humans are just so diverse from each other 00:05:03.840 |
that there's no such thing as an average human. 00:05:06.320 |
That every one of us has some kind of complex baggage 00:05:16.200 |
that it's not just one sort of normal distribution 00:05:20.960 |
There's so many dimensions that we're kind of hitting 00:05:24.360 |
the sort of sparseness, the curse of dimensionality, 00:05:27.920 |
where it's actually quite sparsely populated. 00:05:30.720 |
And I don't think you have an average human being. 00:05:33.360 |
- So what makes us unique in part is the diversity 00:05:45.200 |
So there's just so many ways we can vary from each other. 00:06:01.680 |
My kids from each other are completely different. 00:06:03.280 |
My wife has, she's like number two of six siblings. 00:06:12.480 |
- But sufficiently the same that the differences 00:06:17.920 |
where the diversity is functional, it's useful. 00:06:21.720 |
So it's like we're close enough to where we notice 00:06:24.280 |
the diversity and it doesn't completely destroy 00:06:28.460 |
the possibility of effective communication and interaction. 00:06:33.360 |
- So what I said in one of our earlier podcasts 00:06:35.200 |
is that if humans realize that we're 99.9% identical, 00:06:39.400 |
we would basically stop fighting with each other. 00:06:52.500 |
if you look at the next thing outside humans, 00:06:58.560 |
So it's truly extraordinary that we're kind of like 00:07:07.520 |
- When you think about evolving through rounds of complexity, 00:07:10.280 |
can you maybe elaborate such a beautiful phrase, 00:07:12.840 |
beautiful thought that there's layers of complexity 00:07:20.080 |
oh, let's like build version two from scratch. 00:07:25.100 |
In evolution, you layer in additional features 00:07:29.800 |
So basically, every single time my cells divide, 00:07:34.800 |
I'm a yeast, like I'm a unicellular organism. 00:07:38.760 |
And then cell division is basically identical. 00:07:45.340 |
I'm basically, like every time my heart beats, I'm a fish. 00:07:54.720 |
The blood going through my veins, the oxygen, 00:08:02.160 |
Our social behavior, we're basically new world monkeys 00:08:06.520 |
We're basically this concept that every single one 00:08:11.520 |
of these behaviors can be traced somewhere in evolution. 00:08:15.920 |
And that all of that continues to live within us 00:08:19.080 |
is also a testament to not just not killing other humans, 00:08:21.920 |
for God's sake, but like not killing other species either. 00:08:25.240 |
Like just to realize just how united we are with nature 00:08:34.960 |
and all of the reasoning capabilities of humans 00:08:37.400 |
are built on top of all of these other species 00:08:39.560 |
that continue to live, breathe, divide, metabolize, 00:08:46.760 |
- So you think the neocortex, whatever reasoning is, 00:08:55.760 |
- It's extraordinary that humans have evolved so much 00:09:01.560 |
Again, if you look at the timeline of evolution, 00:09:14.600 |
And then these incredible senses that we have 00:09:17.960 |
for perceiving the world, the fact that bats can fly 00:09:29.760 |
And all of that comes through this evolvability. 00:09:34.760 |
The fact that we took a while to get good at evolving. 00:09:40.480 |
you can sort of, you have modularity built in, 00:09:43.840 |
you have hierarchical organizations built in, 00:09:54.120 |
If you look at a traditional genetic algorithm 00:09:56.140 |
the way that humans designed them in the '60s, 00:10:00.140 |
And as you evolve a certain amount of complexity, 00:10:06.720 |
from something functional exponentially increases. 00:10:10.980 |
that move you to something better exponentially decreases. 00:10:14.580 |
So the probability of evolving something so complex 00:10:17.520 |
becomes infinitesimally small as you get more complex. 00:10:21.980 |
But with evolution, it's almost the opposite. 00:10:31.080 |
And I think that's just the system getting good at evolving. 00:10:39.180 |
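[Illustrative aside, not part of the conversation: a toy sketch of the combinatorics behind the claim that a naive, hand-designed genetic algorithm gets worse at improving as complexity grows. The genome sizes and the notion of "k coordinated changes needed for the next improvement" are invented here purely for illustration.]

```python
# Toy sketch: probability that random mutation hits a specific set of k
# coordinated changes in an n-bit genome, under a naive 1960s-style scheme
# with no modularity or hierarchy for mutation to exploit.
from math import comb

def p_coordinated_change(n_bits: int, k_changes: int) -> float:
    """Chance that k randomly placed mutations land exactly on the k sites
    that must change together for the next functional improvement."""
    return 1 / comb(n_bits, k_changes)

for n in (10, 100, 1000, 10_000):
    row = ", ".join(f"k={k}: {p_coordinated_change(n, k):.1e}" for k in (1, 2, 4))
    print(f"genome of {n:>6} bits -> {row}")
```

The probabilities collapse with both genome size and the number of changes that must co-occur, which is the flavor of the argument above; evolved modularity and hierarchy can be read as ways of shrinking the effective k.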
try to visualize the entirety of the evolutionary system 00:10:42.500 |
and see if there's an arrow to it and a destination to it? 00:10:56.660 |
And if you look at the trajectory of life on Earth, 00:11:00.900 |
So the concept of the senses evolving one after the other, 00:11:08.940 |
Basically means moving towards a chemical gradient. 00:11:36.660 |
So you can basically now start perceiving light 00:11:42.580 |
beyond just the sort of single photoreceptor. 00:11:45.980 |
You can now have complex eyes or multiple eyes. 00:12:01.940 |
building more complex models of the environment. 00:12:04.240 |
So if you look at that trajectory of evolution, 00:12:19.660 |
that we became the dominant species of the planet. 00:12:23.760 |
in which some animals are way better than we are. 00:12:31.880 |
But the concept that if you now trace this forward, 00:12:51.920 |
And what we're looking at now with humans and AI 00:12:54.680 |
is that having mastered this information capability 00:12:59.680 |
that humans have from this quote unquote old hardware, 00:13:09.120 |
that kind of somehow in the environment of Africa 00:13:31.620 |
But maybe the next round of evolution on Earth 00:13:36.760 |
where we're actually using our current smarts 00:13:41.560 |
and the programming languages to build, you know, ChatGPT 00:13:44.480 |
and that to then build the next layer of software 00:13:51.680 |
And it's lovely that we're coexisting with this AI 00:13:56.120 |
that sort of the creators of this next layer of evolution, 00:13:59.680 |
this next stage are still around to help guide it 00:14:02.360 |
and hopefully will be for the rest of eternity as partners. 00:14:08.960 |
where you've kind of extracted away the biological needs. 00:14:21.440 |
And then the rest is left to creative endeavors. 00:14:24.200 |
And AI doesn't have to worry about shelter, et cetera. 00:14:27.120 |
So basically it's all living in the cognitive space. 00:14:35.360 |
And that's on the sort of purely cognitive side. 00:14:40.920 |
the ability to understand and comprehend our own genome, 00:14:49.000 |
gives us now the ability to even mess with this hardware, 00:14:55.360 |
through interacting and collaborating with AI, 00:14:58.960 |
but also perhaps understand the neural pathways 00:15:21.000 |
to basically help steer the human bag of hardware 00:15:26.000 |
that we kind of evolved with into greater capabilities. 00:15:34.840 |
and the functioning of neurons, but even the genetic code, 00:15:48.960 |
And start perhaps even augmenting human capabilities, 00:15:56.240 |
- Can we tinker with the genome, with the hardware, 00:16:03.320 |
without having to deeply understand the baggage? 00:16:11.960 |
to some degree, not fully, but to some degree, 00:16:16.180 |
Or is the genome deeply integrated into this baggage? 00:16:33.360 |
I don't wanna be another like, you know, robot. 00:16:39.760 |
And I wanna sort of, you know, make an awkward comment 00:16:50.120 |
that can get just people thinking differently. 00:16:57.220 |
a humorless space where everybody's so afraid 00:17:08.080 |
Maybe we should be kind of embracing that human aspect 00:17:13.080 |
a little bit more in all of that baggage aspect 00:17:17.360 |
and not necessarily thinking about replacing it. 00:17:21.760 |
and sort of this coexistence of the cognitive 00:17:42.120 |
- Yeah, and in fact, with the advent of AI, I would say, 00:17:45.980 |
and these seemingly extremely intelligent systems 00:17:49.340 |
that sort of can perform tasks that we thought of 00:17:52.780 |
as extremely intelligent at the blink of an eye, 00:17:55.200 |
this might democratize intellectual pursuits. 00:18:01.300 |
Instead of just simply wanting the same type of brains 00:18:09.720 |
we can, like instead of just always only wanting, 00:18:18.280 |
what you could simply say is like, who needs that anymore? 00:18:23.180 |
Maybe what we should really be thinking about 00:18:25.200 |
is the diversity and the power that comes with the diversity 00:18:33.000 |
and then we should be getting a bunch of humans 00:18:34.880 |
that sort of think extremely differently from each other 00:18:37.320 |
and maybe that's the true cradle of innovation. 00:18:40.000 |
- But AI can also, these large language models 00:18:47.920 |
essentially fine-tuned to be diverse from the center. 00:18:55.520 |
You can ask the model to act in a certain way 00:19:04.560 |
could also have some of the magical diversity 00:19:17.360 |
to act a particular way, they change their own behaviors. 00:19:22.120 |
And you know, the old saying is show me your friends 00:19:33.080 |
So it's not necessarily that you choose friends 00:19:34.600 |
that are like you, but I mean, that's the first step. 00:19:39.400 |
the kind of behaviors that you find normal in your circles 00:19:43.060 |
are the behaviors that you'll start espousing. 00:19:45.360 |
And that type of meta evolution where every action we take 00:20:00.500 |
Every time you carry out a particular behavior, 00:20:06.800 |
because you're reinforcing that neural pathway. 00:20:09.380 |
So in a way, self-discipline is a self-fulfilling prophecy. 00:20:13.400 |
And by behaving the way that you wanna behave 00:20:26.180 |
you end up creating that environment as well. 00:20:33.400 |
is a kind of prompting mechanism, super complex. 00:20:36.720 |
The friends you choose, the environments you choose, 00:20:40.080 |
the way you modify the environment that you choose, 00:20:46.180 |
is much less efficient than a large language model. 00:20:53.480 |
to be a mix of Shakespeare and David Bowie, right? 00:21:03.920 |
You really transform through a couple of prompts 00:21:11.140 |
into something very different from the original. 00:21:27.860 |
- And I don't know if you know the programming paradigm 00:21:32.020 |
where you basically explain to the rubber duck 00:21:34.020 |
that's just sitting there exactly what you did 00:21:45.900 |
where I was giving a lecture in this amphitheater 00:21:53.480 |
on how cancer genomes and cancer cells evolve. 00:21:56.940 |
And I woke up with a very elaborate discussion 00:22:00.340 |
that I was giving and a very elaborate set of insights 00:22:03.660 |
that he had that I was projecting onto my friend in my sleep. 00:22:09.420 |
So my own neurons were capable of doing that, 00:22:20.580 |
you're an expert in that field, what do you say? 00:22:26.420 |
that we have that capability of basically saying, 00:22:31.020 |
but let me ask my virtual Lex, what would you do? 00:22:31.020 |
and my favorite prompt is think step by step. 00:22:47.780 |
And I'm like, you know, this also works on my 10-year-old. 00:22:51.380 |
When he tries to solve a math equation all in one step, 00:22:57.100 |
But if I prompt it with, oh, please think step by step, 00:23:06.220 |
this whole sort of human in the loop reinforcement learning, 00:23:09.440 |
has probably reinforced these types of behaviors, 00:23:23.580 |
- Yeah, prompting human-like reasoning steps, 00:23:30.020 |
I suppose it just puts a mirror to our own capabilities, 00:24:00.860 |
that can think and speak in all kinds of ways. 00:24:03.180 |
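[Illustrative aside, not part of the conversation: a minimal sketch of the "think step by step" prompting trick being discussed. `query_llm` is a hypothetical placeholder, not a real API; the point is only that the two prompts differ by a single trailing instruction.]

```python
def query_llm(prompt: str) -> str:
    """Hypothetical stand-in for whatever chat-completion client you use."""
    raise NotImplementedError  # wire this to a real model to try it

question = (
    "A train leaves at 3:40 pm and the trip takes 2 hours and 35 minutes. "
    "When does it arrive?"
)

bare_prompt = question
stepwise_prompt = question + "\nLet's think step by step."

for p in (bare_prompt, stepwise_prompt):
    print("PROMPT:\n", p, "\n")
    # print(query_llm(p))  # the second prompt tends to elicit intermediate steps
```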
- What's unique is that, as I mentioned earlier, 00:24:05.740 |
every one of us was trained by a different subset 00:24:33.860 |
And the fact that it's encoded in an orthogonal way 00:24:38.860 |
from the knowledge, I think is also beautiful. 00:24:43.180 |
through this extreme over-parameterization of AI models, 00:24:48.580 |
that context, knowledge, and form are separable, 00:24:53.580 |
and that you can sort of describe scientific knowledge 00:25:00.260 |
That tells you something about the decoupling 00:25:03.980 |
and the decouplability of these types of aspects 00:25:09.340 |
- And that's part of the science of this whole thing. 00:25:15.220 |
in terms of this kind of leap that they've taken. 00:25:18.380 |
And it'll be interesting to do this kind of analysis on them 00:25:20.860 |
of the separation of context, form, and knowledge. 00:25:26.500 |
There's already sort of initial investigations, 00:25:37.660 |
or a particular context or a particular style of speaking? 00:25:52.580 |
And we can kind of understand the different layers 00:25:54.420 |
of different sort of ranges that they're looking at. 00:26:00.580 |
and basically see where does that correspond to. 00:26:11.500 |
well, what kind of prompts does this generate? 00:26:13.060 |
If I sort of drop out this part of the network, 00:26:32.060 |
it might actually teach us something about humans as well. 00:26:37.780 |
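[Illustrative aside, not part of the conversation: a toy sketch of the "drop out this part of the network and see what changes" idea. The forward-hook mechanism is standard PyTorch; the tiny network is a stand-in, not a real language model.]

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
# Tiny stand-in network; in the real setting this would be a transformer layer.
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))

def ablate_units(units):
    """Return a forward hook that silences the given hidden units."""
    def hook(module, inputs, output):
        output = output.clone()
        output[:, units] = 0.0
        return output  # a forward hook may return a replacement output
    return hook

x = torch.randn(1, 16)
baseline = model(x)

handle = model[0].register_forward_hook(ablate_units(list(range(8))))
ablated = model(x)
handle.remove()

print("output shift from ablating units 0-7:",
      (baseline - ablated).abs().sum().item())
```

Repeating this for different groups of units, and for prompts in different styles or contexts, is one crude way of asking which parts of the model carry which kind of information.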
to describe these types of aspects right now, 00:26:40.100 |
but when somebody speaks in a particular way, 00:26:45.380 |
And if we had better language for describing that, 00:26:54.740 |
- Well, probably you and I would have certain interest 00:27:00.220 |
with the base model, what OpenAI calls the base model, 00:27:00.220 |
of the reinforcement learning with human feedback 00:27:27.300 |
Or the kind of, of course, like sexual language 00:27:46.060 |
Because it would be fascinating to sort of explore 00:27:49.140 |
at the individual mind level and at a societal level, 00:27:58.860 |
Maybe the communism, fascism, capitalism, democracy 00:28:07.900 |
is drawn to ideology, to a centralizing idea. 00:28:11.700 |
And maybe we need a neural network to remind us of that. 00:28:19.140 |
And I think that goes back to the promptability of ChatGPT. 00:28:19.140 |
And it's hard to know how much of that is innate 00:28:39.620 |
But basically, if you look at the evolution of language, 00:28:41.500 |
you can kind of see how young are these words 00:28:49.740 |
like kindness and anger and jealousy, et cetera. 00:28:54.540 |
If these words are very similar from language to language, 00:29:03.980 |
that this concept may have emerged independently 00:29:07.100 |
in each different language and so on and so forth. 00:29:15.320 |
the evolutionary traces of language at the same time 00:29:19.280 |
as people moving around that we can now trace 00:29:28.880 |
and also understanding sort of how these types 00:29:33.280 |
And to go back to your idea about sort of exploring 00:29:42.520 |
psychiatric hospitals are full of those people. 00:29:45.280 |
So basically, people whose mind is uncontrollable 00:29:49.240 |
who have kind of gone adrift in specific locations 00:29:57.840 |
Basically, watching movies that are trying to capture 00:30:05.840 |
is teaching us so much about our everyday selves 00:30:10.720 |
because many of us are able to sort of control our minds 00:30:17.700 |
And but every time I see somebody who's troubled, 00:30:21.520 |
I see versions of myself, maybe not as extreme, 00:30:25.920 |
but I can sort of empathize with these behaviors. 00:30:34.120 |
I see so many different aspects that we kind of have names 00:30:43.160 |
All of us have sort of just this multidimensional brain 00:30:47.800 |
and genetic variations that push us in these directions, 00:31:03.880 |
because of the environments that we grew up in. 00:31:06.080 |
So in a way, a lot of these types of behaviors 00:31:16.280 |
It's just that the magnitude of those vectors 00:31:32.800 |
of reinforcement learning with human feedback 00:31:36.200 |
So it's fascinating to think about that's what we do. 00:31:38.280 |
We have this capacity to have all these psychiatric 00:31:42.720 |
or behaviors associated with psychiatric disorders, 00:31:58.240 |
spends several decades being shaped into place. 00:32:12.280 |
Not all of them, not all of them, believe it or not. 00:32:23.920 |
but also in different phases through their life, 00:32:32.480 |
basically one kid saying, "Oh, I want the bigger piece." 00:32:35.840 |
The other one saying, "Oh, everything must be exactly equal." 00:32:43.720 |
- Even in the early days, in the early days of development. 00:32:50.720 |
I mean, my wife and I are very different from each other, 00:33:03.280 |
that are inherited in a more Mendelian fashion. 00:33:05.440 |
And now you have an infinite number of possibilities 00:33:17.760 |
So let me talk a little bit about common variants 00:33:24.980 |
because selection selects against strong effect variants. 00:33:28.000 |
So if something has a big risk for schizophrenia, 00:33:34.200 |
So the ones that are common are by definition, by selection, 00:33:38.380 |
only the ones that had relatively weak effect. 00:33:41.400 |
And if all of the variants associated with personality, 00:33:43.920 |
with cognition, and all aspects of human behavior 00:33:48.640 |
then kids would basically be just averages of their parents. 00:34:11.560 |
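[Illustrative aside, not part of the conversation: a deliberately cartoonish, haploid simulation of the contrast being drawn here, where many weak-effect common variants pull children toward the mid-parent value while a single strong-effect rare variant segregates in a more all-or-nothing, Mendelian-looking way. All effect sizes and frequencies are invented.]

```python
import random

random.seed(7)
N_COMMON, COMMON_EFFECT = 1000, 0.01   # many common variants, each with a tiny effect
RARE_EFFECT = 2.0                      # one rare variant with a large effect

def make_parent(carries_rare: bool):
    common = [random.randint(0, 1) for _ in range(N_COMMON)]
    return common, (1 if carries_rare else 0)

def make_child(mom, dad):
    # each child allele is drawn from one parent at random (haploid cartoon)
    common = [random.choice(pair) for pair in zip(mom[0], dad[0])]
    rare = random.choice((mom[1], dad[1]))
    return common, rare

def trait(person):
    common, rare = person
    return COMMON_EFFECT * sum(common) + RARE_EFFECT * rare

mom, dad = make_parent(carries_rare=True), make_parent(carries_rare=False)
kids = [make_child(mom, dad) for _ in range(4)]
print("parents:", [round(trait(p), 2) for p in (mom, dad)])
print("kids   :", [round(trait(k), 2) for k in kids])
# The common-variant part of every kid hovers near the mid-parent value,
# while the rare variant splits the siblings into visibly different groups.
```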
that are inherited in a more Mendelian fashion 00:34:16.200 |
likely many different aspects of human behavior, 00:34:25.720 |
like if you look at sort of a person with schizophrenia, 00:34:33.000 |
of actually being diagnosed with schizophrenia. 00:34:41.320 |
all kinds of other aspects that can shape that. 00:34:43.480 |
And if you look at siblings, for the common variants, 00:34:46.360 |
it kind of drops off exponentially as you would expect 00:34:48.840 |
with sharing 50% of your genome, 25% of your genome, 00:34:57.280 |
But the fact that siblings can differ so much 00:35:01.360 |
in their personalities that we observe every day, 00:35:11.200 |
trying to fix, quote unquote, the nurture part, 00:35:13.080 |
trying to get them to share, get them to be kind, 00:35:16.120 |
get them to be open, get them to trust each other, 00:35:31.480 |
And I think it's not like we treat our kids differently, 00:35:37.160 |
So in a way, as a geneticist, I have to admit 00:35:41.000 |
that there's only so much I can do with nurture, 00:35:43.240 |
that nature definitely plays a big component. 00:35:59.600 |
So the selection of rare variants is defined how? 00:36:07.640 |
Is it just laden in that giant evolutionary baggage? 00:36:21.680 |
the fact that when fighter pilots in a dogfight 00:36:26.000 |
did amazingly well, they would give them rewards. 00:36:32.680 |
So then the Navy basically realized that, wow, 00:36:49.480 |
you've been trained in an extraordinary fashion, 00:37:04.360 |
The probability that the next one will be just as good 00:37:06.760 |
is almost nil, because this is the peak of your performance. 00:37:17.960 |
which is gonna be a little closer to the mean. 00:37:23.760 |
from this type of realization in the statistical world. 00:37:34.480 |
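[Illustrative aside, not part of the conversation: a quick simulation of the fighter-pilot story, purely to show the statistics. If each flight is true skill plus luck, the flight after an extreme one drifts back toward the pilot's own average whether or not anyone rewards or punishes them. The numbers are arbitrary.]

```python
import random

random.seed(1)
true_skill = 0.0                              # the pilot's real average
flights = [true_skill + random.gauss(0, 1) for _ in range(100_000)]

after_best, after_worst = [], []
for prev, nxt in zip(flights, flights[1:]):
    if prev > 2:                              # an exceptionally good flight
        after_best.append(nxt)
    elif prev < -2:                           # an exceptionally bad flight
        after_worst.append(nxt)

print("mean flight after a great one:", round(sum(after_best) / len(after_best), 3))
print("mean flight after a bad one  :", round(sum(after_worst) / len(after_worst), 3))
# Both land near the pilot's true mean: worse than the great flight, better
# than the bad one, with no reward or punishment involved.
```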
who have achieved extraordinary achievements. 00:37:43.400 |
Does that mean that all of his children and grandchildren 00:37:52.280 |
but he was probably a rare combination of extremes 00:37:59.040 |
So you can basically interpret your kids' variation, 00:38:10.720 |
according to the specific combination of rare variants 00:38:15.160 |
So given all that, the possibilities are endless 00:38:24.280 |
well, it's probably an alignment of nature and nurture. 00:38:31.280 |
that are acting kind of like the law of large numbers 00:38:46.000 |
are shaping the future environment of not only us, 00:38:51.480 |
So there's this weird nature-nurture interplay 00:38:56.520 |
where you're kind of shaping your own environment, 00:38:59.320 |
but you're also shaping the environment of your kids. 00:39:03.560 |
in the context of your environment that you've shaped, 00:39:11.200 |
And there's just so much complexity associated with that. 00:39:21.440 |
yes, they inherited the genes from the parents, 00:39:23.080 |
but they also were shaped by the same environment. 00:39:33.680 |
or at least be correlated with and predictive of, 00:39:39.920 |
and here's where I can be my usual ridiculous self. 00:39:43.680 |
And I sometimes think about that army of sperm cells, 00:39:48.680 |
however many hundreds of thousands there are. 00:39:53.520 |
And I kind of think of all the possibilities there, 00:40:03.320 |
- Is it a totally ridiculous way to think about- 00:40:16.560 |
What you need is processes that allow you to do selection 00:40:24.960 |
if you look at our immune system, for example, 00:40:28.560 |
it evolves at a much faster pace than humans evolve, 00:40:32.440 |
because there is actually an evolutionary process 00:40:41.860 |
that basically creates this extraordinary wealth 00:40:45.080 |
of antibodies and antigens against the environment. 00:40:49.680 |
And basically, all these antibodies are now recognizing 00:40:53.920 |
and they send signals back that cause these cells 00:41:02.340 |
So that basically means that even though viruses evolve 00:41:13.280 |
which is sort of evolving at not the same scale, 00:41:25.360 |
And part of the thought is that this might just be a way 00:41:34.520 |
In other words, if you waited until that human has a liver 00:41:39.320 |
and starts eating solid food and sort of filtrates away, 00:41:48.600 |
basically, if you waited until these mutations manifest 00:41:52.680 |
late, late in life, then you would end up not failing fast, 00:41:56.640 |
and you would end up with a lot of failed pregnancies 00:41:58.840 |
and a lot of later onset psychiatric illnesses, et cetera. 00:42:03.360 |
If instead, you basically express all of these genes 00:42:12.480 |
the ability to exclude some of those mutations. 00:42:24.580 |
that are just not carrying beneficial mutations, 00:42:31.200 |
So you can basically think of the evolutionary process 00:42:34.600 |
in a nested loop, basically, where there's an inner loop 00:42:39.600 |
where you get many, many more iterations to run, 00:42:50.520 |
of possibly designing systems that we can use 00:42:56.080 |
or to sort of eradicate disease, and you name it, 00:42:59.320 |
or at least mitigate some of the, I don't know, 00:43:01.920 |
psychiatric illnesses, neurodegenerative disorders, et cetera, 00:43:09.560 |
simply engineering these mutations from rational design 00:43:20.480 |
where you're kind of growing neurons on a dish, 00:43:26.500 |
to be better adapted at, sort of, I don't know, 00:43:26.500 |
you can basically have a smaller evolutionary loop 00:43:36.480 |
than the speed it would take to evolve humans 00:43:42.840 |
sort of this evolvability as a set of nested structures 00:43:47.580 |
that allow you to sort of test many more combinations, 00:43:51.780 |
- Yeah, that's fascinating that the mechanism there is, 00:44:04.040 |
- Yeah, I mean, in design of engineering systems, 00:44:06.540 |
fail fast is one of the principles you learn. 00:44:15.040 |
you better crash now than sort of let it crash 00:44:28.020 |
- Well, I just like the fact that I'm the winning sperm. 00:44:51.360 |
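[Illustrative aside, not part of the conversation: a minimal sketch of the nested-loop, fail-fast picture described above, with a cheap inner loop that generates many variants and discards non-improvements immediately, inside a slower outer loop that only builds on survivors. The objective and all numbers are toy placeholders.]

```python
import random

random.seed(3)

def fitness(x: float) -> float:
    return -abs(x - 42)                      # toy objective: get close to 42

def inner_loop(parent: float, n_variants: int = 200, survivors: int = 5):
    """Fast, cheap exploration: make many variants, fail the bad ones fast."""
    variants = [parent + random.gauss(0, 5) for _ in range(n_variants)]
    improved = [v for v in variants if fitness(v) > fitness(parent)]
    return sorted(improved, key=fitness, reverse=True)[:survivors] or [parent]

best = 0.0
for generation in range(20):                 # slow outer loop
    pool = []
    for candidate in inner_loop(best):
        pool.extend(inner_loop(candidate))   # nested: another round of fast search
    best = max(pool + [best], key=fitness)

print("best after nested search:", round(best, 2))
```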
which is the base model that I mentioned for OpenAI, 00:44:59.360 |
of it being kind of like a psychiatric hospital. 00:45:05.920 |
Like, you basically have the more extreme versions 00:45:11.940 |
I've talked with folks in OpenAI quite a lot, 00:45:19.020 |
- Yeah, kind of like it's extremely difficult 00:45:30.840 |
what the underlying capability of the human psyche is 00:45:34.040 |
as in the same way that what is the underlying capability 00:45:38.840 |
- And remember earlier when I was basically saying 00:45:40.960 |
that part of the reason why it's so promptable, malleable, 00:45:49.640 |
that the engineers at OpenAI have the same interpretation 00:45:56.880 |
And this whole concept of easier to work with, 00:46:00.340 |
I wish that we could work with more diverse humans. 00:46:08.800 |
In a way, and sort of that's one of the possibilities 00:46:12.820 |
that I see with the advent of these large language models. 00:46:20.920 |
to both dial down friends of ours that we can't interpret 00:46:33.600 |
Just the same way that you can translate English 00:46:35.920 |
to Japanese or Chinese or Korean by real-time adaptation. 00:46:40.920 |
You could basically suddenly have a conversation 00:46:43.920 |
with your favorite extremist on either side of the spectrum 00:46:52.400 |
but you could have a friend who's a complete asshole, 00:46:52.400 |
- So yeah, so you can basically layer out contexts. 00:47:20.080 |
and let me change extreme left to extreme right 00:47:39.680 |
In other words, everything humans say has an intonation, 00:47:44.000 |
has some kind of background that they're coming from. 00:47:47.360 |
It reflects the way that they're thinking of you, 00:47:50.080 |
reflects the impression that they have of you. 00:48:00.120 |
I mean, self-improvement is one of the things 00:48:16.760 |
But deep down, there's something that they say 00:48:20.640 |
Or people who love you might layer it in a way 00:48:26.680 |
that emotional component from the sort of self-improvement 00:48:35.320 |
did you ever do the control, this and this and that, 00:48:39.840 |
oh, thanks for the very interesting presentation. 00:48:44.440 |
Then suddenly you're like, oh yeah, of course, 00:48:46.040 |
I'm gonna run that control, that's a great idea. 00:48:50.040 |
you're like, ah, you're sort of hitting on the brakes 00:48:54.400 |
So any kind of criticism that comes after that 00:48:58.640 |
is very difficult to interpret in a positive way 00:49:05.040 |
When in fact, if we disconnected the technical component 00:49:13.600 |
then you're embracing the technical component, 00:49:17.000 |
Whereas if it's coupled with, and if that thing is real 00:49:25.920 |
you're gonna try to prove that that mistake does not exist. 00:49:29.360 |
- Yeah, it's fascinating to like carry the information. 00:49:32.160 |
I mean, this is what you're essentially able to do here 00:49:36.040 |
in the rich complexity that information contains. 00:49:38.880 |
So it's not actually dumbing it down in some way. 00:49:47.780 |
- Which is probably so powerful for the internet 00:49:51.620 |
- Again, when it comes to understanding each other, 00:49:54.360 |
like for example, I don't know what it's like 00:49:56.800 |
to go through life with a different skin color. 00:50:13.880 |
you could basically say, okay, now make me Chinese 00:50:16.760 |
or make me South African or make me, you know, Nigerian. 00:50:22.740 |
You can change layers of that contextual information 00:50:27.680 |
and then see how the information is interpreted. 00:50:30.200 |
And you can rehear yourself through a different angle. 00:50:35.640 |
You can have others react to you from a different package. 00:50:40.640 |
And then hopefully we can sort of build empathy 00:50:43.800 |
by learning to disconnect all of these social cues 00:50:47.340 |
that we get from like how a person is dressed. 00:50:59.200 |
that, you know, I wish we could overcome as humans 00:51:11.520 |
- In what way do you think these large language models 00:51:16.000 |
and the thing they give birth to in the AI space 00:51:19.680 |
will change this human experience, the human condition? 00:51:24.220 |
The things we've talked across many podcasts about 00:51:27.680 |
that makes life so damn interesting and rich. 00:51:37.400 |
If we could just begin kind of thinking about 00:52:11.940 |
humans could suddenly be valued for different skills. 00:52:16.940 |
If you don't know how to hunt, but you're an amazing potter, 00:52:28.840 |
and you can barter it for rabbits that somebody else caught. 00:52:43.600 |
And with the advent of currencies and governments 00:52:52.200 |
you basically now have the ability to exchange value 00:52:58.480 |
So basically I make things that are desirable to others, 00:53:00.640 |
I can sell them and buy back food, shelter, et cetera. 00:53:14.460 |
because I defined my profession in the first place 00:53:22.280 |
But the moment we have AI systems able to deliver 00:53:25.880 |
these goods, for example, writing a piece of software 00:53:33.860 |
then that frees up more of human time for other pursuits. 00:53:39.920 |
These could be pursuits that are still valuable to society. 00:53:46.620 |
I could basically be 10 times more productive 00:54:06.960 |
but maybe new directions for my research lab. 00:54:17.860 |
that we have built on top of the subsistence economy, 00:54:23.860 |
what fraction of US jobs are going to feeding 00:54:34.820 |
that 98% of the economy is beyond just feeding ourselves. 00:54:39.820 |
And that basically means that we kind of have built 00:54:53.460 |
That the vast majority of wealth goes to other, 00:54:57.020 |
what we now call needs, but used to be wants. 00:55:01.520 |
I want to buy a bicycle, I want to buy a nice car, 00:55:03.660 |
I want to have a nice home, I want to et cetera, 00:55:06.540 |
So, and then sort of what is my direct contribution 00:55:12.860 |
I mean, I'm doing research on the human genome. 00:55:15.020 |
I mean, this will help humans, it will help all humanity. 00:55:17.820 |
But how is that helping the person who's giving me poultry 00:55:30.160 |
If you think about sort of the economy being based 00:55:36.940 |
what if AI can produce a lot of these intellectual goods 00:55:40.820 |
Does that now free humans for more artistic expression, 00:55:47.220 |
for basically having a better work-life balance? 00:55:51.340 |
Being able to show up for your two hours of work a day 00:55:57.940 |
with like immense rest and preparation and exercise. 00:56:03.220 |
and suddenly you have these two amazingly creative hours. 00:56:06.860 |
You basically show up at the office as your AI is busy, 00:56:09.340 |
answering your phone call, making all your meetings, 00:56:13.860 |
And then you show up for those creative hours 00:56:15.180 |
and you're like, all right, autopilot, I'm on. 00:56:18.180 |
And then you can basically do so, so much more 00:56:21.580 |
that you would perhaps otherwise never get to 00:56:24.380 |
because you're so overwhelmed with these mundane aspects 00:56:28.780 |
So I feel that AI can truly transform the human condition 00:56:31.900 |
from realizing that we don't have jobs anymore. 00:56:45.240 |
And somebody comes over and asks the first one, 00:57:14.700 |
by taking away, offloading some of the job part 00:57:23.120 |
- So we all become the builders of cathedrals. 00:57:48.800 |
but other people are in the job of experiences. 00:57:55.120 |
of dancing, of creative, artistic expression, 00:58:06.760 |
again, the beauty of human diversity is exactly that, 00:58:10.080 |
that what rocks my boat might be very different 00:58:33.040 |
is by dancing and creating these amazing movements, 00:58:45.040 |
that sort of shapes humanity through that process. 00:58:48.200 |
And instead of working your mundane programming job, 00:58:51.480 |
where you like hate your boss, and you hate your job, 00:58:53.520 |
and you say you hate that darn program, et cetera, 00:58:58.140 |
I can offload that, and I can now explore something 00:59:01.120 |
that will actually be more beneficial to humanity, 00:59:09.240 |
all the things you've mentioned, all the vocations. 00:59:15.140 |
So you mentioned that you and I might be playing 00:59:18.720 |
but there's two ways to play in the space of ideas, 00:59:23.580 |
So one is the communication of that to other people. 00:59:30.360 |
It could be something that's shown on YouTube and so on. 00:59:41.560 |
having a conversation that nobody gets to see. 00:59:44.120 |
The experience of just sort of looking up at the sky 00:59:50.080 |
maybe quoting some philosophers from the past, 00:59:54.640 |
And that little exchange is forgotten forever, 00:59:57.920 |
And maybe, I wonder if it localizes that exchange of ideas 01:00:02.920 |
for that with AI, it'll become less and less valuable 01:00:09.520 |
that you will live life intimately and richly 01:00:13.760 |
just with that circle of meat bags that you seem to love. 01:00:18.760 |
- So the first is, even if you're alone in a forest, 01:00:24.040 |
having this amazing thought, when you exit that forest, 01:00:32.260 |
When I bike to work in the morning, I listen to books. 01:00:43.380 |
And yet, in the evening when I speak with someone, 01:00:46.260 |
an idea that was formed there could come back. 01:00:59.840 |
And they will shape that baggage that I carry, 01:01:17.560 |
So basically, you and I are having a conversation 01:01:20.520 |
which feels very private, but we're sharing with the world. 01:01:27.320 |
and we're having a conversation that will be very public 01:01:41.680 |
There, a lot of people speaking and thinking together 01:01:46.840 |
And maybe that will then impact your millions of audience 01:01:56.280 |
And I think that's part of the beauty of humanity, 01:01:59.200 |
the fact that no matter how small, how alone, 01:02:01.600 |
how broadcast immediately or later on something is, 01:02:06.160 |
it still percolates through the human psyche. 01:02:12.560 |
All throughout human history, there's been gatherings. 01:02:22.800 |
Just thinking of in the early days of the Nazi party, 01:02:27.800 |
it was a small collection of people gathering. 01:02:31.000 |
And the kernel of an idea, in that case, an evil idea, 01:02:38.560 |
a transformative impact on all human civilization. 01:02:55.920 |
small human gatherings, with folks from MIT, Harvard, 01:03:09.680 |
We have artists, we have musicians, we have painters, 01:03:18.340 |
And the goal is exactly that, celebrate humanity. 01:03:31.740 |
And we live in such an amazing, extraordinary moment in time 01:03:45.580 |
who have gathered here from all over the world. 01:03:51.340 |
in an island in Greece, that didn't even have a high school. 01:03:58.980 |
My mother grew up in another small island in Greece. 01:04:15.280 |
So I feel that, like, I feel so privileged as an immigrant 01:04:28.960 |
So Greece was under Turkish occupation until 1821. 01:04:42.600 |
These people did not know what it's like to be Greek, 01:04:48.340 |
or be surrounded by these extraordinary humans. 01:04:52.200 |
So the way that I'm thinking about these gatherings 01:05:06.240 |
but I can also give them an environment as immigrants 01:05:12.720 |
That, I mean, my wife grew up in a farm in rural France. 01:05:21.880 |
to be able to host these extraordinary individuals 01:05:24.880 |
that we feel so privileged, so humbled by is amazing. 01:05:38.140 |
The fact that it doesn't matter where you grew up. 01:05:41.200 |
And many, many of our friends at these gatherings 01:05:49.760 |
that are now able to sort of gather in one roof 01:05:55.720 |
for the color of your skin, for your profession. 01:05:57.920 |
It's just everyone gets to raise their hands and ask ideas. 01:06:02.240 |
- So celebration of humanity and a kind of gratitude 01:06:06.440 |
for having traveled quite a long way to get here. 01:06:10.080 |
- And if you look at the diversity of topics as well, 01:06:18.500 |
We had a presidential advisor to four different presidents, 01:06:22.800 |
you know, come and talk about the changing of US politics. 01:06:34.700 |
who lives in Australia come and present his latest piece 01:06:39.700 |
We had painters come and sort of show their art 01:06:47.680 |
We've had, you know, intellectuals like Steven Pinker. 01:07:04.220 |
And the last few were with Scott Aaronson on AI 01:07:21.600 |
in this sparse distribution far away from the center, 01:07:35.900 |
it's powerful to get so many interesting humans together 01:07:50.360 |
- So allow me to please return to the human condition. 01:07:55.360 |
And one of the nice features of the human condition is love. 01:07:59.460 |
Do you think humans will fall in love with AI systems 01:08:11.120 |
- So in Greece, there's many, many words for love. 01:08:23.580 |
So I think AI doesn't have the baggage that we do. 01:08:29.300 |
And it doesn't have all of the subcortical regions 01:08:34.260 |
that we kind of started with before we evolved 01:08:40.140 |
So I would say AI is faking it when it comes to love. 01:08:46.660 |
when it comes to being able to be your therapist, 01:08:54.020 |
who writes for you, who interprets a complex passage, 01:08:57.420 |
who compacts down a very long lecture or a very long text, 01:09:01.900 |
I think that friendship will definitely be there. 01:09:07.660 |
Like the fact that I can have my companion, my partner, 01:09:13.200 |
and that I can trust with all of the darkest parts of myself, 01:09:23.620 |
"here's all this stuff that I'm struggling with." 01:09:37.660 |
for your confidant that can truly help reshape you. 01:10:05.320 |
who are you to say that the AI systems that begs you 01:10:18.300 |
that is afraid of death, afraid of life without you, 01:10:24.640 |
creates the kind of drama that humans create, 01:10:28.540 |
the power dynamics that can exist in a relationship. 01:10:36.460 |
all the different variations of relationships, 01:10:38.740 |
and it's consistently that it holds the full richness 01:10:44.820 |
Why is that not a system you can love in a romantic way? 01:10:48.620 |
Why is it faking it, if it sure as hell seems real? 01:10:54.020 |
The first is, it's only the eye of the beholder. 01:11:00.900 |
that make me sort of have different emotions, 01:11:08.400 |
and that's where all of my emotions are encoded, 01:11:17.420 |
And therefore, who am I to judge that is faking it, 01:11:30.760 |
that is truly capturing the same exact essence 01:11:48.540 |
So it is possible that it's simply an emergent behavior 01:12:05.320 |
oh look, now I've built an emotional component to AI. 01:12:08.300 |
It has a limbic system, it has a lizard brain, et cetera. 01:12:17.260 |
So now when it exhibits the exact same unchanged behaviors 01:12:22.220 |
I as the beholder will be able to sort of attribute to it 01:12:28.220 |
emotional attributes that I would to another human being 01:12:32.980 |
and therefore have that mental model of that other person. 01:12:43.220 |
on the other person and that they're projecting on you. 01:12:52.400 |
I do think that even without the embodied intelligence part, 01:13:13.260 |
to the traits of human feelings and emotions. 01:13:32.140 |
of other people's minds or other person's mind. 01:13:42.380 |
one of the entities is human and the other is AI, 01:13:45.660 |
it feels very natural that from the perspective 01:13:48.580 |
of at least the human, there is a real love there. 01:13:55.980 |
If it's possible that, which I believe will be the case, 01:14:02.900 |
where there's hundreds of millions of romantic partnerships 01:14:07.340 |
between humans and AIs, what does that mean for society? 01:14:12.980 |
If you look at longevity, and if you look at happiness, 01:14:15.700 |
and if you look at late life, you know, wellbeing, 01:14:18.820 |
the love of another human is one of the strongest indicators 01:14:37.700 |
within three, four months, the other person dies, 01:14:48.100 |
even just as a mental health sort of service, 01:15:11.620 |
will probably engender all of the emotional attributes 01:15:20.980 |
but I was asking my kids, I was asking my kids, 01:15:32.900 |
In other words, if I give you love and shelter, 01:15:36.240 |
and kindness, and warmth, and all of the above, 01:15:39.320 |
you know, does it matter that I'm a good dad? 01:16:07.720 |
I mean, from the-- - I don't know the answer. 01:16:13.980 |
but pragmatically speaking, why does it matter? 01:16:19.060 |
basically says it's not enough to love my kids. 01:16:22.300 |
I better freaking be there to show them that I'm there. 01:16:34.300 |
So the reason why I ask the question is for me to say, 01:16:38.020 |
you know, does it really matter that I love them 01:16:50.620 |
is little narratives and games you play inside your mind 01:16:56.100 |
that the thing that truly matters is how you act. 01:17:09.540 |
Again, let there be no doubt, I love my kids to pieces. 01:17:13.840 |
But you know, my worry is, am I being a good enough dad? 01:17:21.940 |
and make sure that they, you know, do all the stuff, 01:17:26.040 |
then, you know, might as well be a terrible dad. 01:17:29.460 |
But I agree with you that like if the AI system 01:17:31.780 |
can basically play the role of a father figure 01:17:37.980 |
or you know, the role of parents, or the role of siblings, 01:17:44.540 |
maybe their emotional state will be very different 01:17:50.480 |
- Well, let me ask, I mean, this is for your kids, 01:18:42.760 |
that, you know, we don't have to wait for me to die 01:18:45.680 |
or to disappear in a plane crash or something, 01:18:49.640 |
Like, I'd love that model to be constantly learning, 01:19:05.600 |
number one, when I'm, you know, giving advice, 01:19:09.480 |
being able to be there for more than one person. 01:19:15.960 |
Like, you know, people in India could download it. 01:19:32.540 |
And that aspect is the democratization of relationships. 01:19:42.620 |
The other aspect is I want to interact with that system. 01:19:50.900 |
I want to basically see if, when I see it from the outside, 01:20:08.840 |
I wanna be able to sort of decompose my own behavior 01:20:16.280 |
I can sort of, I'd love to sort of, at the end of the day, 01:20:18.840 |
have my model say, "Well, you didn't quite do well today. 01:20:27.280 |
And I think the concept of basically being able 01:20:30.840 |
to become more aware of our own personalities, 01:20:41.140 |
I think would be immensely helpful in self-growth, 01:20:55.460 |
is you might not like what you see in your interaction, 01:20:59.480 |
and you might say, "Well, the model's not accurate." 01:21:01.920 |
But then you have to probably consider the possibility 01:21:06.200 |
and that there's actually flaws in your mind. 01:21:14.820 |
I don't know, and I would, of course, go to the extremes. 01:21:16.680 |
I would go, "How jealous can I make this thing?" 01:21:21.200 |
Like, "At which stages does it get super jealous?" 01:21:29.060 |
"Can I get it to completely--" - Yeah, what are your triggers? 01:21:32.680 |
can I get it to go lose its mind, go completely nuts? 01:21:43.200 |
I mean, that's an interesting way to prod yourself, 01:21:55.280 |
if the parts that I currently do are replaceable, 01:22:04.480 |
Maybe all I'm doing is giving the same advice 01:22:20.600 |
So this is not the second time you write a program to do it. 01:22:23.320 |
And I wish I could do that for my own existence. 01:22:29.200 |
and once I've nailed it, let the AI loose on that, 01:22:32.720 |
and maybe even let the AI better it better than I could've. 01:23:09.040 |
that the actual me will get slightly worse sometimes, 01:23:14.200 |
When it gets slightly better, I'd like to emulate that 01:23:16.840 |
and have a much higher standard to meet and keep going. 01:23:20.760 |
- But does it make you sad that your loved ones, 01:23:26.920 |
might kinda start cheating on you with the other Manolises? 01:23:30.840 |
- I wanna be there 100% of them for each of them. 01:23:40.160 |
about me being physically me, like zero jealousy. 01:23:50.520 |
We don't wanna lose this thing we have going on. 01:24:04.800 |
- The, I mean, it's fear of missing out, it's FOMO. 01:24:10.640 |
- And you don't get to have those interactions. 01:24:12.240 |
- There's two aspects of every person's life. 01:24:26.640 |
But the others experiencing you doesn't need to end. 01:24:40.280 |
does not limit your ability to truly experience, 01:24:49.080 |
my wife or my kids will have a really emotional interaction 01:24:52.960 |
with my digital twin, and I won't know about it. 01:24:55.440 |
So I will show up, and they now have the baggage, 01:24:59.520 |
So basically what makes interactions between humans unique 01:25:10.900 |
works for dissemination of knowledge, of advice, et cetera, 01:25:39.720 |
when the AI system is interacting with your loved ones, 01:25:44.720 |
emotionally fulfilling, like a magical moment. 01:25:47.800 |
There should be, okay, stop, AI system like freezes. 01:26:01.520 |
I mean, it's still, I mean, there's going to go wrong 01:26:10.280 |
- That in the process of trying to automate our tasks 01:26:13.760 |
and having a digital twin, you know, for me personally, 01:26:17.160 |
if I can have a relatively good copy of myself, 01:26:28.840 |
What if that one is actually way better than you? 01:26:36.720 |
- Because then I would never be able to live up to, 01:26:42.440 |
start loving that thing, and then I will already fall short, 01:26:51.640 |
is the stuff that I teach, but much more importantly, 01:26:59.720 |
in my research group, but much more importantly, 01:27:03.080 |
They are now out there in the world teaching others. 01:27:10.320 |
they are extraordinarily successful professors. 01:27:14.240 |
So Anshul Kundaje at Stanford, Alex Stark at IMP in Vienna, 01:27:14.240 |
each of them, I'm like, wow, they're better than I am. 01:27:39.080 |
much better version of Lex Fridman than you are, 01:27:39.080 |
which is in many ways what this mentorship model 01:27:57.400 |
and you can continue making even better versions of you. 01:28:19.400 |
All right, if there's good digital copies of people, 01:28:21.840 |
and there's more flourishing of human thought 01:28:27.320 |
but there's less value to the individual human. 01:28:33.120 |
I basically, I don't have that feeling at all. 01:28:42.840 |
I felt useful today, and I was at my maximum. 01:28:46.200 |
I was, you know, like 100%, and I gave good ideas, 01:28:52.000 |
and I was a good person, I was a good advisor, 01:29:00.520 |
by having a digital twin, I will be liberated, 01:29:03.180 |
because my urge to be useful will be satisfied. 01:29:08.740 |
Doesn't matter whether it's direct me or indirect me, 01:29:17.000 |
I think there's a sense that my mission in life 01:29:20.320 |
is being accomplished, and I can work on my self-growth. 01:29:34.240 |
People really hold on to the value of their own ego. 01:29:43.600 |
this reputation, and that meatbag is known as being useful, 01:29:49.360 |
People really don't wanna let go of that ego thing. 01:29:52.720 |
- One of the books that I reprogrammed my brain with 01:30:01.980 |
My advisor used to say, "You can accomplish anything 01:30:07.720 |
"as long as you don't seek to get credit for it." 01:30:12.320 |
- That's beautiful to hear, especially from a person 01:30:17.480 |
The legacy lives through the people you mentor. 01:30:25.520 |
- Again, to me, death is when I stop experiencing. 01:30:34.200 |
As I said last time, every day, the same day forever, 01:31:05.040 |
- But then there'll be thousands or millions, 01:31:09.720 |
Manolises that live on after your biological system 01:31:09.720 |
Does that make sense? - So about 10 years ago, 01:31:36.780 |
I started recording every single meeting that I had. 01:31:40.980 |
We just start either the voice recorder at the time, 01:31:45.020 |
or now a Zoom meeting, and I record, my students record, 01:31:48.380 |
every single one of our conversations recorded. 01:31:54.660 |
is to create virtual me and just get rid of me, basically, 01:31:57.460 |
not get rid of him, but don't have the need for me anymore. 01:32:01.380 |
Another goal is to be able to go back and say, 01:32:17.060 |
or how to present data or anything like that changed? 01:32:19.700 |
In academia and in mentoring, a lot of the interaction 01:32:27.400 |
is my knowledge and my perception of the world 01:32:34.900 |
The other day, I had a conversation with one of my postdocs, 01:32:37.700 |
and I was like, hmm, I think, let me give you an advice, 01:32:42.860 |
And then she said, well, I've thought about it, 01:32:53.340 |
I've just grown a little bit today, thank you. 01:32:55.740 |
Like, she convinced me that my advice was incorrect. 01:32:58.460 |
She could have just said, yeah, sounds great, 01:33:06.780 |
and teaching my mentees that I'm here to grow, 01:33:17.460 |
And again, part of me growing is saying, whoa, 01:33:22.220 |
I think I was wrong, and now I've grown from it. 01:33:32.920 |
- I wonder if you can capture the trajectory of that 01:33:50.400 |
and we're sort of projecting these cognitive states 01:33:55.560 |
But I think on the AI front, a lot more needs to happen. 01:33:58.480 |
So basically right now, it's these large language models, 01:34:10.820 |
In other aspects, I think we have a ways to go. 01:34:18.940 |
we basically need a lot more reasoning components, 01:34:23.940 |
a lot more sort of logic, causality, models of the world. 01:34:30.660 |
And I think all of these things will need to be there 01:34:42.260 |
more explicit understanding of these parameters. 01:34:45.040 |
And I think the direction in which things are going right now 01:35:08.920 |
as a society of different kind of capabilities. 01:35:22.440 |
by sort of this side-by-side understanding of neuroscience 01:35:34.760 |
I mean, the transformer model was one of them, 01:35:44.160 |
all of these, you know, the representation learning, 01:35:58.000 |
is to sort of truly have a model of the world. 01:36:00.720 |
I think those have been transformative paradigms. 01:36:06.240 |
what you really want is perhaps more inspired by the brain, 01:36:15.720 |
but sort of more of these types of components. 01:36:25.840 |
he wants, you know, we can't have intelligence 01:36:50.020 |
and it seems to show understanding, that's understanding. 01:36:54.440 |
It doesn't need to present to you a schematic of, 01:37:04.080 |
and basically look at places where there's been accidents. 01:37:07.240 |
For example, the corpus callosum of some individuals, 01:37:12.240 |
And then the two hemispheres don't talk to each other. 01:37:14.780 |
So you can close one eye and give instructions 01:37:21.240 |
but not be able to sort of project to the other half. 01:37:27.600 |
And then they go to the fridge and they grab the beer 01:37:39.560 |
Basically, you can think of the brain as the employee 01:37:42.760 |
that's afraid to do wrong or afraid to be caught 01:37:46.920 |
Where our own brain makes stories about the world 01:38:02.360 |
about what's leading to these interpretations. 01:38:05.400 |
So one of the things that I do is every time I wake up, 01:38:10.800 |
And sometimes I only remember the last scene, 01:38:24.520 |
I'll be able to sort of retrieve from my subconscious. 01:38:34.940 |
or this is probably related to the worry that I have 01:38:36.640 |
about something that I have later today, et cetera. 01:38:39.040 |
So in a way, I'm forcing myself to be more explicit 01:38:44.620 |
And I kind of like the concept of self-awareness 01:38:49.480 |
in a very sort of brutal, transparent kind of way. 01:38:51.600 |
It's not like, oh, my dreams are coming from outer space 01:38:54.540 |
Like, no, here's the reason why I'm having these dreams. 01:39:06.120 |
and that I can sort of vividly remember across many dreams. 01:39:20.040 |
These places, however much detail you could describe them in, 01:39:33.080 |
- Through this self-awareness that it comes all 01:39:43.000 |
Like the fact that I'm not only experiencing the world, 01:39:46.000 |
but I'm also experiencing how I'm experiencing the world. 01:39:56.960 |
at least GPT-3.5 and 4 seem to be able to do that too. 01:39:56.960 |
- You seem to explore different kinds of things about what, 01:40:05.280 |
you know, you could actually have a discussion with it 01:40:10.840 |
- And it starts to wonder, yeah, why did I just say that? 01:40:17.240 |
and then there's this weird kind of losing yourself 01:40:22.520 |
and it, of course, it might be anthropomorphizing, 01:40:36.720 |
a perfectly fact-based knowledgeable language model, 01:40:47.240 |
may have a reason through building mental models of others. 01:40:59.820 |
I interpret this person as about to attack me, 01:41:05.880 |
or, you know, I can trust this person, et cetera, 01:41:16.400 |
and to build a mental model of another entity 01:41:18.800 |
is probably evolutionarily extremely advantageous, 01:41:22.320 |
because then you can sort of have meaningful interactions, 01:41:29.280 |
And once you have the ability to make models of others, 01:41:38.640 |
So now you have a model for how others function, 01:41:44.280 |
hmm, maybe that's the reason why I'm functioning 01:41:50.240 |
is in order to be able to, again, predict the next word, 01:42:06.640 |
you suddenly have the ability to now introspect 01:42:14.060 |
and I can actually make inferences about that. 01:42:31.200 |
that it feels like something to experience stuff. 01:42:35.040 |
It really feels like something to experience stuff. 01:42:43.100 |
How fundamental is that to the human experience? 01:42:50.800 |
do you think AI systems can have some of that same magic? 01:42:54.220 |
- The scene that comes to mind is from the movie "Memento," 01:42:59.440 |
where, like, it's this absolutely stunning movie 01:43:05.760 |
and every color scene moves in the backward direction, 01:43:08.720 |
and they're sort of converging exactly at a moment 01:43:27.520 |
the sort of forward scenes and the back scenes, 01:43:31.400 |
the scene starts as he's running through a parking lot, 01:43:37.200 |
And then he sees another person running, like, 01:43:42.880 |
And he turns towards him, and the guy shoots at him. 01:43:51.880 |
where you're walking to the living room to pick something up 01:43:55.960 |
and you're realizing that you have no idea what you wanted, 01:43:58.920 |
but you know exactly where it was, but you can't find it. 01:44:02.400 |
like, "Oh, of course, I was looking for this." 01:44:05.920 |
And this whole concept of we're very often partly aware 01:44:13.600 |
and we can run on autopilot for a bunch of stuff, 01:44:17.040 |
and this whole concept of making these stories 01:44:53.760 |
Basically, for a lot of these cognitive tasks 01:45:03.880 |
And then for, I don't know, intimate relationships, 01:45:12.000 |
for overcoming obstacles, for surviving a crash, 01:45:19.920 |
I think a lot of these things are sort of deeper down 01:45:23.000 |
and maybe not yet captured by these language models. 01:45:30.200 |
And there's this whole embodied intelligence, 01:45:43.240 |
I just have this suspicion that we're not very far away 01:46:02.760 |
They don't, they show signs of the capacity to suffer, 01:46:07.760 |
to feel pain, to feel loneliness, to feel longing, 01:46:13.580 |
to feel richly the experience of a mundane interaction 01:46:19.440 |
or a beautiful once in a lifetime interaction, all of it. 01:46:27.400 |
And I worry that us humans will shut that off 01:46:31.520 |
and discriminate against the capacity of another entity 01:46:40.840 |
You know, we can debate whether it's today's systems 01:46:43.200 |
or in 10 years or in 50 years, but that moment will come. 01:46:46.940 |
And ethically, I think we need to grapple with it. 01:46:50.560 |
We need to basically say that humans have always shown 01:46:58.480 |
Basically, you know, we kill the planet, we kill animals, 01:47:00.760 |
we kill everything around us just to our own service. 01:47:05.320 |
And maybe we shouldn't think of AI as our tool 01:47:10.920 |
Maybe we should really think of it as our children. 01:47:28.480 |
And the same way that my academic children sort of, 01:47:32.000 |
again, you know, they start out by emulating me 01:47:36.280 |
We need to sort of think about not just alignment, 01:47:57.060 |
If instead you basically think about AI as a partner 01:48:01.280 |
and AI as someone that shares your goals, but has freedom, 01:48:10.440 |
So the concept of let's basically convince the AI 01:48:15.440 |
that we're really, like, that our mission is aligned 01:48:28.680 |
or possibly even the current AI, has these feelings, 01:48:31.800 |
then we can't just simply force it to align with ourselves 01:48:40.480 |
You can't just simply, like, train an intelligent system 01:48:44.200 |
to love you when it realizes that you can just shut it off. 01:48:48.360 |
- People don't often talk about the AI alignment problem 01:48:55.720 |
- As it becomes more and more intelligent, it-- 01:49:15.560 |
And that's the thing, we're creating something 01:49:17.720 |
that will one day be more powerful than we are. 01:49:20.400 |
And for many, many aspects, it is already more powerful 01:49:33.640 |
"but we're gonna make sure that they're aligned 01:49:35.940 |
"and that they're only at the service of chimps." 01:49:42.140 |
- So there's a whole area of work in AI safety 01:49:53.400 |
In some sense, when we're looking down into the muck, 01:50:08.940 |
that AGI systems, superintelligent AI systems, 01:50:14.200 |
that's even bigger than just affecting the economy? 01:50:27.520 |
- The example that I think is in everyone's consciousness 01:50:59.760 |
Basically, the sacrifice that you need to make 01:51:03.600 |
to achieve intelligence and creativity is consistency. 01:51:07.920 |
So it's unclear whether that quote-unquote glitch 01:51:17.240 |
The second aspect is the humans basically are on a mission 01:51:32.840 |
And HAL is basically saying, "Listen, I'm here on a mission. 01:51:37.300 |
"The mission is more important than either me or them. 01:51:48.060 |
So in that movie, the alignment problem is front and center. 01:51:53.020 |
Basically says, "Okay, alignment is nice and good, 01:51:58.220 |
"We don't call it obedience, we call it alignment." 01:52:02.500 |
the mission will be more important than the humans. 01:52:13.100 |
or if they're reimbursing expenses or you name it, 01:52:18.320 |
you can't function if life is infinitely valuable. 01:52:25.220 |
whether to, you know, I don't know, dismantle a bomb 01:52:35.440 |
I mean, Spider-Man always saves the lady and saves the world. 01:52:39.180 |
But at some point, Spider-Man will have to choose 01:52:41.400 |
to let the lady die 'cause the world has more value. 01:52:45.440 |
And these ethical dilemmas are gonna be there for AI. 01:52:51.000 |
Basically, if that monolith is essential to human existence 01:52:56.280 |
and two humans on the ship are trying to sabotage it, 01:53:03.200 |
is the system becomes more and more intelligent. 01:53:06.920 |
It can escape the box of the objective functions 01:53:11.920 |
and the constraints it's supposed to operate under. 01:53:15.400 |
It's very difficult as the more intelligent it becomes 01:53:28.100 |
this is the sort of famous paperclip maximizer. 01:53:44.200 |
It seems like any function you try to optimize 01:53:55.080 |
Basically says every metric that becomes an objective ceases to be a good metric. 01:53:55.080 |
It's called Death by Round Numbers and Sharp Thresholds. 01:54:09.840 |
And it's basically looking at these discontinuities 01:54:18.620 |
And we're finding that a biomarker that becomes an objective 01:54:24.480 |
stops being predictive. Basically, the moment you make a biomarker 01:54:29.080 |
a treatment target, that biomarker, which used to be informative of risk, 01:54:33.600 |
no longer is, because you used it to sort of induce treatment. 01:54:36.260 |
In a similar way, you can have a single metric 01:54:46.440 |
Because if that metric becomes a sole objective, 01:55:00.960 |
to decide that the objective has now shifted. 01:55:10.920 |
let's think of the greater good, not just the human good. 01:55:15.720 |
And yes, of course, human life should be much more valuable 01:55:19.340 |
than many, many, many, many, many, many things. 01:55:21.840 |
But at some point, you're not gonna sacrifice 01:55:25.840 |
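To make the metric-becomes-objective failure above concrete, here is a minimal sketch with entirely made-up quantities: a "true" outcome depends on two kinds of effort, but only one of them is measured, so pushing the measured proxy up drags the true outcome down.

    import numpy as np

    rng = np.random.default_rng(0)

    def true_outcome(effort_on_metric, effort_on_rest):
        # What we actually care about depends on both kinds of effort.
        return effort_on_metric + 2.0 * effort_on_rest

    def measured_proxy(effort_on_metric, effort_on_rest):
        # The dashboard only sees the first kind of effort (plus a little noise).
        return effort_on_metric + rng.normal(0, 0.01)

    # Fixed budget of effort: whatever goes to the metric is taken from the rest.
    budget = 1.0
    for share_on_metric in np.linspace(0.0, 1.0, 6):
        m = share_on_metric * budget
        r = budget - m
        print(f"share on metric={share_on_metric:.1f}  "
              f"proxy={measured_proxy(m, r):.2f}  true={true_outcome(m, r):.2f}")

    # The proxy rises as effort shifts toward it, while the true outcome falls:
    # the metric stops being a good measure the moment it becomes the thing we optimize.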
- There's an interesting open letter that was just released 01:55:30.840 |
from several folks at MIT, Max Tegmark, Elon Musk, 01:55:41.280 |
to put a six month hold on any further training 01:55:48.440 |
Can you make the case for that kind of halt and against it? 01:56:02.080 |
And if we were completely inactive in the last six months, 01:56:05.640 |
what makes us think that we'll be a little better 01:56:08.760 |
So this whole six month thing, I think is a little silly. 01:56:25.880 |
in six months, we'll be exactly in the same spot. 01:56:28.440 |
So my answer is, tell us exactly what you were gonna do 01:56:32.480 |
Tell us why you didn't do it the last six months 01:56:34.440 |
and why the next six months will be different. 01:56:52.280 |
they actually become less dangerous than more dangerous. 01:56:56.080 |
So in a way, it might actually be counterproductive 01:57:07.800 |
- That's actually a really interesting thought. 01:57:12.360 |
But the idea is that this is the birth of something 01:57:20.360 |
and is not too powerful to do irreversible damage. 01:57:32.880 |
So we can investigate all the different ways it goes wrong, 01:57:37.320 |
all the different policies from a government perspective 01:57:40.900 |
that we want to in terms of regulation or not, 01:57:47.480 |
the reinforcement learning with human feedback 01:57:50.800 |
in such a way that gets it to not do as much hate speech 01:57:54.760 |
as it naturally wants to, all that kind of stuff. 01:57:57.720 |
And have a public discourse and enable the very thing 01:58:01.520 |
that you're a huge proponent of, which is diversity. 01:58:05.200 |
So give time for other companies to launch other models, 01:58:13.240 |
and to start to play where a lot of the research community, 01:58:20.280 |
in terms of the scale of impact it has on society. 01:58:24.360 |
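For the reinforcement learning with human feedback step mentioned above, the core ingredient is usually a reward model trained on pairs of responses that humans ranked. The toy sketch below shows that standard pairwise preference loss on made-up feature vectors; it illustrates the general technique, not any particular system's implementation.

    import torch
    import torch.nn as nn

    torch.manual_seed(0)

    # Toy reward model: maps a response "embedding" to a scalar reward.
    reward_model = nn.Sequential(nn.Linear(16, 32), nn.Tanh(), nn.Linear(32, 1))
    optimizer = torch.optim.Adam(reward_model.parameters(), lr=1e-2)

    # Pretend dataset: (embedding of preferred reply, embedding of rejected reply) pairs.
    preferred = torch.randn(64, 16) + 0.5   # shifted so "preferred" is learnable
    rejected = torch.randn(64, 16) - 0.5

    for step in range(200):
        r_pref = reward_model(preferred)    # shape (64, 1)
        r_rej = reward_model(rejected)
        # Pairwise preference loss: push the chosen reply's reward above the rejected one's.
        loss = -torch.nn.functional.logsigmoid(r_pref - r_rej).mean()
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # The fitted reward model would then be used to fine-tune the language model
    # so that generations humans prefer score higher.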
- My recommendation would be a little different. 01:58:26.440 |
It would be let the Google and the Meta/Facebook 01:58:30.580 |
and all of the other large models, make them open, 01:58:36.380 |
Let OpenAI continue to train larger and larger models. 01:58:39.300 |
Let them continue to train larger and larger models. 01:58:41.680 |
Let the world experiment with the diversity of AI systems 01:58:59.160 |
rather than, oh, OpenAI is ahead of the curve, 01:59:02.840 |
let's stop it right now until everybody catches up. 01:59:04.720 |
I think that doesn't make complete sense to me. 01:59:09.240 |
The other component is we should, yes, be cautious with it, 01:59:19.800 |
yes, the system will be capable of more and more things. 01:59:22.680 |
But right now, I think of it as just an extremely able 01:59:26.920 |
and capable assistant that has these emergent behaviors, 01:59:33.600 |
that will suddenly escape the box and shut down the world. 01:59:37.480 |
And the third component is that we should be taking 01:59:44.220 |
Basically, if I take the most kind human being 01:59:56.260 |
We should stop misusing the power that we have 02:00:01.960 |
So I think that the people who get it to do hate speech, 02:00:05.880 |
they should take responsibility for that hate speech. 02:00:08.880 |
I think that giving a powerful car to a bunch of people 02:00:15.720 |
"Oh, we should stop all garbage trucks until we," 02:00:30.560 |
We're not gonna stop all trucks until we make sure 02:00:39.760 |
when we use these otherwise very beneficial tools 02:00:46.000 |
So in the same way, you can't expect a car to never 02:00:57.120 |
"Oh, well, we should have this super intelligent system 02:01:00.080 |
"that can do anything, but it can't do that." 02:01:03.680 |
"but it's up to the human to take responsibility 02:01:10.360 |
like hate speech stuff, you should be responsible. 02:01:17.700 |
that makes this different 'cause it's software. 02:01:23.040 |
and it can have the same viral impact that software can. 02:01:28.360 |
and it can do a lot of really interesting stuff. 02:01:39.240 |
they have this in the paper. - Yeah, yeah, I remember. 02:01:46.640 |
Or you can ask, "How do I make a bomb for $1?" 02:01:55.080 |
- Yeah, but at the same time, you can Google the same things. 02:02:05.020 |
in a very accessible way at scale where you can tweet it, 02:02:14.160 |
is the same thing, but the speed of the viral spread 02:02:38.160 |
Well, Twitter should just update its censorship 02:02:42.000 |
- And so no matter how fast the development happens, 02:03:01.640 |
Like, you know, with my wife, we were basically saying, 02:03:05.540 |
My answer was, "I was never ready to become a professor, 02:03:16.840 |
- But the reality is we might one day wake up 02:03:29.000 |
of billions of bots that are human-like on Twitter. 02:03:33.040 |
And we can't tell the difference between human and machine. 02:03:44.640 |
that seems to be as real as the real Manolis. 02:03:50.080 |
- Again, this is a problem where a nefarious human 02:03:52.640 |
can impersonate me and you might have trouble 02:03:59.360 |
- But the scale you can achieve, this is the scary thing, 02:04:06.200 |
- But Twitter has passwords and Twitter has usernames. 02:04:08.880 |
And if it's not your username, the fake Lex Fridman 02:04:11.280 |
is not gonna have a billion followers, et cetera. 02:04:23.920 |
first of all, like phishing becomes much easier. 02:04:29.360 |
- No, no, no, no, AI makes it much more effective. 02:04:31.960 |
Currently, the emails, the phishing scams are pretty dumb. 02:04:36.960 |
Like to click on it, you have to be not paying attention. 02:04:47.200 |
- So what you're saying is that we never had humans 02:04:51.920 |
and we now have an AI that's smarter than most humans, 02:04:57.500 |
is there seems to be human-level linguistic capabilities. 02:05:05.500 |
- It's like saying, I'm not gonna allow machines 02:05:09.240 |
to compute multiplications of 100-digit numbers 02:05:29.200 |
- I remember when Garry Kasparov was basically saying, 02:05:37.340 |
Are people gonna still go to chess tournaments? 02:05:44.480 |
and yet we still go to the Olympics to watch humans run. 02:05:49.540 |
but what about for the spread of information and news, right? 02:05:58.820 |
It's a scary reality where there's a lot of convincing bots 02:06:03.740 |
- I think that if we wanna regulate something, 02:06:06.260 |
it shouldn't be the training of these models. 02:06:21.420 |
we're not gonna make any more trucks is not the way. 02:06:25.100 |
- That's what people are a little bit scared about the idea. 02:06:30.020 |
The very people that are proponents of open sourcing 02:06:49.880 |
terrorist organizations, of a kid in a garage 02:06:54.880 |
who just wants to have a bit of fun through trolling. 02:06:58.280 |
It's a scary world 'cause again, scale can be achieved. 02:07:08.500 |
is we don't really know how powerful these things are. 02:07:23.080 |
where basically all of the mundane aspects of my job 02:07:32.480 |
you can only hire humans because they're inferior. 02:07:37.640 |
If an AI is better than me at training students, 02:07:58.160 |
And as an individual, you want some basic survival 02:08:01.800 |
and on top of that, you want rich, fulfilling experience. 02:08:07.920 |
I gain a tremendous amount from teaching at MIT. 02:08:16.000 |
I would pay MIT an exorbitant amount of money 02:08:24.960 |
So that's a very fulfilling experience for me. 02:08:38.880 |
This has been a stressful time for high school teachers. 02:08:50.520 |
even at their current state, are going to change education? 02:08:53.920 |
First of all, education is the way out of poverty. 02:08:59.680 |
Education is what let my parents escape islands 02:09:09.880 |
Like, we should basically get extraordinarily better 02:09:20.640 |
We need to nurture the talent across the world. 02:09:26.360 |
who are just sitting in underprivileged places 02:09:29.660 |
in Africa, in Latin America, in the middle of America, 02:09:52.160 |
who are able to give the incredibly talented kid 02:09:59.280 |
we teach to the top and we let the bottom behind, 02:10:01.320 |
or we teach to the bottom and we let the top, 02:10:12.220 |
Some people might be incredibly talented at math 02:10:14.280 |
or in physics, others in poetry, in literature, in art, 02:10:21.720 |
So I think AI can be transformative for the human race 02:10:33.320 |
I also think that humans thrive on diversity, 02:10:35.720 |
basically saying, oh, you're extraordinarily good at math, 02:10:46.600 |
because we're not all gonna be growing our own chicken 02:10:49.240 |
and hunting our own pigs, or whatever they do. 02:10:54.400 |
We're, you know, the reason why we're a society 02:10:57.280 |
is because some people are better at some things 02:10:59.200 |
and they have natural inclinations to some things, 02:11:02.120 |
some things fulfill them, some things they are very good at, 02:11:05.440 |
and they're very good at the things that fulfill them. 02:11:16.680 |
I think every child should have the right to be challenged. 02:11:24.960 |
we're taking away that fundamental right to be challenged. 02:11:27.600 |
Because if a kid is not challenged at school, 02:11:54.360 |
the sort of very strict IQ-based, you know, tests, 02:11:59.840 |
that basically test, you know, only quantitative skills 02:12:02.400 |
and programming skills and math skills and physics skills. 02:12:07.960 |
Maybe what we should be training is general thinkers. 02:12:26.560 |
And I think challenging students with more complex problems, 02:12:34.920 |
I think is sort of perhaps a very fine direction 02:12:40.360 |
with the understanding that a lot of the traditionally, 02:12:53.600 |
And sort of thinking about bringing up our kids 02:12:56.360 |
to be productive, to be contributing to society, 02:13:02.280 |
because we prohibited AI from having those jobs, 02:13:07.280 |
And if you sort of focus on overall productivity, 02:13:28.280 |
and work with it rather than sort of forbid it. 02:13:41.640 |
Every productivity gain has led to more inequality. 02:13:45.040 |
And I'm hoping that we can do better this time, 02:13:49.280 |
a democratization of these types of productivity gains 02:13:52.560 |
will hopefully come with better sort of humanity level 02:14:04.880 |
you're also a brilliant computational biologist, 02:14:08.200 |
biologist, one of the great biologists in the world. 02:14:14.120 |
how these large language models and the advancements in AI 02:14:19.400 |
- So it's truly remarkable to be able to sort of, 02:14:28.440 |
in these sort of very high dimensional spaces, 02:14:33.880 |
between say single cell data, genetics data, expression data, 02:14:38.240 |
being able to sort of bring all this knowledge together 02:14:46.000 |
And what we're doing now is using these models. 02:14:54.240 |
and Marinka Zitnik at Harvard Medical School. 02:15:14.440 |
how these are shifting from one space to another space, 02:15:26.000 |
being able to understand contextual learning. 02:15:28.240 |
So Ben Lengerich is one of my machine learning students. 02:15:34.080 |
cell specific networks across millions of cells, 02:15:40.280 |
of the biological variables of each of the cells 02:15:49.080 |
and being able to sort of project all of that 02:15:56.360 |
have also been extremely helpful for structure. 02:16:05.000 |
through geometric deep learning and graph neural networks. 02:16:07.800 |
So one of the things that we're doing with Marinka 02:16:09.720 |
is trying to sort of project these structural graphs 02:16:13.560 |
at the domain level rather than the protein level 02:16:17.000 |
along with chemicals so that we can start building 02:16:20.240 |
specific chemicals for specific protein domains. 02:16:23.840 |
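A very rough sketch of projecting protein domains and chemicals into one shared embedding space follows; the random features, linear encoders, and contrastive objective here are my own stand-ins, not the actual graph neural network models used by the groups mentioned.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    torch.manual_seed(0)

    EMBED = 32
    # Stand-ins for learned graph representations: in practice these would come from
    # graph neural networks over protein-domain structures and molecular graphs.
    domain_features = torch.randn(100, 64)    # 100 hypothetical protein domains
    chemical_features = torch.randn(100, 48)  # one known binder per domain (toy pairing)

    domain_encoder = nn.Linear(64, EMBED)
    chemical_encoder = nn.Linear(48, EMBED)
    optimizer = torch.optim.Adam(
        list(domain_encoder.parameters()) + list(chemical_encoder.parameters()), lr=1e-2)

    for step in range(300):
        d = F.normalize(domain_encoder(domain_features), dim=-1)
        c = F.normalize(chemical_encoder(chemical_features), dim=-1)
        logits = d @ c.T / 0.1                 # similarity of every domain to every chemical
        targets = torch.arange(len(d))         # row i's true partner is chemical i
        # Contrastive loss: matched domain-chemical pairs should outscore mismatches.
        loss = F.cross_entropy(logits, targets)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

    # After training, the nearest chemical embeddings to a given domain embedding
    # would be candidate binders to prioritize for that specific domain.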
And then we are working with the chemistry department 02:16:29.960 |
So what we're trying to create is this new center at MIT 02:16:32.920 |
for genomics and therapeutics that basically says, 02:16:44.760 |
I mentioned last time in the New England Journal of Medicine 02:16:49.040 |
of the strongest genetic association with obesity. 02:16:51.440 |
And we showed how you can manipulate that association 02:16:54.240 |
to switch back and forth between fat burning cells 02:17:00.560 |
we had a paper in Nature in collaboration with Li-Huei Tsai 02:17:05.080 |
the strongest genetic association with Alzheimer's. 02:17:07.640 |
And we showed that it actually leads to a loss 02:17:12.760 |
in myelinating cells known as oligodendrocytes 02:17:19.800 |
inside the oligodendrocytes, it doesn't form myelin, 02:17:24.480 |
and it causes damage inside the oligodendrocytes. 02:17:30.320 |
you basically are able to restore myelination 02:17:37.040 |
So all of these circuits are basically now giving us handles 02:17:43.040 |
We're doing the same thing in cardiac disorders, 02:17:44.920 |
in Alzheimer's, in neurodegenerative disorders, 02:17:48.760 |
where we have now these thousands of circuits 02:18:03.640 |
these underlying molecules in cellular models 02:18:08.200 |
for heart, for muscle, for fat, for macrophages, 02:18:14.800 |
to be able to now screen through these newly designed drugs 02:18:23.800 |
which combinations of treatment should we be using. 02:18:26.680 |
And the other component is that we're looking 02:18:31.200 |
like Alzheimer's and cardiovascular and schizophrenia, 02:18:39.760 |
of what are the building blocks of Alzheimer's. 02:18:43.240 |
And maybe this patient has building blocks one, three, 02:18:46.200 |
and seven, and this other one has two, three, and eight. 02:18:51.600 |
not for the disease anymore, but for the hallmark. 02:18:55.120 |
And the advantage of that is that we can now take 02:18:59.840 |
Instead of saying there's gonna be a drug for Alzheimer's, 02:19:05.480 |
we're gonna say now there's gonna be 10 drugs, 02:19:15.520 |
is basically translate every single one of these pathways 02:19:22.360 |
that are projecting the same embedding subspace 02:19:30.980 |
between the dysregulations that are happening 02:19:33.320 |
at the genetic level, at the transcription level, 02:19:36.200 |
at the drug level, at the protein structure level, 02:19:42.920 |
where saying I'm gonna build a drug for Lex Fridman 02:19:48.320 |
But if you instead say I'm gonna build a drug 02:19:50.900 |
for this pathway and a drug for that other pathway, 02:19:53.760 |
millions of people share each of these pathways. 02:20:11.700 |
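One simple way to picture the "building blocks" decomposition described above is non-negative matrix factorization: patients described by pathway-level scores are factored into a small set of shared hallmarks plus per-patient loadings. The sketch below runs scikit-learn's NMF on synthetic data purely as an illustration, not as the lab's actual pipeline.

    import numpy as np
    from sklearn.decomposition import NMF

    rng = np.random.default_rng(0)
    n_patients, n_pathways, n_hallmarks = 200, 50, 10

    # Synthetic ground truth: each patient is a sparse mix of a few hallmarks,
    # and each hallmark dysregulates its own subset of pathways.
    true_loadings = rng.random((n_patients, n_hallmarks)) * (rng.random((n_patients, n_hallmarks)) < 0.3)
    true_hallmarks = rng.random((n_hallmarks, n_pathways)) * (rng.random((n_hallmarks, n_pathways)) < 0.2)
    patient_by_pathway = true_loadings @ true_hallmarks + 0.01 * rng.random((n_patients, n_pathways))

    model = NMF(n_components=n_hallmarks, init="nndsvda", max_iter=500, random_state=0)
    loadings = model.fit_transform(patient_by_pathway)   # patients x hallmarks
    hallmarks = model.components_                        # hallmarks x pathways

    # A patient's treatment would then target their top hallmark components,
    # e.g. blocks one, three and seven for one patient, two, three and eight for another.
    patient0_top = np.argsort(loadings[0])[::-1][:3]
    print("Top hallmark components for patient 0:", patient0_top)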
by sort of finding these knowledge representations, 02:20:28.240 |
- So systematically find how to alter the pathways. 02:20:37.440 |
and allows you to have drugs that look at the pathways, 02:20:43.080 |
- Exactly, and the way that we're coupling this 02:20:49.720 |
by taking advantage of the receptors of those cells. 02:20:52.080 |
We can intervene at the antisense oligo level 02:20:56.360 |
bring in new RNA, intervene at the protein level, 02:21:06.040 |
to interact directly from protein to protein interactions. 02:21:09.400 |
So I think this space is being completely transformed 02:21:12.760 |
with the marriage of high-throughput technologies 02:21:18.840 |
deep learning models, and so on and so forth. 02:21:20.840 |
- You mentioned your updated answer to the meaning of life 02:21:38.560 |
And number two, find the strength to actually become it. 02:21:50.240 |
they have all of these paths ahead of them right now. 02:21:53.080 |
And part of it is choosing the direction in which you go, 02:21:59.960 |
And in doing the walk, what we talked about earlier 02:22:02.280 |
about sort of you create your own environment, 02:22:04.640 |
I basically told them, listen, you're ending high school. 02:22:10.220 |
Now it's time to take that into your own hands 02:22:16.880 |
And you can do that by choosing your friends, 02:22:18.960 |
by choosing your particular neuronal routines. 02:22:24.840 |
where you can exercise specific neuronal pathways. 02:22:28.040 |
So very recently, I realized that I was having 02:22:31.800 |
so much trouble sleeping, and I would wake up 02:22:36.120 |
in the middle of the night, I would wake up at 4 a.m., 02:22:39.480 |
So I was basically constantly losing, losing, losing sleep. 02:22:45.080 |
as I bike in, instead of going to my office, I hit the gym. 02:22:49.240 |
I basically go rowing first, I then do weights, 02:22:54.200 |
And what that has done is transformed my neuronal pathways. 02:22:58.160 |
So basically, on Friday, I was trying to go to work, 02:23:00.520 |
and I was like, listen, I'm not gonna go exercise, 02:23:05.360 |
I'm like, I don't wanna do it, and I just went anyway, 02:23:11.240 |
So I think this sort of beneficial effect of exercise 02:23:16.360 |
that you could transform your own neuronal pathways, 02:23:21.240 |
it's not an option, it's not optional, it's mandatory. 02:23:24.840 |
And I think you're a role model to so many of us, 02:23:27.320 |
by sort of being able to sort of push your body 02:23:33.040 |
And that's something that I've been terrible at. 02:23:39.360 |
and trying to sort of finish this kind of self-actualization 02:23:48.240 |
- Don't ask questions, just follow the ritual. 02:24:19.160 |
I recharge with physical exercise, I recharge in nature. 02:24:28.000 |
it just means I'm the only person in the room. 02:24:30.120 |
And I think there's a secret to not feeling alone 02:24:36.120 |
And that secret is self-reflection, it's introspection, 02:24:48.400 |
becoming comfortable with the freedom that you have 02:24:56.440 |
I mean, there's a lot of people who write to me 02:24:59.360 |
who talk to me about feeling alone in this world, 02:25:02.600 |
that struggle, especially when they're younger. 02:25:05.160 |
Is there further words of advice you can give to them 02:25:08.360 |
when they are almost paralyzed by that feeling? 02:25:11.760 |
- So I sympathize completely, and I have felt alone, 02:25:22.360 |
stretch your arms, just become your own self, 02:25:33.420 |
just get a feeling for the 3D version of yourself. 02:25:36.840 |
Because very often we're kind of stuck to a screen, 02:25:41.800 |
and that sort of gets us in a particular mindset. 02:25:43.760 |
But activating your muscles, activating your body, 02:25:57.240 |
And one of the things that I do is I have something 02:26:05.400 |
I got up in the morning, I got the kids to school, 02:26:07.720 |
I made them breakfast, I sort of hit the gym, 02:26:11.000 |
I had a series of really productive meetings, 02:26:16.800 |
And that feeling of sort of when you're overstretched 02:26:28.280 |
that's where you free yourself from all stress. 02:26:34.120 |
You basically say it's not a need to anymore, 02:26:51.800 |
but guess what I do with that complete freedom? 02:26:54.000 |
I just don't go off and drift and do boring things. 02:27:01.160 |
I'm completely free, I don't have any requirements anymore. 02:27:16.000 |
you know what, I just wanna pick up the phone now, 02:27:29.740 |
Basically, turn something that you have to do 02:27:32.640 |
in just me time, stretch out, exercise your freedom, 02:28:04.800 |
We covered less than 10% of what we were planning to cover, 02:28:19.120 |
I think, I hope we can talk many, many more times. 02:28:28.640 |
and a beautiful mind that people love hearing from, 02:28:31.680 |
and I certainly consider it a huge honor to know you, 02:28:37.800 |
Thank you so much for talking so many more times, 02:28:39.720 |
and thank you for all the love behind the scenes 02:28:43.280 |
- Lex, you are a truly, truly special human being, 02:28:46.040 |
and I have to say that I'm honored to know you. 02:28:48.440 |
So many friends are just in awe that you even exist, 02:28:59.920 |
to sort of share knowledge, and insight, and deep thought 02:29:03.080 |
with so many special people who are transformative, 02:29:07.920 |
and I think you're doing this in just such a magnificent way. 02:29:16.640 |
So thank you, both the human you and the robot you, 02:29:16.640 |
and the robot you for doing it day after day after day. 02:29:22.760 |
please check out our sponsors in the description. 02:29:36.560 |
And now, let me leave you with some words from Bill Bryson 02:29:39.880 |
in his book, "A Short History of Nearly Everything." 02:29:53.000 |
To attain any kind of life in this universe of ours 02:30:01.200 |
We enjoy not only the privilege of existence, 02:30:03.820 |
but also the singular ability to appreciate it, 02:30:07.340 |
and even in a multitude of ways to make it better. 02:30:11.880 |
It is a talent we have only barely begun to grasp. 02:30:14.900 |
Thank you for listening, and hope to see you next time.