Risto Miikkulainen: Neuroevolution and Evolutionary Computation | Lex Fridman Podcast #177
Chapters
0:00 Introduction
1:07 If we re-ran Earth over 1 million times
4:24 Would aliens detect humans?
7:02 Evolution of intelligent life
10:47 Fear of death
17:03 Hyenas
20:28 Language
23:59 The magic of programming
29:59 Neuralink
37:31 Surprising discoveries by AI
41:06 How evolutionary computation works
52:28 Learning to walk
55:41 Robots and a theory of mind
1:04:45 Neuroevolution
1:15:03 Tesla Autopilot
1:18:28 Language and vision
1:24:09 Aliens communicating with humans
1:29:45 Would AI learn to lie to humans?
1:36:20 Artificial life
1:41:12 Cellular automata
1:46:49 Advice for young people
1:51:25 Meaning of life
00:00:00.000 |
The following is a conversation with Risto Miikkulainen, 00:00:02.880 |
a computer scientist at University of Texas at Austin 00:00:07.880 |
of Evolutionary Artificial Intelligence at Cognizant. 00:00:14.440 |
but also many other topics in artificial intelligence, 00:00:21.920 |
Jordan Harbinger Show, Grammarly, Belcampo, and Indeed. 00:00:26.600 |
Check them out in the description to support this podcast. 00:00:30.600 |
As a side note, let me say that nature-inspired algorithms 00:00:34.160 |
from ant colony optimization to genetic algorithms 00:00:50.720 |
It does seem that in the long arc of computing history, 00:00:54.200 |
running toward biology, not running away from it, 00:01:03.240 |
and here is my conversation with Risto Miikkulainen. 00:01:12.560 |
over and over and over and over a million times 00:01:15.280 |
and watch the evolution of life as it pans out, 00:01:19.240 |
how much variation in the outcomes of that evolution 00:01:23.240 |
Now, we should say that you are a computer scientist. 00:01:30.440 |
because we are building simulations of these things, 00:01:36.240 |
and that's a difficult question to answer in biology, 00:01:43.600 |
how much variation do we see when we simulate it? 00:01:47.040 |
And that's a little bit beyond what we can do today, 00:01:50.640 |
but I think that we will see some regularities, 00:01:54.160 |
and it took evolution also a really long time 00:01:57.760 |
and then things accelerated really fast towards the end. 00:02:02.240 |
But there are things that need to be discovered, 00:02:04.280 |
and they probably will be over and over again, 00:02:06.480 |
like manipulation of objects, opposable thumbs, 00:02:16.040 |
maybe orally, like, why will you have speech? 00:02:43.280 |
And would that be an apex of evolution after a while? 00:02:59.680 |
that can do that, manipulate the environment and build. 00:03:11.720 |
building buildings, and then running for president, 00:03:25.680 |
like detect if any cool stuff came up, right? 00:03:56.360 |
intelligent agents that communicate, cooperate, manipulate, 00:04:13.520 |
But also, I think we do have to run it many times 00:04:16.680 |
because we don't quite know what shape those will take, 00:04:21.160 |
and our detectors may not be perfect for them 00:04:32.800 |
If we take an alien perspective, observing Earth, 00:04:37.160 |
are you sure that they would be able to detect humans 00:04:41.360 |
Wouldn't they be already curious about other things? 00:04:43.800 |
There's way more insects by body mass, I think, 00:04:50.880 |
Obviously, dolphins are the most intelligent creatures 00:04:55.240 |
So, it could be the dolphins that they detect. 00:04:58.400 |
It could be the rockets that we seem to be launching. 00:05:00.840 |
That could be the intelligent creature they detect. 00:05:22.320 |
that could be the thing your detector is detecting. 00:05:33.320 |
Do you think you would be able to detect humans? 00:05:40.920 |
in the computational sense, detect interesting things? 00:05:44.600 |
Do you basically have to have a strict objective function 00:05:48.760 |
by which you measure the performance of a system, 00:05:51.800 |
or can you find curiosities and interesting things? 00:06:08.800 |
and that's where a lot of people live, most people live. 00:06:11.960 |
So, that would be a good sign of intelligence, 00:06:25.520 |
Termites build mounds and hives and things like that, 00:06:29.080 |
but the complexity of the human construction cities, 00:06:32.120 |
I think, would stand out, even to an external observer. 00:06:38.280 |
- Yeah, and you can certainly say that sharks 00:06:41.000 |
are really smart because they've been around so long, 00:06:43.280 |
and they haven't destroyed their environment, 00:06:52.080 |
And we can get over it by doing some construction 00:06:55.320 |
that actually is benign, and maybe even enhances 00:07:12.640 |
I don't know if you think about this kind of stuff, 00:07:18.240 |
which is the springing up, like the origin of life on Earth? 00:07:23.040 |
And second, how unlikely is anything interesting 00:07:30.560 |
Sort of like the start that creates all the rich complexity 00:07:38.640 |
on exactly that problem from primordial soup, 00:07:42.320 |
how do you actually get self-replicating molecules? 00:07:48.800 |
With a little bit of help, you can make that happen. 00:07:57.200 |
and try out conditions that are conducive to that. 00:07:59.840 |
For evolution to discover that, it took a long time. 00:08:04.160 |
For us to recreate it probably won't take that long. 00:08:14.520 |
But with evolution, what was really fascinating 00:08:18.600 |
was eventually the runaway evolution of the brain 00:08:27.280 |
That was something that happened really fast. 00:08:35.800 |
And if it happens, does it go in the same direction? 00:08:43.000 |
I think that it's relatively possible to come up here, 00:08:47.360 |
create an experiment where we look at the primordial soup 00:08:53.480 |
But to get something as complex as the brain, 00:09:12.360 |
What do you think is the, let's say, what is intelligence? 00:09:15.960 |
So in terms of the thing that makes humans special, 00:09:26.000 |
in the broad category we might call intelligence. 00:09:29.600 |
So if you put your computer scientist hat on, 00:09:33.000 |
are there favorite ways you like to think about 00:09:39.780 |
- Well, my goal is to create agents that are intelligent. 00:09:52.720 |
And that means that it's some kind of an object 00:10:02.720 |
and effective capabilities interacting with the world, 00:10:08.000 |
and then also a mechanism for making decisions. 00:10:11.720 |
So with limited abilities like that, can it survive? 00:10:24.440 |
And that is quite a bit less than human intelligence. 00:10:27.220 |
There are, animals would be intelligent, of course, 00:10:31.080 |
And you might have even some other forms of life. 00:10:37.840 |
is a survival skill given resources that you have 00:10:42.840 |
and using your resources so that you will stay around. 00:10:46.080 |
- Do you think death, mortality is fundamental to an agent? 00:10:52.880 |
So like there's, I don't know if you're familiar with Ernest Becker, 00:10:56.880 |
who wrote "The Denial of Death" and his whole idea. 00:11:01.220 |
And there's folks, psychologists, cognitive scientists 00:11:06.600 |
And they think that one of the special things about humans 00:11:10.020 |
is that we're able to sort of foresee our death, right? 00:11:16.620 |
sort of constantly fear in an instinctual sense, 00:11:19.420 |
respond to all the dangers that are out there, 00:11:21.600 |
but like understand that this ride ends eventually. 00:11:28.520 |
is the force behind all of the creative efforts 00:11:35.280 |
I mean, animals probably don't think of death the same way, 00:11:42.080 |
And you can make it count in many different ways, 00:11:45.020 |
but I think that has a lot to do with creativity 00:11:57.400 |
I think that that could be the second definition: 00:12:10.080 |
you create something that is useful for others, 00:12:13.280 |
that is useful in the future, not just for yourself. 00:12:16.200 |
And I think that's a nice definition of intelligence 00:12:25.240 |
They could be artificial agents that are intelligent. 00:12:30.360 |
- So particular agent, the ripple effects of their existence 00:12:35.360 |
on the entirety of the system is significant. 00:12:38.560 |
So like they leave a trace where there's like a, 00:12:47.400 |
and then you can trace a lot of like nuclear wars 00:13:02.160 |
That's something we humans in a human centric way 00:13:09.080 |
Like that is the secondary effect of our intelligence. 00:13:12.200 |
We've had that long lasting impact on the world, 00:13:14.600 |
but maybe the entirety of physics in the universe 00:13:31.640 |
Is it something that you actually contributed 00:13:34.600 |
because you had something unique to contribute? 00:13:48.080 |
but other people might have solved this equation 00:14:04.560 |
And then that could change the way things move forward 00:14:15.400 |
but we have this local effect that is changing. 00:14:18.160 |
If you weren't there, that would not have happened. 00:14:26.080 |
to computational agents, a fear of mortality? 00:14:35.640 |
So there's a very trivial thing where it's like, 00:14:48.960 |
and somehow encoding a complex representation of that fear, 00:14:58.880 |
I mean, there seems to be something really profound 00:15:01.640 |
about this fear that's not currently encodable 00:15:08.240 |
- Well, I think you're referring to the emotion of fear, 00:15:20.600 |
But sometimes you can have like a fear that's not healthy, 00:15:25.600 |
that paralyzes you, that you can't do anything. 00:15:32.000 |
not caring at all and getting paralyzed because of fear 00:15:37.280 |
which is a little bit more than just logic and it's emotion. 00:15:41.440 |
So now the question is what good are emotions? 00:15:46.160 |
and there are multiple dimensions of emotions 00:15:48.480 |
and they probably do serve as a survival function, 00:15:55.840 |
And fear of death might be a really good emotion 00:15:59.680 |
when you are in danger, that you recognize it. 00:16:02.640 |
Even if it's not logically necessarily easy to derive 00:16:06.360 |
and you don't have time for that logical deduction, 00:16:10.400 |
you may be able to recognize the situation is dangerous 00:16:12.720 |
and this fear kicks in and you all of a sudden perceive 00:16:18.440 |
And I think that, generally, is the role of emotions. 00:16:20.640 |
It allows you to focus what's relevant for your situation. 00:16:24.520 |
And maybe if fear of death plays the same kind of role, 00:16:27.800 |
but if it consumes you and it's something that you think 00:16:32.080 |
then it's not healthy and then it's not productive. 00:16:36.640 |
how to incorporate emotion into a computational agent. 00:16:41.640 |
It almost seems like a silly statement to make, 00:17:02.440 |
Do you ever in your work or like maybe on a coffee break, 00:17:08.600 |
think about what the heck is this thing consciousness 00:17:11.640 |
and is it at all useful in our thinking about AI systems? 00:17:23.160 |
I think into these agents that act like emotions 00:17:35.400 |
Yeah, you can have that kind of a filter mechanism 00:17:49.880 |
but it acts very much like we understand what emotions are. 00:17:56.920 |
modeling hyenas who were trying to steal a kill from lions, 00:18:08.280 |
And they have this behavior that's more complex 00:18:14.040 |
They can band together if there's about 30 of them or so, 00:18:20.040 |
so that they push the lions away from a kill. 00:18:24.040 |
Lions are so much stronger that they could kill a hyena by striking with a paw. 00:18:28.440 |
But when they work together and precisely time this attack, 00:18:34.080 |
And probably there are some states like emotions 00:18:43.640 |
They really want that kill, but there's not enough of them. 00:18:55.600 |
but they also have a strong affiliation between each other. 00:18:59.840 |
And then this is the balance of the two emotions. 00:19:07.360 |
But then this affiliation eventually is so strong 00:19:12.280 |
they act as a unit and they can perform that function. 00:19:18.440 |
that seems to depend on these emotions strongly 00:19:47.040 |
Maybe much of our intelligence is essentially 00:19:52.880 |
And maybe for the creation of intelligent agents, 00:19:55.720 |
we have to be creating fundamentally social systems. 00:20:08.120 |
but they also rub against each other and they push 00:20:15.760 |
And I don't think people act alone very much either, 00:20:30.880 |
because there's this theory, for instance, for language, 00:20:32.520 |
for language origins: where did language come from? 00:20:36.200 |
And it's a plausible theory that first came social systems 00:20:47.380 |
that I scratch your back, you scratch my back, 00:20:53.480 |
that allow you to understand actions in terms of roles 00:20:55.760 |
that can be changed, that's the basis for language, 00:21:06.720 |
So there's a social structure that's fundamental 00:21:13.880 |
you can refer to things that are not here right now, 00:21:17.160 |
and that allows you to then build all the good stuff 00:21:24.600 |
So yeah, I think that very strongly humans are social, 00:21:28.240 |
and that gives us ability to structure the world. 00:21:32.960 |
But also as a society, we can do so much more, 00:21:35.480 |
'cause one person does not have to do everything. 00:21:51.880 |
these robots, little robots that had to navigate 00:21:57.660 |
like maybe a big chasm or some kind of groove, 00:22:03.600 |
But if they grab each other with their gripper, 00:22:09.000 |
like a team, and this way they could get across that. 00:22:18.920 |
Alone they couldn't, but as a team they could. 00:22:24.840 |
- Yeah, and the way you described the system of hyenas, 00:22:29.720 |
Like the problem with humans is they're so complex, 00:22:43.260 |
to create computational systems that mimic that. 00:22:48.260 |
- Yeah, that's exactly why we looked at that. 00:22:55.260 |
but they are not quite as intelligent as, say, baboons, 00:22:59.420 |
which would learn a lot and would be much more flexible. 00:23:02.140 |
The hyenas are relatively rigid in what they can do. 00:23:05.700 |
And therefore, you could look at this behavior, 00:23:08.100 |
like this is a breakthrough in evolution about to happen, 00:23:12.860 |
about social structures, communication, about cooperation, 00:23:17.560 |
and it might then spill over to other things too 00:23:22.660 |
- Yeah, I think the problem with baboons and humans 00:23:24.960 |
is probably too much is going on inside the head, 00:23:35.500 |
and the various motivations that are involved. 00:23:40.060 |
- And we can even quantify possibly their emotional state 00:23:49.580 |
that can be associated with neurotransmitters. 00:23:52.980 |
And we can separate what emotions they might have 00:23:58.980 |
- What to you is the most beautiful, speaking of hyenas, 00:24:04.240 |
what to you is the most beautiful nature-inspired algorithm 00:24:10.460 |
Something maybe earlier on in your work or maybe today? 00:24:19.860 |
So what fascinates me most is that, with computers, 00:24:24.380 |
is that you can get more out than you put in. 00:24:32.540 |
I mean, this happened to me in my freshman year. 00:24:35.380 |
It did something very simple and I was just amazed. 00:24:37.780 |
I was blown away that it would get the number 00:24:51.580 |
And already, say, deep learning neural networks, 00:24:54.040 |
they can learn to recognize objects, sounds, patterns 00:25:06.180 |
you get something like evolutionary algorithms 00:25:10.540 |
They come up with solutions that you did not think of. 00:25:15.180 |
It's so great that we can build systems, algorithms, 00:25:18.680 |
that can be, in some sense, smarter than we are, 00:25:21.560 |
that they can discover solutions that we might miss. 00:25:24.940 |
A lot of times it is because we have, as humans, 00:25:30.060 |
And you don't put those biases into the algorithm 00:25:34.140 |
And evolution is just an absolutely fantastic explorer. 00:25:59.800 |
but really any robot that I've ever worked on, 00:26:41.120 |
'cause you've really created something that's living. 00:26:47.380 |
- It has a life of its own, has the intelligence of its own. 00:26:51.900 |
- And that is, I think, exactly spot on, 00:27:06.500 |
- But you mentioned creativity in this context, 00:27:11.020 |
especially in the context of evolutionary computation. 00:27:14.160 |
So, you know, we don't often think of algorithms 00:27:17.420 |
as creative, so how do you think about creativity? 00:27:20.320 |
- Yeah, algorithms absolutely can be creative. 00:27:25.020 |
They can come up with solutions that you don't think about. 00:27:29.820 |
A couple of requirements: it has to be new. 00:27:32.740 |
It has to be useful and it has to be surprising. 00:27:38.020 |
evolutionary computation, discovering solutions. 00:27:44.340 |
we did this collaboration with MIT Media Lab, 00:27:47.500 |
Caleb Harper's lab, where they had a hydroponic food computer, 00:27:53.820 |
they called it, environment that was completely 00:27:55.900 |
computer controlled, nutrients, water, light, temperature, 00:28:00.940 |
Now, what do you do if you can't control everything? 00:28:07.420 |
how to make plants grow in their own patch of land. 00:28:10.340 |
But if you can control everything, it's too much. 00:28:16.100 |
So we built a system, evolutionary optimization system, 00:28:20.380 |
together with a surrogate model of how plants grow. 00:28:23.740 |
And let this system explore recipes on its own. 00:28:32.260 |
how strong, what wavelengths, how long the light was on. 00:28:40.340 |
For instance, that there was at least six hours of darkness 00:28:44.580 |
like night, because that's what we have in the world. 00:28:47.380 |
And very quickly, the system evolution pushed 00:28:56.100 |
and we had initially had some 200, 300 recipes, 00:29:04.300 |
And everything was like pushed at that limit. 00:29:24.700 |
but also the biologists in the team that anticipated 00:29:28.940 |
that there's some constraints that are in the world. 00:29:36.180 |
And therefore it discovered something that was creative. 00:29:38.980 |
It was surprising, it was useful, and it was new. 00:29:42.900 |
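A minimal sketch of the kind of surrogate-assisted evolutionary loop being described here. The recipe variables, the bounds, and the surrogate are invented stand-ins for illustration, not the actual system:

```python
# Sketch only: the recipe variables, bounds, and surrogate model below are
# invented stand-ins for the real grow-recipe optimization being described.
import random

BOUNDS = {"light_hours": (0.0, 24.0), "intensity": (0.0, 1.0), "temperature": (10.0, 35.0)}

def random_recipe():
    return {k: random.uniform(lo, hi) for k, (lo, hi) in BOUNDS.items()}

def surrogate_growth(recipe):
    # Stand-in for a surrogate model trained on past grow data
    # (e.g. predicting flavor compounds or biomass from the recipe).
    return (recipe["light_hours"] / 24.0) * recipe["intensity"] \
        - 0.01 * abs(recipe["temperature"] - 24.0)

def mutate(recipe, sigma=0.1):
    child = dict(recipe)
    key = random.choice(list(BOUNDS))
    lo, hi = BOUNDS[key]
    child[key] = min(hi, max(lo, child[key] + random.gauss(0, sigma * (hi - lo))))
    return child

population = [random_recipe() for _ in range(50)]
for generation in range(200):
    ranked = sorted(population, key=surrogate_growth, reverse=True)
    parents = ranked[:10]                                   # truncation selection
    population = parents + [mutate(random.choice(parents)) for _ in range(40)]

print(max(population, key=surrogate_growth))
```

Because nothing in this setup forces a dark period, a loop like this drifts toward 24 hours of light, which is exactly the kind of constraint-free surprise described above.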
like the things we think that are fundamental 00:29:49.940 |
or they somehow fit the constraints of the system, 00:29:53.940 |
and all we'll have to do is just remove the constraints. 00:29:56.700 |
Do you ever think about, I don't know how much you know 00:30:00.580 |
about brain-computer interfaces and Neuralink. 00:30:03.540 |
The idea there is, you know, our brains are very limited. 00:30:13.980 |
to speak with the brain, so you're thereby expanding 00:30:19.500 |
the possibilities there, sort of from a very high level 00:30:38.420 |
in terms of our brain size and skull and lifespan 00:30:46.260 |
like a quirk of evolution, and if we just open that up, 00:31:02.860 |
the relationship of evolution and computation 00:31:07.320 |
- Well, at first I'd like to comment on that, 00:31:15.820 |
- Yes, that would be great. - And flexibility 00:31:20.740 |
There are experiments that are done in animals, 00:31:22.380 |
like Mriganka Sur's, where they might be switching 00:31:36.540 |
And there are kids that are born with severe disorders, 00:31:41.180 |
and sometimes they have to remove half of the brain, 00:31:46.180 |
they have the functions migrate to the other parts. 00:31:50.420 |
So I think it's quite possible to hook up the brain 00:31:55.020 |
with different kinds of sensors, for instance, 00:31:57.660 |
and something that we don't even quite understand 00:32:00.340 |
or have today, and different kinds of wavelengths 00:32:02.580 |
or whatever they are, and then the brain can learn 00:32:10.020 |
that these prosthetic devices, for instance, work, 00:32:12.760 |
not because we make them so good and so easy to use, 00:32:19.160 |
And so in that sense, if there's a trouble, a problem, 00:32:26.220 |
Now, going beyond what we have today, can you get smarter? 00:32:31.600 |
Giving the brain more input probably might overwhelm it. 00:32:35.540 |
It would have to learn to filter it and focus 00:32:47.160 |
of external devices like that might be difficult, I think. 00:32:51.560 |
But replacing what's lost, I think, is quite possible. 00:32:55.680 |
- Right, so our intuition allows us to sort of imagine 00:33:05.360 |
if not the most intelligent things on this earth, right? 00:33:07.800 |
So it's hard to imagine if the brain can hold up 00:33:20.740 |
Part of me, this is the Russian thing, I think, 00:33:30.420 |
and huge increase in bandwidth of information 00:33:40.200 |
is not going to produce greater intelligence. 00:33:51.080 |
to fitness or performance, but that could be just 00:33:59.000 |
- No, exactly, you make do with what you have. 00:34:00.680 |
But you don't have to pipe it directly to the brain. 00:34:07.560 |
where we can look up information at any point. 00:34:14.000 |
what happened in that baseball game or whatever it is, 00:34:17.680 |
And I think in that sense, we can learn to utilize tools. 00:34:22.040 |
And that's what we have been doing for a long, long time. 00:34:25.240 |
So, and we are already, the brain is already drinking from a fire hose of information. 00:34:35.680 |
So brain's already good at identifying what matters. 00:34:42.880 |
to some other wavelength or some other kind of modality. 00:34:45.040 |
But I think that the same processing principles 00:34:49.040 |
But also, indeed, this ability to have information 00:34:57.760 |
I mean, kids today at school, they learn about DNA. 00:35:07.600 |
And we don't see a problem where there's too much information 00:35:20.920 |
But this information that we have accumulated, 00:35:23.760 |
it is passed on, and people are picking up on it, 00:35:27.600 |
So it's not like we have reached the point of saturation. 00:35:31.040 |
We have still this process that allows us to be selective 00:35:34.520 |
and decide what's interesting, I think still works, 00:35:37.600 |
even with the more information we have today. 00:35:45.360 |
like so the fire hose of information from Wikipedia. 00:35:49.080 |
So it's like you integrate it directly into the brain 00:35:51.800 |
to where you're thinking, like you're observing the world 00:35:54.240 |
with all of Wikipedia directly piping into your brain. 00:35:59.920 |
I immediately have like the history of who invented 00:36:03.640 |
electricity, like integrated very quickly into. 00:36:11.240 |
if you can integrate that kind of information. 00:36:30.680 |
you're already replacing the thumbs essentially 00:36:38.960 |
the interface by which you interact with the computer. 00:36:48.880 |
I think it's great that we could do something like that. 00:36:51.600 |
I mean, you can, there are devices that read your EEG, 00:36:55.080 |
for instance, and humans can learn to control things 00:37:12.920 |
It still probably has to be built on human terms, 00:37:17.520 |
not to overwhelm them, but utilize what's there 00:37:24.840 |
But, oh, that I think is really quite possible 00:37:27.760 |
and wonderful and could be very much more efficient. 00:37:37.080 |
Is there something, you already mentioned a few examples, 00:37:44.600 |
from the various evolutionary computation systems 00:37:50.880 |
come up along the way, not necessarily the final solutions, 00:38:05.680 |
so good at discovering solutions you don't anticipate. 00:38:09.280 |
A lot of times they are taking advantage of something 00:38:19.160 |
about surprising anecdotes about evolutionary computation. 00:38:22.960 |
A lot of them are indeed, in some software environment, 00:38:30.600 |
- By the way, for people who want to read it, 00:38:33.120 |
It's called "The Surprising Creativity of Digital Evolution, 00:38:36.040 |
"A Collection of Anecdotes from the Evolutionary Computation 00:38:43.200 |
from all the seminal figures in this community. 00:38:45.880 |
You have a story in there that relates to you, 00:38:51.040 |
So can you, I guess, describe that situation, 00:38:59.680 |
than our basic doesn't need to sleep surprise, 00:39:03.080 |
but it was actually done by students in my class, 00:39:06.680 |
in a neural nets evolutionary computation class. 00:39:43.480 |
And then when we look at it, like what was going on, 00:39:45.760 |
was that evolution discovered that if it makes a move 00:39:53.440 |
the other teams' programs just expanded memory 00:39:53.440 |
you don't have to be better within the rules of the game. 00:40:22.720 |
You have to come up with ways to break your opponent's brain 00:40:31.360 |
but through some hack where the brain just is not, 00:40:46.560 |
I mean, this was even Kasparov pointed that out 00:40:49.560 |
that when Deep Blue was playing against Kasparov, 00:40:51.760 |
that it was not playing the same way as Kasparov expected. 00:40:55.440 |
And this has to do with not having the same biases. 00:40:59.720 |
And that's really one of the strengths of the AI approach. 00:41:21.720 |
what are the echoes of the connection to its biological counterpart? 00:41:24.840 |
- A lot of these algorithms really do take motivation 00:41:31.320 |
and take the elements that you believe matter. 00:41:44.760 |
that are very different from what you already have. 00:41:49.040 |
And then you have to have some way of measuring 00:41:50.760 |
how well they are doing and using that measure to select 00:41:55.600 |
who goes to the next generation and you continue. 00:42:04.520 |
So I guess humans in biological systems have DNA 00:42:09.720 |
And so you have to have similar kind of encodings 00:42:16.960 |
So there's a genotype, which is that encoding 00:42:23.040 |
which is the actual individual that then performs the task 00:42:26.400 |
and in an environment can be evaluated how good it is. 00:42:37.320 |
they are strings of numbers or they are some kind of trees. 00:42:43.560 |
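As a concrete illustration of the loop just described (genotype as a string of numbers, fitness evaluation, selection, crossover, mutation), here is a minimal genetic algorithm; the fitness function is a toy stand-in for whatever task the decoded phenotype would be evaluated on:

```python
# A minimal genetic-algorithm loop: genotype = a string (list) of numbers,
# fitness measured in the task, the best go on to the next generation via
# crossover and mutation. The fitness function here is a toy stand-in.
import random

GENES = 10

def fitness(genotype):
    # Stand-in evaluation; in practice the genotype is decoded into a
    # phenotype (an agent, a network, a design) and tested in its environment.
    return -sum(x * x for x in genotype)        # maximize => drive genes toward 0

def crossover(a, b):
    point = random.randrange(1, GENES)
    return a[:point] + b[point:]

def mutate(g, rate=0.1):
    return [x + random.gauss(0, 0.3) if random.random() < rate else x for x in g]

population = [[random.uniform(-5, 5) for _ in range(GENES)] for _ in range(60)]
for generation in range(200):
    population.sort(key=fitness, reverse=True)
    parents = population[:20]                   # selection: best fraction survives
    children = [mutate(crossover(*random.sample(parents, 2))) for _ in range(40)]
    population = parents + children

print(fitness(population[0]))
```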
But they, and DNA in some sense is also a sequence 00:42:56.720 |
like there's folding and interactions that are other 00:43:06.000 |
and we don't know whether they are really crucial. 00:43:16.000 |
it's not necessarily the case that every piece 00:43:20.880 |
There's a lot of baggage 'cause you have to construct it 00:43:25.360 |
and we still have appendix and we have tailbones 00:43:29.360 |
and things like that that are not really that useful. 00:43:31.360 |
If you try to explain them now, it would make no sense, 00:43:35.200 |
But if you think of us as products of evolution, 00:43:39.240 |
They were useful at one point perhaps and no longer are, 00:43:50.800 |
And that is quite difficult if we are limited 00:43:56.280 |
with strings or trees and then we are pretty much limited 00:44:07.520 |
is what we saw in biology, major transitions. 00:44:11.400 |
So that you go from, for instance, single cell 00:44:14.520 |
to multicell organisms and eventually societies. 00:44:34.400 |
we don't even understand it in biology very well 00:44:37.680 |
So it would be really good to look at major transitions 00:44:40.480 |
in biology, try to characterize them a little bit more 00:44:51.480 |
as part of a community, a multicell organism. 00:44:54.720 |
Even though it could reproduce, now it can't alone. 00:44:59.360 |
So there's a push to another level, at least the selection. 00:45:03.400 |
- And how do you make that jump to the next level? 00:45:08.160 |
So we haven't really seen that in computation yet. 00:45:33.440 |
those agents all of a sudden spontaneously decide 00:45:36.240 |
to then be together, and then your entire system 00:45:46.320 |
But also, so you mentioned, I think you mentioned selection. 00:45:51.080 |
and they don't get to live on if they don't do well. 00:46:06.600 |
the computational mechanisms of evolutionary computation are. 00:46:12.720 |
you can take multiple individuals, two usually, 00:46:17.200 |
And you exchange the parts of the representation. 00:46:42.160 |
In computation, we tend to rely more on mutation. 00:46:55.800 |
and make the mutations also follow some principle. 00:47:12.160 |
I mean, evolutionary computation has been around for 50 years, 00:47:23.040 |
We use similar tools to guide evolutionary computation. 00:47:43.960 |
We are very impatient in evolutionary computation today. 00:47:52.120 |
And biological evolution doesn't work quite that way. 00:48:00.040 |
- So I guess we need to add some kind of mating 00:48:07.400 |
into our algorithms to improve the recombination, 00:48:13.680 |
as opposed to mutation doing all of the work. 00:48:18.920 |
Usually in evolutionary computation, we have one goal, 00:48:21.600 |
play this game really well compared to others. 00:48:25.920 |
But in biology, there are many ways of being successful. 00:48:28.680 |
You can build niches, you can be stronger, faster, 00:48:36.800 |
So there are many ways to solve the same problem of survival. 00:48:51.160 |
rather than trying to go from initial population directly, 00:48:54.120 |
or more or less directly to your maximum fitness, 00:49:11.200 |
as more effective than deep learning in certain contexts? 00:49:18.680 |
I don't know if you want to draw any kind of lines 00:49:21.000 |
and distinctions and borders where they rub up 00:49:30.240 |
and they address different kinds of problems. 00:49:32.280 |
And the deep learning has been really successful 00:49:39.800 |
And that means not just data about situations, 00:49:45.120 |
So labeled examples, or there might be predictions, 00:50:03.400 |
where we don't really know what the right answer is. 00:50:07.520 |
but many robotics tasks and actions in the world, 00:50:12.520 |
decision-making, and actual practical applications 00:50:26.640 |
And there you need different kinds of approach. 00:50:30.840 |
Reinforcement learning comes from biology as well. 00:50:46.040 |
but a different timescale because you have a population. 00:50:50.840 |
but an entire population as a whole can discover what works. 00:50:55.200 |
And there you can afford individuals that don't work out. 00:50:58.960 |
They learn, everybody dies and you have a next generation 00:51:04.120 |
So that's the big difference between these methods. 00:51:09.840 |
And in particular, there's often a comparison 00:51:16.640 |
between reinforcement learning and evolutionary computation. 00:51:23.400 |
was about individual learning during their lifetime. 00:51:29.720 |
You don't care about all the individuals that are tested. 00:51:34.520 |
What matters is the last one, the best candidate that evolution produced. 00:51:34.520 |
and reinforcement learning to create engineering solutions, 00:51:55.280 |
And from the point of view, what algorithm you wanna use, 00:52:02.280 |
for every trial, reinforcement learning might be your choice. 00:52:18.640 |
then this population-based method is perhaps a better choice 00:52:23.360 |
because you can try things out that you wouldn't afford 00:52:33.760 |
or reinforcement learning teaching a simulated robot 00:52:43.640 |
but do you find this whole space of applications 00:52:47.560 |
in robotics interesting for evolutionary computation? 00:52:53.520 |
And indeed, there are fascinating videos of that. 00:53:00.560 |
- Between reinforcement learning and evolution. 00:53:03.200 |
- Yes, so if you have a reinforcement learning agent, 00:53:08.000 |
because it wants to walk as long as possible and be stable. 00:53:25.280 |
is something like falling that's controlled. 00:53:36.200 |
something that's hard to discover step by step, 00:53:45.520 |
in the sense that they can serve as stepping stones. 00:53:47.760 |
When you take two of those, put them together, 00:53:52.440 |
And that is a great example of this kind of discovery. 00:54:22.400 |
essentially, if you want to do elegant movement. 00:54:37.520 |
sometimes requires a leap of faith and patience 00:55:02.640 |
not just their controller, but also their body. 00:55:11.160 |
and you're creating creatures that look quite different. 00:55:21.960 |
And what was interesting is that when you evolve 00:55:30.480 |
because they're optimized for that physical setup. 00:55:33.600 |
And these creatures, you start believing them, 00:55:36.160 |
that they're alive because they walk in a way 00:55:41.920 |
- Yeah, there's something subjective also about that. 00:55:47.120 |
especially in the human-robot interaction context. 00:55:57.400 |
There is something about human-robot communication. 00:56:13.360 |
First of all, the eyes, you both look at the same thing 00:56:15.520 |
and dogs communicate with their eyes as well. 00:56:18.080 |
Like if you and a dog want to deal with a person, 00:56:26.200 |
the dog will look at you and then look at the object 00:56:28.080 |
and look back at you, all those kinds of things. 00:56:30.320 |
But there's also just the elegance of movement. 00:56:35.840 |
and all those kinds of mechanisms of communication. 00:56:47.240 |
because it almost seems impossible to hard-code in. 00:56:53.800 |
something like that, but it's essentially choreographed. 00:56:58.120 |
Like if you watch some of the Boston Dynamics videos 00:57:01.760 |
all of that is choreographed by human beings. 00:57:09.380 |
demonstrate a naturalness, an elegance, that's fascinating. 00:57:16.840 |
to learn the kind of scale that you're referring to, 00:57:20.100 |
but the hope is that you could do that in simulation 00:57:25.360 |
if you're able to model the robots efficiently, naturally. 00:57:28.680 |
- Yeah, and sometimes I think that it requires 00:57:38.920 |
because they themselves are doing something similar. 00:57:53.840 |
because we assume that they are similar to us. 00:58:01.440 |
Two robots that were competing, simulation, like I said, 00:58:24.320 |
so they get more, but then they started to pay attention 00:58:32.720 |
where one of the robots, the more sophisticated one, 00:58:48.720 |
So it faked, now I'm using anthropomorphized terms, 00:58:53.400 |
but it made a move towards those other pieces 00:58:55.900 |
in order for the other robot to actually go and get them. 00:58:59.080 |
Because it knew that the last remaining piece of food 00:59:02.400 |
was close and the other robot would have to travel 00:59:16.640 |
to guide it towards bad behavior in order to win. 00:59:29.480 |
of a place for a theory of mind to emerge easier 00:59:45.560 |
And I tend to think that a very simple theory of mind 00:59:50.560 |
will go a really long way for cooperation between agents 00:59:57.520 |
Like it doesn't have to be super complicated. 00:59:59.760 |
I've gotten a chance in the autonomous vehicle space 01:00:07.040 |
or pedestrians interacting with vehicles in general. 01:00:09.920 |
I mean, you would think that there's a very complicated 01:00:14.520 |
but I have a sense, it's not well understood yet, 01:00:21.060 |
There's a social contract there between humans, 01:00:32.000 |
that the human in the car is not going to murder them. 01:00:44.000 |
that's built in that you're mapping your own morality 01:00:50.040 |
And even if they're driving at a speed where you think 01:00:54.080 |
if they don't stop, they're going to kill you, 01:01:08.520 |
And autonomous robots in the human-robot interaction context 01:01:13.800 |
Current robots are much less than what you're describing. 01:01:19.360 |
They're not the kind that fall and discover how to run. 01:01:24.080 |
They're more like, please don't touch anything, 01:01:30.200 |
Treat humans as ballistic objects that you can't, 01:01:47.680 |
need to have a beautiful dance between human and machine, 01:01:50.640 |
where it's not just the collision avoidance problem, 01:01:55.920 |
- Yeah, I think these systems need to be able to predict 01:02:00.000 |
what will happen, what the other agent is going to do, 01:02:02.320 |
and then have a structure of what the goals are 01:02:06.440 |
and whether those predictions actually meet the goals. 01:02:16.200 |
I mean, it doesn't matter whether the pedestrian has a mind, 01:02:19.280 |
it's an object and we can predict what we will do. 01:02:21.840 |
And then we can predict what the states will be 01:02:23.720 |
in the future and whether they are desirable states. 01:02:29.720 |
So it's a relatively simple, functional approach to that. 01:02:40.960 |
and you're trying to get the other agent to do something 01:02:51.880 |
you have to have a sense of where their attention, 01:02:57.840 |
but also like, there's this vision science people 01:03:04.680 |
So figuring out what is the person looking at, 01:03:07.400 |
what is the sensory information they've taken in? 01:03:12.500 |
what are they actually attending to cognitively? 01:03:19.000 |
Like, what is the computation they're performing? 01:03:39.240 |
If you're collaborating to pick up an object, 01:03:48.520 |
and you have to predict that by observing the human. 01:03:52.200 |
And that seems like a machine learning problem 01:04:03.920 |
means the arm will continue moving in this direction. 01:04:09.880 |
and what's the motivation behind the movement of the arm. 01:04:19.280 |
like to predict what the people are going to do, 01:04:26.080 |
what are they trying to, are they exercising? 01:04:30.880 |
you are predicting what people will do in their career. 01:04:40.560 |
but it allows you to then predict their actions, 01:04:52.500 |
from you and others in the world of neuroevolution. 01:04:55.520 |
So maybe first, can you say, what is this field? 01:05:01.140 |
of neural networks and evolutionary computation 01:05:05.480 |
but the early versions were simply using evolution 01:05:18.380 |
Because evolution can evolve these parameters, 01:05:23.980 |
just like any other string of numbers, you can do that. 01:05:27.180 |
And that's useful because some cases you don't have 01:05:30.760 |
those targets that you need to back propagate from. 01:05:34.800 |
And it might be an agent that's running a maze 01:05:39.800 |
You don't, again, you don't know what the right answer is, 01:05:43.080 |
but this way you can still evolve a neural net. 01:05:45.860 |
And neural networks are really good at these tasks 01:05:51.400 |
and they generalize, interpolate between known situations. 01:05:55.280 |
So you want to have a neural network in such a task, 01:05:57.720 |
even if you don't have the supervised targets. 01:06:04.200 |
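A minimal sketch of that weight-level neuroevolution: the flat weight vector is the genotype, and fitness comes from running the agent in its task, so no supervised targets or backpropagation are needed. The rollout function here is a hypothetical placeholder for a real maze or game simulator:

```python
# Sketch of weight-level neuroevolution: the network's weights are the genotype,
# and fitness comes from running the agent in its task, with no gradient targets.
import numpy as np

N_IN, N_HIDDEN, N_OUT = 8, 16, 2
N_WEIGHTS = N_IN * N_HIDDEN + N_HIDDEN * N_OUT

def policy(weights, observation):
    # Decode the flat genotype into the two weight matrices of a tiny network.
    w1 = weights[: N_IN * N_HIDDEN].reshape(N_IN, N_HIDDEN)
    w2 = weights[N_IN * N_HIDDEN:].reshape(N_HIDDEN, N_OUT)
    return np.tanh(np.tanh(observation @ w1) @ w2)

def run_in_environment(weights):
    # Hypothetical rollout: in a real setup this would drive an agent through
    # the maze and return how far it got; here it's just a placeholder score.
    observation = np.ones(N_IN)
    return float(policy(weights, observation).sum())

population = [np.random.randn(N_WEIGHTS) * 0.5 for _ in range(50)]
for generation in range(100):
    scores = np.array([run_in_environment(w) for w in population])
    elite = [population[i] for i in np.argsort(scores)[::-1][:10]]
    population = elite + [elite[np.random.randint(10)] + np.random.randn(N_WEIGHTS) * 0.1
                          for _ in range(40)]
```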
when we have all this deep learning literature, 01:06:12.480 |
The deep learning architectures have become so complex 01:06:16.400 |
that there's little hope for us little humans 01:06:22.880 |
And now we can use evolution to give that design for you. 01:06:25.840 |
And it might mean optimizing hyperparameters, 01:06:36.560 |
but also other aspects like what activation functions 01:06:39.000 |
you use where in the network during the learning process, 01:06:50.040 |
of deep learning experiments could be optimized that way. 01:06:53.840 |
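For the design-level search just mentioned (activation functions, learning rates, sizes), the genome is a configuration rather than a weight vector. In this sketch the search space and the scoring function are invented; in practice the evaluation would be a full training run, which is what makes this search so expensive:

```python
# Illustrative sketch of evolving a network's design choices rather than its
# weights. The search space and the score below are made up; a real
# train_and_evaluate would train the candidate and return validation accuracy.
import random

SEARCH_SPACE = {
    "layers": [2, 3, 4, 6],
    "width": [64, 128, 256, 512],
    "activation": ["relu", "tanh", "elu", "swish"],
    "learning_rate": [1e-2, 3e-3, 1e-3, 3e-4],
}

def random_genome():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def mutate(genome):
    child = dict(genome)
    key = random.choice(list(SEARCH_SPACE))
    child[key] = random.choice(SEARCH_SPACE[key])
    return child

def train_and_evaluate(genome):
    # Stand-in for a real training run; fabricates a score so the sketch runs.
    return genome["layers"] * 0.1 \
        + (1.0 if genome["activation"] == "relu" else 0.5) \
        - abs(genome["learning_rate"] - 1e-3) * 100

population = [random_genome() for _ in range(20)]
for generation in range(10):
    ranked = sorted(population, key=train_and_evaluate, reverse=True)
    parents = ranked[:5]
    population = parents + [mutate(random.choice(parents)) for _ in range(15)]
```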
So that's an interaction between two mechanisms. 01:06:57.000 |
But there's also, when we get more into cognitive science 01:07:00.880 |
and the topics that we've been talking about, 01:07:02.640 |
you could have learning mechanisms at two level timescales. 01:07:12.960 |
And you have this interaction of two timescales. 01:07:15.960 |
And I think that can potentially be really powerful. 01:07:19.400 |
Now in biology, we are not born with all our faculties. 01:07:23.520 |
We have to learn, we have a developmental period. 01:07:38.880 |
But we can, evolution can decide on a starting point 01:07:54.200 |
well, evolution that has produced a good starting point 01:08:00.800 |
with the interaction of, with the environment. 01:08:04.720 |
for constructing brains and constructing behaviors. 01:08:08.040 |
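That two-timescale idea (often discussed as the Baldwin effect) can be sketched as an outer evolutionary loop over starting points and an inner lifetime-learning loop that refines each individual before its fitness is read off. The task and the learning rule here are toy stand-ins:

```python
# Two-timescale sketch: an outer evolutionary loop searches over starting points,
# and an inner "lifetime learning" loop refines each individual before its
# fitness is measured. The task and learning rule are toy stand-ins.
import random

TARGET = [0.3, -1.2, 0.7, 2.0]                      # hypothetical task optimum

def performance(params):
    return -sum((p - t) ** 2 for p, t in zip(params, TARGET))

def lifetime_learning(params, steps=20, step_size=0.05):
    # Inner loop: crude hill climbing standing in for learning during a lifetime.
    current = list(params)
    for _ in range(steps):
        candidate = [p + random.gauss(0, step_size) for p in current]
        if performance(candidate) > performance(current):
            current = candidate
    return current

def fitness(starting_point):
    # Selection sees performance *after* lifetime learning.
    return performance(lifetime_learning(starting_point))

population = [[random.uniform(-3, 3) for _ in range(4)] for _ in range(30)]
for generation in range(50):
    population.sort(key=fitness, reverse=True)
    parents = population[:10]
    population = parents + [[p + random.gauss(0, 0.2) for p in random.choice(parents)]
                            for _ in range(20)]
```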
- I like how you walk back from intelligence. 01:08:12.400 |
Okay, there's a lot of fascinating things to ask here. 01:08:18.520 |
And this is basically this dance between neural networks 01:08:23.880 |
Could go into the category of automated machine learning 01:08:33.560 |
But the topology thing is really interesting. 01:08:36.400 |
I mean, that's not really done that effectively 01:08:40.240 |
or throughout the history of machine learning 01:08:45.020 |
Maybe there's a few components you're playing with. 01:08:52.960 |
How hard is it, do you think, to grow a neural network? 01:09:00.880 |
are more amenable to this kind of idea than others? 01:09:04.680 |
I've seen quite a bit of work on recurrent neural networks. 01:09:06.960 |
Is there some architectures that are friendlier than others? 01:09:10.920 |
And is this just a fun, small scale set of experiments 01:09:15.280 |
or do you have hope that we can be able to grow 01:09:34.280 |
We took a winner of the image captioning competition 01:09:38.480 |
and the architecture, and just broke it into pieces 01:09:42.620 |
and took the pieces, and that was our search space. 01:09:55.880 |
But that's starting from a point that humans have produced. 01:10:05.840 |
The hard part is, there are a couple of challenges. 01:10:10.760 |
What are your elements and how you put them together? 01:10:25.840 |
And another challenge is that in order to evaluate 01:10:28.560 |
how good your design is, you have to train it. 01:10:37.320 |
I mean, deep learning networks may take days to train. 01:10:48.080 |
It will be, but also there's a large carbon footprint 01:10:52.480 |
I mean, we are using a lot of computation for doing it. 01:10:57.560 |
I mean, we have to do some science in order to figure out 01:11:01.680 |
what the right representations are and right operators are, 01:11:11.440 |
and we're making progress on all those fronts. 01:11:20.960 |
But also I think we can create our own architecture 01:11:23.600 |
and our own representations that are even better at that. 01:11:28.840 |
a tiny baby network that grows into something 01:11:32.760 |
and like even the simple data set like MNIST, 01:11:35.440 |
and just like it just grows into a gigantic monster 01:11:39.960 |
that's the world's greatest handwriting recognition system? 01:11:48.560 |
and then systematically expanding it to a larger one. 01:11:52.000 |
Your elements are already there and scaling it up 01:11:56.600 |
So again, evolution gives you that starting point 01:11:59.400 |
and then there's a mechanism that gives you the final result 01:12:04.600 |
But you could also simulate the actual growth process. 01:12:11.000 |
And like I said before, evolving a starting point 01:12:18.440 |
There's not that much work that's been done on that yet. 01:12:21.960 |
We need some kind of a simulation environment 01:12:33.080 |
- Sorry, the interaction between neural networks? 01:12:35.560 |
- Yeah, the neural networks that you're creating, 01:12:37.320 |
interacting with the world and learning from these sequences 01:12:42.120 |
of interactions, perhaps communication with others. 01:12:55.400 |
I mean, one of the powerful things about evolution on Earth 01:13:09.480 |
and one neural network being able to destroy another one. 01:13:31.080 |
we had simulated hyenas and simulated zebras. 01:13:42.880 |
And when they actually stumbled upon the zebra, 01:14:28.360 |
And they gradually developed more complex behaviors, 01:14:55.400 |
that supports all of these complex behaviors. 01:15:00.080 |
we've already seen it in this predator-prey scenario. 01:15:03.680 |
- First of all, it's fascinating to think about this context 01:15:09.640 |
So I've studied Tesla Autopilot for a long time. 01:15:16.680 |
of an AI system that's operating in the real world. 01:15:23.400 |
And I'm not sure if you're familiar with that system much, 01:15:30.120 |
And there's a multi-task network, multi-headed network 01:15:34.880 |
where there's a core, but it's trained on particular tasks 01:15:41.760 |
Is there some lessons from evolutionary computation 01:15:52.440 |
- Yes, it's a very good problem for neuroevolution. 01:15:55.680 |
And the reason is that when you have multiple tasks, 01:16:01.600 |
So let's say you're learning to classify X-ray images 01:16:09.120 |
So you have one task is to classify this disease 01:16:13.440 |
and another one, this disease, another one, this one. 01:16:18.040 |
that forces certain kinds of internal representations 01:16:24.480 |
as a helpful starting point for the other tasks. 01:16:27.240 |
So you are combining the wisdom of multiple tasks 01:16:36.120 |
simultaneously other tasks than you would by one task alone. 01:16:39.440 |
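A minimal version of that multi-headed arrangement: one shared trunk whose representation is shaped by every task's loss at once, with a small head per task. The sizes and the three tasks here are made up for illustration; this is a generic sketch, not any particular production system:

```python
# Minimal multi-task network sketch: a shared trunk plus one head per task,
# so every task's loss shapes the shared representation. Sizes are made up.
import torch
import torch.nn as nn

class MultiTaskNet(nn.Module):
    def __init__(self, n_inputs=256, n_shared=128, task_classes=(2, 2, 5)):
        super().__init__()
        self.trunk = nn.Sequential(                 # shared representation
            nn.Linear(n_inputs, n_shared), nn.ReLU(),
            nn.Linear(n_shared, n_shared), nn.ReLU(),
        )
        self.heads = nn.ModuleList(                 # one classifier per task
            [nn.Linear(n_shared, c) for c in task_classes]
        )

    def forward(self, x):
        shared = self.trunk(x)
        return [head(shared) for head in self.heads]

# Training sums the per-task losses so every task shapes the shared trunk:
model = MultiTaskNet()
x = torch.randn(8, 256)
outputs = model(x)                                  # one logit tensor per task
targets = [torch.randint(0, o.shape[1], (8,)) for o in outputs]
loss = sum(nn.functional.cross_entropy(o, t) for o, t in zip(outputs, targets))
loss.backward()
```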
- Which is a fascinating idea in itself, yeah. 01:16:43.440 |
I mean, you use knowledge of domains that you know 01:16:45.640 |
in new domains, and certainly neural networks can do that. 01:16:54.760 |
Now there's architectural design that allow you to decide 01:17:07.600 |
And my team, Elliot Meyerson's worked on that in particular, 01:17:16.720 |
And we're getting to understand how that's constructed 01:17:23.680 |
that supports multiple different heads, like you said. 01:17:33.960 |
You don't build a representation just for one task. 01:17:39.680 |
not only so that you can do better in one task 01:17:50.200 |
and that helps you in all kinds of future challenges. 01:17:54.040 |
- And so you're trying to design a representation 01:18:03.120 |
and that's, again, a surprise that Elliot found, 01:18:06.000 |
was that those tasks don't have to be very related. 01:18:09.800 |
You can learn to do better vision by learning language 01:18:14.120 |
or better language by learning about DNA structure. 01:18:22.680 |
- The world rhymes, even if it's very disparate fields. 01:18:31.440 |
'cause you've also, on the computational neuroscience side, 01:18:44.440 |
What's more, maybe there's a bunch of ways to ask this, 01:18:52.320 |
the human language system or the human vision system, 01:18:56.080 |
or the equivalent of, in the AI space, language and vision, 01:19:11.640 |
I think, is a fascinating direction in the future. 01:19:15.200 |
So you have datasets where there's visual component 01:19:17.480 |
as well as verbal descriptions, for instance, 01:19:20.040 |
and that way you can learn a deeper representation, 01:19:29.480 |
I mean, recognizing objects or even understanding sentences, 01:19:35.800 |
but where it becomes, where the challenges are 01:19:43.640 |
and predicting what will happen, the relationships. 01:19:48.240 |
And language, obviously, it's what is being said, 01:19:52.760 |
And the meaning doesn't stop at who did what to whom. 01:20:04.720 |
in order to understand a sentence very much fully. 01:20:10.880 |
when you bring in all the world knowledge to understand it. 01:20:22.240 |
will give you already a lot deeper understanding 01:20:26.880 |
And I think that that's where we're going very soon. 01:20:42.800 |
I think that will be the next step in the next few years. 01:20:48.000 |
was the AI community started to dip their toe 01:20:56.920 |
that are now doing stuff with images, with vision, 01:21:03.920 |
I mean, right now it's like these little explorations, 01:21:07.800 |
But maybe at some point we'll just dive into the pool 01:21:11.800 |
and it'll just be all seen as the same thing. 01:21:17.960 |
whether we don't think about vision correctly. 01:21:26.720 |
and because we have cameras that take in pixels 01:21:33.280 |
that we don't sufficiently think about vision as language. 01:21:43.760 |
sorry, that language is fundamental to everything, 01:21:59.360 |
- Yeah, well, earlier we talked about the social structures 01:22:02.600 |
and that may be what's underlying the language, 01:22:06.720 |
and then language has been added on top of that. 01:22:08.720 |
- Language emerges from the social interaction. 01:22:17.560 |
and also when we think about various abstract concepts, 01:22:37.520 |
It's probably possible that it predated language even. 01:22:45.840 |
And language is an interesting development from mastication, 01:22:45.840 |
that actually can produce sound to manipulate them. 01:23:07.480 |
Sign language could have been the original proto-language. 01:23:10.680 |
We don't quite know, but the language is more fundamental 01:23:17.360 |
And I think that it comes from those representations. 01:23:19.960 |
Now, in current world, they are so strongly integrated, 01:23:26.600 |
it's really hard to say which one is fundamental. 01:23:28.840 |
You look at the brain structures and even visual cortex, 01:23:32.840 |
which is supposed to be very much just vision. 01:23:35.200 |
Well, if you are thinking of semantic concepts, 01:23:38.040 |
if you're thinking of language, visual cortex lights up. 01:23:41.560 |
It's still useful, even for language computations. 01:23:44.560 |
So there are common structures underlying them. 01:23:53.200 |
well, that's not so far from understanding relationships 01:23:56.880 |
So I think that that's how they are integrated. 01:23:59.160 |
- Yeah, and there's dreams, and once we close our eyes, 01:24:02.400 |
there's still a world in there somehow operating 01:24:24.840 |
or if we came on Mars or maybe in other solar system, 01:24:30.960 |
that us humans would not be able to detect it 01:24:48.380 |
our frameworks of thinking would not detect it 01:24:56.760 |
I think were part of developing this alien language 01:25:12.520 |
which is communicating from across a very big distance, 01:25:21.320 |
do you think we'd be able to find a common language 01:25:35.440 |
would they be able to find a common language? 01:25:41.000 |
I mean, I think a lot of people who are in computing, 01:25:45.360 |
they got into it because they were fascinated 01:25:47.320 |
with science fiction and all of these options. 01:25:50.740 |
I mean, Star Trek generated all kinds of devices 01:25:56.560 |
and it's a great motivator to think about things like that. 01:26:00.700 |
And I, so one, and again, being a computational scientist 01:26:14.520 |
where the agents actually evolve communication, 01:26:21.280 |
that they communicate, they signal and so on, 01:26:40.880 |
And then we can start asking that kind of questions. 01:26:55.920 |
- So machine translation of evolved languages, 01:26:59.000 |
and so like languages that evolve come up with, 01:27:02.040 |
can we translate, like I have a Google Translate 01:27:09.800 |
we have perhaps an idea what an alien language 01:27:14.120 |
might be like, the space of where those languages can be. 01:27:17.200 |
'Cause we can set up their environment differently. 01:27:22.040 |
You can have all kinds of, societies can be different, 01:27:32.880 |
where those languages are and what the difficulties are. 01:27:48.240 |
Is there a ways to evolve a communication scheme for, 01:27:51.920 |
there's a field you can call it like explainable AI, 01:28:01.640 |
but for some of them to be able to talk to you also. 01:28:05.440 |
So to evolve a way for agents to be able to communicate 01:28:11.080 |
Do you think that there's possible mechanisms 01:28:25.640 |
it allows us to together again, achieve common goals. 01:28:55.400 |
But that's one thing that there would have to be 01:29:01.920 |
whether that communication is at least useful. 01:29:04.220 |
They may be saying things just to make us feel good 01:29:16.760 |
to really make sure that that translation is critical. 01:29:37.440 |
and logging what you're doing in some interpretable way. 01:29:41.480 |
I think a fascinating topic, yeah, to do that. 01:29:54.440 |
for integrating yourself and succeeding in a social network 01:30:04.540 |
are evolutionary advantages in an environment 01:30:09.540 |
where there's a network of intelligent agents. 01:30:19.840 |
all those kinds of things might be more beneficial. 01:30:23.100 |
That's the old open question about good versus evil. 01:30:25.900 |
But I tend to, I mean, I don't know if it's a hopeful, 01:30:29.240 |
maybe I'm delusional, but it feels like karma is a thing, 01:30:43.780 |
In a society that's not highly constrained on resources. 01:30:58.320 |
like survival, shelter, food, all those kinds of things. 01:31:14.900 |
But it's scary to think about the Turing test. 01:31:19.900 |
AI systems that will eventually pass the Turing test 01:31:23.940 |
will be ones that are exceptionally good at lying. 01:31:31.260 |
First of all, so from somebody who studied language 01:31:34.220 |
and obviously are not just a world expert in AI, 01:31:37.860 |
but somebody who dreams about the future of the field, 01:31:41.540 |
do you hope, do you think there'll be human level 01:31:45.640 |
or superhuman level intelligences in the future 01:31:51.240 |
- Well, I definitely hope that we can get there. 01:32:28.780 |
but not in the foreseeable future when we are building it. 01:32:38.840 |
And your point about lying is very interesting. 01:32:58.560 |
and don't participate in that risky behavior, 01:33:02.040 |
but they walk in later and join the party after the kill. 01:33:06.960 |
And there are even some that may be ineffective 01:33:37.480 |
you also have opportunity for cheaters and liars. 01:34:02.280 |
when we build these systems that autonomously learn 01:34:09.320 |
because that's the best way of getting things done. 01:34:12.040 |
But there probably are also intelligent agents 01:34:25.720 |
say we build an AGI system and deploying millions of them, 01:34:37.840 |
an evolutionary computation perspective, a lot of variation. 01:34:41.280 |
Sort of like diversity in all its forms is beneficial 01:34:46.280 |
even if some people are assholes or some robots are assholes. 01:35:05.760 |
you see diversity is the one fundamental thing 01:35:09.040 |
And absolutely, also, it's not always good diversity. 01:35:32.160 |
So I think that as long as we can tolerate some of that, 01:35:38.480 |
because it's so much more efficient to do something 01:35:52.160 |
even though they were considered proper behavior before. 01:36:00.000 |
I think it's a good idea to be able to tolerate some of that, 01:36:05.080 |
because eventually we might turn into something better. 01:36:07.480 |
- So yeah, I think this is a message to the trolls 01:36:23.120 |
I don't know if you're connected to this field, 01:36:35.320 |
in the evolutionary computation perspective as life? 01:36:52.440 |
- Different levels of definition and goals there. 01:37:02.360 |
that build a society that again, achieves a goal. 01:37:05.080 |
And it might be robots that go into a building 01:37:07.000 |
and clean it up or after an earthquake or something. 01:37:10.360 |
You can think of that as an artificial life problem 01:37:14.600 |
Or you can really think of it, artificial life, 01:37:24.640 |
And like I said, in artificial life conference, 01:37:26.840 |
there are branches of that conference sessions 01:37:29.760 |
of people who really worry about molecular designs 01:37:36.760 |
where eventually you get something self-replicating 01:37:44.840 |
And I think that artificial life is a great tool 01:38:08.080 |
we may have understood that there's a pivotal point 01:38:12.200 |
They discovered cooperation and coordination. 01:38:14.880 |
Artificial life simulations can identify that 01:38:22.920 |
And also societies can be seen as a form of life itself. 01:38:27.920 |
I mean, we're not talking about biological evolution, 01:38:31.920 |
Maybe some of the same phenomena emerge in that domain 01:38:39.440 |
and understanding could help us build better societies. 01:38:50.880 |
that maybe the organisms, ideas of the organisms, 01:38:56.680 |
it's almost like reframing what is exactly evolving. 01:39:04.560 |
as the contents of our minds is the interesting thing. 01:39:13.040 |
And that maybe has more power on the trajectory 01:39:16.240 |
of life on earth than does biological evolution. 01:39:21.000 |
- Yes, and it's fascinating, like I said before, 01:39:30.120 |
with this meme evolution, literature, internet. 01:39:35.120 |
We understand DNA and we understand fundamental particles. 01:39:39.040 |
We didn't start that way a thousand years ago 01:39:41.400 |
and we haven't evolved biologically very much, 01:39:47.040 |
And therefore AI can be seen also as one such step 01:39:53.440 |
And it's part of that meme evolution that we created, 01:39:56.360 |
even if our biological evolution does not progress as fast. 01:39:59.640 |
- And us humans might only be able to understand so much. 01:40:09.520 |
Maybe like the physics of the universe is operating, 01:40:14.760 |
maybe it's operating in much higher dimensions. 01:40:17.440 |
Maybe we're totally, because of our cognitive limitations, 01:40:21.240 |
are not able to truly internalize the way this world works. 01:40:25.720 |
And so we're running up against the limitation 01:40:34.520 |
that would be able to understand much deeper, 01:40:45.400 |
- Translation, and generally we can deal with the world, 01:40:48.200 |
even if you don't understand all the details, 01:40:52.120 |
most of us don't know all the structures underneath 01:40:57.280 |
especially new cars that you don't quite fully know, 01:40:59.880 |
but you have the interface, you have an abstraction of it 01:41:02.720 |
that allows you to operate it and utilize it. 01:41:12.160 |
- I have to ask about beautiful artificial life systems 01:41:18.120 |
or evolutionary computation systems, cellular automata to me. 01:41:22.640 |
Like I remember it was a game changer for me early on 01:41:31.400 |
It's beautiful how much complexity can emerge 01:41:44.440 |
is such a powerful illustration and also humbling 01:41:48.320 |
because it feels like I personally, from my perspective, 01:41:58.400 |
how complexity can emerge from such simplicity. 01:42:08.520 |
Do you think about cellular automata? 01:42:21.640 |
It's almost like it's just evolving and creating. 01:42:38.000 |
Are there some of those systems that you find beautiful? 01:42:41.920 |
And similarly, evolution does not have a goal. 01:42:52.720 |
and therefore we have something that we perceive as progress 01:42:56.080 |
but that's not what evolution is inherently set to do. 01:43:03.800 |
What's fascinating is how from a simple set of rules or simple mappings 01:43:08.800 |
complexity can emerge. 01:43:14.520 |
So it's a question of emergence and self-organization. 01:43:17.720 |
And the game of life is one of the simplest ones 01:43:21.500 |
and very visual and therefore it drives home the point 01:43:25.680 |
that it's possible that nonlinear interactions 01:43:31.240 |
and these kinds of complexity can emerge from them. 01:43:34.720 |
And biology and evolution is along the same lines. 01:43:40.080 |
DNA, if you really think of it, it's not that complex. 01:43:49.880 |
whatever string or tree representation we have 01:43:52.640 |
and the operations, the amount of code that's required 01:43:57.560 |
to manipulate those, is really, really little. 01:44:02.440 |
So the question is how complexity emerges from such simple principles, 01:44:11.440 |
and how to guide it and direct it so that it becomes useful. 01:44:15.520 |
And the game of life is fascinating to look at, 01:44:17.920 |
and in evolution, all the forms that come out are fascinating. 01:44:24.040 |
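To make that point concrete, here is a minimal sketch of Conway's Game of Life in Python, the cellular automaton discussed above, showing how one simple local rule produces complex global behavior. The grid size, the glider seed, and the number of generations are illustrative choices, not anything specified in the conversation.

    # Minimal Game of Life sketch: a cell survives with 2 or 3 live
    # neighbors and a dead cell becomes alive with exactly 3.
    import numpy as np

    def step(grid: np.ndarray) -> np.ndarray:
        """Apply one generation of the Game of Life rules to a 0/1 grid."""
        # Count live neighbors by summing the eight shifted copies of the
        # grid (edges wrap around via np.roll).
        neighbors = sum(
            np.roll(np.roll(grid, dy, axis=0), dx, axis=1)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
        )
        # Birth on exactly 3 neighbors; survival on 2 for live cells.
        return ((neighbors == 3) | ((grid == 1) & (neighbors == 2))).astype(int)

    # Seed a 20x20 grid with a single glider and run a few generations.
    grid = np.zeros((20, 20), dtype=int)
    for r, c in [(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)]:
        grid[r, c] = 1

    for generation in range(8):
        grid = step(grid)
    print(grid.sum(), "live cells after 8 generations")

Nothing in the rule mentions "gliders," yet the seeded pattern keeps translating across the grid indefinitely, which is exactly the kind of emergent structure being described here.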
- And efficient, because if you actually think about 01:44:27.040 |
each of the cells in the game of life as a living organism, 01:44:38.920 |
we wanna kinda hurry up and make sure we take evolution 01:44:43.920 |
down a trajectory that is a little bit more efficient. 01:44:49.360 |
- And that touches upon something we talked about earlier 01:44:51.240 |
that evolutionary computation is very impatient, 01:44:57.160 |
whereas biology has a lot of time, deep time. 01:45:04.520 |
One great example of this is the novelty search. 01:45:08.960 |
So evolutionary computation where you don't actually 01:45:12.440 |
specify a fitness goal, something that is your actual objective, 01:45:19.640 |
but instead reward solutions that are different from what you've seen before. 01:45:25.160 |
You actually discover things that are interesting 01:45:28.520 |
Ken Stanley and Joel Lehman did this one study 01:45:36.600 |
where your robot actually failed in all kinds of ways 01:45:47.720 |
that were different, and through those differences you were able to discover something that works. 01:45:56.600 |
you have to utilize what is there in a domain 01:46:00.760 |
So you have encoded the fundamentals of your world 01:46:05.760 |
and then you make changes to those fundamentals, 01:46:28.120 |
but among those changes, there were some of these gems 01:46:33.160 |
that you have to recognize from the outside and make useful. 01:46:38.640 |
So if you code in the right kind of principles, 01:46:41.560 |
ones that I think encode the structure of the domain, 01:46:45.640 |
then you will get to these solutions and discover them. 01:46:48.480 |
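To sketch the idea of novelty search described here, the toy loop below selects individuals purely by how different their behavior is from an archive of previously seen behaviors, with no task objective at all. The behavior descriptor, the mutation operator, and all parameters are illustrative assumptions; this is not the actual setup used by Stanley and Lehman.

    # Toy novelty search: reward being different, not being "good".
    import random

    def behavior(genome):
        # Toy behavior descriptor; in a robot task this might be the
        # final (x, y) position reached.
        return (sum(genome[::2]), sum(genome[1::2]))

    def novelty(b, archive, k=5):
        # Novelty = mean distance to the k nearest behaviors seen so far.
        dists = sorted(((b[0]-a[0])**2 + (b[1]-a[1])**2) ** 0.5 for a in archive)
        return sum(dists[:k]) / max(1, min(k, len(dists)))

    def mutate(genome, sigma=0.1):
        return [g + random.gauss(0, sigma) for g in genome]

    population = [[random.uniform(-1, 1) for _ in range(10)] for _ in range(20)]
    archive = [behavior(g) for g in population]

    for generation in range(50):
        # Rank by novelty only; there is no fitness objective anywhere.
        scored = sorted(population, key=lambda g: novelty(behavior(g), archive),
                        reverse=True)
        survivors = scored[:10]                              # keep the most novel half
        archive.extend(behavior(g) for g in survivors[:3])   # grow the archive
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(10)]

The key design choice is that selection pressure comes from distance to the archive rather than from a fitness function, so the population keeps spreading into new regions of behavior space, which is where the unexpected gems can show up.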
- It feels like that might also be a good way to live life. 01:46:52.760 |
So let me ask, do you have advice for young people today 01:46:57.760 |
about how to live life or how to succeed in their career 01:47:04.680 |
From an evolutionary computation perspective. 01:47:35.480 |
- It's possible; you have to make a bit of an effort 01:47:35.480 |
'cause it's not easy, but the rewards are wonderful. 01:47:43.840 |
- Thinking about an objective function of new experiences, 01:47:49.400 |
like, what is the maximally new experience I could have today? 01:47:55.560 |
And that novelty, optimizing for novelty 01:47:59.360 |
for some period of time, might be a very interesting way 01:48:01.800 |
to sort of maximally expand the set of experiences 01:48:06.320 |
you've had, and then, grounded in that perspective, 01:48:10.360 |
figure out what will be the most fulfilling trajectory. 01:48:15.360 |
And of course, the flip side of that is where I come from. 01:48:21.000 |
Having too much choice has a detrimental effect, I think, 01:48:40.280 |
whereas if I have only one of something, I will appreciate it deeply, 01:48:48.640 |
and I've been pigging out on delicious, incredible meat. 01:48:51.560 |
I've been fasting a lot, so I need to do that again. 01:48:56.320 |
and one thing you notice when fasting is that the first taste of food is incredible. 01:49:00.760 |
So the downside of exploration is that, somehow, 01:49:11.080 |
you don't get to experience any one thing deeply. 01:49:19.760 |
That could be just a very human, peculiar flaw. 01:49:23.680 |
- Yeah, I didn't mean that you superficially explore. 01:49:28.360 |
- Yeah, so you don't have to explore 100 things, 01:49:34.080 |
but each one should be a deep enough dive that you gain an understanding. 01:49:56.280 |
and they may stay on some career for a decade. 01:50:01.840 |
You're not predetermined to stay where you are. 01:50:17.200 |
To explore and gain the experience that you can use later, 01:50:19.360 |
you probably have to spend some effort. Like I said, it's not easy. 01:50:24.440 |
Now, also at some point, when you have this diversity, 01:50:45.240 |
you commit, and you are pursuing it because you figured out 01:50:52.760 |
that it's what you really want. But you asked what's the advice for young people. 01:50:57.520 |
And then beyond that, after that exploration, comes the commitment. 01:51:03.200 |
And even there you can switch multiple times, 01:51:05.800 |
but I think that diversity and exploration are fundamental 01:51:09.120 |
to having a successful career, as is concentration. 01:51:15.520 |
But you are in a better position to make that choice once you have explored. 01:51:21.240 |
So exploration precedes commitment, but both are beautiful. 01:51:24.920 |
So again, from an evolutionary computation perspective, 01:51:32.440 |
you create agents that live and die in order to come up with different solutions in simulation. 01:51:35.780 |
What do you think, from that individual agent's perspective, 01:51:43.880 |
an agent who's going to be dead, unfortunately, one day too soon, 01:51:47.600 |
what do you think is the why of why that agent came to be? 01:52:11.400 |
- Some of them are foundations for further improvement. 01:52:16.400 |
And even those that are perhaps going to die out 01:52:20.240 |
add to the diversity, where potential energy is potential solutions. 01:52:24.680 |
In biology, we see a lot of species die off naturally 01:52:29.880 |
I mean, they were a really good solution for a while, 01:52:31.880 |
but then it turned out not to be such a good solution, 01:53:12.360 |
or they were stepping stones for other things that could come after. 01:53:16.400 |
- But it still feels from an individual perspective 01:53:21.080 |
But even if I'm just a little cog in the giant machine, 01:53:31.540 |
Do you find beauty in being part of the giant machine? 01:53:43.600 |
- That said, do you ponder your own mortality as an individual agent? 01:53:56.760 |
- Well, certainly more now than when I was a youngster 01:54:00.740 |
and did skydiving and paragliding and all these things. 01:54:13.980 |
I think it's true that younger folks are more fearless in many ways. 01:54:13.980 |
I mean, older folks don't necessarily think that way, 01:54:32.240 |
but younger folks do, and it's kind of counterintuitive. 01:54:32.240 |
So you have done your exploration, 01:54:42.480 |
and that's how I think a lot of people, myself included, 01:54:55.640 |
would like to leave a legacy and a bit of an impact even after the agent is gone. 01:55:05.700 |
- I don't think there's a better way to end it. 01:55:13.720 |
Risto, you're an example of how vibrant the community at UT Austin and Austin is. 01:55:19.520 |
And this whole field seems profound philosophically. 01:55:25.640 |
Thank you for talking today, and for wasting all of your valuable time with me. 01:55:36.920 |
and thank you to the Jordan Harbinger Show, 01:55:46.040 |
Check them out in the description to support this podcast. 01:55:55.640 |
And now let me leave you with some words from Carl Sagan.