Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208
Chapters
0:00 Introduction
3:04 Collective intelligence
9:46 The origin of intelligence in the human brain
22:59 How intelligent life evolved on Earth
33:58 Why humans are special in the universe
37:16 Neurons
41:30 A Thousand Brains theory of intelligence
50:10 How to build superintelligent AI
68:10 Sam Harris and existential risk of AI
80:12 Neuralink
87:02 Will AI prevent the self-destruction of human civilization?
92:34 Communicating human knowledge to alien civilizations
102:50 Devil's advocate
107:45 Human nature
116:07 Hardware for AI
122:46 Advice for young people
00:00:00.000 |
The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain. 00:00:09.500 |
He previously wrote a seminal book on the subject titled "On Intelligence" and recently a new book called "A Thousand Brains" which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book "brilliant and exhilarating." 00:00:29.100 |
I can't read those two words and not think of him saying it in his British accent. 00:00:34.200 |
Quick mention of our sponsors, Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. 00:00:41.800 |
Check them out in the description to support this podcast. 00:00:44.700 |
As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all of our knowledge and all our creations will go with us. 00:00:59.100 |
He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. 00:01:16.200 |
The main message of this advertisement is not that we are here, but that we were once here. 00:01:26.300 |
This little difference somehow was deeply humbling to me, that we may, with some non-zero likelihood, destroy ourselves, and that an alien civilization thousands or millions of years from now may come across this knowledge store, and they would only with some low probability even notice it, not to mention be able to interpret it. 00:01:46.500 |
And the deeper question here for me is what information in all of human knowledge is even essential? 00:01:55.900 |
This thought experiment forces me to wonder what are the things we've accomplished and are hoping to still accomplish that will outlive us? 00:02:03.100 |
Is it things like complex buildings, bridges, cars, rockets? 00:02:08.300 |
Is it ideas like science, physics, and mathematics? 00:02:13.800 |
Is it computers, computational systems, or even artificial intelligence systems? 00:02:19.100 |
I personally can't imagine that aliens wouldn't already have all of these things. 00:02:27.500 |
To me, the only unique thing we may have is consciousness itself, and the actual subjective experience of suffering, of happiness, of hatred, of love. 00:02:38.100 |
If we can record these experiences in the highest resolution directly from the human brain, such that aliens will be able to replay them, that is what we should store and send as a message. 00:02:49.100 |
Not Wikipedia, but the extremes of conscious experiences. 00:02:54.000 |
The most important of which, of course, is love. 00:02:57.300 |
This is the Lex Fridman Podcast, and here is my conversation with Jeff Hawkins. 00:03:07.100 |
Do you think there's still neurons in your brain that remember that conversation? 00:03:14.300 |
Like there's a Lex neuron in your brain that just like finally has a purpose? 00:03:18.300 |
I do remember our conversation, or I have some memories of it, and I formed additional memories of you in the meantime. 00:03:24.900 |
I wouldn't say there's a neuron or a set of neurons in my brain that know you. 00:03:29.500 |
There are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you and the world. 00:03:37.200 |
And whether they're the exact same synapses that were formed two years ago, it's hard to say, because these things come and go all the time. 00:03:42.900 |
But one thing we know about brains is that when you think of things, you often erase the memory and rewrite it again. 00:03:49.100 |
So, yes, I have a memory of you, and it's instantiated in synapses. 00:03:55.500 |
So we have a model of the world in our head, and that model is continually being updated. 00:04:08.400 |
And so the model includes where we live, the places we know, the words, the objects in the world. 00:04:14.400 |
It's just a monstrous model and it's constantly being updated. 00:04:19.000 |
So are animals, so are other physical objects, so are events we've done. 00:04:23.900 |
So there's no special place in my mind for the memories of humans. 00:04:29.900 |
I mean, obviously, I know a lot about my wife and friends and so on. 00:04:37.600 |
But it's not like there's a special place where the humans are over here. 00:04:41.300 |
But we model everything and we model other people's behaviors, too. 00:04:44.800 |
So if I said there's a copy of your mind in my mind, it's just because I've learned how humans behave. 00:04:56.100 |
Well, I just also mean the collective intelligence of the human species. 00:05:02.400 |
I wonder if there is something fundamental to the brain that enables that. 00:05:11.000 |
You're actually jumping into a lot of big topics. 00:05:14.100 |
Like collective intelligence is a separate topic that a lot of people like to talk about. 00:05:25.200 |
But from our research point of view, and so again, let's just talk. 00:05:39.300 |
And so you can apply that algorithm to lots of different problems, but it's all the same algorithm underneath. 00:05:47.400 |
So from our point of view, we wouldn't look for these special circuits someplace buried in your brain that might be related to other, you know, understanding other humans. 00:05:56.100 |
It's more like, you know, how do we build a model of anything? 00:06:00.700 |
And humans are just another part of the things we understand. 00:06:03.300 |
So there's nothing in the brain that knows about the emergent phenomenon of collective intelligence? 00:06:16.900 |
Well, I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence. 00:06:22.600 |
So there are some, you know, prior assumptions about the world we're going to live in when we're born. 00:06:30.100 |
And so, you know, did we evolve to take advantage of those situations? 00:06:36.000 |
But again, we study only part of the brain, the neocortex. 00:06:38.500 |
There are other parts of the brain that are very much involved in societal interactions and human emotions, and how we interact, and even societal issues about, you know, how we interact with other people, when we support them, when we're reading, things like that. 00:06:55.800 |
I mean, certainly the brain is a great place to study intelligence. 00:07:01.900 |
I wonder if it's the fundamental atom of intelligence. 00:07:06.900 |
Well, I would say it's absolutely an essential component, even if you believe in collective intelligence as, hey, that's where it's all happening. 00:07:15.400 |
That's what we need to study, which I don't believe that, by the way. 00:07:17.700 |
I think it's really important, but I don't think that is the thing. 00:07:20.700 |
But even if you do believe that, then you have to understand how the brain works in doing that. 00:07:26.600 |
It's more like we are intelligent individuals, and together our intelligence is magnified much more. 00:07:34.400 |
We can do things which we couldn't do individually. 00:07:36.200 |
But even as individuals, we're pretty damn smart and we can model things and understand the world and interact with it. 00:07:42.500 |
So to me, if you're going to start someplace, you need to start with the brain. 00:07:47.100 |
Then you could say, well, how do brains interact with each other? 00:07:51.800 |
And how do we share models when I've learned something about the world? 00:07:56.500 |
Which is really what sort of communal intelligence is. 00:08:01.500 |
We've had different experiences in the world. 00:08:06.700 |
You've learned something about physics and you can impart that to me. 00:08:10.200 |
But it all comes down to even just the epistemological question of, well, what is knowledge and how do you represent it in the brain? 00:08:19.000 |
And that's where it's going to reside, right? 00:08:22.900 |
It's obvious that human collaboration, human interaction is how we build societies. 00:08:29.500 |
But some of the things you talk about and work on, some of those elements of what makes up an intelligent entity is there with a single person. 00:08:41.100 |
I mean, we can't deny that the brain is the core element here in, at least I think it's obvious. 00:08:47.800 |
The brain is the core element in all theories of intelligence. 00:08:54.300 |
We interact, we share, we build upon each other's work. 00:09:01.300 |
You know, there would be no intelligence without brains. 00:09:07.000 |
I got into this field because I just was curious as to who I am. 00:09:12.300 |
What's going on in my head when I'm, when I'm thinking? 00:09:16.500 |
You know, I can ask what it means for me to know something independent of how I learned it from you or from someone else or from society. 00:09:22.600 |
What does it mean for me to know that I have a model of you in my head? 00:09:25.900 |
What does it mean to know I know what this microphone does and how it works physically, even though I can't see it right now? 00:09:33.400 |
How do the neurons do that at the fundamental level of neurons and synapses and so on? 00:09:39.900 |
And I'm happy to, just happy to understand those if I could. 00:09:44.600 |
So in your, in your new book, you talk about our brain, our mind as being made up of many brains. 00:09:54.900 |
So the book is called A Thousand Brains, a thousand brains theory of intelligence. 00:10:01.900 |
The book has three sections and it has sort of maybe three big ideas. 00:10:08.000 |
So the first section is all about what we've learned about the neocortex and that's the thousand brains theory. 00:10:13.200 |
Just to complete the picture: the second section is all about AI and the third section is about the future of humanity. 00:10:17.700 |
So the thousand brains theory, the big idea there, if I had to summarize it into one big idea, is that we think of the brain, the neocortex, as learning this model of the world. 00:10:33.200 |
But what we learned is actually there's tens of thousands of independent modeling systems going on. 00:10:39.100 |
And so each of what we call columns in the cortex, and there's about 150,000 of them, is a complete modeling system. 00:10:45.000 |
So it's a collective intelligence in your head in some sense. 00:10:48.900 |
So the thousand brains theory is about where do I have knowledge about, you know, this coffee cup or where's the model of this cell phone? 00:10:56.900 |
It's in thousands of separate models that are complementary and they communicate with each other through voting. 00:11:01.100 |
So this idea that we have, we feel like we're one person, you know, that's our experience. 00:11:07.200 |
But in reality, there are lots of these, almost like little brains, but they're sophisticated modeling systems, about 150,000 of them in each human brain. 00:11:16.700 |
And that's a totally different way of thinking about how the neocortex is structured than we or anyone else thought of even just five years ago. 00:11:24.300 |
So you mentioned you started this journey just looking in the mirror, trying to understand who you are. 00:11:31.700 |
So if you have many brains, who are you then? 00:11:35.900 |
So it's interesting, we have a singular perception, right? 00:11:40.800 |
But it's, it's composed of all these things, like there's sounds and there's, and there's this vision and there's touch and all kinds of inputs. 00:11:48.000 |
Yet we have this singular perception, and what the thousand brains theory says is we have these models that are visual models. 00:11:52.800 |
We have a lot of models, auditory models, tactile models and so on, but they vote. 00:12:00.100 |
You can think about these columns as being like little grains of rice, 150,000 stacked next to each other. 00:12:06.500 |
And each one is its own little modeling system, but they have these long range connections that go between them. 00:12:12.000 |
And we call those voting connections or voting neurons. 00:12:15.800 |
And so the different columns try to reach a consensus, like what am I looking at? 00:12:22.100 |
Okay, you know, each one has some ambiguity, but they come to a consensus. 00:12:27.000 |
We are only consciously able to perceive the voting. 00:12:31.700 |
We're not able to perceive anything that goes under the hood. 00:12:38.500 |
- The results of the voting. - Yeah, the voting. 00:12:42.500 |
We were just talking about eye movements a moment ago. 00:12:44.400 |
So as I'm looking at something, my eyes are moving about three times a second. 00:12:47.600 |
And with each movement, a completely new input is coming into the brain. 00:12:57.200 |
But yet, if I looked at the neurons in your brain, they're going on and off, on and off, on and off, on and off. 00:13:03.300 |
The voting neurons are saying, you know, we all agree, even though I'm looking at different parts of this, this is a water bottle right now. 00:13:09.600 |
And it's in some position and pose relative to me. 00:13:12.800 |
So I have this perception of the water bottle about two feet away from me at a certain pose to me. 00:13:20.100 |
I can't be aware of the fact that the inputs from the eyes are moving and changing and all this other stuff. 00:13:24.800 |
So these long-range connections are the part we can be conscious of. 00:13:29.800 |
The individual activity in each column doesn't go anywhere else. 00:13:36.100 |
There's no way to extract it and talk about it or extract it and even remember it to say, oh, yes, I can recall that. 00:13:43.700 |
But these long-range connections are the things that are accessible to language and to our, you know, it's like the hippocampus, our memories, you know, our short-term memory systems and so on. 00:13:55.000 |
So we're not aware of 95% or maybe it's even 98% of what's going on in your brain. 00:14:00.900 |
We're only aware of this sort of stable, somewhat stable voting outcome of all these things that are going on underneath the hood. 00:14:09.500 |
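As a rough illustration of the voting idea described above, here is a minimal Python sketch, not Numenta's actual code: each column holds an ambiguous belief over possible objects (the names and probabilities are invented), and the long-range vote combines those beliefs into a single consensus.
```python
# Hypothetical sketch of columns "voting" to reach a consensus percept.
# Each column has an ambiguous belief (a probability distribution over objects);
# long-range "voting" combines them into one stable answer.

from collections import defaultdict

def vote(column_beliefs):
    """Combine per-column beliefs by multiplying and renormalizing."""
    combined = defaultdict(lambda: 1.0)
    for belief in column_beliefs:
        for obj, p in belief.items():
            combined[obj] *= p
    total = sum(combined.values())
    return {obj: p / total for obj, p in combined.items()}

# Three columns, each uncertain on its own (values are invented):
columns = [
    {"water bottle": 0.5, "coffee cup": 0.4, "vase": 0.1},
    {"water bottle": 0.6, "coffee cup": 0.3, "vase": 0.1},
    {"water bottle": 0.7, "coffee cup": 0.2, "vase": 0.1},
]

print(vote(columns))  # consensus strongly favors "water bottle"
```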
So what would you say is the basic element in the thousand brains theory of intelligence? 00:14:17.600 |
Like what's the atom of intelligence when you think about it? 00:14:21.300 |
Is it the individual brains and then what is a brain? 00:14:24.600 |
Well, let's, can we just talk about what intelligence is first and then we can talk about what the elements are. 00:14:30.100 |
So in my book, intelligence is the ability to learn a model of the world, to build internal to your head a model that represents the structure of everything. 00:14:41.600 |
You know, to know that this is a table and that's a coffee cup and this is a gooseneck lamp and all this, to know these things, I have to have a model in my head. 00:14:49.400 |
I just don't look at them and go, what is that? 00:14:51.500 |
I already have internal representations of these things in my head and I had to learn them. 00:14:57.500 |
You were, you know, we have some lights in the room here. 00:15:00.400 |
I, you know, that's not part of my evolutionary heritage, right? 00:15:03.900 |
So we have this incredible model and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave. 00:15:11.900 |
I've never picked up this water bottle before, but I know that if I took my hand on that blue thing and I turned it, it'll probably make a funny little sound as the little plastic things detach, and then it'll rotate. 00:15:24.500 |
So the essence of intelligence is our ability to learn a model and the more sophisticated our model is, the smarter we are. 00:15:32.000 |
Not that there is a single intelligence because you can know about, you know, a lot about things that I don't know and I know about things you don't know and we can both be very smart, but we both learned a model of the world through interacting with it. 00:15:45.300 |
Then we can ask ourselves, what are the mechanisms in the brain that allow us to do that? 00:15:51.100 |
Not just the neural mechanisms, but what are the general process for how we learn a model? 00:15:56.300 |
It's like, what are the actual things that, how do you learn this stuff? 00:16:00.600 |
It turns out you have to learn it through movement. 00:16:02.200 |
You can't learn it just by sitting there; moving is how we learn. 00:16:05.900 |
We learn, so you build up this model by observing things and touching them and moving them and walking around the world and so on. 00:16:15.900 |
You obviously can learn things just by reading a book, something like that. 00:16:18.600 |
But think about if I were to say, oh, here's a new house. 00:16:21.100 |
I want you to learn, you know, what do you do? 00:16:25.600 |
You have to open the doors, look around, see what's on the left, what's on the right. 00:16:29.300 |
As you do this, you're building a model in your head. 00:16:33.500 |
You can't just sit there and say, I'm going to grok the house. 00:16:36.100 |
No, you know, you don't even want to just sit there and read some description of it. 00:16:43.300 |
If I'm going to learn a new app, I touch it and I move things around. 00:16:48.700 |
So that's the basic way we learn in the world. 00:16:50.700 |
And by the way, when you say model, you mean something that can be used for prediction in the future. 00:16:56.200 |
It's used for prediction and for behavior and planning. 00:17:08.200 |
So you can imagine an architect making a model of a house. 00:17:17.100 |
Well, we do that because you can imagine what it would look like from different angles. 00:17:21.800 |
And you can also say, well, how far to get from the garage to the swimming pool or something like that. 00:17:28.900 |
And so what would you view from this location? 00:17:30.500 |
So we build these physical models to let you imagine the future and imagine behaviors. 00:17:35.200 |
Now we can take that same model and put it in a computer. 00:17:38.500 |
So now today, they'll build models of houses in a computer. 00:17:42.200 |
And they do that using a set of, we'll come back to this term in a moment, reference frames. 00:17:48.400 |
But basically you assign a reference frame for the house and you assign different parts of the house to different locations. 00:17:53.100 |
And then the computer can generate an image and say, okay, this is what it looks like from this direction. 00:17:57.400 |
The brain is doing something remarkably similar to this. 00:18:04.100 |
It's similar to a model on a computer, which has the same benefits of building a physical model. 00:18:08.100 |
It allows me to say, what would this thing look like if it was in this orientation? 00:18:12.100 |
What would likely happen if I push this button? 00:18:22.900 |
I can imagine in my head, well, I could talk about it. 00:18:28.600 |
I could, you know, maybe tell my neighbor, you know, and I can imagine the outcomes of all these things before I do any of them. 00:18:37.400 |
It lets us plan the future and imagine the consequences of our actions. 00:18:51.700 |
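To make the model-for-planning point concrete, here is a toy sketch of the kind of computer model of a house being described, with invented coordinates: features are placed in one shared reference frame, and questions like the garage-to-pool distance can be answered without ever acting in the real world.
```python
# Toy "house model": features placed in one reference frame (x, y, z in meters).
# Coordinates are invented purely for illustration.
import math

house = {
    "front_door":    (0.0,  0.0, 0.0),
    "garage":        (8.0,  2.0, 0.0),
    "swimming_pool": (20.0, 15.0, 0.0),
    "roof_peak":     (5.0,  5.0, 6.0),
}

def distance(model, a, b):
    """Straight-line distance between two features in the model's reference frame."""
    return math.dist(model[a], model[b])

# Imagine a consequence without acting: how far from the garage to the pool?
print(round(distance(house, "garage", "swimming_pool"), 1), "meters")
```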
So prediction is fundamental to intelligence. 00:18:58.700 |
And let me go back and be very precise about this. 00:19:01.500 |
Prediction, you can think of prediction two ways. 00:19:03.200 |
One is like, hey, what would happen if I did this? 00:19:08.700 |
But even prediction is like, oh, what's this water bottle going to feel like when I pick it up? 00:19:13.200 |
You know, and that doesn't seem very intelligent. 00:19:15.800 |
But the way to think, one way to think about prediction is it's a way for us to learn where our model is wrong. 00:19:22.800 |
So if I picked up this water bottle and it felt hot, I'd be very surprised. 00:19:27.300 |
Or if I picked it up, it was very light, I'd be surprised. 00:19:30.300 |
Or if I turned this top and it didn't turn, if I had to turn it the other way, I'd be surprised. 00:19:35.800 |
And so all those might have a prediction like, okay, I'm going to do it, I'll drink some water. 00:19:44.800 |
Then I say, oh my gosh, I misunderstood this. 00:19:50.100 |
I'll be looking at it going, well, how the hell did that happen? 00:19:56.300 |
Just by looking at it and playing around with it, I'd update my model and say, this is a new type of water bottle. 00:19:59.900 |
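The predict-compare-update loop being described can be sketched in a few lines. This is an invented toy, not the brain's algorithm: the model predicts what a sensation should be, compares it with what actually arrives, and updates itself only where it was surprised. The attribute names are made up for illustration.
```python
# Toy predict-compare-update loop: learning is driven by surprise.
# The attribute names ("temperature", "weight") are invented for illustration.

model = {"temperature": "cool", "weight": "heavy", "cap_turn": "counterclockwise"}

def sense_and_learn(model, observation):
    for attribute, actual in observation.items():
        predicted = model.get(attribute)
        if predicted != actual:
            print(f"surprise: expected {attribute}={predicted}, got {actual}")
            model[attribute] = actual   # update the model where it was wrong
    return model

# Picking up the bottle and finding it hot and light triggers an update:
sense_and_learn(model, {"temperature": "hot", "weight": "light", "cap_turn": "counterclockwise"})
```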
But you're talking about sort of complicated things like a water bottle, but this also applies for just basic vision, just like seeing things. 00:20:09.400 |
It's almost like a precondition of just perceiving the world is predicting. 00:20:14.700 |
It's just everything that you see is first passed through your prediction. 00:20:23.100 |
In fact, this is the insight I had back in the late 80s. 00:20:29.000 |
And I know that people have reached the same idea, is that every sensory input you get, not just vision, but touch and hearing, you have an expectation about it. 00:20:43.900 |
I can't predict what next word is going to come out of your mouth. 00:20:46.600 |
But as you start talking, I'll get better and better predictions. 00:20:49.000 |
And if you talk about some topics, I'd be very surprised. 00:20:51.600 |
So I have this sort of background prediction that's going on all the time for all of my senses. 00:20:58.200 |
Again, the way I think about that is this is how we learn. 00:21:09.900 |
If it is, I shouldn't see, you know, a little finger sticking out the side. 00:21:13.500 |
And if I saw a little finger sticking out, I was like, what the hell's going on? 00:21:17.900 |
I mean, that's fascinating that, let me linger on this for a second. 00:21:25.200 |
It really honestly feels that prediction is fundamental to everything, to the way our mind operates, to intelligence. 00:21:33.100 |
So like, it's just a different way to see intelligence, which is like everything starts at prediction. 00:21:41.600 |
You can't predict something unless you have a model of it. 00:21:47.600 |
So like the thing the model does is prediction. 00:21:50.300 |
But you can then extend it to things like, what would happen if I took this today? 00:22:00.600 |
Or you can extend prediction to like, oh, I want to get a promotion at work. 00:22:07.200 |
And you can say, if I did this, I predict what might happen. 00:22:09.400 |
If I spoke to someone, I predict what might happen. 00:22:16.600 |
You can ask basically any question, low-level or high-level. 00:22:24.900 |
And then we asked, how do neurons actually make predictions physically? 00:22:29.000 |
Like, what does the neuron do when it makes a prediction? 00:22:31.100 |
And, or the neural tissue does when it makes a prediction. 00:22:34.700 |
And then we asked, what are the mechanisms by how we build a model that allows you to make prediction? 00:22:38.800 |
So we started with prediction as sort of the fundamental research agenda, in some sense. 00:22:45.500 |
And we said, well, if we understand how the brain makes predictions, 00:22:48.800 |
we'll understand how it builds these models and how it learns. 00:22:52.900 |
So it was like, it was the key that got us in the door to say, that is our research agenda. 00:22:58.900 |
So in this whole process, where does intelligence originate, would you say? 00:23:05.300 |
So if we look at things that are much less intelligence than humans, and you start to build up a human through the process of evolution, 00:23:16.100 |
where's this magic thing that has a prediction model or a model that's able to predict that starts to look a lot more like intelligence? 00:23:25.800 |
Is there a place where, Richard Dawkins wrote an introduction to your book, an excellent introduction. 00:23:32.700 |
I mean, it puts a lot of things into context. 00:23:35.700 |
It's funny just looking at the parallels between your book and Darwin's Origin of Species. 00:23:49.000 |
Well, we have a theory about it, and it's just that, it's a theory. 00:23:53.700 |
As soon as living things started to move, they're not just floating in the sea. 00:23:58.700 |
They're not just a plant, you know, grounded someplace. 00:24:01.800 |
As soon as they started to move, there was an advantage to moving intelligently, to moving in certain ways. 00:24:08.300 |
And there's some very simple things you can do, you know, bacteria or single-cell organisms can move towards a gradient of food or something like that. 00:24:16.900 |
But an animal that might know where it is and know where it's been and how to get back to that place, or an animal that might say, "Oh, there was a source of food someplace. 00:24:33.500 |
So early on, there was a pressure to start understanding your environment, like, "Where am I and where have I been? 00:24:40.300 |
And what happened in those different places?" 00:24:42.400 |
So we still have this neural mechanism in our brains. 00:24:50.300 |
It's in the hippocampus and entorhinal cortex. 00:25:00.700 |
So these neurons in these parts of the brain know where I am in this room and where the door was and things like that. 00:25:06.900 |
- So a lot of other mammals have this kind of capability. 00:25:10.900 |
And almost any animal that knows where it is and can get around must have some mapping system, must have some way of saying, "I've learned a map of my environment. 00:25:27.300 |
They just know where they are when they're flying. 00:25:30.000 |
They know particular flowers they come back to. 00:25:34.400 |
And it turns out it's very tricky to get neurons to do this, to build a map of an environment. 00:25:40.500 |
And so we now know there's these famous studies that's still very active about place cells and grid cells and these other types of cells in the older parts of the brain and how they build these maps of the world. 00:25:53.000 |
It's obviously been under a lot of evolutionary pressure over a long period of time to get good at this. 00:26:00.100 |
What we think has happened, and there's a lot of evidence to suggest this, is that that mechanism we use to learn a map of a space was repackaged. 00:26:11.900 |
The same type of neurons was repackaged into a more compact form. 00:26:19.600 |
And it was in some sense genericized, if that's a word. 00:26:23.700 |
It was turned from a very specific thing about learning maps of environments into learning maps of anything, learning a model of anything, not just your space, but coffee cups and so on. 00:26:33.800 |
And it got sort of repackaged into a more compact version, a more universal version, and then replicated. 00:26:42.600 |
So the reason we're so flexible is we have a very generic version of this mapping algorithm and we have 150,000 copies of it. 00:26:51.200 |
Sounds a lot like the progress of deep learning. 00:26:55.200 |
So take neural networks that seem to work well for a specific task, compress them and multiply it by a lot. 00:27:09.200 |
It's like the story of Transformers in Natural Language Processing. 00:27:13.300 |
But in deep learning networks, they end up, you're replicating an element, but you still need the entire network to do anything. 00:27:20.700 |
Here, what's going on, each individual element is a complete learning system. 00:27:25.500 |
This is why I can take a human brain, cut it in half, and it still works. 00:27:33.100 |
It's fundamentally distributed, complete modeling systems. 00:27:43.100 |
But there's a lot of evidence supporting that story, this evolutionary story. 00:27:50.000 |
The thing which brought me to this idea is that the human brain got big very quickly. 00:27:55.300 |
So that led to the proposal a long time ago that, well, there's this common element; instead of creating new things, evolution just replicated something. 00:28:06.600 |
We can learn things that we had no history about. 00:28:11.000 |
And so that tells us that the learning algorithm is very generic. 00:28:15.100 |
It's very kind of universal because it doesn't assume any prior knowledge about what it's learning. 00:28:20.200 |
And so you combine those things together and you say, OK, well, how did that come about? 00:28:26.100 |
Where did that universal algorithm come from? 00:28:27.900 |
It had to come from something that wasn't universal. 00:28:29.800 |
It came from something that was more specific. 00:28:31.300 |
And so anyway, this led to our hypothesis that you would find grid cells and place cell equivalents in the neocortex. 00:28:37.600 |
And when we first published our first papers on this theory, we didn't know of evidence for that. 00:28:43.500 |
It turns out there was some, but we didn't know about it. 00:28:45.300 |
And since then, we became aware of evidence for grid cells in parts of the neocortex. 00:28:50.500 |
And then now there's been new evidence coming out. 00:28:53.000 |
There's some interesting papers that came out just January of this year. 00:28:56.500 |
So one of our predictions was, if this evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that work like them, throughout every column in the neocortex. 00:29:07.600 |
What does it mean that, why is it important that they're present? 00:29:12.100 |
Because it tells us, well, we're asking about the evolutionary origin of intelligence, right? 00:29:15.900 |
So our theory is that these columns in the cortex are working on the same principles, they're modeling systems. 00:29:24.000 |
And it's hard to imagine how neurons do this. 00:29:26.700 |
And so we said, hey, it's really hard to imagine how neurons could learn these models of things. 00:29:31.600 |
We can talk about the details of that if you want. 00:29:33.300 |
But there's this other part of the brain, we know that learns models of environments. 00:29:40.600 |
So could that mechanism to learn to model this room be used to learn to model the water bottle? 00:29:47.000 |
So we said it's much more likely the brain's using the same mechanism, in which case it would have these equivalent cell types. 00:29:53.700 |
So it's basically the whole theory is built on the idea that these columns have reference frames and we're learning these models and these grid cells create these reference frames. 00:30:03.600 |
So it's basically the major, in some sense, the major predictive part of this theory is that we will find these equivalent mechanisms in each column in the neocortex, which tells us that that's what they're doing. 00:30:16.500 |
They're learning these sensory motor models of the world. 00:30:19.200 |
So we were pretty confident that would happen. 00:30:23.700 |
So in the evolutionary process, nature does a lot of copy and paste to see what happens. 00:30:30.500 |
But it just found out like, hey, if I took this, these elements and made more of them, what happens? 00:30:37.100 |
And let's hook them up to the eyes and let's hook them up to ears. 00:30:43.100 |
Again, just to take a quick step back to our conversation of collective intelligence. 00:30:48.400 |
Do you sometimes see that as just another copy and paste aspect? 00:30:54.600 |
Copying and pasting these brains in humans and making a lot of them? 00:31:00.500 |
And then creating social structures that then almost operate as a single brain? 00:31:05.900 |
I wouldn't have said it, but you said it sounded pretty good. 00:31:08.500 |
So to you, the brain is like a, is its own thing. 00:31:15.300 |
I mean, our goal is to understand how the neocortex works. 00:31:18.800 |
We can argue how essential that is to understanding the human brain, because it's not the entire human brain. 00:31:24.300 |
You can argue how essential that is to understanding human intelligence. 00:31:28.700 |
You can argue how essential this is to communal intelligence. 00:31:34.500 |
Our goal was to understand the neocortex. 00:31:38.900 |
So what is the neocortex and where does it fit in the various aspects of what the brain does? 00:31:46.200 |
Well, as I mentioned in the beginning, it's about 70 to 75% of the volume of a human brain. 00:31:54.700 |
So it dominates our brain in terms of size, not in terms of number of neurons, but in terms of size. 00:32:06.300 |
We know that all high level vision, hearing and touch happens in the neocortex. 00:32:11.300 |
We know that all language occurs and is understood in the neocortex, whether that's spoken language, written language, sign language, the language of mathematics, the language of physics, music. 00:32:22.000 |
We know that all high level planning and thinking occurs in the neocortex. 00:32:25.300 |
If I were to say, you know, what part of your brain designed a computer and understands programming and creates music, it's all the neocortex. 00:32:36.400 |
But then there are other parts of our brain that are important too, right? 00:32:40.400 |
Our emotional states, our body regulating our body. 00:32:44.600 |
So the way I like to look at it is, you know, can you understand the neocortex without the rest of the brain? 00:32:52.000 |
And some people say you can't, and I think absolutely you can. 00:32:54.900 |
It's not that they're not interacting, but you can understand it. 00:32:58.400 |
Can you understand the neocortex without understanding the emotions of fear? 00:33:05.200 |
I make the analogy in the book that it's like a map of the world, and how that map is used depends on who's using it. 00:33:12.800 |
So how our map of our world in our neocortex, how we manifest as a human, depends on the rest of our brain. 00:33:26.500 |
You know, how important different things are in my life. 00:33:31.200 |
But the neocortex can be understood on its own. 00:33:36.500 |
And I say that as a neuroscientist, I know there's all these interactions. 00:33:41.500 |
And I want to say I don't know them, and we don't think about them. 00:33:44.600 |
But from a layperson's point of view, you can say it's a modeling system. 00:33:48.000 |
I don't generally think too much about the communal aspect of intelligence, which you brought up a number of times already. 00:33:58.500 |
From the origin of the universe, like, these pockets of complexity that form living organisms. 00:34:07.700 |
I wonder if we're just, if you look at humans, we feel like we're at the top. 00:34:12.900 |
I wonder if every living pocket of complexity probably thinks they're, pardon the French, the shit. 00:34:30.600 |
- In a sense, the whole point is in their sense of the world, their sense is that they're at the top of it. 00:34:41.900 |
- But you're bringing up the problems of complexity and complexity theory, and that's a huge, interesting problem in science. 00:34:50.300 |
And I think we've made surprisingly little progress in understanding complex systems in general. 00:34:59.200 |
And so, the Santa Fe Institute was founded to study this. 00:35:02.500 |
And even the scientists there will say, it's really hard. 00:35:04.900 |
We haven't really been able to figure out exactly, that science isn't really congealed yet. 00:35:10.300 |
We're still trying to figure out the basic elements of that science. 00:35:18.000 |
Whether it's DNA creating bodies or phenotypes or it's individuals creating societies or ants and markets and so on. 00:35:30.600 |
I think you need to ask, well, the brain itself is a complex system. 00:35:37.400 |
I think we've made a lot of progress understanding how the brain works. 00:35:40.400 |
So, but I haven't brought it out to like, oh, well, where are we on the complexity spectrum? 00:35:49.700 |
- I prefer for that answer to be, we're not special. 00:35:54.700 |
It seems like if we're honest, most likely we're not special. 00:35:58.800 |
So, if there is a spectrum, we're probably not in some kind of significant place in that spectrum. 00:36:03.300 |
- I think there's one thing we could say that we are special. 00:36:09.100 |
If we think about knowledge, what we know, human brains are clearly the only brains that have certain types of knowledge. 00:36:21.200 |
We're the only brains on this Earth to understand what the Earth is, how old it is, what the universe is as a whole. 00:36:27.800 |
We're the only organisms to understand DNA and the origins of, you know, of species. 00:36:32.800 |
No other species on this planet has that knowledge. 00:36:37.300 |
So, if we think about, I like to think about, you know, one of the endeavors of humanity is to understand the universe as much as we can. 00:36:45.900 |
I think our species is further along in that, undeniably. 00:36:51.800 |
Whether our theories are right or wrong, we can debate, but at least we have theories. 00:36:54.900 |
You know, we know what the sun is and how fusion works and what black holes are. 00:37:00.100 |
And, you know, we know the general theory of relativity, and no other animal has any of this knowledge. 00:37:07.400 |
Are we special in terms of the hierarchy of complexity in the universe? 00:37:19.800 |
You say that prediction happens in the neuron. 00:37:23.400 |
So, the neuron traditionally is seen as the basic element of the brain. 00:37:27.200 |
- So, I mentioned this earlier, that prediction was our research agenda. 00:37:32.200 |
- We said, okay, how does the brain make a prediction? 00:37:35.600 |
Like, I'm about to grab this water bottle and my brain is predicting what I'm gonna feel on all my parts of my fingers. 00:37:42.600 |
If I felt something really odd on any part here, I'd notice it. 00:37:45.200 |
So, my brain is predicting what it's gonna feel as I grab this thing. 00:37:49.000 |
So, what is that, how does that manifest itself in neural tissue, right? 00:37:52.300 |
We've got brains made of neurons, and there's chemicals and there's neurons and there's spikes and the connections, you know, where is the prediction going on? 00:38:01.400 |
And one argument could be that, well, when I'm predicting something, a neuron must be firing in advance. 00:38:08.800 |
It's like, okay, this neuron represents what you're gonna feel and it's firing, it's sending a spike. 00:38:18.900 |
But we were making so many of them, which we're totally unaware of. 00:38:21.200 |
Just the vast majority of them, you have no idea that you're doing this. 00:38:23.600 |
So we were trying to figure out, how could this be? 00:38:31.200 |
And I won't walk you through the whole story unless you insist upon it, but we came to the realization that most of your predictions are occurring inside individual neurons, especially in the most common type, the pyramidal cells. 00:38:48.700 |
Everyone knows, or most people know, that a neuron is a cell and it has this spike called an action potential and it sends information. 00:38:55.800 |
But we now know that there's these spikes internal to the neuron. 00:39:00.900 |
They travel along the branches of the neuron and they don't leave the neuron. 00:39:06.100 |
There's far more dendritic spikes than there are action potentials, far more. 00:39:12.800 |
And what we came to understand is that those dendritic spikes are actually a form of prediction. 00:39:19.200 |
They're telling the neuron, the neuron is saying, "I expect that I might become active shortly." 00:39:24.800 |
And that internal spike is a way of saying, "You might be generating external spikes soon. 00:39:32.100 |
I've predicted you're going to become active." 00:39:34.300 |
And we wrote a paper in 2016 which explained how this manifests itself in neural tissue. 00:39:44.600 |
But the vast majority, we think there's a lot of evidence supporting it. 00:39:48.400 |
So that's where we think that most of these predictions are internal. 00:39:52.100 |
That's why, because they're internal to the neuron, you can't perceive them. 00:39:55.500 |
- From understanding the prediction mechanism of a single neuron, do you think there's deep insights to be gained about the prediction capabilities of the mini brains and then the bigger brain and then the brain? 00:40:08.500 |
So having a prediction inside an individual neuron is not that useful by itself. 00:40:13.300 |
The way it manifests itself in neural tissue is that when a neuron emits these spikes, it's a very singular type of event. 00:40:23.500 |
If a neuron is predicting that it's going to be active, it emits its spike a little bit sooner, just a few milliseconds sooner than it would have otherwise. 00:40:31.800 |
It's like, I give the analogy in the book, it's like a sprinter on a starting blocks in a race. 00:40:37.100 |
And if someone says, get ready, set, you get up and you're ready to go. 00:40:41.200 |
And then when you're ready to start, you get a little bit earlier start. 00:40:43.900 |
So that ready-set is like the prediction, and the neuron is like, ready to go quicker. 00:40:48.000 |
And what happens is when you have a whole bunch of neurons together and they're all getting these inputs, the ones that are in the predictive state, the ones that are anticipating to become active. 00:40:57.900 |
If they do become active, they fire sooner and they disable everything else. 00:41:01.600 |
And it leads to different representations in the brain. 00:41:03.600 |
So you have to, it's not isolated just to the neuron. 00:41:08.100 |
The prediction occurs with the neuron, but the network behavior changes. 00:41:11.400 |
So what happens under different predictions, different inputs have different representations. 00:41:16.300 |
So how I, what I predict is going to be different under different contexts. 00:41:21.700 |
You know, what my input will be is different under different contexts. 00:41:24.600 |
So this is a key part of the theory of how this works. 00:41:27.700 |
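Here is a minimal sketch of the sprinter-on-the-blocks mechanism as described above, with invented neuron names: neurons that were put into a predictive state respond first when their input arrives and inhibit the rest, so the same input yields different representations under different contexts. It is a toy illustration, not Numenta's HTM implementation.
```python
# Toy winner-take-all: predicted (depolarized) neurons fire slightly sooner
# and inhibit their neighbors, so context changes the representation.

def respond(neurons_receiving_input, predicted):
    """Return the set of neurons that end up firing for this input."""
    early = neurons_receiving_input & predicted      # head start: they fire first
    if early:
        return early                                 # and inhibit the rest
    return neurons_receiving_input                   # no prediction: everyone fires

input_neurons = {"n1", "n2", "n3", "n4"}

# Same input, different context (different predictions), different representation:
print(respond(input_neurons, predicted={"n2"}))           # {'n2'}
print(respond(input_neurons, predicted={"n3", "n4"}))     # {'n3', 'n4'}
print(respond(input_neurons, predicted=set()))            # all four (unexpected input)
```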
So the theory of the thousand brains, if you were to count the number of brains, how would you do it? 00:41:34.100 |
The thousand brain theory says that basically every cortical column in your neocortex is a complete modeling system. 00:41:42.100 |
And that when I ask, where do I have a model of something like a coffee cup? 00:41:53.800 |
And there's a voting mechanism, which is the thing you're conscious of, which leads to your singular perception. 00:42:04.600 |
The details, how we got to that theory are complicated. 00:42:12.200 |
And one of those details, we had to ask, how does a model make predictions? 00:42:16.100 |
And we talked about just these predictive neurons. 00:42:23.000 |
It's like, how are we going to figure out how these neurons do this? 00:42:26.600 |
So we just looked at prediction as like, well, we know that's ubiquitous. 00:42:30.700 |
We know that every part of the cortex is making predictions. 00:42:33.000 |
Therefore, whatever the predictive system is, it's going to be everywhere. 00:42:36.900 |
We know there's a gazillion predictions happening at once. 00:42:39.500 |
So this is where we can start teasing apart, you know, ask questions about, you know, how could neurons be making these predictions? 00:42:45.900 |
And that sort of built up to now what we have, this thousand brains theory, which is complex. 00:43:05.100 |
So again, a reference frame, I mentioned earlier about the, you know, a model of a house. 00:43:10.900 |
And I said, if you're going to build a model of a house in a computer, it has a reference frame. 00:43:14.500 |
And you can think of reference frame like Cartesian coordinates, like X, Y, and Z axes. 00:43:18.700 |
So I could say, oh, I'm going to design a house. 00:43:21.500 |
I can say, well, the front door is at this location, X, Y, Z, and the roof is at this location, X, Y, Z, and so on. 00:43:28.600 |
So it turns out for you to make a prediction, and I walk you through the thought experiment in the book where I was predicting what my finger was going to feel when I touch the coffee cup. 00:43:37.500 |
It was a ceramic coffee cup, but this one will do. 00:43:39.800 |
And what I realized is that to make a prediction, what my finger is going to feel like, it's going to feel different than this, which would feel different if I touch the hole or this thing on the bottom. 00:43:51.200 |
To make that prediction, the cortex needs to know where the finger is, the tip of the finger, relative to the coffee cup. 00:44:00.100 |
And to do that, I have to have a reference frame for the coffee cup. 00:44:03.200 |
It has to have a way of representing the location of my finger to the coffee cup. 00:44:07.100 |
And then we realized, of course, every part of your skin has to have a reference frame relative to the things it touches. 00:44:13.000 |
So the idea is that a reference frame is necessary to make a prediction when you're touching something, or when you're seeing something and you're moving your eyes, you're moving your fingers. 00:44:22.000 |
It's just a requirement to know what to predict. 00:44:24.600 |
If I have a structure and I'm going to make a prediction, 00:44:27.400 |
I have to know where on it I'm looking or touching. 00:44:30.200 |
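A toy way to picture why prediction needs a reference frame: store an object's features at locations in the object's own frame, track where the fingertip is in that same frame, and the prediction is simply whatever the model says is at that location. The locations and feature names below are invented.
```python
# Toy object model: features stored at locations in the cup's own reference frame.
# Predicting what the finger will feel = looking up the feature at the finger's
# location *in that frame*. All locations and features are invented.

cup_model = {
    (0, 10): "smooth rim",
    (5, 5):  "curved ceramic side",
    (8, 5):  "handle",
    (0, 0):  "flat bottom",
}

def predict_touch(object_model, finger_location_in_object_frame):
    return object_model.get(finger_location_in_object_frame, "unknown - update the model")

print(predict_touch(cup_model, (8, 5)))   # expect "handle"
print(predict_touch(cup_model, (3, 7)))   # novel location -> surprise, learn something new
```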
So then we said, well, how do neurons make reference frames? 00:44:35.200 |
You know, X, Y, Z coordinates don't exist in the brain. 00:44:40.000 |
So that's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew that in that part of the brain, there's a reference frame for a room or a reference frame for an environment. 00:44:50.500 |
Remember I talked earlier about how you could make a map of this room. 00:44:53.500 |
So we said, oh, they are implementing reference frames there. 00:44:58.800 |
So we knew that reference frames needed to exist in every cortical column. 00:45:08.000 |
So you take the old mammalian ability to know where you are in a particular space and you start applying that to higher and higher levels. 00:45:18.400 |
Yeah, you first you apply it to like where your finger is. 00:45:22.300 |
The old part of the brain says, where's my body in this room? 00:45:25.700 |
The new part of the brain says, where's my finger relative to this object? 00:45:30.800 |
Where is a section of my retina relative to this object? 00:45:37.400 |
Where is that relative to this patch of my retina? 00:45:39.600 |
And then we take the same thing and apply it to concepts, mathematics, physics, you know, humanity, whatever you want to think. 00:45:48.200 |
Eventually you're pondering your own mortality. 00:45:51.200 |
But the point is, when we think about the world, when we have knowledge about the world, how is that knowledge organized, Lex? 00:46:01.900 |
So the way I learned the structure of this water bottle, where the features are relative to each other, when I think about history or democracy or mathematics, the same basic underlying structure is happening. 00:46:13.800 |
There are reference frames that you're assigning the knowledge to. 00:46:17.000 |
So in the book, I go through examples like mathematics and language and politics. 00:46:21.100 |
But the evidence is very clear in the neuroscience. 00:46:24.500 |
The same mechanism that we use to model this coffee cup, we're going to use to model high level thoughts. 00:46:30.300 |
The demise of humanity, whatever you want to think about. 00:46:33.800 |
It's interesting to think about how different the representations of those higher level concepts are, in terms of reference frames, versus spatial ones. 00:46:47.500 |
But the interesting thing is, it's a different application, but it's the exact same mechanism. 00:46:53.900 |
But isn't there some aspect of higher level concepts where they seem to be hierarchical, like they just seem to integrate a lot of information? 00:47:08.400 |
I'm not particular to this brand, but this is a Fiji water bottle and it has a logo on it. 00:47:15.400 |
I use this example in my book, our, our company's coffee cup has a logo on it, but this object is hierarchical. 00:47:22.500 |
It is, it's got like a cylinder and a cap, but then it has this logo on it and the logo has a word. 00:47:27.700 |
The word has letters, the letters are different features. 00:47:30.000 |
And so I don't have to remember, I don't have to think about this. 00:47:33.600 |
So I said, oh, there's a Fiji logo on this water bottle. 00:47:36.000 |
I don't have to go through and say, oh, what is the Fiji logo? 00:47:38.600 |
It's the F and the I and the J and the I, and there's a hibiscus flower, and, oh, it has the pistil, you know, the stamen on it. 00:47:45.400 |
I just incorporate all of that in some sort of hierarchical representation. 00:47:48.500 |
I say, you know, put this logo on this water bottle. 00:47:52.800 |
And then the logo has a word and the word has letters, all hierarchical. 00:47:58.700 |
It's amazing that the brain instantly just does all that. 00:48:01.500 |
The idea that there's, there's water, it's liquid and the idea that you can drink it when you're thirsty. 00:48:10.000 |
And then all of that information is instantly built into the whole thing once you perceive it. 00:48:16.700 |
So I wanted to get back to your point about hierarchical representation. 00:48:22.200 |
And I can take this microphone in front of me. 00:48:23.900 |
I know inside there's going to be some electronics. 00:48:25.800 |
I know there's going to be some wires and I know there's going to be a little diaphragm that moves back and forth. 00:48:34.700 |
You just go into a room, it's composed of other components. 00:48:37.200 |
The kitchen has a refrigerator, you know, the refrigerator has a door, the door has a hinge, the hinge has screws and pins. 00:48:42.600 |
I mean, so anyway, the, the, the modeling system that exists in every cortical column learns the hierarchical structure of objects. 00:48:51.600 |
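One way to picture that hierarchical structure: each object is its own little model, and a parent model just records where child models sit in its reference frame, so the bottle refers to the logo and the logo refers to the letters without anything being re-learned. The structure below is an invented illustration, not how the cortex literally stores it.
```python
# Toy compositional model: a parent object stores (child, location) pairs in its
# own reference frame; children are complete models in their own right.

fiji_logo = {"name": "Fiji logo",
             "parts": [("letter F", (0, 0)), ("letter I", (1, 0)),
                       ("letter J", (2, 0)), ("letter I", (3, 0)),
                       ("hibiscus flower", (1, 2))]}

water_bottle = {"name": "water bottle",
                "parts": [("cylinder body", (0, 0)), ("cap", (0, 20)),
                          (fiji_logo, (0, 10))]}

def describe(obj, indent=0, location=None):
    """Walk the hierarchy, printing each component and where it sits in its parent's frame."""
    where = f" at {location}" if location is not None else ""
    print(" " * indent + obj["name"] + where)
    for part, loc in obj["parts"]:
        if isinstance(part, dict):
            describe(part, indent + 2, loc)   # a nested model, reused as-is
        else:
            print(" " * (indent + 2) + f"{part} at {loc}")

describe(water_bottle)
```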
So it's a very sophisticated modeling system in this grain of rice. 00:48:54.400 |
It's hard to imagine, but this grain of rice can do really sophisticated things. 00:49:01.500 |
So that same mechanism that can model a water bottle or coffee cup can model conceptual objects as well. 00:49:09.500 |
That's the beauty of this discovery that this guy, Vernon Mountcastle, made many, many years ago, which is that there's a single cortical algorithm underlying everything we're doing. 00:49:21.200 |
So common sense concepts and higher level concepts are all represented in the same way. 00:49:32.700 |
Even the little teeny one that's, you know, my toaster and the big one that's running some cloud server someplace. 00:49:42.600 |
So the brain is all built on the same principle. 00:49:45.300 |
It's all about learning these models, structured models using movement and reference frames. 00:49:51.300 |
And it can be applied to something as simple as a water bottle and a coffee cup, 00:49:55.700 |
or to just thinking, like, what's the future of humanity? 00:49:57.700 |
And you know, why do you have a hedgehog on your, on your desk? 00:50:10.600 |
Does it give you any inclination or hope about how difficult it is to engineer common sense reasoning? 00:50:21.900 |
So looking at the brain, is this a marvel of engineering, or is it pretty dumb stuff stacked on top of each other over and over again? 00:50:34.900 |
I don't know if it can be both, because if it's an incredible engineering job, that means evolution did a lot of work. 00:50:49.700 |
So as I said earlier, figuring out how to model something like a space is really hard, and evolution had to go through a lot of tricks. 00:50:57.800 |
And these, these, these cells I was talking about, these grid cells and place cells, they're really complicated. 00:51:02.900 |
This neural tissue works on these really unexpected, weird mechanisms. 00:51:10.500 |
But now you could just make lots of copies of it. 00:51:15.400 |
So it's a very interesting idea that there are a lot of copies of a basic mini brain. 00:51:20.600 |
But the question is how difficult it is to find that mini brain that you can copy and paste effectively. 00:51:33.100 |
I'm sitting here, you know, I know the steps we have to go through. 00:51:36.700 |
There's still some engineering problems to solve, but we know enough. 00:51:40.600 |
And this is not like, oh, this is an interesting idea. 00:51:43.600 |
We have to go think about it for another few decades. 00:51:45.500 |
No, we actually understand it in pretty good detail. 00:51:51.200 |
So it's complicated, but it is an engineering problem. 00:51:57.400 |
We have basically laid out a roadmap for how to do this. 00:52:12.800 |
So in which domain do you think it's best to build them? 00:52:17.200 |
Are we talking about robotics, like entities that operate in the physical world that are able to interact with that world? 00:52:25.000 |
Are we talking about entities that operate in the digital world? 00:52:27.900 |
Are we talking about something more specific, like what's done in the machine learning community, where you look at natural language or computer vision? 00:52:41.200 |
It's the first two more than the third one, I would say. 00:52:44.100 |
Again, let's just use computers as an analogy. 00:52:49.300 |
The pioneers in computing, people like John von Neumann and Alan Turing, they created this thing, you know, we now call the universal Turing machine, which is a computer, right? 00:52:58.500 |
Did they know how it was going to be applied? 00:53:02.100 |
You know, could they envision any of the future? 00:53:04.400 |
They just said, this is like a really interesting computational idea about algorithms and how you can implement them in a machine. 00:53:12.700 |
And we're doing something similar to that today. 00:53:16.600 |
Like we are building this sort of universal learning principle that can be applied to many, many different things. 00:53:23.900 |
But the robotics piece of that, the interactive... 00:53:29.500 |
You can think of this cortical column as what we call a sensory motor learning system. 00:53:33.200 |
It has the idea that there's a sensor and then it's moving. 00:53:38.200 |
It could be like my finger and it's moving in the world. 00:53:41.000 |
It could be like my eye and it's physically moving. 00:53:44.700 |
So it could be, an example would be, I could have a system that lives in the internet that actually samples information on the internet and moves by following links. 00:53:58.200 |
So something that echoes the process of a finger moving along a coffee cup. 00:54:05.400 |
It's like, again, learning is inherently about discovering the structure of the world, and to discover the structure of the world, you have to move through the world. 00:54:12.800 |
Even if it's a virtual world, even if it's a conceptual world, you have to move through it. 00:54:17.400 |
It doesn't exist in one place, it has some structure to it. 00:54:20.900 |
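As a sketch of moving through a virtual world, here is a toy agent whose movements are link follows: it walks a small, made-up link graph and records the structure it discovers, the same learn-by-moving loop applied to a non-physical space. The pages and links are invented.
```python
# Toy "sensorimotor" agent in a virtual world: its movements are link follows,
# and its model is the structure it has discovered so far. Pages are invented.

links = {
    "home":     ["about", "products"],
    "about":    ["home", "team"],
    "products": ["home", "pricing"],
    "team":     [],
    "pricing":  [],
}

def explore(start, link_graph):
    """Move through the space by following links, building a model of its structure."""
    model, frontier = {}, [start]
    while frontier:
        page = frontier.pop()
        if page in model:
            continue                       # already part of the learned model
        model[page] = link_graph.get(page, [])
        frontier.extend(model[page])       # each "movement" reveals new structure
    return model

print(explore("home", links))
```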
So here's a couple of predictions, getting at what you're talking about. 00:54:32.300 |
And so in the future, to me, robotics and AI will merge. 00:54:38.200 |
They're not going to be separate fields because the algorithms for really controlling robots are going to be the same algorithms we have in our brain, these sensory motor algorithms. 00:54:48.400 |
Today we're not there, but I think that's going to happen. 00:54:57.300 |
You can have systems that have very different types of embodiments. 00:55:00.100 |
Some will have physical movements, some will not have physical movements. 00:55:05.900 |
Again, it's like computers, the Turing machine, it doesn't say how it's supposed to be implemented, doesn't tell you how big it is, doesn't tell you what you can apply it to, but it's a computational principle. 00:55:15.600 |
Cortical column equivalent is a computational principle about learning. 00:55:19.800 |
It's about how you learn and it can be applied to a gazillion things. 00:55:23.200 |
This is what I think: the impact of AI is going to be as large, if not larger, than computing has been in the last century, by far, because it's getting at a fundamental thing. 00:55:34.000 |
It's not a vision system or a learning system. 00:55:35.900 |
It's not a vision system or a hearing system. 00:55:39.800 |
It's a fundamental principle of how you learn the structure of the world, how you can gain knowledge and be intelligent. 00:55:44.300 |
And that's what the thousand brain says was going on. 00:55:47.200 |
And we have a particular implementation in our head, but it doesn't have to be like that at all. 00:55:51.000 |
Do you think there's going to be some kind of impact? 00:55:56.100 |
What do increasingly intelligent AI systems do with us humans in the following way? 00:56:03.600 |
Like how hard is the human in the loop problem? 00:56:07.300 |
How hard is it to interact the finger on the coffee cup equivalent of having a conversation with a human being? 00:56:15.300 |
So how hard is it to fit into our little human world? 00:56:20.300 |
I don't, I think it's a lot of engineering problems. 00:56:26.700 |
How hard is it for computers to fit into a human world? 00:56:29.800 |
That, I mean, that's essentially what I'm asking. 00:56:36.800 |
Are we as humans, like we tried to keep out systems? 00:56:43.300 |
I think I'm not sure that's the right question. 00:56:46.300 |
Let's, let's look at computers as an analogy. 00:56:49.200 |
Computers are a million times faster than us. 00:56:52.600 |
Most people have no idea what's going on when they use computers, right? 00:56:58.100 |
Well, they're, we don't think of them as their own entity. 00:57:09.000 |
Our survival as a 7 billion people or something like that is relying on computers now. 00:57:16.100 |
Don't you think that's a fundamental problem that we see them as something we can't, we don't give rights to? 00:57:24.500 |
So robots, computers, intelligent systems, it feels like for them to operate successfully, they would need to have a lot of the elements that we would start having to think about, like, should this entity have rights? 00:57:44.500 |
First of all, hardly anyone thinks that's true for computers today. 00:57:51.000 |
Or, you know, if I throw one in the trash can and hit it with a sledgehammer, I haven't committed a criminal act. 00:57:57.500 |
And now we think about intelligent machines, which is where you're going. 00:58:02.700 |
And, and all of a sudden, like, well, now we can't do that. 00:58:07.900 |
I think the basic problem we have here is that people think intelligent machines will be like us. 00:58:12.600 |
They're going to have the same emotions as we do, the same feelings as we do. 00:58:15.900 |
What if I can build an intelligent machine that absolutely couldn't care less about whether it was on or off or destroyed or not? 00:58:28.200 |
Is it possible to create a system that can model the world deeply and not care about whether it lives or dies? 00:58:44.600 |
Where does your, where does your desire to live come from? 00:58:51.400 |
I mean, we could argue, does it really matter if we live or not? 00:59:05.100 |
Evolution has made us want to care for and love one another, and to care for our children and our relatives and our family and so on. 00:59:14.200 |
But those drives come about not because we're smart, but because we're animals that evolved that way. 00:59:19.500 |
You know, the hummingbird in my backyard cares about its offspring. 00:59:22.700 |
You know, the, every living thing in some sense cares about, you know, surviving. 00:59:27.800 |
But when we talk about creating intelligent machines, we're not creating life. 00:59:36.000 |
We're just creating a machine that can learn really sophisticated stuff. 00:59:39.900 |
And that machine, it may even be able to talk to us, but it doesn't, it's not going to have a desire to live unless somehow we put it into that system. 00:59:50.100 |
The thing is, but you don't learn to want to live. 00:59:54.400 |
It's, well, so people like Ernest Becker argue. 01:00:02.100 |
The way we think about it is something we learned. 01:00:09.200 |
And some people decide they don't want to live and some people decide, you know, you can, but the desire to live is built in DNA. 01:00:15.700 |
But I think what I'm trying to get to is in order to accomplish goals, it's useful to have the urgency of mortality. 01:00:22.500 |
So what the Stoics talked about is meditating on your mortality. 01:00:26.300 |
It might be a very useful thing to have the urgency of death, and to realize that you are an entity that operates in the 01:00:38.800 |
world but will eventually no longer be a part of it, and to conceive of yourself as a conscious entity, in order to be a system that makes sense of the world. 01:00:58.100 |
But we're building the equivalent of the cortical columns, the neocortex. 01:01:07.200 |
And the question is, where do they end up? 01:01:11.300 |
Because we're not hard coding everything in. 01:01:14.600 |
Well, in terms of if you build the neocortex equivalent, it will not have any of these desires or emotional states. 01:01:21.300 |
Now, you can argue that that neocortex won't be useful unless I give it some agency, unless I give it some desire, unless I give it some motivation. 01:01:32.600 |
But on its own, it's not going to do those things. 01:01:36.100 |
It's just not going to sit there and say, "I understand the world, therefore I care to live." 01:01:41.800 |
It's just going to say, "I understand the world." 01:01:47.900 |
Do you think it's possible it will at least assign itself agency, and perceive itself in this world as a conscious entity, as a useful way to operate in the world? 01:02:07.000 |
I think intelligent machine can be conscious, but that does not, again, imply any of these desires and goals that you're worried about. 01:02:15.600 |
We can talk about what it means for a machine to be conscious. 01:02:20.300 |
And by the way, not worry about, but get excited about. 01:02:23.100 |
It's not necessarily that we should worry about it. 01:02:25.200 |
So I think there's a legitimate problem, or not problem, a question to ask. 01:02:29.300 |
If you build this modeling system, what's it going to model? 01:02:39.300 |
One thing, and it depends on the application. 01:02:43.600 |
It's not something that's inherent to the modeling system. 01:02:46.500 |
It's something we apply to the modeling system in a particular way. 01:02:49.200 |
So if I wanted to make a really smart car, it would have to know about driving in cars and what's important in driving in cars. 01:02:57.800 |
It's not going to figure that out on its own. 01:02:59.900 |
It's not going to sit there and say, "You know, I've understood the world and I've decided..." 01:03:11.900 |
And it's just, you know, is it one day going to wake up and say, "You know what? 01:03:16.400 |
I'm tired of driving and doing what you want. 01:03:19.000 |
I think I have better ideas about how to spend my time." 01:03:23.900 |
Well, part of me is playing a little bit of devil's advocate, but part of me is also trying to think through this because I've studied cars quite a bit. 01:03:32.600 |
And I've studied pedestrians and cyclists quite a bit. 01:03:35.000 |
And there's part of me that thinks that there needs to be more intelligence than we realize in order to drive successfully. 01:03:45.400 |
That game theory of human interaction seems to require some deep understanding of human nature. 01:03:55.600 |
That... Okay, when a pedestrian crosses the street, there's some sense... 01:04:02.200 |
They look at a car, usually, and then they look away. 01:04:06.900 |
There's some sense in which they say, "I believe that you're not going to murder me. 01:04:14.300 |
This is the little dance of pedestrian car interaction is saying, "I'm going to look away and I'm going to put my life in your hands because I think you're human. 01:04:25.800 |
And then the car, in order to successfully operate in Manhattan streets, has to say, "No, no, no, I am going to kill you." 01:04:35.200 |
There's a little bit of this weird inkling of mutual murder. 01:04:40.300 |
And we somehow successfully operate through that. 01:04:47.400 |
I think it might have a lot of the same elements that you're talking about, which is we're leveraging things we were born with and applying them in the context that... 01:04:57.400 |
All right, I would have said that that kind of interaction is learned. 01:05:02.400 |
Because, you know, people in different cultures have different interactions like that. 01:05:05.800 |
If you cross the street in different cities and different parts of the world, they have different ways of interacting. 01:05:10.900 |
And I would say an intelligence system can learn that, too. 01:05:15.300 |
And the intelligence system can understand humans. 01:05:19.800 |
It could understand that, you know, just like I can study an animal and learn something about that animal. 01:05:25.800 |
You know, I could study apes and learn something about their culture and so on. 01:05:30.700 |
I may not be completely, but I can understand something. 01:05:38.600 |
The question we're trying to get at, will the intelligence machine have its own personal agency that's beyond, you know, what we assign to it or its own personal, you know, goals or will it evolve and create these things? 01:05:50.200 |
My confidence comes from understanding the mechanisms I'm talking about creating. 01:06:03.000 |
I know what the kind of things it could do and the kind of things it can't do. 01:06:06.000 |
Just like when I build a computer, I know it's not going to on its own decide to put another register inside of it. 01:06:13.600 |
No matter what your software does, it can't add a register to the computer. 01:06:16.200 |
So, in this way, when we build AI systems, we have to make choices about how we embed them. 01:06:27.700 |
I said, you know, intelligence system is not just the neocortex equivalent. 01:06:31.800 |
You have to have that, but it has to have some kind of embodiment, physical or virtual. 01:06:38.300 |
It has to have some sort of ideas about dangers, about things it shouldn't do. 01:06:42.200 |
Like, you know, like we build in safeguards into systems. 01:06:49.600 |
My car follows my directions until the day it sees I'm about to hit something and it ignores my directions and puts the brakes on. 01:06:58.300 |
So, that's a very interesting problem, how to build those in. 01:07:01.900 |
I think where my opinion about the risks of AI differs from most people is that people assume that somehow those things will just appear automatically and it'll evolve. 01:07:12.800 |
And intelligence itself begets that stuff or requires it. 01:07:18.300 |
Intelligence of the neocortex equivalent doesn't require this. 01:07:20.700 |
The neocortex equivalent just says, I'm a learning system. 01:07:31.500 |
It's like a map: a map has no intent about things, but you can use it to solve problems. 01:07:37.700 |
So, the building, engineering the neocortex in itself is just creating an intelligent prediction system. 01:07:48.600 |
You can use it to then make predictions and then, but you can also put it inside a thing that's actually acting in this world. 01:07:58.600 |
It's, again, think of the map analogy, right? 01:08:04.700 |
It's just, you can learn, but it's just inert. 01:08:06.500 |
So, we have to embed it somehow in something to do something. 01:08:11.600 |
You had a conversation with Sam Harris recently where you had a bit of a disagreement, and you're sticking on this point. 01:08:21.300 |
You know, Elon Musk and Stuart Russell kind of worry about existential threats of AI. 01:08:31.400 |
Why, if we engineer an increasingly intelligent neocortex type of system in the computer, why shouldn't that be a thing that we worry about? 01:08:41.000 |
It was interesting, you used the word intuition and Sam Harris used the word intuition too. 01:08:45.000 |
And, and when he used that intuition, that word, I immediately stopped and said, oh, that's the cause of the problem. 01:08:53.800 |
I'm speaking about something I understand, something I'm going to build, something I am building, something I understand completely, or at least well enough to know what it's all about. They're guessing. 01:09:03.700 |
And I think most people who are worried, they have trouble separating out, they don't have, they don't have the knowledge or the understanding about like what is intelligence? 01:09:15.100 |
How's it separate from these other functions in the brain? 01:09:17.300 |
And so, they imagine it's going to be human-like or animal-like. 01:09:20.700 |
It's going to have, it's going to have the same sort of drives and emotions we have, but there's no reason for that. 01:09:26.400 |
That's just because there's, there's an unknown. 01:09:29.100 |
If you, if the unknown is like, oh my God, you know, I don't know what this is going to do. 01:09:36.600 |
It'll be really smart, but it won't be like us at all. 01:09:38.800 |
And, um, but I'm coming at this not because I'm just guessing; I'm not using intuition. 01:09:45.800 |
I'm basing it on like, okay, I understand this thing works. 01:09:52.400 |
So I also disagree with the intuitions that Sam has, but I also disagree a bit with what you just said. Here's, you know, what might be a good analogy. 01:10:03.400 |
So if you look at the Twitter algorithm in the early days, just recommender systems, you can understand how recommender systems work. 01:10:13.000 |
What you can't understand in the early days is when you apply that recommender system at scale to thousands of millions of people, how that can change societies. 01:10:21.800 |
So the question is, yes, you're saying this is how an engineered neocortex works, but what about when you have a very useful, uh, TikTok type of service that goes viral, when your engineered neocortex goes viral and then millions of people start using it? 01:10:43.900 |
One thing I want to say is that, um, AI is a dangerous technology. 01:10:46.900 |
I'm not denying that; all technology is dangerous. 01:10:55.600 |
The thing where the narrow component we're talking about now is the existential risk of AI. 01:11:01.700 |
So I want to make that distinction because I think AI can be applied poorly. 01:11:05.800 |
It can be applied in ways that, you know, people aren't going to understand the consequences of. 01:11:11.300 |
Um, these are all potentially very bad things, but they're not the AI system creating this existential risk on its own. 01:11:20.500 |
And that's the only place I disagree with other people. 01:11:23.600 |
So, so I, I think the existential risk thing is, um, humans are really damn good at surviving. 01:11:29.300 |
So to kill off the human race, it'd be very, very difficult. 01:11:36.500 |
I don't think AI systems are ever going to try to do that; I don't think AI systems are ever going to, like, decide to wipe us out on their own. 01:11:44.200 |
Um, I don't think that's going to happen, at least not in the way I'm talking about it. 01:11:49.900 |
So the Twitter recommendation algorithm is an interesting example. 01:11:54.700 |
Let's, let's use computers as an analogy again. 01:12:01.800 |
I can't predict what people are going to use it for. 01:12:05.200 |
They can, they can even create computer viruses. 01:12:10.000 |
So there's some unknown about its utility and about where it's going to go. 01:12:13.600 |
But on the other hand, I pointed out that once I build a computer, it's not going to fundamentally change how it computes. 01:12:20.100 |
It's like, I use the example of a register, which is a part, internal part of a computer. 01:12:23.700 |
Um, you know, I say it can't just grow one on its own, because computers don't evolve. 01:12:29.700 |
The physical manifestation of the computer itself is not going to change; there are certain things it can't do. 01:12:35.700 |
So we can break into things like things that are possible to happen. 01:12:38.700 |
We can't predict and things that are just impossible to happen. 01:12:41.600 |
Unless we go out of our way to make them happen, they're not going to happen unless somebody makes them happen. 01:12:46.500 |
So there's, there's a bunch of things to say. 01:12:48.100 |
One is the physical aspect, which you're absolutely right. 01:12:51.500 |
We have to build a thing for it to operate in the physical world and you can just stop building them. 01:12:57.600 |
Uh, you know, the moment they're not doing the thing you want them to do, you stop building them, or you change the design. 01:13:05.500 |
The question is, I mean, it's possible in the physical world, probably longer term, that you automate the building. 01:13:11.900 |
It makes, it makes a lot of sense to automate the building. 01:13:14.900 |
There's a lot of factories that are doing more and more and more automation to go from raw resources to the final product. 01:13:21.100 |
It's possible to imagine, and it's obviously much more efficient, to create a factory that's creating robots that do something, uh, you know, something extremely useful for society. 01:13:34.600 |
It could be, uh, it could be your toaster, but a toaster that has much deeper knowledge of your culinary preferences. 01:13:44.100 |
Well, I think now you've hit on the right thing. 01:13:45.800 |
The real thing we need to be worried about Lex is self-replication. 01:13:53.600 |
Self-replication because self-replication is dangerous. 01:13:57.300 |
You're probably more likely to be killed by a virus, you know, a human-engineered virus. 01:14:02.200 |
The technology is getting to where almost anybody, well, not anybody, but a lot of people, could create a human-engineered virus that could wipe out humanity. 01:14:20.100 |
So when I think about, you know, AI, I'm not thinking about robots, building robots. 01:14:28.600 |
Well, that's because you're interested in creating intelligence. 01:14:31.000 |
It seems like self-replication is a good way to make a lot of money. 01:14:37.500 |
But so is, you know, maybe editing viruses is a good way to, I don't know. 01:14:42.000 |
The point is, as a society, when we want to look at existential risks, the existential risks we face that we can control almost all revolve around self-replication. 01:14:54.800 |
The question is, I don't see a good way to make a lot of money by engineering viruses and deploying them on the world. 01:15:01.700 |
There could be, there could be applications that are useful. 01:15:07.900 |
You only need some, you know, terrorists who wants to do it because it doesn't take a lot of money to make viruses. 01:15:11.600 |
Um, let's just separate out what's risky and what's not risky. 01:15:16.000 |
I'm arguing that the intelligence side of this equation is not risky. 01:15:20.900 |
It's the self-replication side of the equation that's risky. 01:15:30.300 |
That those are often like talked about in the same conversation. 01:15:37.400 |
Like creating ultra intelligent, super intelligent systems is not necessarily coupled with a self-replicating, arbitrarily self-replicating systems. 01:15:47.600 |
And you don't get evolution unless you're self-replicating. 01:15:50.500 |
And so I think that's just this argument that people have trouble separating those two out. 01:15:55.700 |
They just think, oh, yeah, intelligence looks like us. 01:15:58.600 |
And look how, look at the damage we've done to this planet. 01:16:01.000 |
Like how we've, you know, destroyed all these other species. 01:16:04.800 |
We're 8 billion of us or 7 billion of us now. 01:16:08.700 |
I think the idea is that the more intelligent we're able to build systems, the more tempting it becomes from a capitalist perspective of creating products. 01:16:18.700 |
The more tempting it becomes to create self-reproducing systems. 01:16:24.400 |
So does that mean we don't build intelligent systems? 01:16:26.800 |
No, that means we regulate, we, we understand the risks. 01:16:32.300 |
Uh, you know, look, there's a lot of things we could do as a society, which have some sort of financial benefit to someone, which could do a lot of harm. 01:16:39.800 |
And we have to learn how to regulate those things. 01:16:42.500 |
We have to learn how to deal with those things. 01:16:46.200 |
Like, I would say having intelligent machines at our disposal will actually help us in the end more because it'll help us understand these risks better. 01:16:55.400 |
There might be ways of saying, oh, well, how do we solve climate change problems? 01:16:58.900 |
You know, how do we do this or how do we do that? 01:17:00.700 |
Um, that just like computers are dangerous in the hands of the wrong people, but they've been so great for so many other things. 01:17:09.500 |
And I think we have to do the same with intelligent machines. 01:17:12.500 |
But we have to be constantly vigilant about, A, this idea of bad actors doing bad things with them, and B, um, don't ever, ever create a self-replicating system. 01:17:22.900 |
Um, uh, and, and by the way, I don't even know if you could create a self-replicating system that uses a factory. 01:17:30.000 |
You know, nature's way of self-replicating is so amazing. 01:17:36.500 |
It just, you know, takes in resources and it goes, right? 01:17:40.300 |
Um, if I said to you, you know what, our goal is to build a factory that builds new factories, and it has to have an end-to-end supply chain. 01:17:50.400 |
It has to mine the resources, get the energy. 01:17:58.000 |
You know, no one's doing that in the next, you know, hundred years. 01:18:00.700 |
I've been extremely impressed by the efforts of Elon Musk and Tesla to try to do exactly that. 01:18:10.000 |
Well, he actually, I think states the goal is to go from raw resource to the, uh, the final car in one factory. 01:18:18.800 |
Of course it's not currently possible, but they're taking huge leaps. 01:18:24.200 |
This has been a goal for many, uh, industries for a long, long time. 01:18:28.800 |
Well, what a lot of companies do instead is they have, like, a million suppliers. 01:18:36.200 |
They co-locate them and they tie the systems together. 01:18:42.300 |
I think that's, that also is not getting at the issue I was just talking about, um, which is self replication. 01:18:48.300 |
It's, um, I mean, self replication means there's no entity involved other than the entity that's replicating. 01:18:58.100 |
And so if there are humans in this, in the loop, that's not really self replicating, right? 01:19:02.200 |
Unless somehow we're duped into doing it. But also, I don't necessarily agree with you. 01:19:12.500 |
Cause you, you've kind of mentioned that AI will not say no to us. 01:19:18.800 |
So like, um, I think it's a useful feature to build in. 01:19:23.300 |
I'm just trying to like, uh, put myself in the mind of engineers to sometimes say no. 01:19:31.300 |
Well, I mean, I gave the example earlier, right? 01:19:35.800 |
My car turns the wheel and, and applies the accelerator and the brake, as I say, until it decides there's something dangerous. 01:19:45.700 |
Now that was something it didn't decide to do. 01:19:56.100 |
The question again, isn't like, well, if we create an intelligent system, will it ever ignore our commands? 01:20:02.700 |
And sometimes is it going to do it because it came up, came up with its own goals that serve its purposes and it doesn't care about our purposes? 01:20:12.900 |
So let me ask you about these, uh, super intelligent cortical systems that we engineer and us humans. 01:20:20.100 |
Do you think, uh, with these entities operating out there in the world, what does the most promising future look like? 01:20:32.800 |
Like, how do we keep us humans around when you have increasingly intelligent beings? 01:20:37.800 |
Is it, uh, one of the dreams is to upload our minds in the digital space. 01:20:42.100 |
So can we just give our minds to these systems so they can operate on them? 01:20:48.900 |
Is there some kind of more interesting merger or is there more, more? 01:20:51.800 |
So in the third part of my book, I talked about all these scenarios and let me just walk through them. 01:21:03.300 |
Like, like we have no idea how to do this even remotely right now. 01:21:06.700 |
Um, so it would be a very long way away, but I make the argument you wouldn't like the result. 01:21:12.700 |
Um, and you wouldn't be pleased with the result. 01:21:16.200 |
It's really not what you think it's going to be. 01:21:18.300 |
Um, imagine I could upload your brain into a, into a computer right now. 01:21:21.400 |
And now the computer's sitting there going, Hey, I'm over here. 01:21:31.100 |
Are you going to feel satisfied then? People imagine, look, I'm on my deathbed and I'm about to, you know, expire, and I push the button and I'm uploaded. But think about it a little differently. 01:21:41.400 |
And so I don't think it's going to be a thing, because by the time we're able to do this, if ever, you have to replicate the entire body, not just the brain. 01:21:50.700 |
It's really hard; I walk through the issues in the book. 01:21:57.800 |
Is there a shortcut, where you only save the certain part that makes us truly us? 01:22:04.500 |
No, but I think that machine would feel like it's you too. 01:22:08.100 |
It would be like two people. Just like I have a child, right? 01:22:15.300 |
And, um, just because they're somewhat like me, I don't feel like I'm them, and they don't feel like they're me. 01:22:24.800 |
So we can come back to what makes consciousness, if you want. 01:22:28.300 |
We can talk about that, but we don't have a remote consciousness. 01:22:31.300 |
I'm not sitting there going, oh, I'm conscious of that. 01:22:40.700 |
Ain't gonna happen in a hundred years, maybe a thousand, but I don't think people are going to want to do it. 01:22:45.300 |
Then there's merging your mind with, uh, you know, the Neuralink thing, right? 01:22:53.500 |
It's, it's one thing to make progress, to control a prosthetic arm. 01:22:56.900 |
It's another to have like a billion or several billion, you know, things and understanding what those signals mean. 01:23:02.300 |
Like it's the one thing that like, okay, I can learn to think some patterns to make something happen. 01:23:07.500 |
It's quite another thing to have a system, a computer, which actually knows exactly what cells it's talking to and how it's talking to them and interacting in a way like that. 01:23:22.300 |
What, so for me, what makes that merger very difficult practically in the next 10, 20, 50 years is like literally the biology side of it, which is like, it's just hard to do that kind of surgery in a safe way. 01:23:37.400 |
But your intuition is even the machine learning part of it, where the machine has to learn what the heck it's talking to. 01:23:46.600 |
And it's not, it's, it's easy to do when you're talking about hundreds of signals. 01:23:51.100 |
It's, it's a totally different thing to say, talking about billions of signals. 01:23:55.400 |
So you don't think it's the raw, it's a machine learning problem. 01:24:00.100 |
Well, I'm just saying, no, I think you'd have to have detailed knowledge. 01:24:03.400 |
You'd have to know exactly what the types of neurons you're connecting to. 01:24:07.300 |
I mean, in the brain, there's these, there are neurons that do all different types of things. 01:24:13.300 |
We talked about the grid cells and the place cells. 01:24:15.300 |
You know, you have to know what kind of cells you're talking to and what they're doing and how their timing works and all, all this stuff, which you can't, today there's no way of doing that. 01:24:23.000 |
But I think it's, I think it's a, I think the problem, you're right that the biological aspect of it, like who wants to have a surgery and have this stuff, you know, inserted in your brain, that's a problem. 01:24:34.100 |
I think the, the information coding aspect is much worse. 01:24:40.200 |
It's simple machine learning stuff because you're doing simple things, but if you want to merge your brain, like, I'm thinking on the internet, I've merged my brain with the machine and we're both working together, that's a totally different issue. 01:24:54.600 |
If you have a super clean signal from a bunch of neurons, even if at the start you don't know what those neurons are, 01:25:02.700 |
I think that's much easier than getting the clean signal in the first place. 01:25:07.600 |
I think if you think about today's machine learning, that's what you would conclude. 01:25:12.800 |
I'm thinking about what's going on in the brain and I don't reach that conclusion. 01:25:17.700 |
But even then, I think there's kind of a sad future there. 01:25:22.100 |
Like, you know, do I, do I have to like plug my brain into a computer? 01:25:33.500 |
Some sort of benefit? Oh, I disagree that we don't know what those are; it seems like there could be a lot of different applications. 01:25:40.200 |
It's like virtual reality: it's to expand your brain's capability to, like, read Wikipedia directly. 01:25:54.700 |
You're making your life in this short period of time better, right? 01:25:57.500 |
Just like having the internet made our life better. 01:26:02.100 |
So if I think about all the possible gains we can have here, that's a marginal one. 01:26:07.900 |
It's an individual gain: hey, I'm better, you know, I'm smarter. 01:26:20.200 |
When each of us individuals are smarter, we get a chance to then share our smartness. 01:26:24.800 |
We get smarter and smarter together, like, as a collective, kind of like an ant colony. But why don't I just create an intelligent machine that doesn't have any of this? 01:26:34.800 |
It has everything except, don't burden it with my brain. 01:26:41.200 |
It's like my child, but it's much, much smarter than me. 01:26:43.300 |
So I have a choice between doing some implant, some hybrid, weird, you know, biological thing that's bleeding and has all these problems and is limited by my brain, or creating a system which is super smart that I can talk to. 01:26:58.100 |
They can read the internet, you know, read Wikipedia and talk to me. 01:27:00.700 |
I guess my, uh, the open questions there are what does the manifestation of super intelligence look like? 01:27:08.500 |
So like, what are we going to, you talked about, why do I want to merge with AI? 01:27:13.300 |
Like what, what's the actual marginal benefit here? 01:27:16.100 |
If I, if we have a super intelligent system, how will it make our life better? 01:27:24.200 |
So let's, let's, that's a great question, but let's break it down to little pieces. 01:27:28.800 |
On the one hand, it can make our life better in lots of simple ways. 01:27:32.100 |
You mentioned like a care robot or something that helps me do things. 01:27:37.000 |
Little things like that, we can have better, smarter cars. 01:27:39.500 |
We can have, you know, better agents that aid helping us in our work environment and things like that. 01:27:45.300 |
To me, that's like the easy stuff, the simple stuff in the beginning. 01:27:48.500 |
And so in the same way that computers made our lives better in many, many ways, we will have those kinds of things. 01:27:57.700 |
To me, the really exciting thing about AI is its sort of transcendent quality in terms of humanity. 01:28:08.300 |
It's going to be hard for us to live anywhere else. 01:28:10.500 |
Uh, I don't think you and I are going to want to live on Mars anytime soon. 01:28:14.200 |
Um, and, um, and we're flawed, you know, we may end up destroying ourselves. 01:28:23.100 |
Uh, we, if not completely, we could destroy our civilizations. 01:28:26.400 |
You know, it's just face the fact that we have issues here, but we can create intelligent machines that can help us in various ways. 01:28:33.600 |
For example, one example I gave, and that sounds a little sci-fi, but I believe this. 01:28:37.100 |
If we really wanted to live on Mars, we'd have to have intelligent systems that go there and build the habitat for us. 01:28:46.700 |
Um, but could we have a thousand or 10,000, you know, engineering workers up there doing this stuff, building things, terraforming Mars? 01:28:55.700 |
But then if we want to, if we want to go around the universe, should I send my children around the universe? 01:29:00.500 |
Or should I send some intelligent machine, which is like a child that represents me and understands our needs here on earth that could travel through space? 01:29:08.400 |
Um, so in some sense, intelligence allows us to transcend the limitations of our biology. 01:29:15.600 |
Uh, and don't think of it as a negative thing. 01:29:20.000 |
It's in some sense, my children transcend my, my biology too, because they, they live beyond me. 01:29:26.300 |
Um, and they represent me, and they also have their own knowledge, and I can impart knowledge to them. 01:29:31.400 |
So intelligent machines will be like that too, but not limited like us. 01:29:34.500 |
But the question is, um, there's so many ways that transcendence can happen and the merger with AI and humans is one of those ways. 01:29:43.800 |
So you said intelligent, basically beings or systems propagating throughout the universe, representing us humans. 01:29:52.200 |
They represent us humans in the sense they represent our knowledge and our history, not us individually. 01:30:01.000 |
But I mean, the question is, is it just the database with, uh, with the really damn good, uh, model? 01:30:09.100 |
No, no, they're conscious, conscious, just like us. 01:30:19.400 |
I guess maybe I've already, I kind of, I take a very broad view of our life here on, on earth. 01:30:33.100 |
Are we fighting just cause we want to just keep going? 01:30:38.500 |
So if I ask myself, what's the point of life, what transcends that ephemeral sort of biological experience? To me, the answer is the acquisition of knowledge, to understand more about the universe, uh, and to explore. 01:31:03.800 |
If the ultimate outcome of humanity is that we create systems that are intelligent, that are our offspring, but they're not like us at all, 01:31:12.000 |
and we stay here and live on earth as long as we can, which won't be forever, but as long as we can. 01:31:25.500 |
Well, would you be okay then if, um, the human species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems? 01:31:38.700 |
I want our knowledge to be preserved and expanded. 01:31:46.600 |
But if it does happen, imagine we were sitting here and we were the last two people on earth, and we're saying, Lex, we blew it. 01:31:55.500 |
Wouldn't I feel better if I knew that our knowledge was preserved, and that we had agents that knew about it, that, you know, left earth? 01:32:06.700 |
You know, I make the analogy of like, you know, the dinosaurs, the poor dinosaurs, they live for, you know, tens of millions of years. 01:32:12.500 |
They, you know, they, they fought to survive. 01:32:15.400 |
They, they, you know, they did everything we do and then they're all gone. 01:32:19.600 |
Like, you know, and, and if we didn't discover their bones, nobody would ever know that they ever existed. 01:32:30.800 |
And it's kind of, it's jarring to think about that. 01:32:34.100 |
It's possible that a human like intelligence civilization has previously existed on earth. 01:32:40.100 |
The reason I say this is, it's jarring to think that if they went extinct, we wouldn't be able to find evidence of them. 01:32:50.500 |
After a sufficient amount of time, of course. Look, if we, human civilization, destroyed ourselves now, after a sufficient amount of time we might not be found the way we find evidence of the dinosaurs. 01:33:08.400 |
Although I'm not sure if we have enough knowledge about species going back for billions of years that we could eliminate that possibility. 01:33:19.500 |
Of course, this is a similar question to, you know, were there lots of intelligent species throughout our galaxy that have all disappeared? 01:33:27.100 |
That's super sad. Yeah, exactly: that there may have been much more intelligent alien civilizations in our galaxy. 01:33:38.400 |
You actually talked about this, that humans might destroy ourselves and how we might preserve our knowledge and advertise that knowledge to other... 01:33:58.200 |
You know, like make it like from a tourism perspective, make it interesting. 01:34:04.100 |
Can you describe how you think about this problem? 01:34:06.900 |
I broke it down into two parts, actually three parts. 01:34:09.600 |
One is, you know, there's a lot of things we know; what if we ended, what if our civilization collapsed? 01:34:22.800 |
But historically, it would be likely at some point. 01:34:29.200 |
You know, could we, and then intelligent life evolved again on this planet. 01:34:34.400 |
Wouldn't they want to know a lot about us and what we knew? 01:34:39.400 |
So one very simple thing I said, how would we archive what we know? 01:34:44.100 |
I said, you know what, that wouldn't be that hard. 01:34:46.100 |
Put a few satellites, you know, going around the sun and we'd upload Wikipedia every day and that kind of thing. 01:34:51.600 |
So, you know, if we end up killing ourselves, well, it's up there, and the next intelligent species will find it and learn something. 01:35:02.100 |
The next thing I said was, well, what about outside of our solar system? We have the SETI program. 01:35:08.600 |
We're looking for these intelligent signals from everybody. 01:35:11.100 |
And if you do a little bit of math, which I did in the book, and you say, well, what if technologically intelligent species, ones really able to do the stuff we're just starting to be able to do, only live for 10,000 years? 01:35:24.500 |
Well, the chances are we wouldn't be able to see any of them because they would have all been disappeared by now. 01:35:29.200 |
They've lived for 10,000 years and now they're gone. 01:35:32.100 |
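As a rough back-of-envelope version of that math, using purely illustrative numbers rather than the figures in the book: if each technological civilization broadcasts for only L years out of the galaxy's roughly T-year history, the expected number "on the air" at any one moment scales like N times L over T, so even many civilizations can leave nothing detectable today.

```python
# Back-of-envelope sketch with illustrative numbers (not the book's figures):
# if a technological civilization broadcasts for only L years out of the
# galaxy's T-year history, the chance any given one is "on the air" right now
# is roughly L / T, so even many civilizations may be silent today.

def expected_active_now(total_civilizations: int,
                        broadcast_lifetime_years: float,
                        galaxy_window_years: float) -> float:
    """Expected number of civilizations currently broadcasting."""
    return total_civilizations * (broadcast_lifetime_years / galaxy_window_years)

if __name__ == "__main__":
    # Hypothetical values purely for illustration.
    n_civs = 10_000                 # civilizations that ever arise
    lifetime = 10_000               # years each spends broadcasting
    window = 10_000_000_000         # ~10 billion years of galactic history
    print(expected_active_now(n_civs, lifetime, window))  # -> 0.01
```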
And so we're not going to find these signals being sent from these people. But I said, what kind of signal could you create that would last a million years or a billion years? 01:35:41.200 |
That someone would say, damn it, someone smart lived there. 01:35:45.800 |
That would be a life changing event for us to figure that out. 01:35:48.500 |
Well, what we're looking for today in the SETI program isn't that. 01:35:50.600 |
We're looking for very coded signals in some sense. 01:35:53.900 |
And so I asked myself, what would be a different type of signal one could create? 01:35:57.600 |
I've always thought about this throughout my life. 01:35:59.300 |
In the book, I gave one possible suggestion, which was we now detect planets going around other stars. 01:36:10.400 |
And we do that by seeing this slight dimming of the light as the planets move in front of them. 01:36:14.800 |
That's how we detect planets elsewhere in our galaxy. 01:36:20.100 |
What if we created something like that, that just rotated around the sun and it blocked out a little bit of light in a particular pattern that someone said, hey, that's not a planet. 01:36:33.200 |
You can say, what if it's beating out pi, you know, 3 point whatever? 01:36:40.100 |
From a distance, broadly broadcast, takes no continual activation on our part. 01:36:46.100 |
No one has to be sitting there running a computer and supplying it with power. 01:36:52.700 |
And I argued that part of the SETI program should be looking for signals like that. 01:36:57.300 |
And to look for signals like that, you ought to figure out how would we create a signal? 01:37:01.200 |
Like, what would we create that would be like that, that would persist for millions of years, that would be broadcast broadly, that you could see from a distance, that was unequivocal, came from an intelligent species. 01:37:13.000 |
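As a purely illustrative sketch of why such a signal would stand out (my own toy example; the only part from the conversation is the pi idea above): a planet produces strictly periodic dips in a star's light, whereas dips spaced according to the digits of pi would not be periodic, which is the kind of unequivocal artificiality a survey could look for.

```python
# Toy sketch (purely illustrative): a light curve whose dimming events are
# spaced according to the digits of pi. A real planet produces strictly
# periodic dips; this pattern would not, which is what might flag it as artificial.

def pi_spaced_dips(digits="31415926535", base_gap=10.0):
    """Return times of dimming events, with gaps proportional to pi's digits."""
    times, t = [], 0.0
    for d in digits:
        t += base_gap * (int(d) + 1)   # +1 so a '0' digit still advances time
        times.append(t)
    return times

def is_strictly_periodic(times, tol=1e-6):
    gaps = [b - a for a, b in zip(times, times[1:])]
    return all(abs(g - gaps[0]) < tol for g in gaps)

if __name__ == "__main__":
    dips = pi_spaced_dips()
    print("dip times:", [round(t, 1) for t in dips])
    print("looks like a planet?", is_strictly_periodic(dips))  # -> False
```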
And so I gave that one example, because there aren't any others that I know of, actually. 01:37:16.600 |
And then finally, right, ultimately our solar system will die at some point in time, so, you know, how do we go beyond that? 01:37:27.800 |
And I think, if it's at all possible, we'll have to create intelligent machines that travel throughout the solar system or throughout the galaxy. 01:37:37.800 |
I don't think it's going to be biological organisms. 01:37:40.200 |
So these are just things to think about, you know. 01:37:41.900 |
Like, what's the, you know, I don't want to be like the dinosaur. 01:37:44.600 |
I don't want to just live and, okay, that was it. 01:37:47.700 |
Well, there is a kind of presumption that we're going to live forever, and I think it is a bit sad to imagine that the message we send, as you talk about, is that we were once here instead of we are here. 01:38:07.400 |
But it's more of a it's more of an insurance policy in case we're not here, you know. 01:38:11.400 |
Well, I don't know, but there is something I think about. 01:38:16.200 |
We as humans don't often think about this, but it's like whenever I record a video, I've done this a couple of times in my life. 01:38:26.700 |
I've recorded a video for my future self, just for personal, just for fun. 01:38:30.400 |
And it's always just fascinating to think about that, preserving yourself for future civilizations. 01:38:40.100 |
For me, it was preserving myself for a future me. 01:38:42.800 |
But that's a little that's a little fun example of archival. 01:38:47.400 |
Well, these podcasts are preserving you and I in a way for future, hopefully well after we're gone. 01:38:54.500 |
But you don't often think about it; we're sitting here talking about this. 01:38:59.900 |
You're not thinking about the fact that you and I are going to die, and that there'll be somebody watching this, like, ten years after, whether or not we're still alive. 01:39:12.500 |
I'm here because I want to talk about ideas and these ideas transcend me and they transcend this time and on our planet. 01:39:22.800 |
We're talking here about ideas that could be around a thousand years from now or a million years from now. 01:39:28.100 |
When I wrote my book, I had an audience in mind and one of the clearest audiences was- 01:39:34.800 |
No, it was people reading this a hundred years from now. 01:39:38.300 |
I said to myself, how do I make this book relevant to someone reading this a hundred years from now? 01:39:42.800 |
What would they want to know that we were thinking back then? 01:39:45.700 |
What would make it like that was an interesting, it's still an interesting book. 01:39:49.700 |
I'm not sure I could achieve that, but that was how I thought about it. 01:39:53.900 |
Because these ideas, especially in the third part of the book, the ones we were just talking about, these crazy, 01:39:59.000 |
what sounds like crazy ideas about storing our knowledge and merging our brains with computers and sending our machines out into space. 01:40:09.800 |
And they may not even happen in the next hundred years. 01:40:15.400 |
But we have the unique opportunity right now, we, you, me, and other people like this, to sort of at least propose the agenda that might impact the future like that. 01:40:27.200 |
It's a fascinating way to think, both writing or creating, try to create ideas, try to create things that hold up in time. 01:40:39.700 |
You know, understanding how the brain works, we're going to figure that out once. 01:40:46.700 |
And people will study that thousands of years from now. 01:40:53.700 |
And because ideas are exciting even well into the future. 01:41:01.500 |
Well, the interesting thing is big ideas, even if they're wrong, are still useful. 01:41:09.600 |
Yeah, especially if they're not completely wrong. 01:41:12.700 |
Newton's laws are not wrong; it's just that Einstein's are better. 01:41:18.700 |
So, yeah, I mean, but we're talking with Newton and Einstein, we're talking about physics. 01:41:22.200 |
I wonder if we'll ever achieve that kind of clarity about understanding like complex systems and this particular manifestation of complex systems, which is the human brain. 01:41:38.200 |
I don't see any reason why we can't completely. 01:41:41.200 |
I mean, completely understand in the sense, you know, we don't really completely understand what all the molecules in this water bottle are doing. 01:41:48.200 |
But, you know, we have laws that sort of capture it pretty good. 01:41:51.200 |
And so we'll have that kind of understanding. 01:41:54.200 |
I mean, it's not like you're going to know what every neuron in your brain is doing. 01:42:02.200 |
And second of all, to do, you know, do what physics does, which is like have concrete experiments where we can validate. 01:42:27.200 |
And, you know, until fairly recently, I wouldn't have said that. 01:42:31.200 |
But right now, where I'm sitting right now, I'm saying, you know, this is going to happen. 01:42:38.200 |
We finally have a framework for understanding what's going on in the cortex. 01:42:47.200 |
So I can't see why we wouldn't be able to understand it. 01:42:52.200 |
So, I mean, on that topic, let me ask you to play devil's advocate. 01:42:56.200 |
Is it possible for you to imagine, looking 100 years from now at your book, in what ways it might turn out to be wrong? 01:43:18.200 |
I think there's, you know, I can best relate it to, like, things I'm worried about right now. 01:43:30.200 |
But it could be far more; there are enough things I don't know about that it might be wrong in ways I can't see yet. 01:43:44.200 |
I talked about, like, you have a thousand models of a coffee cup like that. 01:43:47.200 |
That could turn out to be wrong because it may be there are a thousand models that are 01:43:53.200 |
sub-models, but not really a single model of the coffee cup. 01:43:57.200 |
I mean, there are things, these are all sort of on the edges, things that I present as less certain. 01:44:06.200 |
And there are parts of the theory whose complexity I don't understand well. 01:44:13.200 |
So I think the idea that the brain is a distributed modeling system is not controversial at all, 01:44:22.200 |
The question then is, is each cortical column an independent modeling system? 01:44:32.200 |
My intuition, not even thinking about why you could be wrong, is the same intuition I have about 01:44:38.200 |
any sort of physics, like string theory: that we as humans have a desire for a clean explanation. 01:44:46.200 |
And a hundred years from now, intelligent systems might look back at us and laugh at 01:44:54.200 |
how we tried to get rid of the whole mess by having a simple explanation, when the reality is far messier. 01:45:05.200 |
It's like this idea of complex systems and cellular automata. 01:45:11.200 |
Yeah, I think that the history of science suggests that's not likely to occur. 01:45:17.200 |
The history of science suggests that as a theorist, and we're theorists, you look for simple explanations. 01:45:24.200 |
Fully knowing that whatever simple explanation you're going to come up with is not going to be completely correct. 01:45:36.200 |
They give you a framework on which you now can talk about a problem and figure out, okay, what do we do next. 01:45:44.200 |
The best frameworks stick around while the details change. 01:45:48.200 |
Again, the classic example is Newton and Einstein, right? 01:46:05.200 |
It's not obvious for physics either that the universe should be such that it's amenable to simple theories. 01:46:11.200 |
I know, but so far it appears to be, as far as we can tell. 01:46:19.200 |
But it's also an open question whether the brain is amenable to such clean theories. 01:46:29.200 |
I would just say, well, okay, the evidence we have suggests that the human brain is, 01:46:39.200 |
at one and the same time, extremely messy and complex, but there are some parts that are very regular. 01:46:46.200 |
It's extremely regular in its structure, and unbelievably so. 01:46:51.200 |
And then I mentioned earlier, the other thing is its universal abilities. 01:46:59.200 |
We haven't figured out what it can't learn yet. 01:47:01.200 |
We don't know, but we haven't figured out yet. 01:47:03.200 |
But it learns things that it never was evolved to learn. 01:47:08.200 |
That's why I went into this field, because I said, you know, this regular structure, 01:47:15.200 |
there have got to be some underlying principles that are common, and other scientists have had the same thought. 01:47:25.200 |
And whether the theories play out exactly this way or not, that is the role that theorists play. 01:47:33.200 |
And so far, it's worked out well, even though maybe we don't understand all the laws of physics. 01:47:41.200 |
The ones we have, our theories, are pretty useful. 01:47:45.200 |
You mentioned that we should not necessarily be, at least to the degree that we are, worried 01:47:51.200 |
about the existential risks of artificial intelligence relative to the risks that come from human nature itself. 01:48:02.200 |
What aspect of human nature worries you the most in terms of the survival of the human species? 01:48:22.200 |
One is how it's difficult for us to separate our rational component of ourselves from our 01:48:29.200 |
evolutionary heritage, which is not always pretty. 01:48:36.200 |
Rape is, evolutionarily, a good strategy for reproduction. 01:48:44.200 |
Making other people miserable at times is a good strategy for reproduction. 01:48:49.200 |
And so now that we know that, and yet we have this sort of ... You and I can have this very 01:48:54.200 |
rational discussion talking about intelligence and brains and life and so on. 01:49:01.200 |
It's just a big transition to get humans, all humans, to make the transition to being 01:49:06.200 |
like, "Let's pay no attention to all that ugly stuff over here. 01:49:12.200 |
What's unique about humanity is our knowledge and our intellect. 01:49:16.200 |
But the fact that we're striving is in itself amazing. 01:49:19.200 |
The fact that we're able to overcome that part, and it seems like we are more and more 01:49:28.200 |
That is the optimistic view, and I agree with you. 01:49:37.200 |
It could end tomorrow because some terrorists could get nuclear bombs and blow us all up. 01:49:43.200 |
The other thing I'm disappointed by, and I understand it, 01:49:48.200 |
it's just a fact, is that we're so prone to false beliefs. 01:49:55.200 |
The things we can interact with directly, physical objects, people, that model's pretty good. 01:50:03.200 |
I touch something, I look at it, I talk to you, see if my model's correct. 01:50:06.200 |
But so much of what we know is stuff I can't directly interact with. 01:50:10.200 |
I only know because someone told me about it. 01:50:12.200 |
So we're inherently prone to having false beliefs, because if I'm told something, I'm likely to believe it. 01:50:21.200 |
Then we have the scientific process which says we are inherently flawed. 01:50:26.200 |
So the only way we can get closer to the truth is by looking for contrary evidence. 01:50:34.200 |
Yeah, like this conspiracy theory, this theory that scientists keep telling me about, that the earth is round. 01:50:43.200 |
As far as I can tell, when I look out, it looks pretty flat. 01:50:49.200 |
But it's also, I tend to believe that we haven't figured out most of this thing. 01:51:04.200 |
I mean, it's like, oh, that's like a pleasure. 01:51:08.200 |
But I'm saying like there's going to be a lot of "wrong ideas." 01:51:13.200 |
I mean, I've been thinking a lot about engineering systems like social networks 01:51:18.200 |
and so on, and I've been worried about censorship and thinking through all that 01:51:22.200 |
kind of stuff because there's a lot of wrong ideas. 01:51:27.200 |
But then I also read history and see when you censor ideas that are wrong. 01:51:34.200 |
Now, this could be small-scale censorship, like a young grad student who comes up, 01:51:40.200 |
who raises their hand and says some crazy idea. 01:51:43.200 |
A form of censorship could be, I shouldn't use the word censorship, but- 01:51:49.200 |
Disincentivizing them: no, no, no, no, this is the way it's been done. 01:51:56.200 |
So in some sense, those wrong ideas most of the time end up being wrong, but every once in a while one of them isn't. 01:52:09.200 |
At the very end of the book, I ended up with a sort of a plea, or a recommended course of action. 01:52:17.200 |
And the best way I know how to deal with this issue that you bring up 01:52:23.200 |
is if everybody understood as part of your upbringing in life, 01:52:28.200 |
something about how your brain works, that it builds a model of the world, 01:52:32.200 |
and the basics of how it builds that model of the world, 01:52:35.200 |
and that the model is not the real world, it's just a model. 01:52:39.200 |
And it's never going to reflect the entire world, and it can be wrong, 01:52:43.200 |
And here's all the ways you can get a wrong model in your head, right? 01:52:48.200 |
It's not to prescribe what's right or wrong, just understand that process. 01:52:52.200 |
If we all understood the processes, and I get together and you say, 01:52:55.200 |
"I disagree with you, Jeff," and I say, "Lex, I disagree with you," 01:52:57.200 |
that at least we understand that we're both trying to model something. 01:53:02.200 |
We both have different information which leads to our different models, 01:53:05.200 |
and therefore I shouldn't hold it against you, and you shouldn't hold it against me. 01:53:08.200 |
And we can at least agree that, well, what can we look for in its common ground 01:53:12.200 |
to test our beliefs, as opposed to so much as we raise our kids on dogma, 01:53:18.200 |
which is this is a fact, and this is a fact, and these people are bad. 01:53:24.200 |
If everyone knew just to be skeptical of every belief, and why, 01:53:31.200 |
and how their brains do that, I think we might have a better world. 01:53:35.200 |
Do you think the human mind is able to comprehend reality? 01:53:40.200 |
So you talk about this creating models that are better and better. 01:53:49.200 |
So the wildest ideas is like Donald Hoffman saying we're very far away from reality. 01:53:56.200 |
Well, I guess it depends on how you define reality. 01:54:00.200 |
We have a model of the world that's very useful. 01:54:06.200 |
Well, for our survival and our pleasure, whatever, right? 01:54:18.200 |
I don't think, I don't know the answer to that question. 01:54:23.200 |
I think that's part of the question we're trying to figure out, right? 01:54:26.200 |
Obviously, if you end up with a theory of everything, 01:54:29.200 |
that really is a theory of everything, and all of a sudden, 01:54:32.200 |
everything comes into play, and there's no room for something else, 01:54:35.200 |
then you might feel like we have a good model of the world. 01:54:37.200 |
Yeah, but if we have a theory of everything, and somehow, first of all, 01:54:40.200 |
you'll never be able to really conclusively say it's a theory of everything, 01:54:43.200 |
but say somehow we are very damn sure it's a theory of everything, 01:54:49.200 |
and that it captures the entirety of the physical process. 01:54:52.200 |
I'm still not sure that gives us an understanding of the next many layers of abstraction above it. 01:55:02.200 |
Well, also, what if string theory turns out to be true? 01:55:05.200 |
And then you say, well, we have no way of modeling 01:55:08.200 |
what's going on in those other dimensions that are wrapped in on each other. 01:55:15.200 |
I honestly don't know how, for us, for human interaction, 01:55:19.200 |
for ideas of intelligence, how it helps us to understand 01:55:22.200 |
that we're made up of vibrating strings that are, like, 10 to the whatever times smaller than anything we can measure. 01:55:31.200 |
You could probably build better weapons and better rockets, 01:55:34.200 |
but you're not going to be able to understand intelligence. 01:55:41.200 |
It might lead to a better understanding of the beginning of the universe. 01:55:46.200 |
It might lead to a better understanding of, I don't know. 01:55:50.200 |
I guess I think the acquisition of knowledge has always been one of those things that's worthwhile in itself, 01:55:59.200 |
and you don't always know what is going to make a difference. 01:56:04.200 |
You're pleasantly surprised by the weird things you find. 01:56:11.200 |
do you think there's a lot of innovation to be done on the machine side? 01:56:16.200 |
You use the computer as a metaphor quite a bit. 01:56:19.200 |
Is there different types of computer that would help us build intelligence? 01:56:23.200 |
I mean, what are the physical manifestations of intelligent machines? 01:56:30.200 |
We have no idea how this is going to play out yet. 01:56:34.200 |
Today, of course, we model these things on traditional computers, 01:56:38.200 |
and now GPUs are really popular with neural networks and so on. 01:56:44.200 |
But there are companies coming up with fundamentally new physical substrates for this kind of computation. 01:56:51.200 |
I don't know if they're going to work or not, 01:56:53.200 |
but I think there'll be decades of innovation here. 01:56:59.200 |
Do you think the final thing will be messy like our biology is messy? 01:57:03.200 |
Or do you think it's the old bird versus airplane question? 01:57:08.200 |
Or do you think we could just build airplanes that fly way better than birds 01:57:16.200 |
in the same way we could build an electrical neocortex? 01:57:29.200 |
The Wright brothers, the problem they were trying to solve was controlled flight, 01:57:35.200 |
how to turn an airplane, not how to propel an airplane. 01:57:40.200 |
At that time, there was already wing shapes, which they had from studying birds. 01:57:44.200 |
There was already gliders that carried people. 01:57:46.200 |
The problem was, if you put a rudder on the back of a glider and you turned it, it didn't turn the way you wanted. 01:57:51.200 |
So the problem was how do you control flight? 01:57:54.200 |
And they studied birds, and they actually had birds in captivity. 01:57:59.200 |
They observed them in the wild, and they discovered the secret was the birds 01:58:05.200 |
And so that's what they did on the Wright brothers' flyer. 01:58:07.200 |
They had these sticks, and you would twist the wing, 01:58:09.200 |
and that was their innovation, not their propeller. 01:58:12.200 |
And today, airplanes still twist their wings. 01:58:16.200 |
We just twist the tail end of it, the flaps, which is the same thing. 01:58:20.200 |
So today's airplanes fly on the same principles as birds, which is observed by-- 01:58:28.200 |
Once you understand the principles of flight, you can choose how to implement them. 01:58:34.200 |
No one's going to use bones and feathers and muscles, but they do have wings, 01:58:41.200 |
So when we have the principles of computation that go into modeling the world 01:58:41.200 |
in a brain, we understand those principles very clearly. 01:58:50.200 |
We have choices on how to implement them, and some of them will be biological-like and some won't. 01:58:50.200 |
But I do think there's going to be a huge amount of innovation here. 01:58:59.200 |
Just think about the innovation that went into computers. 01:59:09.200 |
I mean, there's millions of things they had to do: memory systems and so on. 01:59:14.200 |
Well, it's interesting that the deep learning--the effectiveness of deep learning 01:59:19.200 |
for specific tasks is driving a lot of innovation in the hardware, 01:59:23.200 |
which may end up actually allowing us to discover intelligent systems 01:59:29.200 |
that operate very differently from, or at least at a much bigger scale than, deep learning. 01:59:33.200 |
So ultimately, it's good to have an application that's making our life better now 01:59:39.200 |
because of the capitalist process--if you can make money, it funds the work. 01:59:44.200 |
I mean, the other way we fund science--Neil deGrasse Tyson writes about this-- 01:59:48.200 |
is, of course, through military conquest. 01:59:53.200 |
Here's an interesting thing we're doing in this regard. 01:59:56.200 |
So we have a series of these biological principles, 01:59:59.200 |
and we can see how to build these intelligent machines, 02:00:02.200 |
but we've decided to apply some of these principles to today's machine learning techniques. 02:00:12.200 |
One example is sparsity: in the brain, most of the neurons are inactive at any point in time. 02:00:14.200 |
It's sparse, and the connectivity is sparse, and that's different than deep learning networks. 02:00:18.200 |
So we've already shown that we can speed up existing deep learning networks 02:00:23.200 |
anywhere from a factor of 10 to a factor of 100--I mean, literally 100-- 02:00:23.200 |
And so if we can prove this actually in the largest systems that are commercially applied today, 02:00:44.200 |
Well, sparsity is something that doesn't run really well on existing hardware. 02:00:50.200 |
It doesn't run well on GPUs or on CPUs. 02:00:56.200 |
And so that would be a way of sort of bringing more brain principles 02:01:00.200 |
into the existing system on a commercially valuable basis. 02:01:04.200 |
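To make the sparsity idea concrete, here is a minimal sketch of one common way to impose it: a k-winners-take-all activation, in which only the top few percent of units in a layer keep their values and the rest are zeroed. This is only an illustration of the general principle described above, not Numenta's actual code; the function name, the 5% sparsity level, and the layer sizes are assumptions chosen for the example.

```python
import numpy as np

def k_winners_take_all(activations, sparsity=0.05):
    """Keep only the top-k activations in each row; zero out the rest."""
    k = max(1, int(sparsity * activations.shape[-1]))
    # The k-th largest value in each row becomes that row's threshold.
    thresholds = np.partition(activations, -k, axis=-1)[..., -k, None]
    return np.where(activations >= thresholds, activations, 0.0)

# Toy usage: a dense layer followed by the sparse nonlinearity.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 128))            # batch of 4 hypothetical inputs
w = rng.standard_normal((128, 256)) * 0.1    # hypothetical weight matrix
dense_out = x @ w
sparse_out = k_winners_take_all(dense_out)
print((sparse_out != 0).mean())              # roughly 0.05: only ~5% of units stay active
```

The speed-up only materializes when the hardware can skip the zeroed activations and unused weights entirely, which is part of why, as discussed just above, sparsity maps poorly onto today's dense GPU and CPU kernels.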
Another thing we think we can do is use these dendrite models-- 02:01:04.200 |
we talked earlier about the prediction occurring inside a neuron. 02:01:13.200 |
That basic property can be applied to existing neural networks 02:01:17.200 |
and allow them to learn continuously, which is something they don't do today. 02:01:22.200 |
The dendritic spikes that you were talking about. 02:01:24.200 |
Yeah, well, we wouldn't model them as spikes, but the idea is 02:01:27.200 |
that today's neural networks have something called a point neuron, a very simple model of a neuron. 02:01:32.200 |
And by adding dendrites to them--just one more level of complexity 02:01:36.200 |
that's in biological systems--you can solve problems in continuous learning. 02:01:43.200 |
So we're trying to bring the existing field 02:01:48.200 |
of machine learning commercially along with us. 02:01:53.200 |
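As a rough sketch of the "more than a point neuron" idea, the example below gives each unit a few dendritic segments that compare a context signal against stored patterns and gate the unit's output, so different contexts activate different subsets of units; that kind of gating is one route people explore for continual learning. The function, the array shapes, and the sigmoid-style gating are illustrative assumptions, not Numenta's published implementation.

```python
import numpy as np

def active_dendrite_layer(x, weights, dendrite_segments, context):
    """One layer of units whose outputs are gated by context-matching dendrites.

    x:                 (batch, n_in)                   feedforward input
    weights:           (n_in, n_out)                   ordinary point-neuron weights
    dendrite_segments: (n_out, n_segments, n_context)  per-unit dendritic weights
    context:           (batch, n_context)              signal describing the current task/situation
    """
    feedforward = x @ weights                                   # (batch, n_out)
    # Each dendritic segment scores how well it matches the context vector;
    # the best-matching segment per unit modulates that unit's response.
    segment_scores = np.einsum("bc,osc->bos", context, dendrite_segments)
    best_match = segment_scores.max(axis=2)                     # (batch, n_out)
    gate = 1.0 / (1.0 + np.exp(-best_match))                    # sigmoid-style gating
    return feedforward * gate

# Toy usage with made-up sizes.
rng = np.random.default_rng(1)
x = rng.standard_normal((2, 32))
w = rng.standard_normal((32, 64)) * 0.1
segments = rng.standard_normal((64, 4, 16)) * 0.1
ctx = rng.standard_normal((2, 16))
print(active_dendrite_layer(x, w, segments, ctx).shape)  # (2, 64)
```

In this sketch, a plain point-neuron layer is the special case where the gate is always fully open; the dendritic part only changes which units respond strongly in which context.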
You brought up this idea of keeping it paying for itself commercially 02:01:57.200 |
as we move towards the ultimate goal of a true AI system. 02:02:00.200 |
Even small innovations on neural networks are really, really exciting. 02:02:04.200 |
It seems like such a trivial model of the brain, 02:02:09.200 |
and applying different insights, like you said-- 02:02:13.200 |
continuous learning, or making it more asynchronous, 02:02:19.200 |
or maybe making it more dynamic, or incentivizing sparsity-- 02:02:27.200 |
could make it somehow much better. 02:02:34.200 |
Well, if you can make things 100 times faster-- 02:02:39.200 |
people are spending millions of dollars just training some of these networks now. 02:02:48.200 |
For young people listening to this today in high school and college, 02:02:53.200 |
what advice would you give them in terms of which career path to take? 02:03:03.200 |
Well, in my case, I didn't start life with any kind of goals. 02:03:08.200 |
When I was going to college, it was like, "Oh, what should I study?" 02:03:11.200 |
"Well, maybe I'll do some electrical engineering stuff." 02:03:15.200 |
It wasn't like today, where you see some of these young kids who are so motivated. 02:03:22.200 |
But then I did fall in love with something, besides my wife. 02:03:26.200 |
I fell in love with this idea, like, "Oh, my God, it would be so cool to understand how the brain works." 02:03:31.200 |
Then I said to myself, "That's the most important thing I could work on." 02:03:34.200 |
I can't imagine anything more important because if we understand how brains work, 02:03:37.200 |
we could build intelligent machines, and they could figure out-- 02:03:42.200 |
Then I said, "I want to understand how I work." 02:03:44.200 |
I fell in love with this idea and I became passionate about it. 02:03:49.200 |
This is a trope. People say this, but it's true. 02:03:53.200 |
Because I was passionate about it, I was able to put up with all the obstacles. 02:04:00.200 |
I was that person people said "You can't do this" to. 02:04:03.200 |
I was a graduate student at Berkeley when they said, 02:04:05.200 |
"You can't study this problem. No one can solve this," 02:04:10.200 |
Then I went into mobile computing, and it was like, 02:04:12.200 |
people said, "You can't do that. You can't build a cell phone." 02:04:17.200 |
But all along, I kept being motivated because I wanted to work on this problem. 02:04:20.200 |
I said, "I want to understand how the brain works." 02:04:24.200 |
I'm going to figure it out, do the best I can. 02:04:27.200 |
Having that helped, because really, as you pointed out, Lex, 02:04:34.200 |
There's so many downers along the way, so many obstacles that get in your way. 02:04:38.200 |
I'm sitting here happy all the time, but trust me, it's not always like that. 02:04:42.200 |
I guess the happiness, the passion is a prerequisite for surviving the whole thing. 02:04:51.200 |
I don't want to sit here and say to someone, "You need to find a passion and do it." 02:04:54.200 |
No, maybe you don't, but if you do find something you're passionate about, 02:04:59.200 |
then you can follow it as far as your passion will let you. 02:05:04.200 |
Do you remember how you found it, how the spark happened? 02:05:11.200 |
Yeah, because you said, "It's such an interesting," 02:05:13.200 |
so almost like later in life, by later, I mean not when you were five, 02:05:18.200 |
you didn't really know, and then all of a sudden you fell in love with it. 02:05:22.200 |
Yeah, there were two separate events that compounded one another. 02:05:25.200 |
One, when I was probably a teenager, I might have been 17 or 18, 02:05:30.200 |
I made a list of the most interesting problems I could think of. The first was, why does anything exist at all? 02:05:39.200 |
The second one was, well, given it exists, why does it behave the way it does? 02:05:42.200 |
That's the laws of physics. Why is E equal to mc squared, not mc cubed? 02:05:45.200 |
That's an interesting question. I don't know. 02:05:47.200 |
The third one was, what's the origin of life? 02:05:53.200 |
And the fourth was, what is intelligence? I stopped there. I said, "Well, that's probably the most interesting one." 02:05:59.200 |
But then when I was 22, 02:06:11.200 |
I was reading the September issue of Scientific American, which was a special issue on the brain. 02:06:16.200 |
And then the final essay was by Francis Crick, of DNA fame. 02:06:22.200 |
And he had turned his interest to studying the brain. 02:06:25.200 |
And he said, "You know, there's something wrong here." 02:06:28.200 |
He says, "We got all this data, all this fact." 02:06:32.200 |
This is 1979. "All these facts about the brain. 02:06:37.200 |
Do we need more facts, or do we just need to think about a way 02:06:42.200 |
Maybe we're just not thinking about the problem correctly." 02:06:53.200 |
I said, "I don't have to become an experimental neuroscientist. 02:06:57.200 |
I could just look at all those facts and become a theoretician." 02:07:04.200 |
And I felt like that was something I would be good at. 02:07:07.200 |
I said, "I wouldn't be a good experimentalist." 02:07:22.200 |
And there's something, obviously, you can't convert into words-- 02:07:28.200 |
I mean, I've had that a few times in my life, just something-- 02:07:36.200 |
Yeah. I thought it was something that was both important and something I would be good at. 02:07:41.200 |
And so all of a sudden, I felt like, "Oh, it gave me purpose in life." 02:07:44.200 |
I honestly don't think it has to be as big as one of those four questions. 02:07:48.200 |
I think you can find those things in the smallest of things. 02:07:52.200 |
I'm with David Foster Wallace, who said, "The key to life is to be unborable." 02:07:56.200 |
I think it's very possible to find that intensity of joy in the smallest thing. 02:08:05.200 |
No, but I'm actually speaking to the audience. 02:08:09.200 |
You happen to get excited by one of the bigger questions 02:08:13.200 |
in the universe, but even the smallest things can do it. 02:08:20.200 |
Just giving your life over to the study 02:08:24.200 |
and the mastery of a particular sport is fascinating. 02:08:28.200 |
And if it sparks joy and passion, you're able to, in the case of the Olympics, 02:08:33.200 |
basically suffer for a couple of decades to achieve perfection. 02:08:36.200 |
I mean, you can find joy and passion just being a parent. 02:08:41.200 |
So I--not always, but for a long time--wanted kids and to get married and stuff, 02:08:46.200 |
and especially it has to do with the fact that I've seen a lot of people 02:08:51.200 |
that I respect get a whole other level of joy from kids. 02:08:57.200 |
And at first, your thinking is, "Well, I don't have enough time in the day," right? 02:09:09.200 |
--but if I want to solve intelligence, how is this kid situation going to help me? 02:09:14.200 |
But then you realize, like you said, it's about the things that spark joy, 02:09:14.200 |
and it's very possible that kids can provide even a greater or deeper 02:09:22.200 |
or more meaningful joy than those bigger questions, and that they enrich each other. 02:09:26.200 |
And that seemed like--obviously, when I was younger, 02:09:34.200 |
it was probably a counterintuitive notion because there's only so many hours in the day. 02:09:38.200 |
But then life is finite, and you have to pick the things that give you joy. 02:09:44.200 |
Yeah. But you also--I understand you can be patient, too. 02:09:48.200 |
I mean, it's finite, but we do have whatever, 50 years or something. 02:09:53.200 |
So in my case, I had to give up on my dream of neuroscience 02:09:58.200 |
because I was a graduate student at Berkeley and they told me I couldn't do this. 02:10:04.200 |
And so I went back into the computing industry for a number of years. 02:10:09.200 |
I thought it would be four years, but it turned out to be more. 02:10:11.200 |
But I said, "I'll come back. I'm definitely going to come back. 02:10:15.200 |
I know I'm going to do this computer stuff for a while, but I'm definitely coming back. 02:10:18.200 |
Everyone knows that." And it's the same as raising kids. 02:10:21.200 |
Well, yeah, you have to spend a lot of time with your kids. 02:10:25.200 |
But that doesn't mean you have to give up on other dreams. 02:10:28.200 |
It just means that you may have to wait a week or two to work on that next idea. 02:10:32.200 |
You talk about the darker, disappointing sides of human nature 02:10:39.200 |
that we're hoping to overcome so that we don't destroy ourselves. 02:10:43.200 |
I tend to put a lot of value in the broad general concept of love, 02:10:49.200 |
of the human capacity of compassion towards each other, of just kindness, 02:10:58.200 |
whatever that longing for just human-to-human connection is. 02:11:04.200 |
I tend to see a lot of value in this collective intelligence aspect. 02:11:07.200 |
I think some of the magic of human civilization happens when there's... 02:11:22.200 |
What role does love play in the human condition? 02:11:26.200 |
From a neocortex point of view, I don't think it impacts how we model the world. 02:11:32.200 |
From a human condition point of view, I think it's core. 02:11:36.200 |
I mean, we get so much pleasure out of loving people and helping people. 02:11:47.200 |
and maybe we can throw it under the bus of evolution if you want. 02:11:54.200 |
It doesn't impact how we think about how we model the world. 02:11:58.200 |
But from a humanity point of view, I think it's essential. 02:12:03.200 |
Also, I tend to think that some aspects of that need to be engineered 02:12:07.200 |
into AI systems, both in their ability to have compassion for other humans 02:12:16.200 |
and their ability to maximize love in the world between humans. 02:12:25.200 |
Whenever there's a deep integration between AI systems and humans, 02:12:29.200 |
in specific applications where AI and humans work together, 02:12:32.200 |
I think that's something that's often not talked about in terms of the metrics 02:12:40.200 |
you try to optimize--which metric to maximize in a system. 02:12:46.200 |
It seems like one of the most powerful things in societies is the capacity to love. 02:12:56.200 |
I think it's a great way of thinking about it. 02:12:58.200 |
I have been thinking more of these fundamental mechanisms in the brain 02:13:03.200 |
as opposed to the social interaction between humans and AI systems 02:13:09.200 |
I think if you think about that, you're absolutely right. 02:13:14.200 |
I can have intelligent systems that don't have that component. 02:13:18.200 |
They're just running something or building something. 02:13:23.200 |
But if you think about interacting with humans, yeah. 02:13:28.200 |
I don't think it's going to appear on its own. 02:13:35.200 |
From a reinforcement learning perspective, 02:13:41.200 |
whether the darker sides of human nature or the better angels of our nature 02:13:47.200 |
win out statistically speaking, I don't know. 02:13:50.200 |
I tend to be optimistic and hope that love wins out in the end. 02:13:54.200 |
You've done a lot of incredible stuff, and your book is driving towards 02:14:01.200 |
this fourth question that you started with on the nature of intelligence. 02:14:06.200 |
What do you hope your legacy will be for people reading it 100 years from now-- 02:14:17.200 |
Well, I think as an entrepreneur or a scientist or any human 02:14:22.200 |
who's trying to accomplish some things, I have a view that really 02:14:31.200 |
It's like if we didn't study the brain, someone else would study the brain. 02:14:36.200 |
If Elon Musk didn't make electric cars, someone else would do it eventually. 02:14:44.200 |
What you can do as an individual is you can accelerate something 02:14:49.200 |
that's beneficial and make it happen sooner than whatever. 02:14:55.200 |
You can't create a new reality that wasn't going to happen anyway. 02:15:00.200 |
From that perspective, I would hope that our work-- 02:15:07.200 |
people would look back and say, "Hey, they really helped make this happen sooner. 02:15:13.200 |
"They helped us understand the nature of false beliefs 02:15:19.200 |
"Now we're so happy that we have these intelligent machines 02:15:21.200 |
doing these things, helping us, that maybe solved 02:15:24.200 |
the climate change problem and they made it happen sooner." 02:15:30.200 |
Some would say, "Those guys just moved the needle forward 02:15:36.200 |
Well, it feels like the progress of human civilization is not-- 02:15:44.200 |
If you have individuals that accelerate towards one direction, 02:15:52.200 |
I think in a long stretch of time, all trajectories will be traveled, 02:15:58.200 |
but I think it's nice for this particular civilization on Earth to take one of the better trajectories. 02:16:03.200 |
I think you're right. We have to take the whole period of 02:16:06.200 |
World War II, Nazism, or something like that. 02:16:08.200 |
Well, that was a bad sidestep. We've been over there for a while. 02:16:11.200 |
But there is the optimistic view about life-- 02:16:18.200 |
that it progresses ultimately, even if we have years of darkness. 02:16:28.200 |
It could also mean eliminating some bad missteps along the way too. 02:16:36.200 |
Even though we're talking about the end of civilization, 02:16:39.200 |
I think we're going to live for a long time. I hope we are. 02:16:42.200 |
I think our society in the future is going to be better. 02:16:45.200 |
We're going to have less people killing each other. 02:16:47.200 |
We'll live in some sort of way that's compatible 02:16:56.200 |
and all we can do is try to get there sooner. 02:16:58.200 |
And at the very least, if we do destroy ourselves, 02:17:04.200 |
--that will tell an alien civilization that we were once here. 02:17:15.200 |
Even if we kill ourselves a million years from now or a billion years from now, 02:17:19.200 |
there were these curious creatures who were once here. 02:17:24.200 |
and thank you so much for talking to me once again. 02:17:27.200 |
Well, actually, it's great. I love what you do. 02:17:30.200 |
You have the most interesting people on, me aside. 02:17:34.200 |
So it's a real service, I think, that you do-- 02:17:38.200 |
in a broader sense, for humanity, I think. 02:17:43.200 |
Thanks for listening to this conversation with Jeff Hawkins, 02:17:45.200 |
and thank you to Codecademy, BioOptimizers, ExpressVPN, 02:17:53.200 |
Check them out in the description to support this podcast. 02:17:56.200 |
And now let me leave you with some words from Albert Camus. 02:18:01.200 |
"An intellectual is someone whose mind watches itself." 02:18:05.200 |
I like this because I'm happy to be both halves, the watching and the watched. 02:18:05.200 |
"Can they be brought together?" This is a practical question we must try to answer. 02:18:17.200 |
Thank you for listening, and hope to see you next time.