Melanie Mitchell: Concepts, Analogies, Common Sense & Future of AI | Lex Fridman Podcast #61
Chapters
0:00 Intro
2:33 Artificial Intelligence
5:15 What is AI
6:31 Strong AI vs Weak AI
9:11 Creating Intelligence Without Understanding Our Own Mind
10:07 Are Humans Better Than Computers
12:47 Why Do We Want AI
13:57 Understanding Ourselves
15:24 Intelligence
15:56 Are Ant colonies intelligent
17:34 Are humans intelligent
18:07 The AI field
18:38 Predicting the future
19:37 Computer Vision
22:03 The Journey to Intelligence
23:34 Deep Learning
28:06 What's the Foundation
29:53 Who is the Most Impressive
31:23 What is Copycat
34:32 Workspace
35:13 Work in the workspace
36:10 Innate concepts
36:47 Concept Analogies
37:50 Analogies
39:55 Analogies in conversations
41:10 Analogies in cognition
42:15 Analogies in perception
44:00 Network graph of concepts
45:37 Semantic web
46:47 Intuitive physics
48:13 The Cyc project
49:41 Data structures
50:39 Will our current hardware work
51:58 What is your hope for approaches like Copycat
55:32 Analogies and deep learning
00:00:00.000 |
The following is a conversation with Melanie Mitchell. 00:00:06.720 |
and an external professor at Santa Fe Institute. 00:00:10.020 |
She has worked on and written about artificial intelligence 00:00:14.940 |
including adaptive complex systems, genetic algorithms, 00:00:30.980 |
to today, she has contributed a lot of important ideas 00:00:34.180 |
to the field of AI, including her recent book, 00:00:58.100 |
I recently started doing ads at the end of the introduction. 00:01:01.580 |
I'll do one or two minutes after introducing the episode 00:01:12.140 |
I provide timestamps for the start of the conversation, 00:01:18.500 |
by trying out the product or service being advertised. 00:01:27.500 |
I personally use Cash App to send money to friends, 00:01:38.180 |
You can buy fractions of a stock, say $1 worth, 00:01:43.380 |
Brokerage services are provided by Cash App Investing, 00:01:51.860 |
to support one of my favorite organizations called FIRST, 00:01:55.260 |
best known for their FIRST Robotics and LEGO competitions. 00:01:58.900 |
They educate and inspire hundreds of thousands of students 00:02:04.420 |
and have a perfect rating on Charity Navigator, 00:02:11.660 |
When you get Cash App from the App Store or Google Play 00:02:23.220 |
that I've personally seen inspire girls and boys 00:02:28.980 |
And now, here's my conversation with Melanie Mitchell. 00:02:32.820 |
The name of your new book is "Artificial Intelligence," 00:02:39.700 |
The name of this podcast is "Artificial Intelligence." 00:02:44.100 |
and ask the old Shakespeare question about roses. 00:02:46.940 |
And what do you think of the term artificial intelligence 00:02:51.100 |
for our big and complicated and interesting field? 00:03:10.060 |
There's so many different kinds of intelligence, 00:03:14.420 |
degrees of intelligence, approaches to intelligence. 00:03:24.380 |
he called it that to differentiate it from cybernetics, 00:03:28.780 |
which was another related movement at the time. 00:03:33.700 |
And he later regretted calling it artificial intelligence. 00:03:40.720 |
for calling it complex information processing, 00:03:55.360 |
in terms of words that's most problematic, would you say? 00:04:02.960 |
because I personally was attracted to the field 00:04:07.060 |
because I was interested in the phenomenon of intelligence. 00:04:11.280 |
And if it was called complex information processing, 00:04:13.620 |
maybe I'd be doing something wholly different now. 00:04:17.200 |
I've heard the term cognitive systems used, for example. 00:04:22.760 |
- Yeah, I mean, cognitive has certain associations with it. 00:04:36.800 |
as being different from other aspects of intelligence. 00:04:44.640 |
beautiful mess of things that encompasses the whole thing. 00:04:48.760 |
- Yeah, I think it's hard to draw lines like that. 00:04:53.040 |
When I was coming out of grad school in 1990, 00:05:13.900 |
- What about, to stick briefly on terms and words, 00:05:24.100 |
or like Yann LeCun prefers human-level intelligence. 00:05:32.460 |
that achieve higher and higher levels of intelligence. 00:05:37.740 |
And somehow artificial intelligence seems to be 00:05:50.340 |
to describe the thing that perhaps we strive to create? 00:05:59.220 |
And defining exactly what it is that we're talking about. 00:06:22.580 |
or carrying out processes that we would call intelligent. 00:06:27.580 |
- At a high level, if you look at the founding 00:06:34.460 |
of the field of McCarthy and Searle and so on, 00:06:38.940 |
are we closer to having a better sense of that line 00:06:49.380 |
- Yes, I think we're closer to having a better idea 00:06:57.140 |
Early on, for example, a lot of people thought 00:07:13.340 |
to play chess better than humans, that revised that view. 00:07:18.340 |
And people said, okay, well, maybe now we have 00:07:38.740 |
of what intelligence is because I don't think 00:07:43.020 |
At least that's not what I want to call intelligence. 00:07:47.620 |
Or will we eventually really feel as a civilization 00:07:51.220 |
like we've crossed the line if it's possible? 00:07:54.020 |
- It's hard to predict, but I don't see any reason 00:07:58.640 |
create something that we would consider intelligent. 00:08:10.480 |
will be refined more and more until we finally figure out 00:08:15.680 |
But I think eventually we will create machines 00:08:24.400 |
They may not be the kinds of machines we have now. 00:08:28.040 |
And one of the things that that's going to produce 00:08:47.680 |
They have algorithms, they process information by, 00:08:57.040 |
we get this emergent property that we call intelligence. 00:09:01.240 |
But underlying it is really just cellular processing 00:09:12.220 |
do you think it's possible to create intelligence 00:09:19.440 |
but do you think it's possible to sort of create 00:09:42.840 |
But I think that's a really big open question. 00:09:49.280 |
kind of brute force approaches based on, say, 00:09:59.080 |
And they have nothing to do with the way our minds work. 00:10:03.080 |
So that's been surprising to me, so it could be wrong. 00:10:06.800 |
- To explore the psychological and the philosophical, 00:10:11.760 |
with something that's more intelligent than us? 00:10:15.960 |
Do you think perhaps the reason we're pushing that line 00:10:19.720 |
further and further is we're afraid of acknowledging 00:10:29.000 |
- Well, I'm not sure we can define intelligence that way 00:10:31.600 |
because "smarter than" is with respect to what, 00:10:36.600 |
computers are already smarter than us in some areas. 00:10:54.400 |
They know about traffic conditions and all that stuff. 00:11:02.240 |
sometimes computers are much better than we are 00:11:12.200 |
which things about our intelligence would we feel 00:11:18.280 |
very sad or upset that machines had been able to recreate? 00:11:24.440 |
So in the book, I talk about my former PhD advisor, 00:11:36.760 |
that if a machine could create beautiful music, 00:11:44.080 |
because that is something he feels is really at the core 00:11:57.920 |
machines can recognize spoken language really well. 00:12:03.840 |
He personally doesn't like using speech recognition, 00:12:11.600 |
'cause it's like, okay, that's not at the core of humanity. 00:12:17.920 |
what really they feel would usurp their rights 00:12:25.240 |
And I think maybe it's a generational thing also. 00:12:27.440 |
Maybe our children or our children's children 00:12:30.720 |
will be adapted, they'll adapt to these new devices 00:12:38.680 |
yes, this thing is smarter than me in all these areas, 00:12:44.880 |
- Looking at the broad history of our species, 00:12:52.720 |
of creating artificial life and artificial intelligence 00:12:57.360 |
So not just this century or the 20th century, 00:13:28.960 |
I think we want to understand ourselves better. 00:13:33.960 |
And we also want machines to do things for us. 00:13:43.240 |
But I don't know, there's something more to it 00:13:46.240 |
because it's so deep in the kind of mythology 00:13:53.360 |
And I don't think other species have this drive. 00:13:57.600 |
- If you were to sort of psychoanalyze yourself 00:14:02.260 |
what excites you about creating intelligence? 00:14:09.800 |
- Yeah, I think that's what drives me particularly. 00:14:18.820 |
But I'm also interested in the sort of the phenomenon 00:14:43.780 |
And if you think of things like insect colonies 00:14:54.200 |
or even societal processes have as an emergent property, 00:14:59.200 |
some aspects of what we would call intelligence. 00:15:02.460 |
You know, they have memory, they do process information, 00:15:05.120 |
they have goals, they accomplish their goals, et cetera. 00:15:08.500 |
And to me, the question of what is this thing 00:15:12.700 |
we're talking about here was really fascinating to me. 00:15:17.700 |
And exploring it using computers seemed to be 00:15:26.140 |
do you think of our universe as a kind of hierarchy 00:15:31.020 |
is just the property of any, you can look at any level 00:15:35.560 |
and every level has some aspect of intelligence. 00:15:49.800 |
But I guess what I wanna, I don't have a good enough 00:15:56.780 |
- So let me do sort of a multiple choice, I guess. 00:16:12.740 |
molecules and the behavior at the quantum level 00:16:16.260 |
of electrons and so on, are those kinds of systems, 00:16:22.940 |
Like where's the line that feels compelling to you? 00:16:27.720 |
- I don't know, I mean, I think intelligence is a continuum. 00:16:30.560 |
And I think that the ability to, in some sense, 00:16:37.480 |
have some kind of self-awareness is part of it. 00:16:56.400 |
the planets orbiting the sun is an intelligent system. 00:17:00.760 |
I mean, I would find that maybe not the right term 00:17:06.320 |
And this is, there's all this debate in the field 00:17:09.160 |
of like, what's the right way to define intelligence? 00:17:23.560 |
And I think that it's a fantastic time to be in the field 00:17:33.880 |
- So are we the most special kind of intelligence 00:17:47.200 |
Is human intelligence the thing in our brain? 00:17:53.080 |
Is that the most interesting kind of intelligence 00:17:57.100 |
- Well, it's interesting to us 'cause it is us. 00:18:06.720 |
- But to understanding the fundamentals of intelligence, 00:18:18.640 |
yes, it's hard to define, but it's usually talking 00:18:21.560 |
about something that's very akin to human intelligence. 00:18:32.120 |
It's the only system, at least that I know of, 00:18:41.040 |
and us, in terms of creating artificial intelligence, 00:18:50.880 |
So why do you think we're so bad at predicting the future? 00:19:01.960 |
or the next few decades, every time we make a prediction, 00:19:06.920 |
Or as the field matures, we'll be better and better at it? 00:19:10.880 |
- I believe as the field matures, we will be better. 00:19:13.720 |
And I think the reason that we've had so much trouble 00:19:20.320 |
So there's the famous story about Marvin Minsky 00:19:25.320 |
assigning computer vision as a summer project 00:19:42.480 |
that describes everything that should be done 00:19:46.840 |
And it's hilarious because it, I mean, you can explain it, 00:19:49.920 |
but from my recollection, it describes basically 00:19:52.600 |
all the fundamental problems of computer vision, 00:20:08.400 |
But I think that no one really understands or understood, 00:20:24.640 |
To us, vision, being able to look out at the world 00:20:27.640 |
and describe what we see, that's just immediate. 00:20:33.380 |
So it didn't seem like it would be that hard, 00:20:39.320 |
sort of invisible to us that I think we overestimate 00:20:44.480 |
how easy it will be to get computers to do it. 00:20:49.480 |
- And sort of for me to ask an unfair question, 00:20:56.520 |
many different branches of AI through this book, 00:20:59.920 |
a widespread look at where AI has been, where it is today. 00:21:08.800 |
how many years from now would we as a society 00:21:25.080 |
- A prediction that will most likely be wrong. 00:21:35.320 |
- And I quoted somebody in my book who said that 00:21:38.480 |
human level intelligence is 100 Nobel prizes away. 00:21:44.640 |
Which I like 'cause it's a nice way to sort of, 00:21:49.720 |
And it's like that many fantastic discoveries 00:22:05.260 |
your sense is really the journey to intelligence 00:22:19.400 |
Understanding them, being able to create them 00:22:21.600 |
in the artificial systems as opposed to sort of 00:22:25.400 |
taking the machine learning approaches of today 00:22:28.860 |
and really scaling them and scaling them exponentially 00:22:40.580 |
I think that in the sort of going along in the narrow AI 00:22:47.160 |
that these current approaches will get better. 00:23:01.760 |
And there's some fundamental weaknesses that they have 00:23:09.960 |
that just comes from this approach of supervised learning, 00:23:14.960 |
requiring sort of feed forward networks and so on. 00:23:36.480 |
Sort of I've, everything you read about in the book 00:23:39.480 |
and sort of we're talking about now, I agree with you, 00:23:47.400 |
first of all, I'm deeply surprised by the success 00:23:50.080 |
of machine learning and deep learning in general. 00:23:53.840 |
when I was, it's really been my main focus of work. 00:24:07.080 |
So I think there'll be a lot of surprise of how far it gets. 00:24:14.360 |
Like my sense is everything I've seen so far, 00:24:17.120 |
and we'll talk about autonomous driving and so on. 00:24:21.760 |
but I also have a sense that we will discover 00:24:24.720 |
just like you said, is that even though we'll get really far 00:24:29.240 |
in order to create something like our own intelligence 00:24:34.960 |
- I think these methods are a lot more powerful 00:24:41.160 |
but I think there's a lot of researchers in the community, 00:24:48.840 |
they're skeptical about how far deep learning can get. 00:24:50.960 |
And I'm more and more thinking that it can actually get 00:24:58.480 |
One thing that surprised me when I was writing the book 00:25:00.840 |
is how far apart different people in the field are 00:25:03.800 |
on their opinion of how far the field has come 00:25:08.440 |
and what is accomplished and what's gonna happen next. 00:25:13.800 |
who are the different people, groups, mindsets, 00:25:17.560 |
thoughts in the community about where AI is today? 00:25:24.120 |
So there's kind of the singularity transhumanism group. 00:25:29.120 |
I don't know exactly how to characterize that approach. 00:25:37.920 |
We're on the sort of almost at the hugely accelerating 00:25:49.720 |
we're going to see super intelligent AI and all that, 00:25:54.120 |
and we'll be able to upload our brains and that. 00:25:57.400 |
So there's that kind of extreme view that most, 00:26:00.520 |
I think most people who work in AI don't have. 00:26:06.080 |
But there are people who maybe aren't singularity people, 00:26:20.040 |
and is going to kind of go all the way basically 00:26:30.880 |
And a lot of them, like a lot of the people I've met 00:26:39.800 |
kind of have this view that we're really not that far. 00:26:47.440 |
sort of if I can take as an example, like Yann LeCun, 00:26:55.320 |
- He believes that there's a bunch of breakthroughs, 00:26:57.840 |
like fundamental, like Nobel prizes that are needed still. 00:27:06.600 |
And then there's some people who think we need to kind of 00:27:21.320 |
Yann LeCun is rightly saying supervised learning 00:27:28.000 |
We have to figure out how to do unsupervised learning, 00:27:50.440 |
you know, there's the Gary Marcus kind of hybrid view 00:27:58.160 |
but we need to bring back kind of these symbolic approaches 00:28:03.440 |
Of course, no one knows how to do that very well. 00:28:18.640 |
Then there's people pushing different things. 00:28:24.840 |
who say, you know, deep learning as it's formulated today 00:28:41.340 |
There's a lot of push from the more cognitive science crowd 00:28:46.340 |
saying we have to look at developmental learning. 00:29:05.320 |
we also have to teach machines intuitive metaphysics, 00:29:16.140 |
You know, these things that maybe we're born with. 00:29:38.160 |
So there's just a lot of pieces of the puzzle 00:29:43.080 |
and with different opinions of like how important they are 00:29:47.640 |
and how close we are to being able to put them all together 00:29:59.580 |
Is it the cognitive folks, the Gary Marcus camp, 00:30:03.320 |
the Yann camp, unsupervised and self-supervised? 00:30:11.560 |
You have sort of the Andrej Karpathy at Tesla 00:30:14.720 |
building actual, you know, it's not philosophy, 00:30:17.960 |
it's real systems that operate in the real world. 00:30:21.040 |
What do you take away from all this beautiful variety? 00:30:51.120 |
there's no sort of innate stuff that has to get built in. 00:30:55.800 |
This is, you know, it's because it's a hard problem. 00:31:04.300 |
I'm very sympathetic to the cognitive science side 00:31:07.260 |
'cause that's kind of where I came in to the field. 00:31:10.500 |
I've become more and more sort of an embodiment adherent 00:31:15.500 |
saying that, you know, without having a body, 00:31:22.780 |
- That's definitely something I'd love to talk about 00:31:26.900 |
in a little bit, to step into the cognitive world. 00:31:32.780 |
'cause you've done so many interesting things. 00:31:43.380 |
have created and developed CopyCat more than 30 years ago. 00:31:54.340 |
- It's a program that makes analogies in an idealized domain, 00:32:42.040 |
He says, "Without concepts, there can be no thought, 00:32:46.780 |
"and without analogies, there can be no concepts." 00:33:01.020 |
these kinds of things that we have on IQ tests or whatever, 00:33:06.540 |
it's much more pervasive in everything we do, 00:33:10.940 |
in our language, our thinking, our perception. 00:33:14.900 |
So he had a view that was a very active perception idea. 00:33:25.100 |
kind of a passive network in which you have input 00:33:30.100 |
that's being processed through these feed-forward layers, 00:33:47.020 |
to what we look at next, influences what we look at next 00:34:04.100 |
and you have these agents that are picking things to look at 00:34:08.040 |
and deciding whether they were interesting or not, 00:34:22.180 |
So it was actually inspired by the old blackboard systems 00:34:25.620 |
where you would have agents that post information 00:34:32.940 |
- Is that, are we talking about like in physical space? 00:34:38.300 |
- So agents posting concepts on a blackboard kind of thing? 00:34:50.720 |
that you could think of them as little detectors 00:35:16.260 |
how do the things that are highlighted relate to each other? 00:35:21.520 |
that can build connections between different things. 00:35:38.160 |
And the program had some prior knowledge about the alphabet. 00:35:44.280 |
It had a concept of letter, of successor of letter. 00:35:58.320 |
discover that ABC is a group of letters in succession. 00:36:10.120 |
- So the idea that there could be a sequence of letters, 00:36:39.460 |
How do you flexibly apply them to new situations? 00:36:50.900 |
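A minimal Python sketch of the letter-string domain described here, assuming the classic "abc changes to abd; what does ijk change to?" puzzle. It is not the actual Copycat architecture (which uses a workspace, a slipnet of concepts, stochastic codelet agents, and a temperature), and it applies the discovered rule rigidly by position, which is exactly the inflexibility Copycat was built to avoid.

```python
# Toy letter-string analogy: infer "replace a letter with its successor"
# from abc -> abd, then apply it to a new string. Illustration only.
import string

ALPHABET = string.ascii_lowercase

def successor(letter: str) -> str:
    """Return the next letter in the alphabet (no wrap-around, as in Copycat's domain)."""
    i = ALPHABET.index(letter)
    if i == len(ALPHABET) - 1:
        raise ValueError("'z' has no successor in this idealized domain")
    return ALPHABET[i + 1]

def describe_change(before: str, after: str):
    """Find the one position that changed; return it plus the 'concept' involved."""
    diffs = [i for i, (a, b) in enumerate(zip(before, after)) if a != b]
    assert len(diffs) == 1, "this toy only handles single-letter changes"
    i = diffs[0]
    assert after[i] == successor(before[i]), "this toy only knows the 'successor' concept"
    return i, "successor"

def apply_rule(rule, target: str) -> str:
    """Rigidly apply the same change to a new string."""
    i, _concept = rule
    letters = list(target)
    letters[i] = successor(letters[i])
    return "".join(letters)

rule = describe_change("abc", "abd")   # position 2 changed to its successor
print(apply_rule(rule, "ijk"))         # -> "ijl"
```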
that you say, "Without concepts, there can be no thought, 00:36:53.700 |
"and without analogies, there can be no concepts." 00:36:58.460 |
you said that it should be one of the mantras of AI. 00:37:16.980 |
So let's, what is a concept and what is an analogy? 00:37:21.880 |
- A concept is in some sense a fundamental unit of thought. 00:37:33.260 |
And a concept is embedded in a whole space of concepts 00:37:45.180 |
so that there's certain concepts that are closer to it 00:37:50.260 |
- Are these concepts, are they really like fundamental, 00:37:53.140 |
like we mentioned innate, almost like axiomatic, 00:37:55.620 |
like very basic, and then there's other stuff 00:38:07.020 |
- Right, I guess that's the question I'm asking. 00:38:20.060 |
- And then what's the role of analogies in that? 00:38:23.020 |
- So analogy is when you recognize that one situation 00:38:28.020 |
is essentially the same as another situation. 00:38:34.700 |
And "essentially" is kind of the key word there 00:38:40.020 |
So if I say, last week I did a podcast interview 00:38:45.020 |
actually like three days ago in Washington, DC. 00:38:52.980 |
And that situation was very similar to this situation, 00:38:58.740 |
It was a different person sitting across from me. 00:39:26.980 |
like being interviewed for a news article in a newspaper. 00:39:31.140 |
And I can say, well, you kind of play the same role 00:39:53.020 |
You know, there's just all kinds of similarities. 00:39:54.700 |
- And this somehow probably connects to conversations 00:40:03.540 |
that just stretches out in all aspects of life 00:40:11.420 |
- Sure, and if I go and tell a friend of mine 00:40:19.180 |
my friend might say, oh, the same thing happened to me. 00:40:29.260 |
My friend could say, the same thing happened to me, 00:40:31.620 |
but it was like, it wasn't a podcast interview. 00:40:34.060 |
It wasn't, it was a completely different situation. 00:40:39.060 |
And yet my friend is seeing essentially the same thing. 00:40:51.100 |
We just say the same thing. - Right, you imply it, yes. 00:40:52.900 |
- Yeah, and the view that kind of went into, say, 00:40:58.980 |
that act of saying the same thing happened to me 00:41:05.540 |
And in some sense, that's what underlies all of our concepts. 00:41:10.540 |
- Why do you think analogy making that you're describing 00:41:17.020 |
Like, it seems like it's the main element action 00:41:30.940 |
and recognizing concepts in different situations 00:41:42.660 |
That that's, every time I'm recognizing that, say, 00:42:02.380 |
like one of the things I talked about in the book 00:42:11.780 |
because all of that, the details are very different. 00:42:26.820 |
So perception is taking raw sensory input 00:42:29.660 |
and somehow integrating it into our understanding 00:42:34.740 |
And all of that has just this giant mess of analogies 00:42:41.260 |
- If you could just linger on it a little bit, 00:43:00.940 |
And it comes down to internal models, I think. 00:43:13.340 |
that I can, in my head, I can do a simulation 00:43:35.140 |
or situation in the world, or you read about it 00:43:37.220 |
or whatever, you do some kind of mental simulation 00:43:40.740 |
that allows you to predict what's gonna happen, 00:43:43.780 |
to develop expectations of what's gonna happen. 00:43:48.020 |
So that's the kind of structure I think we need, 00:43:55.620 |
and in our brains, somehow these mental models 00:44:00.420 |
- Again, so a lot of stuff we're talking about 00:44:17.380 |
graph data structure of concepts that's in our head? 00:44:22.380 |
Like if we're trying to build that ourselves, 00:44:39.740 |
that underlies what we think of as common sense, 00:44:45.460 |
And I don't even know what units to measure it in. 00:44:55.660 |
We have, what, 100 billion neurons or something, 00:45:07.860 |
and there's all this chemical processing going on. 00:45:19.900 |
it's encoded in electric firing and firing rates. 00:45:25.780 |
but it just seems like there's a huge amount of capacity. 00:45:49.300 |
and there's a lot of dreams from expert systems 00:45:55.420 |
Do you see a hope for these kinds of approaches 00:46:10.700 |
People have been working on this for a long time. 00:46:18.100 |
People have been trying to get these common sense networks. 00:46:20.980 |
Here at MIT, there's this ConceptNet project. 00:46:27.460 |
most of the knowledge that we have is invisible to us. 00:46:49.220 |
that described intuitive physics, intuitive psychology, 00:46:53.500 |
would it be bigger or smaller than Wikipedia? 00:47:09.060 |
how do you represent that knowledge is the question, right? 00:47:21.180 |
But that's probably not the best representation 00:47:27.060 |
of that knowledge for doing the kinds of reasoning 00:47:36.140 |
- So, I don't know, it's impossible to say now. 00:48:03.420 |
And it just never, I think, could do any of the things 00:48:12.940 |
- Of course, that's what they always say, you know, 00:48:18.860 |
well, the Cyc project finally found a breakthrough 00:48:29.540 |
- Who knows what the next breakthroughs will be. 00:48:36.460 |
I think Lenat was one of the earliest people 00:48:50.420 |
And he basically gave up his whole academic career 00:48:59.380 |
but I think that the approach itself will not, 00:49:08.980 |
- What do you think is wrong with the approach? 00:49:31.860 |
so I don't have a lot of extra funds to invest, 00:49:34.740 |
but also, no one knows what's gonna work in AI, right? 00:50:03.540 |
- Is the breakthrough that's most promising, 00:50:08.740 |
Do you think we can get far with the current computers? 00:50:16.380 |
I don't know if Turing computation is gonna be sufficient. 00:50:22.020 |
I don't see any reason why we need anything else. 00:50:25.980 |
So in that sense, we have invented the hardware we need, 00:50:28.980 |
but we just need to make it faster and bigger. 00:50:31.860 |
And we need to figure out the right algorithms 00:50:56.220 |
and when you asked about, is our current hardware, 00:51:02.220 |
Well, Turing computation says that our current hardware 00:51:13.300 |
So all we have to do is make it faster and bigger. 00:51:16.500 |
But there have been people like Roger Penrose, 00:51:26.420 |
"because intelligence requires continuous valued numbers." 00:51:30.540 |
I mean, that was sort of my reading of his argument 00:51:34.860 |
and quantum mechanics and what else, whatever. 00:51:50.460 |
I don't think we're gonna be able to scale up 00:51:53.900 |
our current approaches to programming these computers. 00:51:58.420 |
- What is your hope for approaches like CopyCat 00:52:02.700 |
I've talked to the creator of Soar, for example. 00:52:12.060 |
in helping develop systems of greater and greater 00:52:22.180 |
is trying to take some of those ideas and extending it. 00:52:26.100 |
So I think there's some really promising approaches 00:52:57.220 |
You have a mental model that generates a prediction 00:53:00.660 |
and you compare it with, and then the difference 00:53:05.180 |
that generative model is telling you where to look 00:53:08.380 |
and what to look at and what to pay attention to. 00:53:14.060 |
It's not that just you compare it with your perception. 00:53:21.900 |
It's kind of a mixture of the bottom-up information 00:53:28.300 |
coming from the world and your top-down model 00:53:31.860 |
being imposed on the world is what becomes your perception. 00:53:44.180 |
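A schematic sketch of the predict, compare, and update loop being described, run on a toy one-dimensional signal. It is not any specific published model; the signal, the update rule, and the attention heuristic are all invented for illustration. It only shows top-down predictions being mixed with bottom-up input, with the prediction error deciding where to "look" next.

```python
import numpy as np

rng = np.random.default_rng(0)
world = np.sin(np.linspace(0, 2 * np.pi, 32))     # the true scene, unknown to the model
belief = np.zeros(32)                             # top-down internal model
attention_rate = 0.5                              # how strongly errors pull the belief

for step in range(200):
    observation = world + 0.1 * rng.normal(size=32)  # noisy bottom-up input
    prediction = belief                              # top-down expectation
    error = observation - prediction                 # where the expectation fails
    focus = int(np.argmax(np.abs(error)))            # "look" where the surprise is largest
    belief[focus] += attention_rate * error[focus]   # update only the attended part

print("remaining mean error:", float(np.mean(np.abs(world - belief))))
```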
- What's the step, what's the analogy-making step there? 00:53:57.100 |
You can talk about a semantic network or something like that 00:54:00.420 |
with these different kinds of concept models in your brain 00:54:12.380 |
Okay, let's say I see someone out on the street 00:54:24.220 |
between my model of a dog and model of a cat. 00:54:48.260 |
So another example with the walking the dog thing 00:54:51.260 |
is sometimes people, I see people riding their bikes 00:55:12.780 |
okay, riding a bike is sort of similar to walking 00:55:16.580 |
or it's connected, it's a means of transportation. 00:55:33.180 |
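A toy sketch of the kind of concept network being described, where "walking the dog" and "riding a bike with the dog on a leash" map onto each other because the differing concepts are conceptually close. The closeness numbers and the slippage threshold are made up for illustration; Copycat's actual slipnet is far richer than this.

```python
# Concepts are nodes; conceptual closeness is an edge weight. Two situations
# map onto each other if every differing concept can "slip" into its counterpart.
closeness = {
    frozenset({"walking", "riding a bike"}): 0.8,   # both are means of transportation
    frozenset({"walking", "swimming"}): 0.3,
    frozenset({"dog", "cat"}): 0.7,
    frozenset({"dog", "lawnmower"}): 0.1,
}

def slip(a: str, b: str, threshold: float = 0.5) -> bool:
    """Can concept a 'slip' into concept b when mapping two situations?"""
    return a == b or closeness.get(frozenset({a, b}), 0.0) >= threshold

situation_1 = {"actor": "person", "activity": "walking", "companion": "dog"}
situation_2 = {"actor": "person", "activity": "riding a bike", "companion": "dog"}

analogous = all(slip(situation_1[role], situation_2[role]) for role in situation_1)
print("essentially the same situation:", analogous)   # -> True
```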
- But sort of these analogies are very human interpretable. 00:55:43.460 |
they kind of help you to take raw sensory information 00:55:46.700 |
and to sort of automatically build up hierarchies 00:55:53.020 |
They're just not human interpretable concepts. 00:55:58.660 |
Do you hope, it's sort of the hybrid system question. 00:56:05.780 |
How do you think the two can start to meet each other? 00:56:08.220 |
What's the value of learning in these systems of forming, 00:56:16.020 |
- The goal of, you know, the original goal of deep learning 00:56:24.260 |
you would get the system to learn to extract features 00:56:27.300 |
at these different levels of complexity. 00:56:30.340 |
It may be edge detection, and that would lead into 00:56:41.540 |
And this was based on the ideas of the neuroscientists, 00:57:02.020 |
Of course, people have found that the whole story 00:57:10.660 |
So I see that as absolutely a good brain-inspired approach 00:57:25.660 |
But one thing that it's lacking, for example, 00:57:29.460 |
is all of that feedback, which is extremely important. 00:57:33.300 |
- The interactive element that you mentioned. 00:57:35.500 |
- The expectation, right, the conceptual level. 00:57:42.180 |
and the perception, and just going back and forth. 00:57:47.980 |
And, you know, one thing about deep neural networks 00:57:52.220 |
is that in a given situation, like, you know, 00:57:55.380 |
they're trained, right, they get these weights 00:57:57.820 |
and everything, but then now I give them a new image, 00:58:02.420 |
They treat every part of the image in the same way. 00:58:07.420 |
You know, they apply the same filters at each layer 00:58:19.940 |
I shouldn't care about this part of the image. 00:58:23.060 |
Or this part of the image is the most important part. 00:58:27.020 |
And that's kind of what we humans are able to do 00:58:30.140 |
because we have these conceptual expectations. 00:58:33.420 |
- There's, by the way, a little bit of work in that. 00:58:35.580 |
There's certainly a lot more in what's under the-- 00:58:42.060 |
That's exceptionally powerful, and it's a very, 00:58:48.500 |
just as you say, it's a really powerful idea. 00:59:01.380 |
as a perception of a new example is being processed, 00:59:10.900 |
- Right, so, I mean, there's a kind of notion 00:59:32.380 |
but there's not a powerful way to represent the world 00:59:42.340 |
I mean, it's so difficult because neural networks 00:59:55.180 |
It's hard to criticize them at the fundamental level, 01:00:07.180 |
mental models sort of almost put a psychology hat on, 01:00:11.660 |
say, look, these networks are clearly not able to achieve 01:00:15.860 |
what we humans do with forming mental models, 01:00:20.060 |
But that doesn't mean that they fundamentally 01:00:24.260 |
It's very difficult to say that, I mean, at least to me. 01:00:26.580 |
Do you have a notion that the learning approaches really, 01:00:29.860 |
I mean, they're going to, not only are they limited today, 01:00:33.980 |
but they will forever be limited in being able 01:00:41.460 |
- I think the idea of the dynamic perception is key here. 01:00:53.800 |
and getting feedback, and that's something that, 01:01:04.760 |
But the problem is that the actual, the recurrence is, 01:01:10.880 |
basically the feedback is, at the next time step, 01:01:23.500 |
which is, and it turns out that that doesn't work very well. 01:01:28.500 |
- But see, the thing I'm saying is, mathematically speaking, 01:01:42.540 |
- So, it's like, it's the same Turing machine question, 01:01:56.980 |
a universal Turing machine can be intelligent, 01:02:10.180 |
is how big of a role do you think deep learning 01:02:14.380 |
needs, will play, or needs to play in this, in perception? 01:02:21.140 |
- I think that deep learning as it currently exists, 01:02:30.340 |
but I think that there's a lot more going on in perception. 01:02:36.820 |
But who knows, the definition of deep learning, 01:02:42.020 |
It's kind of an umbrella for a lot of different things. 01:02:43.700 |
- So, what I mean is purely sort of neural networks. 01:02:58.940 |
is kind of like us birds criticizing airplanes 01:03:03.020 |
for not flying well, or that they're not really flying. 01:03:15.140 |
Do you think that, yeah, the brute force learning approach 01:03:30.060 |
that there's some things that we've been evolved 01:03:39.660 |
and that learning just can't happen without them. 01:03:44.580 |
So, one example, here's an example I had in the book 01:03:59.340 |
And it learned to play these Atari video games 01:04:02.780 |
just by getting input from the pixels of the screen, 01:04:18.220 |
That was one of their results, and it was great. 01:04:22.980 |
through the side of the bricks in the Breakout game, 01:04:30.740 |
Okay, so there was a group who did an experiment 01:04:35.740 |
where they took the paddle that you move with the joystick 01:04:41.540 |
and moved it up two pixels or something like that. 01:04:45.660 |
And then they looked at a deep Q-learning system 01:04:55.860 |
Of course, a human could, but, and it couldn't. 01:04:59.660 |
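A toy illustration of the perturbation experiment just described. The frame size and pixel values are stand-ins (84x84 is the usual preprocessed DQN input, but nothing here depends on it), and no trained agent is included.

```python
import numpy as np

frame = np.zeros((84, 84), dtype=np.uint8)   # a stand-in preprocessed Breakout frame
frame[80, 30:38] = 255                       # the "paddle" near the bottom row

shifted = frame.copy()
shifted[80, 30:38] = 0
shifted[78, 30:38] = 255                     # the same paddle, moved up two pixels

# In the experiment, a trained agent's score is re-measured on frames like
# `shifted`; the result described above is that its performance collapses,
# while a human player adapts without even noticing, because the human has
# the concept "paddle" and the network does not.
```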
Maybe that's not surprising, but I guess the point is 01:05:11.860 |
we, looking at it, kind of anthropomorphized it and said, 01:05:16.660 |
"Oh, here's what it's doing in the way we describe it." 01:05:21.500 |
And so because it didn't learn those concepts, 01:05:31.640 |
we also anthropomorphize flaws to inject into the system 01:05:36.460 |
that will then flip how impressed we are by it. 01:05:40.020 |
What I mean by that is, to me, the Atari games 01:05:43.740 |
were to me deeply impressive that that was possible at all. 01:05:50.780 |
and people should look at that, just like the game of Go, 01:06:28.340 |
been able to, through the current neural networks, 01:06:30.180 |
learn very basic concepts that are not enough 01:06:54.700 |
- Because the reason I brought up this example 01:07:08.260 |
to take away from the impressive work that they did, 01:07:16.100 |
do they learn the human, the things that we humans 01:07:36.300 |
maybe it could deal with, maybe it would learn that concept. 01:07:54.820 |
learn to divide up the world into relevant concepts? 01:08:04.020 |
that without some innate notion, that it can't do it. 01:08:27.140 |
100% deep learning will not take us all the way, 01:08:30.060 |
but there's still, I was so personally sort of surprised 01:08:35.060 |
by the Atari games, by Go, by the power of self-play, 01:08:46.860 |
about what's possible in this way of approaching it. 01:08:54.260 |
And that goes way back to Arthur Samuel, right, 01:09:11.340 |
It's the area that I work, at least these days, 01:09:20.780 |
as sort of an example of things we, as humans, 01:09:30.900 |
or the different problems that we think are easy 01:09:32.900 |
when we first try them, and then realize how hard it is. 01:09:41.460 |
autonomous driving being a difficult problem, 01:09:43.540 |
more difficult than we realize, than humans give it credit for. 01:09:48.140 |
What are the most difficult parts, in your view? 01:09:50.540 |
- I think it's difficult because the world 01:09:55.780 |
is so open-ended as to what kinds of things can happen. 01:10:15.500 |
can do really well on most normal situations, 01:10:19.540 |
as long as the weather is reasonably good and everything. 01:10:23.340 |
But if some, we have this notion of edge case, 01:10:34.740 |
which says that there's so many possible things 01:10:37.900 |
that can happen that was not in the training data 01:10:42.100 |
of the machine that it won't be able to handle it 01:10:50.900 |
- Right, it's the old, the paddle moved problem. 01:10:54.700 |
- Yeah, it's the paddle moved problem, right. 01:10:59.180 |
and you probably are more of an expert than I am on this, 01:11:02.140 |
is that current self-driving car vision systems 01:11:10.460 |
meaning that they don't know which obstacles, 01:11:13.900 |
which quote-unquote obstacles they should stop for 01:11:18.580 |
And so a lot of times I read that they tend to slam 01:11:23.900 |
and the most common accidents with self-driving cars 01:11:35.740 |
- Yeah, so there's a lot of interesting questions there, 01:11:39.100 |
whether, 'cause you mentioned kind of two things. 01:11:45.100 |
of understanding, of interpreting the objects 01:11:54.380 |
the action that you take, how you respond to it. 01:11:57.740 |
So a lot of the cars braking is a kind of notion of, 01:12:02.460 |
to clarify it, there's a lot of different kind of things 01:12:11.780 |
are the ones like Waymo and Cruise and those companies, 01:12:15.780 |
they tend to be very conservative and cautious. 01:12:27.900 |
that results in being exceptionally responsive 01:12:31.100 |
to anything that could possibly be an obstacle, right? 01:12:37.260 |
it's unpredictable, it behaves unpredictably. 01:12:41.660 |
- Yeah, that's not a very human thing to do, caution. 01:12:44.100 |
That's not the thing we're good at, especially in driving. 01:13:00.540 |
how much, sort of speaking to public information, 01:13:05.540 |
because a lot of companies say they're doing deep learning 01:13:09.220 |
and machine learning just to attract good candidates. 01:13:14.780 |
it's still not a huge part of the perception. 01:13:20.460 |
that are much more reliable for obstacle detection. 01:13:23.900 |
And then there's Tesla approach, which is vision only. 01:13:27.940 |
And there's, I think a few companies doing that, 01:13:30.860 |
but Tesla most sort of famously pushing that forward. 01:13:33.420 |
- And that's because the LIDAR is too expensive, right? 01:13:41.140 |
if you were to for free give to every Tesla vehicle, 01:13:50.820 |
That if you want to solve the problem with machine learning, 01:13:55.620 |
LIDAR should not be the primary sensor is the belief. 01:14:04.220 |
So if you want to learn, you want that information. 01:14:08.480 |
But if you want to not to hit obstacles, you want LIDAR. 01:14:13.700 |
Right, it's sort of, it's this weird trade-off 01:14:16.340 |
because yeah, so what Tesla vehicles have a lot of, 01:14:35.060 |
except when those things are standing, right? 01:14:50.420 |
So there's a lot of problems with perception. 01:14:52.660 |
They are doing actually some incredible stuff in the, 01:15:02.500 |
where it's constantly taking edge cases and pulling back in. 01:15:13.060 |
that people are studying now is called multitask learning, 01:15:15.860 |
which is sort of breaking apart this problem, 01:15:18.420 |
whatever the problem is, in this case, driving, 01:15:26.260 |
So this giant pipeline, it's kind of interesting. 01:15:42.060 |
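A generic PyTorch sketch of the multitask-learning idea mentioned here: one shared backbone feeding several task-specific heads, each trained with its own loss. The task names are invented for illustration, and this is not any particular company's architecture.

```python
import torch
import torch.nn as nn

class MultiTaskDrivingNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = nn.Sequential(                     # shared visual features
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.heads = nn.ModuleDict({                       # one small head per subproblem
            "lane_offset": nn.Linear(32, 1),
            "is_pedestrian_ahead": nn.Linear(32, 1),
            "traffic_light_state": nn.Linear(32, 3),
        })

    def forward(self, image):
        features = self.backbone(image)
        return {task: head(features) for task, head in self.heads.items()}

# Each head gets its own loss; summing them makes the shared backbone learn
# features useful for all of the subtasks at once.
net = MultiTaskDrivingNet()
outputs = net(torch.randn(2, 3, 96, 96))
print({task: tuple(o.shape) for task, o in outputs.items()})
```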
of that particular company thinks it will be, 01:15:48.140 |
that through good engineering and data collection 01:16:00.140 |
There's a much longer tail and all these edge cases 01:16:06.500 |
that applies to natural language and all spaces. 01:16:19.220 |
in these practical problems of the human experience? 01:16:33.620 |
but can it be solved in a reasonable timeline 01:16:38.620 |
or do fundamentally other methods need to be invented? 01:16:42.020 |
- So I don't, I think that ultimately driving, 01:16:49.980 |
being able to drive and deal with any situation 01:16:55.220 |
that comes up does require kind of full human intelligence 01:17:00.220 |
and even humans aren't intelligent enough to do it 01:17:06.220 |
are because the human wasn't paying attention 01:17:12.420 |
- And not because they weren't intelligent enough. 01:17:14.180 |
- And not because they weren't intelligent enough, right. 01:17:16.900 |
Whereas the accidents with autonomous vehicles 01:17:29.540 |
and I think that it's a very fair thing to say 01:17:32.660 |
that autonomous vehicles will be ultimately safer 01:17:49.100 |
'Cause we're really good at the common sense thing. 01:17:50.900 |
- Yeah, we're great at the common sense thing. 01:17:55.100 |
- Especially when we're, you know, driving's kind of boring 01:17:57.180 |
and we have these phones to play with and everything. 01:18:11.540 |
that the definition of self-driving is gonna change 01:18:45.620 |
as long as pedestrians don't mess with them too much. 01:18:52.580 |
- But I don't think we will have fully autonomous 01:19:02.260 |
The person thinks of it for a very long time. 01:19:14.860 |
you have to be able to engineer in common sense. 01:19:18.660 |
- I think it's an important thing to hear and think about. 01:19:39.900 |
Like you mentioned, pedestrians and cyclists, 01:19:41.540 |
actually that's whatever that nonverbal communication 01:19:47.220 |
there's that dynamic that is also part of this common sense. 01:20:07.860 |
'cause I've watched countless hours of pedestrian video 01:20:13.140 |
we humans are also really bad at articulating 01:20:30.460 |
- I'm not sure, but I'm coming around to that more and more. 01:20:42.780 |
- Well, he certainly has a large personality, yes. 01:20:46.860 |
He thinks that the system needs to be grounded, 01:20:50.300 |
meaning it needs to sort of be able to interact 01:20:54.340 |
but doesn't think it necessarily needs to have a body. 01:21:02.580 |
do you mean you have to be able to play with the world? 01:21:37.620 |
that kind of gets in the way of logical thinking. 01:21:48.700 |
or human level intelligence, whatever that means, 01:22:04.860 |
that it's hard to separate intelligence from that. 01:22:27.820 |
but we don't care about all this other stuff. 01:22:30.900 |
And I think the other stuff is very fundamental. 01:22:35.260 |
- So there's the idea that things like emotion 01:22:40.220 |
- As opposed to being an integral part of it. 01:22:45.460 |
so romanticize the notions of emotion and suffering 01:22:57.860 |
There was this recent thing going around the internet. 01:23:00.340 |
This, some, I think he's a Russian or some Slavic 01:23:12.460 |
and one was the argument from Slavic pessimism. 01:23:32.020 |
that what we perceive as sort of the limits of human, 01:23:41.500 |
and all those kinds of things are integral to intelligence. 01:23:48.220 |
Like what, why is that important, do you think? 01:24:00.780 |
it's a big part of how it affects how we perceive the world. 01:24:04.860 |
It affects how we make decisions about the world. 01:24:07.780 |
It affects how we interact with other people. 01:24:10.020 |
It affects our understanding of other people. 01:24:49.380 |
So, it's not something that I can prove that's necessary, 01:25:01.740 |
You've written the op-ed in the New York Times 01:25:04.100 |
titled "We Shouldn't Be Scared by Superintelligent AI" 01:25:07.740 |
and it criticized a little bit Stuart Russell 01:25:11.900 |
Can you try to summarize that article's key ideas? 01:25:18.260 |
- So, it was spurred by an earlier New York Times op-ed 01:25:22.820 |
by Stuart Russell, which was summarizing his book 01:25:37.300 |
we need to have its values aligned with our values 01:25:40.900 |
and it has to learn about what we really want. 01:25:48.900 |
and we give it the problem of solving climate change 01:25:52.820 |
and it decides that the best way to lower the carbon 01:26:14.180 |
And it seems that something that's superintelligent 01:26:19.180 |
can't just be intelligent along this one dimension of, 01:26:27.180 |
the best optimal path to solving climate change 01:26:35.820 |
that you could get to one without having the other. 01:27:05.180 |
that's sufficiently, not even superintelligent, 01:27:07.700 |
but as it approaches greater and greater intelligence, 01:27:20.100 |
So, Bostrom had this example of the superintelligent AI 01:27:31.140 |
'cause its job is to make paperclips or something. 01:27:40.380 |
or as a thing that could possibly be realized? 01:27:44.580 |
So, I think that what my op-ed was trying to do 01:28:10.460 |
and build a machine that has one of these dimensions 01:28:15.500 |
but it doesn't have any of the other dimensions. 01:28:25.820 |
- So, can I read a few sentences from Yoshua Bengio, 01:28:35.140 |
So, he writes, "I have the same impression as Melanie 01:28:43.180 |
"with our ability to learn to solve many problems. 01:29:05.440 |
"well before we reach some hypothetical superintelligence. 01:29:12.660 |
"whose objective function may not be sufficiently aligned 01:29:19.260 |
"creating all kinds of harmful side effects." 01:29:37.740 |
before we reach anything like superintelligence. 01:29:40.780 |
So, your criticism is kind of really nice, saying, 01:29:53.740 |
"but if we look at systems that are much less intelligent, 01:29:58.160 |
"there might be these same kinds of problems that emerge." 01:30:02.160 |
- Sure, but I guess the example that he gives there 01:30:21.700 |
- But the idea is the algorithm, that's right. 01:30:27.160 |
the fundamental element of what does the bad thing 01:30:32.280 |
But the algorithm kind of controls the behavior 01:30:42.880 |
so for example, if it's an advertisement-driven company 01:30:45.320 |
that recommends certain things and encourages engagement, 01:31:07.820 |
- I guess the question here is sort of who has the agency? 01:31:21.940 |
some people have criticized some facial recognition systems 01:32:03.140 |
but my understanding of what Russell's argument was 01:32:08.140 |
is more that the machine itself has the agency now. 01:32:17.660 |
and it's the thing that has what we would call values. 01:32:27.940 |
But I would say that's sort of qualitatively different 01:32:38.720 |
if you look at Elon Musk or Stuart Russell or Bostrom, 01:32:42.920 |
people who are worried about existential risks of AI, 01:32:47.900 |
their argument goes that it eventually happens. 01:32:50.720 |
We don't know how far, but it eventually happens. 01:32:56.800 |
And what kind of concerns in general do you have about AI 01:33:00.120 |
that approach anything like existential threat to humanity? 01:33:16.040 |
- 'Cause you said like 100 years for, so your time-- 01:33:22.200 |
- Maybe even more than 500 years, I don't know. 01:33:34.480 |
that will fundamentally change the nature of our behavior, 01:33:41.680 |
And we have so many other pressing existential threats 01:33:50.720 |
poverty, possible pandemics, you can go on and on. 01:33:58.280 |
And I think worrying about existential threat from AI 01:34:04.800 |
is not the best priority for what we should be worried about. 01:34:14.160 |
That's kind of my view, 'cause we're so far away. 01:34:15.800 |
But I'm not necessarily criticizing Russell or Bostrom 01:34:26.840 |
And I think some people should be worried about it. 01:34:30.000 |
It's certainly fine, but I was more sort of getting 01:34:39.040 |
So I was more focusing on their view of super intelligence 01:34:56.520 |
- We shouldn't be scared by super intelligence. 01:35:01.320 |
we should redefine what you mean by super intelligence. 01:35:03.440 |
- I actually said something like super intelligence 01:35:14.080 |
But that's not like something New York Times would put in. 01:35:19.680 |
- And the follow-up argument that Yoshua makes also, 01:35:28.480 |
He kind of has a very friendly way of phrasing it 01:35:39.600 |
like we shouldn't be, like while your article stands, 01:35:47.360 |
Bostrom does amazing work, you do amazing work. 01:35:50.120 |
And even when you disagree about the definition 01:35:53.200 |
of super intelligence or the usefulness of even the term, 01:35:56.840 |
it's still useful to have people that like use that term, 01:36:14.520 |
What do you think is a good test of intelligence? 01:36:20.680 |
that you find the most compelling, like the original, 01:36:23.720 |
or the higher levels of the Turing test kind of, yeah? 01:36:28.720 |
- Yeah, I still think the original idea of the Turing test 01:36:48.560 |
But I think a real Turing test that really goes into depth, 01:36:52.800 |
like the one that I mentioned, I talk about in the book, 01:36:54.720 |
I talk about Ray Kurzweil and Mitchell Kapor 01:37:07.440 |
And they have a very specific, like how many hours, 01:37:14.920 |
And, you know, Kurzweil says yes, Kapor says no. 01:37:18.120 |
We only have like nine more years to go to see. 01:37:21.000 |
But I, you know, if something, a machine could pass that, 01:37:33.840 |
They will say that's just a language model, right? 01:37:45.160 |
I mean, you're right, because I think probably 01:37:50.400 |
deep common sense understanding of the world. 01:37:54.600 |
- And the conversation is enough to reveal that. 01:38:09.640 |
Let me ask the basic question, what is complexity? 01:38:13.360 |
- So complexity is another one of those terms, 01:38:19.280 |
But my book about complexity was about this wide area 01:38:28.240 |
of complex systems, studying different systems in nature, 01:38:33.240 |
in technology, in society, in which you have emergence, 01:38:38.240 |
kind of like I was talking about with intelligence. 01:38:41.880 |
You know, we have the brain, which has billions of neurons, 01:38:49.960 |
to be not very complex compared to the system as a whole, 01:38:53.720 |
but the system, the interactions of those neurons 01:39:13.800 |
general principles that underlie all these systems 01:39:17.080 |
that have these kinds of emergent properties. 01:39:23.400 |
underlying the complex system is usually simple, 01:39:29.080 |
- And the emergence happens when there's just a lot 01:39:43.480 |
- Well, reductionism is when you try and take a system 01:39:55.240 |
whether those be cells or atoms or subatomic particles, 01:40:09.480 |
of the whole system by looking at sort of the sum 01:40:17.920 |
or these kinds of interesting complex systems, 01:40:21.080 |
is it possible to understand them in a reductionist way? 01:40:29.240 |
- I don't think it's always possible to understand 01:40:35.840 |
So I don't think it's possible to look at single neurons 01:40:48.360 |
And the sort of the summing up is the issue here 01:40:53.440 |
that we're, you know, one example is that the human genome, 01:40:57.840 |
right, so there was a lot of work and excitement 01:41:04.000 |
because the idea would be that we'd be able to find genes 01:41:11.480 |
But it turns out that, and it was a very reductionist idea. 01:41:15.800 |
You know, we figure out what all the parts are, 01:41:19.240 |
and then we would be able to figure out which parts cause 01:41:23.080 |
But it turns out that the parts don't cause the things 01:41:26.280 |
It's like the interactions, it's the networks of these parts. 01:41:37.240 |
- What do you, what to you is the most beautiful, 01:41:52.120 |
it's the simplest would be cellular automata. 01:41:56.120 |
So I was very captivated by cellular automata, 01:41:58.680 |
and worked on cellular automata for several years. 01:42:01.880 |
- Do you find it amazing, or is it surprising 01:42:08.440 |
in cellular automata can create sort of seemingly 01:42:27.040 |
and even able to engineer things like intelligence? 01:42:32.600 |
How humbling in that, also kind of awe-inspiring that, 01:42:43.000 |
that these incredibly simple rules can produce 01:42:46.240 |
this very beautiful, complex, hard to understand behavior. 01:43:03.600 |
that you might be able to engineer complexity 01:43:10.000 |
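A minimal elementary cellular automaton (rule 110 here) as an illustration of the point: an extremely simple local update rule that produces complex, hard-to-predict global behavior. This is the generic textbook construction, not the genetic-algorithm-evolved automata from that earlier research.

```python
def step(cells, rule=110):
    """One update: each cell's new value depends only on itself and its two
    neighbors, looked up in the bits of the rule number (Wolfram's convention)."""
    n = len(cells)
    return [
        (rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 40 + [1] + [0] * 40          # a single "on" cell in the middle
for _ in range(30):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```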
Can you briefly say what is the Santa Fe Institute? 01:43:12.320 |
Its history, its culture, its ideas, its future. 01:43:14.840 |
So I've never, as I mentioned to you, I've never been, 01:43:26.600 |
So the Santa Fe Institute was started in 1984, 01:43:40.000 |
which is about a 40-minute drive from Santa Fe Institute. 01:43:53.640 |
because they felt that their field wasn't approaching 01:44:27.560 |
and Nicholas Metropolis, who was a mathematician, physicist, 01:44:54.120 |
that itself has been kind of on the edge of chaos 01:45:07.600 |
whatever funding it can raise through donations 01:45:23.000 |
It's a really fun place to go think about ideas 01:45:41.600 |
- Maybe about 10 who are there on five-year terms 01:46:04.800 |
what's there in terms of the public interaction 01:46:09.360 |
or students or so on that could be a possible interaction 01:46:15.680 |
- Yeah, so there's a few different things they do. 01:46:32.600 |
and you do projects, and people really like that. 01:46:38.200 |
They also have some specialty summer schools. 01:47:08.320 |
including an introduction to complexity course that I taught. 01:47:14.320 |
- Awesome, and there's a bunch of talks too online. 01:47:20.440 |
- Yeah, they have sort of technical seminars and colloquia, 01:47:30.240 |
and they put everything on their YouTube channel 01:47:34.880 |
- Douglas Hofstadter, author of "Gödel, Escher, Bach," 01:47:40.600 |
He's been mentioned a couple of times, and is a collaborator. 01:47:57.480 |
was that when you're looking at a complex problem 01:48:07.840 |
to try and figure out what is the essence of this problem. 01:48:12.200 |
And this is how the CopyCat program came into being, 01:48:19.000 |
"How can we make this as idealized as possible, 01:48:25.680 |
And that's really been a core theme of my research, I think. 01:48:36.480 |
And it's really very much kind of physics-inspired. 01:48:45.840 |
like you're reduced to the most fundamental aspect 01:48:54.720 |
people used to work in these micro-worlds, right? 01:49:03.080 |
And then that got criticized because they said, 01:49:06.040 |
"Oh, you can't scale that to the real world." 01:49:19.800 |
We've seen a lot of people who are trying to work 01:49:24.600 |
for things like natural language and common sense. 01:49:29.120 |
So that's an interesting evolution of those ideas. 01:49:34.640 |
the fundamental challenges of the problem of intelligence 01:49:47.040 |
is there something that you're just really proud of 01:49:50.340 |
in terms of ideas that you've gotten a chance 01:49:54.340 |
- So I am really proud of my work on the Copycat project. 01:50:04.960 |
I think there's a lot of ideas there to be explored. 01:50:08.960 |
And I guess one of the happiest days of my life, 01:50:30.180 |
- Where you kind of gave life to an artificial system. 01:50:35.200 |
- What, in terms of what people can interact, 01:50:37.240 |
I saw there's like a, I think it's called Metacat, 01:50:45.600 |
If people actually wanted to play around with it 01:50:49.000 |
and maybe integrate into, whether it's with deep learning 01:50:55.800 |
what would you suggest they do to learn more about it 01:50:58.160 |
and to take it forward in different kinds of directions? 01:51:04.920 |
called "Fluid Concepts and Creative Analogies" 01:51:10.120 |
I have a book called "Analogy-Making as Perception," 01:51:25.480 |
And I think that would really be the best way 01:51:31.160 |
Well, Melanie, it was an honor talking to you. 01:51:41.720 |
And thank you to our presenting sponsor, Cash App. 01:51:52.560 |
that inspires hundreds of thousands of young minds 01:51:55.200 |
to learn and to dream of engineering our future. 01:51:58.880 |
If you enjoy this podcast, subscribe on YouTube, 01:52:03.200 |
support it on Patreon, or connect with me on Twitter. 01:52:06.580 |
And now, let me leave you with some words of wisdom 01:52:09.440 |
from Douglas Hofstadter and Melanie Mitchell. 01:52:15.320 |
"And without analogies, there can be no concepts." 01:52:26.900 |
Thank you for listening, and hope to see you next time.