Ray Kurzweil: Future of Intelligence | MIT 6.S099: Artificial General Intelligence (AGI)

Chapters
0:00
4:00 the perceptron
13:17 the neocortex is a very thin structure
36:55 enhancing our intelligence
37:33 continuing to enhance our capability through merging with ai
00:00:13.480 | 
with a 30-year track record of accurate predictions. 00:00:16.800 | 
Called the restless genius by the Wall Street Journal 00:00:19.640 | 
and the ultimate thinking machine by Forbes magazine. 00:00:22.960 | 
He was selected as one of the top entrepreneurs 00:00:31.520 | 
PBS selected him as one of the 16 revolutionaries 00:00:41.060 | 
the first omni-font optical character recognition, 00:00:44.320 | 
the first print-to-speech reading machine for the blind, 00:00:49.100 | 
the first music synthesizer capable of recreating 00:00:52.000 | 
the grand piano and other orchestral instruments, 00:00:59.400 | 
Among Ray's many honors, he received a Grammy Award 00:01:02.620 | 
for Outstanding Achievements in Music Technology. 00:01:05.000 | 
He is the recipient of the National Medal of Technology, 00:01:07.720 | 
was inducted into the National Inventors Hall of Fame, 00:01:16.940 | 
Ray has written five national best-selling books, 00:01:28.840 | 
He is co-founder and chancellor of Singularity University 00:01:36.260 | 
heading up a team developing machine intelligence 00:02:05.100 | 
they started a new major called computer science. 00:02:15.000 | 
Even biotechnology recently got its own course number. 00:02:22.560 | 
Okay, how many of you do work in deep learning? 00:02:36.880 | 
I became excited about artificial intelligence. 00:02:42.000 | 
It had only gotten its name six years earlier, 00:03:00.540 | 
He spent all day with me as if he had nothing else to do. 00:03:13.220 | 
the symbolic school which Minsky was associated with 00:03:18.740 | 
and the connectionist school was not widely known. 00:03:24.380 | 
that Minsky actually invented the neural net in 1953. 00:03:34.120 | 
that these giant brains could solve any problem. 00:03:37.680 | 
So the first popular neural net, the perceptron, 00:03:45.040 | 
was being promulgated by Frank Rosenblatt at Cornell. 00:03:49.280 | 
So Minsky said, "Oh, where are you going now?" 00:03:56.920 | 
And I went there and Rosenblatt was touting the perceptron 00:04:01.920 | 
that it ultimately would be able to solve any problem. 00:04:04.980 | 
So I brought some printed letters that had the camera 00:04:20.560 | 
"and feed it as the input to another perceptron 00:04:22.560 | 
"and take the output of that and feed it to a third layer. 00:04:26.660 | 
"it'll get smarter and smarter and generalized." 00:04:31.960 | 
Well, no, but it's high on our research agenda. 00:04:35.240 | 
Things did not move quite as quickly back then 00:04:39.720 | 
He died nine years later, never having tried that idea. 00:04:46.040 | 
I mean, he never tried multi-layer neural nets 00:04:49.520 | 
and all the excitement that we see now about deep learning 00:04:58.320 | 
many-layer neural nets and the law of accelerating returns, 00:05:07.120 | 
which is basically the exponential growth of computing 00:05:15.180 | 
It would be decades before that idea was tried. 00:05:20.600 | 
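In modern terms, the layering idea Kurzweil describes asking Rosenblatt about is simply a multi-layer network: the output of one perceptron-style layer becomes the input to the next. A minimal illustrative sketch follows (sizes and random weights are assumptions, not from the talk):

```python
# Minimal sketch (not Rosenblatt's Mark I hardware): stacking
# perceptron-style layers so one layer's output feeds the next.
import numpy as np

def layer(x, w, b):
    # one perceptron-style layer: weighted sum followed by a hard threshold
    return (x @ w + b > 0).astype(float)

rng = np.random.default_rng(0)
x = rng.random(16)                       # e.g. pixels from a printed letter
w1, b1 = rng.standard_normal((16, 8)), rng.standard_normal(8)
w2, b2 = rng.standard_normal((8, 4)),  rng.standard_normal(4)
w3, b3 = rng.standard_normal((4, 2)),  rng.standard_normal(2)

h1 = layer(x, w1, b1)    # first perceptron layer
h2 = layer(h1, w2, b2)   # its output fed as input to a second layer
y  = layer(h2, w3, b3)   # and then to a third layer
print(y)
```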
Several decades later, three-level neural nets were tried. 00:05:37.800 | 
the disappearing gradient or the exploding gradient, 00:05:41.700 | 
which I'm sure many of you are familiar with. 00:05:43.900 | 
Basically, you need to take maximum advantage 00:05:56.260 | 
not let them explode or disappear and lose the resolution. 00:06:09.680 | 
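As one hedged illustration of "not letting them explode" (gradient norm clipping; a common remedy, not something specified in the talk):

```python
# Illustrative gradient clipping: rescale the gradient if its norm is too
# large, so backpropagated signals neither explode nor drown out resolution.
import numpy as np

def clip_gradient(grad, max_norm=1.0):
    norm = np.linalg.norm(grad)
    return grad * (max_norm / norm) if norm > max_norm else grad

g = np.array([3.0, 4.0])       # norm 5.0, would "explode"
print(clip_gradient(g))        # rescaled to norm 1.0 -> [0.6, 0.8]
```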
And that's behind sort of all the fantastic gains 00:06:33.380 | 
AlphaGo Zero started with no human input at all. 00:06:37.160 | 
Within hours of iteration, it soared past AlphaGo, 00:06:49.080 | 
Basically, you need to evaluate the quality of the board 00:06:53.400 | 
at each point, and they used another 100-layer neural net 00:07:16.140 | 
For example, there's pictures of dogs and cats 00:07:19.220 | 
that are labeled, so you got a picture of a cat 00:07:21.460 | 
and it says cat, and then you can learn from it, 00:07:33.540 | 
And that only created a sort of fair Go player, 00:07:39.660 | 
So, they worked around that in the case of Go 00:07:43.700 | 
by basically generating an infinite amount of data 00:07:55.500 | 
What kind of situations can you do that with? 00:07:58.700 | 
You have to have some way of simulating the world. 00:08:15.140 | 
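A minimal sketch of what generating effectively unlimited data by self-play can look like, assuming a hypothetical game/simulator interface (illustrative only, not DeepMind's actual pipeline):

```python
# Illustrative self-play loop: the current policy plays against itself in a
# simulator, and every finished game becomes new labeled training data.
# `policy` and the `game` interface below are hypothetical placeholders.
def self_play_game(policy, game):
    records = []
    while not game.is_over():
        move = policy(game)                  # current network picks a move
        records.append((game.state(), move))
        game.play(move)
    winner = game.winner()
    # label each position with the eventual outcome of the game
    return [(state, move, winner) for state, move in records]

def generate_data(policy, new_game, n_games=1000):
    data = []
    for _ in range(n_games):                 # as much data as compute allows
        data += self_play_game(policy, new_game())
    return data
```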
I mean, math axioms can be contained on a page or two. 00:08:22.340 | 
Gets more difficult when you have real life situations, 00:08:40.460 | 
Autonomous vehicles, you need real life data. 00:08:45.980 | 
So, the Waymo systems have gone three and a half 00:08:52.380 | 
That's enough data to then create a very good simulator. 00:08:58.500 | 
because they had a lot of real world experience 00:09:01.660 | 
and they've gone a billion miles in the simulator. 00:09:09.140 | 
to either create the data or have the data around. 00:09:12.820 | 
Humans can learn from a small number of examples. 00:09:17.820 | 
Your significant other, your professor, your boss, 00:09:21.580 | 
your investor can tell you something once or twice 00:09:29.780 | 
And that's kind of the remaining advantage of humans. 00:09:49.500 | 
There was actually very little neuroscience to go on. 00:09:52.420 | 
There was one neuroscientist, Vernon Mountcastle, 00:09:55.340 | 
that had something relevant to say, which is he did... 00:09:59.260 | 
I mean, there was the common wisdom at the time, 00:10:01.820 | 
and there's still a lot of neuroscientists that say this, 00:10:04.060 | 
that we have all these different regions of the brain, 00:10:06.380 | 
they do different things, they must be different. 00:10:20.140 | 
does these simple feature extractions on visual images. 00:10:23.820 | 
That's actually a large part of the neocortex. 00:10:33.460 | 
through injury or stroke, people can't recognize faces. 00:11:01.980 | 
Otherwise, I could actually observe human brains in action, 00:11:07.980 | 
And there's a lot of hints that you can get that way. 00:11:11.100 | 
For example, if I ask you to recite the alphabet, 00:11:18.580 | 
A, B, C, D, E, F, G, H, I, J, K. 00:11:22.420 | 
So we learn things as forward sequences of sequences. 00:11:26.180 | 
Forward, because if I ask you to recite the alphabet 00:11:29.380 | 
backwards, you can't do it unless you learn that 00:11:36.220 | 
I wrote a paper that the neocortex is organized 00:11:44.660 | 
And that's how I got to meet President Johnson. 00:12:02.260 | 
which was a mentorship that lasted for over 50 years. 00:12:09.260 | 
which the other colleges I considered didn't have. 00:12:24.180 | 
two cycles for instruction, so a quarter of a MIP. 00:12:36.500 | 
It's now actually an explosion of neuroscience evidence 00:12:40.980 | 
The European Brain Reverse Engineering Project 00:12:43.900 | 
has identified a repeating module of about 100 neurons. 00:12:50.100 | 
so it's about 30 billion neurons in the neocortex. 00:12:53.140 | 
The neocortex is the outer layer of the brain. 00:13:04.660 | 
And then the output, the single output axon of that module 00:13:28.060 | 
And we can see that it learns a simple pattern. 00:13:40.380 | 
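A quick back-of-the-envelope check on the two figures just quoted (roughly 100 neurons per repeating module, roughly 30 billion neurons in the neocortex):

\[
\frac{3 \times 10^{10}\ \text{neurons}}{\approx 100\ \text{neurons per module}} \approx 3 \times 10^{8}\ \text{modules},
\]

i.e. on the order of 300 million repeating modules in the neocortex.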
How many of you have worked with Markov models? 00:13:44.660 | 
Usually no hands go up when I ask that question. 00:14:00.500 | 
And the speech recognition work I did in the 80s 00:14:03.900 | 
used these Markov models that became the standard approach 00:14:18.100 | 
It doesn't learn long distance relationships. 00:14:26.340 | 
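As a minimal illustration of why a first-order Markov model cannot learn long-distance relationships: the next word depends only on the current word, never on anything further back. The toy vocabulary and probabilities below are purely illustrative:

```python
# Toy first-order Markov model over words: P(next | current) only.
import random

transitions = {
    "the": {"cat": 0.5, "dog": 0.5},
    "cat": {"sat": 1.0},
    "dog": {"sat": 1.0},
    "sat": {"down": 1.0},
}

def sample_next(word):
    nxt = transitions[word]
    return random.choices(list(nxt), weights=list(nxt.values()))[0]

word, sentence = "the", ["the"]
for _ in range(3):
    word = sample_next(word)    # only the current word matters here
    sentence.append(word)
print(" ".join(sentence))       # e.g. "the cat sat down"
```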
is exactly how the neocortex creates that hierarchy. 00:14:42.940 | 
So does it grow an axon from one place to another, 00:14:50.020 | 
Actually, all these connections are there from birth, 00:14:59.620 | 
So if it decides and how it makes that decision 00:15:05.740 | 
but it wants to connect this module to this module, 00:15:23.140 | 
that's in fact, the neocortex is a hierarchy of modules 00:15:39.700 | 
they may seem three-dimensional or even more complicated, 00:15:46.220 | 
but the complexity comes in with the hierarchy. 00:15:48.620 | 
So the neocortex emerged 200 million years ago with mammals. 00:15:57.020 | 
That's one of the distinguishing features of mammals. 00:16:04.980 | 
but they were capable of a new type of thinking. 00:16:07.780 | 
Other non-mammalian animals had fixed behaviors, 00:16:11.220 | 
but those fixed behaviors were very well adapted 00:16:16.180 | 
But these new mammals could invent a new behavior. 00:16:29.380 | 
it will invent a new behavior to deal with it. 00:16:36.980 | 
And that behavior could spread virally through the community. 00:16:40.180 | 
Another mouse watching this would say to itself, 00:16:43.020 | 
hmm, that was really clever going around that rock, 00:17:01.620 | 
and nothing much happened for 135 million years. 00:17:05.580 | 
But then 65 million years ago, something did happen. 00:17:09.660 | 
There was a sudden violent change to the environment. 00:17:12.900 | 
We now call it the Cretaceous extinction event. 00:17:15.980 | 
There's been debate as to whether it was a meteor 00:17:18.460 | 
or asteroid impact, or a volcanic eruption. 00:17:23.460 | 
The asteroid or meteor hypothesis is in the ascendancy. 00:17:37.060 | 
a very violent sudden change to the environment. 00:17:56.660 | 
And that's when mammals overtook their ecological niche. 00:18:03.380 | 
said to itself, this neocortex is pretty good stuff 00:18:07.940 | 
So now mammals got bigger, their brains got bigger 00:18:10.740 | 
at an even faster pace, taking up a larger fraction 00:18:14.500 | 
The neocortex got bigger even faster than that 00:18:17.700 | 
and developed these curvatures that are distinctive 00:18:20.380 | 
of a primate brain basically to increase its surface area. 00:18:27.700 | 
the human neocortex is still a flat structure. 00:18:29.900 | 
It's about the size of a table napkin, just as thin. 00:18:39.740 | 
which became dominant in their ecological niche. 00:18:45.900 | 
Then something else happened 2 million years ago. 00:18:56.140 | 
of the enclosure and basically filled up the frontal cortex 00:19:03.900 | 
And up until recently it was felt, as I said, 00:19:06.940 | 
that this was, the frontal cortex was different 00:19:09.860 | 
'cause it does these qualitatively different things. 00:19:27.220 | 
We're already doing a very good job of being primates. 00:19:30.100 | 
So we put it at the top of the neocortical hierarchy 00:19:39.020 | 
but it doubled or tripled the number of levels 00:19:53.860 | 
Every human culture we've ever discovered has music. 00:19:59.780 | 
There's debate about that, but it's really true. 00:20:07.500 | 
Technology required another evolutionary adaptation, 00:20:26.100 | 
Yeah, I could take that branch and strip off the leaves 00:20:31.060 | 
We could actually carry out these ideas and create tools 00:20:38.420 | 
and started a whole other evolutionary process 00:20:46.620 | 
So Larry Page read my book in 2012 and liked it. 00:20:51.620 | 
So I met with him and asked him for an investment 00:20:55.460 | 
in a company I'd started actually a couple of weeks earlier 00:21:20.380 | 
"I just started this company to develop this." 00:21:25.140 | 
And I said, "How are you gonna value a company 00:21:41.740 | 
this hierarchical model to understanding language, 00:21:46.620 | 
which I think really is the holy grail of AI. 00:21:56.740 | 
as what we now call a Turing-complete problem 00:22:04.420 | 
that you can apply to pass a valid Turing test 00:22:20.700 | 
and doesn't really describe how to go about it. 00:22:27.620 | 
through interrogation and dialogue that it's a human, 00:22:32.620 | 
that requires a full range of human intelligence. 00:22:37.820 | 
And I think that test has stood the test of time. 00:22:46.180 | 
that two systems passed a paragraph comprehension test. 00:22:57.980 | 
we were trying to pass these paragraph comprehension tests. 00:23:05.820 | 
Second grade test, we kind of got average performance. 00:23:08.820 | 
And the third grade test had too much inference. 00:23:12.100 | 
Already you had to know some common sense knowledge 00:23:15.420 | 
as it's called and make implications of things 00:23:19.260 | 
that were in different parts of the paragraph. 00:23:21.380 | 
And there's too much inference and it really didn't work. 00:23:27.500 | 
just slightly surpassed average human performance. 00:23:33.700 | 
an AI does something at average human levels, 00:23:37.180 | 
it doesn't take long for it to surpass average human levels. 00:23:47.140 | 
that it surpasses now average human performance. 00:23:50.660 | 
It uses an LSTM, long short-term memory. 00:24:00.060 | 
it has to put together inferences and implications 00:24:05.660 | 
with some common sense knowledge that's not explicitly stated. 00:24:08.980 | 
So that's, I think, a pretty impressive milestone. 00:24:14.460 | 
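For concreteness, here is a minimal, purely illustrative use of an LSTM reading a token sequence (PyTorch, toy sizes; not the actual comprehension systems referenced above):

```python
# An LSTM keeps a running state across the whole sequence, so its final
# hidden state can summarize an entire paragraph. Sizes are illustrative.
import torch
import torch.nn as nn

embed = nn.Embedding(num_embeddings=10_000, embedding_dim=64)   # toy vocab
lstm  = nn.LSTM(input_size=64, hidden_size=128, batch_first=True)

token_ids = torch.randint(0, 10_000, (1, 200))   # one 200-token paragraph
outputs, (h, c) = lstm(embed(token_ids))
print(h.shape)   # final hidden state summarizing the paragraph: (1, 1, 128)
```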
So I've been developing, I've got a team of about 45 people, 00:24:19.300 | 
and we've been developing this hierarchical model. 00:24:26.060 | 
'cause we can use deep learning for each module. 00:24:32.340 | 
and we create an embedding for each sentence. 00:24:43.780 | 
If you use Smart Reply on, if you use Gmail on your phone, 00:24:49.620 | 
you'll see it gives you three suggestions for responses. 00:25:03.460 | 
And the quality of the suggestions is really quite good, 00:25:08.140 | 
That's for my team using this kind of hierarchical model. 00:25:12.380 | 
So instead of Markov models, it uses embeddings 00:25:15.060 | 
'cause we can use backpropagation, we might as well use it. 00:25:20.780 | 
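A minimal sketch of the "embedding for each sentence" idea, shown here as a generic mean-pooled embedding trained by backpropagation (illustrative only, not the team's actual hierarchical model):

```python
# Map each word to a learned vector and pool into one fixed-size vector per
# sentence; the embedding table is trainable end-to-end by backpropagation.
import torch
import torch.nn as nn

vocab = {"the": 0, "cat": 1, "sat": 2, "down": 3}     # toy vocabulary
embed = nn.Embedding(len(vocab), embedding_dim=32)

def sentence_embedding(words):
    ids = torch.tensor([vocab[w] for w in words])
    return embed(ids).mean(dim=0)        # one 32-dim vector per sentence

v = sentence_embedding(["the", "cat", "sat", "down"])
print(v.shape)   # torch.Size([32])
```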
But I think what's missing from deep learning 00:25:30.580 | 
That's why evolution developed a hierarchical brain structure 00:25:35.300 | 
to understand the natural hierarchy in the world. 00:25:39.180 | 
And there's several problems with big, deep neural nets. 00:25:44.780 | 
One is the fact that you really do need a billion examples 00:25:59.980 | 
Very often you don't have a billion examples. 00:26:02.820 | 
We certainly have billions of examples of language, 00:26:10.820 | 
So it's kind of a chicken and an egg problem. 00:26:13.780 | 
So I believe this hierarchical structure is needed. 00:26:29.500 | 
Demis described it's playing in both Go and chess 00:26:34.580 | 
'cause we do things that were shocking to human experts 00:26:37.940 | 
like sacrificing a queen and a bishop at the same time 00:26:42.020 | 
or in close succession, which shocked everybody, 00:26:55.220 | 
'cause you really wanna start controlling territory. 00:26:57.740 | 
And yet on reflection, that was the brilliant move 00:27:02.940 | 
But it doesn't really explain how it does these things. 00:28:01.060 | 
It used to be assumed that not much would happen. 00:28:17.380 | 
What's the, in terms of a statistical likelihood, 00:28:20.940 | 
if there were not continued scientific progress? 00:28:35.980 | 
Now, you could have a computed life expectancy, 00:28:41.260 | 
let's say 30 years, 50 years, 70 years from now, 00:28:46.180 | 
you could still be hit by the proverbial bus tomorrow. 00:28:49.420 | 
We're working on that with self-driving vehicles. 00:28:57.580 | 
you can be there now in terms of basically advancing 00:29:04.580 | 
at least to keep pace with the passage of time. 00:29:08.860 | 
I think it will be there for most of the population, 00:29:12.780 | 
at least if they're diligent within about a decade. 00:29:19.260 | 
we may get to see the remarkable century ahead. 00:29:30.180 | 
- Hi, so you mentioned both neural network models 00:29:39.460 | 
And I was wondering how far have you been thinking 00:29:56.780 | 
How many are familiar with the Cyc project? 00:30:24.620 | 
of trying to define things through logical rules. 00:30:29.620 | 
Now it does seem that humans can understand logical rules. 00:30:34.220 | 
We have logical rules written down for things like law 00:30:41.780 | 
But you can actually define a connectionist system 00:30:46.580 | 
to have such a high reliability on a certain type of action 00:30:55.780 | 
even though it's represented in a connectionist way. 00:31:00.780 | 
And connectionist systems can both capture the soft edges 00:31:05.500 | 
'cause many things in life are not sharply defined. 00:31:12.060 | 
So you don't wanna sacrifice your queen in chess 00:31:15.340 | 
except in certain situations where that might be a good idea. 00:31:21.260 | 
So we do wanna be able to learn from accumulated human wisdom 00:31:29.700 | 
But I think we'll do it with a connectionist system. 00:31:33.300 | 
But again, I think that connectionist systems 00:31:43.820 | 
- So I understand how we wanna use the neocortex 00:31:48.420 | 
to extract useful stuff and commercialize that. 00:31:57.140 | 
will be useful for turning that into what you wanna do. 00:32:01.780 | 
- Well, the cerebellum is an interesting case in point. 00:32:05.940 | 
It actually has more neurons than the neocortex. 00:32:10.220 | 
And it's used to govern most of our behavior. 00:32:18.180 | 
that's actually controlled by the cerebellum. 00:32:20.020 | 
So a simple sequence is stored in the cerebellum. 00:32:33.900 | 
has actually been migrated from the cerebellum 00:32:39.660 | 
In some people, the entire cerebellum is destroyed through disease. 00:32:54.380 | 
But some of the subtlety is a kind of pre-programmed script. 00:33:09.940 | 
But our thinking really is controlled by the neocortex. 00:33:18.780 | 
I think the neocortex is the brain region we wanna study. 00:33:31.300 | 
in terms of this exponential growth of information. 00:33:47.100 | 
Based on molecular computing as we understand it, 00:33:53.540 | 
and actually go to trillions of trillions of times 00:33:57.220 | 
greater computational capacity than we have today. 00:34:13.780 | 
what the impact on human civilization will be. 00:34:18.780 | 
So to take a maybe slightly more mundane issue 00:34:39.700 | 
which was an actual society that formed in 1800 00:34:42.820 | 
after the automation of the textile industry in England. 00:34:52.900 | 
Indeed, those jobs did go away, but new jobs were created. 00:35:16.260 | 
it's gonna be 2% on farms and 9% in factories 00:35:23.580 | 
For all these jobs we eliminate through automation, 00:35:34.860 | 
We can see jobs very clearly going away fairly soon, 00:35:42.860 | 
I mean, just look at the last five or six years. 00:35:53.380 | 
that just weren't contemplated even six years ago. 00:35:59.460 | 
well, you're gonna get jobs creating mobile apps 00:36:09.020 | 
Nobody would have any idea what I'm talking about. 00:36:16.740 | 
yeah, we created new jobs, but it's not as many. 00:36:19.020 | 
Actually, we've gone from 24 million jobs in 1900 00:36:25.420 | 
from 30% of the population to 45% of the population. 00:36:28.900 | 
The new jobs pay 11 times as much in constant dollars. 00:36:34.260 | 
I mean, as I talk to people starting out their career now, 00:36:37.820 | 
they really want a career that gives them some 00:36:40.500 | 
life definition and purpose and gratification. 00:36:46.060 | 
100 years ago, you were happy if you had a backbreaking job 00:36:59.900 | 
for most of the last 100 years through education. 00:37:03.020 | 
We've expanded K-through-12 spending in constant dollars tenfold. 00:37:07.020 | 
We've gone from 38,000 college students in 1870 00:37:17.860 | 
They're not yet connected directly in our brain, 00:37:21.900 | 
When I was here at MIT, I had to take my bicycle 00:37:28.460 | 
Now we carry them in our pockets and on our belts. 00:37:33.460 | 
They're gonna go inside our bodies and brains. 00:37:36.780 | 
I think that's not a really important distinction. 00:37:43.100 | 
to enhance our capability through merging with AI. 00:37:49.740 | 
to the kind of dystopian view we see in future movies 00:37:54.500 | 
where it's the AI versus a brave band of humans 00:37:58.740 | 
We don't have one or two AIs in the world today. 00:38:05.900 | 
It'll be six billion in just a couple of years 00:38:10.460 | 
So we're already deeply integrated with this. 00:38:19.860 | 
Just as we are doing today things we couldn't imagine 00:38:24.580 | 
- You showed many graphs that go through exponential growth 00:38:36.820 | 
So tell me about regions that you've investigated 00:38:49.180 | 
of information technology invariably follows exponential. 00:38:53.580 | 
When it impacts human society it can be linear. 00:39:10.380 | 
Two centuries ago you could count the number of democracies 00:39:22.460 | 
So the, and I attribute all this to the growth 00:39:29.060 | 
in information technology, communication in particular 00:39:32.500 | 
for progression of social and cultural institutions. 00:39:42.580 | 
because it ultimately depends on a vanishingly small 00:39:51.580 | 
grows exponentially and will for a long time. 00:39:58.820 | 
it's actually a remarkably straight linear progression. 00:40:08.260 | 
and it just soared past that in 1997 with Deep Blue 00:40:17.300 | 
But the chess score is a logarithmic measurement. 00:40:31.060 | 
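The point about the rating being logarithmic can be made precise with the standard Elo formula (general chess-rating math, not from the talk): the expected score of player A against player B is

\[
E_A = \frac{1}{1 + 10^{-(R_A - R_B)/400}},
\]

so every fixed rating increment multiplies the win odds by a constant factor (each 400 points is a 10x change in odds). A straight-line climb in rating therefore reflects exponential growth in underlying playing strength.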
the meaning of things, especially in the 20th century. 00:40:33.420 | 
So for instance, Martin Heidegger gave a couple of speeches 00:40:37.580 | 
and lectures on the relationship of human society 00:40:40.340 | 
to technology and he particularly distinguished 00:40:43.980 | 
between the mode of thinking which is calculating thinking 00:40:47.540 | 
and a mode of thinking which is reflective thinking 00:40:59.820 | 
He recommended to remain open to what he called, 00:41:07.100 | 
I wonder whether you have any thoughts on this. 00:41:09.540 | 
Is there a meaning of purpose to technological development 00:41:12.980 | 
and is there a way for us humans to access that meaning? 00:41:17.260 | 
- Well, we started using technology to shore up weaknesses 00:41:28.860 | 
So physically, I mean, who here could build this building? 00:41:33.260 | 
So we've leveraged the power of our muscles with machines. 00:41:52.420 | 
Computers can do that trivially, we can't do it. 00:42:00.540 | 
I think the essence of what I've been writing about 00:42:05.180 | 
is to master the unique strengths of humanity, 00:42:09.340 | 
creating loving expressions in poetry and music 00:42:17.100 | 
with the better qualities of humanity with machines. 00:42:28.220 | 
Just in the last year, there's so many milestones 00:42:31.140 | 
that are really significant, including in language. 00:42:34.900 | 
But I think of technology as an expression of humanity. 00:43:09.540 | 
So we invented a tool to extend our physical reach. 00:43:14.820 | 
We can access all of human knowledge with a few keystrokes. 00:43:19.380 | 
And we're gonna make ourselves literally smarter 00:43:31.460 | 
- Hi, first of all, honored to hear you speak here. 00:43:48.140 | 
to over-steeply discount was tail risk in geopolitics, 00:44:14.900 | 
swamping all of these trends that are otherwise war proof, 00:44:22.500 | 
So my question for you is what steps do you think 00:44:31.460 | 
in designing social and economic institutions 00:44:35.380 | 
to kind of minimize our exposure to these tail risks 00:44:39.940 | 
and survive to make it to a beautiful, mind-filled future? 00:45:02.300 | 
get under our desk and put our hands behind our head 00:45:17.140 | 
And those weapons are still there, by the way, 00:45:28.660 | 
much of which I've been in the forefront of initiating 00:45:32.140 | 
on the existential risks of what's sometimes referred to 00:45:35.420 | 
as GNR, G for genetics, which is biotechnology, 00:45:39.180 | 
N for nanotechnology and gray goo, and R for robotics, which is AI. 00:45:51.180 | 
If you knew all the problems you were gonna encounter, 00:46:01.980 | 
There are specific paradigms, they're not foolproof, 00:46:06.740 | 
that we can follow to keep these technologies safe. 00:46:13.940 | 
some visionaries recognized the revolutionary potential, 00:46:18.140 | 
both for promise and peril, of biotechnology. 00:46:22.500 | 
Neither the promise nor peril was feasible 40 years ago. 00:46:28.340 | 
at the Asilomar Conference Center in California 00:46:30.900 | 
to develop both professional ethics and strategies 00:46:38.820 | 
And they've been known as the Asilomar Guidelines. 00:46:41.300 | 
They've been refined through successive Asilomar conferences. 00:46:51.180 | 
We're now, as I mentioned, getting profound benefit. 00:46:54.140 | 
It's a trickle today, it'll be a flood over the next decade. 00:46:58.300 | 
And the number of people who have been harmed, 00:47:00.740 | 
either through intentional or accidental abuse 00:47:07.180 | 
There was one boy who died in gene therapy trials 00:47:13.860 | 
and they canceled all research for gene therapy 00:47:25.580 | 
as a result of that delay, but you can't name them. 00:47:28.420 | 
They can't go on CNN, so we don't know who they are. 00:47:30.740 | 
But that has to do with the balancing of risk. 00:47:34.340 | 
But in large measure, virtually no one has been hurt 00:47:39.500 | 
Now, that doesn't mean you can cross it off our list. 00:47:43.940 | 
because the technology keeps getting more sophisticated. 00:47:49.300 | 
There's hundreds of trials of CRISPR technologies 00:48:00.580 | 
January, we had our first Asilomar Conference on AI ethics. 00:48:09.540 | 
I think the best way we can assure a democratic future 00:48:31.100 | 
I mean, it's gonna emerge from our society today. 00:48:36.340 | 
it's gonna have a higher chance of us practicing them 00:48:41.980 | 
That doesn't sound like a foolproof solution. 00:48:45.140 | 
It isn't, but I think that's the best approach. 00:48:52.500 | 
You can imagine there are technical solutions 00:49:02.020 | 
in your AI software that will assure that it remains safe. 00:49:10.940 | 
If there's some AI that's much smarter than you 00:49:16.380 | 
is not to get in that situation in the first place. 00:49:29.100 | 
I believe we have been headed through technology 00:49:37.580 | 
and people really think things are getting worse. 00:49:46.060 | 
They say, "Oh, this is the most peaceful time 00:49:49.980 | 
"Didn't you hear about the event yesterday and last week?" 00:49:57.780 | 
and you wouldn't even hear about it for months. 00:50:00.100 | 
I have all these graphs on education and literacy 00:50:15.340 | 
poverty has declined 95% in Asia over the last 25 years, 00:50:22.660 | 
All these trends are very smoothly getting better 00:50:24.820 | 
and everybody thinks things are getting worse. 00:50:39.140 | 
but I think that is something that we need to deal with 00:50:45.020 | 
it's dealing with our social, cultural institutions. 00:50:48.540 | 
- So you mentioned also exponential growth of software 00:50:59.420 | 
that information technology costs is exponential 00:51:02.420 | 
is because of fundamental properties of matter and energy. 00:51:18.820 | 
There was actually a study during the Obama administration 00:51:22.020 | 
by the Scientific Advisory Board on assessing this question, 00:51:27.020 | 
how much of the gains on 23 classical engineering problems 00:51:36.740 | 
over the last decade came from hardware and how much from software improvements. 00:51:38.940 | 
And there's about a thousand to one improvements, 00:51:41.460 | 
it's about doubling every year from hardware. 00:51:44.100 | 
There was an average of something like 26,000 to one 00:51:47.580 | 
through software improvements, algorithmic improvements. 00:51:57.380 | 
it doubles the performance or multiplies it by 10. 00:52:01.420 | 
We see basically exponential growth from each innovation. 00:52:05.060 | 
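A quick consistency check on the figures just quoted, assuming the ten-year window mentioned:

\[
2^{10} = 1024 \approx 1000\times \ \text{(hardware, doubling yearly)}, \qquad 26{,}000 \approx 2^{14.7} \ \Rightarrow\ \text{a doubling roughly every 8 months (software).}
\]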
So, and we certainly see that in deep learning, 00:52:13.500 | 
while we also have more data and more computation 00:52:16.220 | 
and more memory to throw at these algorithms.