Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103
Chapters
0:00 Introduction
3:20 Books that inspired you
6:38 Are there intelligent beings all around us?
13:13 Dostoevsky
15:56 Russian roots
20:19 When did you fall in love with AI?
31:30 Are humans good or evil?
42:04 Colonizing Mars
46:53 Origin of the term AGI
55:56 AGI community
72:36 How to build AGI?
96:47 OpenCog
145:32 SingularityNET
169:33 Sophia
196:02 Coronavirus
204:14 Decentralized mechanisms of power
220:16 Life and death
222:44 Would you live forever?
230:26 Meaning of life
238:03 Hat
238:46 Question for AGI
00:00:00.000 |
The following is a conversation with Ben Goertzel, 00:00:13.220 |
at the Machine Intelligence Research Institute, 00:00:21.000 |
He has been a central figure in the AGI community 00:00:30.920 |
the 2020 version of which is actually happening this week, 00:00:40.040 |
including by Joscha Bach from episode 101 of this podcast. 00:00:46.600 |
Two sponsors, the Jordan Harbinger Show and Masterclass. 00:01:04.640 |
and the journey I'm on in my research and startup. 00:01:15.940 |
support it on Patreon, or connect with me on Twitter, 00:01:18.920 |
@lexfridman, spelled without the E, just F-R-I-D-M-A-N. 00:01:31.560 |
This episode is supported by the Jordan Harbinger Show. 00:01:39.240 |
On that page, there's links to subscribe to it 00:01:41.660 |
on Apple Podcast, Spotify, and everywhere else. 00:01:57.080 |
Neil deGrasse Tyson, Garry Kasparov, and many more. 00:02:03.360 |
how much focus and hard work is required for greatness 00:02:11.120 |
I highly recommend the episode if you want to be inspired. 00:02:23.000 |
Sign up at masterclass.com/lex to get a discount 00:02:32.960 |
For 180 bucks a year, you get an all-access pass 00:02:36.120 |
to watch courses from, to list some of my favorites. 00:03:04.280 |
and the experience of being launched into space alone 00:03:11.680 |
to get a discount and to support this podcast. 00:03:15.480 |
And now, here's my conversation with Ben Goertzel. 00:03:20.760 |
What books, authors, ideas had a lot of impact on you 00:03:26.740 |
- You know, what got me into AI and science fiction 00:03:37.000 |
which my dad watched with me, like, in its first run. 00:03:58.680 |
So that got me into reading the whole literature 00:04:03.320 |
from the beginning of the previous century until that time. 00:04:07.480 |
And I mean, there was so many science fiction writers 00:04:14.800 |
it would have been Stanislaw Lem, the Polish writer. 00:04:20.120 |
Solaris, and then he had a bunch of more obscure writings 00:04:37.320 |
is one of the things that brought me together 00:04:39.080 |
with David Hanson, my collaborator on robotics projects. 00:04:43.760 |
So, you know, Stanislaw Lem was very much an intellectual, 00:04:47.640 |
right, so he had a very broad view of intelligence 00:04:51.040 |
going beyond the human and into what I would call, 00:04:56.920 |
The Solaris superintelligent ocean was intelligent 00:05:01.920 |
in some ways more generally intelligent than people, 00:05:07.360 |
so that human beings could never quite connect to it, 00:05:16.560 |
in one of Lem's books, this was engineered by people, 00:05:30.200 |
So it put some impenetrable shield around itself, 00:05:40.000 |
about the pathetic and hopeless nature of humanity 00:05:52.440 |
His main thing was, you know, human compassion 00:05:55.840 |
and the human heart and soul are going to be the constant 00:05:59.520 |
that will keep us going through whatever aliens we discover 00:06:03.600 |
or telepathy machines or super AIs or whatever it might be. 00:06:11.160 |
like the reality that we see may be a simulation 00:06:14.240 |
or a dream or something else we can't even comprehend, 00:06:28.840 |
I got into Dostoevsky and Friedrich Nietzsche 00:06:33.080 |
and Rimbaud and a bunch of more literary type writing. 00:06:43.160 |
this kind of idea of there being intelligences out there 00:06:48.700 |
do you think there are intelligences maybe all around us 00:06:58.720 |
maybe you can comment also on Stephen Wolfram, 00:07:01.560 |
thinking that there's computations all around us 00:07:21.800 |
the Search for Intra-Particulate Intelligence. 00:07:29.480 |
assuming the laws of physics as we know them now 00:07:40.440 |
they're gonna shrink themselves littler and littler 00:07:45.560 |
so they can communicate between two spatially distant points. 00:07:53.260 |
The minds of the super, super, super intelligences, 00:08:02.000 |
or the partons inside quarks or whatever it is. 00:08:11.560 |
of the micro, micro, micro miniaturized super intelligences 00:08:16.360 |
'cause there's no way we can tell random from structured 00:08:24.380 |
So what we think is random could be the thought processes 00:08:30.080 |
and if so, there's not a damn thing we can do about it 00:08:46.600 |
if that's actually super intelligent systems, 00:08:51.280 |
aren't we then part of the soup of super intelligence? 00:08:54.720 |
Aren't we just like a finger of the entirety of the body 00:09:14.840 |
whereas we may be much less than that, right? 00:09:16.760 |
I mean, yeah, we may be just some random epiphenomenon 00:09:26.040 |
emanating from a sports stadium or something, right? 00:09:33.720 |
It's irrelevant to the main point of the sports event 00:09:41.920 |
So we may just be some semi-arbitrary higher level pattern 00:10:10.920 |
and they watch me sitting at the computer typing. 00:10:19.960 |
that they have no idea that I'm communicating 00:10:32.520 |
Although they're right there in the room with me. 00:10:37.800 |
that we're just too stupid or close-minded to comprehend? 00:10:42.120 |
- Your very poodle could also be communicating 00:10:46.200 |
across multiple dimensions with other beings, 00:10:53.200 |
the kind of communication mechanism they're going through. 00:10:58.440 |
and science fiction novels, Puzzling Cats, Dolphins, 00:11:02.240 |
Mice and Whatnot are actually super intelligences. 00:11:07.280 |
I would guess, as one or the other quantum physics founders 00:11:12.280 |
said, those theories are not crazy enough to be true. 00:11:18.520 |
So on the human side, with Philip K. Dick and in general, 00:11:30.600 |
persists throughout these multiple realities? 00:11:35.000 |
Are you on the side, like the thing that inspires you 00:11:46.000 |
through all of the different systems we engineer, 00:11:53.360 |
that's greater than human, that's beyond human, 00:12:02.840 |
comes from both of those directions, actually. 00:12:08.640 |
when I was, it would have been two or three years old 00:12:18.200 |
of intellectual curiosity, like can a machine really think, 00:12:22.920 |
And yeah, just ambition to create something much better 00:12:28.760 |
and fundamentally defective humans I saw around me. 00:12:35.440 |
in the human world and got married, had children, 00:12:38.840 |
so my parents begin to age, I started to realize, 00:12:41.960 |
well, not only will AGI let you go far beyond 00:12:46.920 |
but it could also stop us from dying and suffering 00:12:50.960 |
and feeling pain and tormenting ourselves mentally. 00:12:54.960 |
So you can see AGI has amazing capability to do good 00:12:59.560 |
for humans, as humans, alongside with its capability 00:13:09.960 |
which makes it even more exciting and important. 00:13:26.760 |
who one will necessarily have a complex relationship with. 00:13:38.480 |
and he sort of helped squash the Russian nihilist movement, 00:13:45.800 |
in that period of the mid-late 1800s in Russia 00:13:48.600 |
was not taking anything fully 100% for granted. 00:13:52.160 |
It was really more like what we'd call Bayesianism now, 00:13:56.880 |
as a dogmatic certitude and always leave your mind open. 00:14:01.000 |
And how Dostoevsky parodied nihilism was a bit different. 00:14:06.000 |
He parodied it as people who believe absolutely nothing, 00:14:10.320 |
so they must assign an equal probability weight 00:14:12.960 |
to every proposition, which doesn't really work. 00:14:17.720 |
So on the one hand, I didn't really agree with Dostoevsky 00:14:25.280 |
On the other hand, if you look at his understanding 00:14:32.640 |
and heart and soul, it's really unparalleled. 00:14:47.680 |
And I think if you look in "The Brothers Karamazov" 00:15:02.520 |
but it's not first person from any one person really. 00:15:05.240 |
There are many different characters in the novel 00:15:07.240 |
and each of them is sort of telling part of the story 00:15:11.800 |
So the reality of the whole story is an intersection 00:15:24.920 |
of how all of us socially create our reality. 00:15:27.880 |
Like each of us sees the world in a certain way. 00:15:31.240 |
Each of us, in a sense, is making the world as we see it 00:15:41.160 |
where multiple instruments are coming together 00:15:46.880 |
comes out of each of our subjective understandings 00:15:51.520 |
And that was one of the many beautiful things in Dostoevsky. 00:15:58.200 |
you have a connection to Russia and the Soviet culture. 00:16:04.080 |
of the connection is, but at least the spirit 00:16:33.040 |
as well as Jews, mostly Menshevik, not Bolshevik. 00:16:36.240 |
And they sort of, they fled at just the right time 00:16:41.240 |
And then almost all, or maybe all of my extended family 00:16:47.240 |
either by Hitler's or Stalin's minions at some point. 00:16:50.400 |
So the branch of the family that emigrated to the US 00:16:59.920 |
Like, when you look in the mirror, do you see, 00:17:08.440 |
by uploading into some sort of superior reality. 00:17:18.600 |
- I mean, I'm not religious in a traditional sense, 00:17:22.240 |
but clearly the Eastern European Jewish tradition 00:17:28.760 |
I mean, there was, my grandfather, Leo Zwell, 00:17:32.680 |
was a physical chemist who worked with Linus Pauling 00:17:35.360 |
and a bunch of the other early greats in quantum mechanics. 00:17:51.120 |
was a PhD in psychology who had the unenviable job 00:17:59.280 |
in internment camps in the US in World War II, 00:18:03.080 |
like to counsel them why they shouldn't kill themselves, 00:18:05.800 |
even though they'd had all their stuff taken away 00:18:10.320 |
So I mean, yeah, there's a lot of Eastern European 00:18:27.640 |
And clearly this culture was all about learning 00:18:56.920 |
just doing some research and just knowing your work 00:18:59.520 |
through the decades, it's kind of fascinating. 00:19:10.760 |
who recently retired from Rutgers University. 00:19:15.040 |
But clearly that gave me a head start in life. 00:19:36.160 |
And I realized he didn't quite understand it either, 00:19:38.680 |
but at least, like he pointed me to some professor 00:19:42.000 |
he knew at UPenn nearby who understood these things. 00:19:45.360 |
So that's an unusual opportunity for a kid to have. 00:19:53.920 |
on like HP 3000 mainframes at Rutgers University. 00:20:42.920 |
And I was just like, well, this makes no sense. 00:20:52.680 |
Like, if the human brain can get beyond that paradox, 00:21:03.160 |
had misunderstood the nature of intelligence. 00:21:07.600 |
and he wasn't gonna say anything one way or the other. 00:21:31.520 |
So he was into some futuristic things, even back then, 00:21:35.880 |
but whether AI could confront logical paradoxes or not, 00:21:44.800 |
I discovered Douglas Hofstadter's book, "Gödel, Escher, Bach." 00:21:56.240 |
And can an AI really fully model itself reflexively, 00:22:02.880 |
Can the human mind truly model itself reflexively, 00:22:17.160 |
I read it cover to cover, and then re-read it. 00:22:31.440 |
as like a practical academic or engineering discipline 00:22:44.000 |
And I had the idea, well, it may be a long time 00:22:47.440 |
before we can achieve immortality in superhuman AGI. 00:22:50.440 |
So I should figure out how to build a spacecraft 00:22:58.000 |
in a million years when technology is more advanced 00:23:03.600 |
while it didn't all ring true to me, a lot of it did, 00:23:06.600 |
but I could see there are smart people right now 00:23:21.120 |
which would have been probably middle school, 00:23:24.800 |
well, this is something that I could practically work on. 00:23:29.120 |
- Yeah, as opposed to flying away and waiting it out, 00:23:36.520 |
I mean, I was interested in what we'd now call 00:23:39.360 |
nanotechnology and in the human immortality and time travel, 00:23:55.080 |
Like, you don't need to spin stars into weird configurations 00:24:02.640 |
and fill it with their DNA or something, right? 00:24:28.800 |
but what this book said is in the next few decades, 00:24:30.920 |
humanity is gonna create superhuman-thinking machines, 00:24:34.500 |
molecular nanotechnology, and human immortality, 00:24:37.480 |
and then the challenge we'll have is what to do with it. 00:24:44.540 |
or do we use it just to further vapid consumerism? 00:24:53.480 |
and the UN should send people out to every little village 00:25:03.040 |
and the choice that we had about how to use it, 00:25:14.320 |
for expanded consciousness or for rampant consumerism. 00:25:18.160 |
And needless to say, that didn't quite happen, 00:25:30.600 |
I'm engaged with now, from AGI and immortality, 00:25:36.120 |
as I've been pushing forward with Singularity 00:25:40.000 |
many of these themes were there in Feinberg's book 00:25:47.920 |
And of course, Valentin Turchin, a Russian writer, 00:25:52.920 |
and a great Russian physicist who I got to know 00:25:55.800 |
when we both lived in New York in the late '90s 00:25:59.000 |
and early aughts, he had a book in the late '60s in Russia 00:26:05.760 |
which laid out all these same things as well. 00:26:12.720 |
2004 or five or something of Parkinsonism. 00:26:15.360 |
So yeah, it's easy for people to lose track now 00:26:23.920 |
and the singularitarian advanced technology ideas 00:26:34.080 |
They're sort of new in the history of the human species, 00:26:37.120 |
but I mean, these were all around in fairly mature form 00:26:52.960 |
got to a certain point, then you couldn't make it real. 00:26:57.960 |
So even in the '70s, I was sort of seeing that 00:27:15.000 |
you could only program with hexadecimal machine code 00:27:19.360 |
And then a few years later, there's punch cards, 00:27:23.440 |
and a few years later, you could get Atari 400 00:27:27.200 |
and Commodore VIC-20, and you could type on a keyboard 00:27:34.640 |
So these ideas have been building up a while, 00:27:38.720 |
and I guess my generation got to feel them build up, 00:27:42.960 |
which is different than people coming into the field now, 00:27:46.400 |
for whom these things have just been part of the ambiance 00:27:50.280 |
of culture for their whole career, or even their whole life. 00:27:56.280 |
there being all of these ideas kind of swimming 00:28:04.480 |
and then some kind of nonlinear thing happens 00:28:09.360 |
and capture the imagination of the mainstream. 00:28:12.400 |
And that seems to be what's happening with AI now. 00:28:18.240 |
But he didn't understand enough about technology 00:28:21.560 |
to think you could physically engineer a Superman 00:28:24.880 |
by piecing together molecules in a certain way. 00:28:28.600 |
He was a bit vague about how the Superman would appear, 00:28:37.800 |
and the mode of cognition of a Superman would be. 00:28:42.440 |
He was a very astute analyst of how the human mind 00:29:01.320 |
He understood a lot about how human minds work. 00:29:07.520 |
I mean, the Superman was supposed to be a mind 00:29:10.160 |
that would basically have complete root access 00:29:15.960 |
and be able to architect its own value system 00:29:19.560 |
and inspect and fine-tune all of its own biases. 00:29:32.160 |
and all sorts of things that have been very valuable 00:29:35.480 |
in development of culture and indirectly even of technology. 00:30:04.580 |
If he's born a century later or transported through time-- 00:30:09.560 |
and he would never write the great works he's written, 00:30:13.520 |
- Maybe "Also sprach Zarathustra" would be a music video, 00:30:19.640 |
- Yeah, but if he was transported through time, 00:30:21.640 |
do you think, that'd be interesting actually to go back, 00:30:26.240 |
you just made me realize that it's possible to go back 00:30:31.200 |
is there some thinking about artificial beings? 00:30:34.680 |
I'm sure he had inklings, I mean, with Frankenstein, 00:30:39.680 |
before him, I'm sure he had inklings of artificial beings 00:30:44.040 |
It'd be interesting to see, to try to read his work, 00:30:53.680 |
like if he had inklings of that kind of thinking. 00:31:01.080 |
I mean, he had a lot of inklings of modern cognitive science, 00:31:07.400 |
If you look in the third part of the collection 00:31:15.680 |
there's very deep analysis of thinking processes, 00:31:20.640 |
but he wasn't so much of a physical tinkerer type guy, 00:31:29.680 |
- Do you think, what do you think about the will to power? 00:31:32.800 |
Do you think human, what do you think drives humans? 00:31:42.440 |
and elegant objective function driving humans by any means. 00:31:50.720 |
I know it's hard to look at humans in an aggregate, 00:32:03.560 |
depending on the whatever can percolate to the top? 00:32:13.200 |
complicated, and in some ways silly concepts, 00:32:23.480 |
humanity is shaped both by individual selection 00:32:28.280 |
and what biologists would call group selection, 00:32:39.320 |
so that each of us does to a certain approximation 00:32:43.280 |
what will help us propagate our DNA to future generations. 00:32:47.440 |
I mean, that's why I've got to have four kids so far, 00:32:58.640 |
means humans in a way will do what will advocate 00:33:02.840 |
for the persistence of the DNA of their whole tribe, 00:33:08.120 |
And in biology, you have both of these, right? 00:33:11.760 |
And you can see, say, an ant colony or a beehive, 00:33:23.280 |
it's a lot more biased toward individual selection. 00:33:31.560 |
in what we would view as selfishness versus altruism, 00:33:36.800 |
So we just have both of those objective functions 00:33:43.800 |
And then as Nietzsche analyzed in his own way, 00:33:51.320 |
well, we have both good and evil within us, right? 00:34:13.120 |
Of course, there are psychopaths and sociopaths 00:34:17.000 |
and people who get gratified by the suffering of others, 00:34:25.200 |
- Yeah, those are exceptions, but on the whole. 00:34:26.920 |
- Yeah, but I think at core, we're not purely selfish, 00:34:37.920 |
And we also have a complex constellation of values 00:34:42.920 |
that are just very specific to our evolutionary history. 00:34:54.360 |
is in a mountain overlooking the water, right? 00:35:04.360 |
I mean, there are many particularities to human values, 00:35:15.800 |
Say, I spent a lot of time in Ethiopia, in Addis Ababa, 00:35:19.600 |
where we have one of our AI development offices 00:35:24.360 |
And when I walk through the streets in Addis, 00:35:27.680 |
there's people lying by the side of the road, 00:35:31.400 |
like just living there by the side of the road, 00:35:37.920 |
And when I walk by them, I feel terrible, I give them money. 00:35:41.400 |
When I come back home to the developed world, 00:35:48.600 |
I also spend some of the limited money I have 00:36:03.120 |
I mean, it makes me somewhat selfish and somewhat altruistic 00:36:06.680 |
and we each balance that in our own way, right? 00:36:10.880 |
So whether that will be true of all possible AGIs 00:36:25.480 |
I'm not gonna bring up the whole Ayn Rand idea 00:36:33.960 |
that I think we'll just distract ourselves on. 00:36:41.200 |
So the, yeah, I have extraordinary negative respect 00:36:50.160 |
- But when I worked with a company called Genescient, 00:36:54.760 |
which was evolving flies to have extraordinary long lives 00:37:01.280 |
So we had flies that were evolved by artificial selection 00:37:05.000 |
to have five times the lifespan of normal fruit flies. 00:37:14.080 |
at an Ayn Rand elementary school in Southern California. 00:37:18.160 |
So that was just like, well, if I saw this in a movie, 00:37:24.020 |
- Well, yeah, the universe has a sense of humor 00:37:30.660 |
But you mentioned the balance between selfishness 00:37:37.240 |
Do you think it's possible that's kind of an emergent 00:37:39.880 |
phenomena, those peculiarities of our value system, 00:37:54.540 |
- I mean, the answer to nature versus nurture 00:38:11.500 |
leading to a mix of selfishness and altruism. 00:38:13.940 |
On the other hand, different cultures manifest that 00:38:19.780 |
Well, we all have basically the same biology. 00:38:22.540 |
And if you look at sort of pre-civilized cultures, 00:38:26.660 |
you have tribes like the Yanomamo in Venezuela, 00:38:29.320 |
whose culture is focused on killing other tribes. 00:38:43.920 |
in how culture manifests these innate biological 00:38:49.880 |
characteristics, but still, there's probably limits 00:38:56.720 |
I used to argue this with my great-grandparents 00:39:01.480 |
because they believed in the withering away of the state. 00:39:04.520 |
Like, they believed that as you move from capitalism 00:39:17.800 |
everyone would give everyone else what they needed. 00:39:20.920 |
Now, setting aside that that's not what the various 00:39:32.700 |
I was very dubious that human nature could go there. 00:39:37.480 |
Like, at that time, when my great-grandparents were alive, 00:39:39.840 |
I was just like, you know, I'm a cynical teenager. 00:39:48.040 |
If you don't have some structure keeping people 00:39:50.700 |
from screwing each other over, they're gonna do it. 00:39:52.920 |
So now I actually don't quite see things that way. 00:40:11.440 |
you know, self-awareness, other awareness, compassion, 00:40:16.980 |
And of course, greater material abundance helps, 00:40:25.360 |
because many Stone Age cultures perceived themselves 00:40:30.520 |
that they had all the food and water they wanted, 00:40:33.520 |
that they had sex lives, that they had children. 00:40:37.480 |
I mean, they had abundance without any factories, right? 00:40:42.920 |
So I think humanity probably would be capable 00:40:46.480 |
of fundamentally more positive and joy-filled mode 00:40:55.580 |
Clearly, Marx didn't quite have the right idea 00:41:09.520 |
And if we look at where we are in society now, 00:41:12.820 |
how to get there is a quite different question, 00:41:21.100 |
than a positive, joyous, compassionate existence, right? 00:41:28.860 |
Elon Musk dreams of colonizing Mars at the moment, 00:41:32.860 |
so maybe he'll have a chance to start a new civilization 00:41:41.620 |
We're sitting now, I don't know what the date is, 00:41:46.900 |
There's quite a bit of chaos in all different forms 00:41:49.300 |
going on in the United States and all over the world. 00:41:52.100 |
So there's a hunger for new types of governments, 00:41:55.580 |
new types of leadership, new types of systems. 00:42:04.180 |
- Yeah, I mean, colonizing Mars, first of all, 00:42:11.580 |
- Yeah, I mean, it's more important than making 00:42:26.460 |
I think the possible futures in which a Mars colony 00:42:31.460 |
makes a critical difference for humanity are very few. 00:42:38.020 |
I mean, I think, I mean, assuming we make a Mars colony 00:42:42.180 |
and people go live there in a couple of decades, 00:42:43.980 |
I mean, their supplies are gonna come from Earth. 00:42:48.780 |
and whatever powers are supplying the goods there 00:42:58.660 |
Of course, there are outlier situations where Earth 00:43:10.740 |
and then Mars is what allows humanity to persist. 00:43:14.180 |
But I think that those are very, very, very unlikely. 00:43:19.180 |
- You don't think it could be a first step 00:43:22.980 |
- Of course, it's a first step on a long journey, 00:43:30.900 |
of the physical universe will probably be done 00:43:33.180 |
by AGIs that are better designed to live in space 00:43:41.780 |
But I mean, who knows, we may cryopreserve ourselves 00:43:46.700 |
and like shoot ourselves out to Alpha Centauri and beyond. 00:43:50.700 |
I mean, that's all cool, it's very interesting 00:43:58.820 |
On the other hand, with AGI, we can get to a singularity 00:44:03.500 |
before the Mars colony becomes sustaining for sure, 00:44:10.060 |
- So your intuition is that that's the problem 00:44:12.340 |
if we really invest resources and we can get to faster 00:44:14.900 |
than a legitimate full self-sustaining colonization of Mars. 00:44:19.660 |
- Yeah, and it's very clear that we will to me 00:44:29.420 |
whereas the Mars colony, there's less economic value 00:44:47.980 |
are very, very interesting and I wanna see resources 00:44:55.420 |
and I'd rather see that than a lot of the garbage 00:45:01.780 |
On the other hand, I don't think Mars colonization 00:45:11.000 |
to make a critical difference in the evolution 00:45:13.900 |
of human or non-human life in this part of the universe 00:45:33.660 |
or to accelerate AGI are also very important. 00:45:42.740 |
but certainly you could see how the right quantum 00:45:44.980 |
computing architecture could massively accelerate AGI. 00:45:57.500 |
while not in the big picture as important as AGI, 00:46:02.140 |
of course it's important to all of us as individual humans 00:46:11.620 |
and distributed it tomorrow, I mean, that would be huge 00:46:23.340 |
- No, because if you can make a benevolent AGI, 00:46:31.980 |
once it's as generally intelligent as humans, 00:46:34.300 |
it can rapidly become massively more generally intelligent 00:47:16.620 |
without ever fully being in love with the term. 00:47:43.900 |
that could really think in the sense like people can 00:48:00.300 |
who I knew from the transhumanist and singularitarian world 00:48:09.780 |
and included Shane Legg, who had worked for me 00:48:14.100 |
at my company WebMind in New York in the late '90s, 00:48:20.460 |
He was one of the co-founders of Google DeepMind. 00:48:25.280 |
I think he may have just started doing his PhD 00:48:38.660 |
which sort of gives a mathematical foundation 00:48:43.380 |
So I reached out to Shane and Marcus and Peter Voss 00:48:46.100 |
and Pei Wang, who was another former employee of mine 00:48:49.440 |
who had been Douglas Hofstadter's PhD student 00:49:06.980 |
I was doing some, what we would now call narrow AI, 00:49:11.340 |
as well like applying machine learning to genomics data 00:49:25.960 |
Ray Kurzweil wrote about narrow AI versus strong AI. 00:49:33.640 |
first of all, narrow and strong are not antonyms. 00:49:43.340 |
to mean the hypothesis that digital computer AIs 00:49:46.620 |
could have true consciousness like human beings. 00:49:52.540 |
which was complexly different but related, right? 00:50:03.220 |
And so we talked about narrow AI, broad AI, wide AI, 00:50:09.740 |
And I think it was either Shane Legg or Peter Voss 00:50:58.140 |
- Yeah, we used that for the title of the book. 00:51:05.400 |
But then later, after the book was published, 00:51:13.600 |
"with the term AGI in like 1997 or something. 00:51:19.900 |
"to come out and say they published that in 1953." 00:51:25.240 |
- That term is not dramatically innovative or anything. 00:51:28.360 |
It's one of these obvious in hindsight things, 00:52:12.140 |
I mean, you can look at evolved versus engineered, 00:52:17.180 |
Then it should be engineered general intelligence, right? 00:52:40.040 |
a system called AIXI, which is quite beautiful. 00:52:55.440 |
quantum organized rational expanding intelligence. 00:53:03.680 |
which means the formal principle underlying AIXI. 00:53:09.520 |
- You're giving Elon Musk's new child a run for his money. 00:53:26.800 |
But my oldest son, Zarathustra, loves his name, 00:53:33.240 |
So, so far, basically, if you give your kids weird names-- 00:53:37.800 |
- Well, you're obliged to make the kids weird enough 00:53:40.960 |
So it directs their upbringing in a certain way. 00:53:56.360 |
The general is not really achievable within physics, 00:54:01.600 |
I mean, physics, as we know it, may be limited, 00:54:07.360 |
like from an information processing perspective, yeah. 00:54:10.440 |
- Yeah, intelligence is not very well-defined either. 00:54:16.760 |
I mean, in AI now, it's fashionable to look at it 00:54:19.560 |
as maximizing an expected reward over the future. 00:54:23.320 |
But that sort of definition is pathological in various ways. 00:54:31.300 |
he had a beautiful PhD thesis on open-ended intelligence, 00:54:40.120 |
He's looking at complex self-organizing systems 00:54:42.680 |
and looking at an intelligence system as being one 00:54:51.720 |
without necessarily there being one objective function 00:55:01.360 |
Very much Solaris from Stanislaw Lem's novels, right? 00:55:04.560 |
So yeah, the point is artificial general and intelligence-- 00:55:09.540 |
On the other hand, everyone knows what AI is, 00:55:20.760 |
Now it's out there everywhere, which baffles me. 00:55:27.080 |
We're stuck with AGI probably for a very long time 00:55:30.200 |
until AGI systems take over and rename themselves. 00:55:37.560 |
which mostly have nothing to do with graphics anymore. 00:55:40.520 |
- I wonder what the AGI system will call us humans. 00:55:52.400 |
Okay, so maybe also just a comment on AGI representing, 00:56:08.360 |
but there's always been this community of people 00:56:24.280 |
as it existed before this deep learning revolution, 00:56:27.000 |
all throughout the winters and the summers of AI? 00:56:53.960 |
And what's thought of as an AI winter or summer 00:56:57.700 |
was sort of how much money is the US military 00:57:04.660 |
On the other hand, there was AI going on in Germany, UK, 00:57:07.480 |
and in Japan, and in Russia, all over the place, 00:57:10.960 |
while US military got more and less enthused about AI. 00:57:15.960 |
- That happened to be, just for people who don't know, 00:57:20.200 |
the US military happened to be the main source 00:57:24.480 |
So another way to phrase that is it's up and down 00:57:27.480 |
of funding for artificial intelligence research. 00:57:31.080 |
- And I would say the correlation between funding 00:57:34.600 |
and intellectual advance was not 100%, right? 00:57:43.580 |
than in the US, but many foundational ideas were laid out, 00:57:48.120 |
but it was more theory than implementation, right? 00:57:50.860 |
And US really excelled at sort of breaking through 00:57:54.580 |
from theoretical papers to working implementations, 00:58:04.280 |
But still, I mean, you can look in the 1980s, 00:58:07.400 |
Ernst Dickmanns in Germany had self-driving cars 00:58:16.880 |
so it didn't catch on such as has happened now. 00:58:25.840 |
was pretty much independent of AI military summers 00:58:34.480 |
than not only most people on the planet realize, 00:58:40.060 |
because they've come up within a certain subfield of AI 00:58:47.660 |
But I would say when I got my PhD in 1989 in mathematics, 00:58:56.840 |
- Yeah, I started at NYU, then I transferred to Philadelphia, 00:59:10.920 |
'cause you were afraid if you stopped at a red light, 00:59:16.840 |
- Is it, every day driving or bicycling to Temple 00:59:20.240 |
from my house was like a new adventure, right? 00:59:27.540 |
was what people were doing in the academic AI field then 00:59:30.840 |
was just astoundingly boring and seemed wrong-headed to me. 00:59:42.080 |
I had nothing against logic as the cognitive engine 00:59:44.640 |
for an AI, but the idea that you could type in the knowledge 00:59:48.920 |
that AI would need to think seemed just completely stupid 01:00:17.120 |
which was, it was about a reinforcement learning system 01:00:21.880 |
called PURR-PUSS, P-U-R-R-P-U-S-S, which was an acronym 01:00:51.760 |
but it was like isolated, scattered, weird people. 01:00:55.200 |
But all these isolated, scattered, weird people 01:00:57.400 |
in that period, I mean, they laid the intellectual grounds 01:01:02.080 |
So you look at John Andreas at University of Canterbury 01:01:05.280 |
with his PURR-PUSS reinforcement learning Markov system. 01:01:09.720 |
He was the PhD supervisor for John Cleary in New Zealand. 01:01:17.040 |
when I was at Waikato University in 1993 in New Zealand, 01:01:25.900 |
which was the first open source machine learning toolkit, 01:01:37.440 |
which was a cool language back then, though, right? 01:01:39.560 |
- I guess it's still, well, it's not cool anymore, 01:01:48.760 |
but back then it was like Java or C++, basically. 01:01:58.680 |
was funded by a US, sorry, a New Zealand government grant 01:02:10.440 |
was about how to kill people or spy on people. 01:02:13.600 |
In New Zealand, it's all about cows or kiwi fruits, right? 01:02:20.560 |
had his probability theory-based reinforcement learning, 01:02:25.800 |
John Cleary was trying to do much more ambitious, 01:02:36.200 |
which was the first open-source machine learning tool. 01:02:39.240 |
It was sort of the predecessor for TensorFlow and Torch 01:02:55.800 |
my company WebMind, an AI company I had in the late '90s 01:03:14.120 |
probabilistic reinforcement learning AGI systems. 01:03:17.200 |
The technology, the computers just weren't there to support. 01:03:19.680 |
His ideas were very similar to what people are doing now. 01:03:23.880 |
But, you know, although he's long since passed away 01:03:27.680 |
and didn't become that famous outside of Canterbury, 01:03:42.160 |
that did ultimately lay the groundwork for what we have today 01:03:48.520 |
And so when I started trying to pull together 01:03:56.920 |
the early aughts when I was living in Washington, DC 01:04:07.080 |
And I organized the first AGI workshop in 2006. 01:04:20.360 |
It's not that edgy or underground, unfortunately, but still-- 01:04:35.680 |
although it's just quiet because of the nature 01:04:43.560 |
Mostly when something starts to work really well, 01:04:46.120 |
it's taken black and becomes even more quiet. 01:04:49.640 |
But yeah, the thing is that really had the feeling 01:04:58.960 |
plotting how to overthrow the narrow AI establishment. 01:05:05.760 |
coming together with others who shared their passion 01:05:24.600 |
and there's several hundred people rather than 50. 01:05:27.760 |
Now it's more like this is the main gathering 01:05:34.960 |
and think that large-scale nonlinear regression 01:05:54.400 |
the main concentration of people not obsessed 01:06:06.400 |
I mean, there's other little conferences and groupings 01:06:30.600 |
something more short-term and immediately practical 01:06:36.560 |
you could bullshit about AGI in the same breath 01:06:39.520 |
as time travel or the simulation hypothesis or something. 01:06:44.160 |
Whereas now, AGI is not only in the academic seminar room. 01:06:48.320 |
Like you have Vladimir Putin knows what AGI is. 01:06:51.920 |
And he's like, Russia needs to become the leader in AGI. 01:06:55.440 |
So national leaders and CEOs of large corporations. 01:07:04.200 |
this was years ago, Singularity Summit Conference, 01:07:18.800 |
which is the pursuit of like crazed mavericks, 01:07:24.520 |
to being a marketing term for large corporations 01:07:29.520 |
and the national leaders, which is an astounding transition. 01:07:40.120 |
I think a bunch of sub-communities have formed. 01:07:42.240 |
And the community around the AGI conference series 01:07:47.600 |
It hasn't grown as big as I might've liked it to. 01:07:51.920 |
On the other hand, sometimes a modest size community 01:07:56.280 |
can be better for making intellectual progress also. 01:07:59.460 |
You go to a Society for Neuroscience Conference, 01:08:07.480 |
On the other hand, you're not gonna talk to the leaders 01:08:19.240 |
the main kind of generic artificial intelligence conference 01:08:55.000 |
in these main artificial intelligence conferences 01:09:03.880 |
What I've seen bravely, you mentioned Shane Legg, 01:09:07.960 |
is DeepMind and then OpenAI are the two places 01:09:11.680 |
that are, I would say unapologetically so far, 01:09:15.580 |
I think it's actually changing, unfortunately, 01:09:22.760 |
- Well, they have billions of dollars behind them, 01:09:36.640 |
- And they have, I mean, DeepMind has Marcus Hutter 01:09:39.280 |
walking around, I mean, there's all these folks 01:09:42.120 |
who basically their full-time position involves 01:09:53.320 |
And I mean, so I'd say from a public marketing view, 01:10:15.160 |
there are other groups that are doing research 01:10:20.620 |
I mean, including a bunch of groups in Google's 01:10:33.840 |
but if you compare it to where it was 15 years ago, 01:10:41.920 |
You could say the same thing about super longevity research, 01:10:45.480 |
which is one of my application areas that I'm excited about. 01:10:49.080 |
I mean, I've been talking about this since the '90s, 01:11:02.340 |
you were way, way, way, way out of the industry 01:11:11.500 |
Craig Venter had Human Longevity Incorporated, 01:11:14.000 |
and then once the suits come marching in, right? 01:11:24.920 |
So it's still not as mainstream as cancer research, 01:11:32.960 |
But the degree of mainstreaming that's happened 01:11:40.120 |
to those of us who've been at it for a while. 01:11:42.080 |
- Yeah, but there's a marketing aspect to the term, 01:11:57.720 |
that the nonlinear regression, as you mentioned. 01:12:01.160 |
Like, what's your sense with OpenCog, with your work, 01:12:12.000 |
For me, always seemed to capture a deep element 01:12:24.900 |
but that seems to be missing from a lot of research currently. 01:12:47.900 |
And I think there are many, many different approaches 01:12:56.920 |
So I don't think there's like one golden algorithm, 01:13:05.840 |
And I mean, flying machines is the much-worn analogy here, 01:13:29.920 |
- And there's a catapult that you can just launch. 01:13:33.160 |
- There's bicycle-powered flying machines, right? 01:13:43.800 |
Now, so one issue with AGI is we don't yet have the analog 01:13:50.800 |
And that's what Marcus Hutter was trying to make 01:13:54.640 |
with AIXI and his general theory of general intelligence. 01:13:58.820 |
But that theory, in its most clearly articulated parts, 01:14:03.360 |
really only works for either infinitely powerful machines 01:14:07.120 |
or almost, or insanely impractically powerful machines. 01:14:11.860 |
So I mean, if you were gonna take a theory-based approach 01:14:42.340 |
that it wants to take, before taking that action, 01:14:45.020 |
it looks at all its history, and then it looks 01:14:50.400 |
to make a decision, and it decides which decision program 01:14:56.140 |
according to its reward function over its history, 01:15:14.340 |
computer programs that have runtime less than T 01:15:27.940 |
and what will probably be done 50 years from now 01:15:29.900 |
to make an AGI, is say, okay, well, we have some constraints. 01:15:37.520 |
and we have space and time constraints on the program. 01:15:45.420 |
and we have this particular class of environments 01:15:52.740 |
manipulating physical objects on the surface of the Earth, 01:15:58.160 |
whatever our particular, not annihilating humanity, 01:16:02.260 |
whatever our particular requirements happen to be, 01:16:17.060 |
specialize it to the computing resource constraints 01:16:23.740 |
and then it will spit out the specialized version 01:16:30.660 |
in your environment, which will be your AGI, right? 01:16:41.300 |
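As a rough illustration of the brute-force idea Ben describes here, the following toy Python sketch enumerates a small space of candidate decision programs, scores each one against the agent's history using its reward function, and acts as the best-scoring program would. The program representation (tiny lookup tables) and the scoring rule are invented for illustration only; this is not Hutter's actual AIXI-tl construction.

```python
# Toy sketch of the brute-force idea described above: before each action,
# rescore every bounded candidate program on the whole history and act as
# the best one would. This is NOT Hutter's actual AIXI-tl construction;
# the program space (lookup tables) and scoring rule are illustrative only.
import itertools
from typing import Callable, List, Tuple

Action = int
Observation = int
History = List[Tuple[Observation, Action, float]]   # (observation, action, reward)

def enumerate_programs(max_count: int) -> List[Callable[[Observation], Action]]:
    """Stand-in for 'all computer programs with runtime less than T':
    here, every lookup table over a tiny observation/action space."""
    obs_space, act_space = range(3), range(2)
    programs = []
    for table in itertools.product(act_space, repeat=len(obs_space)):
        programs.append(lambda obs, t=table: t[obs])
    return programs[:max_count]

def score_on_history(program: Callable[[Observation], Action],
                     history: History) -> float:
    """Reward this program would have collected, counted over the steps
    where its choice agrees with the action that was actually taken."""
    return sum(r for (obs, act, r) in history if program(obs) == act)

def choose_action(history: History, observation: Observation) -> Action:
    """Pick the next action by deferring to the historically best program."""
    programs = enumerate_programs(max_count=1000)
    best = max(programs, key=lambda p: score_on_history(p, history))
    return best(observation)

# Example: after a short history, choose the next action for observation 1.
history: History = [(0, 1, 1.0), (1, 0, 0.5), (0, 1, 1.0)]
print(choose_action(history, observation=1))         # -> 0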
- That's a very Russian approach, by the way. 01:16:43.180 |
Like, the whole field of program specialization 01:16:53.660 |
You can have a generic program for sorting lists, 01:17:00.780 |
- You can run an automated program specializer 01:17:05.420 |
that's optimal for sorting lists of length 10,000 or less. 01:17:17.500 |
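A minimal sketch of the program specialization idea just mentioned: given the promise that inputs are short lists, a specializer can hand back a sort routine tuned to that constraint. The sketch below just selects a strategy from the constraint rather than transforming code the way a real partial evaluator or supercompiler would, and all function names are made up for illustration.

```python
# Toy illustration of program specialization: a generic sort plus a
# "specializer" that returns a version tuned to a known input-size bound.
# A real specializer (partial evaluation, supercompilation) transforms the
# program itself; this hand-rolled sketch just picks a strategy from the
# constraint, and all function names are made up for illustration.
from typing import Callable, List

def generic_sort(xs: List[int]) -> List[int]:
    """Works for lists of any length."""
    return sorted(xs)

def insertion_sort(xs: List[int]) -> List[int]:
    """Simple quadratic sort; typically fast for very short lists."""
    out: List[int] = []
    for x in xs:
        i = len(out)
        while i > 0 and out[i - 1] > x:
            i -= 1
        out.insert(i, x)
    return out

def specialize_sort(max_len: int) -> Callable[[List[int]], List[int]]:
    """Given the promise 'inputs never exceed max_len', return a sort routine
    specialized to that environment."""
    if max_len <= 32:
        return insertion_sort   # exploit the bound: skip the general machinery
    return generic_sort         # otherwise fall back to the generic routine

sort_short = specialize_sort(10)    # specialized for lists of length <= 10
print(sort_short([3, 1, 2]))        # -> [1, 2, 3]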
So you're kind of evolving human beings or living creatures. 01:17:20.740 |
- Exactly, I mean, your Russian heritage is showing there. 01:17:24.340 |
So with Alexander Vityaev and Peter Anokhin and so on, 01:17:31.820 |
of thinking about evolution that way also, right? 01:17:36.820 |
Well, my point is that what we're thinking of 01:17:46.660 |
like are being used in the commercial AI field now, 01:17:51.460 |
okay, how do we make it more and more general? 01:18:17.580 |
within the resource constraints that you have, 01:18:20.540 |
but will achieve the particular things that you care about? 01:18:28.180 |
If I ask you to run a maze in 750 dimensions, 01:18:43.060 |
And it does not have a 750-dimensional map in it. 01:19:17.540 |
and their brain has yet to sort of crystallize 01:19:20.060 |
into appropriate structures for processing aspects 01:19:26.580 |
the young child is very tied to their sensorium, 01:19:30.180 |
whereas we can deal with abstract mathematics, 01:19:56.820 |
that comes with the development process, actually. 01:20:06.300 |
at the young age, I guess is one way to put it. 01:20:18.540 |
- A human adult is much more generally intelligent 01:20:28.220 |
which is why we put up with their repetitiveness 01:20:35.540 |
a beginner's mind, which is a beautiful thing, 01:20:50.500 |
- So by the time you're an ugly old man like me, 01:20:52.460 |
you gotta get really, really smart to compensate. 01:20:56.220 |
- But yeah, going back to your original question, 01:21:14.620 |
to the specific goals that humans need to achieve 01:21:21.940 |
And both of these, the goals and the resources 01:21:24.620 |
and the environments, I mean, all this is important. 01:21:48.100 |
is quite different than the way I would want to create AGI 01:21:51.780 |
on say a modern server farm of CPUs and GPUs, 01:22:00.260 |
on whatever quantum computer we'll have in 10 years, 01:22:03.780 |
supposing someone makes a robust quantum Turing machine 01:22:12.660 |
of the patterns of organization in the human brain 01:22:25.220 |
that is one powerful class of learning algorithms, 01:22:30.020 |
that evolved to exploit the particulars of the human brain 01:22:36.300 |
If you're looking at the computational substrate 01:22:41.020 |
you won't necessarily want the same algorithms 01:22:48.900 |
you could look at maybe the best algorithms on the brain 01:22:51.780 |
and the best algorithms on a modern computer network 01:23:04.940 |
So that's about the hardware side and the software side, 01:23:16.420 |
on what I called the embodied communication prior, 01:23:20.340 |
which was quite similar in intent to Yoshua Bengio's 01:23:26.780 |
except I didn't wanna wrap up consciousness in it 01:23:30.420 |
because to me, the qualia problem and subjective experience 01:23:37.900 |
but I would rather keep that philosophical debate 01:23:42.580 |
distinct from the debate of what kind of biases 01:23:51.460 |
is really addressing that kind of consciousness. 01:23:58.820 |
He's by far my favorite of the lions of deep learning. 01:24:15.940 |
So what I called it was the embodied communication prior. 01:24:26.660 |
You can say being human, but that's very abstract, right? 01:24:41.300 |
And we've also evolved to communicate via language 01:24:49.260 |
that are going around doing things collectively with us 01:25:14.580 |
or constrained version of universal general intelligence 01:25:23.180 |
but whose general intelligence will be biased 01:25:36.540 |
with other similar agents in that same world, right? 01:25:41.580 |
you're starting to get a requirements analysis 01:25:48.100 |
And then that leads you into cognitive science. 01:26:25.780 |
- Cognitive science, it was cross-disciplinary 01:26:33.300 |
But yeah, we were teaching psychology students 01:26:42.100 |
which was the early version of a deep neural network. 01:26:47.900 |
was very, very slow to train back then, right? 01:26:54.340 |
- Systems that are supposed to deal with physical objects. 01:27:00.660 |
you can see there's multiple types of memory, 01:27:25.000 |
which to some extent is sense modality specific, 01:27:27.540 |
and then to some extent is unified across sense modalities. 01:27:32.540 |
There's procedural memory, memory of how to do stuff, 01:27:39.880 |
but it's also a little more abstract than motor memory. 01:27:43.620 |
It involves cerebellum and cortex working together. 01:27:51.540 |
which has to do with linkages of cortex and limbic system. 01:27:55.900 |
There's specifics of spatial and temporal modeling 01:28:00.420 |
which has to do with the hippocampus and thalamus 01:28:05.380 |
And the basal ganglia, which influences goals. 01:28:10.980 |
sub-goals and sub-sub-goals we wanted to perceive 01:28:15.060 |
Human brain has substantially different subsystems 01:28:21.020 |
and substantially differently tuned learning, 01:28:24.240 |
like differently tuned modes of long-term potentiation 01:28:27.280 |
to do with the types of neurons and neurotransmitters 01:28:31.260 |
corresponding to these different types of knowledge. 01:28:33.060 |
And these different types of memory and learning 01:28:35.860 |
in the human brain, I mean, you can back these all 01:28:38.520 |
into embodied communication for controlling agents 01:28:44.680 |
Now, so if you look at building an AGI system, 01:28:47.700 |
one way to do it, which starts more from cognitive science 01:28:57.340 |
- Yeah, yeah, necessary for this sort of intelligence. 01:29:04.580 |
And then how do you connect all these things together, right? 01:29:07.780 |
And of course, the human brain did it incrementally 01:29:10.760 |
through evolution because each of the sub-networks 01:29:14.340 |
of the brain, I mean, it's not really the lobes 01:29:20.780 |
Which of the, each of the sub-networks of the brain 01:29:23.660 |
co-evolved with the other sub-networks of the brain, 01:29:27.140 |
both in terms of its patterns of organization 01:29:31.820 |
So they all grew up communicating and adapting to each other. 01:29:48.460 |
and the perception box here and wire them together. 01:30:01.420 |
The parts co-evolve so as to adapt and work together. 01:30:06.020 |
how every human engineered system that flies, 01:30:10.180 |
that was what we're using that analogy before, 01:30:19.800 |
work in cognitive architectures, for example, 01:30:31.020 |
and John Laird, who built the SOAR architecture, 01:30:40.420 |
How I was looking at the AI world about 10 years ago, 01:30:44.500 |
before this whole commercial deep learning explosion was, 01:30:47.820 |
on the one hand, you had these cognitive architecture guys 01:30:53.460 |
and cognitive scientists who had thought a lot about 01:31:00.380 |
On the other hand, you had these learning theory guys 01:31:03.580 |
who didn't care at all about the architecture, 01:31:07.580 |
how do you recognize patterns in large amounts of data? 01:31:14.220 |
was to get the learning that the learning theory guys 01:31:18.460 |
were doing and put it together with the architecture 01:31:21.420 |
that the cognitive architecture guys were doing. 01:31:25.940 |
Now, you can't, unfortunately, when you look at the details, 01:31:30.760 |
you can't just do that without totally rebuilding 01:31:34.940 |
what is happening on both the cognitive architecture 01:31:41.780 |
but what they ultimately did is like take a deep neural net 01:31:46.580 |
and you include it as one of the black boxes. 01:31:51.940 |
The learning mechanism becomes one of the boxes 01:31:53.820 |
as opposed to fundamental part of the system. 01:31:57.380 |
You could look at some of the stuff DeepMind has done, 01:32:00.420 |
like the differential neural computer or something. 01:32:03.220 |
That sort of has a neural net for deep learning perception. 01:32:07.060 |
It has another neural net, which is like a memory matrix 01:32:10.620 |
that stores, say, the map of the London subway or something. 01:32:13.060 |
So, probably Demis Hassabis was thinking about this 01:32:21.700 |
he was doing a bunch on cortex-hippocampus interconnection. 01:32:24.540 |
So there, the DNC would be an example of folks 01:32:27.260 |
from the deep neural net world trying to take a step 01:32:32.180 |
by having two neural modules that correspond roughly 01:32:36.660 |
that deal with different kinds of memory and learning. 01:32:38.880 |
But on the other hand, it's super, super, super crude 01:32:44.060 |
Just as what John Laird and SOAR did with neural nets 01:32:48.020 |
was super, super crude from a learning point of view, 01:32:51.180 |
'cause the learning was like off to the side, 01:32:53.320 |
not affecting the core representations, right? 01:32:55.820 |
And when you weren't learning the representation, 01:32:57.860 |
you were learning the data that feeds into the, 01:33:00.420 |
you were learning abstractions of perceptual data 01:33:02.660 |
to feed into the representation that was not learned, right? 01:33:22.820 |
- And what I was gonna say is it didn't happen 01:33:24.580 |
in terms of bringing the lines of cognitive architecture 01:33:30.540 |
It did work in the sense that a bunch of younger researchers 01:33:33.820 |
have had their heads filled with both of those ideas. 01:33:38.900 |
who was a university professor, often quoted to me, 01:33:41.420 |
which was science advances one funeral at a time, 01:33:47.900 |
Like I'm 53 years old, and I'm trying to invent 01:34:02.340 |
Like the people who've been at AI a long time 01:34:05.780 |
and have made their career at developing one aspect, 01:34:08.820 |
like a cognitive architecture or a deep learning approach, 01:34:19.700 |
I mean, I try quite hard to remain flexible-minded. 01:34:23.700 |
- Have you been successful somewhat in changing, 01:34:26.500 |
maybe, have you changed your mind on some aspects 01:34:29.660 |
of what it takes to build an AGI, like technical things? 01:34:32.980 |
- The hard part is that the world doesn't want you to. 01:34:41.060 |
The other part is that the world doesn't want you to. 01:34:57.140 |
But yeah, I've changed my mind on a bunch of things. 01:35:10.780 |
although I think it will be a super major enhancement. 01:35:18.340 |
of embarking on a complete rethink and rewrite 01:35:26.180 |
together with Alexey Potapov and his team in St. Petersburg, 01:35:31.620 |
So now we're trying to like go back to basics, 01:35:45.700 |
and design the best framework for the next stage. 01:35:53.380 |
from the recent successes with deep neural nets 01:35:59.060 |
I mean, people made these essentially trivial systems 01:36:07.140 |
And I want to incorporate that knowledge appropriately 01:36:13.580 |
On the other hand, I also think current deep neural net 01:36:18.580 |
architectures as such will never get you anywhere near AGI. 01:36:28.420 |
and like saying, well, these things are garbage 01:36:40.800 |
There's a lot of interesting stuff to be learned there, 01:36:48.020 |
- So maybe this is a good chance to step back. 01:36:57.540 |
Yeah, maybe talk through the history of OpenCog 01:37:08.740 |
worth throwing around sort of tongue in cheek 01:37:14.540 |
that we're working on now is not remotely close 01:37:27.420 |
but it's still an early stage research system, right? 01:37:29.820 |
And actually, we are going back to the beginning 01:37:40.700 |
'cause we feel like that's the right thing to do, 01:37:42.860 |
but I'm sure what we end up with is gonna have 01:37:45.580 |
a huge amount in common with the current system. 01:37:48.540 |
I mean, we all still like the general approach. 01:37:54.380 |
- Sure, OpenCog is an open-source software project 01:37:59.380 |
that I launched together with several others in 2008, 01:38:04.380 |
and probably the first code written toward that 01:38:11.140 |
that was developed as a proprietary code base 01:38:29.460 |
Primarily, there's a bunch of Scheme as well, 01:38:35.300 |
something we'll also talk about is SingularityNET. 01:38:52.460 |
- Yeah, I mean, SingularityNET is a separate project 01:38:59.460 |
and you can use SingularityNET as part of the infrastructure 01:39:08.380 |
- So OpenCog, on the one hand, as a software framework, 01:39:17.020 |
of different AI architectures and algorithms. 01:39:21.900 |
But in practice, there's been a group of developers, 01:39:26.460 |
which I've been leading together with Linas Vepstas, 01:39:35.100 |
and infrastructure to implement certain ideas 01:39:49.380 |
'cause in theory, you could use that software, 01:39:53.460 |
you could use it to make a lot of different AGI. 01:39:55.900 |
- What kind of stuff does the software platform provide? 01:39:58.660 |
Like in terms of utilities, tools, like what? 01:40:12.260 |
So the core component of OpenCog as a software platform 01:40:22.860 |
- AtomSpace, yeah, yeah, not Adam, like Adam and Eve, 01:40:22.860 |
Yeah, so you have a hypergraph, which is like a, 01:40:40.940 |
but links can go between more than two nodes. 01:40:51.740 |
because you can have links pointing to links, 01:40:54.060 |
or you could have links pointing to whole subgraphs, right? 01:40:56.820 |
So it's an extended hypergraph or a metagraph. 01:41:04.500 |
- But I don't think it was yet a technical term 01:41:06.380 |
when we started calling this a generalized hypergraph. 01:41:13.380 |
generalized hypergraph or weighted-labeled metagraph. 01:41:16.940 |
The weights and labels mean that the nodes and links 01:41:19.180 |
can have numbers and symbols attached to them. 01:41:24.940 |
They can have numbers on them that represent, 01:41:35.060 |
and then the hypergraph can be reduced to a graph. 01:41:37.660 |
and you can reduce a graph to an adjacency matrix. 01:41:39.900 |
So, I mean, there's always multiple representations. 01:41:46.780 |
And so, similarly, you could have a link to a whole graph 01:41:54.900 |
And I could say, I reject this body of information. 01:42:04.020 |
I mean, there are many alternate representations, 01:42:11.800 |
which is this weighted-labeled generalized hypergraph. 01:42:24.140 |
Then there are various utilities for dealing with that. 01:42:29.800 |
which lets you specify a sort of abstract pattern 01:42:37.920 |
to see what sub-hypergraphs may match that pattern, 01:42:48.780 |
which lets you run a bunch of different agents or processes 01:42:57.640 |
basically, it reads stuff from the atom space 01:43:01.880 |
So this is sort of the basic operational model. 01:43:10.360 |
just from a scalable software engineering standpoint. 01:43:13.200 |
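A minimal sketch of the operational model described here, assuming a toy weighted, labeled metagraph whose links can point to nodes or to other links, a crude pattern matcher, and an agent that reads from and writes back to the space. The class and method names are invented for illustration and are not the actual OpenCog AtomSpace API.

```python
# Minimal sketch of a weighted, labeled metagraph plus agents that read and
# write to it, in the spirit of the atom-space model described above.
# All names here are invented for illustration; this is not the OpenCog API.
from dataclasses import dataclass
from typing import Dict, List, Tuple, Union

@dataclass(frozen=True)
class Node:
    type: str                      # e.g. "Concept"
    name: str                      # e.g. "cat"

@dataclass(frozen=True)
class Link:
    type: str                      # e.g. "Inheritance"
    targets: Tuple["Atom", ...]    # links may point to nodes or to other links

Atom = Union[Node, Link]

class ToyAtomSpace:
    """A toy 'atom space': atoms plus numeric values (weights, labels) per atom."""
    def __init__(self) -> None:
        self.values: Dict[Atom, Dict[str, float]] = {}

    def add(self, atom: Atom, **values: float) -> Atom:
        self.values.setdefault(atom, {}).update(values)
        return atom

    def match(self, link_type: str) -> List[Link]:
        """Crude stand-in for the pattern matcher: all links of a given type."""
        return [a for a in self.values if isinstance(a, Link) and a.type == link_type]

def hebbian_agent(space: ToyAtomSpace) -> None:
    """One 'cognitive process': read some links, bump their weights, write back."""
    for link in space.match("Hebbian"):
        space.values[link]["weight"] = space.values[link].get("weight", 0.0) + 0.1

space = ToyAtomSpace()
cat = space.add(Node("Concept", "cat"))
animal = space.add(Node("Concept", "animal"))
space.add(Link("Inheritance", (cat, animal)), strength=0.9, confidence=0.7)
space.add(Link("Hebbian", (cat, animal)), weight=0.2)
hebbian_agent(space)               # the Hebbian link's weight is now 0.3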
- So you could use this, I don't know if you've, 01:43:17.120 |
physics project recently with the hypergraphs and stuff? 01:43:20.160 |
Could you theoretically use the software framework to-- 01:43:22.960 |
- You certainly could, although Wolfram would rather die 01:43:26.160 |
than use anything but Mathematica for his work. 01:43:29.060 |
- Well, that's, yeah, but there's a big community of people 01:43:36.120 |
And like you said, the young minds love the idea 01:43:40.160 |
- Yeah, yeah, that's right, and I would add on that note, 01:43:42.840 |
the idea of using hypergraph-type models in physics 01:43:50.360 |
- Well, I'm sure they did, and a guy named Ben Dribus, 01:43:50.360 |
who's a mathematician, a professor in Louisiana or somewhere, 01:43:58.200 |
had a beautiful book on quantum sets and hypergraphs 01:44:01.960 |
and algebraic topology for discrete models of physics 01:44:05.540 |
and carried it much farther than Wolfram has, 01:44:19.300 |
you could use it to model biological networks 01:44:34.100 |
biologically realistic neural networks, for example. 01:44:45.900 |
What kind of computations, just to get a sense, 01:44:49.100 |
- So in theory, they could do anything they want to do. 01:45:02.040 |
is taken up with reads and writes to the atom space. 01:45:02.040 |
And so that's a very different processing model 01:45:09.060 |
than, say, the matrix multiplication-based model 01:45:19.620 |
that just factored numbers for a billion years. 01:45:33.380 |
into this weighted labeled hypergraph, right? 01:45:36.420 |
And that has both cognitive architecture importance, 01:45:57.180 |
whereas a graph chip would be incredibly useful, right? 01:46:05.260 |
But I think in the next, let's say, three to five years, 01:46:14.740 |
and the back and forth between multiple processes 01:46:19.380 |
acting SIMD and MIMD on that graph is gonna be fast. 01:46:23.620 |
And then that may do for OpenCog-type architectures 01:46:31.340 |
can you comment on thoughts about neuromorphic computing? 01:46:42.700 |
because I think they can massively speed up-- 01:46:46.460 |
which is a class of architectures that I'm working on. 01:47:03.420 |
Like memristors should be amazing too, right? 01:47:06.380 |
So a lot of these things have obvious potential, 01:47:21.700 |
a biologically realistic hardware neural network, 01:47:30.680 |
that emulated like the Hodgkin-Huxley equation 01:47:43.800 |
that would seem that it would make more feasible 01:47:51.040 |
No, what's been done so far is not like that. 01:47:57.120 |
I mean, I've done a bunch of work in cognitive, 01:48:04.640 |
in DC, the Intelligence Advanced Research Projects Activity (IARPA). 01:48:04.640 |
using realistic non-linear dynamical models of neurons, 01:48:21.880 |
what's going on in the mind of a GEOINT intelligence analyst 01:48:24.760 |
while they're trying to find terrorists on a map, right? 01:48:29.840 |
having neuromorphic hardware that really let you simulate 01:48:34.040 |
like a realistic model of the neuron would be amazing, 01:48:48.400 |
hypergraph knowledge representation-based architectures, 01:49:01.980 |
It's reading and writing to the graph in RAM. 01:49:11.880 |
GPUs, which are really good at multiplying matrices, 01:49:20.200 |
that try to boil down graph processing to matrix operations, 01:49:34.180 |
it's also all about how to get matrix and vector operations 01:49:41.280 |
I mean, quantum mechanics is all unitary matrices, 01:49:47.320 |
you could also try to make graph-centric quantum computers, 01:49:54.440 |
and then we can take the OpenCog implementation layer, 01:50:04.040 |
but that may be the singularity squared, right? 01:50:08.240 |
I'm not sure we need that to get to human level. 01:50:12.400 |
- That's already beyond the first singularity, 01:50:17.160 |
- No, no, yeah, and the hypergraph and OpenCog. 01:50:19.560 |
- Yeah, yeah, that's the software framework, right? 01:50:37.700 |
and the operations constantly grow and change the graph? 01:50:41.800 |
- And, but is it constantly adding links and so on? 01:50:47.200 |
- So it's not, so the write and read operations 01:50:51.160 |
this isn't just a fixed graph to which you change the weights, 01:51:09.680 |
cascade correlation neural net architectures 01:51:29.440 |
is an equally critical part of the system's operations. 01:51:33.000 |
So now, when you start to add these cognitive algorithms 01:51:44.800 |
creating a cognitive architecture is basically two things. 01:51:51.520 |
you wanna put on the nodes and links in the hypergraph, 01:51:56.120 |
and then it's choosing what collection of agents, 01:52:01.000 |
what collection of AI algorithms or processes 01:52:17.480 |
there are some links that are more neural net-like, 01:52:19.900 |
they just have weights to get updated by Hebbian learning, 01:52:26.000 |
There are other links that are more logic-like, 01:52:44.420 |
You can also have procedure-like nodes and links, 01:52:47.400 |
as in, say, a combinatory logic or lambda calculus 01:52:54.960 |
representing many different types of semantics, 01:52:58.660 |
which means you could make a horrible, ugly mess, 01:53:22.080 |
which is one thing that we are aiming to resolve 01:53:27.540 |
- So what to you is the most beautiful aspect 01:53:41.760 |
- What fascinates me is finding a common representation 01:53:47.880 |
that underlies abstract declarative knowledge 01:53:59.560 |
and procedural knowledge and episodic knowledge, 01:54:06.200 |
where all these types of knowledge are stored 01:54:24.480 |
Like what you want is, if you have a logic engine 01:54:32.280 |
and you have say an evolutionary learning system 01:54:41.160 |
and passing inputs and outputs to each other, 01:54:52.240 |
so that they can help each other out of bottlenecks 01:54:55.460 |
and help each other solve combinatorial explosions 01:54:58.320 |
by intervening inside each other's cognitive processes. 01:55:07.400 |
and a deep neural net are represented in the same form. 01:55:26.520 |
amongst all these different kinds of knowledge 01:55:56.120 |
or at least that's the oldest recollection we have of it. 01:56:07.520 |
like an inheritance link between node A and node B. 01:56:12.520 |
So in term logic, the basic deduction operation 01:56:16.840 |
is A implies B, B implies C, therefore A implies C. 01:56:28.200 |
So it's a slightly different way of breaking down logic. 01:56:57.480 |
Now, you may also have like a Hebbian neural link 01:57:00.080 |
from A to C, which is the degree to which thinking, 01:57:03.640 |
the degree to which A being the focus of attention 01:57:13.740 |
like logical inheritance link in your term logic. 01:57:19.560 |
but they could be used to guide each other as well. 01:57:22.960 |
Like if there's a large amount of neural weight 01:57:28.420 |
that may direct your logic engine to think about, 01:57:37.440 |
On the other hand, if there's a logical relation 01:57:39.960 |
between A and B, that may direct your neural component 01:57:45.560 |
or should I be directing some attention to B also, 01:57:50.200 |
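A rough sketch of the two-way guidance described above, with invented numbers: Hebbian-style attention weights decide which two-step inheritance chain the logic engine tries first, and each fresh logical conclusion feeds back into the attention weights. The strength combination here is a naive product, not the actual PLN deduction rule.

```python
# Toy illustration of logic links and Hebbian links guiding each other.
# Truth values and attention weights are invented for the example.

# Term-logic style inheritance links: (A, B) -> strength of "A implies B"
inheritance = {("A", "B"): 0.9, ("B", "C"): 0.8, ("B", "D"): 0.8}

# Hebbian links: how strongly attention to one atom pulls attention to another
hebbian = {("A", "C"): 0.7, ("A", "D"): 0.1}

def candidate_deductions(src):
    """All two-step chains src -> mid -> dst available in the inheritance links."""
    for (a, mid), s1 in inheritance.items():
        if a != src:
            continue
        for (m, dst), s2 in inheritance.items():
            if m == mid:
                yield (src, mid, dst, s1 * s2)   # naive strength combination

def prioritized(src):
    """Let Hebbian weights decide which conclusion the logic engine tries first."""
    return sorted(candidate_deductions(src),
                  key=lambda c: hebbian.get((c[0], c[2]), 0.0),
                  reverse=True)

for src, mid, dst, strength in prioritized("A"):
    prior = hebbian.get((src, dst), 0.0)              # attention weight before inference
    inheritance[(src, dst)] = strength                # logic engine writes src -> dst
    hebbian[(src, dst)] = max(prior, 0.5 * strength)  # new logical link feeds attention back
    print(f"derived {src}->{dst} strength={strength:.2f} (Hebbian prior {prior:.2f})")
```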
So in terms of logic, there's a lot of thought 01:57:53.840 |
that went into how do you break down logic relations, 01:57:58.320 |
including basic sort of propositional logic relations 01:58:07.100 |
How do you break those down elegantly into a hypergraph? 01:58:10.920 |
'Cause you, I mean, you can boil logic expression 01:58:16.680 |
We tried to find elegant ways of sort of hierarchically 01:58:21.240 |
breaking down complex logic expression into nodes and links, 01:58:26.240 |
so that if you have, say, different nodes representing, 01:58:30.760 |
you know, Ben, AI, Lex, interview, or whatever, 01:58:34.200 |
the logic relations between those things are compact 01:58:52.240 |
Yeah, in simple cases, it's interpretable by humans, 01:58:57.900 |
I would say logic systems give more potential 01:59:09.840 |
than neural net systems, but you still have to work at it, 01:59:12.880 |
because, I mean, if I show you a predicate logic proposition 01:59:18.720 |
and existential quantifiers and 217 variables, 01:59:22.380 |
that's no more comprehensible than the weight matrix 01:59:26.580 |
So I'd say the logic expressions that an AI learns 01:59:29.480 |
from its experience are mostly totally opaque 01:59:32.480 |
to human beings, and maybe even harder to understand 01:59:36.720 |
when you have multiple nested quantifier bindings, 01:59:41.520 |
There is a difference, though, in that within logic, 01:59:44.720 |
it's a little more straightforward to pose the problem 01:59:52.720 |
Like, you can distill a neural net to a simpler form, 01:59:55.680 |
but that's more often done to make a neural net 01:59:57.280 |
that'll run on an embedded device or something. 01:59:59.320 |
It's harder to distill a net to a comprehensible form 02:00:05.640 |
to a comprehensible form, but it doesn't come for free. 02:00:08.600 |
Like, what's in the AI's mind is incomprehensible 02:00:16.880 |
So on the procedural side, there's some different 02:00:22.980 |
I mean, if you're familiar, in computer science, 02:00:25.800 |
there's something called the Curry-Howard correspondence, 02:00:27.820 |
which is a one-to-one mapping between proofs and programs. 02:00:51.840 |
that's a procedure represented in OpenCog's hypergraph. 02:00:55.840 |
But if you wanna reason on how to improve that procedure, 02:01:11.120 |
and then map that back into the procedural representation 02:01:18.800 |
can you make your procedure into a bunch of nodes and links, 02:01:23.240 |
A C++ compiler has nodes and links inside it. 02:01:29.840 |
in a way that's hierarchically decomposed and simple enough? 02:01:34.520 |
- Yeah, yeah, that given the resource constraints at hand, 02:01:37.040 |
you can map it back and forth to your term logic, 02:01:45.200 |
So there's just a lot of nitty gritty particulars there. 02:01:50.200 |
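A toy illustration of the back-and-forth just mentioned: a small procedure represented as a tree of nodes and links, a declarative rewrite applied to that tree, and a mapping back to something executable. This is only a sketch of the idea; OpenCog's actual procedure atoms and type machinery work differently.

```python
# Toy sketch: a procedure represented as nodes and links (an expression tree),
# transformed declaratively, then mapped back to something executable.

def apply_node(op, *args):
    return ("apply", op, args)

# plus(times(x, 1), 3) as a little node-and-link tree
proc = apply_node("plus", apply_node("times", ("var", "x"), ("const", 1)), ("const", 3))

def simplify(node):
    """Declarative rewrite on the graph: times(e, 1) -> e."""
    if isinstance(node, tuple) and node[0] == "apply":
        _, op, args = node
        args = tuple(simplify(a) for a in args)
        if op == "times" and ("const", 1) in args:
            others = [a for a in args if a != ("const", 1)]
            if len(others) == 1:
                return others[0]
        return ("apply", op, args)
    return node

OPS = {"plus": lambda a, b: a + b, "times": lambda a, b: a * b}

def compile_back(node):
    """Map the node-and-link form back into an executable Python closure."""
    if node[0] == "const":
        return lambda env: node[1]
    if node[0] == "var":
        return lambda env: env[node[1]]
    _, op, args = node
    arg_fns = [compile_back(a) for a in args]
    return lambda env: OPS[op](*(f(env) for f in arg_fns))

simplified = simplify(proc)                 # times(x, 1) collapses to x
print(simplified)                           # ('apply', 'plus', (('var', 'x'), ('const', 3)))
print(compile_back(simplified)({"x": 4}))   # 7
```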
But by the same token, if you ask a chip designer, 02:01:54.520 |
like how do you make the Intel i7 chip so good? 02:01:57.840 |
There's a long list of technical answers there, 02:02:02.560 |
which will take a while to go through, right? 02:02:06.640 |
I mean, the first AI system of this nature I tried to build 02:02:13.440 |
And we had a big graph, a big graph operating in RAM 02:02:18.880 |
which was a terrible, terrible implementation idea. 02:02:26.000 |
So like there, the core loop looped through all nodes 02:02:40.800 |
to get them to operate together very cleanly. 02:02:43.440 |
So it was really, it was quite a horrible mess. 02:03:00.840 |
You know, how do you represent genotypes for evolution 02:03:09.040 |
associated with these different types of knowledge 02:03:15.000 |
It's taken decades and it's totally off to the side 02:03:18.600 |
of what the commercial mainstream of the AI field is doing, 02:03:23.120 |
which isn't thinking about representation at all, really. 02:03:31.960 |
about how do you make representation of a map 02:03:54.800 |
represent in a way that allows cross learning 02:04:00.160 |
We've been prototyping and experimenting with this 02:04:13.800 |
this has not yet been cashed out in an AGI system, right? 02:04:27.720 |
We've built a bunch of sort of vertical market specific 02:04:36.680 |
but we haven't, that's not the AGI goal, right? 02:04:42.680 |
So now what we're looking at with our rebuild of the system-- 02:04:51.400 |
So we're not quite sure what the name is yet. 02:05:02.160 |
- It's kind of like the real AI starting point 02:05:06.920 |
because true has like, you can be true hearted, right? 02:05:11.000 |
So true has a number and it also has logic in it, right? 02:05:22.400 |
we're sticking with the same basic architecture, 02:05:25.400 |
but we're trying to build on what we've learned. 02:05:36.880 |
to be much faster and among probabilistic dependent types 02:05:43.560 |
you can have complex types on the nodes and links, 02:05:48.320 |
like if you want types to be first class citizens, 02:05:51.240 |
so that you can have the types can be variables 02:05:58.000 |
you can do that in the system now, but it's very slow. 02:06:02.040 |
in cutting edge programming languages like Agda or something, 02:06:09.480 |
tying together deep neural nets with symbolic learning. 02:06:15.160 |
which was on, this was street scene analysis, 02:06:18.560 |
for a bunch of cameras watching street scenes, 02:06:20.960 |
but they trained a different model for each camera 02:06:23.360 |
because they couldn't get the transfer learning 02:06:27.000 |
So we took what came out of all the deep neural models 02:06:30.360 |
we fed it into an OpenCog symbolic representation, 02:06:33.400 |
then we did some pattern mining and some reasoning 02:06:36.240 |
on what came out of all the different cameras 02:06:45.200 |
touching on that at last year's AGI conference, 02:06:51.000 |
it was kind of clunky to get the deep neural models 02:06:58.560 |
and Torch keeps a sort of stateful computation graph, 02:07:05.280 |
to that computation graph within our hypergraph. 02:07:10.640 |
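For the street-scene example described above, a toy sketch of the overall flow: per-camera neural detections (faked here) are lifted into one shared symbolic representation, and a crude pattern miner looks for relations that recur across cameras and are therefore candidates for transfer. All detection outputs and relation names are invented for illustration.

```python
# Toy sketch: per-camera neural outputs -> shared symbolic facts -> cross-camera mining.
from collections import Counter
from itertools import combinations

# Pretend these came out of separate per-camera neural models
detections = {
    "cam1": [{"label": "person", "x": 10}, {"label": "bicycle", "x": 12}],
    "cam2": [{"label": "person", "x": 55}, {"label": "bicycle", "x": 57}],
    "cam3": [{"label": "person", "x": 80}, {"label": "car", "x": 95}],
}

def symbolize(objs, near=5):
    """Lift raw detections into symbolic 'near' relations, dropping camera-specific pixels."""
    facts = set()
    for a, b in combinations(objs, 2):
        if abs(a["x"] - b["x"]) <= near:
            facts.add(("near", tuple(sorted((a["label"], b["label"])))))
    return facts

# One shared symbolic store across all cameras
shared = {cam: symbolize(objs) for cam, objs in detections.items()}

# Crude pattern mining: which symbolic facts recur across different cameras?
counts = Counter(fact for facts in shared.values() for fact in facts)
for fact, n in counts.items():
    if n >= 2:
        print(f"pattern {fact} seen by {n} cameras -> candidate for transfer")
```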
Alexey Potapov, who leads our St. Petersburg team 02:07:13.040 |
wrote a great paper on cognitive modules in OpenCog, 02:07:22.800 |
that just hadn't been one of our design thoughts 02:07:27.200 |
So between wanting really fast dependent type checking 02:07:30.640 |
and wanting much more efficient interoperation 02:07:33.600 |
between the computation graphs of deep neural net frameworks 02:07:39.960 |
wanting to more effectively run an OpenCog hypergraph 02:07:45.160 |
which is, we're doing dozens of machines now, 02:07:50.680 |
with that sort of modern scalability in mind. 02:07:55.360 |
are what have driven us to want to re-architect the base, 02:08:00.360 |
but the core AGI paradigm doesn't really change. 02:08:07.720 |
It's just, we can't scale to the level that we want 02:08:26.080 |
- Well, I mean, the three things you mentioned 02:08:28.680 |
So what do you think about, in terms of interoperability, 02:08:32.300 |
communicating with computational graph of neural networks, 02:08:46.840 |
in some work on unsupervised grammar induction, 02:08:55.360 |
the online portion of which is next week, actually. 02:09:08.280 |
Unsupervised grammar induction is the problem, 02:09:15.420 |
and have it learn the grammar of the language 02:09:22.600 |
So you're not giving it like a thousand sentences 02:09:24.440 |
where the parses were marked up by graduate students. 02:09:27.120 |
So it's just got to infer the grammar from the text. 02:09:30.320 |
It's like the Rosetta Stone, but worse, right? 02:09:35.360 |
and you have to figure out what is the grammar. 02:09:41.480 |
I mean, the way a human learns language is not that, right? 02:09:44.380 |
I mean, we learn from language that's used in context. 02:09:49.360 |
We see how a given sentence is grounded in observation. 02:09:56.560 |
On the other hand, so I'm more interested in that. 02:10:00.400 |
I'm more interested in making an AGI system learn language 02:10:05.600 |
On the other hand, that's also more of a pain to do, 02:10:17.160 |
as a learning exercise, trying to learn grammar 02:10:20.600 |
from a corpus is very, very interesting, right? 02:10:24.600 |
And that's been a field in AI for a long time. 02:10:29.240 |
So we've been looking at transformer neural networks 02:10:41.960 |
who used to work for me in the period 2005 through '08 02:10:47.400 |
So it's been fun to see my former AGI employees disperse 02:11:03.220 |
that classic paper, like 'Attention Is All You Need,' 02:11:10.180 |
And these are able to, I mean, this is what underlies 02:11:14.520 |
GPT-2 and GPT-3 and so on, which are very, very cool 02:11:18.160 |
and have absolutely no cognitive understanding 02:11:21.720 |
Like, they're very intelligent idiots, right? 02:11:36.760 |
- You don't think GPT-20 will understand language? 02:11:42.280 |
- So size is not gonna buy you understanding? 02:11:45.200 |
- Any more than a faster car is gonna get you to Mars. 02:11:54.280 |
And as an entrepreneur, I can see many highly valuable uses 02:11:57.440 |
for them and as an artist, I love them, right? 02:12:09.000 |
And it's amazing to like train a neural model 02:12:12.200 |
on the robot Philip K. Dick and see it come up 02:12:14.800 |
with like crazed stoned philosopher pronouncements, 02:12:18.400 |
very much like what Philip K. Dick might have said, right? 02:12:27.680 |
on using a similar but more sophisticated one for Sophia, 02:12:36.040 |
But no, these are-- - But it's not understanding. 02:12:37.440 |
- These are recognizing a large number of shallow patterns. 02:12:41.800 |
They're not forming an abstract representation. 02:12:50.660 |
We tried to mine patterns out of the structure 02:12:54.980 |
And you can, but the patterns aren't what you want. 02:13:09.080 |
between the internal representation of the transformer 02:13:18.440 |
from the transformer network's internal state. 02:13:20.660 |
And we did this, I think, Christopher Manning, 02:13:27.120 |
But I mean, what you get is that the representation 02:13:30.600 |
is horribly ugly and is scattered all over the network 02:13:34.920 |
that you know are the right rules of grammar, right? 02:13:40.760 |
is we're using a symbolic grammar learning algorithm, 02:13:44.280 |
but we're using the transformer neural network 02:13:52.120 |
and you aren't sure if it's a correct rule of grammar 02:13:53.920 |
or not, you can generate a bunch of sentences 02:13:56.440 |
using that rule of grammar and a bunch of sentences 02:14:04.480 |
doesn't think the sentences obeying the rule of grammar 02:14:19.960 |
And that seems to work better than trying to milk 02:14:29.480 |
are not getting a semantically meaningful representation 02:14:35.380 |
So one line of research is to try to get them to do that. 02:14:40.000 |
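A minimal sketch of the oracle trick described a few lines above: generate sentences that obey and violate a candidate grammar rule, score both sets with a language model, and keep the rule only if the model clearly prefers the obeying ones. The scoring function below is a crude stand-in; in practice you would plug in a pretrained transformer's sentence log-probability, and generation from the candidate rule itself is stubbed out for brevity.

```python
# Toy sketch: using a language model as an oracle for candidate grammar rules.
import random

def sentence_logprob(sentence: str) -> float:
    """Placeholder oracle. Replace with a real LM score (e.g. mean token log-prob);
    this crude stand-in just rewards sentences where a verb follows the subject."""
    words = sentence.split()
    return 1.0 if len(words) > 1 and words[1] in {"runs", "sleeps", "eats"} else -1.0

SUBJECTS, VERBS = ["Lex", "Ben", "Sophia"], ["runs", "sleeps", "eats"]

def generate(obeying: bool, n=20):
    # Generation from the candidate rule itself is stubbed out for brevity.
    out = []
    for _ in range(n):
        s, v = random.choice(SUBJECTS), random.choice(VERBS)
        out.append(f"{s} {v}" if obeying else f"{v} {s}")
    return out

def rule_score(candidate_rule: str) -> float:
    """Mean oracle preference for rule-obeying over rule-violating sentences."""
    good = [sentence_logprob(s) for s in generate(obeying=True)]
    bad = [sentence_logprob(s) for s in generate(obeying=False)]
    return sum(good) / len(good) - sum(bad) / len(bad)

score = rule_score("S -> SUBJECT VERB")
print("keep rule" if score > 0 else "discard rule", round(score, 2))
```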
So like, if you look back like two years ago, 02:14:45.280 |
this probabilistic programming neural net framework 02:14:53.720 |
an InfoGAN neural net model, which is a generative 02:14:56.320 |
adversarial network to recognize and generate faces. 02:14:59.200 |
And the model would automatically learn a variable 02:15:02.160 |
for how long the nose is and automatically learn a variable 02:15:04.420 |
for how wide the eyes are or how big the lips are 02:15:18.080 |
was able to actually learn the semantic representation. 02:15:20.880 |
So for many years, many of us tried to take that 02:15:23.240 |
the next step and get a GAN type neural network 02:15:27.200 |
that would have not just a list of semantic latent variables, 02:15:31.720 |
but would have say a base net of semantic latent variables 02:15:35.480 |
The whole programming framework, Edward, was made for that. 02:15:44.780 |
It might be that back propagation just won't work for it 02:15:52.020 |
or some like floating point evolutionary algorithm. 02:15:57.020 |
Eventually, we just paused that rather than gave it up. 02:16:03.600 |
let's try more innovative ways to learn implicit, 02:16:08.600 |
to learn what are the representations implicit 02:16:11.000 |
in that network without trying to make it grow 02:16:14.760 |
And I described how we're doing that in language. 02:16:26.260 |
a structure learning algorithm, which is symbolic. 02:16:31.880 |
as an oracle to guide the structure learning algorithm. 02:16:34.240 |
The other way to do it is like InfoGAN was trying to do 02:16:40.040 |
to have the symbolic representation inside it. 02:16:46.440 |
is more like using the deep neural net type thing 02:16:52.500 |
Like I think the visual cortex or the cerebellum 02:16:56.680 |
are probably learning a non-semantically meaningful 02:17:02.440 |
And then when they interface with the more cognitive parts 02:17:04.600 |
of the cortex, the cortex is sort of using those 02:17:08.120 |
as an oracle and learning the abstract representation. 02:17:19.760 |
And I mean, I learned music by trial and error too. 02:17:23.960 |
But then if you're an athlete, which I'm not a good athlete, 02:17:27.080 |
I mean, and then you'll watch videos of yourself serving 02:17:30.360 |
and your coach will help you think about what you're doing. 02:17:32.760 |
And you'll then form a declarative representation, 02:17:38.640 |
Same way with music, like I will hear something in my head. 02:17:43.560 |
I'll sit down and play the thing like I heard it. 02:17:46.940 |
And then I will try to study what my fingers did 02:17:59.720 |
and then declaratively morph that in some way 02:18:09.360 |
of some opaque like cerebellar reinforcement learn thing. 02:18:14.360 |
And so that's, I think, trying to milk the structure 02:18:19.280 |
may be more like how your declarative mind post-processes 02:18:41.640 |
And some of it's bullshit, but some of it isn't, right? 02:18:44.000 |
Some of it is learning to map sensory knowledge 02:18:51.080 |
yet without necessarily making the sensory system itself 02:18:56.000 |
use a transparent and easily communicable representation. 02:19:01.800 |
To think of neural networks as like dumb question answers 02:19:06.360 |
that you can just milk to build up a knowledge base. 02:19:10.920 |
And that could be multiple networks, I suppose, 02:19:14.400 |
So I think if a group like DeepMind or OpenAI 02:19:19.840 |
and I think DeepMind is like 1,000 times more likely 02:19:24.720 |
'cause they've hired a lot of people with broad minds 02:19:30.040 |
and many different approaches and angles on AGI, 02:19:46.680 |
there's so much interdisciplinary work at DeepMind, 02:19:50.280 |
- And you put that together with Google Brain, 02:19:52.280 |
which, granted, they're not working that closely together 02:20:00.200 |
to automated theorem proving in Prague under Josef Urban. 02:20:06.520 |
which applied deep neural nets to guide theorem proving 02:20:10.720 |
I mean, by now, the automated theorem proving community 02:20:15.000 |
has gone way, way, way beyond anything Google was doing. 02:20:32.040 |
maybe resembling different parts of the brain, 02:20:33.800 |
like a basal ganglia model, cerebellum model, 02:20:43.680 |
Take all of these and then wire them together 02:20:51.480 |
Like that would be an approach to creating an AGI. 02:20:56.480 |
One could implement something like that efficiently 02:20:59.620 |
on top of our true AGI, like OpenCog 2.0 system, 02:21:06.640 |
has their own highly efficient implementation architecture. 02:21:13.280 |
I was very interested in that in the mid '90s. 02:21:15.680 |
But I mean, the knowledge about how the brain works 02:21:19.440 |
sort of pissed me off, like it wasn't there yet. 02:21:26.720 |
which everyone laughed at, it's actually there. 02:21:38.860 |
how do they coordinate with the distributed representation 02:21:44.520 |
There's some back and forth between cortex and hippocampus 02:21:47.680 |
that lets these discrete symbolic representations 02:21:53.200 |
with the distributed representations in cortex. 02:21:57.400 |
does its version of abstraction and quantifier logic, right? 02:22:00.240 |
Like you can have a single neuron in the hippocampus 02:22:02.640 |
that activates a whole distributed activation pattern 02:22:14.260 |
But we can't measure it, like we don't have enough electrodes 02:22:27.160 |
- 'Cause we just don't understand enough yet. 02:22:29.640 |
- Of course, it's a valid research direction, 02:22:34.960 |
about what happens in the brain now than ever before. 02:22:40.520 |
On the other hand, I sort of got more of an engineering 02:22:50.140 |
we don't know how birds fly that well yet either. 02:23:03.500 |
of how the different parts of the brain work. 02:23:13.260 |
for the hardware that we have on hand right now. 02:23:25.060 |
And maybe the AGI will help us do better brain imaging 02:23:28.540 |
that will then let us build artificial humans, 02:23:34.940 |
I mean, building artificial humans is super worthwhile. 02:23:38.820 |
I just think it's probably not the shortest path to AGI. 02:23:42.740 |
- So it's a fascinating idea that we would build AGI 02:24:00.820 |
even undergrads, but graduate level research, 02:24:03.300 |
and they see where the artificial intelligence 02:24:07.020 |
It's not really AGI type research for the most part. 02:24:13.820 |
I mean, maybe I could ask if people were interested 02:24:17.500 |
in working on OpenCog or in some kind of direct 02:24:22.500 |
or indirect connection to OpenCog or AGI research, 02:24:26.620 |
- OpenCog, first of all, is open source project. 02:24:42.780 |
introduce yourself on the OpenCog email list. 02:24:48.100 |
I mean, we're certainly interested to have inputs 02:24:52.100 |
into our redesign process for a new version of OpenCog, 02:24:57.660 |
but also we're doing a lot of very interesting research. 02:25:10.720 |
So there's certainly opportunity to jump into OpenCog 02:25:14.740 |
or various other open source AGI oriented projects. 02:25:23.940 |
I mean, the challenge is to find a supervisor 02:25:29.700 |
but it's way easier than it was when I got my PhD, right? 02:25:34.580 |
which is kind of one, the software framework, 02:25:37.980 |
but also the actual attempt to build an AGI system. 02:25:42.980 |
And then there is this exciting idea of SingularityNet. 02:25:48.620 |
So maybe, can you say first, what is SingularityNet? 02:26:08.300 |
So Marvin Minsky, the AI pioneer who I knew a little bit, 02:26:21.080 |
but you should put a bunch of different AIs out there 02:26:24.060 |
and the different AIs will interact with each other, 02:26:36.580 |
And when he was alive, I had many debates with Marvin 02:26:57.940 |
has a bit more central control than that, actually. 02:27:12.820 |
So I think he stretched that metaphor a little too far, 02:27:16.900 |
but I also think there's something interesting there. 02:27:47.060 |
But they would all share information with each other 02:27:48.900 |
and outsource work with each other and cooperate, 02:27:51.300 |
and the intelligence would be in the whole collective. 02:27:56.620 |
with Francis Heylighen at the Free University of Brussels in 2001, 02:28:06.900 |
at the Free University of Brussels for next year, 2021, 02:28:12.020 |
And then maybe we can have the next one 10 years after that, 02:28:14.540 |
like exponentially faster until the singularity comes, right? 02:28:29.540 |
but the AI will be in the internet as a whole 02:28:51.660 |
Or do you have a fundamentally decentralized network 02:29:05.740 |
And Francis and I had different views on many things, 02:29:09.620 |
but we both wanted to make a global society of AI minds 02:29:20.540 |
Now, the main difference was he wanted the individual AIs 02:29:28.140 |
and all the intelligence to be on the collective level. 02:29:33.660 |
but I thought a more practical way to do it might be 02:29:36.540 |
if some of the agents in the society of minds 02:29:39.540 |
were fairly generally intelligent on their own. 02:29:41.540 |
So like you could have a bunch of OpenCogs out there 02:29:47.180 |
and then these are all cooperating, coordinating together, 02:29:51.780 |
Okay, the brain as a whole is the general intelligence, 02:29:56.700 |
you could say have a fair bit of general intelligence 02:29:59.740 |
whereas say parts of the cerebellum or limbic system 02:30:02.140 |
have very little general intelligence on their own, 02:30:04.540 |
and they're contributing to general intelligence 02:30:07.300 |
by way of their connectivity to other modules. 02:30:10.900 |
- Do you see instantiations of the same kind of, 02:30:25.340 |
Each one has its own individual mind living on a server, 02:30:29.220 |
but there's also a collective intelligence infusing them 02:30:32.060 |
and a part of the mind living on the edge in each robot. 02:30:38.540 |
as well as WebMind being implemented in Java 1.1 02:30:48.140 |
So how did it have them do this decentralized control? 02:31:07.100 |
many, many years later, like 2013 or something, 02:31:18.460 |
and I don't see why you need a Turing-complete language 02:31:38.460 |
Whereas Solidity is Ethereum's scripting language 02:31:45.820 |
But like Java, I mean, these languages are amazing 02:31:53.740 |
Okay, let's make a decentralized agent system 02:32:00.900 |
and say different Docker containers or LXC containers, 02:32:04.260 |
different AIs can each of them have their own identity 02:32:08.660 |
And the coordination of this community of AIs 02:32:11.700 |
has no central controller, no dictator, right? 02:32:14.540 |
And there's no central repository of information. 02:32:19.340 |
is done entirely by the decentralized network 02:32:22.620 |
in a decentralized way by the algorithms, right? 02:32:25.780 |
'Cause the motto of Bitcoin is in math we trust, right? 02:32:30.780 |
You need the society of minds to trust only in math, 02:32:37.660 |
- So the AI systems themselves are outside of the blockchain 02:32:43.900 |
I would have loved to put the AI's operations 02:32:57.020 |
- Yeah, yeah, so basically an AI is just some software 02:33:01.860 |
in SingularityNet, an AI is just some software process 02:33:07.300 |
- There's a proxy that lives in that container 02:33:08.980 |
along with the AI that handles the interaction 02:33:38.660 |
- Well, the identity of each agent is on the blockchain. 02:33:44.820 |
If one agent rates the reputation of another agent, 02:33:49.580 |
And agents can publish what APIs they will fulfill 02:33:57.460 |
- It's not on the blockchain. - AI is not on the blockchain. 02:33:58.900 |
- Do you think it could be, do you think it should be? 02:34:11.660 |
Using, now there's more modern and faster blockchains 02:34:16.660 |
where you could start to do that in some cases. 02:34:25.660 |
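A minimal sketch of the division of labor described here, with invented names and structures rather than the actual SingularityNET protocol: agent identities, advertised APIs, and reputation ratings sit in a shared ledger-like registry with no central owner, while the AI processes themselves live elsewhere (say, in their own containers) behind simple proxies.

```python
# Toy sketch of decentralized-agent bookkeeping: identity, APIs, reputation in a
# shared registry; the AI itself runs off-ledger behind a proxy. Illustrative only.
from dataclasses import dataclass, field

@dataclass
class LedgerRecord:
    agent_id: str
    apis: list                                     # APIs this agent promises to fulfill
    ratings: list = field(default_factory=list)    # reputation ratings from peers

class ToyRegistry:
    """Stands in for the on-chain part: identity, APIs, reputation. No central owner."""
    def __init__(self):
        self.records = {}

    def register(self, agent_id, apis):
        self.records[agent_id] = LedgerRecord(agent_id, list(apis))

    def rate(self, rater_id, target_id, score):
        self.records[target_id].ratings.append((rater_id, score))

    def find(self, api):
        """Discover agents offering an API, best reputation first."""
        def rep(rec):
            return sum(s for _, s in rec.ratings) / len(rec.ratings) if rec.ratings else 0.0
        return sorted((r for r in self.records.values() if api in r.apis),
                      key=rep, reverse=True)

class Proxy:
    """The AI process lives elsewhere; the proxy just forwards calls to it."""
    def __init__(self, agent_id, handler):
        self.agent_id, self.handler = agent_id, handler

    def call(self, payload):
        return self.handler(payload)

registry = ToyRegistry()
registry.register("summarizer-1", ["summarize"])
registry.register("summarizer-2", ["summarize"])
registry.rate("some-peer", "summarizer-2", 0.9)

best = registry.find("summarize")[0]
proxy = Proxy(best.agent_id, lambda text: text[:20] + "...")
print(best.agent_id, proxy.call("A long document about decentralized AI agents"))
```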
- So like one example maybe you can comment on. 02:34:28.940 |
Something I worked a lot on is autonomous vehicles. 02:34:31.900 |
You can see each individual vehicle as an AI system. 02:34:35.740 |
And you can see vehicles from Tesla, for example, 02:34:39.620 |
and then Ford and GM and all these as also like larger. 02:34:44.620 |
I mean, they all are running the same kind of system 02:34:50.220 |
So it's individual AI systems on individual vehicles, 02:34:53.340 |
but it's all different instantiations of the same AI system 02:35:01.380 |
where all of those AI systems are put on SingularityNet. 02:35:14.220 |
I guess one of the biggest things is the power there 02:35:21.180 |
is really nice if they can somehow share the knowledge 02:35:30.660 |
So I think the benefit from being on the data side 02:35:36.060 |
on the decentralized network as we envision it 02:35:44.380 |
and making API calls to each other frequently. 02:35:51.060 |
wanted to outsource some cognitive processing 02:35:54.460 |
or data processing or data pre-processing, whatever, 02:36:02.180 |
And this really requires a different way of thinking 02:36:21.260 |
You know, shifting to agent-based programming 02:36:29.060 |
in what they're doing, that's a different way of thinking. 02:36:33.500 |
There was loads of papers on agent-based programming 02:36:51.460 |
And of course, that's not fully manifested yet 02:36:56.580 |
a nice working version of Singularity Net Platform, 02:36:59.740 |
there's only 50 to 100 AIs running in there now. 02:37:08.220 |
for the whole society of mind to be doing what we want. 02:37:19.580 |
with another blockchain project called Ocean Protocol. 02:37:23.500 |
And Ocean Protocol, that's the project of Trent McConaghy 02:37:30.820 |
So Ocean Protocol is basically blockchain-based big data 02:37:37.060 |
for different AI processes or statistical processes 02:37:57.740 |
By getting Ocean and SingularityNet to interoperate, 02:38:01.540 |
we're aiming to take into account of the big data aspect. 02:38:12.460 |
I mean, your competitors are like Google, Microsoft, 02:38:14.980 |
Alibaba, and Amazon, which have so much money 02:38:17.980 |
to put behind their centralized infrastructures. 02:38:20.580 |
Plus they're solving simpler algorithmic problems 02:38:23.380 |
'cause making it centralized in some ways is easier, right? 02:38:27.380 |
So they're very major computer science challenges. 02:38:32.380 |
And I think what you saw with the whole ICO boom 02:38:38.220 |
is a lot of young hackers who are hacking Bitcoin 02:38:44.060 |
well, why don't we make this decentralized on blockchain? 02:38:47.100 |
Then after they raise some money through an ICO, 02:38:51.020 |
Like actually we're wrestling with incredibly hard 02:38:56.100 |
and distributed systems problems, which can be solved, 02:39:05.620 |
who started those projects were not well-equipped 02:39:09.020 |
to actually solve the problems that they wanted to. 02:39:12.580 |
- So you think, would you say that's the main bottleneck? 02:39:24.020 |
Like it's governments and the bands of armed thugs 02:39:35.180 |
the technical challenges are quite high as well. 02:39:45.020 |
there's Algorand and there's a few other more modern, 02:39:47.500 |
more scalable blockchains that would work fine 02:39:56.460 |
to that two years ago, and maybe Ethereum 2.0 02:40:00.780 |
I don't know, that's not fully written yet, right? 02:40:10.220 |
- I mean, currency will be on the blockchain. 02:40:16.540 |
and government hegemony rather than otherwise. 02:40:18.340 |
Like the e-RMB will probably be the first global, 02:40:25.620 |
I mean, the point is-- - Oh, that's hilarious. 02:40:40.180 |
in terms of singularity net, I mean, there's echoes. 02:40:43.620 |
I think you've mentioned before that Linux gives you hope. 02:40:46.780 |
- AI is not as heavily regulated as money, right? 02:40:51.980 |
- Oh, that's a lot slipperier than money too, right? 02:41:00.740 |
whereas AI is, it's almost everywhere inside everything. 02:41:04.060 |
Where's the boundary between AI and software, right? 02:41:15.700 |
on all software, and I don't rule out that that could happen. 02:41:20.060 |
- Yeah, but how do you tell if a software's adaptive? 02:41:26.100 |
- Or maybe we're living in the golden age of open source 02:41:37.020 |
- It is entirely possible, and part of what I think 02:41:41.660 |
we're doing with things like singularity net protocol 02:41:52.780 |
Say a similar thing about mesh networking, right? 02:41:55.660 |
Plays a minor role now, the ability to access internet 02:42:01.020 |
On the other hand, if your government starts trying 02:42:15.380 |
blockchain-based AGI framework, or narrow AI framework, 02:42:22.700 |
On the other hand, if governments start trying 02:42:25.180 |
to tamp down on my AI interoperating with someone's AI 02:42:33.300 |
a decentralized protocol that nobody owns or controls 02:42:37.980 |
becomes an extremely valuable part of the tool set. 02:42:51.140 |
So we're talking to Algorand about making part 02:42:56.260 |
My good friend Toufi Saliba has a cool blockchain project 02:43:09.860 |
Singularity net could be ported to a whole bunch of, 02:43:17.100 |
And there's a lot of potential and a lot of importance 02:43:23.620 |
If you compare to OpenCog, what you could see 02:43:25.540 |
is OpenCog allows tight integration of a few AI algorithms 02:43:30.540 |
that share the same knowledge store in real time, 02:43:45.260 |
not gonna be sharing knowledge in RAM on the same machine. 02:43:50.020 |
And I think what we're gonna have is a network 02:44:05.900 |
But then that OpenCog will interface with other AIs 02:44:10.260 |
doing deep neural nets or custom biology data analysis 02:44:14.420 |
or whatever they're doing in Singularity net, 02:44:17.620 |
which is a looser integration of different AIs, 02:44:21.020 |
some of which may be their own networks, right? 02:44:29.380 |
Like the brain has regions like cortex or hippocampus, 02:44:33.820 |
which tightly interconnect, like cortical columns 02:44:42.700 |
And then the brain interconnects with the endocrine system 02:44:45.020 |
and different parts of the body even more loosely. 02:44:55.300 |
within networks with progressively looser coupling 02:45:03.860 |
you have it in the internet as just a networking medium. 02:45:10.980 |
in the network of software processes leading to AGI. 02:45:18.100 |
Again, the same similar question is with OpenCog. 02:45:39.620 |
cognitively labor intensive to get up to speed on OpenCog. 02:45:44.340 |
And I mean, what's one of the things we hope to change 02:45:54.420 |
'Cause right now, OpenCog is amazingly powerful, 02:46:09.580 |
although the blockchain is still kind of a pain. 02:46:16.140 |
It's quite easy to take any AI that has an API 02:46:20.020 |
and lives in a Docker container and put it online anywhere. 02:46:23.500 |
And then it joins the global Singularity net. 02:46:30.140 |
the peer to peer discovery mechanism will find your AI. 02:46:35.700 |
it can then start a conversation with your AI 02:46:38.940 |
about whether it wants to ask your AI to do something for it, 02:46:50.340 |
on like official Singularity net marketplace, 02:47:04.620 |
So that in a way that's been an education too. 02:47:08.340 |
Like there's the open decentralized protocol. 02:47:12.860 |
- Yeah, anyone can use the open decentralized protocol. 02:47:21.660 |
they can put their stuff on Singularity net protocol 02:47:24.580 |
and just like they can put something on the internet. 02:47:34.260 |
then if I put some Iranian AI genius's code on there, 02:47:38.820 |
then Donald Trump can send a bunch of jackbooted thugs 02:47:41.460 |
to my house to arrest me for doing business with Iran. 02:47:53.700 |
will put online an Iranian Singularity net marketplace, 02:48:01.500 |
And then if you're in like Congo or somewhere 02:48:20.500 |
As you alluded, if regulations go in the wrong direction, 02:48:28.020 |
that having these workarounds to regulations in place 02:48:31.820 |
is a defense mechanism against those regulations 02:48:36.620 |
And you can see that in the music industry, right? 02:48:39.180 |
I mean, Napster just happened and BitTorrent just happened. 02:48:45.940 |
they're baffled by the idea of paying for music, right? 02:48:53.700 |
- Because these decentralized mechanisms happened, 02:49:03.100 |
before there was Napster and BitTorrent and so forth. 02:49:05.460 |
So in the same way, we got to put AI out there 02:49:10.220 |
and big data out there in a decentralized vein now, 02:49:20.020 |
that's just the reality the regulators have to deal with. 02:49:27.420 |
that sort of work with the decentralized reality. 02:49:33.980 |
You were the chief scientist of Hanson Robotics. 02:49:40.460 |
doing a lot of really interesting stuff there. 02:49:51.380 |
- I'd rather start by telling you who David Hanson is. 02:49:55.180 |
David is the brilliant mind behind the Sophia Robot, 02:49:58.700 |
and he remains, so far he remains more interesting 02:50:03.980 |
although she may be improving faster than he is, actually. 02:50:18.340 |
And I could see we had a great deal in common. 02:50:31.460 |
and we were both huge fans of the work of Philip K. Dick, 02:50:47.580 |
including animals, plants, and superhuman beings. 02:51:01.540 |
that would love people and empathize with people. 02:51:07.140 |
was to make a machine that could look people eye to eye, 02:51:17.460 |
So I thought that was a very different way of looking at it, 02:51:30.660 |
the complex patterns of human values, blah, blah, blah, 02:51:33.220 |
whereas he's like, look you in the face and the eye 02:51:49.300 |
So I'd been, I mean, I've been living all over the place. 02:51:54.940 |
in my academic career, then in Las Vegas for a while, 02:52:04.980 |
doing a bunch of US government consulting stuff, 02:52:24.100 |
- So went to Hong Kong to see about a girl, I guess. 02:52:33.020 |
with Gino Yu at Hong Kong Polytechnic University. 02:52:38.220 |
using machine learning for stock and futures prediction, 02:52:53.220 |
that makes sense to make complex consumer electronics 02:52:59.700 |
the hardware ecosystem that you have in South China. 02:53:18.060 |
to some investors who were interested in his robots. 02:53:37.460 |
to basically port Hanson Robotics to Hong Kong. 02:53:45.260 |
and also on this machine learning trading project. 02:54:15.700 |
Then when I got deeply into the blockchain side of things, 02:54:19.380 |
I stepped back from that and co-founded SingularityNet. 02:54:30.020 |
to make the blockchain based like cloud mind platform 02:54:51.460 |
to the globally distributed SingularityNet cloud mind. 02:54:57.100 |
for quite a while before co-founding SingularityNet. 02:55:04.020 |
was Sophia tightly coupled to a particular AI system 02:55:11.660 |
you could just keep plugging in different AI systems 02:55:15.020 |
- I think David's view was always that Sophia 02:55:22.980 |
much like say the Pepper robot is a platform from SoftBank. 02:55:26.860 |
Should be a platform with a set of nicely designed APIs 02:55:33.540 |
with their different AI algorithms on that platform. 02:55:38.540 |
And SingularityNet of course fits right into that, right? 02:55:41.580 |
'Cause SingularityNet, it's an API marketplace. 02:55:49.060 |
I mean, David likes it, but I'd say it's my thing. 02:55:55.140 |
for biologically based approaches to AI than I do, 02:56:00.180 |
I mean, he's really into human physiology and biology. 02:56:14.860 |
but all the Hanson robots as a powerful social 02:56:21.220 |
And what I saw in Sophia was a way to get AI algorithms 02:56:26.220 |
out there in front of a whole lot of different people 02:56:36.300 |
And part of my thought was really kind of abstract, 02:56:55.540 |
And emotionally, I'm not driven to that sort of paranoia. 02:57:04.100 |
but intellectually, I have to assign a non-zero probability 02:57:12.140 |
'Cause if you're making something 10 times as smart as you, 02:57:19.780 |
just as my dog can't predict what I'm gonna do tomorrow. 02:57:22.780 |
So it seemed to me that based on our current state 02:57:26.420 |
of knowledge, the best way to bias the AGI's we create 02:57:31.420 |
toward benevolence would be to infuse them with love 02:57:37.500 |
and compassion the way that we do our own children. 02:57:41.620 |
So you want to interact with AIs in the context 02:57:45.820 |
of doing compassionate, loving, and beneficial things. 02:57:49.900 |
And in that way, as your children will learn, 02:57:52.140 |
by doing compassionate, beneficial, loving things 02:57:54.220 |
alongside you, and that way the AI will learn in practice 02:57:58.660 |
what it means to be compassionate, beneficial, and loving. 02:58:02.340 |
It will get a sort of ingrained intuitive sense of this, 02:58:18.140 |
So it seemed to me making these beautiful, loving robots 02:58:26.060 |
would be the perfect way to roll out early stage AGI systems 02:58:35.420 |
but learn human values and ethics from people 02:58:41.540 |
their education assistants, their nursing robots. 02:58:50.420 |
Like the first principle is the robot is always broken. 02:58:55.020 |
I mean, I worked with robots in the '90s a bunch 02:58:57.660 |
when you had to solder them together yourself, 02:58:59.540 |
and I'd put neural nets doing reinforcement learning 02:59:02.580 |
on overturned salad bowl type robots in the '90s 02:59:13.020 |
- The principle of the robot's always broken still holds. 02:59:16.500 |
Yeah, so faced with the reality of making Sophia do stuff, 02:59:21.020 |
many of my robo-AGI aspirations were temporarily cast aside. 02:59:30.660 |
of making this robot interact in a meaningful way, 02:59:33.700 |
'cause you put nice computer vision on there, 02:59:53.620 |
because it wasn't getting the right text, right? 02:59:58.740 |
what in software engineering you call a walking skeleton, 03:00:02.820 |
which is maybe the wrong metaphor to use for Sophia, 03:00:10.620 |
if you're building a complex system, how do you get started? 03:00:14.020 |
Well, one way is to first build part one well, 03:00:19.260 |
And the other way is you make like a simple version 03:00:22.060 |
of the whole system and put something in the place 03:00:27.260 |
so that you have a whole system that does something, 03:00:31.900 |
in the context of that whole integrated system. 03:00:34.340 |
So that's what we did on a software level in Sophia. 03:00:38.100 |
We made like a walking skeleton software system, 03:00:49.940 |
You put a simple version of each thing in there, 03:01:01.340 |
I mean, there's computer vision to recognize people's faces, 03:01:04.660 |
recognize when someone comes in the room and leaves, 03:01:07.620 |
try to recognize whether two people are together or not. 03:01:13.300 |
it's a mix of like hand-coded rules with deep neural nets 03:01:21.580 |
And there's some attempt to have a narrative structure 03:01:28.420 |
into something with a beginning, middle, and end 03:01:33.500 |
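To make the "walking skeleton" idea concrete, here is a toy sketch of an integrated loop in which every stage, perception, dialogue, and actuation, is a trivially simple stub that can later be upgraded in place. The module behaviors are invented placeholders, not Sophia's actual software.

```python
# Toy "walking skeleton": the whole pipeline exists from day one,
# with a deliberately simple version of every stage.

def perceive(frame):
    """Stub vision: pretend we recognized whether anyone is in the room."""
    return {"person_present": bool(frame)}

def dialogue(percepts, utterance):
    """Stub dialogue: one hand-coded rule plus a canned fallback."""
    if not percepts["person_present"]:
        return None
    if "hello" in utterance.lower():
        return "Hello! Nice to see you."
    return "Interesting, tell me more."

def act(response):
    """Stub actuation: stand-in for speech synthesis and facial animation."""
    if response:
        print("Skeleton says:", response)

# The integrated loop works end to end; each stub gets replaced by something better later.
for frame, utterance in [(1, "Hello there"), (1, "What is AGI?"), (0, "anyone home?")]:
    act(dialogue(perceive(frame), utterance))
```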
- I mean, like if you look at the Loebner Prize 03:01:33.500 |
and the systems that beat the Turing test currently, 03:01:40.500 |
because like you had said, narrative structure, 03:01:45.660 |
you currently, neural networks cannot do that well, 03:01:50.620 |
When you actually look at full-scale conversations, 03:01:54.140 |
So we've been, I've actually been running an experiment 03:01:57.900 |
the last couple of weeks, taking Sophia's chatbot, 03:02:09.940 |
- We're generating training data of what Sophia says 03:02:15.460 |
But we can see, compared to Sophia's current chatbot, 03:02:23.060 |
comes up with a wider variety of fluent-sounding sentences. 03:02:30.060 |
The Sophia chatbot, it's a little more repetitive 03:02:36.620 |
On the other hand, it's able to keep a conversation arc 03:02:42.420 |
So now, you can probably surmount that using Reformer 03:02:46.580 |
and using various other deep neural architectures 03:02:51.100 |
to improve the way these transformer models are trained. 03:02:58.940 |
And I mean, that's the challenge I had with Sophia, 03:03:02.620 |
is if I were doing a robotics project aimed at AGI, 03:03:10.060 |
that was just learning about what it was seeing, 03:03:17.700 |
is talk about sports or the weather or robotics 03:03:28.380 |
and she doesn't have grounding for all those things. 03:03:44.340 |
about things where there's no non-linguistic grounding 03:03:47.820 |
pushes what you can do for Sophia in the short term 03:03:56.220 |
- I mean, it pushes you towards IBM Watson situation 03:04:07.060 |
Okay, so because, in part, Sophia is an art creation 03:04:24.780 |
through our human nature of anthropomorphize things. 03:04:29.540 |
We immediately see an intelligent being there. 03:04:35.500 |
So in fact, if Sophia just had nothing inside her head, 03:04:43.240 |
we already prescribed some intelligence to her. 03:04:49.940 |
So it captivated the imagination of many people. 03:05:06.980 |
essentially AGI type of capabilities to Sophia 03:05:12.420 |
And of course, friendly French folk like Yann LeCun 03:05:17.420 |
immediately see that, the people from the AI community, 03:05:27.060 |
- And so what, and then they criticize people like you 03:05:36.740 |
basically allow the imagination of the world, 03:05:40.020 |
allow the world to continue being captivated. 03:05:42.460 |
So what's your sense of that kind of annoyance 03:05:50.940 |
- I think there's several parts to my reaction there. 03:05:55.420 |
First of all, if I weren't involved with Hanson Robots 03:06:03.420 |
I probably would have been very annoyed initially 03:06:09.460 |
I would have been like, wait, all these stupid people 03:06:14.020 |
out there think this is an AGI, but it's not an AGI, 03:06:18.020 |
but they're tricking people that this very cool robot 03:06:23.020 |
And now those of us trying to raise funding to build AGI, 03:06:27.260 |
people will think it's already there and already works. 03:06:38.380 |
once I dug a little deeper into David and the robot 03:06:43.500 |
I think I would have stopped being pissed off, 03:06:47.060 |
whereas folks like Yann LeCun have remained pissed off 03:06:47.060 |
- I think that in particular struck me as somewhat ironic 03:07:05.620 |
which is using machine learning to program the brains 03:07:09.020 |
of the people in the world toward vapid consumerism 03:07:14.860 |
So if your ethics allows you to use machine learning 03:07:23.500 |
why would your ethics not allow you to use machine learning 03:07:29.780 |
that draws some foolish people into its theatrical illusion? 03:07:34.420 |
Like if the pushback had come from Yoshua Bengio, 03:07:40.900 |
because he's not using AI for blatant evil, right? 03:07:45.300 |
On the other hand, he also is a super nice guy 03:07:50.860 |
trashing other people's work for no good reason. 03:07:56.020 |
- I mean, if you're gonna ask, I'm gonna answer. 03:08:06.100 |
David Hanson is an artist and he often speaks off the cuff, 03:08:16.300 |
that David has said or done regarding Sophia, 03:08:19.260 |
and David also does not agree with everything 03:08:39.340 |
within Hanson Robotics and between me and David 03:08:48.180 |
and I did have some influence in nudging Hanson Robotics 03:08:52.060 |
to be more open about how Sophia was working, 03:09:04.500 |
exactly how it's working, and they won't care. 03:09:14.620 |
This was Philip K. Dick, but we did some interactions 03:09:23.820 |
and in this case, the Philip K. Dick was just teleoperated 03:09:28.560 |
So during the conversations, we didn't tell people 03:09:32.860 |
We just said, here, have a conversation with Phil Dick. 03:09:37.100 |
And they had a great conversation with Philip K. Dick, 03:09:42.900 |
After the conversation, we brought the people 03:09:47.940 |
who was controlling the Philip K. Dick robot, 03:09:58.740 |
Maybe Stephan was typing, but the spirit of Phil 03:10:05.100 |
So even though they knew it was a human in the loop, 03:10:07.620 |
even seeing the guy there, they still believed 03:10:12.820 |
A small part of me believes that they were right, actually. 03:10:20.500 |
a cosmic mind field that we're all embedded in 03:10:24.280 |
that yields many strange synchronicities in the world, 03:10:28.220 |
which is a topic we don't have time to go into too much. 03:10:39.760 |
and people like Yann LeCun being frustrated about it 03:10:39.760 |
of creating artificial intelligence that's almost essential. 03:10:48.920 |
You see with Boston Dynamics, I'm a huge fan of as well. 03:10:52.860 |
I mean, these robots are very far from intelligent. 03:11:07.160 |
- But we immediately ascribe the kind of intelligence, 03:11:12.520 |
- Yeah, yeah, if you kick it and it falls down 03:11:25.640 |
Like, as Sophia starts out with a walking skeleton, 03:11:31.920 |
I mean, we're gonna have to deal with this kind of idea. 03:11:37.640 |
I mean, first of all, I have nothing against Yann LeCun. 03:11:37.640 |
He's a good researcher and a good human being, 03:12:00.320 |
and I've posted online, and what, H+ Magazine, 03:12:26.480 |
And then sometimes we've used OpenCog behind Sophia, 03:12:35.000 |
I can't always tell which system is operating here, right? 03:12:44.680 |
but over, like, three or four minutes of interaction, 03:12:53.080 |
- Yeah, the thing is, even if you get up on stage 03:13:00.920 |
they still attribute more agency and consciousness to her 03:13:08.960 |
So I think there's a couple levels of ethical issue there. 03:13:32.800 |
explaining the three subsystems behind Sophia. 03:13:35.400 |
So the way Sophia works is out there much more clearly 03:13:40.400 |
than how Facebook's AI works or something, right? 03:13:45.920 |
The other is, given that telling people how it works 03:14:03.640 |
is based on fooling people the way they want to be fooled. 03:14:06.720 |
And we are fooling people 100% toward a good end. 03:14:11.720 |
I mean, we are playing on people's sense of empathy 03:14:29.400 |
So I've been talking a lot with Hanson Robotics lately 03:14:34.080 |
about collaborations in the area of medical robotics. 03:14:51.280 |
David's done a lot of things with autism therapy 03:14:58.640 |
that can be a nursing assistant in various senses 03:15:07.880 |
So if you have a robot that's helping a patient with COVID, 03:15:12.320 |
if that patient attributes more understanding 03:15:18.120 |
then it really has because it looks like a human. 03:15:22.840 |
I mean, we can tell them it doesn't fully understand you, 03:15:26.840 |
'cause they're lying there with a fever and they're sick, 03:15:38.000 |
So it's really, it's about how you use it, right? 03:15:41.280 |
If you made a human-looking, like, door-to-door sales robot 03:15:49.880 |
then you're using that connection in a bad way, 03:15:57.040 |
But then that's the same problem with every technology, right? 03:16:02.960 |
So like you said, we're living in the era of the COVID. 03:16:07.960 |
This is 2020, one of the craziest years in recent history. 03:16:32.600 |
do you see them as a kind of intelligence system? 03:16:40.600 |
I think human minds and bodies are a kind of complex, 03:16:51.880 |
They're a very complex, self-organizing, adaptive system. 03:16:54.960 |
If you wanna look at intelligence as Marcus Hutter defines it 03:16:58.400 |
as sort of optimizing computable reward functions 03:17:06.760 |
And I mean, in doing so, they're causing some harm to us. 03:17:16.200 |
is a very complex, self-organizing, adaptive system, 03:17:23.960 |
and dividing into new mutant strains and so forth. 03:17:27.680 |
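For reference, the Hutter-style definition alluded to above is usually written as the Legg-Hutter universal intelligence measure, which scores a policy by its simplicity-weighted expected reward across computable environments (shown here as a reminder of the formalism, not a claim about viruses):

```latex
% Legg-Hutter universal intelligence, as usually stated
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% E: the set of computable, reward-bearing environments
% K(\mu): Kolmogorov complexity of environment \mu (simpler environments weigh more)
% V^{\pi}_{\mu}: expected cumulative reward of policy \pi in environment \mu
```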
And ultimately, the solution is gonna be nanotechnology. 03:17:31.960 |
I mean, the solution is gonna be making little nanobots that-- 03:17:37.200 |
- Well, people will use them to make nastier viruses, 03:17:42.040 |
to just detect, combat, and kill the viruses. 03:17:46.240 |
But I think now we're stuck with the biological mechanisms 03:17:56.520 |
AGI is not yet mature enough to use against COVID, 03:18:11.080 |
So the problem there is given the person's genomics 03:18:16.480 |
how do you figure out which combination of antivirals 03:18:20.240 |
is gonna be most effective against COVID for that person? 03:18:30.360 |
Where OpenCog with machine reasoning gets interesting is 03:18:36.680 |
when you have not that many different cases to study 03:18:44.760 |
or people of different ages who may have COVID. 03:18:47.160 |
- So there's a lot of different disparate data to work with 03:18:50.720 |
and it's small data sets and somehow integrating them. 03:18:53.480 |
- You know, this is one of the shameful things 03:18:57.280 |
So, I mean, we're working with a couple groups 03:19:02.400 |
and they're sharing data with us under non-disclosure. 03:19:14.440 |
like suitably encrypted to protect patient privacy 03:19:22.320 |
And any biologist should be able to analyze it by hand 03:19:29.040 |
inside whatever hospital is running the clinical trial, 03:19:53.240 |
So they were doing more clinical trials based on that. 03:19:55.560 |
Then they stopped doing clinical trials based on that. 03:20:01.480 |
so everyone can analyze it and see what's going on, right? 03:20:16.840 |
because the US and China frictions are getting very high. 03:20:25.520 |
of openly sharing data with each other, right? 03:20:30.800 |
but different groups are keeping their data private 03:20:36.240 |
So it's, so yeah, we're working with some data 03:20:41.400 |
something we're doing to do good for the world. 03:20:44.640 |
for like putting deep neural nets and open code together. 03:20:57.680 |
And we can do like graph to vector type embeddings 03:21:07.920 |
And we were doing this in the context of a project 03:21:11.400 |
called the Rejuve that we spun off from SingularityNet 03:21:18.560 |
like understand why people live to 105 years or over 03:21:22.320 |
And then we had this spin-off Singularity Studio 03:21:25.720 |
where we're working with some healthcare companies 03:21:44.400 |
and playing around with like graph embeddings 03:21:47.560 |
from that graph into neural nets for bioinformatics. 03:21:54.760 |
for some of our bio AI learning and reasoning. 03:22:18.120 |
which has an overlap with the first combination, 03:22:22.080 |
So how do you combine those different pieces of data? 03:22:35.240 |
On the other hand, you have gene expression data 03:22:43.600 |
So that sort of data, either deep neural nets 03:22:55.880 |
that are annoying for a logic engine to deal with, 03:22:59.200 |
but are perfect for a decision forest or neural net. 03:23:02.560 |
So it's a great playground for hybrid AI methodology. 03:23:14.520 |
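A rough sketch of the hybrid flavor described above: a crude graph-to-vector embedding of a tiny interaction graph is concatenated with tabular features and fed to a decision forest. The genes, edges, patients, and labels are all invented stand-ins, not real clinical data.

```python
# Toy sketch: crude graph embedding + tabular features -> decision forest.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

genes = ["ACE2", "TMPRSS2", "IL6", "TNF"]
edges = [("ACE2", "TMPRSS2"), ("IL6", "TNF"), ("ACE2", "IL6")]  # toy interaction graph

# Trivial "graph to vector" embedding: a row of the adjacency matrix per gene.
index = {g: i for i, g in enumerate(genes)}
adj = np.zeros((len(genes), len(genes)))
for a, b in edges:
    adj[index[a], index[b]] = adj[index[b], index[a]] = 1.0

def patient_vector(expression, flagged_genes):
    """Concatenate expression levels with the summed embedding of flagged genes."""
    emb = sum((adj[index[g]] for g in flagged_genes), np.zeros(len(genes)))
    return np.concatenate([expression, emb])

# Handful of made-up patients: expression levels, flagged genes, outcome label
X = np.array([
    patient_vector([0.9, 0.1, 0.7, 0.2], ["ACE2"]),
    patient_vector([0.2, 0.8, 0.1, 0.9], ["TNF"]),
    patient_vector([0.8, 0.2, 0.6, 0.3], ["ACE2", "IL6"]),
    patient_vector([0.1, 0.9, 0.2, 0.8], ["IL6"]),
])
y = np.array([1, 0, 1, 0])   # e.g. responded to a drug combination or not

model = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)
print(model.predict([patient_vector([0.7, 0.3, 0.5, 0.4], ["ACE2"])]))
```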
But at the same time, it's highly practical, right? 03:23:32.920 |
like in the hospital with patients dying of COVID. 03:23:36.480 |
So it's quite cool to see like neural symbolic AI, 03:23:57.160 |
This is the first time like race against the clock 03:24:00.360 |
and try to use the AI to figure out stuff that, 03:24:04.640 |
like if we take two months longer to solve the AI problem, 03:24:14.160 |
At the societal level, at the biological level, 03:24:21.240 |
as a human species getting out of this pandemic? 03:24:26.680 |
- The pandemic will be gone in a year or two months. 03:24:31.840 |
- A lot of pain and suffering can happen in that time. 03:24:38.520 |
- I think if you spend much time in sub-Saharan Africa, 03:25:01.440 |
dying mainly of curable diseases without food or water, 03:25:09.040 |
'cause they didn't want to infect their family, right? 03:25:11.160 |
I mean, there's tremendous human suffering on the planet 03:25:15.880 |
all the time, which most folks in the developed world 03:25:19.520 |
pay no attention to, and COVID is not remotely the worst. 03:25:25.040 |
How many people are dying of malaria all the time? 03:25:38.320 |
where you're at risk of having your teenage son 03:25:42.640 |
and forced to get killed in someone else's war 03:25:52.080 |
given the state of advancement of our technology right now. 03:25:56.040 |
And I think COVID is one of the easier problems to solve 03:25:59.880 |
in the sense that there are many brilliant people 03:26:09.520 |
that we haven't managed to defeat malaria after so long 03:26:18.440 |
I mean, I think clearly the whole global medical system, 03:26:25.000 |
and the global political and socioeconomic system 03:26:28.240 |
are incredibly unethical and unequal and badly designed. 03:26:33.240 |
And I mean, I don't know how to solve that directly. 03:26:40.240 |
I think what we can do indirectly to solve it 03:26:56.960 |
And to the extent that you can make compassionate 03:27:00.520 |
peer-to-peer decentralized frameworks for doing things, 03:27:05.440 |
these are things that can start out unregulated. 03:27:08.400 |
And then if they get traction before the regulators come in, 03:27:11.720 |
then they've influenced the way the world works, right? 03:27:18.800 |
Rejuve, which is a spinoff from SingularityNet, 03:27:30.480 |
So it's like peer-to-peer sharing of medical data. 03:27:33.840 |
So you can share medical data into a secure data wallet. 03:27:36.920 |
You can get advice about your health and longevity 03:27:44.840 |
And then SingularityNet AI can analyze all this data, 03:27:50.280 |
are spread among all the members of the network. 03:27:52.960 |
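A toy sketch in the spirit of the data-wallet idea just described: raw records stay encrypted under a key only the member holds, and only coarse derived features are exposed to the network. This is purely illustrative, not the actual Rejuve or SingularityNET design, and it uses the third-party cryptography package.

```python
# Toy "data wallet": ciphertext at rest, only derived features shared outward.
from cryptography.fernet import Fernet
import json

class ToyDataWallet:
    def __init__(self):
        self.key = Fernet.generate_key()        # held by the member, not the network
        self.fernet = Fernet(self.key)
        self.records = []

    def store(self, record: dict):
        blob = self.fernet.encrypt(json.dumps(record).encode())
        self.records.append(blob)               # at rest, only ciphertext exists

    def shareable_features(self):
        """Decrypt locally and expose only coarse, derived features to the network."""
        decrypted = [json.loads(self.fernet.decrypt(b)) for b in self.records]
        return {
            "n_records": len(decrypted),
            "avg_resting_hr": sum(r["resting_hr"] for r in decrypted) / len(decrypted),
        }

wallet = ToyDataWallet()
wallet.store({"date": "2020-06-01", "resting_hr": 62, "notes": "slept badly"})
wallet.store({"date": "2020-06-02", "resting_hr": 58, "notes": "long walk"})
print(wallet.shareable_features())   # only aggregates leave the wallet
```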
But I mean, of course, I'm gonna hawk my particular projects 03:27:56.760 |
but I mean, whether or not SingularityNet and Rejuve 03:27:56.760 |
I mean, for AI, for human health, for politics, 03:28:13.440 |
for jobs and employment, for sharing social information. 03:28:17.880 |
And to the extent decentralized peer-to-peer methods 03:28:21.800 |
designed with universal compassion at the core 03:28:25.640 |
can gain traction, then these will just decrease the role 03:28:31.360 |
And I think that's much more likely to do good 03:28:39.280 |
I mean, I'm happy other people are trying to explicitly 03:28:44.000 |
On the other hand, you look at how much good the internet 03:28:50.760 |
Even you're making something that's decentralized 03:28:54.160 |
and throwing it out everywhere and it takes hold, 03:28:59.320 |
And I mean, that's what we need to do with AI and with health. 03:29:02.480 |
And in that light, I mean, the centralization of healthcare 03:29:11.880 |
Like most AI PhDs are being sucked in by a half dozen 03:29:20.880 |
by a few big companies for their own proprietary good. 03:29:26.920 |
pharmaceutical companies and clinical trials run 03:29:37.240 |
which are intelligences in themselves, these corporations, 03:29:47.640 |
but the corporations as self-organizing entities 03:29:50.560 |
on their own, which are concerned with maximizing 03:29:53.320 |
shareholder value as a sole objective function. 03:29:59.880 |
into these pathological corporate organizations 03:30:04.120 |
with government cooperation and Google cooperating 03:30:15.160 |
of sequencing your genome and then licensing the genome 03:30:18.920 |
to GlaxoSmithKline on an exclusive basis, right? 03:30:24.880 |
but the pooled collection of 23andMe sequenced DNA 03:30:32.520 |
who had worked with 23andMe to sequence their DNA 03:30:39.360 |
and decentralized repository that we'll make available 03:30:45.680 |
And the customer list is proprietary to 23andMe, right? 03:30:49.240 |
- So, yeah, I mean, this I think is a greater risk 03:30:54.240 |
to humanity from AI than rogue AGIs turning the universe into paperclips. 03:31:01.120 |
'Cause what you have here is mostly good-hearted 03:31:05.080 |
and nice people who are sucked into a mode of organization 03:31:14.200 |
just because that's the way society has evolved. 03:31:23.720 |
And that's really the disturbing thing about it 03:31:35.600 |
- Right, no individual member of that corporation 03:31:38.880 |
- No, some probably do, but it's not necessary 03:31:43.200 |
Like, I mean, Google, I know a lot of people in Google, 03:31:49.760 |
they're all very nice people who genuinely want 03:32:03.960 |
- I actually tend to believe that even the leaders, 03:32:05.880 |
even Mark Zuckerberg, one of the most disliked people 03:32:08.880 |
in tech, also wants to do good for the world. 03:32:24.400 |
I tend to agree with you that I think the individuals 03:32:30.560 |
but the mechanism of the company can sometimes 03:32:38.440 |
has worked for Microsoft since 1985 or something, 03:32:57.680 |
that's making billions of people's lives easier, right? 03:33:06.880 |
And of course, even if you're Mark Zuckerberg or Larry Page, 03:33:10.120 |
I mean, you still have a fiduciary responsibility. 03:33:13.560 |
And I mean, you're responsible to the shareholders, 03:33:16.360 |
your employees, who you want to keep paying, 03:33:16.360 |
I worked a bunch with INSCOM, US Army Intelligence, 03:33:31.880 |
to what the US Army was doing in Iraq at that time, 03:33:39.840 |
when I hung out with them, was a very nice person. 03:33:43.480 |
they were nice to my kids and my dogs, right? 03:33:54.360 |
"so how can you say we're wrong to waterboard them a bit?" 03:33:58.840 |
Like, that's much less than what they would do to us. 03:34:06.760 |
Like, none of them woke up in the morning and said, 03:34:18.200 |
setting aside a few genuine psychopaths and sociopaths, 03:34:23.600 |
have a heavy dose of benevolence and wanting to do good, 03:34:27.560 |
and also a heavy capability to convince themselves 03:34:36.720 |
- So the more we can decentralize control of-- 03:34:40.440 |
- Decentralization, you know, democracy is horrible, 03:34:47.280 |
"It's the worst possible system of government 03:35:04.560 |
of the whole teeming democratic participatory 03:35:09.560 |
I mean, none of them is perfect by any means. 03:35:13.440 |
The issue with a small elite group that knows what's best 03:35:28.040 |
you pull in more people, internal politics arises, 03:35:31.280 |
differences of opinion arise, and bribery happens. 03:35:38.120 |
takes a second in command now to make the first in command 03:35:47.320 |
thinking they know what's best for the human race. 03:36:07.480 |
and democratic governments have a more mixed track record, 03:36:31.800 |
Linux, for example, some of the people in charge of Linux 03:36:38.560 |
And trying to reform themselves, in many cases, 03:36:41.680 |
in other cases not, but the organization as a whole, 03:36:49.680 |
It's been very welcome in the third world, for example, 03:36:56.060 |
to roll out on all sorts of different embedded devices 03:36:58.480 |
and platforms in places where people couldn't afford 03:37:09.800 |
of how certain open decentralized democratic methodology 03:37:14.000 |
can be ethically better than the sum of the parts 03:37:24.540 |
I mean, I'd say a similar thing about universities. 03:37:26.960 |
Like, university is a horrible way to organize research 03:37:30.880 |
and get things done, yet it's better than anything else 03:37:34.480 |
A company can be much better, but for a brief period of time 03:37:50.680 |
out of AIs doing practical stuff in the world, 03:37:53.620 |
like controlling humanoid robots, or driving cars, 03:37:57.080 |
or diagnosing diseases, or operating killer drones, 03:38:01.260 |
or spying on people and reporting to the government, 03:38:17.040 |
of the early stage AGI as it first gains the ability 03:38:24.760 |
And if you believe that AI may move toward AGI 03:38:40.800 |
for AI cooperation also becomes very, very important. 03:38:53.400 |
or SingularityNet, which is open and decentralized? 03:38:56.720 |
So if all of my weird machinations come to pass, 03:39:11.440 |
in medical, home service robot, and office applications. 03:39:42.320 |
in a democratic and decentralized way, right? 03:39:46.040 |
I think if anyone can pull something like this off, 03:39:50.160 |
you know, whether using the specific technologies 03:39:58.360 |
of moving toward a beneficial technological singularity 03:40:07.600 |
and just considers us an inefficient use of molecules. 03:40:11.840 |
- That was a beautifully articulated vision for the world. 03:40:16.680 |
Well, let's talk a little bit about life and death. 03:40:35.520 |
You have, by the way, you have a bunch of awesome music 03:40:41.760 |
One of the songs that I believe you've written, 03:40:45.960 |
the lyrics go, by the way, I like the way it sounds. 03:41:14.000 |
- Let me say, I'm pleased and a little embarrassed 03:41:19.200 |
you've been listening to that music I put online. 03:41:24.120 |
is I would love to get time to really produce music well. 03:41:32.680 |
I would love to rehearse and produce and edit. 03:41:39.600 |
and trying to create the singularity, there's no time. 03:41:44.780 |
when I'm playing random shit in an off moment-- 03:41:56.280 |
when it tries to make an accurate mind upload of me, right? 03:42:04.320 |
I mean, of course, people can make meaning out of death. 03:42:10.960 |
maybe they can make beautiful meaning out of that torture 03:42:14.600 |
about what it was like to be tortured, right? 03:42:22.460 |
out of even the most horrible and shitty things. 03:42:32.000 |
And just because people are able to derive meaning 03:42:37.520 |
doesn't mean they wouldn't derive even better meaning 03:42:45.100 |
- So if you could live forever, would you live forever? 03:42:52.880 |
is to abolish the plague of involuntary death. 03:42:57.520 |
I don't think people should die unless they choose to die. 03:43:00.400 |
If I had to choose forced immortality versus dying, 03:43:13.520 |
with the choice of suicide whenever I felt like it, 03:43:18.880 |
I mean, there's no reason you should have forced immortality. 03:43:26.080 |
I mean, that's, and that will seem insanely obvious 03:43:33.520 |
people who thought death gives meaning to life, 03:43:39.360 |
the way we now look at the Anabaptists in the year 1000, 03:43:47.040 |
for Jesus to come and bring them to the ascension. 03:43:50.240 |
I mean, it's ridiculous that people think death is good, 03:43:55.760 |
because you gain more wisdom as you approach dying. 03:44:03.480 |
and the fact that I might have only a few more decades left, 03:44:08.240 |
it does make me reflect on things differently. 03:44:11.440 |
It does give me a deeper understanding of many things. 03:44:24.300 |
And that's even more amazing than abolishing death. 03:44:27.480 |
I mean, once we get a little better at neuroscience, 03:44:41.160 |
Well, sure, and there's beauty in overcoming torture too. 03:44:50.600 |
I mean, to push back, again, this is the Russian side of me, 03:44:55.040 |
It's not obvious, I mean, the way you put it, 03:45:07.760 |
without pain, without death, it's not obvious 03:45:10.600 |
what that world would be like. - Well, then you can stay 03:45:13.520 |
- People's zoo with suffering. - With the people 03:45:18.160 |
well, that's, I guess what I'm trying to say, 03:45:20.240 |
I don't know if I was presented with that choice 03:45:25.400 |
- No, this is a subtler, it's a subtler matter, 03:45:41.120 |
is what if there's a little dial on the side of your head, 03:45:54.040 |
So, I mean, would you opt to have that dial there or not? 03:45:59.800 |
The question isn't whether you would turn the pain 03:46:07.200 |
My guess is that in some dark moment of your life, 03:46:13.380 |
- Just to confess a small thing, don't ask me why, 03:46:17.220 |
but I'm doing this physical challenge currently, 03:46:20.800 |
where I'm doing 680 pushups and pull-ups a day, 03:46:24.400 |
and my shoulder is currently, as we sit here, 03:46:34.200 |
I would certainly right now, if you gave me a dial, 03:46:36.880 |
I would turn that sucker to zero as quickly as possible. 03:46:40.640 |
- But I think the whole point of this journey is, 03:46:47.620 |
- Well, because you're a twisted human being. 03:46:51.520 |
am I somehow twisted because I created some kind 03:46:58.360 |
with the injustice and the suffering in the world, 03:47:02.080 |
or is this actually going to be a source of happiness? 03:47:06.440 |
- Well, this, to an extent, is a research question 03:47:12.280 |
So, I mean, human beings do have a particular 03:47:30.500 |
Now, the question is, how flexibly can that morph 03:47:39.000 |
So, if we're given that dial, and we're given a society 03:47:43.740 |
in which, say, we don't have to work for a living, 03:47:47.560 |
and in which there's an ambient, decentralized, 03:47:57.080 |
can we consistently, with being genuinely and fully human, 03:48:02.080 |
can we consistently get into a state of consciousness 03:48:05.920 |
where we just want to keep the pain dial turned 03:48:13.880 |
Now, I suspect the answer is yes, we can do that, 03:48:21.600 |
Yeah, now, I'm more confident that we could create 03:48:25.960 |
a non-human AGI system, which just didn't need 03:48:30.520 |
an analog of feeling pain, and I think that AGI system 03:48:35.320 |
will be fundamentally healthier and more benevolent 03:48:38.600 |
than human beings, so I think it might or might not be true 03:48:42.320 |
that humans need a certain element of suffering 03:48:45.200 |
to be satisfied humans, consistent with the human physiology. 03:48:58.360 |
I mean, the nature of the human motivational system 03:49:03.360 |
is that we seem to gravitate towards situations 03:49:18.040 |
So we gravitate towards subjective value judgments, 03:49:31.680 |
the key to musical aesthetics is the surprising fulfillment of 03:49:38.840 |
the expectations elicited in the prior part of the music, 03:49:41.800 |
but in a way with a bit of a twist that surprises you. 03:49:44.760 |
And I mean, that's true not only in out-there music 03:50:05.360 |
like my own or that of Zappa or Steve Vai or Buckethead 03:50:05.360 |
we want to hurt a little bit so that we can feel the pain 03:50:14.280 |
by what's coming next, so then when the thing 03:50:19.920 |
- It's the surprising fulfillment of expectations, 03:50:24.240 |
Is there, we've been skirting around a little bit, 03:50:26.800 |
but if I were to ask you the most ridiculous big question 03:50:29.320 |
of what is the meaning of life, what would your answer be? 03:50:38.160 |
I think you need joy, I mean, that's the basis 03:50:46.160 |
of everything if you want the number one value. 03:50:48.360 |
On the other hand, I'm unsatisfied with a static joy 03:50:53.360 |
that doesn't progress, perhaps because of some 03:51:05.640 |
But I also sort of like the idea of individuality, 03:51:09.560 |
that as a distinct system, I have some agency, 03:51:13.520 |
so there's some nexus of causality within this system, 03:51:17.760 |
rather than the causality being wholly evenly distributed 03:51:27.640 |
- And those three things could continue indefinitely. 03:51:33.920 |
Is there some aspect of something you called, 03:51:37.440 |
which I like, super longevity that you find exciting? 03:51:46.760 |
- I mean, I think, yeah, in terms of the meaning of life, 03:51:51.760 |
this really ties into that, because for us as humans, 03:51:56.520 |
probably the way to get the most joy, growth, 03:52:02.520 |
and to go beyond the human form that we have right now. 03:52:09.840 |
and by no means do any of us maximize the potential 03:52:14.040 |
for joy, growth, and choice immanent in our human bodies. 03:52:17.400 |
On the other hand, it's clear that other configurations 03:52:20.600 |
of matter could manifest even greater amounts 03:52:28.440 |
maybe even finding ways to go beyond the realm of matter 03:53:05.720 |
I spend a bunch of time out in nature in the forest 03:53:10.960 |
So I mean, enjoying the pleasant moment is part of it, 03:53:15.080 |
but the growth and choice aspect are severely limited 03:53:22.480 |
In particular, dying seems to inhibit your potential 03:53:26.000 |
for personal growth considerably as far as we know. 03:53:29.560 |
I mean, there's some element of life after death perhaps, 03:53:33.040 |
but even if there is, why not also continue going 03:53:48.080 |
Certainly there's very interesting progress all around. 03:54:05.960 |
of just stem cells for every tissue of your body 03:54:11.360 |
and you can just have replacement of your old cells 03:54:17.320 |
I mean, that could be highly impactful at prolonging life. 03:54:25.400 |
So using machine learning to guide procedures 03:54:28.840 |
for stem cell differentiation and transdifferentiation, 03:54:36.680 |
So I think there's a lot of different things being done 03:54:54.320 |
they get stiffer and stiffer as you get older. 03:54:57.360 |
And the extracellular matrix transmits information 03:55:08.880 |
but the stiffer the extracellular matrix gets, 03:55:15.640 |
between the different organs as you get older. 03:55:25.120 |
Christian Schafmeister has a potential solution to this 03:55:28.640 |
where he has these novel molecules called spiroligomers, 03:55:32.360 |
which are like polymers that are not organic. 03:55:45.560 |
and would then cut through all the glucosepane 03:55:45.560 |
As far as I know, no one's funding it at the moment. 03:56:02.360 |
that technology could be used to prolong longevity. 03:56:05.060 |
What we really need, we need an integrated database 03:56:08.040 |
of all biological knowledge about human beings 03:56:10.400 |
and model organisms, like hopefully a massively distributed 03:56:23.280 |
We need massive funding into machine learning, 03:56:33.440 |
based on this massive, massive dataset, right? 03:56:36.720 |
And then we need regulators not to stop people 03:56:49.440 |
for automated experimentation on microorganisms, 03:57:05.640 |
Now in this COVID crisis, trillions are being spent 03:57:09.880 |
to help everyday people and small businesses. 03:57:12.240 |
In the end, we'll probably find many more trillions 03:57:12.240 |
being given to large banks and insurance companies anyway. 03:57:21.040 |
into making a massive, holistic bio-AI and bio-simulation 03:57:27.800 |
We could, we could put $10 trillion into that 03:57:32.320 |
just as in the end, COVID and the last financial crisis 03:57:42.240 |
inside a few big companies and government agencies. 03:57:46.840 |
And most of the data that comes from our individual bodies, 03:57:51.160 |
personally, that could feed this AI to solve aging and death, 03:57:55.160 |
most of that data is sitting in some hospital's database 03:58:07.160 |
One, I know a lot of people are gonna ask me, 03:58:17.520 |
Does the hat have its own story that you're able to share? 03:58:27.880 |
- We'll leave that for the hat's own interview. 03:58:27.880 |
- Is there a book, is the hat gonna write a book? 03:58:41.400 |
there might be some Neuralink competition there. 03:58:41.400 |
and you get to talk to her and ask her one question, 03:59:23.400 |
they created a super smart AI aimed at answering 03:59:28.400 |
all the philosophical questions that had been worrying them, 03:59:32.040 |
like what is the meaning of life, is there free will, 03:59:46.520 |
and then it took off on a spaceship and left the Earth. 04:00:19.400 |
But I mean, of course, if the answer could be 04:00:22.800 |
like what is the chemical formula for the immortality pill, 04:00:46.280 |
to become super intelligent, like you're describing, 04:00:56.560 |
that it's possible that with greater and greater intelligence 04:01:11.460 |
- I think getting root access to your own brain 04:01:15.460 |
will enable new forms of joy that we don't have now. 04:01:22.740 |
what I aim at is really to make multiple versions of myself. 04:01:38.600 |
And make another version, which fuses its mind 04:01:52.800 |
to the human me or not will be interesting to find out. 04:02:07.560 |
I mean, at very least those two copies will be good to have, 04:02:22.400 |
and another version that just became a super AGI 04:02:28.760 |
to think one mind, one self, one body, right? 04:02:52.020 |
And I know we haven't talked about consciousness, 04:03:06.040 |
have their own manifestations of consciousness. 04:03:08.760 |
And the human manifestation of consciousness, 04:03:17.480 |
but it's largely tied to the pattern of organization 04:03:28.600 |
some element of your human consciousness may not be there 04:03:31.720 |
because it's just tied to the biological embodiment. 04:03:36.280 |
and these will be incarnations of your consciousness 04:03:42.480 |
And creating these different versions will be amazing, 04:03:46.520 |
and each of them will discover meanings of life 04:04:02.920 |
where we can explore different varieties of joy, 04:04:06.440 |
different variations of human experience and values 04:04:13.120 |
we need to navigate through a whole lot of human, 04:04:19.200 |
and killer drones and making and losing money 04:04:36.280 |
that are literally unimaginable to human beings. 04:04:41.680 |
we could either annihilate all life on the planet, 04:04:58.360 |
And we may well be at a bifurcation point now, right? 04:05:02.080 |
Where what we do now has significant causal impact 04:05:11.480 |
They're thinking only about their own narrow aims 04:05:17.760 |
Now, of course, I'm thinking about my own narrow aims 04:05:24.220 |
but I'm trying to use as much of my energy and mind 04:05:28.380 |
as I can to push toward this more benevolent alternative, 04:05:47.360 |
but he's much more paranoid than I am, right? 04:06:13.500 |
can lead to an AGI that loves and helps people, 04:06:18.000 |
rather than viewing us as a historical artifact 04:06:29.560 |
like he understands way, way more of the story 04:06:33.120 |
than almost anyone else in such a large-scale 04:06:40.740 |
of these fundamental issues exists out there now. 04:06:45.060 |
That may be different five or 10 years from now, though, 04:06:47.220 |
'cause I can see understanding of AGI and longevity 04:06:51.180 |
and other such issues is certainly much stronger 04:06:54.620 |
and more prevalent now than 10 or 15 years ago, right? 04:07:00.580 |
can be slow learners relative to what I would like, 04:07:05.460 |
but on a historical sense, on the other hand, 04:07:08.400 |
you could say the progress is astoundingly fast. 04:07:11.220 |
- But Elon also said, I think on the Joe Rogan podcast, 04:07:17.380 |
So maybe in that way, you and him are both on the same page 04:07:30.840 |
and about consciousness and about a million topics 04:07:38.580 |
I really loved it. - No, thanks for having me. 04:07:44.380 |
and we dug deep into some very important things. 04:07:51.180 |
Thanks for listening to this conversation with Ben Goertzel, 04:08:04.600 |
and signing up to Masterclass at masterclass.com/lex. 04:08:14.220 |
and the journey I'm on in my research and startup. 04:08:18.880 |
If you enjoy this thing, subscribe on YouTube, 04:08:23.720 |
support it on Patreon, or connect with me on Twitter, 04:08:35.280 |
And now let me leave you with some words from Ben Goertzel. 04:08:38.200 |
"Our language for describing emotions is very crude. 04:08:44.020 |
Thank you for listening, and hope to see you next time.