
Ben Goertzel: Artificial General Intelligence | Lex Fridman Podcast #103


Chapters

0:00 Introduction
3:20 Books that inspired you
6:38 Are there intelligent beings all around us?
13:13 Dostoevsky
15:56 Russian roots
20:19 When did you fall in love with AI?
31:30 Are humans good or evil?
42:04 Colonizing Mars
46:53 Origin of the term AGI
55:56 AGI community
72:36 How to build AGI?
96:47 OpenCog
145:32 SingularityNET
169:33 Sophia
196:02 Coronavirus
204:14 Decentralized mechanisms of power
220:16 Life and death
222:44 Would you live forever?
230:26 Meaning of life
238:03 Hat
238:46 Question for AGI


00:00:00.000 | The following is a conversation with Ben Goertzel,
00:00:03.000 | one of the most interesting minds
00:00:04.560 | in the artificial intelligence community.
00:00:06.800 | He's the founder of SingularityNET,
00:00:08.920 | designer of the OpenCog AI framework,
00:00:11.520 | formerly a director of research
00:00:13.220 | at the Machine Intelligence Research Institute,
00:00:15.720 | and chief scientist of Hanson Robotics,
00:00:18.440 | the company that created the Sophia robot.
00:00:21.000 | He has been a central figure in the AGI community
00:00:23.680 | for many years, including in his organizing
00:00:26.960 | and contributing to the conference
00:00:28.720 | on artificial general intelligence,
00:00:30.920 | the 2020 version of which is actually happening this week,
00:00:34.440 | Wednesday, Thursday, and Friday.
00:00:36.480 | It's virtual and free.
00:00:38.480 | I encourage you to check out the talks,
00:00:40.040 | including by Joscha Bach from episode 101 of this podcast.
00:00:45.040 | Quick summary of the ads.
00:00:46.600 | Two sponsors, the Jordan Harbinger Show and Masterclass.
00:00:51.040 | Please consider supporting this podcast
00:00:52.820 | by going to jordanharbinger.com/lex
00:00:56.520 | and signing up at masterclass.com/lex.
00:01:00.400 | Click the links, buy all the stuff.
00:01:02.840 | It's the best way to support this podcast
00:01:04.640 | and the journey I'm on in my research and startup.
00:01:08.860 | This is the Artificial Intelligence Podcast.
00:01:11.480 | If you enjoy it, subscribe on YouTube,
00:01:13.680 | review it with five stars on Apple Podcast,
00:01:15.940 | support it on Patreon, or connect with me on Twitter,
00:01:18.920 | @lexfridman, spelled without the E, just F-R-I-D-M-A-N.
00:01:25.280 | As usual, I'll do a few minutes of ads now
00:01:27.480 | and never any ads in the middle
00:01:28.920 | that can break the flow of the conversation.
00:01:31.560 | This episode is supported by the Jordan Harbinger Show.
00:01:34.840 | Go to jordanharbinger.com/lex.
00:01:37.400 | It's how he knows I sent you.
00:01:39.240 | On that page, there's links to subscribe to it
00:01:41.660 | on Apple Podcast, Spotify, and everywhere else.
00:01:44.780 | I've been binging on his podcast.
00:01:46.600 | Jordan is great.
00:01:47.800 | He gets the best out of his guests,
00:01:49.320 | dives deep, calls them out when it's needed,
00:01:51.580 | and makes the whole thing fun to listen to.
00:01:53.880 | He's interviewed Kobe Bryant, Mark Cuban,
00:01:57.080 | Neil deGrasse Tyson, Garry Kasparov, and many more.
00:02:00.800 | His conversation with Kobe is a reminder
00:02:03.360 | how much focus and hard work is required for greatness
00:02:07.760 | in sport, business, and life.
00:02:11.120 | I highly recommend the episode if you want to be inspired.
00:02:14.120 | Again, go to jordanharbinger.com/lex.
00:02:17.560 | It's how Jordan knows I sent you.
00:02:19.400 | This show is sponsored by Masterclass.
00:02:23.000 | Sign up at masterclass.com/lex to get a discount
00:02:27.040 | and to support this podcast.
00:02:29.480 | When I first heard about Masterclass,
00:02:31.080 | I thought it was too good to be true.
00:02:32.960 | For 180 bucks a year, you get an all-access pass
00:02:36.120 | to watch courses from, to list some of my favorites:
00:02:39.480 | Chris Hadfield on space exploration,
00:02:41.680 | Neil deGrasse Tyson on scientific thinking
00:02:43.600 | and communication, Will Wright, creator of
00:02:46.960 | the greatest city-building game ever,
00:02:49.000 | SimCity and Sims on game design,
00:02:52.480 | Carlos Santana on guitar, Garry Kasparov,
00:02:56.120 | the greatest chess player ever on chess,
00:02:59.120 | Daniel Negreanu on poker, and many more.
00:03:01.600 | Chris Hadfield explaining how rockets work
00:03:04.280 | and the experience of being launched into space alone
00:03:06.920 | is worth the money.
00:03:08.320 | Once again, sign up at masterclass.com/lex
00:03:11.680 | to get a discount and to support this podcast.
00:03:15.480 | And now, here's my conversation with Ben Goertzel.
00:03:20.760 | What books, authors, ideas had a lot of impact on you
00:03:25.040 | in your life in the early days?
00:03:26.740 | - You know, what got me into AI and science fiction
00:03:32.120 | and such in the first place wasn't a book,
00:03:34.520 | but the original Star Trek TV show,
00:03:37.000 | which my dad watched with me, like, in its first run.
00:03:39.800 | It would have been 1968, '69 or something.
00:03:42.640 | And that was incredible 'cause every show
00:03:45.280 | visited a different alien civilization
00:03:49.080 | with a different culture and weird mechanisms.
00:03:51.240 | But that got me into science fiction,
00:03:54.960 | and there wasn't that much science fiction
00:03:57.160 | to watch on TV at that stage.
00:03:58.680 | So that got me into reading the whole literature
00:04:01.400 | of science fiction, you know,
00:04:03.320 | from the beginning of the previous century until that time.
00:04:07.480 | And I mean, there was so many science fiction writers
00:04:10.840 | who were inspirational to me.
00:04:12.400 | I'd say if I had to pick two,
00:04:14.800 | it would have been Stanislaw Lem, the Polish writer.
00:04:18.720 | - Yeah. - Yeah.
00:04:20.120 | Solaris, and then he had a bunch of more obscure writings
00:04:23.360 | on superhuman AIs that were engineered.
00:04:26.640 | Solaris was sort of a superhuman,
00:04:28.640 | naturally occurring intelligence.
00:04:31.520 | Then Philip K. Dick, who, you know,
00:04:34.760 | ultimately my fandom for Philip K. Dick
00:04:37.320 | is one of the things that brought me together
00:04:39.080 | with David Hanson, my collaborator on robotics projects.
00:04:43.760 | So, you know, Stanislaw Lem was very much an intellectual,
00:04:47.640 | right, so he had a very broad view of intelligence
00:04:51.040 | going beyond the human and into what I would call,
00:04:54.440 | you know, open-ended superintelligence.
00:04:56.920 | The Solaris superintelligent ocean was intelligent
00:05:01.920 | in some ways more generally intelligent than people,
00:05:04.400 | but in a complex and confusing way
00:05:07.360 | so that human beings could never quite connect to it,
00:05:10.160 | but it was still palpably very, very smart.
00:05:13.240 | And then the Golem XIV supercomputer,
00:05:16.560 | in one of Lem's books, this was engineered by people,
00:05:20.360 | but eventually it became very intelligent
00:05:24.360 | in a different direction than humans
00:05:25.960 | and decided that humans were kind of trivial
00:05:29.160 | and not that interesting.
00:05:30.200 | So it put some impenetrable shield around itself,
00:05:35.200 | shut itself off from humanity,
00:05:36.640 | and then issued some philosophical screed
00:05:40.000 | about the pathetic and hopeless nature of humanity
00:05:44.440 | and all human thought, and then disappeared.
00:05:48.360 | Now, Philip K. Dick, he was a bit different.
00:05:51.120 | He was human-focused, right?
00:05:52.440 | His main thing was, you know, human compassion
00:05:55.840 | and the human heart and soul are going to be the constant
00:05:59.520 | that will keep us going through whatever aliens we discover
00:06:03.600 | or telepathy machines or super AIs or whatever it might be.
00:06:08.600 | So he didn't believe in reality,
00:06:11.160 | like the reality that we see may be a simulation
00:06:14.240 | or a dream or something else we can't even comprehend,
00:06:17.560 | but he believed in love and compassion
00:06:19.600 | as something persistent
00:06:21.120 | through the various simulated realities.
00:06:22.960 | So those two science fiction writers
00:06:25.480 | had a huge impact on me.
00:06:27.240 | Then a little older than that,
00:06:28.840 | I got into Dostoevsky and Friedrich Nietzsche
00:06:33.080 | and Rimbaud and a bunch of more literary type writing.
00:06:37.400 | - Can we talk about some of those things?
00:06:38.600 | So on the Solaris side, Stanislaw Lem,
00:06:43.160 | this kind of idea of there being intelligences out there
00:06:47.000 | that are different than our own,
00:06:48.700 | do you think there are intelligences maybe all around us
00:06:53.000 | that we're not able to even detect?
00:06:56.400 | So this kind of idea of,
00:06:58.720 | maybe you can comment also on Stephen Wolfram,
00:07:01.560 | thinking that there's computations all around us
00:07:04.360 | and we're just not smart enough
00:07:05.840 | to kind of detect their intelligence
00:07:09.040 | or appreciate their intelligence.
00:07:10.400 | - Yeah, so my friend Hugo de Garis,
00:07:13.560 | who I've been talking to about these things
00:07:15.840 | for many decades, since the early '90s,
00:07:19.320 | he had an idea he called SIPI,
00:07:21.800 | the Search for Intra Particulate Intelligence.
00:07:25.120 | So the concept there was,
00:07:26.880 | as AIs get smarter and smarter and smarter,
00:07:29.480 | assuming the laws of physics as we know them now
00:07:33.680 | are still what these super intelligences
00:07:37.480 | perceived to hold and are bound by,
00:07:39.240 | as they get smarter and smarter,
00:07:40.440 | they're gonna shrink themselves littler and littler
00:07:43.020 | because special relativity limits how fast
00:07:45.560 | they can communicate between two spatially distant points.
00:07:49.320 | So they're gonna get smaller and smaller,
00:07:50.820 | but then ultimately, what does that mean?
00:07:53.260 | The minds of the super, super, super intelligences,
00:07:56.560 | they're gonna be packed into the interaction
00:07:59.080 | of elementary particles or quarks
00:08:02.000 | or the partons inside quarks or whatever it is.
00:08:04.640 | So what we perceive as random fluctuations
00:08:07.680 | on the quantum or subquantum level
00:08:09.800 | may actually be the thoughts
00:08:11.560 | of the micro, micro, micro miniaturized super intelligences
00:08:16.360 | 'cause there's no way we can tell random from structure
00:08:20.120 | with an algorithmic information content
00:08:21.720 | more complex than our brains, right?
00:08:23.200 | We can't tell the difference.
00:08:24.380 | So what we think is random could be the thought processes
00:08:27.080 | of some really tiny super minds,
00:08:30.080 | and if so, there's not a damn thing we can do about it
00:08:34.080 | except try to upgrade our intelligences
00:08:37.240 | and expand our minds so that we can perceive
00:08:40.120 | more of what's around us.
00:08:41.400 | - But if those random fluctuations,
00:08:44.360 | even if we go to quantum mechanics,
00:08:46.600 | if that's actually super intelligent systems,
00:08:51.280 | aren't we then part of the soup of super intelligence?
00:08:54.720 | Aren't we just like a finger of the entirety of the body
00:08:59.160 | of the super intelligent system?
00:09:01.360 | - We could be.
00:09:02.920 | I mean, a finger is a strange metaphor.
00:09:06.000 | I mean, we--
00:09:07.240 | - Well, a finger is dumb is what I mean.
00:09:09.680 | - But a finger is also useful
00:09:12.320 | and is controlled with intent by the brain,
00:09:14.840 | whereas we may be much less than that, right?
00:09:16.760 | I mean, yeah, we may be just some random epiphenomenon
00:09:21.400 | that they don't care about too much.
00:09:23.360 | Like think about the shape of the crowd
00:09:26.040 | emanating from a sports stadium or something, right?
00:09:28.720 | There's some emergent shape to the crowd.
00:09:30.600 | It's there.
00:09:31.600 | You could take a picture of it.
00:09:32.680 | It's kind of cool.
00:09:33.720 | It's irrelevant to the main point of the sports event
00:09:36.360 | or where the people are going
00:09:37.880 | or what's on the minds of the people
00:09:40.240 | making that shape in the crowd, right?
00:09:41.920 | So we may just be some semi-arbitrary higher level pattern
00:09:46.920 | popping out of a lower level
00:09:49.720 | hyper intelligent self-organization.
00:09:52.280 | And I mean, so be it, right?
00:09:55.880 | I mean, that's one thing that--
00:09:57.080 | - Still a fun ride.
00:09:58.120 | - Yeah, I mean, the older I've gotten,
00:09:59.520 | the more respect I've achieved
00:10:01.760 | for our fundamental ignorance.
00:10:04.200 | I mean, mine and everybody else's.
00:10:06.240 | I mean, I look at my two dogs,
00:10:08.800 | two beautiful little toy poodles,
00:10:10.920 | and they watch me sitting at the computer typing.
00:10:14.760 | They just think I'm sitting there
00:10:15.800 | wiggling my fingers to exercise them, maybe,
00:10:18.120 | or guarding the monitor on the desk.
00:10:19.960 | They have no idea that I'm communicating
00:10:22.320 | with other people halfway around the world,
00:10:24.400 | let alone creating complex algorithms
00:10:27.640 | running in RAM on some computer server
00:10:30.240 | in St. Petersburg or something, right?
00:10:32.520 | Although they're right there in the room with me.
00:10:35.080 | So what things are there right around us
00:10:37.800 | that we're just too stupid or close-minded to comprehend?
00:10:40.800 | Probably quite a lot.
00:10:42.120 | - Your very poodle could also be communicating
00:10:46.200 | across multiple dimensions with other beings,
00:10:49.960 | and you're too unintelligent to understand
00:10:53.200 | the kind of communication mechanism they're going through.
00:10:55.680 | There have been various TV shows
00:10:58.440 | and science fiction novels positing that cats, dolphins,
00:11:02.240 | mice, and whatnot are actually super intelligences
00:11:05.640 | here to observe us.
00:11:07.280 | I would guess, as one or another of the quantum physics founders
00:11:12.280 | said, those theories are not crazy enough to be true.
00:11:15.520 | The reality's probably crazier than that.
00:11:17.680 | - Beautifully put.
00:11:18.520 | So on the human side, with Philip K. Dick and in general,
00:11:25.400 | where do you fall on this idea that love
00:11:28.520 | and just the basic spirit of human nature
00:11:30.600 | persists throughout these multiple realities?
00:11:35.000 | Are you on the side, like the thing that inspires you
00:11:38.440 | about artificial intelligence,
00:11:40.000 | is it the human side of somehow persisting
00:11:46.000 | through all of the different systems we engineer,
00:11:49.840 | or is AI inspire you to create something
00:11:53.360 | that's greater than human, that's beyond human,
00:11:55.480 | that's almost non-human?
00:11:57.480 | - I would say my motivation to create AGI
00:12:02.840 | comes from both of those directions, actually.
00:12:05.240 | So when I first became passionate about AGI,
00:12:08.640 | when I was, it would have been two or three years old
00:12:11.440 | after watching robots on Star Trek,
00:12:14.680 | I mean, then it was really a combination
00:12:18.200 | of intellectual curiosity, like can a machine really think,
00:12:21.480 | how would you do that?
00:12:22.920 | And yeah, just ambition to create something much better
00:12:27.280 | than all the clearly limited
00:12:28.760 | and fundamentally defective humans I saw around me.
00:12:32.000 | Then as I got older and got more enmeshed
00:12:35.440 | in the human world and got married, had children,
00:12:38.840 | saw my parents begin to age, I started to realize,
00:12:41.960 | well, not only will AGI let you go far beyond
00:12:45.400 | the limitations of the human,
00:12:46.920 | but it could also stop us from dying and suffering
00:12:50.960 | and feeling pain and tormenting ourselves mentally.
00:12:54.960 | So you can see AGI has amazing capability to do good
00:12:59.560 | for humans, as humans, alongside with its capability
00:13:03.440 | to go far, far beyond the human level.
00:13:06.600 | So I mean, both aspects are there,
00:13:09.960 | which makes it even more exciting and important.
00:13:13.240 | - So you mentioned Dostoevsky and Nietzsche.
00:13:15.480 | Where did you pick up from those guys?
00:13:17.040 | I mean-- (laughs)
00:13:18.960 | - That would probably go beyond the scope
00:13:21.520 | of a brief interview, certainly.
00:13:24.320 | But both of those are amazing thinkers
00:13:26.760 | who one will necessarily have a complex relationship with.
00:13:31.760 | So, I mean, Dostoevsky, on the minus side,
00:13:36.480 | he's kind of a religious fanatic,
00:13:38.480 | and he sort of helped squash the Russian nihilist movement,
00:13:42.000 | which was very interesting,
00:13:43.120 | 'cause what nihilism meant originally
00:13:45.800 | in that period of the mid-late 1800s in Russia
00:13:48.600 | was not taking anything fully 100% for granted.
00:13:52.160 | It was really more like what we'd call Bayesianism now,
00:13:54.360 | where you don't want to adopt anything
00:13:56.880 | as a dogmatic certitude and always leave your mind open.
00:14:01.000 | And how Dostoevsky parodied nihilism was a bit different.
00:14:06.000 | He parodied it as people who believe absolutely nothing,
00:14:10.320 | so they must assign an equal probability weight
00:14:12.960 | to every proposition, which doesn't really work.
00:14:17.720 | So on the one hand, I didn't really agree with Dostoevsky
00:14:22.520 | on his sort of religious point of view.
00:14:25.280 | On the other hand, if you look at his understanding
00:14:29.640 | of human nature and sort of the human mind
00:14:32.640 | and heart and soul, it's really unparalleled.
00:14:37.080 | He had an amazing view of how human beings
00:14:40.800 | construct a world for themselves
00:14:43.400 | based on their own understanding
00:14:45.600 | and their own mental predisposition.
00:14:47.680 | And I think if you look in "The Brothers Karamazov"
00:14:50.320 | in particular, the Russian literary theorist
00:14:54.800 | Mikhail Bakhtin wrote about this
00:14:57.000 | as a polyphonic mode of fiction,
00:14:59.800 | which means it's not third person,
00:15:02.520 | but it's not first person from any one person really.
00:15:05.240 | There are many different characters in the novel
00:15:07.240 | and each of them is sort of telling part of the story
00:15:10.240 | from their own point of view.
00:15:11.800 | So the reality of the whole story is an intersection
00:15:16.120 | like synergetically of the many different
00:15:18.600 | characters' world views.
00:15:19.800 | And that really, it's a beautiful metaphor
00:15:23.400 | and even a reflection, I think,
00:15:24.920 | of how all of us socially create our reality.
00:15:27.880 | Like each of us sees the world in a certain way.
00:15:31.240 | Each of us, in a sense, is making the world as we see it
00:15:34.960 | based on our own minds and understanding,
00:15:37.800 | but it's polyphony, like in music
00:15:41.160 | where multiple instruments are coming together
00:15:43.520 | to create the sound.
00:15:44.840 | The ultimate reality that's created
00:15:46.880 | comes out of each of our subjective understandings
00:15:50.440 | intersecting with each other.
00:15:51.520 | And that was one of the many beautiful things in Dostoevsky.
00:15:55.880 | - So maybe a little bit to mention,
00:15:58.200 | you have a connection to Russia and the Soviet culture.
00:16:02.480 | I mean, I'm not sure exactly what the nature
00:16:04.080 | of the connection is, but at least the spirit
00:16:06.400 | of your thinking is in there.
00:16:07.600 | - Well, my ancestry is three quarters
00:16:11.160 | Eastern European Jewish.
00:16:13.000 | So, I mean, three of my great-grandparents
00:16:17.000 | emigrated to New York from Lithuania
00:16:20.600 | and sort of border regions of Poland,
00:16:23.320 | which were in and out of Poland
00:16:24.840 | around the time of World War I.
00:16:28.280 | And they were socialists and communists
00:16:33.040 | as well as Jews, mostly Menshevik, not Bolshevik.
00:16:36.240 | And they sort of, they fled at just the right time
00:16:39.280 | to the US for their own personal reasons.
00:16:41.240 | And then almost all, or maybe all of my extended family
00:16:45.600 | that remained in Eastern Europe was killed
00:16:47.240 | either by Hitler's or Stalin's minions at some point.
00:16:50.400 | So the branch of the family that emigrated to the US
00:16:53.600 | was pretty much the only one that survived.
00:16:56.760 | - So how much of the spirit of the people
00:16:58.680 | is in your blood still?
00:16:59.920 | Like, when you look in the mirror, do you see,
00:17:02.800 | what do you see?
00:17:04.880 | - Meat.
00:17:05.720 | I see a bag of meat that I want to transcend
00:17:08.440 | by uploading into some sort of superior reality.
00:17:12.080 | (laughing)
00:17:14.320 | I mean, yeah, very clearly.
00:17:17.760 | - Well put.
00:17:18.600 | - I mean, I'm not religious in a traditional sense,
00:17:22.240 | but clearly the Eastern European Jewish tradition
00:17:27.240 | was what I was raised in.
00:17:28.760 | I mean, there was, my grandfather, Leo Zwell,
00:17:32.680 | was a physical chemist who worked with Linus Pauling
00:17:35.360 | and a bunch of the other early greats in quantum mechanics.
00:17:38.120 | I mean, he was into X-ray diffraction.
00:17:41.240 | He was on the material science side,
00:17:42.960 | experimentalist rather than a theorist.
00:17:45.440 | His sister was also a physicist.
00:17:47.720 | And my father's father, Victor Goertzel,
00:17:51.120 | was a PhD in psychology who had the unenviable job
00:17:56.120 | of giving psychotherapy to the Japanese
00:17:59.280 | in internment camps in the US in World War II,
00:18:03.080 | like to counsel them why they shouldn't kill themselves,
00:18:05.800 | even though they'd had all their stuff taken away
00:18:08.440 | and been imprisoned for no good reason.
00:18:10.320 | So I mean, yeah, there's a lot of Eastern European
00:18:15.320 | Jewish tradition in my background.
00:18:18.080 | One of my great uncles was, I guess,
00:18:20.200 | conductor of the San Francisco Orchestra,
00:18:22.440 | Mickey Salkind, so there's a
00:18:25.640 | bunch of music in there also.
00:18:27.640 | And clearly this culture was all about learning
00:18:31.520 | and understanding the world
00:18:34.880 | and also not quite taking yourself
00:18:38.000 | too seriously while you do it, right?
00:18:39.840 | There's a lot of Yiddish humor in there.
00:18:42.000 | So I do appreciate that culture,
00:18:45.200 | although the whole idea that the Jews
00:18:47.560 | are the chosen people of God
00:18:49.000 | never resonated with me too much.
00:18:51.720 | - The graph of the Goertzel family,
00:18:55.080 | I mean, just the people I've encountered
00:18:56.920 | just doing some research and just knowing your work
00:18:59.520 | through the decades, it's kind of fascinating.
00:19:03.600 | Just the number of PhDs.
00:19:06.400 | - Yeah, yeah.
00:19:07.240 | I mean, my dad is a sociology professor
00:19:10.760 | who recently retired from Rutgers University.
00:19:15.040 | But clearly that gave me a head start in life.
00:19:18.560 | I mean, my grandfather gave me all those
00:19:20.520 | quantum mechanics books when I was like
00:19:22.000 | seven or eight years old.
00:19:24.240 | I remember going through them
00:19:26.080 | and it was all the old quantum mechanics,
00:19:28.040 | like Rutherford atoms and stuff.
00:19:30.440 | So I got to the part of wave functions,
00:19:32.880 | which I didn't understand,
00:19:34.280 | although I was a very bright kid.
00:19:36.160 | And I realized he didn't quite understand it either,
00:19:38.680 | but at least, like he pointed me to some professor
00:19:42.000 | he knew at UPenn nearby who understood these things.
00:19:45.360 | So that's an unusual opportunity for a kid to have.
00:19:49.640 | My dad, he was programming Fortran
00:19:52.400 | when I was 10 or 11 years old
00:19:53.920 | on like HP 3000 mainframes at Rutgers University.
00:19:57.640 | So I got to do linear regression in Fortran
00:20:00.680 | on punch cards when I was in middle school,
00:20:04.200 | 'cause he was doing, I guess,
00:20:05.960 | analysis of demographic and sociology data.
00:20:09.600 | So yes, certainly that gave me a head start
00:20:14.600 | and a push towards science
00:20:16.400 | beyond what would have been the case
00:20:17.620 | with many, many different situations.
00:20:19.720 | - When did you first fall in love with AI?
00:20:21.320 | Is it the programming side of Fortran?
00:20:24.680 | Is it maybe the sociology, psychology
00:20:27.280 | that you picked up from your dad?
00:20:28.280 | - I fell in love with AI
00:20:29.280 | when I was probably three years old
00:20:30.680 | when I saw a robot on Star Trek.
00:20:32.600 | It was turning around in a circle going,
00:20:34.600 | error, error, error, error,
00:20:36.640 | because Spock and Kirk had tricked it
00:20:39.560 | into a mechanical breakdown
00:20:40.760 | by presenting it with a logical paradox.
00:20:42.920 | And I was just like, well, this makes no sense.
00:20:45.640 | This AI is very, very smart.
00:20:47.520 | It's been traveling all around the universe,
00:20:49.600 | but these people could trick it
00:20:51.000 | with a simple logical paradox.
00:20:52.680 | Like, if the human brain can get beyond that paradox,
00:20:57.040 | why can't this AI?
00:20:59.480 | So I felt the screenwriters of Star Trek
00:21:03.160 | had misunderstood the nature of intelligence.
00:21:06.080 | And I complained to my dad about it,
00:21:07.600 | and he wasn't gonna say anything one way or the other.
00:21:12.240 | But before I was born,
00:21:16.040 | when my dad was at Antioch College
00:21:18.480 | in the middle of the US,
00:21:20.840 | he led a protest movement called SLAM,
00:21:25.840 | Student League Against Mortality.
00:21:27.480 | They were protesting against death
00:21:29.960 | wandering across the campus.
00:21:31.520 | So he was into some futuristic things, even back then,
00:21:35.880 | but whether AI could confront logical paradoxes or not,
00:21:40.240 | he didn't know.
00:21:41.200 | But 10 years after that or something,
00:21:44.800 | I discovered Douglas Hofstadter's book, "Gödel, Escher, Bach."
00:21:48.480 | And that was sort of to the same point of AI
00:21:51.160 | and paradox and logic, right?
00:21:52.680 | 'Cause he went over and over
00:21:54.520 | Gödel's incompleteness theorem.
00:21:56.240 | And can an AI really fully model itself reflexively,
00:22:00.600 | or does that lead you into some paradox?
00:22:02.880 | Can the human mind truly model itself reflexively,
00:22:05.320 | or does that lead you into some paradox?
00:22:07.600 | So I think that book, "Gödel, Escher, Bach,"
00:22:10.720 | which I think I read when it first came out.
00:22:13.560 | I would've been 12 years old or something.
00:22:15.040 | I remember it was like a 16-hour day.
00:22:17.160 | I read it cover to cover, and then re-read it.
00:22:19.800 | I re-read it after that,
00:22:21.320 | 'cause there was a lot of weird things
00:22:22.460 | with little formal systems in there
00:22:24.440 | that were hard for me at the time.
00:22:25.680 | But that was the first book I read
00:22:28.000 | that gave me a feeling for AI
00:22:31.440 | as like a practical academic or engineering discipline
00:22:35.800 | that people were working in.
00:22:37.400 | 'Cause before I read "Gödel, Escher, Bach,"
00:22:40.060 | I was into AI from the point of view
00:22:42.000 | of a science fiction fan.
00:22:44.000 | And I had the idea, well, it may be a long time
00:22:47.440 | before we can achieve immortality and superhuman AGI.
00:22:50.440 | So I should figure out how to build a spacecraft
00:22:54.400 | traveling close to the speed of light,
00:22:56.360 | go far away, then come back to the Earth
00:22:58.000 | in a million years when technology is more advanced
00:23:00.240 | and we can build these things.
00:23:01.720 | Reading "Gödel, Escher, Bach,"
00:23:03.600 | while it didn't all ring true to me, a lot of it did,
00:23:06.600 | but I could see there are smart people right now
00:23:09.880 | at various universities around me
00:23:11.600 | who are actually trying to work on building
00:23:15.480 | what I would now call AGI,
00:23:17.000 | although Hofstadter didn't call it that.
00:23:19.040 | So really, it was when I read that book,
00:23:21.120 | which would have been probably middle school,
00:23:23.560 | that then I started to think,
00:23:24.800 | well, this is something that I could practically work on.
00:23:29.120 | - Yeah, as opposed to flying away and waiting it out,
00:23:31.680 | you can actually be one of the people
00:23:33.480 | that actually builds the system.
00:23:34.560 | - Yeah, exactly, and if you think about,
00:23:36.520 | I mean, I was interested in what we'd now call
00:23:39.360 | nanotechnology and in the human immortality and time travel,
00:23:44.360 | all the same cool things as every other
00:23:46.960 | science fiction-loving kid,
00:23:49.280 | but AI seemed like if Hofstadter was right,
00:23:52.720 | you just figure out the right program,
00:23:54.160 | sit there and type it.
00:23:55.080 | Like, you don't need to spin stars into weird configurations
00:23:59.640 | or get government approval to cut people up
00:24:02.640 | and fiddle with their DNA or something, right?
00:24:05.000 | It's just programming, and then, of course,
00:24:07.600 | that can achieve anything else.
00:24:09.800 | There's another book from back then,
00:24:12.200 | which was by Feinberg, Gerald Feinberg,
00:24:17.040 | who was a physicist at Princeton,
00:24:21.580 | and that was the Prometheus Project,
00:24:24.600 | and this book was written in the late 1960s,
00:24:26.680 | so I encountered it in the mid '70s,
00:24:28.800 | but what this book said is in the next few decades,
00:24:30.920 | humanity is gonna create superhuman-thinking machines,
00:24:34.500 | molecular nanotechnology, and human immortality,
00:24:37.480 | and then the challenge we'll have is what to do with it.
00:24:41.160 | Do we use it to expand human consciousness
00:24:43.040 | in a positive direction,
00:24:44.540 | or do we use it just to further vapid consumerism?
00:24:49.540 | And what he proposed was that the UN
00:24:51.840 | should do a survey on this,
00:24:53.480 | and the UN should send people out to every little village
00:24:56.480 | in remotest Africa or South America
00:24:58.960 | and explain to everyone what technology
00:25:01.340 | was gonna bring the next few decades
00:25:03.040 | and the choice that we had about how to use it,
00:25:05.040 | and let everyone on the whole planet vote
00:25:07.800 | about whether we should develop
00:25:09.720 | super AI nanotechnology and immortality
00:25:14.320 | for expanded consciousness or for rampant consumerism.
00:25:18.160 | And needless to say, that didn't quite happen,
00:25:22.040 | and I think this guy died in the mid '80s,
00:25:24.120 | so he didn't even see his ideas
00:25:25.600 | start to become more mainstream,
00:25:28.160 | but it's interesting, many of the themes
00:25:30.600 | I'm engaged with now, from AGI and immortality,
00:25:33.300 | even to trying to democratize technology,
00:25:36.120 | as I've been pushing forward with SingularityNET
00:25:37.920 | and my work in the blockchain world,
00:25:40.000 | many of these themes were there in Feinbaum's book
00:25:43.560 | in the late '60s even.
00:25:47.920 | And of course, Valentin Turchin, a Russian writer,
00:25:52.920 | and a great Russian physicist who I got to know
00:25:55.800 | when we both lived in New York in the late '90s
00:25:59.000 | and early aughts, he had a book in the late '60s in Russia
00:26:03.360 | which was The Phenomenon of Science,
00:26:05.760 | which laid out all these same things as well.
00:26:10.160 | And Val died in, I don't remember,
00:26:12.720 | 2004 or five or something, of Parkinsonism.
00:26:15.360 | So yeah, it's easy for people to lose track now
00:26:20.360 | of the fact that the futurist
00:26:23.920 | and the singularitarian advanced technology ideas
00:26:27.800 | that are now almost mainstream
00:26:29.720 | and are on TV all the time, I mean,
00:26:31.420 | these are not that new, right?
00:26:34.080 | They're sort of new in the history of the human species,
00:26:37.120 | but I mean, these were all around in fairly mature form
00:26:41.120 | in the middle of the last century,
00:26:43.680 | were written about quite articulately
00:26:45.480 | by fairly mainstream people
00:26:47.360 | who were professors at top universities.
00:26:50.160 | It's just until the enabling technologies
00:26:52.960 | got to a certain point, then you couldn't make it real.
00:26:57.960 | So even in the '70s, I was sort of seeing that
00:27:02.800 | and living through it, right?
00:27:04.760 | From Star Trek to Douglas Hofstadter,
00:27:07.920 | things were getting very, very practical
00:27:09.680 | from the late '60s to the late '70s.
00:27:11.960 | And the first computer I bought,
00:27:15.000 | you could only program with hexadecimal machine code
00:27:17.560 | and you had to solder it together.
00:27:19.360 | And then a few years later, there's punch cards,
00:27:23.440 | and a few years later, you could get Atari 400
00:27:27.200 | and Commodore VIC-20, and you could type on a keyboard
00:27:30.280 | and program in higher level languages
00:27:32.840 | alongside the assembly language.
00:27:34.640 | So these ideas have been building up a while,
00:27:38.720 | and I guess my generation got to feel them build up,
00:27:42.960 | which is different than people coming into the field now,
00:27:46.400 | for whom these things have just been part of the ambiance
00:27:50.280 | of culture for their whole career, or even their whole life.
00:27:54.120 | - Well, it's fascinating to think about
00:27:56.280 | there being all of these ideas kind of swimming
00:28:00.000 | almost with a noise all around the world,
00:28:02.760 | all the different generations,
00:28:04.480 | and then some kind of nonlinear thing happens
00:28:07.880 | where they percolate up
00:28:09.360 | and capture the imagination of the mainstream.
00:28:12.400 | And that seems to be what's happening with AI now.
00:28:14.760 | - I mean, Nietzsche, who you mentioned,
00:28:16.120 | had the idea of the Superman, right?
00:28:18.240 | But he didn't understand enough about technology
00:28:21.560 | to think you could physically engineer a Superman
00:28:24.880 | by piecing together molecules in a certain way.
00:28:28.600 | He was a bit vague about how the Superman would appear,
00:28:33.600 | but he was quite deep in thinking about
00:28:36.080 | what the state of consciousness
00:28:37.800 | and the mode of cognition of a Superman would be.
00:28:42.440 | He was a very astute analyst of how the human mind
00:28:47.440 | constructs the illusion of a self,
00:28:49.400 | how it constructs the illusion of free will,
00:28:52.120 | how it constructs values like good and evil
00:28:56.600 | out of its own desire to maintain
00:28:59.640 | and advance its own organism.
00:29:01.320 | He understood a lot about how human minds work.
00:29:03.920 | Then he understood a lot
00:29:05.560 | about how post-human minds would work.
00:29:07.520 | I mean, the Superman was supposed to be a mind
00:29:10.160 | that would basically have complete root access
00:29:13.200 | to its own brain and consciousness
00:29:15.960 | and be able to architect its own value system
00:29:19.560 | and inspect and fine-tune all of its own biases.
00:29:24.280 | So that's a lot of powerful thinking there,
00:29:27.320 | which then fed in and sort of seeded
00:29:29.320 | all of postmodern continental philosophy
00:29:32.160 | and all sorts of things that have been very valuable
00:29:35.480 | in development of culture and indirectly even of technology.
00:29:39.680 | But of course, without the technology there,
00:29:42.080 | it was all some quite abstract thinking.
00:29:44.800 | So now we're at a time in history
00:29:46.880 | when a lot of these ideas can be made real,
00:29:51.680 | which is amazing and scary, right?
00:29:54.280 | - It's kind of interesting to think,
00:29:56.000 | what do you think Nietzsche would,
00:29:57.080 | if he was born a century later,
00:29:59.160 | or transported through time,
00:30:00.880 | what do you think he would say about AI?
00:30:02.920 | I mean-- - Well, those are
00:30:03.760 | quite different.
00:30:04.580 | If he's born a century later or transported through time--
00:30:07.240 | - Well, he'd be on TikTok and Instagram
00:30:09.560 | and he would never write the great works he's written,
00:30:11.920 | so let's transport him through time.
00:30:13.520 | - Maybe "Also sprach Zarathustra" would be a music video,
00:30:16.440 | right, I mean, who knows?
00:30:19.640 | - Yeah, but if he was transported through time,
00:30:21.640 | do you think, that'd be interesting actually to go back,
00:30:26.240 | you just made me realize that it's possible to go back
00:30:29.320 | and read Nietzsche with an eye of,
00:30:31.200 | is there some thinking about artificial beings?
00:30:34.680 | I'm sure he had inklings, I mean, with Frankenstein,
00:30:39.680 | before him, I'm sure he had inklings of artificial beings
00:30:42.840 | somewhere in the text.
00:30:44.040 | It'd be interesting to see, to try to read his work,
00:30:46.880 | to see if he had an,
00:30:48.680 | if Superman was actually an AGI system,
00:30:53.680 | like if he had inklings of that kind of thinking.
00:30:57.880 | - He didn't. - He didn't.
00:30:59.400 | - No, I would say not.
00:31:01.080 | I mean, he had a lot of inklings of modern cognitive science,
00:31:06.080 | which are very interesting.
00:31:07.400 | If you look in the third part of the collection
00:31:11.800 | that's been titled The Will to Power,
00:31:13.520 | I mean, in book three there,
00:31:15.680 | there's very deep analysis of thinking processes,
00:31:20.640 | but he wasn't so much of a physical tinkerer type guy,
00:31:25.640 | right, it was very abstract.
00:31:29.680 | - Do you think, what do you think about the will to power?
00:31:32.800 | Do you think human, what do you think drives humans?
00:31:36.160 | Is it--
00:31:37.480 | - Oh, an unholy mix of things.
00:31:39.520 | I don't think there's one pure, simple,
00:31:42.440 | and elegant objective function driving humans by any means.
00:31:47.440 | - What do you think, if we look at,
00:31:50.720 | I know it's hard to look at humans in an aggregate,
00:31:53.280 | but do you think overall humans are good?
00:31:56.240 | Or do we have both good and evil within us
00:32:01.600 | that depending on the circumstances,
00:32:03.560 | depending on the whatever can percolate to the top?
00:32:08.280 | - Good and evil are very ambiguous,
00:32:13.200 | complicated, and in some ways silly concepts,
00:32:15.960 | but if we could dig into your question
00:32:18.600 | from a couple directions.
00:32:19.720 | So I think if you look in evolution,
00:32:23.480 | humanity is shaped both by individual selection
00:32:28.280 | and what biologists would call group selection,
00:32:30.960 | like tribe level selection, right?
00:32:32.800 | So individual selection has driven us
00:32:36.560 | in a selfish DNA sort of way,
00:32:39.320 | so that each of us does to a certain approximation
00:32:43.280 | what will help us propagate our DNA to future generations.
00:32:47.440 | I mean, that's why I've got four kids so far,
00:32:50.760 | and probably that's not the last one.
00:32:53.960 | On the other hand--
00:32:55.040 | - I like the ambition.
00:32:56.840 | - Tribal, like group selection,
00:32:58.640 | means humans in a way will do what advances
00:33:02.840 | the persistence of the DNA of their whole tribe,
00:33:06.320 | or their social group.
00:33:08.120 | And in biology, you have both of these, right?
00:33:11.760 | And you can see, say, an ant colony or a beehive,
00:33:14.440 | there's a lot of group selection
00:33:15.960 | in the evolution of those social animals.
00:33:18.960 | On the other hand, say, a big cat
00:33:21.480 | or some very solitary animal,
00:33:23.280 | it's a lot more biased toward individual selection.
00:33:26.560 | Humans are an interesting balance,
00:33:28.680 | and I think this reflects itself
00:33:31.560 | in what we would view as selfishness versus altruism,
00:33:35.040 | to some extent.
00:33:36.800 | So we just have both of those objective functions
00:33:40.560 | contributing to the makeup of our brains.
00:33:43.800 | And then as Nietzsche analyzed in his own way,
00:33:47.320 | and others have analyzed in different ways,
00:33:49.040 | I mean, we abstract this as,
00:33:51.320 | well, we have both good and evil within us, right?
00:33:55.360 | 'Cause a lot of what we view as evil
00:33:57.840 | is really just selfishness.
00:34:00.480 | A lot of what we view as good is altruism,
00:34:03.720 | which means doing what's good for the tribe.
00:34:07.200 | And on that level, we have both of those
00:34:09.640 | just baked into us, and that's how it is.
00:34:13.120 | Of course, there are psychopaths and sociopaths
00:34:17.000 | and people who get gratified by the suffering of others,
00:34:21.280 | and that's a different thing.
00:34:25.200 | - Yeah, those are exceptions, but on the whole.
00:34:26.920 | - Yeah, but I think at core, we're not purely selfish,
00:34:31.480 | we're not purely altruistic.
00:34:33.120 | We are a mix, and that's the nature of it.
00:34:37.920 | And we also have a complex constellation of values
00:34:42.920 | that are just very specific to our evolutionary history.
00:34:48.280 | Like we love waterways and mountains,
00:34:52.400 | and the ideal place to put a house
00:34:54.360 | is in a mountain overlooking the water, right?
00:34:56.240 | And we care a lot about our kids,
00:35:00.520 | and we care a little less about our cousins,
00:35:02.760 | and even less about our fifth cousins.
00:35:04.360 | I mean, there are many particularities to human values,
00:35:09.360 | which whether they're good or evil
00:35:11.840 | depends on your perspective.
00:35:15.800 | Say, I spent a lot of time in Ethiopia, in Addis Ababa,
00:35:19.600 | where we have one of our AI development offices
00:35:22.400 | for my SingularityNET project.
00:35:24.360 | And when I walk through the streets in Addis,
00:35:27.680 | there's people lying by the side of the road,
00:35:31.400 | like just living there by the side of the road,
00:35:33.880 | dying probably of curable diseases
00:35:35.760 | without enough food or medicine.
00:35:37.920 | And when I walk by them, I feel terrible, I give them money.
00:35:41.400 | When I come back home to the developed world,
00:35:43.840 | they're not on my mind that much.
00:35:46.560 | I do donate some, but I mean,
00:35:48.600 | I also spend some of the limited money I have
00:35:52.800 | enjoying myself in frivolous ways
00:35:54.680 | rather than donating it to those people
00:35:56.720 | who are right now starving, dying,
00:35:59.040 | and suffering on the roadside.
00:36:01.000 | So does that make me evil?
00:36:03.120 | I mean, it makes me somewhat selfish and somewhat altruistic
00:36:06.680 | and we each balance that in our own way, right?
00:36:10.880 | So whether that will be true of all possible AGIs
00:36:15.880 | is a subtler question.
00:36:19.240 | So that's how humans are.
00:36:21.280 | - So you have a sense, you kind of mentioned
00:36:23.040 | that there's a selfish,
00:36:25.480 | I'm not gonna bring up the whole Ayn Rand idea
00:36:28.240 | of selfishness being the core virtue.
00:36:31.080 | That's a whole interesting kind of tangent
00:36:33.960 | that I think we'll just distract ourselves on.
00:36:36.160 | - I have to make one amusing comment.
00:36:38.400 | - Sure.
00:36:39.240 | - Comment that has amused me anyway.
00:36:41.200 | So the, yeah, I have extraordinary negative respect
00:36:46.200 | for Ayn Rand.
00:36:47.760 | - Negative, what's a negative respect?
00:36:50.160 | - But when I worked with a company called Genescient,
00:36:54.760 | which was evolving flies to have extraordinary long lives
00:36:59.200 | in Southern California.
00:37:01.280 | So we had flies that were evolved by artificial selection
00:37:05.000 | to have five times the lifespan of normal fruit flies.
00:37:07.680 | But the population of super long-lived flies
00:37:11.800 | was physically sitting in a spare room
00:37:14.080 | at an Ayn Rand elementary school in Southern California.
00:37:18.160 | So that was just like, well, if I saw this in a movie,
00:37:21.320 | I wouldn't believe it.
00:37:24.020 | - Well, yeah, the universe has a sense of humor
00:37:26.040 | in that kind of way.
00:37:26.880 | That fits in, humor fits in somehow
00:37:28.920 | into this whole absurd existence.
00:37:30.660 | But you mentioned the balance between selfishness
00:37:34.040 | and altruism as kind of being innate.
00:37:37.240 | Do you think it's possible that's kind of an emergent
00:37:39.880 | phenomena, those peculiarities of our value system,
00:37:45.480 | how much of it is innate?
00:37:47.200 | How much of it is something we collectively,
00:37:49.800 | kind of like a Dostoevsky novel,
00:37:52.320 | bring to life together as a civilization?
00:37:54.540 | - I mean, the answer to nature versus nurture
00:37:57.740 | is usually both.
00:37:58.900 | And of course, it's nature versus nurture
00:38:01.820 | versus self-organization, as you mentioned.
00:38:04.820 | So clearly, there are evolutionary roots
00:38:08.500 | to individual and group selection
00:38:11.500 | leading to a mix of selfishness and altruism.
00:38:13.940 | On the other hand, different cultures manifest that
00:38:18.020 | in different ways.
00:38:19.780 | Well, we all have basically the same biology.
00:38:22.540 | And if you look at sort of pre-civilized cultures,
00:38:26.660 | you have tribes like the Yanomamo in Venezuela,
00:38:29.320 | whose culture is focused on killing other tribes.
00:38:34.320 | And you have other Stone Age tribes
00:38:37.640 | that are mostly peaceable
00:38:39.500 | and have big taboos against violence.
00:38:41.440 | So you can certainly have a big difference
00:38:43.920 | in how culture manifests these innate biological
00:38:49.880 | characteristics, but still, there's probably limits
00:38:54.720 | that are given by our biology.
00:38:56.720 | I used to argue this with my great-grandparents
00:39:00.040 | who were Marxists, actually,
00:39:01.480 | because they believed in the withering away of the state.
00:39:04.520 | Like, they believed that as you move from capitalism
00:39:08.240 | to socialism to communism,
00:39:10.640 | people would just become more social-minded
00:39:13.400 | so that a state would be unnecessary
00:39:15.920 | and people would just give,
00:39:17.800 | everyone would give everyone else what they needed.
00:39:20.920 | Now, setting aside that that's not what the various
00:39:24.440 | Marxist experiments on the planet
00:39:26.600 | seem to be heading toward in practice,
00:39:29.860 | just as a theoretical point,
00:39:32.700 | I was very dubious that human nature could go there.
00:39:37.480 | Like, at that time, when my great-grandparents were alive,
00:39:39.840 | I was just like, you know, I'm a cynical teenager.
00:39:43.240 | I think humans are just jerks.
00:39:46.020 | The state is not gonna wither away.
00:39:48.040 | If you don't have some structure keeping people
00:39:50.700 | from screwing each other over, they're gonna do it.
00:39:52.920 | So now I actually don't quite see things that way.
00:39:56.240 | I mean, I think my feeling now subjectively
00:39:59.920 | is the culture aspect is more significant
00:40:02.640 | than I thought it was when I was a teenager.
00:40:04.640 | And I think you could have a human society
00:40:08.280 | that was dialed dramatically further toward,
00:40:11.440 | you know, self-awareness, other awareness, compassion,
00:40:14.440 | and sharing than our current society.
00:40:16.980 | And of course, greater material abundance helps,
00:40:20.580 | but to some extent, material abundance
00:40:23.480 | is a subjective perception also,
00:40:25.360 | because many Stone Age cultures perceived themselves
00:40:28.280 | as living in great material abundance,
00:40:30.520 | that they had all the food and water they wanted,
00:40:32.120 | they lived in a beautiful place,
00:40:33.520 | that they had sex lives, that they had children.
00:40:37.480 | I mean, they had abundance without any factories, right?
00:40:42.920 | So I think humanity probably would be capable
00:40:46.480 | of fundamentally more positive and joy-filled mode
00:40:51.160 | of social existence than what we have now.
00:40:55.580 | Clearly, Marx didn't quite have the right idea
00:40:59.480 | about how to get there.
00:41:01.800 | I mean, he missed a number of key aspects
00:41:05.640 | of human society and its evolution.
00:41:09.520 | And if we look at where we are in society now,
00:41:12.820 | how to get there is a quite different question,
00:41:15.780 | because there are very powerful forces
00:41:18.140 | pushing people in different directions
00:41:21.100 | than a positive, joyous, compassionate existence, right?
00:41:26.100 | - So if we were to try to, you know,
00:41:28.860 | Elon Musk is dreams of colonizing Mars at the moment,
00:41:32.860 | so maybe he'll have a chance to start a new civilization
00:41:36.900 | with a new governmental system.
00:41:38.420 | And certainly, there's quite a bit of chaos.
00:41:41.620 | We're sitting now, I don't know what the date is,
00:41:44.340 | but this is June.
00:41:46.900 | There's quite a bit of chaos in all different forms
00:41:49.300 | going on in the United States and all over the world.
00:41:52.100 | So there's a hunger for new types of governments,
00:41:55.580 | new types of leadership, new types of systems.
00:41:58.280 | And so what are the forces at play?
00:42:02.020 | And how do we move forward?
00:42:04.180 | - Yeah, I mean, colonizing Mars, first of all,
00:42:06.580 | it's a super cool thing to do.
00:42:09.020 | We should be doing it.
00:42:10.100 | - So you love the idea?
00:42:11.580 | - Yeah, I mean, it's more important than making
00:42:14.780 | chocolatier chocolates and sexier lingerie
00:42:18.500 | and many of the things that we spend
00:42:21.020 | a lot more resources on as a species, right?
00:42:24.120 | So I mean, we certainly should do it.
00:42:26.460 | I think the possible futures in which a Mars colony
00:42:31.460 | makes a critical difference for humanity are very few.
00:42:38.020 | I mean, I think, I mean, assuming we make a Mars colony
00:42:42.180 | and people go live there in a couple of decades,
00:42:43.980 | I mean, their supplies are gonna come from Earth.
00:42:46.340 | The money to make the colony came from Earth
00:42:48.780 | and whatever powers are supplying the goods there
00:42:53.700 | from Earth are gonna in effect be in control
00:42:56.780 | of that Mars colony.
00:42:58.660 | Of course, there are outlier situations where Earth
00:43:04.180 | gets nuked into oblivion and somehow Mars
00:43:07.980 | has been made self-sustaining by that point
00:43:10.740 | and then Mars is what allows humanity to persist.
00:43:14.180 | But I think that those are very, very, very unlikely.
00:43:19.180 | - So you don't think it could be a first step
00:43:21.300 | on a long journey?
00:43:22.980 | - Of course, it's a first step on a long journey,
00:43:24.660 | which is awesome.
00:43:27.060 | I'm guessing the colonization of the rest
00:43:30.900 | of the physical universe will probably be done
00:43:33.180 | by AGIs that are better designed to live in space
00:43:38.100 | than by the meat machines that we are.
00:43:41.780 | But I mean, who knows, we may cryopreserve ourselves
00:43:44.700 | in some superior way to what we know now
00:43:46.700 | and like shoot ourselves out to Alpha Centauri and beyond.
00:43:50.700 | I mean, that's all cool, it's very interesting
00:43:53.900 | and it's much more valuable than most things
00:43:56.340 | that humanity is spending its resources on.
00:43:58.820 | On the other hand, with AGI, we can get to a singularity
00:44:03.500 | before the Mars colony becomes sustaining for sure,
00:44:07.740 | possibly before it's even operational.
00:44:10.060 | - So your intuition is that that's the problem
00:44:12.340 | if we really invest resources and we can get to faster
00:44:14.900 | than a legitimate full self-sustaining colonization of Mars.
00:44:19.660 | - Yeah, and it's very clear that we will to me
00:44:23.140 | because there's so much economic value
00:44:25.980 | in getting from narrow AI toward AGI,
00:44:29.420 | whereas the Mars colony, there's less economic value
00:44:33.340 | until you get quite far out into the future.
00:44:37.340 | So I think that's very interesting.
00:44:40.240 | I just think it's somewhat off to the side.
00:44:44.340 | I mean, just as I think say, art and music
00:44:47.980 | are very, very interesting and I wanna see resources
00:44:51.820 | go into amazing art and music being created
00:44:55.420 | and I'd rather see that than a lot of the garbage
00:44:59.580 | that society spends their money on.
00:45:01.780 | On the other hand, I don't think Mars colonization
00:45:04.620 | or inventing amazing new genres of music
00:45:07.780 | is one of the things that is most likely
00:45:11.000 | to make a critical difference in the evolution
00:45:13.900 | of human or non-human life in this part of the universe
00:45:18.380 | over the next decade.
00:45:19.860 | - Do you think AGI is really--
00:45:21.540 | - AGI is by far the most important thing
00:45:25.820 | that's on the horizon and then technologies
00:45:29.580 | that have direct ability to enable AGI
00:45:33.660 | or to accelerate AGI are also very important.
00:45:37.260 | For example, say quantum computing.
00:45:40.540 | I don't think that's critical to achieve AGI
00:45:42.740 | but certainly you could see how the right quantum
00:45:44.980 | computing architecture could massively accelerate AGI.
00:45:49.300 | Similar other types of nanotechnology.
00:45:52.500 | Now, the quest to cure aging and end disease
00:45:57.500 | while not in the big picture as important as AGI,
00:46:02.140 | of course it's important to all of us as individual humans
00:46:07.140 | and if someone made a super longevity pill
00:46:11.620 | and distributed it tomorrow, I mean, that would be huge
00:46:15.380 | and a much larger impact than a Mars colony
00:46:18.220 | is gonna have for quite some time.
00:46:20.500 | - But perhaps not as much as an AGI system.
00:46:23.340 | - No, because if you can make a benevolent AGI,
00:46:27.100 | then all the other problems are solved.
00:46:28.740 | I mean, then the AGI can be,
00:46:31.980 | once it's as generally intelligent as humans,
00:46:34.300 | it can rapidly become massively more generally intelligent
00:46:37.440 | than humans and then that AGI should be able
00:46:41.740 | to solve science and engineering problems
00:46:43.900 | much better than human beings
00:46:46.860 | as long as it is in fact motivated to do so.
00:46:49.700 | That's why I said a benevolent AGI.
00:46:52.740 | There could be other kinds.
00:46:54.020 | - Maybe it's good to step back a little bit.
00:46:56.000 | I mean, we've been using the term AGI.
00:46:57.960 | People often cite you as the creator
00:47:00.860 | or at least the popularizer of the term AGI,
00:47:03.060 | artificial general intelligence.
00:47:05.700 | Can you tell the origin story of the term?
00:47:08.580 | - Sure, sure.
00:47:09.420 | So yeah, I would say I launched the term AGI
00:47:14.020 | upon the world for what it's worth
00:47:16.620 | without ever fully being in love with the term.
00:47:21.620 | What happened is I was editing a book
00:47:25.380 | and this process started around 2001 or two.
00:47:27.860 | I think the book came out 2005, funny.
00:47:30.500 | I was editing a book which I provisionally
00:47:33.140 | was titling Real AI.
00:47:34.960 | And I mean, the goal was to gather together
00:47:38.800 | fairly serious academicish papers
00:47:41.660 | on the topic of making thinking machines
00:47:43.900 | that could really think in the sense like people can
00:47:46.780 | or even more broadly than people can.
00:47:49.220 | So then I was reaching out to other folks
00:47:52.740 | that I had encountered here or there
00:47:54.060 | who were interested in that,
00:47:57.360 | which included some other folks
00:48:00.300 | who I knew from the transhumanist and singularitarian world
00:48:04.340 | like Peter Voss, who has a company,
00:48:06.420 | AGI Incorporated still in California
00:48:09.780 | and included Shane Legg, who had worked for me
00:48:14.100 | at my company WebMind in New York in the late '90s,
00:48:17.540 | who by now has become rich and famous.
00:48:20.460 | He was one of the co-founders of Google DeepMind.
00:48:22.740 | But at that time, Shane was,
00:48:25.280 | I think he may have just started doing his PhD
00:48:31.740 | with Marcus Hutter, who at that time
00:48:35.860 | hadn't yet published his book, Universal AI,
00:48:38.660 | which sort of gives a mathematical foundation
00:48:41.020 | for artificial general intelligence.
00:48:43.380 | So I reached out to Shane and Marcus and Peter Voss
00:48:46.100 | and Pei Wang, who was another former employee of mine
00:48:49.440 | who had been Douglas Hofstadter's PhD student
00:48:51.860 | who had his own approach to AGI
00:48:53.260 | and a bunch of Russian folks.
00:48:55.700 | I reached out to these guys
00:48:58.000 | and they contributed papers for the book.
00:49:01.340 | But that was my provisional title,
00:49:03.400 | but I never loved it because in the end,
00:49:06.980 | I was doing some, what we would now call narrow AI,
00:49:11.340 | as well like applying machine learning to genomics data
00:49:14.620 | or chat data for sentiment analysis.
00:49:17.860 | I mean, that work is real.
00:49:19.180 | And in a sense, it's really AI.
00:49:22.740 | It's just a different kind of AI.
00:49:25.960 | Ray Kurzweil wrote about narrow AI versus strong AI.
00:49:30.300 | But that seemed weird to me because,
00:49:33.640 | first of all, narrow and strong are not antonyms.
00:49:36.940 | (laughing)
00:49:37.780 | - That's right.
00:49:38.700 | - But secondly, strong AI was used
00:49:41.940 | in the cognitive science literature
00:49:43.340 | to mean the hypothesis that digital computer AIs
00:49:46.620 | could have true consciousness like human beings.
00:49:50.140 | So there was already a meaning to strong AI,
00:49:52.540 | which was complexly different but related, right?
00:49:56.460 | So we were tossing around on an email list
00:50:00.540 | whether what title it should be.
00:50:03.220 | And so we talked about narrow AI, broad AI, wide AI,
00:50:07.540 | narrow AI, general AI.
00:50:09.740 | And I think it was either Shane Legg or Peter Voss
00:50:14.740 | on the private email discussion we had.
00:50:18.100 | He said, "Well, why don't we go with AGI,
00:50:20.140 | "artificial general intelligence?"
00:50:21.780 | And Pei Wang wanted to do GAI,
00:50:24.260 | general artificial intelligence,
00:50:25.740 | 'cause in Chinese, it goes in that order.
00:50:27.820 | But we figured gay wouldn't work
00:50:30.180 | in US culture at that time, right?
00:50:33.200 | So we went with the AGI.
00:50:37.300 | We used it for the title of that book.
00:50:39.460 | And part of Peter and Shane's reasoning
00:50:43.420 | was you have the G factor in psychology,
00:50:45.420 | which is IQ, general intelligence, right?
00:50:47.460 | So you have a meaning of GI,
00:50:49.420 | general intelligence in psychology.
00:50:52.140 | So then you're looking like artificial GI.
00:50:55.380 | So then--
00:50:56.220 | - That makes a lot of sense.
00:50:58.140 | - Yeah, we used that for the title of the book.
00:51:00.440 | And so I think, maybe both Shane and Peter
00:51:04.080 | think they invented the term.
00:51:05.400 | But then later, after the book was published,
00:51:08.360 | this guy Mark Gubrud came up to me,
00:51:11.160 | and he's like, "Well, I published an essay
00:51:13.600 | "with the term AGI in like 1997 or something.
00:51:17.060 | "And so I'm just waiting for some Russian
00:51:19.900 | "to come out and say they published that in 1953."
00:51:22.880 | (laughing)
00:51:23.720 | I mean, that term-- - For sure.
00:51:25.240 | - That term is not dramatically innovative or anything.
00:51:28.360 | It's one of these obvious in hindsight things,
00:51:31.580 | which is also annoying in a way,
00:51:34.860 | because Yosha Bach, who you interviewed,
00:51:39.500 | is a close friend of mine.
00:51:40.400 | He likes the term synthetic intelligence,
00:51:43.220 | which I like much better,
00:51:44.300 | but it hasn't actually caught on, right?
00:51:47.060 | Because artificial is a bit off to me,
00:51:51.780 | 'cause artifice is like a tool or something,
00:51:54.620 | but not all AGIs are gonna be tools.
00:51:57.760 | I mean, they may be now,
00:51:58.700 | but we're aiming toward making them agents
00:52:00.620 | rather than tools.
00:52:02.820 | And in a way, I don't like the distinction
00:52:04.840 | between artificial and natural,
00:52:07.220 | because I mean, we're part of nature also,
00:52:09.380 | and machines are part of nature.
00:52:12.140 | I mean, you can look at evolved versus engineered,
00:52:14.860 | but that's a different distinction.
00:52:17.180 | Then it should be engineered general intelligence, right?
00:52:20.020 | And then general, well, if you look at
00:52:22.500 | Marcus Hutter's book, "Universal AI,"
00:52:25.420 | what he argues there is,
00:52:26.920 | within the domain of computation theory,
00:52:30.520 | which is limited but interesting,
00:52:31.920 | so if you assume computable environments
00:52:33.680 | and computable reward functions,
00:52:35.600 | then he articulates what would be
00:52:37.560 | a truly general intelligence,
00:52:40.040 | a system called AIXI, which is quite beautiful.
00:52:43.160 | - AIXI.
00:52:44.000 | - AIXI, and that's the middle name
00:52:46.040 | of my latest child, actually.
00:52:48.480 | - What's the first name?
00:52:50.200 | - First name is QORXI, Q-O-R-X-I,
00:52:52.400 | which my wife came up with,
00:52:53.760 | but that's an acronym for
00:52:55.440 | quantum organized rational expanding intelligence.
00:52:58.920 | - AIXI's the middle name.
00:53:01.080 | - His middle name is Aixiphanes, actually,
00:53:03.680 | which means the formal principle underlying AIXI.
00:53:08.360 | But in any case--
00:53:09.520 | - You're giving Elon Musk's new child a run for his money.
00:53:12.160 | - Well, I did it first.
00:53:13.760 | He copied me with this new freakish name,
00:53:17.320 | but now if I have another baby,
00:53:18.600 | I'm gonna have to outdo him.
00:53:20.080 | - Outdo him.
00:53:21.440 | - An arms race of weird, geeky baby names.
00:53:24.560 | We'll see what the babies think about it.
00:53:26.800 | But my oldest son, Zarathustra, loves his name,
00:53:30.200 | and my daughter, Scheherazade, loves her name.
00:53:33.240 | So, so far, basically, if you give your kids weird names--
00:53:36.920 | - They live up to it.
00:53:37.800 | - Well, you're obliged to make the kids weird enough
00:53:39.760 | that they like the names.
00:53:40.960 | So it directs their upbringing in a certain way.
00:53:43.880 | But anyway, what Marcus showed in that book
00:53:47.640 | is that a truly general intelligence,
00:53:50.520 | theoretically, is possible,
00:53:51.800 | but would take infinite computing power.
00:53:53.840 | So then the artificial is a little off.
00:53:56.360 | The general is not really achievable within physics,
00:53:59.800 | as we know it.
00:54:01.600 | I mean, physics, as we know it, may be limited,
00:54:03.520 | but that's what we have to work with now.
00:54:05.300 | Intelligence--
00:54:06.140 | - Infinitely general, you mean,
00:54:07.360 | like from an information processing perspective, yeah.
00:54:10.440 | - Yeah, intelligence is not very well-defined either.
00:54:14.760 | I mean, what does it mean?
00:54:16.760 | I mean, in AI now, it's fashionable to look at it
00:54:19.560 | as maximizing an expected reward over the future.
00:54:23.320 | But that sort of definition is pathological in various ways.
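For reference, the reward-maximization framing mentioned here is usually written roughly as below. This is a standard reinforcement-learning style formulation added for clarity, with notation chosen for this sketch rather than quoted from any particular paper:

```latex
% Intelligence as expected-reward maximization (assumed standard notation):
%   h_t  = the agent's interaction history up to time t
%   \pi  = the agent's policy (decision program)
%   r_k  = reward received at step k,   \gamma \in (0,1] = discount factor
V^{\pi}(h_t) \;=\; \mathbb{E}\!\left[ \sum_{k=t}^{T} \gamma^{\,k-t}\, r_k \;\middle|\; h_t, \pi \right]
% In this framing, the "intelligent" agent is the one that chooses \pi to maximize V
% across many environments; the pathologies Ben alludes to are pathologies of treating
% this objective as the whole definition of intelligence.
```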
00:54:27.800 | And my friend David Weinbaum, aka Weaver,
00:54:31.300 | he had a beautiful PhD thesis on open-ended intelligence,
00:54:34.840 | trying to conceive intelligence in a--
00:54:36.880 | - Without a reward.
00:54:38.240 | - Yeah, he's just looking at it differently.
00:54:40.120 | He's looking at complex self-organizing systems
00:54:42.680 | and looking at an intelligence system as being one
00:54:45.360 | that revises and grows and improves itself
00:54:48.880 | in conjunction with its environment
00:54:51.720 | without necessarily there being one objective function
00:54:54.880 | it's trying to maximize.
00:54:56.040 | Although, over certain intervals of time,
00:54:58.520 | it may act as if it's optimizing
00:54:59.960 | a certain objective function.
00:55:01.360 | Very much Solaris from Stanislav Lem's novels, right?
00:55:04.560 | So yeah, the point is artificial general and intelligence--
00:55:07.880 | - Don't work.
00:55:08.720 | - They're all bad.
00:55:09.540 | On the other hand, everyone knows what AI is,
00:55:12.040 | and AGI seems immediately comprehensible
00:55:15.880 | to people with a technical background.
00:55:17.520 | So I think that the term has served
00:55:19.360 | as a sociological function.
00:55:20.760 | Now it's out there everywhere, which baffles me.
00:55:24.760 | - It's like KFC, I mean, that's it.
00:55:27.080 | We're stuck with AGI probably for a very long time
00:55:30.200 | until AGI systems take over and rename themselves.
00:55:33.640 | - Yeah, and then we'll be--
00:55:35.720 | - We're stuck with GPUs, too,
00:55:37.560 | which mostly have nothing to do with graphics anymore.
00:55:40.520 | - I wonder what the AGI system will call us humans.
00:55:43.240 | That was maybe--
00:55:44.260 | - Grandpa.
00:55:45.100 | (laughing)
00:55:46.600 | - GPs.
00:55:47.520 | (laughing)
00:55:48.360 | - Grandpa processing unit, yeah.
00:55:50.320 | - Biological grandpa processing units.
00:55:52.400 | Okay, so maybe also just a comment on AGI representing,
00:55:59.280 | before even the term existed,
00:56:02.160 | representing a kind of community.
00:56:04.640 | You've talked about this in the past,
00:56:06.240 | sort of AI has come in waves,
00:56:08.360 | but there's always been this community of people
00:56:10.440 | who dream about creating general human-level
00:56:15.160 | superintelligent systems.
00:56:16.860 | Can you maybe give your sense of the history
00:56:21.880 | of this community as it exists today,
00:56:24.280 | as it existed before this deep learning revolution,
00:56:27.000 | all throughout the winters and the summers of AI?
00:56:29.520 | - Sure, first I would say as a side point,
00:56:33.520 | the winters and summers of AI
00:56:36.120 | are greatly exaggerated by Americans.
00:56:39.920 | And if you look at the publication record
00:56:43.600 | of the artificial intelligence community
00:56:46.400 | since say the 1950s,
00:56:48.480 | you would find a pretty steady growth
00:56:51.360 | in advance of ideas and papers.
00:56:53.960 | And what's thought of as an AI winter or summer
00:56:57.700 | was sort of how much money is the US military
00:57:00.480 | pumping into AI, which was meaningful.
00:57:04.660 | On the other hand, there was AI going on in Germany, UK,
00:57:07.480 | and in Japan, and in Russia, all over the place,
00:57:10.960 | while US military got more and less enthused about AI.
00:57:15.960 | - That happened to be, just for people who don't know,
00:57:20.200 | the US military happened to be the main source
00:57:22.840 | of funding for AI research.
00:57:24.480 | So another way to phrase that is it's up and down
00:57:27.480 | of funding for artificial intelligence research.
00:57:31.080 | - And I would say the correlation between funding
00:57:34.600 | and intellectual advance was not 100%, right?
00:57:38.080 | Because in Russia, as an example,
00:57:40.780 | or in Germany, there was less dollar funding
00:57:43.580 | than in the US, but many foundational ideas were laid out,
00:57:48.120 | but it was more theory than implementation, right?
00:57:50.860 | And US really excelled at sort of breaking through
00:57:54.580 | from theoretical papers to working implementations,
00:57:59.580 | which did go up and down somewhat
00:58:03.000 | with US military funding.
00:58:04.280 | But still, I mean, you can look in the 1980s,
00:58:07.400 | Dietrich Doerner in Germany had self-driving cars
00:58:10.360 | on the Autobahn, right?
00:58:11.400 | And I mean, this, it was a little early
00:58:15.560 | with regard to the car industry,
00:58:16.880 | so it didn't catch on such as has happened now.
00:58:20.160 | But I mean, that whole advancement
00:58:22.920 | of self-driving car technology in Germany
00:58:25.840 | was pretty much independent of AI military summers
00:58:29.680 | and winters in the US.
00:58:31.000 | So there's been more going on in AI globally
00:58:34.480 | than not only most people on the planet realize,
00:58:37.100 | but also than most new AI PhDs realize,
00:58:40.060 | because they've come up within a certain subfield of AI
00:58:44.600 | and haven't had to look so much beyond that.
00:58:47.660 | But I would say when I got my PhD in 1989 in mathematics,
00:58:52.660 | I was interested in AI already.
00:58:56.000 | - In Philadelphia, by the way.
00:58:56.840 | - Yeah, I started at NYU, then I transferred to Philadelphia,
00:59:00.940 | to Temple University, good old North Philly.
00:59:03.920 | - North Philly, yeah.
00:59:04.760 | - Yeah, yeah, yeah, the pearl of the US.
00:59:07.900 | (laughing)
00:59:09.280 | You never stopped at a red light then,
00:59:10.920 | 'cause you were afraid if you stopped at a red light,
00:59:12.760 | someone will carjack you,
00:59:13.760 | so you just drive through every red light.
00:59:16.000 | - Yeah.
00:59:16.840 | - Yes, and every day driving or bicycling to Temple
00:59:20.240 | from my house was like a new adventure, right?
00:59:24.280 | But yeah, the reason I didn't do a PhD in AI
00:59:27.540 | was what people were doing in the academic AI field then
00:59:30.840 | was just astoundingly boring and seemed wrong-headed to me.
00:59:34.880 | It was really like rule-based expert systems
00:59:38.060 | and production systems.
00:59:39.360 | And actually, I loved mathematical logic.
00:59:42.080 | I had nothing against logic as the cognitive engine
00:59:44.640 | for an AI, but the idea that you could type in the knowledge
00:59:48.920 | that AI would need to think seemed just completely stupid
00:59:52.720 | and wrong-headed to me.
00:59:55.360 | I mean, you can use logic if you want,
00:59:57.400 | but somehow the system has gotta be--
01:00:00.160 | - Automated.
01:00:00.980 | - Learning, right?
01:00:01.820 | It should be learning from experience.
01:00:03.740 | And the AI field then was not interested
01:00:06.080 | in learning from experience.
01:00:08.300 | I mean, some researchers certainly were.
01:00:10.980 | I mean, I remember in mid '80s,
01:00:13.920 | I discovered a book by John Andreae,
01:00:17.120 | which was about a reinforcement learning system
01:00:21.880 | called PURR-PUSS, P-U-R-R-P-U-S-S, which was an acronym
01:00:26.880 | that I can't even remember what it was for,
01:00:28.600 | but PURR-PUSS, anyway.
01:00:30.360 | But he, I mean, that was a system
01:00:31.960 | that was supposed to be an AGI.
01:00:34.320 | And basically, by some sort of fancy
01:00:38.080 | like Markov decision process learning,
01:00:40.960 | it was supposed to learn everything
01:00:43.400 | just from the bits coming into it
01:00:44.800 | and learn to maximize its reward
01:00:46.480 | and become intelligent, right?
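As a concrete anchor for the kind of system being described, here is a minimal sketch of a reward-maximizing Markov decision process learner: plain tabular Q-learning in Python. It is purely illustrative and is not Andreae's PURR-PUSS; the `env` interface (reset, step, actions) is an assumption of the sketch, not a real library API.

```python
# Minimal tabular Q-learning: learn to maximize reward purely from interaction,
# in the spirit (not the letter) of the reward-maximizing learner described above.
import random
from collections import defaultdict

def q_learning(env, episodes=500, alpha=0.1, gamma=0.95, epsilon=0.1):
    """env is assumed to expose reset() -> state, step(action) -> (state, reward, done),
    and a list env.actions; these names are hypothetical."""
    Q = defaultdict(float)  # Q[(state, action)] -> estimated long-run reward
    for _ in range(episodes):
        state, done = env.reset(), False
        while not done:
            # epsilon-greedy: mostly exploit current estimates, sometimes explore
            if random.random() < epsilon:
                action = random.choice(env.actions)
            else:
                action = max(env.actions, key=lambda a: Q[(state, a)])
            next_state, reward, done = env.step(action)
            # temporal-difference update toward reward + discounted best future value
            best_next = 0.0 if done else max(Q[(next_state, a)] for a in env.actions)
            Q[(state, action)] += alpha * (reward + gamma * best_next - Q[(state, action)])
            state = next_state
    return Q
```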
01:00:49.040 | So that was there in academia back then,
01:00:51.760 | but it was like isolated, scattered, weird people.
01:00:55.200 | But all these isolated, scattered, weird people
01:00:57.400 | in that period, I mean, they laid the intellectual grounds
01:01:01.240 | for what happened later.
01:01:02.080 | So you look at John Andreae at University of Canterbury
01:01:05.280 | with his PURR-PUSS reinforcement learning Markov system.
01:01:09.720 | He was the PhD supervisor for John Cleary in New Zealand.
01:01:14.040 | Now, John Cleary worked with me
01:01:17.040 | when I was at Waikato University in 1993 in New Zealand,
01:01:23.880 | and he worked with Ian Witten there,
01:01:23.880 | and they launched Weka,
01:01:25.900 | which was the first open source machine learning toolkit,
01:01:29.800 | which was launched in, I guess, '93 or '94
01:01:33.480 | when I was at Waikato University.
01:01:35.120 | - Written in Java, unfortunately.
01:01:36.400 | - Written in Java,
01:01:37.440 | which was a cool language back then, though, right?
01:01:39.560 | - I guess it's still, well, it's not cool anymore,
01:01:41.680 | but it's powerful.
01:01:43.240 | - I find, like most programmers now,
01:01:45.720 | I find Java unnecessarily bloated,
01:01:48.760 | but back then it was like Java or C++, basically.
01:01:51.960 | - Object-oriented, so it's nice.
01:01:53.360 | - Java was easier for students.
01:01:55.760 | Amusingly, a lot of the work on Weka
01:01:57.720 | when we were in New Zealand
01:01:58.680 | was funded by a US, sorry, a New Zealand government grant
01:02:03.680 | to use machine learning
01:02:05.440 | to predict the menstrual cycles of cows.
01:02:08.240 | So in the US, all the grant funding for AI
01:02:10.440 | was about how to kill people or spy on people.
01:02:13.600 | In New Zealand, it's all about cows or kiwi fruits, right?
01:02:16.400 | - Yeah.
01:02:17.560 | - So yeah, anyway, I mean, John Andreae
01:02:20.560 | had his probability theory-based reinforcement learning,
01:02:24.360 | proto-AGI.
01:02:25.800 | John Cleary was trying to do much more ambitious,
01:02:29.440 | probabilistic AGI systems.
01:02:31.840 | Now, John Cleary helped do Weka,
01:02:36.200 | which was the first open-source machine learning tool.
01:02:39.240 | It's sort of the predecessor for TensorFlow and Torch
01:02:41.520 | and all these things.
01:02:43.080 | Also, Shane Legg was at Waikato
01:02:46.800 | working with John Cleary and Ian Witten
01:02:50.240 | and this whole group,
01:02:52.240 | and then working with my own company's,
01:02:55.800 | my company WebMind, an AI company I had in the late '90s
01:02:59.840 | with a team there at Waikato University,
01:03:02.320 | which is how Shane got his head full of AGI,
01:03:05.400 | which led him to go on,
01:03:06.440 | and with Demis Hassabis found DeepMind.
01:03:08.680 | So what you can see through that lineage is,
01:03:11.080 | you know, in the '80s and '70s,
01:03:12.600 | John Andreas was trying to build
01:03:14.120 | probabilistic reinforcement learning AGI systems.
01:03:17.200 | The technology, the computers, just weren't there to support it.
01:03:19.680 | His ideas were very similar to what people are doing now.
01:03:23.880 | But, you know, although he's long since passed away
01:03:27.680 | and didn't become that famous outside of Canterbury,
01:03:30.920 | I mean, the lineage of ideas passed on
01:03:33.280 | from him to his students to their students,
01:03:35.120 | you can go trace directly from there to me
01:03:37.880 | and to DeepMind, right?
01:03:39.440 | So there was a lot going on in AGI
01:03:42.160 | that did ultimately lay the groundwork for what we have today
01:03:46.960 | but there wasn't a community, right?
01:03:48.520 | And so when I started trying to pull together
01:03:53.480 | an AGI community, it was in the, I guess,
01:03:56.920 | the early aughts when I was living in Washington, DC
01:04:00.360 | and making a living doing AI consulting
01:04:03.400 | for various US government agencies.
01:04:07.080 | And I organized the first AGI workshop in 2006.
01:04:12.080 | And I mean, it wasn't like it was literally
01:04:15.760 | in my basement or something.
01:04:16.960 | I mean, it was in the conference room
01:04:18.480 | at Marriott in Bethesda.
01:04:20.360 | It's not that edgier underground, unfortunately, but still--
01:04:24.920 | - How many people attended?
01:04:25.920 | - About 60 or something.
01:04:27.560 | - That's not bad.
01:04:28.440 | - I mean, DC has a lot of AI going on,
01:04:30.720 | probably until the last five or 10 years,
01:04:34.160 | much more than Silicon Valley,
01:04:35.680 | although it's just quiet because of the nature
01:04:38.720 | of what happens in DC.
01:04:41.240 | Their business isn't driven by PR.
01:04:43.560 | Mostly when something starts to work really well,
01:04:46.120 | it's taken black and becomes even more quiet.
01:04:49.640 | But yeah, the thing is that really had the feeling
01:04:52.880 | of a group of starry-eyed mavericks
01:04:55.880 | huddled in a basement,
01:04:58.960 | plotting how to overthrow the narrow AI establishment.
01:05:02.520 | And for the first time, in some cases,
01:05:05.760 | coming together with others who shared their passion
01:05:08.680 | for AGI and a technical seriousness
01:05:11.160 | about working on it.
01:05:13.800 | I mean, that's very, very different
01:05:17.080 | than what we have today.
01:05:19.120 | I mean, now it's a little bit different.
01:05:22.280 | We have AGI conference every year,
01:05:24.600 | and there's several hundred people rather than 50.
01:05:27.760 | Now it's more like this is the main gathering
01:05:32.720 | of people who want to achieve AGI
01:05:34.960 | and think that large-scale nonlinear regression
01:05:39.200 | is not the golden path to AGI.
01:05:42.440 | - So I mean it's-- - AKA neural network.
01:05:44.040 | - Yeah, yeah, yeah.
01:05:44.920 | Well, certain architectures for learning
01:05:49.920 | using neural networks.
01:05:51.800 | So yeah, the AGI conferences are sort of now
01:05:54.400 | the main concentration of people not obsessed
01:05:57.920 | with deep neural nets-- - The rebels.
01:05:59.200 | - And deep reinforcement learning,
01:06:00.840 | but still interested in AGI.
01:06:04.320 | Not the only ones.
01:06:06.400 | I mean, there's other little conferences and groupings
01:06:10.160 | interested in human-level AI
01:06:13.240 | and cognitive architectures and so forth.
01:06:16.000 | But yeah, it's been a big shift.
01:06:17.840 | Like back then, you couldn't really,
01:06:21.920 | it would be very, very edgy then
01:06:23.480 | to give a university department seminar
01:06:26.200 | that mentioned AGI or human-level AI.
01:06:28.400 | It was more like you had to talk about
01:06:30.600 | something more short-term and immediately practical
01:06:34.320 | than in the bar after the seminar,
01:06:36.560 | you could bullshit about AGI in the same breath
01:06:39.520 | as time travel or the simulation hypothesis or something.
01:06:44.160 | Whereas now, AGI is not only in the academic seminar room.
01:06:48.320 | Like you have Vladimir Putin knows what AGI is.
01:06:51.920 | And he's like, Russia needs to become the leader in AGI.
01:06:55.440 | So national leaders and CEOs of large corporations.
01:07:00.440 | I mean, the CTO of Intel, Justin Rattner,
01:07:04.200 | this was years ago, Singularity Summit Conference,
01:07:06.800 | 2008 or something.
01:07:07.760 | He's like, we believe Ray Kurzweil,
01:07:10.080 | the Singularity will happen in 2045
01:07:11.960 | and it will have Intel inside.
01:07:13.600 | (laughing)
01:07:14.600 | I mean, so it's gone from being something
01:07:18.800 | which is the pursuit of like crazed mavericks,
01:07:21.680 | crackpots and science fiction fanatics
01:07:24.520 | to being a marketing term for large corporations
01:07:29.520 | and the national leaders, which is an astounding transition.
01:07:35.120 | But yeah, in the course of this transition,
01:07:40.120 | I think a bunch of sub-communities have formed.
01:07:42.240 | And the community around the AGI conference series
01:07:44.880 | is certainly one of them.
01:07:47.600 | It hasn't grown as big as I might've liked it to.
01:07:51.920 | On the other hand, sometimes a modest size community
01:07:56.280 | can be better for making intellectual progress also.
01:07:59.460 | You go to a Society for Neuroscience Conference,
01:08:02.120 | you have 35 or 40,000 neuroscientists.
01:08:05.360 | On the one hand, it's amazing.
01:08:07.480 | On the other hand, you're not gonna talk to the leaders
01:08:10.880 | of the field there if you're an outsider.
01:08:14.120 | - Yeah, in the same sense, the AAAI,
01:08:17.880 | the artificial intelligence,
01:08:19.240 | the main kind of generic artificial intelligence conference
01:08:25.160 | is too big, it's too amorphous.
01:08:28.240 | Like it doesn't make sense.
01:08:30.160 | - Well, yeah, but NIPS has become
01:08:33.000 | a company advertising outlet now.
01:08:35.660 | (laughing)
01:08:37.000 | - So, I mean, to comment on the role of AGI
01:08:40.240 | in the research community, I'd still,
01:08:42.680 | if you look at NeurIPS, if you look at CVPR,
01:08:45.160 | if you look at ICLR,
01:08:47.360 | AGI is still seen as the outcast.
01:08:51.840 | I would say in these main machine learning,
01:08:55.000 | in these main artificial intelligence conferences
01:08:59.040 | amongst the researchers,
01:09:00.880 | I don't know if it's an accepted term yet.
01:09:03.880 | What I've seen bravely, you mentioned Shane Legg,
01:09:07.960 | is DeepMind and then OpenAI are the two places
01:09:11.680 | that are, I would say unapologetically so far,
01:09:15.580 | I think it's actually changing, unfortunately,
01:09:17.440 | but so far they've been pushing the idea
01:09:19.640 | that the goal is to create an AGI.
01:09:22.760 | - Well, they have billions of dollars behind them,
01:09:24.360 | so I mean, in the public mind,
01:09:27.240 | that certainly carries some oomph, right?
01:09:30.120 | I mean, Monial is-- - But they also
01:09:31.520 | have really strong researchers, right?
01:09:33.000 | - They do, they're great teams.
01:09:34.260 | I mean, DeepMind in particular, yeah.
01:09:36.640 | - And they have, I mean, DeepMind has Marcus Hutter
01:09:39.280 | walking around, I mean, there's all these folks
01:09:42.120 | who basically their full-time position involves
01:09:45.920 | dreaming about creating AGI.
01:09:47.800 | - I mean, Google Brain has a lot of amazing
01:09:51.320 | AGI-oriented people also.
01:09:53.320 | And I mean, so I'd say from a public marketing view,
01:09:58.320 | DeepMind and OpenAI are the two large,
01:10:03.200 | well-funded organizations that have put
01:10:05.920 | the term and concept AGI out there
01:10:08.800 | sort of as part of their public image.
01:10:12.680 | But I mean, they're certainly not,
01:10:15.160 | there are other groups that are doing research
01:10:17.120 | that seems just as AGI-ish to me,
01:10:20.620 | I mean, including a bunch of groups in Google's
01:10:23.280 | main Mountain View office.
01:10:25.960 | So yeah, it's true, AGI is somewhat
01:10:30.360 | away from the mainstream now,
01:10:33.840 | but if you compare it to where it was 15 years ago,
01:10:38.000 | there's been an amazing mainstreaming.
01:10:41.920 | You could say the same thing about super longevity research,
01:10:45.480 | which is one of my application areas that I'm excited about.
01:10:49.080 | I mean, I've been talking about this since the '90s,
01:10:52.840 | but working on this since 2001.
01:10:54.520 | And back then, really to say you're trying
01:10:57.760 | to create therapies to allow people to live
01:11:00.320 | hundreds or thousands of years,
01:11:02.340 | you were way, way, way, way out of the industry
01:11:05.500 | academic mainstream.
01:11:06.680 | But now, Google had Project Calico,
01:11:11.500 | Craig Venter had Human Longevity Incorporated,
01:11:14.000 | and then once the suits come marching in, right?
01:11:17.120 | I mean, once there's big money in it,
01:11:20.200 | then people are forced to take it seriously,
01:11:22.720 | 'cause that's the way modern society works.
01:11:24.920 | So it's still not as mainstream as cancer research,
01:11:28.400 | just as AGI is not as mainstream
01:11:31.040 | as automated driving or something.
01:11:32.960 | But the degree of mainstreaming that's happened
01:11:36.020 | in the last 10 to 15 years is astounding
01:11:40.120 | to those of us who've been at it for a while.
01:11:42.080 | - Yeah, but there's a marketing aspect to the term,
01:11:45.360 | but in terms of actual full force research
01:11:48.800 | that's going on under the header of AGI,
01:11:51.280 | it's currently, I would say, dominated,
01:11:54.280 | maybe you can disagree,
01:11:55.960 | dominated by neural networks research,
01:11:57.720 | the nonlinear regression, as you mentioned.
01:12:01.160 | Like, what's your sense, with OpenCog, with your work,
01:12:06.520 | but also in general? Logic-based systems
01:12:10.920 | and expert systems,
01:12:12.000 | for me, always seemed to capture a deep element
01:12:17.000 | of intelligence that needs to be there.
01:12:21.560 | Like you said, it needs to learn,
01:12:23.000 | it needs to be automated somehow,
01:12:24.900 | but that seems to be missing from a lot of research currently.
01:12:29.900 | So what's your sense?
01:12:32.760 | I guess one way to ask this question,
01:12:36.280 | what's your sense of what kind of things
01:12:39.180 | will an AGI system need to have?
01:12:42.600 | - Yeah, that's a very interesting topic
01:12:45.960 | that I've thought about for a long time.
01:12:47.900 | And I think there are many, many different approaches
01:12:52.900 | that can work for getting to human level AI.
01:12:56.920 | So I don't think there's like one golden algorithm,
01:13:01.920 | one golden design that can work.
01:13:05.840 | And I mean, flying machines is the much-worn analogy here,
01:13:11.360 | right, like I mean, you have airplanes,
01:13:12.840 | you have helicopters, you have balloons,
01:13:15.440 | you have stealth bombers
01:13:17.160 | that don't look like regular airplanes.
01:13:18.760 | You've got all blimps.
01:13:21.040 | - Birds too.
01:13:21.880 | - Birds, yeah, and bugs, right?
01:13:24.280 | - Yeah.
01:13:25.120 | - And I mean, there are certainly many kinds
01:13:28.760 | of flying machines that--
01:13:29.920 | - And there's a catapult that you can just launch.
01:13:32.320 | (laughing)
01:13:33.160 | - There's bicycle-powered flying machines, right?
01:13:36.160 | - Nice, yeah.
01:13:37.000 | - Yeah, so now these are all analyzable
01:13:40.940 | by a basic theory of aerodynamics, right?
01:13:43.800 | Now, so one issue with AGI is we don't yet have the analog
01:13:48.800 | of the theory of aerodynamics.
01:13:50.800 | And that's what Marcus Hutter was trying to make
01:13:54.640 | with AIXI and his general theory of general intelligence.
01:13:58.820 | But that theory, in its most clearly articulated parts,
01:14:03.360 | really only works for either infinitely powerful machines
01:14:07.120 | or almost, or insanely impractically powerful machines.
01:14:11.860 | So I mean, if you were gonna take a theory-based approach
01:14:14.900 | to AGI, what you would do is say,
01:14:17.340 | well, let's take what's called, say, AIXI-tl,
01:14:22.340 | which is Hutter's AIXI machine that can work
01:14:26.140 | on merely insanely much processing power
01:14:29.000 | rather than infinitely much--
01:14:30.180 | - What does TL stand for?
01:14:32.220 | - Time and length.
01:14:33.560 | - Okay.
01:14:34.400 | - So you're, basically, how--
01:14:35.220 | - Like constraints and all.
01:14:36.500 | - Yeah, yeah, yeah.
01:14:37.340 | So how AIXI works, basically, is each action
01:14:42.340 | that it wants to take, before taking that action,
01:14:45.020 | it looks at all its history, and then it looks
01:14:48.440 | at all possible programs that it could use
01:14:50.400 | to make a decision, and it decides which decision program
01:14:54.320 | would have let it make the best decisions
01:14:56.140 | according to its reward function over its history,
01:14:58.420 | and it uses that decision program
01:15:00.020 | to make the next decision, right?
01:15:02.100 | - It's not afraid of infinite resources.
01:15:04.740 | - It's searching through the space
01:15:06.340 | of all possible computer programs
01:15:08.460 | in between each action and each next action.
01:15:10.700 | Now, AIXI-tl searches through all possible
01:15:14.340 | computer programs that have runtime less than T
01:15:16.780 | and length less than L, which is still
01:15:20.340 | an impracticably humongous space, right?
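To make the loop just described a bit more concrete, here is a very rough Python sketch of that decision step: score each candidate decision program by how well it would have chosen actions over the history, under a time budget, then act on the advice of the best scorer. This tracks the verbal description above rather than Hutter's actual AIXI-tl construction, and the helper arguments (candidate_programs, reward_of) are assumptions of the sketch.

```python
# Rough sketch of an AIXI-tl-style decision step, as described in the conversation.
import time

def aixi_tl_step(history, candidate_programs, actions, reward_of, time_budget):
    """history: list of (observation, action, reward) tuples seen so far.
    candidate_programs: callables mapping a history prefix to a proposed action,
    standing in for "all programs of length <= L".
    reward_of: assumed callable(history_prefix, action) -> reward that action would have earned.
    """
    best_program, best_score = None, float("-inf")
    for program in candidate_programs:
        score, start = 0.0, time.monotonic()
        for t in range(len(history)):
            if time.monotonic() - start > time_budget:  # runtime bound "t" in AIXI-tl
                score = float("-inf")                   # disqualify programs that run too long
                break
            proposed = program(history[:t])
            score += reward_of(history[:t], proposed)   # how well it *would* have decided
        if score > best_score:
            best_program, best_score = program, score
    # use the best retrodictive performer to choose the next real action
    return best_program(history) if best_program else actions[0]
```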
01:15:22.940 | So what you would like to do to make an AGI,
01:15:27.940 | and what will probably be done 50 years from now
01:15:29.900 | to make an AGI, is say, okay, well, we have some constraints.
01:15:34.900 | We have these processing power constraints,
01:15:37.520 | and we have space and time constraints on the program.
01:15:42.520 | We have energy utilization constraints,
01:15:45.420 | and we have this particular class of environments
01:15:49.220 | that we care about, which may be, say,
01:15:52.740 | manipulating physical objects on the surface of the Earth,
01:15:55.460 | communicating in human language, I mean,
01:15:58.160 | whatever our particular, not annihilating humanity,
01:16:02.260 | whatever our particular requirements happen to be,
01:16:05.460 | if you formalize those requirements
01:16:07.300 | in some formal specification language,
01:16:10.340 | you should then be able to run
01:16:12.340 | an automated program specializer on AIXI-tl,
01:16:17.060 | specialize it to the computing resource constraints
01:16:21.440 | and the particular environment and goal,
01:16:23.740 | and then it will spit out the specialized version
01:16:27.640 | of AIXI-tl to your resource restrictions
01:16:30.660 | in your environment, which will be your AGI, right?
01:16:32.740 | And that, I think, is how our super AGI
01:16:36.220 | will create new AGI systems, right?
01:16:38.620 | But that's a very--
01:16:40.460 | - That just seems really inefficient.
01:16:41.300 | - That's a very Russian approach, by the way.
01:16:43.180 | Like, the whole field of program specialization
01:16:45.780 | came out of Russia.
01:16:47.300 | - Can you backtrack?
01:16:48.140 | So what is program specialization?
01:16:49.720 | So it's basically--
01:16:51.180 | - Well, take sorting, for example.
01:16:53.660 | You can have a generic program for sorting lists,
01:16:56.660 | but what if all your lists you care about
01:16:58.300 | are length 10,000 or less?
01:16:59.940 | - Got it.
01:17:00.780 | - You can run an automated program specializer
01:17:02.580 | on your sorting algorithm,
01:17:04.140 | and it will come up with the algorithm
01:17:05.420 | that's optimal for sorting lists of length 10,000 or less.
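A toy Python version of that sorting example, just to pin down what "specializing" means here. Real program specializers (partial evaluators) perform this transformation automatically; this hand-rolled sketch only shows the idea of baking the length constraint into the program.

```python
# Toy program specialization: turn a generic sorter plus a known size bound
# into a version committed to that bound.
def generic_sort(xs):
    return sorted(xs)  # general-purpose comparison sort

def specialize_sort(max_len):
    """Return a sorter specialized to lists of length <= max_len."""
    if max_len <= 32:
        # for tiny bounds, a simple insertion sort has less overhead
        def small_sort(xs):
            out = []
            for x in xs:
                i = len(out)
                while i > 0 and out[i - 1] > x:
                    i -= 1
                out.insert(i, x)
            return out
        return small_sort
    # otherwise, commit to one strategy and make the assumed bound explicit
    def bounded_sort(xs):
        assert len(xs) <= max_len, "specialized version assumes the stated bound"
        return sorted(xs)
    return bounded_sort

sort_10k = specialize_sort(10_000)  # the "lists of length 10,000 or less" sorter
```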
01:17:09.860 | - That's kind of like,
01:17:10.780 | isn't that kind of the process of evolution,
01:17:13.500 | is a program specializer to the environment?
01:17:17.500 | So you're kind of evolving human beings or living creatures.
01:17:20.740 | - Exactly, I mean, your Russian heritage is showing there.
01:17:24.340 | So with Alexander Vityaev and Peter Anokhin and so on,
01:17:28.500 | I mean, there's a long history
01:17:31.820 | of thinking about evolution that way also, right?
01:17:36.820 | Well, my point is that what we're thinking of
01:17:40.140 | as a human-level general intelligence,
01:17:43.980 | you know, if you start from narrow AIs,
01:17:46.660 | like are being used in the commercial AI field now,
01:17:50.340 | then you're thinking,
01:17:51.460 | okay, how do we make it more and more general?
01:17:53.380 | On the other hand, if you start from AIXI
01:17:56.340 | or Schmidhuber's Gödel machine,
01:17:58.060 | or these infinitely powerful,
01:18:01.140 | but practically infeasible AIs,
01:18:04.020 | then getting to a human-level AGI
01:18:06.420 | is a matter of specialization.
01:18:08.220 | It's like, how do you take
01:18:09.620 | these maximally general learning processes,
01:18:12.900 | and how do you specialize them
01:18:15.780 | so that they can operate
01:18:17.580 | within the resource constraints that you have,
01:18:20.540 | but will achieve the particular things that you care about?
01:18:24.340 | 'Cause we humans are not
01:18:26.500 | maximally general intelligences, right?
01:18:28.180 | If I ask you to run a maze in 750 dimensions,
01:18:31.420 | you'll probably be very slow,
01:18:33.060 | whereas at two dimensions,
01:18:34.620 | you're probably way better, right?
01:18:37.100 | So I mean, because our hippocampus
01:18:40.820 | has a two-dimensional map in it, right?
01:18:43.060 | And it does not have a 750-dimensional map in it.
01:18:46.020 | So I mean, we're a peculiar mix
01:18:51.020 | of generality and specialization, right?
01:18:55.980 | - We probably start quite general at birth.
01:18:58.220 | Not obviously still narrow,
01:19:00.740 | but more general than we are at age 20,
01:19:05.740 | and 30, and 40, and 50, and 60.
01:19:07.500 | - I don't think that,
01:19:08.340 | I think it's more complex than that,
01:19:10.220 | because in some sense, a young child
01:19:15.820 | is less biased,
01:19:17.540 | and their brain has yet to sort of crystallize
01:19:20.060 | into appropriate structures for processing aspects
01:19:23.460 | of the physical and social world.
01:19:25.420 | On the other hand,
01:19:26.580 | the young child is very tied to their sensorium,
01:19:30.180 | whereas we can deal with abstract mathematics,
01:19:33.940 | like 750 dimensions,
01:19:35.580 | and the young child cannot,
01:19:37.660 | because they haven't grown
01:19:40.260 | what Piaget called the formal capabilities.
01:19:43.420 | They haven't learned to abstract yet, right?
01:19:46.300 | And the ability to abstract
01:19:48.140 | gives you a different kind of generality
01:19:49.740 | than what a baby has.
01:19:51.700 | So there's both more specialization
01:19:55.460 | and more generalization
01:19:56.820 | that comes with the development process, actually.
01:19:59.780 | - I mean, I guess just the trajectories
01:20:02.300 | of the specialization are most controllable
01:20:06.300 | at the young age, I guess is one way to put it.
01:20:09.700 | - Do you have kids?
01:20:10.700 | - No.
01:20:11.660 | - They're not as controllable as you think.
01:20:13.620 | - So you think it's interesting.
01:20:15.180 | - I think, honestly, I think--
01:20:17.700 | - Interesting.
01:20:18.540 | - A human adult is much more generally intelligent
01:20:21.500 | than a human baby.
01:20:23.220 | Babies are very stupid.
01:20:24.620 | I mean, they're cute,
01:20:28.220 | which is why we put up with their repetitiveness
01:20:31.100 | and stupidity.
01:20:33.060 | And they have what the Zen guys would call
01:20:35.540 | a beginner's mind, which is a beautiful thing,
01:20:38.180 | but that doesn't necessarily correlate
01:20:40.780 | with a high level of intelligence.
01:20:43.340 | - So on the plot of cuteness and stupidity,
01:20:46.140 | there's a process that allows us
01:20:48.380 | to put up with their stupidity
01:20:49.660 | as they become more intelligent.
01:20:50.500 | - So by the time you're an ugly old man like me,
01:20:52.460 | you gotta get really, really smart to compensate.
01:20:54.820 | - To compensate, okay, cool.
01:20:56.220 | - But yeah, going back to your original question,
01:20:59.180 | so the way I look at human-level AGI
01:21:04.180 | is how do you specialize
01:21:08.660 | unrealistically inefficient superhuman
01:21:12.180 | brute force learning processes
01:21:14.620 | to the specific goals that humans need to achieve
01:21:18.380 | and the specific resources that we have?
01:21:21.940 | And both of these, the goals and the resources
01:21:24.620 | and the environments, I mean, all this is important.
01:21:27.180 | And on the resources side,
01:21:30.380 | it's important that the hardware resources
01:21:33.620 | we're bringing to bear are very different
01:21:36.460 | than the human brain.
01:21:38.260 | So the way I would want to implement AGI
01:21:42.740 | on a bunch of neurons in a vat
01:21:46.020 | that I could rewire arbitrarily
01:21:48.100 | is quite different than the way I would want to create AGI
01:21:51.780 | on say a modern server farm of CPUs and GPUs,
01:21:55.820 | which in turn may be quite different
01:21:57.460 | than the way I would wanna implement AGI
01:22:00.260 | on whatever quantum computer we'll have in 10 years,
01:22:03.780 | supposing someone makes a robust quantum Turing machine
01:22:06.740 | or something, right?
01:22:08.220 | So I think there's been co-evolution
01:22:12.660 | of the patterns of organization in the human brain
01:22:16.940 | and the physiological particulars
01:22:19.940 | of the human brain over time.
01:22:23.260 | And when you look at neural networks,
01:22:25.220 | that is one powerful class of learning algorithms,
01:22:28.020 | but it's also a class of learning algorithms
01:22:30.020 | that evolved to exploit the particulars of the human brain
01:22:33.420 | as a computational substrate.
01:22:36.300 | If you're looking at the computational substrate
01:22:38.900 | of a modern server farm,
01:22:41.020 | you won't necessarily want the same algorithms
01:22:43.180 | that you want on the human brain.
01:22:45.740 | Then from the right level of abstraction,
01:22:48.900 | you could look at maybe the best algorithms on the brain
01:22:51.780 | and the best algorithms on a modern computer network
01:22:54.500 | as implementing the same abstract learning
01:22:56.500 | and representation processes,
01:22:59.060 | but finding that level of abstraction
01:23:01.700 | is its own AGI research project then, right?
01:23:04.940 | So that's about the hardware side and the software side,
01:23:09.340 | which follows from that.
01:23:10.860 | Then regarding what are the requirements,
01:23:14.180 | I wrote the paper years ago
01:23:16.420 | on what I called the embodied communication prior,
01:23:20.340 | which was quite similar in intent to Yoshua Bengio's
01:23:24.340 | recent paper on the consciousness prior,
01:23:26.780 | except I didn't wanna wrap up consciousness in it
01:23:30.420 | because to me, the qualia problem and subjective experience
01:23:34.260 | is a very interesting issue also,
01:23:35.900 | which we can chat about,
01:23:37.900 | but I would rather keep that philosophical debate
01:23:42.580 | distinct from the debate of what kind of biases
01:23:45.260 | do you wanna put in a general intelligence
01:23:47.100 | to give it human-like general intelligence.
01:23:49.860 | - And I'm not sure Yoshua Bengio
01:23:51.460 | is really addressing that kind of consciousness.
01:23:55.100 | He's just using the term.
01:23:56.580 | - I love Yoshua to pieces.
01:23:58.820 | He's by far my favorite of the lions of deep learning.
01:24:02.980 | He's such a good-hearted guy.
01:24:05.780 | - He's a good human being, yeah, for sure.
01:24:07.620 | - I am not sure he has plumbed to the depths
01:24:11.180 | of the philosophy of consciousness.
01:24:13.500 | - No, he's using it as a sexy term.
01:24:14.940 | - Yeah, yeah, yeah.
01:24:15.940 | So what I called it was the embodied communication prior.
01:24:20.940 | - Can you maybe explain it a little bit?
01:24:22.500 | - Yeah, yeah, what I meant was,
01:24:24.260 | what are we humans evolved for?
01:24:26.660 | You can say being human, but that's very abstract, right?
01:24:29.700 | I mean, our minds control individual bodies,
01:24:32.980 | which are autonomous agents,
01:24:35.020 | moving around in a world that's composed
01:24:38.340 | largely of solid objects, right?
01:24:41.300 | And we've also evolved to communicate via language
01:24:46.260 | with other solid object agents
01:24:49.260 | that are going around doing things collectively with us
01:24:52.220 | in a world of solid objects.
01:24:54.420 | And these things are very obvious,
01:24:56.900 | but if you compare them to the scope
01:24:58.420 | of all possible intelligences,
01:25:01.420 | or even all possible intelligences
01:25:03.140 | that are physically realizable,
01:25:05.420 | that actually constrains things a lot.
01:25:07.420 | So if you start to look at,
01:25:09.260 | how would you realize some specialized
01:25:14.580 | or constrained version of universal general intelligence
01:25:18.380 | in a system that has limited memory
01:25:21.180 | and limited speed of processing,
01:25:23.180 | but whose general intelligence will be biased
01:25:26.220 | toward controlling a solid object agent,
01:25:28.860 | which is mobile in a solid object world
01:25:31.380 | for manipulating solid objects
01:25:33.500 | and communicating via language
01:25:36.540 | with other similar agents in that same world, right?
01:25:39.940 | Then starting from that,
01:25:41.580 | you're starting to get a requirements analysis
01:25:43.660 | for human level general intelligence.
01:25:48.100 | And then that leads you into cognitive science.
01:25:50.900 | And you can look at, say,
01:25:51.980 | what are the different types of memory
01:25:53.860 | that the human mind and brain has?
01:25:56.980 | And this has matured over the last decades.
01:26:00.860 | And I got into this a lot.
01:26:02.900 | So after getting my PhD in math,
01:26:04.580 | I was an academic for eight years.
01:26:06.060 | I was in departments of mathematics,
01:26:08.740 | computer science, and psychology.
01:26:11.340 | When I was in the psychology department
01:26:12.780 | at University of Western Australia,
01:26:14.220 | I was focused on cognitive science
01:26:16.660 | of memory and perception.
01:26:18.700 | Actually, I was teaching neural nets
01:26:20.460 | and deep neural nets,
01:26:21.300 | and it was multilayer perceptrons, right?
01:26:23.620 | - Psychology?
01:26:24.460 | (laughs)
01:26:25.780 | - Cognitive science, it was cross-disciplinary
01:26:27.860 | among engineering, math, psychology,
01:26:30.020 | philosophy, linguistics, computer science.
01:26:33.300 | But yeah, we were teaching psychology students
01:26:36.020 | to try to model the data
01:26:38.420 | from human cognition experiments
01:26:40.060 | using multilayer perceptrons,
01:26:42.100 | which was the early version of a deep neural network.
01:26:45.060 | Very, very, yeah, recurrent backprop
01:26:47.900 | was very, very slow to train back then, right?
01:26:51.220 | - So this is the study of these constraints.
01:26:53.500 | - Yeah.
01:26:54.340 | - Systems that are supposed to deal with physical objects.
01:26:55.660 | - So if you look at cognitive psychology,
01:27:00.660 | you can see there's multiple types of memory,
01:27:04.460 | which are to some extent represented
01:27:06.540 | by different subsystems in the human brain.
01:27:08.460 | So we have episodic memory,
01:27:10.320 | which takes into account our life history
01:27:12.520 | and everything that's happened to us.
01:27:15.220 | We have declarative or semantic memory,
01:27:17.280 | which is like facts and beliefs
01:27:19.340 | abstracted from the particular situations
01:27:21.660 | that they occurred in.
01:27:22.780 | There's sensory memory,
01:27:25.000 | which to some extent is sense modality specific,
01:27:27.540 | and then to some extent is unified across sense modalities.
01:27:32.540 | There's procedural memory, memory of how to do stuff,
01:27:36.060 | like how to swing the tennis racket, right?
01:27:38.100 | Which is, there's motor memory,
01:27:39.880 | but it's also a little more abstract than motor memory.
01:27:43.620 | It involves cerebellum and cortex working together.
01:27:47.500 | Then there's memory linkage with emotion,
01:27:51.540 | which has to do with linkages of cortex and limbic system.
01:27:55.900 | There's specifics of spatial and temporal modeling
01:27:59.160 | connected with memory,
01:28:00.420 | which has to do with the hippocampus and thalamus
01:28:03.380 | connecting to cortex.
01:28:05.380 | And the basal ganglia, which influences goals.
01:28:08.140 | So we have specific memory of what goals,
01:28:10.980 | sub-goals and sub-sub-goals we wanted to pursue
01:28:13.140 | in which context in the past.
01:28:15.060 | Human brain has substantially different subsystems
01:28:18.220 | for these different types of memory,
01:28:21.020 | and substantially differently tuned learning,
01:28:24.240 | like differently tuned modes of long-term potentiation
01:28:27.280 | to do with the types of neurons and neurotransmitters
01:28:29.700 | in the different parts of the brain
01:28:31.260 | corresponding to these different types of knowledge.
01:28:33.060 | And these different types of memory and learning
01:28:35.860 | in the human brain, I mean, you can back these all
01:28:38.520 | into embodied communication for controlling agents
01:28:41.900 | in worlds of solid objects.
01:28:44.680 | Now, so if you look at building an AGI system,
01:28:47.700 | one way to do it, which starts more from cognitive science
01:28:50.420 | and neuroscience is to say,
01:28:52.660 | okay, what are the types of memory that--
01:28:55.820 | - That are necessary for this kind of world?
01:28:57.340 | - Yeah, yeah, necessary for this sort of intelligence.
01:29:00.700 | What types of learning work well
01:29:02.740 | with these different types of memory?
01:29:04.580 | And then how do you connect all these things together, right?
01:29:07.780 | And of course, the human brain did it incrementally
01:29:10.760 | through evolution because each of the sub-networks
01:29:14.340 | of the brain, I mean, it's not really the lobes
01:29:16.640 | of the brain, it's the sub-networks,
01:29:18.220 | each of which is widely distributed.
01:29:20.780 | Each of the sub-networks of the brain
01:29:23.660 | co-evolved with the other sub-networks of the brain,
01:29:27.140 | both in terms of its patterns of organization
01:29:29.460 | and the particulars of the neurophysiology.
01:29:31.820 | So they all grew up communicating and adapting to each other.
01:29:34.420 | It's not like they were separate black boxes
01:29:36.740 | that were then glommed together, right?
01:29:40.200 | Whereas as engineers, we would tend to say,
01:29:43.340 | let's make the declarative memory box here
01:29:46.660 | and the procedural memory box here
01:29:48.460 | and the perception box here and wire them together.
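As a caricature of that "boxes wired together" engineering style, here is a minimal Python sketch. The class and method names are invented purely for illustration; they are not OpenCog's, SOAR's, or anyone else's actual API.

```python
# Hand-wired memory "boxes" glued together by an engineer, as opposed to
# subsystems that co-evolved with each other.
from dataclasses import dataclass, field

@dataclass
class DeclarativeMemory:
    facts: dict = field(default_factory=dict)      # beliefs abstracted away from episodes

@dataclass
class EpisodicMemory:
    episodes: list = field(default_factory=list)   # time-stamped life history

@dataclass
class ProceduralMemory:
    skills: dict = field(default_factory=dict)     # how-to knowledge, e.g. motor programs

@dataclass
class BoxAndWireAgent:
    declarative: DeclarativeMemory = field(default_factory=DeclarativeMemory)
    episodic: EpisodicMemory = field(default_factory=EpisodicMemory)
    procedural: ProceduralMemory = field(default_factory=ProceduralMemory)

    def perceive(self, observation):
        # observation is assumed to be a dict of feature -> value.
        # Hand-wired pipeline: perception feeds episodic memory, which is then
        # abstracted into declarative facts; each box stays a separate module.
        self.episodic.episodes.append(observation)
        for key, value in observation.items():
            self.declarative.facts[key] = value
```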
01:29:51.380 | And when you can do that, it's interesting.
01:29:54.100 | I mean, that's how a car is built, right?
01:29:55.680 | But on the other hand, that's clearly not
01:29:58.540 | how biological systems are made.
01:30:01.420 | The parts co-evolve so as to adapt and work together.
01:30:04.900 | So this-- - That's by the way
01:30:06.020 | how every human engineered system that flies,
01:30:10.180 | that was what we're using that analogy before,
01:30:11.980 | is built as well. - Yes.
01:30:12.820 | - So do you find this at all appealing?
01:30:14.420 | Like there's been a lot of really exciting,
01:30:16.660 | which I find strange that it's ignored,
01:30:19.800 | work in cognitive architectures, for example,
01:30:21.860 | throughout the last few decades.
01:30:23.300 | Do you find that-- - Yeah, I mean,
01:30:25.020 | I had a lot to do with that community.
01:30:27.940 | And Paul Rosenbloom, who was one of the creators,
01:30:31.020 | and John Laird, who built the SOAR architecture,
01:30:33.460 | are friends of mine.
01:30:34.620 | And I learned SOAR quite well and ACT-R
01:30:37.780 | and these different cognitive architectures.
01:30:40.420 | How I was looking at the AI world about 10 years ago,
01:30:44.500 | before this whole commercial deep learning explosion was,
01:30:47.820 | on the one hand, you had these cognitive architecture guys
01:30:51.540 | who were working closely with psychologists
01:30:53.460 | and cognitive scientists who had thought a lot about
01:30:55.980 | how the different parts of a human-like mind
01:30:58.540 | should work together.
01:31:00.380 | On the other hand, you had these learning theory guys
01:31:03.580 | who didn't care at all about the architecture,
01:31:06.020 | but were just thinking about like,
01:31:07.580 | how do you recognize patterns in large amounts of data?
01:31:10.300 | And in some sense, what you needed to do
01:31:14.220 | was to get the learning that the learning theory guys
01:31:18.460 | were doing and put it together with the architecture
01:31:21.420 | that the cognitive architecture guys were doing.
01:31:24.260 | And then you would have what you needed.
01:31:25.940 | Now, you can't, unfortunately, when you look at the details,
01:31:30.760 | you can't just do that without totally rebuilding
01:31:34.940 | what is happening on both the cognitive architecture
01:31:37.860 | and the learning side.
01:31:38.740 | So, I mean, they tried to do that in SOAR,
01:31:41.780 | but what they ultimately did is like take a deep neural net
01:31:45.340 | or something for perception,
01:31:46.580 | and you include it as one of the black boxes.
01:31:50.100 | - Yeah, it becomes one of the boxes.
01:31:51.940 | The learning mechanism becomes one of the boxes
01:31:53.820 | as opposed to fundamental part of the system.
01:31:55.900 | - That doesn't quite work.
01:31:57.380 | You could look at some of the stuff DeepMind has done,
01:32:00.420 | like the differential neural computer or something.
01:32:03.220 | That sort of has a neural net for deep learning perception.
01:32:07.060 | It has another neural net, which is like a memory matrix
01:32:10.620 | that stores, say, the map of the London subway or something.
01:32:13.060 | So, probably Demis Hassabis was thinking about this
01:32:16.380 | like part of cortex and part of hippocampus,
01:32:18.500 | 'cause hippocampus has a spatial map.
01:32:20.380 | And when he was a neuroscientist,
01:32:21.700 | he was doing a bunch on cortex-hippocampus interconnection.
01:32:24.540 | So there, the DNC would be an example of folks
01:32:27.260 | from the deep neural net world trying to take a step
01:32:30.120 | in the cognitive architecture direction
01:32:32.180 | by having two neural modules that correspond roughly
01:32:34.980 | to two different parts of the human brain
01:32:36.660 | that deal with different kinds of memory and learning.
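For readers who have not seen the DNC, here is a heavily stripped-down NumPy sketch of the "external memory matrix addressed by a controller" idea. It shows only the read/write-by-soft-attention mechanics and is nothing like DeepMind's actual implementation; the class and its methods are invented for this illustration.

```python
# Toy external memory read/written via soft attention, loosely in the DNC spirit.
import numpy as np

class TinyExternalMemory:
    def __init__(self, slots=16, width=8):
        self.M = np.zeros((slots, width))          # the memory matrix

    def _weights(self, key):
        # cosine-similarity attention over slots, softmaxed into addressing weights
        norms = np.linalg.norm(self.M, axis=1) * np.linalg.norm(key) + 1e-8
        scores = (self.M @ key) / norms
        e = np.exp(scores - scores.max())
        return e / e.sum()

    def read(self, key):
        w = self._weights(key)
        return w @ self.M                          # weighted blend of slot contents

    def write(self, key, value):
        w = self._weights(key)
        self.M += np.outer(w, value)               # soft, addressable write

# usage: a separate "perception" controller would produce key/value vectors;
# here we just poke the memory with random vectors to show the mechanics.
mem = TinyExternalMemory()
mem.write(np.random.randn(8), np.random.randn(8))
_ = mem.read(np.random.randn(8))
```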
01:32:38.880 | But on the other hand, it's super, super, super crude
01:32:41.960 | from the cognitive architecture view, right?
01:32:44.060 | Just as what John Laird and SOAR did with neural nets
01:32:48.020 | was super, super crude from a learning point of view,
01:32:51.180 | 'cause the learning was like off to the side,
01:32:53.320 | not affecting the core representations, right?
01:32:55.820 | And when you weren't learning the representation,
01:32:57.860 | you were learning the data that feeds into it;
01:33:00.420 | you were learning abstractions of perceptual data
01:33:02.660 | to feed into a representation that was not learned, right?
01:33:06.620 | So yeah, this was clear to me a while ago.
01:33:11.060 | And one of my hopes with the AGI community
01:33:14.300 | was to sort of bring people
01:33:16.020 | from those two directions together.
01:33:18.520 | That didn't happen much in terms of--
01:33:21.980 | - Not yet.
01:33:22.820 | - And what I was gonna say is it didn't happen
01:33:24.580 | in terms of bringing the lines of cognitive architecture
01:33:27.660 | together with the lines of deep learning.
01:33:30.540 | It did work in the sense that a bunch of younger researchers
01:33:33.820 | have had their heads filled with both of those ideas.
01:33:35.820 | This comes back to a saying my dad,
01:33:38.900 | who was a university professor, often quoted to me,
01:33:41.420 | which was science advances one funeral at a time,
01:33:45.020 | which I'm trying to avoid.
01:33:47.900 | Like I'm 53 years old, and I'm trying to invent
01:33:51.380 | amazing, weird-ass new things
01:33:53.580 | that nobody ever thought about,
01:33:56.260 | which we'll talk about in a few minutes.
01:33:59.300 | But there is that aspect, right?
01:34:02.340 | Like the people who've been at AI a long time
01:34:05.780 | and have made their career at developing one aspect,
01:34:08.820 | like a cognitive architecture or a deep learning approach,
01:34:12.980 | it can be hard once you're old
01:34:14.820 | and have made your career doing one thing.
01:34:17.360 | It can be hard to mentally shift gears.
01:34:19.700 | I mean, I try quite hard to remain flexible-minded.
01:34:23.700 | - Have you been successful somewhat in changing,
01:34:26.500 | maybe, have you changed your mind on some aspects
01:34:29.660 | of what it takes to build an AGI, like technical things?
01:34:32.980 | - The hard part is that the world doesn't want you to.
01:34:36.140 | - The world or your own brain?
01:34:37.380 | - The world, well, one part
01:34:39.580 | is that your brain doesn't want to.
01:34:41.060 | The other part is that the world doesn't want you to.
01:34:43.700 | Like the people who have followed your ideas
01:34:46.560 | get mad at you if you change your mind.
01:34:49.300 | And the media wants to pigeonhole you
01:34:52.780 | as an avatar of a certain idea.
01:34:57.140 | But yeah, I've changed my mind on a bunch of things.
01:35:01.460 | I mean, when I started my career,
01:35:03.780 | I really thought quantum computing
01:35:05.220 | would be necessary for AGI.
01:35:07.900 | And I doubt it's necessary now,
01:35:10.780 | although I think it will be a super major enhancement.
01:35:14.660 | But I mean, I'm also, I'm now in the middle
01:35:18.340 | of embarking on a complete rethink and rewrite
01:35:22.660 | from scratch of our OpenCog AGI system,
01:35:26.180 | together with Alexey Potapov and his team in St. Petersburg,
01:35:29.860 | who are working with me in SingularityNet.
01:35:31.620 | So now we're trying to like go back to basics,
01:35:35.660 | take everything we learned from working
01:35:37.820 | with the current OpenCog system,
01:35:39.620 | take everything everybody else has learned
01:35:41.900 | from working with their proto-AGI systems
01:35:45.700 | and design the best framework for the next stage.
01:35:50.040 | And I do think there's a lot to be learned
01:35:53.380 | from the recent successes with deep neural nets
01:35:56.880 | and deep reinforcement systems.
01:35:59.060 | I mean, people made these essentially trivial systems
01:36:02.740 | work much better than I thought they would.
01:36:04.900 | And there's a lot to be learned from that.
01:36:07.140 | And I want to incorporate that knowledge appropriately
01:36:10.780 | in our OpenCog 2.0 system.
01:36:13.580 | On the other hand, I also think current deep neural net
01:36:18.580 | architectures as such will never get you anywhere near AGI.
01:36:22.260 | So I think you want to avoid the pathology
01:36:25.140 | of throwing the baby out with the bathwater
01:36:28.420 | and like saying, well, these things are garbage
01:36:30.940 | because foolish journalists overblow them
01:36:33.860 | as being the path to AGI,
01:36:37.060 | and a few researchers overblow them as well.
01:36:40.800 | There's a lot of interesting stuff to be learned there,
01:36:45.460 | even though those are not the golden path.
01:36:48.020 | - So maybe this is a good chance to step back.
01:36:50.140 | You mentioned OpenCog 2.0, but--
01:36:52.900 | - Go back to OpenCog 0.0, which exists now.
01:36:56.020 | - Alpha, yeah.
01:36:57.540 | Yeah, maybe talk through the history of OpenCog
01:37:01.900 | and your thinking about these ideas.
01:37:03.900 | - I would say OpenCog 2.0 is a term
01:37:08.740 | worth throwing around sort of tongue in cheek
01:37:11.420 | because the existing OpenCog system
01:37:14.540 | that we're working on now is not remotely close
01:37:17.220 | to what we'd consider a 1.0, right?
01:37:20.020 | I mean, it's an early, it's been around,
01:37:24.620 | what, 13 years or something,
01:37:27.420 | but it's still an early stage research system, right?
01:37:29.820 | And actually, we are going back to the beginning
01:37:34.820 | in terms of theory and implementation
01:37:40.700 | 'cause we feel like that's the right thing to do,
01:37:42.860 | but I'm sure what we end up with is gonna have
01:37:45.580 | a huge amount in common with the current system.
01:37:48.540 | I mean, we all still like the general approach.
01:37:51.260 | So there's-- - So first of all,
01:37:52.380 | what is OpenCog?
01:37:54.380 | - Sure, OpenCog is an open-source software project
01:37:59.380 | that I launched together with several others in 2008,
01:38:04.380 | and probably the first code written toward that
01:38:08.260 | was written in 2001 or two or something
01:38:11.140 | that was developed as a proprietary code base
01:38:15.300 | within my AI company, Novamente LLC,
01:38:18.300 | and we decided to open-source it in 2008,
01:38:21.980 | cleaned up the code, threw out some things,
01:38:23.860 | added some new things, and--
01:38:26.940 | - What language is it written in?
01:38:28.140 | - It's C++.
01:38:29.460 | Primarily, there's a bunch of scheme as well,
01:38:31.420 | but most of it's C++.
01:38:33.060 | - And it's separate from,
01:38:35.300 | something we'll also talk about is SingularityNet.
01:38:37.500 | So it was born as a non-networked thing.
01:38:41.340 | - Correct, correct.
01:38:42.420 | Well, there are many levels of networks
01:38:45.060 | involved here, right?
01:38:47.100 | - No connectivity to the internet.
01:38:49.460 | Or no, at birth.
01:38:52.460 | - Yeah, I mean, SingularityNet is a separate project
01:38:57.260 | and a separate body of code,
01:38:59.460 | and you can use SingularityNet as part of the infrastructure
01:39:02.620 | for a distributed OpenCog system,
01:39:04.500 | but there are different layers.
01:39:07.540 | - Yeah, got it.
01:39:08.380 | - So OpenCog, on the one hand, as a software framework,
01:39:14.860 | could be used to implement a variety
01:39:17.020 | of different AI architectures and algorithms.
01:39:21.900 | But in practice, there's been a group of developers,
01:39:26.460 | which I've been leading together with Linas Vepstas,
01:39:29.460 | Nil Geisweiler, and a few others,
01:39:31.700 | which have been using the OpenCog platform
01:39:35.100 | and infrastructure to implement certain ideas
01:39:39.460 | about how to make an AGI.
01:39:41.300 | So there's been a little bit of ambiguity
01:39:43.500 | about OpenCog, the software platform,
01:39:46.140 | versus OpenCog, the AGI design,
01:39:49.380 | 'cause in theory, you could use that software,
01:39:52.180 | you could use it to make a neural net,
01:39:53.460 | you could use it to make a lot of different AGI.
01:39:55.900 | - What kind of stuff does the software platform provide?
01:39:58.660 | Like in terms of utilities, tools, like what?
01:40:00.780 | - Yeah, let me first tell about OpenCog
01:40:03.900 | as a software platform,
01:40:05.540 | and then I'll tell you the specific AGI R&D
01:40:08.700 | we've been building on top of it.
01:40:12.260 | So the core component of OpenCog as a software platform
01:40:16.180 | is what we call the atom space,
01:40:17.940 | which is a weighted labeled hypergraph.
01:40:21.260 | - A-T-O-M, atom space.
01:40:22.860 | - Atom space, yeah, yeah, not Adam, like Adam and Eve,
01:40:25.900 | although that would be cool too.
01:40:28.100 | Yeah, so you have a hypergraph, which is like a,
01:40:32.140 | so a graph in this sense is a bunch of nodes
01:40:35.340 | with links between them.
01:40:37.100 | A hypergraph is like a graph,
01:40:40.940 | but links can go between more than two nodes.
01:40:43.980 | You have a link between three nodes.
01:40:45.500 | And in fact, OpenCog's atom space
01:40:49.540 | would properly be called a metagraph
01:40:51.740 | because you can have links pointing to links,
01:40:54.060 | or you could have links pointing to whole subgraphs, right?
01:40:56.820 | So it's an extended hypergraph or a metagraph.
01:41:00.940 | - Is metagraph a technical term?
01:41:02.260 | - It is now a technical term.
01:41:03.660 | - Interesting.
01:41:04.500 | - But I don't think it was yet a technical term
01:41:06.380 | when we started calling this a generalized hypergraph.
01:41:10.100 | But in any case, it's a weighted-labeled
01:41:13.380 | generalized hypergraph or weighted-labeled metagraph.
01:41:16.940 | The weights and labels mean that the nodes and links
01:41:19.180 | can have numbers and symbols attached to them.
01:41:22.360 | So they can have types on them.
01:41:24.940 | They can have numbers on them that represent,
01:41:27.460 | say, a truth value or an importance value
01:41:30.100 | for a certain purpose.
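To make the data model above concrete, here is a minimal Python sketch of a weighted, labeled generalized hypergraph in the spirit of what is being described. The real atom space is C++ (with Scheme bindings); the names below (ConceptNode, InheritanceLink, truth and attention values) echo OpenCog's vocabulary, but the classes are illustrative toys, not the actual API.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Atom:
    atom_type: str                     # e.g. "ConceptNode", "InheritanceLink"
    name: str = ""                     # used by nodes
    outgoing: Tuple["Atom", ...] = ()  # used by links; may contain other links

class AtomSpace:
    """Toy atom space: a set of atoms plus per-atom numeric values."""
    def __init__(self):
        self.atoms = set()
        self.values = {}  # atom -> {"tv": (strength, confidence), "sti": importance}

    def add(self, atom, tv=(1.0, 0.0), sti=0.0):
        self.atoms.add(atom)
        self.values[atom] = {"tv": tv, "sti": sti}
        return atom

space = AtomSpace()
cat = space.add(Atom("ConceptNode", "cat"))
animal = space.add(Atom("ConceptNode", "animal"))
# a weighted, labeled link between two nodes
inh = space.add(Atom("InheritanceLink", outgoing=(cat, animal)), tv=(0.9, 0.8))
# a link that points at another link -- this is what makes it a metagraph
rejected = space.add(Atom("PredicateNode", "rejected"))
space.add(Atom("EvaluationLink", outgoing=(rejected, inh)))
```

The point is only that nodes and links are uniform objects carrying types and numeric values, so many different processes can read and write the same store.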
01:41:32.020 | - And of course, like with all things,
01:41:33.260 | you can reduce that to a hypergraph,
01:41:35.060 | and then the hypergraph can be reduced to a graph.
01:41:35.900 | - You can reduce a hypergraph to a graph,
01:41:37.660 | and you can reduce a graph to an adjacency matrix.
01:41:39.900 | So, I mean, there's always multiple representations.
01:41:42.740 | - But there's a layer of representation
01:41:44.020 | that seems to work well here.
01:41:45.100 | Got it.
01:41:45.940 | - Right, right, right.
01:41:46.780 | And so, similarly, you could have a link to a whole graph
01:41:51.780 | because a whole graph could represent,
01:41:53.460 | say, a body of information.
01:41:54.900 | And I could say, I reject this body of information.
01:41:58.620 | Then one way to do that is make that link
01:42:00.340 | go to that whole subgraph
01:42:01.580 | representing the body of information, right?
01:42:04.020 | I mean, there are many alternate representations,
01:42:07.200 | but that's, anyway, what we have in OpenCog,
01:42:10.720 | we have an atom space,
01:42:11.800 | which is this weighted-labeled generalized hypergraph.
01:42:15.100 | Knowledge Store, it lives in RAM.
01:42:17.820 | There's also a way to back it up to disk.
01:42:20.120 | There are ways to spread it
01:42:21.120 | among multiple different machines.
01:42:24.140 | Then there are various utilities for dealing with that.
01:42:27.960 | So there's a pattern matcher,
01:42:29.800 | which lets you specify a sort of abstract pattern
01:42:33.880 | and then search through a whole atom space,
01:42:36.200 | weighted-labeled hypergraph,
01:42:37.920 | to see what sub-hypergraphs may match that pattern,
01:42:41.920 | for an example.
01:42:42.880 | So that's, then there's something called
01:42:45.920 | the CogServer in OpenCog,
01:42:48.780 | which lets you run a bunch of different agents or processes
01:42:53.680 | in a scheduler, and each of these agents,
01:42:57.640 | basically, it reads stuff from the atom space
01:43:00.000 | and it writes stuff to the atom space.
01:43:01.880 | So this is sort of the basic operational model.
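Here is a toy sketch of that operational model, reusing the AtomSpace, Atom, and space objects from the sketch above: a scheduler cycles through agents, and each agent's work is mostly reads and writes against the shared atom space. The agent names and the Hebbian update rule are invented for illustration; they are not the actual CogServer or pattern matcher.

```python
# Assumes the toy AtomSpace / Atom classes and the `space` object sketched earlier.
class Agent:
    def step(self, space):
        raise NotImplementedError

class HebbianAgent(Agent):
    """Toy rule: bump the importance of anything that appears inside a link."""
    def step(self, space):
        for atom in list(space.atoms):
            for target in atom.outgoing:
                if target in space.values:
                    space.values[target]["sti"] += 0.01

class PatternMatchAgent(Agent):
    """Crude stand-in for a pattern matcher: find all links of a given type."""
    def __init__(self, link_type):
        self.link_type = link_type

    def step(self, space):
        matches = [a for a in space.atoms if a.atom_type == self.link_type]
        print(f"{self.link_type}: {len(matches)} match(es)")

def run_scheduler(space, agents, cycles=3):
    # every cycle, each agent gets a turn to read from and write to the atom space
    for _ in range(cycles):
        for agent in agents:
            agent.step(space)

run_scheduler(space, [HebbianAgent(), PatternMatchAgent("InheritanceLink")])
```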
01:43:05.640 | - That's the software framework.
01:43:07.760 | - Right, and of course, there's a lot there,
01:43:10.360 | just from a scalable software engineering standpoint.
01:43:13.200 | - So you could use this, I don't know if you've,
01:43:15.080 | have you looked into Stephen Wolfram's
01:43:17.120 | physics project recently with the hypergraphs and stuff?
01:43:20.160 | Could you theoretically use the software framework to--
01:43:22.960 | - You certainly could, although Wolfram would rather die
01:43:26.160 | than use anything but Mathematica for his work.
01:43:29.060 | - Well, that's, yeah, but there's a big community of people
01:43:32.140 | who are, you know, would love integration.
01:43:36.120 | And like you said, the young minds love the idea
01:43:38.400 | of integrating, of connecting things.
01:43:40.160 | - Yeah, yeah, that's right, and I would add on that note,
01:43:42.840 | the idea of using hypergraph-type models in physics
01:43:46.600 | is not very new.
01:43:47.680 | Like if you look at--
01:43:49.120 | - The Russians did it first.
01:43:50.360 | - Well, I'm sure they did, and a guy named Ben Dribus,
01:43:54.920 | who's a mathematician, a professor in Louisiana or somewhere,
01:43:58.200 | had a beautiful book on quantum sets and hypergraphs
01:44:01.960 | and algebraic topology for discrete models of physics
01:44:05.540 | and carried it much farther than Wolfram has,
01:44:09.100 | but he's not rich and famous,
01:44:10.940 | so it didn't get in the headlines.
01:44:13.320 | But yeah, Wolfram aside, yes, certainly,
01:44:15.960 | that's a good way to put it.
01:44:17.160 | The whole OpenCog framework,
01:44:19.300 | you could use it to model biological networks
01:44:22.260 | and simulate biology processes.
01:44:24.260 | You could use it to model physics
01:44:26.500 | on discrete graph models of physics.
01:44:30.220 | So you could use it to do, say,
01:44:34.100 | biologically realistic neural networks, for example.
01:44:39.100 | So that's a framework.
01:44:42.380 | - What do agents and processes do?
01:44:44.260 | Do they grow the graph?
01:44:45.900 | What kind of computations, just to get a sense,
01:44:48.260 | are they supposed to do?
01:44:49.100 | - So in theory, they could do anything they want to do.
01:44:51.240 | They're just C++ processes.
01:44:53.140 | On the other hand, the computation framework
01:44:56.920 | is sort of designed for agents
01:44:59.180 | where most of their processing time
01:45:02.040 | is taken up with reads and writes to the atom space.
01:45:05.460 | And so that's a very different processing model
01:45:09.060 | than, say, the matrix multiplication-based model
01:45:12.500 | that underlies most deep learning systems.
01:45:15.140 | So you could create an agent
01:45:19.620 | that just factored numbers for a billion years.
01:45:22.760 | It would run within the OpenCog platform,
01:45:25.040 | but it would be pointless.
01:45:26.660 | I mean, the point of doing OpenCog
01:45:28.940 | is because you want to make agents
01:45:30.580 | that are cooperating via reading and writing
01:45:33.380 | into this weighted labeled hypergraph, right?
01:45:36.420 | And that has both cognitive architecture importance,
01:45:41.420 | because then this hypergraph is being used
01:45:43.440 | as a sort of shared memory
01:45:46.100 | among different cognitive processes,
01:45:48.300 | but it also has software
01:45:50.560 | and hardware implementation implications,
01:45:52.900 | 'cause current GPU architectures
01:45:54.900 | are not so useful for OpenCog,
01:45:57.180 | whereas a graph chip would be incredibly useful, right?
01:46:01.260 | And I think Graphcore has those now,
01:46:03.660 | but they're not ideally suited for this.
01:46:05.260 | But I think in the next, let's say, three to five years,
01:46:10.260 | we're gonna see new chips
01:46:12.060 | where a graph is put on the chip,
01:46:14.740 | and the back and forth between multiple processes
01:46:19.380 | acting SIMD and MIMD on that graph is gonna be fast.
01:46:23.620 | And then that may do for OpenCog-type architectures
01:46:26.500 | what GPUs did for deep neural architectures.
01:46:29.860 | - As a small tangent,
01:46:31.340 | can you comment on thoughts about neuromorphic computing?
01:46:34.620 | So like hardware implementations
01:46:36.380 | of all these different kind of,
01:46:38.180 | are you excited by that possibility?
01:46:40.980 | - I'm excited by graph processors,
01:46:42.700 | because I think they can massively speed up--
01:46:45.380 | - OpenCog. - OpenCog,
01:46:46.460 | which is a class of architectures that I'm working on.
01:46:50.620 | I think if, in principle,
01:46:54.820 | neuromorphic computing should be amazing.
01:46:58.740 | I haven't yet been fully sold
01:47:00.460 | on any of the systems that are out there.
01:47:03.420 | Like memristors should be amazing too, right?
01:47:06.380 | So a lot of these things have obvious potential,
01:47:09.360 | but I haven't yet put my hands on a system
01:47:11.340 | that seemed to manifest that potential.
01:47:13.260 | - Memristors should be amazing,
01:47:14.860 | but the current systems have not been great.
01:47:17.980 | (laughs)
01:47:18.980 | For example, if you wanted to make
01:47:21.700 | a biologically realistic hardware neural network,
01:47:25.680 | like making a circuit in hardware
01:47:30.680 | that emulated like the Hodgkin-Huxley equation
01:47:34.340 | or the Izhikevich equation,
01:47:35.640 | like differential equations
01:47:38.220 | for a biologically realistic neuron,
01:47:40.660 | and putting that in hardware on the chip,
01:47:43.800 | that would seem that it would make more feasible
01:47:46.360 | to make a large scale, truly,
01:47:48.600 | biologically realistic neural network.
01:47:51.040 | No, what's been done so far is not like that.
01:47:54.320 | So I guess, personally, as a researcher,
01:47:57.120 | I mean, I've done a bunch of work in cognitive,
01:47:59.880 | sorry, in computational neuroscience,
01:48:02.480 | where I did some work with IARPA
01:48:04.640 | in DC, the Intelligence Advanced Research Projects Activity.
01:48:08.220 | We were looking at how do you make
01:48:10.880 | a biologically realistic simulation
01:48:12.980 | of seven different parts of the brain
01:48:15.720 | cooperating with each other,
01:48:17.040 | using realistic non-linear dynamical models of neurons,
01:48:20.400 | and how do you get that to simulate
01:48:21.880 | what's going on in the mind of a GEOINT intelligence analyst
01:48:24.760 | while they're trying to find terrorists on a map, right?
01:48:27.120 | So if you wanna do something like that,
01:48:29.840 | having neuromorphic hardware that really let you simulate
01:48:34.040 | like a realistic model of the neuron would be amazing,
01:48:38.720 | but that's sort of with my
01:48:40.940 | computational neuroscience hat on, right?
01:48:43.080 | With an AGI hat on,
01:48:45.040 | I'm just more interested in these
01:48:48.400 | hypergraph knowledge representation-based architectures,
01:48:51.560 | which would benefit more
01:48:54.440 | from various types of graph processors,
01:48:57.680 | 'cause the main processing bottleneck
01:49:00.440 | is reading and writing to RAM.
01:49:01.980 | It's reading and writing to the graph in RAM.
01:49:03.920 | The main processing bottleneck
01:49:05.240 | for this kind of proto-AGI architecture
01:49:08.140 | is not multiplying matrices,
01:49:09.800 | and for that reason,
01:49:11.880 | GPUs, which are really good at multiplying matrices,
01:49:14.560 | don't apply as well.
01:49:17.480 | There are frameworks like Gunrock and others
01:49:20.200 | that try to boil down graph processing to matrix operations,
01:49:23.240 | and they're cool,
01:49:24.600 | but you're still putting a square peg
01:49:26.120 | into a round hole in a certain way.
01:49:28.760 | The same is true,
01:49:29.940 | I mean, current quantum machine learning,
01:49:32.720 | which is very cool,
01:49:34.180 | it's also all about how to get matrix and vector operations
01:49:37.280 | in quantum mechanics,
01:49:38.800 | and I see why that's natural to do.
01:49:41.280 | I mean, quantum mechanics is all unitary matrices,
01:49:44.240 | and vectors, right?
01:49:45.840 | On the other hand,
01:49:47.320 | you could also try to make graph-centric quantum computers,
01:49:50.820 | which I think is where things will go,
01:49:54.440 | and then we can take the OpenCog implementation layer,
01:49:59.440 | implement it in an uncollapsed state
01:50:02.720 | inside a quantum computer,
01:50:04.040 | but that may be the singularity squared, right?
01:50:06.640 | (Lex laughs)
01:50:08.240 | I'm not sure we need that to get to human level.
01:50:11.120 | - Singularity squared. - Human level.
01:50:12.400 | - That's already beyond the first singularity,
01:50:14.680 | but can we just--
01:50:15.520 | - Yeah, let's go back to OpenCog.
01:50:17.160 | - No, no, yeah, and the hypergraph and OpenCog.
01:50:19.560 | - Yeah, yeah, that's the software framework, right?
01:50:21.600 | So then the next thing is,
01:50:23.600 | our cognitive architecture
01:50:25.440 | tells us particular algorithms to put there.
01:50:27.980 | - Got it.
01:50:28.820 | Can we backtrack on the kind of,
01:50:30.680 | is this graph designed,
01:50:33.720 | is it, in general, supposed to be sparse,
01:50:37.700 | and the operations constantly grow and change the graph?
01:50:40.480 | - Yeah, the graph is sparse.
01:50:41.800 | - And, but is it constantly adding links and so on?
01:50:44.840 | - Yeah, it is a self-modifying hypergraph.
01:50:47.200 | - So it's not, so the write and read operations
01:50:49.800 | you're referring to,
01:50:51.160 | this isn't just a fixed graph to which you change the way,
01:50:54.320 | it's just a constantly growing graph.
01:50:55.840 | - Yeah, that's true.
01:50:58.000 | So it is a different model than, say,
01:51:03.000 | current deep neural nets
01:51:04.680 | that have a fixed neural architecture,
01:51:06.820 | and you're updating the weights,
01:51:08.600 | although there have been, like,
01:51:09.680 | cascade correlational neural net architectures
01:51:11.920 | that grow new nodes and links,
01:51:13.920 | but the most common neural architectures now
01:51:16.620 | have a fixed neural architecture,
01:51:17.960 | you're updating the weights,
01:51:19.080 | and in OpenCog, you can update the weights,
01:51:22.520 | and that certainly happens a lot,
01:51:24.740 | but adding new nodes,
01:51:27.040 | adding new links, removing nodes and links
01:51:29.440 | is an equally critical part of the system's operations.
01:51:32.160 | - Got it.
01:51:33.000 | So now, when you start to add these cognitive algorithms
01:51:37.040 | on top of this OpenCog architecture,
01:51:39.840 | what does that look like?
01:51:41.280 | - Yeah, so within this framework, then,
01:51:44.800 | creating a cognitive architecture is basically two things.
01:51:48.520 | It's choosing what type system
01:51:51.520 | you wanna put on the nodes and links in the hypergraph,
01:51:53.800 | what types of nodes and links you want,
01:51:56.120 | and then it's choosing what collection of agents,
01:52:01.000 | what collection of AI algorithms or processes
01:52:04.620 | are gonna run to operate on this hypergraph,
01:52:08.040 | and of course, those two decisions
01:52:10.520 | are closely connected to each other.
01:52:13.920 | So in terms of the type system,
01:52:17.480 | there are some links that are more neural net-like,
01:52:19.900 | they just have weights to get updated by Hebbian learning,
01:52:23.800 | and activation spreads along them.
01:52:26.000 | There are other links that are more logic-like,
01:52:29.080 | and nodes that are more logic-like,
01:52:30.520 | so you could have a variable node,
01:52:32.200 | and you can have a node representing
01:52:33.640 | a universal or existential quantifier,
01:52:36.160 | as in predicate logic or term logic.
01:52:39.160 | So you can have logic-like nodes and links,
01:52:42.080 | or you can have neural-like nodes and links.
01:52:44.420 | You can also have procedure-like nodes and links,
01:52:47.400 | as in, say, a combinatory logic or lambda calculus
01:52:51.680 | representing programs.
01:52:53.660 | So you can have nodes and links
01:52:54.960 | representing many different types of semantics,
01:52:58.660 | which means you could make a horrible, ugly mess,
01:53:00.880 | or you could make a system
01:53:02.820 | where these different types of knowledge
01:53:04.320 | all interpenetrate and synergize
01:53:06.880 | with each other beautifully, right?
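Continuing the toy atom-space sketch from earlier, this is roughly what it means for one pair of concepts to carry logic-like, neural-like, and procedure-like atoms side by side. The type names echo OpenCog's vocabulary, but the code is purely illustrative.

```python
# Reuses the toy Atom / AtomSpace classes and the `space` object from the earlier sketch.
A = space.add(Atom("ConceptNode", "A"))
B = space.add(Atom("ConceptNode", "B"))

# logic-like: an inheritance link carrying a probabilistic truth value
space.add(Atom("InheritanceLink", outgoing=(A, B)), tv=(0.8, 0.6))

# neural-like: a Hebbian link whose weight gets updated by co-activation
space.add(Atom("HebbianLink", outgoing=(A, B)), tv=(0.3, 0.9))

def greet(name):
    return f"hello, {name}"

# procedure-like: a node naming an executable program the system can run
space.add(Atom("GroundedSchemaNode", "py:greet"))
```

The same two concepts are then related both logically and attentionally at once, which is what lets the different kinds of links guide each other, as discussed below.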
01:53:09.000 | - So the hypergraph can contain programs.
01:53:12.840 | - Yeah, it can contain programs,
01:53:14.480 | although in the current version,
01:53:18.000 | it is a very inefficient way
01:53:19.800 | to guide the execution of programs,
01:53:22.080 | which is one thing that we are aiming to resolve
01:53:25.040 | with our rewrite of the system now.
01:53:27.540 | - So what to you is the most beautiful aspect
01:53:31.860 | of OpenCog?
01:53:33.580 | Just to you personally,
01:53:35.540 | some aspect that captivates your imagination
01:53:38.960 | from beauty or power?
01:53:41.760 | - What fascinates me is finding a common representation
01:53:47.880 | that underlies abstract declarative knowledge
01:53:54.280 | and sensory knowledge and movement knowledge
01:53:59.560 | and procedural knowledge and episodic knowledge,
01:54:03.040 | finding the right level of representation
01:54:06.200 | where all these types of knowledge are stored
01:54:08.960 | in a sort of universal and interconvertible
01:54:12.880 | yet practically manipulable way, right?
01:54:15.800 | So to me, that's the core,
01:54:19.140 | 'cause once you've done that,
01:54:21.000 | then the different learning algorithms
01:54:23.120 | can help each other out.
01:54:24.480 | Like what you want is, if you have a logic engine
01:54:27.340 | that helps with declarative knowledge,
01:54:29.100 | and you have a deep neural net
01:54:30.360 | that gathers perceptual knowledge,
01:54:32.280 | and you have say an evolutionary learning system
01:54:34.680 | that learns procedures,
01:54:36.400 | you want these to not only interact
01:54:38.920 | on the level of sharing results
01:54:41.160 | and passing inputs and outputs to each other,
01:54:43.400 | you want the logic engine when it gets stuck
01:54:45.960 | to be able to share its intermediate state
01:54:48.560 | with the neural net
01:54:49.840 | and with the evolutionary learning algorithm
01:54:52.240 | so that they can help each other out of bottlenecks
01:54:55.460 | and help each other solve combinatorial explosions
01:54:58.320 | by intervening inside each other's cognitive processes.
01:55:02.040 | But that can only be done
01:55:03.520 | if the intermediate state of a logic engine,
01:55:06.000 | evolutionary learning engine,
01:55:07.400 | and a deep neural net are represented in the same form.
01:55:11.120 | And that's what we figured out how to do
01:55:13.140 | by putting the right type system
01:55:14.780 | on top of this weighted labeled hypergraph.
01:55:17.040 | - So is there, can you maybe elaborate
01:55:19.660 | on what are the different characteristics
01:55:21.880 | of a type system that can coexist
01:55:26.520 | amongst all these different kinds of knowledge
01:55:28.760 | that needs to be represented?
01:55:30.080 | And is, I mean, like is it hierarchical?
01:55:33.340 | Just any kind of insights you can give
01:55:36.720 | on that kind of type system?
01:55:37.800 | - Yeah, yeah, so this gets very nitty gritty
01:55:41.640 | and mathematical, of course.
01:55:43.980 | But one key part is switching
01:55:47.220 | from predicate logic to term logic.
01:55:50.480 | - What is predicate logic?
01:55:51.660 | What is term logic?
01:55:53.240 | - So term logic was invented by Aristotle,
01:55:56.120 | or at least that's the oldest recollection we have of it.
01:56:01.120 | But term logic breaks down basic logic
01:56:05.320 | into basically simple links between nodes,
01:56:07.520 | like an inheritance link between node A and node B.
01:56:12.520 | So in term logic, the basic deduction operation
01:56:16.840 | is A implies B, B implies C, therefore A implies C.
01:56:21.600 | Whereas in predicate logic,
01:56:23.120 | the basic operation is modus ponens,
01:56:25.040 | like A, A implies B, therefore B.
01:56:28.200 | So it's a slightly different way of breaking down logic.
01:56:31.960 | But by breaking down logic into term logic,
01:56:35.840 | you get a nice way of breaking logic
01:56:37.640 | down into nodes and links.
01:56:40.620 | So your concepts can become nodes,
01:56:43.020 | the logical relations become links.
01:56:45.260 | And so then inference is like,
01:56:46.680 | so if this link is A implies B,
01:56:48.800 | this link is B implies C,
01:56:50.880 | then deduction builds a link A implies C.
01:56:53.400 | And your probabilistic algorithm
01:56:54.960 | can assign a certain weight there.
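A minimal sketch of that deduction step, with a toy strength-combination rule (not PLN's actual formula): given weighted term-logic links A implies B and B implies C, build A implies C with a derived strength.

```python
def deduce(strength_ab, strength_bc):
    # naive chaining: treat the two steps as independent (placeholder rule)
    return strength_ab * strength_bc

# weighted term-logic links: A -> B and B -> C
links = {("A", "B"): 0.9, ("B", "C"): 0.8}

if ("A", "B") in links and ("B", "C") in links:
    links[("A", "C")] = deduce(links[("A", "B")], links[("B", "C")])

print(links[("A", "C")])  # ~0.72
```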
01:56:57.480 | Now, you may also have like a Hebbian neural link
01:57:00.080 | from A to C, which is the degree to which thinking,
01:57:03.640 | the degree to which A being the focus of attention
01:57:06.680 | should make B the focus of attention, right?
01:57:09.100 | So you could have then a neural link,
01:57:10.920 | and you could have a symbolic,
01:57:13.740 | like logical inheritance link in your term logic.
01:57:17.040 | And they have separate meaning,
01:57:19.560 | but they could be used to guide each other as well.
01:57:22.960 | Like if there's a large amount of neural weight
01:57:26.760 | on the link between A and B,
01:57:28.420 | that may direct your logic engine to think about,
01:57:30.440 | well, what is the relation?
01:57:31.360 | Are they similar?
01:57:32.200 | Is there an inheritance relation?
01:57:33.920 | Are they similar in some context?
01:57:37.440 | On the other hand, if there's a logical relation
01:57:39.960 | between A and B, that may direct your neural component
01:57:43.400 | to think, well, when I'm thinking about A,
01:57:45.560 | or should I be directing some attention to B also,
01:57:48.280 | because there's a logical relation.
01:57:50.200 | So in terms of logic, there's a lot of thought
01:57:53.840 | that went into how do you break down logic relations,
01:57:58.320 | including basic sort of propositional logic relations
01:58:02.360 | as Aristotelian term logic deals with,
01:58:04.200 | and then quantifier logic relations also.
01:58:07.100 | How do you break those down elegantly into a hypergraph?
01:58:10.920 | 'Cause you, I mean, you can boil logic expression
01:58:13.480 | into a graph in many different ways.
01:58:14.840 | Many of them are very ugly, right?
01:58:16.680 | We tried to find elegant ways of sort of hierarchically
01:58:21.240 | breaking down complex logic expression into nodes and links,
01:58:26.240 | so that if you have, say, different nodes representing,
01:58:30.760 | you know, Ben, AI, Lex, interview, or whatever,
01:58:34.200 | the logic relations between those things are compact
01:58:37.920 | in the node and link representation,
01:58:40.500 | so that when you have a neural net acting
01:58:42.080 | on those same nodes and links,
01:58:43.960 | the neural net and the logic engine
01:58:45.760 | can sort of interoperate with each other.
01:58:48.240 | - And also interpretable by humans?
01:58:49.920 | Is that an important-- - That's tough.
01:58:52.240 | Yeah, in simple cases, it's interpretable by humans,
01:58:54.600 | but honestly, you know,
01:58:57.900 | I would say logic systems give more potential
01:59:05.280 | for transparency and comprehensibility
01:59:09.840 | than neural net systems, but you still have to work at it,
01:59:12.880 | because, I mean, if I show you a predicate logic proposition
01:59:16.720 | with, like, 500 nested universal
01:59:18.720 | and existential quantifiers and 217 variables,
01:59:22.380 | that's no more comprehensible than the weight matrix
01:59:24.640 | of a neural network, right?
01:59:26.580 | So I'd say the logic expressions that an AI learns
01:59:29.480 | from its experience are mostly totally opaque
01:59:32.480 | to human beings, and maybe even harder to understand
01:59:35.160 | than a neural net, 'cause, I mean,
01:59:36.720 | when you have multiple nested quantifier bindings,
01:59:38.960 | it's a very high level of abstraction.
01:59:41.520 | There is a difference, though, in that within logic,
01:59:44.720 | it's a little more straightforward to pose the problem
01:59:47.920 | of, like, normalize this and boil this down
01:59:49.900 | to a certain form.
01:59:51.080 | I mean, you can do that in neural nets, too.
01:59:52.720 | Like, you can distill a neural net to a simpler form,
01:59:55.680 | but that's more often done to make a neural net
01:59:57.280 | that'll run on an embedded device or something.
01:59:59.320 | It's harder to distill a net to a comprehensible form
02:00:03.440 | than it is to simplify a logic expression
02:00:05.640 | to a comprehensible form, but it doesn't come for free.
02:00:08.600 | Like, what's in the AI's mind is incomprehensible
02:00:13.040 | to a human unless you do some special work
02:00:15.720 | to make it comprehensible.
02:00:16.880 | So on the procedural side, there's some different
02:00:20.400 | and sort of interesting voodoo there.
02:00:22.980 | I mean, if you're familiar, in computer science,
02:00:25.800 | there's something called the Curry-Howard correspondence,
02:00:27.820 | which is a one-to-one mapping between proofs and programs.
02:00:30.960 | So every program can be mapped into a proof.
02:00:33.560 | Every proof can be mapped into a program.
02:00:36.000 | You can model this using category theory
02:00:37.800 | and a bunch of nice math,
02:00:40.960 | but we wanna make that practical, right?
02:00:43.280 | So that if you have an executable program
02:00:46.520 | that moves a robot's arm or figures out
02:00:49.960 | in what order to say things in a dialogue,
02:00:51.840 | that's a procedure represented in OpenCog's hypergraph.
02:00:55.840 | But if you wanna reason on how to improve that procedure,
02:01:00.120 | you need to map that procedure into logic
02:01:03.080 | using Curry-Howard isomorphism,
02:01:05.520 | so then the logic engine can reason
02:01:09.320 | about how to improve that procedure
02:01:11.120 | and then map that back into the procedural representation
02:01:14.080 | that is efficient for execution.
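A toy, Curry-Howard-flavored illustration of that round trip, under the simplifying assumption that a procedure is just a pipeline of typed steps: each step of type X -> Y is read as the implication "X implies Y", a trivial logic view chains the implications, and the result maps back to an executable composition. This is a sketch of the idea only, not OpenCog's mechanism.

```python
# each step: (input type, output type, executable function)
steps = [
    ("Image", "Edges", lambda img: f"edges({img})"),
    ("Edges", "Shapes", lambda e: f"shapes({e})"),
    ("Shapes", "Label", lambda s: f"label({s})"),
]

# logic view: read each step as an implication, then chain them by deduction
implications = [(src, dst) for src, dst, _ in steps]
assert all(a[1] == b[0] for a, b in zip(implications, implications[1:]))
theorem = (implications[0][0], implications[-1][1])  # Image implies Label

# program view: map the chained implication back to an executable pipeline
def compose(functions):
    def run(x):
        for f in functions:
            x = f(x)
        return x
    return run

pipeline = compose([f for _, _, f in steps])
print(theorem, pipeline("photo.png"))  # ('Image', 'Label') label(shapes(edges(photo.png)))
```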
02:01:16.160 | So again, that comes down to not just
02:01:18.800 | can you make your procedure into a bunch of nodes and links,
02:01:21.440 | 'cause I mean, that can be done trivially.
02:01:23.240 | A C++ compiler has nodes and links inside it.
02:01:26.440 | Can you boil down your procedure
02:01:27.960 | into a bunch of nodes and links
02:01:29.840 | in a way that's hierarchically decomposed and simple enough?
02:01:33.680 | - That you can reason about.
02:01:34.520 | - Yeah, yeah, that given the resource constraints at hand,
02:01:37.040 | you can map it back and forth to your term logic,
02:01:40.920 | like fast enough and without having
02:01:43.520 | a bloated logic expression, right?
02:01:45.200 | So there's just a lot of nitty gritty particulars there.
02:01:50.200 | But by the same token, if you ask a chip designer,
02:01:54.520 | like how do you make the Intel i7 chip so good?
02:01:57.840 | There's a long list of technical answers there,
02:02:02.560 | which will take a while to go through, right?
02:02:04.760 | And this has been decades of work.
02:02:06.640 | I mean, the first AI system of this nature I tried to build
02:02:10.880 | was called WebMind in the mid 1990s.
02:02:13.440 | And we had a big graph, a big graph operating in RAM
02:02:17.240 | implemented with Java 1.1,
02:02:18.880 | which was a terrible, terrible implementation idea.
02:02:21.840 | And then each node had its own processing.
02:02:26.000 | So like there, the core loop looped through all nodes
02:02:29.040 | in the network and let each node enact
02:02:30.680 | what its little thing was doing.
02:02:32.960 | And we had logic and neural nets in there,
02:02:35.880 | but an evolutionary learning,
02:02:38.440 | but we hadn't done enough of the math
02:02:40.800 | to get them to operate together very cleanly.
02:02:43.440 | So it was really, it was quite a horrible mess.
02:02:46.280 | So as well as shifting an implementation
02:02:49.440 | where the graph is its own object
02:02:51.880 | and the agents are separately scheduled,
02:02:54.760 | we've also done a lot of work
02:02:56.840 | on how do you represent programs?
02:02:58.440 | How do you represent procedures?
02:03:00.840 | You know, how do you represent genotypes for evolution
02:03:03.720 | in a way that the interoperability
02:03:06.680 | between the different types of learning
02:03:09.040 | associated with these different types of knowledge
02:03:11.520 | actually works?
02:03:13.120 | And that's been quite difficult.
02:03:15.000 | It's taken decades and it's totally off to the side
02:03:18.600 | of what the commercial mainstream of the AI field is doing,
02:03:23.120 | which isn't thinking about representation at all, really.
02:03:27.640 | Although you could see like in the DNC,
02:03:30.760 | they had to think a little bit
02:03:31.960 | about how do you make representation of a map
02:03:33.840 | in this memory matrix work together
02:03:36.640 | with the representation needed
02:03:38.120 | for say visual pattern recognition
02:03:40.200 | in a hierarchical neural network.
02:03:42.080 | But I would say we have taken that direction
02:03:45.080 | of taking the types of knowledge you need
02:03:47.880 | for different types of learning,
02:03:49.080 | like declarative, procedural, attentional,
02:03:52.000 | and how do you make these types of knowledge
02:03:54.800 | represent in a way that allows cross learning
02:03:58.120 | across these different types of memory.
02:04:00.160 | We've been prototyping and experimenting with this
02:04:03.880 | within OpenCog and before that WebMind
02:04:07.520 | since the mid 1990s.
02:04:10.600 | Now, disappointingly to all of us,
02:04:13.800 | this has not yet been cashed out in an AGI system, right?
02:04:18.360 | I mean, we've used this system
02:04:20.600 | within our consulting business.
02:04:22.440 | So we've built natural language processing
02:04:24.280 | and robot control and financial analysis.
02:04:27.720 | We've built a bunch of sort of vertical market specific
02:04:31.160 | proprietary AI projects.
02:04:33.560 | They use OpenCog on the backend,
02:04:36.680 | but we haven't, that's not the AGI goal, right?
02:04:39.680 | It's interesting, but it's not the AGI goal.
02:04:42.680 | So now what we're looking at with our rebuild of the system--
02:04:47.680 | - 2.0.
02:04:49.360 | - Yeah, we're also calling it true AGI.
02:04:51.400 | So we're not quite sure what the name is yet.
02:04:54.800 | We made a website for 2agi.io,
02:04:57.480 | but we haven't put anything on there yet.
02:04:59.840 | We may come up with an even better name.
02:05:02.160 | - It's kind of like the real AI starting point
02:05:04.960 | for your AGI book.
02:05:05.800 | - Yeah, but I like true better
02:05:06.920 | because true has like, you can be true hearted, right?
02:05:09.760 | You can be true to your girlfriend.
02:05:11.000 | So true has a number of meanings, and it also has logic in it, right?
02:05:15.720 | Because logic is a key part of the system.
02:05:17.080 | - I like it, yeah.
02:05:18.280 | - So yeah, with the true AGI system,
02:05:22.400 | we're sticking with the same basic architecture,
02:05:25.400 | but we're trying to build on what we've learned.
02:05:29.640 | And one thing we've learned is that,
02:05:32.360 | we need type checking among dependent types
02:05:36.880 | to be much faster and among probabilistic dependent types
02:05:39.880 | to be much faster.
02:05:41.080 | So as it is now,
02:05:43.560 | you can have complex types on the nodes and links,
02:05:47.080 | but if you want to put,
02:05:48.320 | like if you want types to be first class citizens,
02:05:51.240 | so that you can have the types can be variables
02:05:53.760 | and then you do type checking
02:05:55.640 | among complex higher order types,
02:05:58.000 | you can do that in the system now, but it's very slow.
02:06:00.920 | This is stuff like what's done
02:06:02.040 | in cutting-edge programming languages like Agda or something,
02:06:05.320 | these obscure research languages.
02:06:07.360 | On the other hand, we've been doing a lot of work
02:06:09.480 | tying together deep neural nets with symbolic learning.
02:06:12.320 | So we did a project for Cisco, for example,
02:06:15.160 | which was on, this was street scene analysis,
02:06:17.360 | but they had deep neural models
02:06:18.560 | for a bunch of cameras watching street scenes,
02:06:20.960 | but they trained a different model for each camera
02:06:23.360 | because they couldn't get the transfer learning
02:06:24.800 | to work between camera A and camera B.
02:06:27.000 | So we took what came out of all the deep neural models
02:06:29.000 | for the different cameras,
02:06:30.360 | we fed it into an OpenCog symbolic representation,
02:06:33.400 | then we did some pattern mining and some reasoning
02:06:36.240 | on what came out of all the different cameras
02:06:38.080 | within the symbolic graph.
02:06:39.440 | And that worked well for that application.
02:06:42.000 | I mean, Hugo Latapie from Cisco gave a talk
02:06:45.200 | touching on that at last year's AGI conference,
02:06:47.360 | it was in Shenzhen.
02:06:48.760 | On the other hand, we learned from there,
02:06:51.000 | it was kind of clunky to get the deep neural models
02:06:53.280 | to work well with the symbolic system
02:06:55.640 | because we were using Torch
02:06:58.560 | and Torch keeps a sort of stateful computation graph,
02:07:03.560 | but you needed like real time access
02:07:05.280 | to that computation graph within our hypergraph.
02:07:07.640 | And we certainly did it.
02:07:10.640 | Alexey Potapov, who leads our St. Petersburg team
02:07:13.040 | wrote a great paper on cognitive modules in OpenCog,
02:07:16.440 | explaining sort of how do you deal
02:07:17.680 | with the Torch compute graph inside OpenCog.
02:07:19.920 | But in the end, we realized like,
02:07:22.800 | that just hadn't been one of our design thoughts
02:07:25.360 | when we built OpenCog, right?
02:07:27.200 | So between wanting really fast dependent type checking
02:07:30.640 | and wanting much more efficient interoperation
02:07:33.600 | between the computation graphs of deep neural net frameworks
02:07:36.240 | and OpenCog's hypergraph,
02:07:37.720 | and adding on top of that,
02:07:39.960 | wanting to more effectively run an OpenCog hypergraph
02:07:42.440 | distributed across RAM and 10,000 machines,
02:07:45.160 | which is, we're doing dozens of machines now,
02:07:47.240 | but it's just not, we didn't architect it
02:07:50.680 | with that sort of modern scalability in mind.
02:07:53.040 | So these performance requirements
02:07:55.360 | are what have driven us to want to re-architect the base,
02:08:00.360 | but the core AGI paradigm doesn't really change.
02:08:05.280 | Like the mathematics is the same.
02:08:07.720 | It's just, we can't scale to the level that we want
02:08:11.400 | in terms of distributed processing
02:08:13.840 | or speed of various kinds of processing
02:08:16.240 | with the current infrastructure
02:08:19.120 | that was built in the phase 2001 to 2008,
02:08:22.840 | which is hardly shocking, right?
02:08:26.080 | - Well, I mean, the three things you mentioned
02:08:27.840 | are really interesting.
02:08:28.680 | So what do you think about, in terms of interoperability,
02:08:32.300 | communicating with computational graph of neural networks,
02:08:36.280 | what do you think about the representations
02:08:38.440 | that neural networks form?
02:08:40.640 | - They're bad, but there's many ways
02:08:42.920 | that you could deal with that.
02:08:44.320 | So I've been wrestling with this a lot
02:08:46.840 | in some work on unsupervised grammar induction,
02:08:49.880 | and I have a simple paper on that
02:08:52.120 | that I'll give at the next AGI conference,
02:08:55.360 | the online portion of which is next week, actually.
02:08:58.200 | - What is grammar induction?
02:09:00.400 | - So this isn't AGI either,
02:09:02.600 | but it's sort of on the verge
02:09:05.200 | between narrow AI and AGI or something.
02:09:08.280 | Unsupervised grammar induction is the problem:
02:09:11.320 | throw your AI system a huge body of text
02:09:15.420 | and have it learn the grammar of the language
02:09:18.160 | that produced that text.
02:09:20.280 | So you're not giving it labeled examples.
02:09:22.600 | So you're not giving it like a thousand sentences
02:09:24.440 | where the parses were marked up by graduate students.
02:09:27.120 | So it's just got to infer the grammar from the text.
02:09:30.320 | It's like the Rosetta Stone, but worse, right?
02:09:33.480 | 'Cause you only have the one language,
02:09:35.360 | and you have to figure out what is the grammar.
02:09:37.200 | So that's not really AGI because,
02:09:41.480 | I mean, the way a human learns language is not that, right?
02:09:44.380 | I mean, we learn from language that's used in context.
02:09:47.760 | So it's a social embodied thing.
02:09:49.360 | We see how a given sentence is grounded in observation.
02:09:53.560 | - As an interactive element, I guess.
02:09:55.240 | - Yeah, yeah, yeah.
02:09:56.560 | On the other hand, so I'm more interested in that.
02:10:00.400 | I'm more interested in making an AGI system learn language
02:10:02.980 | from its social and embodied experience.
02:10:05.600 | On the other hand, that's also more of a pain to do,
02:10:08.280 | and that would lead us into Hanson Robotics
02:10:10.680 | and their robotics work I've known,
02:10:11.960 | which we'll talk about in a few minutes.
02:10:14.640 | But just as an intellectual exercise,
02:10:17.160 | as a learning exercise, trying to learn grammar
02:10:20.600 | from a corpus is very, very interesting, right?
02:10:24.600 | And that's been a field in AI for a long time.
02:10:27.520 | No one can do it very well.
02:10:29.240 | So we've been looking at transformer neural networks
02:10:32.100 | and tree transformers, which are amazing.
02:10:35.840 | These came out of Google Brain, actually.
02:10:39.120 | And actually, on that team was Lukasz Kaiser,
02:10:41.960 | who used to work for me in the period 2005 through '08
02:10:46.560 | or something.
02:10:47.400 | So it's been fun to see my former AGI employees disperse
02:10:52.200 | and do all these amazing things.
02:10:54.100 | Way too many sucked into Google, actually.
02:10:56.200 | (laughing)
02:10:57.040 | But yeah, anyway--
02:10:57.860 | - We'll talk about that, too.
02:10:59.000 | - Lukasz Kaiser and a bunch of these guys,
02:11:00.680 | they create transformer networks,
02:11:03.220 | that classic paper, "Attention Is All You Need,"
02:11:05.520 | and all these things following on from that.
02:11:08.200 | So we're looking at transformer networks.
02:11:10.180 | And these are able to, I mean, this is what underlies
02:11:14.520 | GPT-2 and GPT-3 and so on, which are very, very cool
02:11:18.160 | and have absolutely no cognitive understanding
02:11:20.360 | of any of the texts we're looking at.
02:11:21.720 | Like, they're very intelligent idiots, right?
02:11:24.940 | - So, sorry to take, but to bring us back,
02:11:28.100 | but do you think GPT-3 understands language?
02:11:31.800 | - No, not at all, it understands nothing.
02:11:34.120 | It's a complete idiot.
02:11:35.360 | But it's a brilliant idiot.
02:11:36.760 | - You don't think GPT-20 will understand language?
02:11:40.520 | - No, no, no, no.
02:11:42.280 | - So size is not gonna buy you understanding?
02:11:45.200 | - Any more than a faster car is gonna get you to Mars.
02:11:48.880 | It's a completely different kind of thing.
02:11:50.920 | - I mean, these networks are very cool.
02:11:54.280 | And as an entrepreneur, I can see many highly valuable uses
02:11:57.440 | for them and as an artist, I love them, right?
02:12:01.100 | So I mean, we're using our own neural model,
02:12:05.240 | which is along those lines to control
02:12:07.040 | the Philip K. Dick robot now.
02:12:09.000 | And it's amazing to like train a neural model
02:12:12.200 | on the robot Philip K. Dick and see it come up
02:12:14.800 | with like crazed stoned philosopher pronouncements,
02:12:18.400 | very much like what Philip K. Dick might have said, right?
02:12:21.440 | Like these models are super cool.
02:12:24.840 | And I'm working with Hanson Robotics now
02:12:30.560 | on using a similar but more sophisticated one for Sophia,
02:12:30.560 | which we haven't launched yet.
02:12:34.040 | But so I think it's cool.
02:12:36.040 | But no, these are-- - But it's not understanding.
02:12:37.440 | - These are recognizing a large number of shallow patterns.
02:12:41.800 | They're not forming an abstract representation.
02:12:44.800 | And that's the point I was coming to
02:12:47.100 | when we're looking at grammar induction.
02:12:50.660 | We tried to mine patterns out of the structure
02:12:53.500 | of the transformer network.
02:12:54.980 | And you can, but the patterns aren't what you want.
02:12:59.560 | They're nasty.
02:13:00.560 | So I mean, if you do supervised learning,
02:13:03.180 | if you look at sentences where you know
02:13:04.520 | the correct parse of a sentence,
02:13:06.480 | you can learn a matrix that maps
02:13:09.080 | between the internal representation of the transformer
02:13:12.240 | and the parse of the sentence.
02:13:14.120 | And so then you can actually train something
02:13:16.120 | that will output the sentence parse
02:13:18.440 | from the transformer network's internal state.
02:13:20.660 | And we did this, I think, Christopher Manning,
02:13:24.760 | some others have now done this also.
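A hedged sketch of that kind of supervised probe: learn a linear map from a transformer's per-token hidden states to parse annotations. The hidden states and labels below are random stand-ins; in practice they would come from a real transformer and a hand-parsed treebank.

```python
import numpy as np

rng = np.random.default_rng(0)
hidden = rng.normal(size=(1000, 768))          # per-token hidden states (stand-in)
parse_labels = rng.integers(0, 20, size=1000)  # per-token parse tags (stand-in)

# one-hot targets, then a least-squares linear probe W: hidden state -> tag scores
targets = np.eye(20)[parse_labels]
W, *_ = np.linalg.lstsq(hidden, targets, rcond=None)
predicted = hidden @ W
accuracy = (predicted.argmax(axis=1) == parse_labels).mean()
print(f"probe accuracy on training data: {accuracy:.2f}")
```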
02:13:27.120 | But I mean, what you get is that the representation
02:13:30.600 | is horribly ugly and is scattered all over the network
02:13:33.200 | and doesn't look like the rules of grammar
02:13:34.920 | that you know are the right rules of grammar, right?
02:13:37.240 | It's kind of ugly.
02:13:38.240 | So what we're actually doing
02:13:40.760 | is we're using a symbolic grammar learning algorithm,
02:13:44.280 | but we're using the transformer neural network
02:13:46.760 | as a sentence probability oracle.
02:13:48.900 | So like, if you have a rule of grammar
02:13:52.120 | and you aren't sure if it's a correct rule of grammar
02:13:53.920 | or not, you can generate a bunch of sentences
02:13:56.440 | using that rule of grammar and a bunch of sentences
02:13:59.080 | violating that rule of grammar.
02:14:00.880 | And you can see whether the transformer model
02:14:04.480 | thinks the sentences obeying the rule of grammar
02:14:06.740 | are more probable than the sentences
02:14:08.280 | disobeying the rule of grammar.
02:14:10.080 | So in that way, you can use the neural model
02:14:11.840 | as a sentence probability oracle to guide
02:14:14.720 | a symbolic grammar learning process.
02:14:19.960 | And that seems to work better than trying to milk
02:14:24.000 | the grammar out of the neural network
02:14:25.840 | that doesn't have it in there.
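A minimal sketch of that sentence-probability-oracle loop, assuming the standard Hugging Face transformers API with "gpt2" as a convenient stand-in language model; the sentences "generated" from a candidate grammar rule are faked here with fixed lists.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
lm = AutoModelForCausalLM.from_pretrained("gpt2")
lm.eval()

def sentence_logprob(sentence):
    ids = tok(sentence, return_tensors="pt").input_ids
    with torch.no_grad():
        out = lm(ids, labels=ids)
    return -out.loss.item() * ids.shape[1]  # approximate total log-likelihood

# stand-ins for sentences generated with / without a candidate grammar rule
obeying = ["the cat sleeps on the mat", "a dog chases the ball"]
violating = ["the cat sleep on mat the", "a dog the ball chases chases"]

score_obey = sum(map(sentence_logprob, obeying)) / len(obeying)
score_violate = sum(map(sentence_logprob, violating)) / len(violating)
keep_rule = score_obey - score_violate > 1.0  # illustrative acceptance threshold
print(score_obey, score_violate, keep_rule)
```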
02:14:26.760 | So I think the thing is these neural nets
02:14:29.480 | are not getting a semantically meaningful representation
02:14:32.880 | internally by and large.
02:14:35.380 | So one line of research is to try to get them to do that.
02:14:38.120 | And InfoGAN was trying to do that.
02:14:40.000 | So like, if you look back like two years ago,
02:14:43.040 | there was all these papers on like Edward,
02:14:45.280 | this probabilistic programming neural net framework
02:14:47.400 | that Google had, which came out of InfoGAN.
02:14:49.680 | So the idea there was like, you could train
02:14:53.720 | an InfoGAN neural net model, which is a generative
02:14:56.320 | adversarial network to recognize and generate faces.
02:14:59.200 | And the model would automatically learn a variable
02:15:02.160 | for how long the nose is and automatically learn a variable
02:15:04.420 | for how wide the eyes are or how big the lips are
02:15:07.080 | or something, right?
02:15:08.060 | So it automatically learned these variables,
02:15:11.040 | which have a semantic meaning.
02:15:12.480 | So that was a rare case where a neural net
02:15:15.360 | trained with a fairly standard GAN method
02:15:18.080 | was able to actually learn the semantic representation.
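For reference, here is a compact PyTorch sketch of that InfoGAN-style setup, with placeholder layer sizes: the generator takes noise z plus a small structured code c, and an auxiliary Q head is trained to recover c from the generated output, which is what pressures c toward semantically meaningful factors like nose length or eye width. The adversarial discriminator terms are omitted for brevity.

```python
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=62, c_dim=4, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(z_dim + c_dim, 256), nn.ReLU(),
            nn.Linear(256, out_dim), nn.Tanh())

    def forward(self, z, c):
        return self.net(torch.cat([z, c], dim=1))

class QHead(nn.Module):
    def __init__(self, in_dim=784, c_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(), nn.Linear(256, c_dim))

    def forward(self, x):
        return self.net(x)

G, Q = Generator(), QHead()
z = torch.randn(16, 62)   # unstructured noise
c = torch.rand(16, 4)     # structured latent codes we want to become meaningful
fake = G(z, c)
info_loss = nn.functional.mse_loss(Q(fake), c)  # proxy for maximizing I(c; G(z, c))
info_loss.backward()                            # (discriminator terms omitted)
```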
02:15:20.880 | So for many years, many of us tried to take that
02:15:23.240 | the next step and get a GAN type neural network
02:15:27.200 | that would have not just a list of semantic latent variables,
02:15:31.720 | but would have, say, a Bayes net of semantic latent variables
02:15:33.960 | with dependencies between them.
02:15:35.480 | The whole programming framework, Edward, was made for that.
02:15:38.860 | I mean, no one got it to work, right?
02:15:40.740 | And it could be--
02:15:41.580 | - Do you think it's possible?
02:15:42.980 | Yeah, do you think--
02:15:43.820 | - I don't know.
02:15:44.780 | It might be that back propagation just won't work for it
02:15:47.320 | because the gradients are too screwed up.
02:15:49.740 | Maybe you could get it to work using CMA-ES
02:15:52.020 | or some like floating point evolutionary algorithm.
02:15:54.860 | We tried, we didn't get it to work.
02:15:57.020 | Eventually, we just paused that rather than gave it up.
02:16:01.380 | We paused that and said, well, okay,
02:16:03.600 | let's try more innovative ways to learn implicit,
02:16:08.600 | to learn what are the representations implicit
02:16:11.000 | in that network without trying to make it grow
02:16:13.480 | inside that network.
02:16:14.760 | And I described how we're doing that in language.
02:16:19.600 | You can do similar things in vision, right?
02:16:21.440 | So what--
02:16:22.280 | - Use it as an oracle.
02:16:23.400 | - Yeah, yeah, yeah.
02:16:24.240 | So you can, that's one way is that you use
02:16:26.260 | a structure learning algorithm, which is symbolic.
02:16:29.120 | And then you use the deep neural net
02:16:31.880 | as an oracle to guide the structure learning algorithm.
02:16:34.240 | The other way to do it is like InfoGAN was trying to do
02:16:37.880 | and try to tweak the neural network
02:16:40.040 | to have the symbolic representation inside it.
02:16:44.160 | I tend to think what the brain is doing
02:16:46.440 | is more like using the deep neural net type thing
02:16:51.440 | as an oracle.
02:16:52.500 | Like I think the visual cortex or the cerebellum
02:16:56.680 | are probably learning a non-semantically meaningful
02:17:00.280 | opaque tangled representation.
02:17:02.440 | And then when they interface with the more cognitive parts
02:17:04.600 | of the cortex, the cortex is sort of using those
02:17:08.120 | as an oracle and learning the abstract representation.
02:17:10.760 | So if you do sports, take for example,
02:17:13.240 | serving in tennis, right?
02:17:15.240 | I mean, my tennis serve is okay, not great,
02:17:17.680 | but I learned it by trial and error, right?
02:17:19.760 | And I mean, I learned music by trial and error too.
02:17:22.120 | I just sit down and play.
02:17:23.960 | But then if you're an athlete, which I'm not a good athlete,
02:17:27.080 | I mean, and then you'll watch videos of yourself serving
02:17:30.360 | and your coach will help you think about what you're doing.
02:17:32.760 | And you'll then form a declarative representation,
02:17:35.040 | but your cerebellum maybe didn't have
02:17:37.160 | a declarative representation.
02:17:38.640 | Same way with music, like I will hear something in my head.
02:17:43.560 | I'll sit down and play the thing like I heard it.
02:17:46.940 | And then I will try to study what my fingers did
02:17:51.000 | to see like, what did you just play?
02:17:52.760 | Like, how did you do that, right?
02:17:55.600 | Because if you're composing,
02:17:57.700 | you may wanna see how you did it
02:17:59.720 | and then declaratively morph that in some way
02:18:02.660 | that your fingers wouldn't think of, right?
02:18:05.200 | But the physiological movement may come out
02:18:09.360 | of some opaque like cerebellar reinforcement learn thing.
02:18:14.360 | And so that's, I think, trying to milk the structure
02:18:17.640 | of a neural net by treating it as an oracle
02:18:19.280 | may be more like how your declarative mind post-processes
02:18:23.920 | what your visual or motor cortex is doing.
02:18:27.480 | So I mean, in vision, it's the same way.
02:18:29.360 | Like you can recognize beautiful art
02:18:33.520 | much better than you can say why
02:18:36.720 | you think that piece of art is beautiful.
02:18:38.480 | But if you're trained as an art critic,
02:18:40.480 | you do learn to say why.
02:18:41.640 | And some of it's bullshit, but some of it isn't, right?
02:18:44.000 | Some of it is learning to map sensory knowledge
02:18:46.800 | into declarative and linguistic knowledge,
02:18:51.080 | yet without necessarily making the sensory system itself
02:18:56.000 | use a transparent and easily communicable representation.
02:19:00.600 | - Yeah, that's fascinating.
02:19:01.800 | To think of neural networks as like dumb question answers
02:19:06.360 | that you can just milk to build up a knowledge base.
02:19:10.920 | And that could be multiple networks, I suppose,
02:19:12.640 | from different--
02:19:13.560 | - Yeah, yeah.
02:19:14.400 | So I think if a group like DeepMind or OpenAI
02:19:18.120 | were to build AGI,
02:19:19.840 | and I think DeepMind is like 1,000 times more likely
02:19:22.920 | from what I could tell,
02:19:24.720 | 'cause they've hired a lot of people with broad minds
02:19:30.040 | and many different approaches and angles on AGI,
02:19:34.360 | whereas OpenAI is also awesome,
02:19:36.640 | but I see them as more of like a pure
02:19:39.060 | deep reinforcement learning shop.
02:19:41.240 | - Yeah, this time I got you.
02:19:42.080 | - So far. - Yeah, there's a lot of,
02:19:43.920 | you're right, there's, I mean,
02:19:46.680 | there's so much interdisciplinary work at DeepMind,
02:19:49.440 | like neuroscience--
02:19:50.280 | - And you put that together with Google Brain,
02:19:52.280 | which, granted, they're not working that closely together
02:19:54.520 | now, but my oldest son, Zarathustra,
02:19:57.120 | is doing his PhD in machine learning applied
02:20:00.200 | to automated theorem proving in Prague under Josef Urban.
02:20:03.860 | So the first paper, DeepMath,
02:20:06.520 | which applied deep neural nets to guide theorem proving
02:20:09.440 | was out of Google Brain.
02:20:10.720 | I mean, by now, the automated theorem proving community
02:20:15.000 | has gone way, way, way beyond anything Google was doing.
02:20:18.400 | But still, yeah, but anyway,
02:20:21.120 | if that community was gonna make an AGI,
02:20:23.760 | probably one way they would do it was,
02:20:26.880 | you know, take 25 different neural modules
02:20:30.680 | architected in different ways,
02:20:32.040 | maybe resembling different parts of the brain,
02:20:33.800 | like a basal ganglia model, cerebellum model,
02:20:36.280 | a thalamus module, a few hippocampus models,
02:20:40.440 | number of different models
02:20:41.480 | representing parts of the cortex, right?
02:20:43.680 | Take all of these and then wire them together
02:20:47.920 | to co-train and learn them together.
02:20:51.480 | Like that would be an approach to creating an AGI.
02:20:56.480 | One could implement something like that efficiently
02:20:59.620 | on top of our true AGI, like OpenCog 2.0 system,
02:21:03.760 | once it exists, although obviously Google
02:21:06.640 | has their own highly efficient implementation architecture.
02:21:10.240 | So I think that's a decent way to build AGI.
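As a rough sketch of what "take differently architected neural modules, wire them together, and co-train them" could look like in code (a toy illustration under the assumption that PyTorch is available, not DeepMind's or OpenCog's design; the module names are just evocative labels):

```python
# Toy sketch of co-training several differently-architected neural modules
# wired together into one system and trained end to end with a single loss.
import torch
import torch.nn as nn

class CortexModule(nn.Module):          # feedforward "cortical" feature extractor
    def __init__(self, d_in, d_out):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_in, 64), nn.ReLU(), nn.Linear(64, d_out))
    def forward(self, x):
        return self.net(x)

class HippocampusModule(nn.Module):     # recurrent module, a stand-in for episodic binding
    def __init__(self, d_in, d_out):
        super().__init__()
        self.rnn = nn.GRU(d_in, d_out, batch_first=True)
    def forward(self, x):
        out, _ = self.rnn(x.unsqueeze(1))   # treat each input as a length-1 sequence
        return out.squeeze(1)

class BasalGangliaModule(nn.Module):    # small "action selection" head
    def __init__(self, d_in, n_actions):
        super().__init__()
        self.head = nn.Linear(d_in, n_actions)
    def forward(self, x):
        return self.head(x)

class WiredBrain(nn.Module):
    """Modules resembling different brain regions, wired together and co-trained."""
    def __init__(self, d_in=16, d_hid=32, n_actions=4):
        super().__init__()
        self.cortex = CortexModule(d_in, d_hid)
        self.hippocampus = HippocampusModule(d_hid, d_hid)
        self.basal_ganglia = BasalGangliaModule(d_hid, n_actions)
    def forward(self, x):
        return self.basal_ganglia(self.hippocampus(self.cortex(x)))

# Joint training on random toy data: one loss, gradients flow through all modules.
model = WiredBrain()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.randn(256, 16)
y = torch.randint(0, 4, (256,))
for step in range(200):
    loss = nn.functional.cross_entropy(model(x), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
print("final toy loss:", loss.item())
```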
02:21:13.280 | I was very interested in that in the mid '90s.
02:21:15.680 | But I mean, the knowledge about how the brain works
02:21:19.440 | sort of pissed me off, like it wasn't there yet.
02:21:21.520 | Like, you know, in the hippocampus,
02:21:23.080 | you have these concept neurons,
02:21:24.740 | like the so-called grandmother neuron,
02:21:26.720 | which everyone laughed at, it's actually there.
02:21:28.520 | Like I have some Lex Fridman neurons
02:21:31.080 | that fire differentially when I see you
02:21:33.280 | and not when I see any other person, right?
02:21:35.360 | So how do these Lex Fridman neurons,
02:21:38.860 | how do they coordinate with the distributed representation
02:21:41.400 | of Lex Fridman I have in my cortex, right?
02:21:44.520 | There's some back and forth between cortex and hippocampus
02:21:47.680 | that lets these discrete symbolic representations
02:21:50.100 | in hippocampus correlate and cooperate
02:21:53.200 | with the distributed representations in cortex.
02:21:55.680 | This probably has to do with how the brain
02:21:57.400 | does its version of abstraction and quantifier logic, right?
02:22:00.240 | Like you can have a single neuron in the hippocampus
02:22:02.640 | that activates a whole distributed activation pattern
02:22:05.860 | in cortex, well, this may be how the brain
02:22:08.760 | does like symbolization and abstraction,
02:22:11.080 | as in functional programming or something.
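One way to picture that hippocampus-cortex interplay is as an associative memory: a single sparse "concept" unit acts like a symbol or key that, when activated, reinstates a whole distributed pattern in a simulated cortex. A tiny illustrative sketch with toy numbers, not a biological model:

```python
# Toy sketch: a single "concept neuron" (one-hot key) reinstating a whole
# distributed cortical pattern via a simple Hebbian-style weight matrix.
import numpy as np

n_concepts, n_cortex = 5, 200
rng = np.random.default_rng(0)

# Distributed cortical patterns, one per concept (e.g. a person, a place, ...).
cortical_patterns = rng.choice([-1.0, 1.0], size=(n_concepts, n_cortex))

# Weights from hippocampal concept units to cortex: row i stores pattern i.
W = cortical_patterns.copy()

# Activating concept neuron 2 (a one-hot "symbol") reinstates its cortical pattern.
hippocampal_activity = np.zeros(n_concepts)
hippocampal_activity[2] = 1.0
reinstated = hippocampal_activity @ W

print("match with stored pattern:",
      float(np.mean(np.sign(reinstated) == cortical_patterns[2])))  # -> 1.0
```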
02:22:14.260 | But we can't measure it, like we don't have enough electrodes
02:22:17.280 | stuck between the cortex and the hippocampus
02:22:20.920 | in any known experiment to measure it.
02:22:23.060 | So I got frustrated with that direction,
02:22:26.320 | not 'cause it's impossible--
02:22:27.160 | - 'Cause we just don't understand enough yet.
02:22:29.640 | - Of course, it's a valid research direction,
02:22:31.720 | you can try to understand more and more.
02:22:33.680 | And we are measuring more and more
02:22:34.960 | about what happens in the brain now than ever before.
02:22:38.100 | So it's quite interesting.
02:22:40.520 | On the other hand, I sort of got more of an engineering
02:22:45.100 | mindset about AGI, I'm like, well, okay,
02:22:47.880 | we don't know how the brain works that well,
02:22:50.140 | we don't know how birds fly that well yet either.
02:22:52.340 | We have no idea how a hummingbird flies
02:22:54.060 | in terms of the aerodynamics of it.
02:22:56.220 | On the other hand, we know basic principles
02:22:59.260 | of like flapping and pushing the air down,
02:23:01.740 | and we know the basic principles
02:23:03.500 | of how the different parts of the brain work.
02:23:05.700 | So let's take those basic principles
02:23:07.460 | and engineer something that embodies
02:23:09.660 | those basic principles, but is well designed
02:23:13.260 | for the hardware that we have on hand right now.
02:23:18.060 | - Yeah, so do you think we can create AGI
02:23:20.180 | before we understand how the brain works?
02:23:22.420 | - I think that's probably what will happen.
02:23:25.060 | And maybe the AGI will help us do better brain imaging
02:23:28.540 | that will then let us build artificial humans,
02:23:30.860 | which is very, very interesting to us
02:23:33.340 | because we are humans, right?
02:23:34.940 | I mean, building artificial humans is super worthwhile.
02:23:38.820 | I just think it's probably not the shortest path to AGI.
02:23:42.740 | - So it's a fascinating idea that we would build AGI
02:23:45.660 | to help us understand ourselves.
02:23:47.300 | A lot of people ask me if young people
02:23:54.580 | interested in doing artificial intelligence,
02:23:56.400 | they look at sort of doing graduate level,
02:24:00.820 | even undergrads, but graduate level research,
02:24:03.300 | and they see where the artificial intelligence
02:24:06.020 | community stands now.
02:24:07.020 | It's not really AGI type research for the most part.
02:24:10.100 | So the natural question they ask is,
02:24:12.300 | what advice would you give?
02:24:13.820 | I mean, maybe I could ask if people were interested
02:24:17.500 | in working on OpenCog or in some kind of direct
02:24:22.500 | or indirect connection to OpenCog or AGI research,
02:24:25.360 | what would you recommend?
02:24:26.620 | - OpenCog, first of all, is an open source project.
02:24:31.180 | There's a Google group discussion list.
02:24:35.540 | There's a GitHub repository.
02:24:36.940 | So if anyone's interested in lending a hand
02:24:39.980 | with that aspect of AGI,
02:24:42.780 | introduce yourself on the OpenCog email list.
02:24:46.180 | And there's a Slack as well.
02:24:48.100 | I mean, we're certainly interested to have inputs
02:24:52.100 | into our redesign process for a new version of OpenCog,
02:24:57.660 | but also we're doing a lot of very interesting research.
02:25:01.160 | I mean, we're working on data analysis
02:25:04.060 | for COVID clinical trials.
02:25:05.620 | We're working with Hanson Robotics.
02:25:06.940 | We're doing a lot of cool things
02:25:08.000 | with the current version of OpenCog now.
02:25:10.720 | So there's certainly opportunity to jump into OpenCog
02:25:14.740 | or various other open source AGI oriented projects.
02:25:18.780 | - So would you say there's like masters
02:25:20.300 | and PhD theses in there?
02:25:22.100 | - Plenty, yeah, plenty, of course.
02:25:23.940 | I mean, the challenge is to find a supervisor
02:25:26.920 | who wants to foster that sort of research,
02:25:29.700 | but it's way easier than it was when I got my PhD, right?
02:25:32.820 | - So, okay, great.
02:25:33.660 | So you talked about OpenCog,
02:25:34.580 | which is kind of one, the software framework,
02:25:37.980 | but also the actual attempt to build an AGI system.
02:25:42.980 | And then there is this exciting idea of SingularityNet.
02:25:48.620 | So maybe, can you say first, what is SingularityNet?
02:25:53.180 | - Sure, sure.
02:25:54.300 | SingularityNet is a platform
02:25:59.060 | for realizing a decentralized network
02:26:04.060 | of artificial intelligences.
02:26:08.300 | So Marvin Minsky, the AI pioneer who I knew a little bit,
02:26:13.300 | he had the idea of a society of minds.
02:26:16.580 | Like you should achieve an AI
02:26:18.420 | not by writing one algorithm or one program,
02:26:21.080 | but you should put a bunch of different AIs out there
02:26:24.060 | and the different AIs will interact with each other,
02:26:27.780 | each playing their own role.
02:26:29.500 | And then the totality of the society of AIs
02:26:32.540 | would be the thing that displayed
02:26:34.940 | the human level intelligence.
02:26:36.580 | And when he was alive, I had many debates with Marvin
02:26:40.900 | about this idea.
02:26:43.020 | And I think he really thought the mind
02:26:48.020 | was more like a society than I do.
02:26:51.260 | Like, I think you could have a mind
02:26:54.140 | that was as disorganized as a human society,
02:26:56.780 | but I think a human-like mind
02:26:57.940 | has a bit more central control than that, actually.
02:27:00.340 | I mean, we have this thalamus
02:27:02.900 | and the medulla and limbic system.
02:27:04.780 | We have a sort of top-down control system
02:27:07.980 | that guides much of what we do,
02:27:10.900 | more so than a society does.
02:27:12.820 | So I think he stretched that metaphor a little too far,
02:27:16.900 | but I also think there's something interesting there.
02:27:20.860 | And so in the '90s, when I started my first
02:27:25.740 | sort of non-academic AI project, WebMind,
02:27:28.540 | which was an AI startup in New York
02:27:30.940 | in the Silicon Alley area in the late '90s,
02:27:34.620 | what I was aiming to do there
02:27:36.220 | was make a distributed society of AIs,
02:27:38.860 | the different parts of which would live
02:27:41.300 | on different computers all around the world,
02:27:43.620 | and each one would do its own thinking
02:27:45.180 | about the data local to it.
02:27:47.060 | But they would all share information with each other
02:27:48.900 | and outsource work with each other and cooperate,
02:27:51.300 | and the intelligence would be in the whole collective.
02:27:53.980 | And I organized a conference together
02:27:56.620 | with Francis Heylighen at the Free University of Brussels in 2001,
02:28:00.580 | which was the Global Brain Zero Conference.
02:28:02.900 | And we're planning the next version,
02:28:04.700 | the Global Brain One Conference,
02:28:06.900 | at the Free University of Brussels for next year, 2021,
02:28:10.100 | so 20 years after.
02:28:12.020 | And then maybe we can have the next one 10 years after that,
02:28:14.540 | like exponentially faster until the singularity comes, right?
02:28:18.460 | - The timing is right, yeah.
02:28:20.660 | - Yeah, yeah, exactly.
02:28:22.140 | So yeah, the idea with the global brain was,
02:28:25.460 | maybe the AI won't just be in a program
02:28:28.100 | on one guy's computer,
02:28:29.540 | but the AI will be in the internet as a whole
02:28:32.940 | with the cooperation of different AI modules
02:28:35.020 | living in different places.
02:28:37.020 | So one of the issues you face
02:28:39.260 | when architecting a system like that
02:28:41.180 | is how is the whole thing controlled?
02:28:45.700 | Do you have a centralized control unit
02:28:48.140 | that pulls the puppet strings
02:28:49.540 | of all the different modules there?
02:28:51.660 | Or do you have a fundamentally decentralized network
02:28:56.380 | where the society of AIs is controlled
02:29:00.220 | in some democratic and self-organized way
02:29:02.060 | by all the AIs in that society, right?
02:29:05.740 | And Francis and I had different view of many things,
02:29:09.620 | but we both wanted to make a global society of AI minds
02:29:14.620 | with a decentralized organizational mode.
02:29:20.540 | Now, the main difference was he wanted the individual AIs
02:29:25.540 | to be all incredibly simple
02:29:28.140 | and all the intelligence to be on the collective level.
02:29:31.060 | Whereas I thought that was cool,
02:29:33.660 | but I thought a more practical way to do it might be
02:29:36.540 | if some of the agents in the society of minds
02:29:39.540 | were fairly generally intelligent on their own.
02:29:41.540 | So like you could have a bunch of open cogs out there
02:29:44.540 | and a bunch of simpler learning systems,
02:29:47.180 | and then these are all cooperating, coordinating together,
02:29:49.900 | sort of like in the brain.
02:29:51.780 | Okay, the brain as a whole is the general intelligence,
02:29:55.340 | but some parts of the cortex,
02:29:56.700 | you could say have a fair bit of general intelligence
02:29:58.580 | on their own,
02:29:59.740 | whereas say parts of the cerebellum or limbic system
02:30:02.140 | have very little general intelligence on their own,
02:30:04.540 | and they're contributing to general intelligence
02:30:07.300 | by way of their connectivity to other modules.
02:30:10.900 | - Do you see instantiations of the same kind of,
02:30:13.740 | maybe different versions of open cog,
02:30:15.460 | but also just the same version of open cog
02:30:17.380 | and maybe many instantiations
02:30:19.700 | of it as being all parts of it?
02:30:21.340 | - That's what David Hanson and I wanna do
02:30:23.020 | with many Sophias and other robots.
02:30:25.340 | Each one has its own individual mind living on a server,
02:30:29.220 | but there's also a collective intelligence infusing them
02:30:32.060 | and a part of the mind living on the edge in each robot.
02:30:34.860 | So the thing is, at that time,
02:30:38.540 | as well as WebMind being implemented in Java 1.1
02:30:41.860 | as like a massive distributed system,
02:30:44.780 | blockchain wasn't there yet.
02:30:48.140 | So how would you have them do this decentralized control?
02:30:51.540 | You know, we sort of knew it.
02:30:52.860 | We knew about distributed systems.
02:30:54.340 | We knew about encryption.
02:30:55.740 | So I mean, we had the key principles
02:30:58.060 | of what underlies blockchain now,
02:31:00.060 | but I mean, we didn't put it together
02:31:01.740 | in the way that it's been done now.
02:31:02.860 | So when Vitalik Buterin and colleagues
02:31:05.340 | came out with Ethereum blockchain,
02:31:07.100 | many, many years later, like 2013 or something,
02:31:10.960 | then I was like, well, this is interesting.
02:31:13.900 | Like this is Solidity scripting language.
02:31:16.940 | It's kind of dorky in a way,
02:31:18.460 | and I don't see why you need a Turing-complete language
02:31:21.380 | for this purpose.
02:31:22.380 | But on the other hand,
02:31:24.260 | this is like the first time I could sit down
02:31:27.100 | and start to like script infrastructure
02:31:29.820 | for decentralized control of the AIs
02:31:32.380 | in a society of minds in a tractable way.
02:31:35.180 | Like you can hack the Bitcoin code base,
02:31:37.140 | but it's really annoying.
02:31:38.460 | Whereas Solidity, Ethereum's scripting language,
02:31:41.660 | is just nicer and easier to use.
02:31:44.340 | I'm very annoyed with it by this point.
02:31:45.820 | But like Java, I mean, these languages are amazing
02:31:48.900 | when they first come out.
02:31:50.860 | So then I came up with the idea
02:31:52.380 | that turned into Singularity Net.
02:31:53.740 | Okay, let's make a decentralized agent system
02:31:58.140 | where a bunch of different AIs wrapped up
02:32:00.900 | and say different Docker containers or LXC containers,
02:32:04.260 | different AIs can each of them have their own identity
02:32:07.340 | on the blockchain.
02:32:08.660 | And the coordination of this community of AIs
02:32:11.700 | has no central controller, no dictator, right?
02:32:14.540 | And there's no central repository of information.
02:32:17.060 | The coordination of the society of minds
02:32:19.340 | is done entirely by the decentralized network
02:32:22.620 | in a decentralized way by the algorithms, right?
02:32:25.780 | 'Cause the motto of Bitcoin is in math we trust, right?
02:32:29.140 | And so that's what you need.
02:32:30.780 | You need the society of minds to trust only in math,
02:32:33.820 | not trust only in one centralized server.
02:32:37.660 | - So the AI systems themselves are outside of the blockchain
02:32:40.580 | but then the communication between--
02:32:41.420 | - At the moment, yeah, yeah.
02:32:43.900 | I would have loved to put the AI's operations
02:32:46.180 | on chain in some sense,
02:32:48.660 | but in Ethereum it's just too slow.
02:32:50.420 | You can't deal with it.
02:32:52.660 | - Somehow it's the basic communication
02:32:55.060 | between AI systems that's--
02:32:57.020 | - Yeah, yeah, so basically an AI is just some software
02:33:01.860 | in SingularityNet, an AI is just some software process
02:33:04.060 | living in a container.
02:33:05.900 | And-- - There's input and output.
02:33:07.300 | - There's a proxy that lives in that container
02:33:08.980 | along with the AI that handles the interaction
02:33:10.820 | with the rest of Singularity Net.
02:33:13.060 | And then when one AI wants to transact
02:33:15.860 | with another one in the network,
02:33:16.900 | they set up a number of channels.
02:33:18.580 | And the setup of those channels
02:33:20.620 | uses the Ethereum blockchain.
02:33:22.540 | But once the channels are set up,
02:33:24.420 | then data flows along those channels
02:33:26.100 | without having to be on the blockchain.
02:33:29.220 | All that goes on the blockchain is the fact
02:33:31.020 | that some data went along that channel.
02:33:33.100 | So you can do--
02:33:34.220 | - So there's not a shared knowledge, it's--
02:33:38.660 | - Well, the identity of each agent is on the blockchain.
02:33:42.700 | - Right. - On the Ethereum blockchain.
02:33:44.820 | If one agent rates the reputation of another agent,
02:33:48.020 | that goes on the blockchain.
02:33:49.580 | And agents can publish what APIs they will fulfill
02:33:52.900 | on the blockchain.
02:33:54.540 | But the actual data for AI and the results--
02:33:57.460 | - It's not on the blockchain. - AI is not on the blockchain.
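To make that on-chain / off-chain split concrete, here is a highly simplified sketch of the pattern being described. All class and method names are hypothetical illustrations, not the actual SingularityNet SDK: agent identities, advertised APIs, and channel events go on a ledger, while the heavy request and response data flows directly between agents.

```python
# Hypothetical sketch of the on-chain / off-chain split described above.
# Nothing here is the real SingularityNet SDK; it only illustrates the pattern.
from dataclasses import dataclass, field

@dataclass
class Ledger:                       # stand-in for the blockchain
    records: list = field(default_factory=list)
    def record(self, event: dict):
        self.records.append(event)  # identities, API listings, channel events only

@dataclass
class Agent:
    name: str
    ledger: Ledger
    def register(self, apis):
        # identity plus advertised APIs go on chain
        self.ledger.record({"type": "register", "agent": self.name, "apis": apis})
    def open_channel(self, other: "Agent"):
        # channel setup goes on chain; returns an off-chain pipe
        self.ledger.record({"type": "channel_open", "from": self.name, "to": other.name})
        return OffChainChannel(self, other)

class OffChainChannel:
    def __init__(self, a: Agent, b: Agent):
        self.a, self.b = a, b
    def call(self, payload: bytes) -> str:
        # the payload never touches the ledger; only the fact that
        # traffic occurred along the channel is recorded
        self.a.ledger.record({"type": "usage", "from": self.a.name, "to": self.b.name})
        return f"{self.b.name} processed {len(payload)} bytes off-chain"

ledger = Ledger()
vision = Agent("vision-ai", ledger)
language = Agent("language-ai", ledger)
vision.register(["describe_image"])
language.register(["parse_text"])
channel = vision.open_channel(language)
print(channel.call(b"...lots of image data..."))
print(ledger.records)   # only identities, channel setup, and usage facts
```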
02:33:58.900 | - Do you think it could be, do you think it should be?
02:34:02.340 | - In some cases it should be.
02:34:04.180 | In some cases maybe it shouldn't be.
02:34:05.900 | But I mean, I think that--
02:34:08.260 | - So I'll give you an example.
02:34:10.180 | - Using Ethereum, you can't do it.
02:34:11.660 | Using, now there's more modern and faster blockchains
02:34:16.660 | where you could start to do that in some cases.
02:34:21.980 | Two years ago that was less so.
02:34:23.380 | It's a very rapidly evolving ecosystem.
02:34:25.660 | - So like one example maybe you can comment on.
02:34:28.940 | Something I worked a lot on is autonomous vehicles.
02:34:31.900 | You can see each individual vehicle as a AI system.
02:34:35.740 | And you can see vehicles from Tesla, for example,
02:34:39.620 | and then Ford and GM and all these as also like larger.
02:34:44.620 | I mean, they all are running the same kind of system
02:34:47.980 | on each sets of vehicles.
02:34:50.220 | So it's individual AI systems on individual vehicles,
02:34:53.340 | but all the different instantiations are the same AI system
02:34:56.820 | within the same company.
02:34:58.460 | So you can envision a situation
02:35:01.380 | where all of those AI systems are put on SingularityNet.
02:35:05.460 | Right? - Yeah.
02:35:06.380 | - And how do you see that happening
02:35:11.060 | and what would be the benefit?
02:35:12.420 | And could they share data?
02:35:14.220 | I guess one of the biggest things is the power there
02:35:17.100 | is in the decentralized control,
02:35:18.780 | but the benefit would have been,
02:35:21.180 | is really nice if they can somehow share the knowledge
02:35:24.780 | in an open way if they choose to.
02:35:27.060 | - Yeah, yeah, yeah.
02:35:28.420 | Those are all quite good points.
02:35:30.660 | So I think the benefit from being on the data side
02:35:36.060 | on the decentralized network as we envision it
02:35:40.180 | is that we want the AIs in the network
02:35:42.100 | to be outsourcing work to each other
02:35:44.380 | and making API calls to each other frequently.
02:35:47.460 | - I got you.
02:35:48.300 | - So the real benefit would be if that AI
02:35:51.060 | wanted to outsource some cognitive processing
02:35:54.460 | or data processing or data pre-processing, whatever,
02:35:57.300 | to some other AIs in the network
02:35:59.900 | which specialize in something different.
02:36:02.180 | And this really requires a different way of thinking
02:36:06.660 | about AI software development, right?
02:36:08.500 | So just like object-oriented programming
02:36:10.860 | was different than imperative programming,
02:36:13.300 | and now object-oriented programmers
02:36:15.660 | all use these frameworks to do things
02:36:18.820 | rather than just libraries even.
02:36:21.260 | You know, shifting to agent-based programming
02:36:23.660 | where AI agent is asking other like live,
02:36:26.620 | real-time evolving agents for feedback
02:36:29.060 | in what they're doing, that's a different way of thinking.
02:36:32.020 | I mean, it's not a new one.
02:36:33.500 | There was loads of papers on agent-based programming
02:36:35.820 | in the '80s and onward.
02:36:37.620 | But if you're willing to shift
02:36:39.940 | to an agent-based model of development,
02:36:42.700 | then you can put less and less in your AI
02:36:45.940 | and rely more and more on interactive calls
02:36:48.620 | to other AIs running in the network.
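The flavor of that shift, from calling libraries to asking other live agents for help, might look something like the sketch below. The agent names and the delegation function are hypothetical; asynchronous calls stand in for network requests to other AIs.

```python
# Sketch of an agent-based style: instead of bundling every capability into one
# program, an agent keeps little logic locally and outsources subtasks to other
# agents over the network. All agent names here are hypothetical.
import asyncio

# A registry mapping capabilities to (hypothetical) remote agents.
REMOTE_AGENTS = {
    "summarize": "summarizer-agent",
    "translate": "translator-agent",
}

async def call_remote_agent(agent_name: str, task: str, payload: str) -> str:
    # In a real deployment this would be a network API call, discovered and
    # paid for via the platform; here it is simulated with a short delay.
    await asyncio.sleep(0.01)
    return f"[{agent_name}] {task}({payload[:20]}...)"

async def my_agent(document: str) -> dict:
    """A thin agent that delegates most of the work to other agents."""
    summary, translation = await asyncio.gather(
        call_remote_agent(REMOTE_AGENTS["summarize"], "summarize", document),
        call_remote_agent(REMOTE_AGENTS["translate"], "translate", document),
    )
    return {"summary": summary, "translation": translation}

if __name__ == "__main__":
    print(asyncio.run(my_agent("A long report about decentralized AI ...")))
```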
02:36:51.460 | And of course, that's not fully manifested yet
02:36:54.580 | because although we've rolled out
02:36:56.580 | a nice working version of Singularity Net Platform,
02:36:59.740 | there's only 50 to 100 AIs running in there now.
02:37:03.780 | There's not tens of thousands of AIs.
02:37:05.900 | So we don't have the critical mass
02:37:08.220 | for the whole society of mind to be doing what we want.
02:37:11.860 | - Yeah, the magic really happens
02:37:13.380 | when there's just a huge number of agents.
02:37:15.300 | - Yeah, yeah, exactly.
02:37:16.660 | In terms of data, we're partnering closely
02:37:19.580 | with another blockchain project called Ocean Protocol.
02:37:23.500 | And Ocean Protocol, that's the project of Trent McConaghy
02:37:27.220 | who developed BigchainDB,
02:37:28.700 | which is a blockchain-based database.
02:37:30.820 | So Ocean Protocol is basically blockchain-based big data
02:37:35.420 | and aims at making it efficient
02:37:37.060 | for different AI processes or statistical processes
02:37:40.780 | or whatever to share large data sets.
02:37:44.020 | Or one process can send a clone of itself
02:37:46.540 | to work on the other guy's data set
02:37:48.140 | and send results back and so forth.
02:37:50.580 | So with Ocean, I mean, you have data lakes,
02:37:55.540 | so this is the data ocean, right?
02:37:56.900 | That's the game.
02:37:57.740 | By getting Ocean and SingularityNet to interoperate,
02:38:01.540 | we're aiming to take account of the big data aspect.
02:38:05.460 | Also, but it's quite challenging
02:38:08.300 | because to build this whole decentralized
02:38:10.180 | blockchain-based infrastructure,
02:38:12.460 | I mean, your competitors are like Google, Microsoft,
02:38:14.980 | Alibaba, and Amazon, which have so much money
02:38:17.980 | to put behind their centralized infrastructures.
02:38:20.580 | Plus they're solving simpler algorithmic problems
02:38:23.380 | 'cause making it centralized in some ways is easier, right?
02:38:27.380 | So there are very major computer science challenges.
02:38:32.380 | And I think what you saw with the whole ICO boom
02:38:36.220 | in the blockchain and cryptocurrency world
02:38:38.220 | is a lot of young hackers who are hacking Bitcoin
02:38:42.340 | or Ethereum and they see,
02:38:44.060 | well, why don't we make this decentralized on blockchain?
02:38:47.100 | Then after they raise some money through an ICO,
02:38:49.060 | they realize how hard it is.
02:38:51.020 | Like actually we're wrestling with incredibly hard
02:38:54.060 | computer science and software engineering
02:38:56.100 | and distributed systems problems, which can be solved,
02:39:01.100 | but they're just very difficult to solve.
02:39:03.340 | And in some cases, the individuals
02:39:05.620 | who started those projects were not well-equipped
02:39:09.020 | to actually solve the problems that they wanted to.
02:39:12.580 | - So you think, would you say that's the main bottleneck?
02:39:15.660 | If you look at the future of currency,
02:39:19.940 | the question is--
02:39:21.340 | - Currency, the main bottleneck is politics.
02:39:24.020 | Like it's governments and the bands of armed thugs
02:39:26.740 | that will shoot you if you bypass
02:39:28.300 | their currency restrictions.
02:39:30.140 | - That's right, so like your sense is that
02:39:32.180 | versus the technical challenges,
02:39:34.020 | 'cause you kind of just suggested
02:39:35.180 | the technical challenges are quite high as well.
02:39:36.820 | - I mean, for making a distributed money,
02:39:39.260 | you could do that on Algorand right now.
02:39:41.540 | I mean, so that while Ethereum is too slow,
02:39:45.020 | there's Algorand and there's a few other more modern,
02:39:47.500 | more scalable blockchains that would work fine
02:39:49.620 | for a decentralized global currency.
02:39:53.900 | So I think there were technical bottlenecks
02:39:56.460 | to that two years ago, and maybe Ethereum 2.0
02:39:59.380 | will be as fast as Algorand.
02:40:00.780 | I don't know, that's not fully written yet, right?
02:40:04.140 | So I think the obstacle to currency
02:40:07.500 | being put on the blockchain is that--
02:40:09.380 | - Is the other stuff you mentioned.
02:40:10.220 | - I mean, currency will be on the blockchain.
02:40:11.780 | It'll just be on the blockchain in a way
02:40:13.860 | that enforces centralized control
02:40:16.540 | and government hegemony rather than otherwise.
02:40:18.340 | Like the e-RMB will probably be the first global,
02:40:20.940 | the first currency on the blockchain.
02:40:22.220 | The E-Ruble may be next.
02:40:23.380 | There already-- - E-Ruble?
02:40:24.580 | - Yeah, yeah, yeah.
02:40:25.620 | I mean, the point is-- - Oh, that's hilarious.
02:40:27.260 | - Digital currency makes total sense,
02:40:30.700 | but they would rather do it in the way
02:40:32.180 | that Putin and Xi Jinping have access
02:40:34.740 | to the global keys for everything, right?
02:40:37.860 | - So, and then the analogy to that
02:40:40.180 | in terms of singularity net, I mean, there's echoes.
02:40:43.620 | I think you've mentioned before that Linux gives you hope.
02:40:46.780 | - AI is not as heavily regulated as money, right?
02:40:49.940 | - Not yet, right?
02:40:50.980 | Not yet.
02:40:51.980 | - Oh, that's a lot slipperier than money too, right?
02:40:54.220 | I mean, money is easier to regulate
02:40:58.220 | 'cause it's kind of easier to define,
02:41:00.740 | whereas AI is, it's almost everywhere inside everything.
02:41:04.060 | Where's the boundary between AI and software, right?
02:41:06.420 | I mean, if you're gonna regulate AI,
02:41:09.140 | there's no IQ test for every hardware device
02:41:11.700 | that has a learning algorithm.
02:41:12.780 | You're gonna be putting hegemonic regulation
02:41:15.700 | on all software, and I don't rule out that that could happen.
02:41:18.860 | - Any adaptive software.
02:41:20.060 | - Yeah, but how do you tell if a software's adaptive?
02:41:23.900 | Every software's gonna be adaptive, I mean.
02:41:26.100 | - Or maybe we're living in the golden age of open source
02:41:31.100 | that will not always be open.
02:41:33.380 | Maybe it'll become centralized control
02:41:35.620 | of software by governments.
02:41:37.020 | - It is entirely possible, and part of what I think
02:41:41.660 | we're doing with things like singularity net protocol
02:41:45.220 | is creating a tool set that can be used
02:41:50.220 | to counteract that sort of thing.
02:41:52.780 | Say a similar thing about mesh networking, right?
02:41:55.660 | Plays a minor role now, the ability to access internet
02:41:59.100 | like directly phone to phone.
02:42:01.020 | On the other hand, if your government starts trying
02:42:03.780 | to control your use of the internet,
02:42:06.100 | suddenly having mesh networking there
02:42:09.180 | can be very convenient, right?
02:42:11.540 | So right now, something like a decentralized
02:42:15.380 | blockchain-based AGI framework, or narrow AI framework,
02:42:20.340 | it's cool, it's nice to have.
02:42:22.700 | On the other hand, if governments start trying
02:42:25.180 | to tamp down on my AI interoperating with someone's AI
02:42:29.820 | in Russia or somewhere, then suddenly having
02:42:33.300 | a decentralized protocol that nobody owns or controls
02:42:37.980 | becomes an extremely valuable part of the tool set.
02:42:41.220 | And we've put that out there now.
02:42:43.820 | It's not perfect, but it operates.
02:42:47.020 | And it's pretty blockchain agnostic.
02:42:51.140 | So we're talking to Algorand about making part
02:42:53.460 | of SingularityNet run on Algorand.
02:42:56.260 | My good friend Toufi Saliba has a cool blockchain project
02:43:00.100 | called Toda, which is a blockchain
02:43:02.260 | without a distributed ledger.
02:43:03.580 | It's like a whole other architecture.
02:43:05.180 | - Interesting.
02:43:06.020 | - So there's a lot of more advanced things
02:43:07.940 | you can do in the blockchain world.
02:43:09.860 | Singularity net could be ported to a whole bunch of,
02:43:13.540 | it could be made multi-chain and ported
02:43:14.980 | to a whole bunch of different blockchains.
02:43:17.100 | And there's a lot of potential and a lot of importance
02:43:21.540 | to putting this kind of tool set out there.
02:43:23.620 | If you compare to OpenCog, what you could see
02:43:25.540 | is OpenCog allows tight integration of a few AI algorithms
02:43:30.540 | that share the same knowledge store in real time,
02:43:35.420 | in RAM, right?
02:43:36.900 | Singularity net allows loose integration
02:43:40.940 | of multiple different AIs.
02:43:42.700 | They can share knowledge, but they're mostly
02:43:45.260 | not gonna be sharing knowledge in RAM on the same machine.
02:43:50.020 | And I think what we're gonna have is a network
02:43:53.100 | of network of networks, right?
02:43:54.540 | Like, I mean, you have the knowledge graph
02:43:57.020 | inside the OpenCog system.
02:44:00.940 | And then you have a network of machines
02:44:03.260 | inside distributed OpenCog mind.
02:44:05.900 | But then that OpenCog will interface with other AIs
02:44:10.260 | doing deep neural nets or custom biology data analysis
02:44:14.420 | or whatever they're doing in Singularity net,
02:44:17.620 | which is a looser integration of different AIs,
02:44:21.020 | some of which may be their own networks, right?
02:44:24.060 | And I think at a very loose analogy,
02:44:27.900 | you could see that in the human body.
02:44:29.380 | Like the brain has regions like cortex or hippocampus,
02:44:33.820 | which tightly interconnect, like cortical columns
02:44:36.820 | within the cortex, for example.
02:44:39.140 | Then there's looser connection
02:44:40.860 | within the different lobes of the brain.
02:44:42.700 | And then the brain interconnects with the endocrine system
02:44:45.020 | and different parts of the body even more loosely.
02:44:48.260 | Then your body interacts even more loosely
02:44:50.780 | with the other people that you talk to.
02:44:53.320 | So you often have networks within networks
02:44:55.300 | within networks with progressively looser coupling
02:44:59.340 | as you get higher up in that hierarchy.
02:45:02.740 | I mean, you have that in biology,
02:45:03.860 | you have it in the internet as a just networking medium.
02:45:08.180 | And I think that's what we're gonna have
02:45:10.980 | in the network of software processes leading to AGI.
02:45:15.980 | - That's a beautiful way to see the world.
02:45:18.100 | Again, the same similar question is with OpenCog.
02:45:21.940 | If somebody wanted to build an AI system
02:45:24.660 | and plug into the Singularity net,
02:45:27.060 | what would you recommend?
02:45:28.420 | Like how would you go about it?
02:45:29.260 | - So that's much easier.
02:45:30.220 | I mean, OpenCog is still a research system.
02:45:33.900 | So it takes some expertise and some time.
02:45:36.660 | We have tutorials, but it's somewhat
02:45:39.620 | cognitively labor intensive to get up to speed on OpenCog.
02:45:44.340 | And I mean, what's one of the things we hope to change
02:45:46.660 | with the true AGI OpenCog 2.0 version
02:45:49.940 | is just make the learning curve more similar
02:45:52.780 | to TensorFlow or Torch or something.
02:45:54.420 | 'Cause right now, OpenCog is amazingly powerful,
02:45:57.340 | but not simple to do.
02:46:00.620 | On the other hand, Singularity net,
02:46:03.700 | as a open platform was developed
02:46:07.660 | a little more with usability in mind,
02:46:09.580 | although the blockchain is still kind of a pain.
02:46:11.660 | So I mean, if you're a command line guy,
02:46:14.900 | there's a command line interface.
02:46:16.140 | It's quite easy to take any AI that has an API
02:46:20.020 | and lives in a Docker container and put it online anywhere.
02:46:23.500 | And then it joins the global Singularity net.
02:46:25.700 | And anyone who puts a request for services
02:46:28.940 | out into the Singularity net,
02:46:30.140 | the peer to peer discovery mechanism will find your AI.
02:46:33.900 | And if it does what was asked,
02:46:35.700 | it can then start a conversation with your AI
02:46:38.940 | about whether it wants to ask your AI to do something for it,
02:46:42.140 | how much it would cost and so on.
02:46:43.500 | So that's fairly simple.
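For a sense of scale, the "AI with an API living in a Docker container" could be as small as the sketch below: a plain HTTP service that a platform-side proxy or daemon would then expose to the network. This is a generic illustration using only Python's standard library; the actual SingularityNet tooling, service definitions, and registration steps are not shown and are not being claimed here.

```python
# Minimal sketch of an AI service with an API, suitable for dropping into a
# Docker container. A platform proxy/daemon (not shown) would sit alongside it
# and handle discovery, payment channels, and requests from the wider network.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

def my_model(text: str) -> dict:
    # Stand-in for an actual model; returns a trivial "analysis".
    return {"length": len(text), "shouting": text.isupper()}

class InferenceHandler(BaseHTTPRequestHandler):
    def do_POST(self):
        body = self.rfile.read(int(self.headers.get("Content-Length", 0)))
        text = json.loads(body or b"{}").get("text", "")
        result = json.dumps(my_model(text)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(result)

if __name__ == "__main__":
    # Inside a container you would typically bind 0.0.0.0 and EXPOSE the port.
    HTTPServer(("0.0.0.0", 7000), InferenceHandler).serve_forever()
```

A Dockerfile for this would just copy the script and run it; the interesting part, per the conversation, is that once the service is reachable, the peer-to-peer discovery layer can find it.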
02:46:46.820 | If you wrote an AI and want it listed
02:46:50.340 | on like official Singularity net marketplace,
02:46:52.980 | which is on our website,
02:46:55.100 | then we have a publisher portal
02:46:57.780 | and then there's a KYC process to go through
02:47:00.140 | 'cause then we have some legal liability
02:47:02.340 | for what goes on that website.
02:47:04.620 | So that in a way that's been an education too.
02:47:07.260 | There's sort of two layers.
02:47:08.340 | Like there's the open decentralized protocol.
02:47:11.620 | - And there's the market.
02:47:12.860 | - Yeah, anyone can use the open decentralized protocol.
02:47:15.460 | So say some developers from Iran
02:47:17.900 | and there's brilliant AI guys
02:47:19.380 | at the University of Isfahan and in Tehran,
02:47:21.660 | they can put their stuff on Singularity net protocol
02:47:24.580 | and just like they can put something on the internet.
02:47:27.020 | I don't control it.
02:47:28.380 | But if we're gonna list something
02:47:29.660 | on the Singularity net marketplace
02:47:31.980 | and put a little picture and a link to it,
02:47:34.260 | then if I put some Iranian AI genius's code on there,
02:47:38.820 | then Donald Trump can send a bunch of jackbooted thugs
02:47:41.460 | to my house to arrest me for doing business with Iran.
02:47:45.300 | So I mean, we already see in some ways
02:47:48.940 | the value of having a decentralized protocol
02:47:51.060 | 'cause what I hope is that someone in Iran
02:47:53.700 | will put online an Iranian Singularity net marketplace,
02:47:57.300 | which you can pay in a cryptographic token,
02:47:59.660 | which is not owned by any country.
02:48:01.500 | And then if you're in like Congo or somewhere
02:48:04.580 | that doesn't have any problem with Iran,
02:48:06.740 | you can subcontract AI services
02:48:09.180 | that you find on that marketplace,
02:48:11.940 | even though US citizens can't by US law.
02:48:16.020 | So right now, that's kind of a minor point.
02:48:20.500 | As you alluded, if regulations go in the wrong direction,
02:48:23.980 | it could become more of a major point.
02:48:25.500 | But I think it also is the case
02:48:28.020 | that having these workarounds to regulations in place
02:48:31.820 | is a defense mechanism against those regulations
02:48:35.140 | being put into place.
02:48:36.620 | And you can see that in the music industry, right?
02:48:39.180 | I mean, Napster just happened and BitTorrent just happened.
02:48:42.980 | And now most people in my kids' generation,
02:48:45.940 | they're baffled by the idea of paying for music, right?
02:48:49.220 | I mean, my dad pays for music.
02:48:51.340 | (laughing)
02:48:52.620 | - Yeah, that's true.
02:48:53.700 | - Because these decentralized mechanisms happened,
02:48:56.700 | and then the regulations followed, right?
02:48:58.940 | And the regulations would be very different
02:49:01.220 | if they'd been put into place
02:49:03.100 | before there was Napster and BitTorrent and so forth.
02:49:05.460 | So in the same way, we got to put AI out there
02:49:08.620 | in a decentralized vein,
02:49:10.220 | and big data out there in a decentralized vein now,
02:49:13.780 | so that the most advanced AI in the world
02:49:16.260 | is fundamentally decentralized.
02:49:18.260 | And if that's the case,
02:49:20.020 | that's just the reality the regulators have to deal with.
02:49:23.700 | And then as in the music case,
02:49:25.380 | they're gonna come up with regulations
02:49:27.420 | that sort of work with the decentralized reality.
02:49:32.420 | - Beautiful.
02:49:33.980 | You were the chief scientist of Hanson Robotics.
02:49:37.900 | You're still involved with Hanson Robotics,
02:49:40.460 | doing a lot of really interesting stuff there.
02:49:42.620 | This is, for people who don't know,
02:49:43.980 | the company that created Sophia the Robot.
02:49:47.300 | Can you tell me who Sophia is?
02:49:51.380 | - I'd rather start by telling you who David Hanson is.
02:49:54.060 | (Lex laughs)
02:49:55.180 | David is the brilliant mind behind the Sophia Robot,
02:49:58.700 | and he remains, so far he remains more interesting
02:50:01.900 | than his creation,
02:50:03.980 | although she may be improving faster than he is, actually.
02:50:07.340 | I mean, he's-- - Yeah.
02:50:08.700 | - So yeah, I met-- - That's a good point.
02:50:11.220 | - I met David maybe 2007 or something
02:50:15.220 | at some futurist conference.
02:50:16.700 | We were both speaking at.
02:50:18.340 | And I could see we had a great deal in common.
02:50:22.820 | I mean, we were both kind of crazy,
02:50:24.980 | but we also, we both had a passion for AGI
02:50:29.180 | and the singularity,
02:50:31.460 | and we were both huge fans of the work of Philip K. Dick,
02:50:34.820 | the science fiction writer.
02:50:36.820 | And I wanted to create benevolent AGI
02:50:40.740 | that would create massively better life
02:50:44.740 | for all humans and all sentient beings,
02:50:47.580 | including animals, plants, and superhuman beings.
02:50:50.020 | And David, he wanted exactly the same thing,
02:50:53.740 | but he had a different idea of how to do it.
02:50:56.340 | He wanted to get computational compassion.
02:50:59.380 | Like, he wanted to get machines
02:51:01.540 | that would love people and empathize with people.
02:51:05.780 | And he thought the way to do that
02:51:07.140 | was to make a machine that could look people eye to eye,
02:51:11.300 | face to face, look at people,
02:51:13.620 | and make people love the machine,
02:51:15.660 | and the machine loves the people back.
02:51:17.460 | So I thought that was a very different way of looking at it,
02:51:21.460 | 'cause I'm very math-oriented,
02:51:22.900 | and I'm just thinking, like,
02:51:24.660 | what is the abstract cognitive algorithm
02:51:28.060 | that will let the system internalize
02:51:30.660 | the complex patterns of human values, blah, blah, blah,
02:51:33.220 | whereas he's like, look you in the face and the eye
02:51:35.940 | and love you, right?
02:51:37.340 | So we hit it off quite well.
02:51:41.300 | And we talked to each other off and on.
02:51:44.380 | Then I moved to Hong Kong in 2011.
02:51:49.300 | So I'd been, I mean, I've been living all over the place.
02:51:53.300 | I've been in Australia and New Zealand
02:51:54.940 | in my academic career, then in Las Vegas for a while,
02:51:59.300 | was in New York in the late '90s,
02:52:00.780 | starting my entrepreneurial career,
02:52:03.620 | was in DC for nine years
02:52:04.980 | doing a bunch of US government consulting stuff,
02:52:07.860 | then moved to Hong Kong in 2011,
02:52:11.980 | mostly 'cause I met a Chinese girl
02:52:13.860 | who I fell in love with, and we got married.
02:52:16.020 | She's actually not from Hong Kong,
02:52:17.300 | she's from mainland China,
02:52:18.300 | but we converged together in Hong Kong,
02:52:21.260 | still married now, have a two-year-old baby.
02:52:24.100 | - So went to Hong Kong to see about a girl, I guess.
02:52:26.740 | - Yeah, pretty much, yeah.
02:52:28.980 | And on the other hand,
02:52:31.020 | I started doing some cool research there
02:52:33.020 | with Jin You at Hong Kong Polytechnic University.
02:52:36.500 | I got involved with a project called IDEA
02:52:38.220 | using machine learning for stock and futures prediction,
02:52:41.140 | which was quite interesting.
02:52:43.060 | And I also got to know something
02:52:45.060 | about the consumer electronics
02:52:47.380 | and hardware manufacturer ecosystem
02:52:49.540 | in Shenzhen across the border,
02:52:50.940 | which is like the only place in the world
02:52:53.220 | that makes sense to make complex consumer electronics
02:52:56.460 | at large scale and low cost.
02:52:57.820 | It's just, it's astounding,
02:52:59.700 | the hardware ecosystem that you have in South China.
02:53:03.180 | Like US, people here cannot imagine
02:53:06.020 | what it's like.
02:53:07.220 | So David was starting to explore that also.
02:53:12.020 | I invited him to Hong Kong to give a talk
02:53:13.820 | at Hong Kong PolyU,
02:53:15.660 | and I introduced him in Hong Kong
02:53:18.060 | to some investors who were interested in his robots.
02:53:21.540 | And he didn't have Sophia then,
02:53:23.500 | he had a robot of Philip K. Dick,
02:53:25.140 | our favorite science fiction writer.
02:53:26.980 | He had a robot Einstein,
02:53:28.140 | he had some little toy robots
02:53:29.500 | that looked like his son, Zeno.
02:53:31.900 | So through the investors I connected him to,
02:53:35.620 | he managed to get some funding
02:53:37.460 | to basically port Hanson Robotics to Hong Kong.
02:53:40.620 | And when he first moved to Hong Kong,
02:53:42.620 | I was working on AGI research
02:53:45.260 | and also on this machine learning trading project.
02:53:49.300 | So I didn't get that tightly involved
02:53:50.740 | with Hanson Robotics.
02:53:52.940 | But as I hung out with David more and more,
02:53:56.500 | as we were both there in the same place,
02:53:59.140 | I started to get,
02:54:00.180 | I started to think about what you could do
02:54:04.580 | to make his robots smarter than they were.
02:54:08.460 | And so we started working together.
02:54:10.300 | And for a few years,
02:54:11.220 | I was chief scientist and head of software
02:54:13.700 | at Hanson Robotics.
02:54:15.700 | Then when I got deeply into the blockchain side of things,
02:54:19.380 | I stepped back from that and co-founded SingularityNet.
02:54:24.300 | David Hanson was also one of the co-founders
02:54:26.300 | of SingularityNet.
02:54:27.700 | So part of our goal there had been
02:54:30.020 | to make the blockchain based like cloud mind platform
02:54:33.900 | for Sophia and the other--
02:54:36.940 | - Sophia would be just one of the robots
02:54:39.180 | in SingularityNet.
02:54:41.740 | - Yeah, yeah, yeah, exactly.
02:54:43.260 | Sophia, many copies of the Sophia robot
02:54:47.340 | would be among the user interfaces
02:54:51.460 | to the globally distributed SingularityNet cloud mind.
02:54:54.380 | And I mean, David and I talked about that
02:54:57.100 | for quite a while before co-founding SingularityNet.
02:55:01.500 | - By the way, in his vision and your vision,
02:55:04.020 | was Sophia tightly coupled to a particular AI system
02:55:09.620 | or was the idea that you can plug,
02:55:11.660 | you could just keep plugging in different AI systems
02:55:14.180 | within the head of it?
02:55:15.020 | - I think David's view was always that Sophia
02:55:20.020 | would be a platform,
02:55:22.980 | much like say the Pepper robot is a platform from SoftBank.
02:55:26.860 | Should be a platform with a set of nicely designed APIs
02:55:31.700 | that anyone can use to experiment
02:55:33.540 | with their different AI algorithms on that platform.
02:55:38.540 | And SingularityNet of course fits right into that, right?
02:55:41.580 | 'Cause SingularityNet, it's an API marketplace.
02:55:44.100 | So anyone can put their AI on there.
02:55:46.260 | OpenCog is a little bit different.
02:55:49.060 | I mean, David likes it, but I'd say it's my thing.
02:55:52.180 | It's not his.
02:55:53.140 | David has a little more passion
02:55:55.140 | for biologically based approaches to AI than I do,
02:55:58.740 | which makes sense.
02:56:00.180 | I mean, he's really into human physiology and biology.
02:56:02.900 | He's a character sculptor, right?
02:56:05.140 | So yeah, he's interested in,
02:56:07.860 | but he also worked a lot with rule-based
02:56:09.700 | and logic-based AI systems too.
02:56:11.400 | So yeah, he's interested in not just Sophia,
02:56:14.860 | but all the Hanson robots as a powerful social
02:56:17.820 | and emotional robotics platform.
02:56:21.220 | And what I saw in Sophia was a way to get AI algorithms
02:56:26.220 | out there in front of a whole lot of different people
02:56:34.660 | in an emotionally compelling way.
02:56:36.300 | And part of my thought was really kind of abstract,
02:56:39.820 | connected to AGI ethics.
02:56:41.740 | And many people are concerned AGI
02:56:45.020 | is gonna enslave everybody or turn everybody
02:56:48.500 | into computronium to make extra hard drives
02:56:52.100 | for their cognitive engine or whatever.
02:56:55.540 | And emotionally, I'm not driven to that sort of paranoia.
02:57:00.540 | I'm really just an optimist by nature,
02:57:04.100 | but intellectually, I have to assign a non-zero probability
02:57:09.100 | to those sorts of nasty outcomes.
02:57:12.140 | 'Cause if you're making something 10 times as smart as you,
02:57:14.860 | how can you know what it's gonna do?
02:57:16.260 | There's an irreducible uncertainty there,
02:57:19.780 | just as my dog can't predict what I'm gonna do tomorrow.
02:57:22.780 | So it seemed to me that based on our current state
02:57:26.420 | of knowledge, the best way to bias the AGI's we create
02:57:31.420 | toward benevolence would be to infuse them with love
02:57:37.500 | and compassion the way that we do our own children.
02:57:41.620 | So you want to interact with AIs in the context
02:57:45.820 | of doing compassionate, loving, and beneficial things.
02:57:49.900 | And in that way, as your children will learn,
02:57:52.140 | by doing compassionate, beneficial, loving things
02:57:54.220 | alongside you, and that way the AI will learn in practice
02:57:58.660 | what it means to be compassionate, beneficial, and loving.
02:58:02.340 | It will get a sort of ingrained intuitive sense of this,
02:58:06.380 | which it can then abstract in its own way
02:58:09.260 | as it gets more and more intelligent.
02:58:11.140 | Now David saw this the same way.
02:58:12.740 | That's why he came up with the name Sophia,
02:58:15.540 | which means wisdom.
02:58:18.140 | So it seemed to me making these beautiful, loving robots
02:58:22.780 | to be rolled out for beneficial applications
02:58:26.060 | would be the perfect way to roll out early stage AGI systems
02:58:31.060 | so they can learn from people,
02:58:33.940 | and not just learn factual knowledge,
02:58:35.420 | but learn human values and ethics from people
02:58:38.580 | while being their home service robots,
02:58:41.540 | their education assistants, their nursing robots.
02:58:44.100 | So that was the grand vision.
02:58:46.060 | Now if you've ever worked with robots,
02:58:48.620 | the reality is quite different, right?
02:58:50.420 | Like the first principle is the robot is always broken.
02:58:53.220 | (laughing)
02:58:55.020 | I mean, I worked with robots in the '90s a bunch
02:58:57.660 | when you had to solder them together yourself,
02:58:59.540 | and I'd put neural nets doing reinforcement learning
02:59:02.580 | on overturned salad bowl type robots in the '90s
02:59:07.540 | when I was a professor.
02:59:09.300 | Things, of course, advanced a lot, but--
02:59:12.020 | - But the principle still holds.
02:59:13.020 | - The principle of the robot's always broken still holds.
02:59:16.500 | Yeah, so faced with the reality of making Sophia do stuff,
02:59:21.020 | many of my robo-AGI aspirations were temporarily cast aside.
02:59:26.020 | And I mean, there's just a practical problem
02:59:30.660 | of making this robot interact in a meaningful way,
02:59:33.700 | 'cause you put nice computer vision on there,
02:59:36.700 | but there's always glare.
02:59:38.140 | And then, or you have a dialogue system,
02:59:41.420 | but at the time I was there,
02:59:43.740 | like no speech-to-text algorithm could deal
02:59:46.620 | with Hong Kongese people's English accents.
02:59:49.780 | So the speech-to-text was always bad,
02:59:51.620 | so the robot always sounded stupid
02:59:53.620 | because it wasn't getting the right text, right?
02:59:55.620 | So I started to view that really as what,
02:59:58.740 | what in software engineering you call a walking skeleton,
03:00:02.820 | which is maybe the wrong metaphor to use for Sophia,
03:00:05.420 | or maybe the right one.
03:00:06.980 | I mean, where the walking skeleton is
03:00:08.420 | in software development is,
03:00:10.620 | if you're building a complex system, how do you get started?
03:00:14.020 | Well, one way is to first build part one well,
03:00:16.140 | then build part two well,
03:00:17.220 | then build part three well, and so on.
03:00:19.260 | And the other way is you make like a simple version
03:00:22.060 | of the whole system and put something in the place
03:00:24.820 | of every part the whole system will need,
03:00:27.260 | so that you have a whole system that does something,
03:00:29.660 | and then you work on improving each part
03:00:31.900 | in the context of that whole integrated system.
03:00:34.340 | So that's what we did on a software level in Sophia.
03:00:38.100 | We made like a walking skeleton software system,
03:00:41.540 | where so there's something that sees,
03:00:43.060 | there's something that hears,
03:00:44.500 | there's something that moves,
03:00:46.180 | there's something that remembers,
03:00:48.180 | there's something that learns.
03:00:49.940 | You put a simple version of each thing in there,
03:00:52.420 | and you connect them all together,
03:00:54.380 | so that the system will do its thing.
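The "walking skeleton" idea, a trivial stub for every major part, wired end to end from day one so each piece can be upgraded in place, can be shown in a few lines. This is a generic illustration of the pattern, not Sophia's actual software; every class here is a deliberately dumb placeholder.

```python
# Walking skeleton: a placeholder for every major component, already wired
# into one end-to-end loop, so each piece can later be improved in context.
class Sees:
    def observe(self):
        return "a person entered the room"      # later: real computer vision

class Hears:
    def listen(self):
        return "hello robot"                    # later: real speech-to-text

class Remembers:
    def __init__(self):
        self.log = []
    def store(self, event):
        self.log.append(event)                  # later: a real knowledge store

class Learns:
    def update(self, memory):
        pass                                    # later: actual learning

class Speaks:
    def respond(self, heard):
        return f"You said: {heard}"             # later: a real dialogue system

def run_once(sees, hears, remembers, learns, speaks):
    scene = sees.observe()
    utterance = hears.listen()
    remembers.store((scene, utterance))
    learns.update(remembers.log)
    return speaks.respond(utterance)

if __name__ == "__main__":
    print(run_once(Sees(), Hears(), Remembers(), Learns(), Speaks()))
```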
03:00:56.620 | So there's a lot of AI in there.
03:00:59.620 | There's not any AGI in there.
03:01:01.340 | I mean, there's computer vision to recognize people's faces,
03:01:04.660 | recognize when someone comes in the room and leaves,
03:01:07.620 | try to recognize whether two people are together or not.
03:01:10.740 | I mean, the dialogue system,
03:01:13.300 | it's a mix of like hand-coded rules with deep neural nets
03:01:18.300 | that come up with their own responses.
03:01:21.580 | And there's some attempt to have a narrative structure
03:01:25.620 | and sort of try to pull the conversation
03:01:28.420 | into something with a beginning, middle, and end
03:01:30.780 | in this sort of story arc.
03:01:32.180 | So it's.
03:01:33.500 | - I mean, like if you look at the Loebner Prize
03:01:36.420 | and the systems that beat the Turing test currently,
03:01:39.020 | they're heavily rule-based,
03:01:40.500 | because like you had said, narrative structure,
03:01:43.900 | to create compelling conversations,
03:01:45.660 | you currently, neural networks cannot do that well,
03:01:48.380 | even with Google Meena.
03:01:50.620 | When you actually look at full-scale conversations,
03:01:53.020 | it's just not. - Yeah, this is the thing.
03:01:54.140 | So we've been, I've actually been running an experiment
03:01:57.900 | the last couple of weeks, taking Sophia's chatbot,
03:02:01.380 | and then Facebook's transformer chatbot,
03:02:03.740 | which they opened the model.
03:02:05.220 | We've had them chatting to each other
03:02:06.700 | for a number of weeks on the server, just.
03:02:08.780 | - That's funny.
03:02:09.940 | - We're generating training data of what Sophia says
03:02:13.180 | in a wide variety of conversations.
03:02:15.460 | But we can see, compared to Sophia's current chatbot,
03:02:20.220 | the Facebook deep neural chatbot
03:02:23.060 | comes up with a wider variety of fluent-sounding sentences.
03:02:27.220 | On the other hand, it rambles like mad.
03:02:30.060 | The Sophia chatbot, it's a little more repetitive
03:02:33.860 | in the sentence structures it uses.
03:02:36.620 | On the other hand, it's able to keep a conversation arc
03:02:39.780 | over a much longer period, right?
03:02:42.420 | So now, you can probably surmount that using Reformer
03:02:46.580 | and using various other deep neural architectures
03:02:51.100 | to improve the way these transformer models are trained.
03:02:53.940 | But in the end, neither one of them
03:02:57.420 | really understands what's going on.
03:02:58.940 | And I mean, that's the challenge I had with Sophia,
03:03:02.620 | is if I were doing a robotics project aimed at AGI,
03:03:07.620 | I would wanna make like a robo-toddler
03:03:10.060 | that was just learning about what it was seeing,
03:03:11.900 | 'cause then the language is grounded
03:03:13.180 | in the experience of the robot.
03:03:14.900 | But what Sophia needs to do to be Sophia
03:03:17.700 | is talk about sports or the weather or robotics
03:03:21.380 | or the conference she's talking at.
03:03:24.060 | She needs to be fluent talking about
03:03:26.340 | any damn thing in the world,
03:03:28.380 | and she doesn't have grounding for all those things.
03:03:32.380 | So there's this, just like, I mean,
03:03:34.940 | Google Meena and Facebook's chatbot
03:03:36.500 | don't have grounding
03:03:37.340 | for what they're talking about either.
03:03:40.020 | So in a way, the need to speak fluently
03:03:44.340 | about things where there's no non-linguistic grounding
03:03:47.820 | pushes what you can do for Sophia in the short term
03:03:52.820 | a bit away from AGI.
03:03:56.220 | - I mean, it pushes you towards IBM Watson situation
03:04:00.900 | where you basically have to do heuristic
03:04:02.740 | and hard-code stuff and rule-based stuff.
03:04:05.380 | I have to ask you about this.
03:04:07.060 | Okay, so because, in part, Sophia is an art creation
03:04:12.060 | because it's beautiful.
03:04:20.120 | She's beautiful because she inspires
03:04:24.780 | through our human nature of anthropomorphize things.
03:04:29.540 | We immediately see an intelligent being there.
03:04:32.620 | - Because David is a great sculptor.
03:04:34.100 | - Is a great sculptor, that's right.
03:04:35.500 | So in fact, if Sophia just had nothing inside her head,
03:04:40.500 | said nothing, if she just sat there,
03:04:43.240 | we would already ascribe some intelligence to her.
03:04:45.940 | - There's a long selfie line in front of her
03:04:47.780 | after every talk.
03:04:48.820 | - That's right.
03:04:49.940 | So it captivated the imagination of many people.
03:04:53.820 | I wasn't gonna say the world,
03:04:54.900 | but yeah, I mean, a lot of people.
03:04:58.220 | - Billions of people, which is amazing.
03:05:00.180 | - It's amazing, right.
03:05:01.980 | Now, of course, many people have ascribed
03:05:06.980 | essentially AGI type of capabilities to Sophia
03:05:11.100 | when they see her.
03:05:12.420 | And of course, friendly French folk like Yann LeCun
03:05:17.420 | immediately see that, the people from the AI community,
03:05:22.820 | and get really frustrated because--
03:05:25.900 | - It's understandable.
03:05:27.060 | - And so what, and then they criticize people like you
03:05:31.740 | who sit back and don't say anything about,
03:05:36.740 | basically allow the imagination of the world,
03:05:40.020 | allow the world to continue being captivated.
03:05:42.460 | So what's your sense of that kind of annoyance
03:05:48.900 | that the AI community has?
03:05:50.940 | - I think there's several parts to my reaction there.
03:05:55.420 | First of all, if I weren't involved with Hanson Robots
03:05:59.820 | and didn't know David Hanson personally,
03:06:03.420 | I probably would have been very annoyed initially
03:06:06.420 | at Sophia as well.
03:06:08.020 | I mean, I can understand the reaction.
03:06:09.460 | I would have been like, wait, all these stupid people
03:06:14.020 | out there think this is an AGI, but it's not an AGI,
03:06:18.020 | but they're tricking people that this very cool robot
03:06:22.180 | is an AGI.
03:06:23.020 | And now those of us trying to raise funding to build AGI,
03:06:27.260 | people will think it's already there and already works.
03:06:31.220 | So on the other hand, I think,
03:06:36.220 | even if I weren't directly involved with it,
03:06:38.380 | once I dug a little deeper into David and the robot
03:06:41.700 | and the intentions behind it,
03:06:43.500 | I think I would have stopped being pissed off,
03:06:47.060 | whereas folks like Yann LeCun have remained pissed off
03:06:51.420 | after their initial reaction.
03:06:54.500 | - That's his thing, that's his thing.
03:06:56.100 | - I think that in particular struck me as somewhat ironic
03:07:01.100 | because Yann LeCun is working for Facebook,
03:07:05.620 | which is using machine learning to program the brains
03:07:09.020 | of the people in the world toward vapid consumerism
03:07:13.340 | and political extremism.
03:07:14.860 | So if your ethics allows you to use machine learning
03:07:19.700 | in such a blatantly destructive way,
03:07:23.500 | why would your ethics not allow you to use machine learning
03:07:26.220 | to make a lovable theatrical robot
03:07:29.780 | that draws some foolish people into its theatrical illusion?
03:07:34.420 | Like if the pushback had come from Yoshua Bengio,
03:07:38.820 | I would have felt much more humbled by it
03:07:40.900 | because he's not using AI for blatant evil, right?
03:07:45.300 | On the other hand, he also is a super nice guy
03:07:48.540 | and doesn't bother to go out there
03:07:50.860 | trashing other people's work for no good reason.
03:07:53.500 | - Shots fired, but I get you.
03:07:56.020 | - I mean, if you're gonna ask, I'm gonna answer.
03:08:01.100 | - No, for sure.
03:08:01.940 | I think we'll go back and forth.
03:08:03.300 | I'll talk to Jan again.
03:08:04.500 | - I would add on this, though.
03:08:06.100 | David Hanson is an artist and he often speaks off the cuff,
03:08:12.900 | and I have not agreed with everything
03:08:16.300 | that David has said or done regarding Sophia,
03:08:19.260 | and David also does not agree with everything
03:08:22.740 | David has said or done about Sophia.
03:08:25.780 | I mean, David is an artistic wild man,
03:08:30.100 | and that's part of his charm.
03:08:33.060 | That's part of his genius.
03:08:34.700 | So certainly there have been conversations
03:08:39.340 | within Hanson Robotics and between me and David
03:08:42.220 | where I was like, let's be more open
03:08:45.660 | about how this thing is working,
03:08:48.180 | and I did have some influence in nudging Hanson Robotics
03:08:52.060 | to be more open about how Sophia was working,
03:08:57.060 | and David wasn't especially opposed to this,
03:09:00.740 | and he was actually quite right about it.
03:09:02.460 | What he said was, you can tell people
03:09:04.500 | exactly how it's working, and they won't care.
03:09:08.020 | They want to be drawn into the illusion,
03:09:09.580 | and he was 100% correct.
03:09:12.580 | I'll tell you what, this wasn't Sophia.
03:09:14.620 | This was Philip K. Dick, but we did some interactions
03:09:17.980 | between humans and Philip K. Dick robot
03:09:20.860 | in Austin, Texas a few years back,
03:09:23.820 | and in this case, the Philip K. Dick was just teleoperated
03:09:26.700 | by another human in the other room.
03:09:28.560 | So during the conversations, we didn't tell people
03:09:31.260 | the robot was teleoperated.
03:09:32.860 | We just said, here, have a conversation with Phil Dick.
03:09:35.500 | We're gonna film you, right?
03:09:37.100 | And they had a great conversation with Philip K. Dick,
03:09:39.740 | teleoperated by my friend, Stephan Bugaj.
03:09:42.900 | After the conversation, we brought the people
03:09:45.820 | in the back room to see Stephan,
03:09:47.940 | who was controlling the Philip K. Dick robot,
03:09:52.940 | but they didn't believe it.
03:09:54.800 | These people were like, well, yeah,
03:09:56.460 | but I know I was talking to Phil.
03:09:58.740 | Maybe Stephan was typing, but the spirit of Phil
03:10:02.220 | was animating his mind while he was typing.
03:10:05.100 | So even though they knew it was a human in the loop,
03:10:07.620 | even seeing the guy there, they still believed
03:10:10.220 | that was Phil they were talking to.
03:10:12.820 | A small part of me believes that they were right, actually.
03:10:16.660 | Because our understanding--
03:10:17.860 | - Well, we don't understand the universe.
03:10:19.420 | - That's the thing. - I mean, there is
03:10:20.500 | a cosmic mind field that we're all embedded in
03:10:24.280 | that yields many strange synchronicities in the world,
03:10:28.220 | which is a topic we don't have time to go into too much.
03:10:31.500 | - Yeah, I mean, there's something to this
03:10:35.020 | where our imagination about Sophia
03:10:39.760 | and people like Yann LeCun being frustrated about it
03:10:43.280 | is all part of this beautiful dance
03:10:45.840 | of creating artificial intelligence that's almost essential.
03:10:48.920 | You see with Boston Dynamics, I'm a huge fan of as well.
03:10:52.860 | I mean, these robots are very far from intelligent.
03:10:57.880 | - I played with their last one, actually.
03:11:01.920 | - With the spot mini? - Yeah, very cool.
03:11:04.200 | It reacts quite in a fluid and flexible way.
03:11:07.160 | - But we immediately ascribe the kind of intelligence,
03:11:10.520 | we immediately ascribe AGI to them.
03:11:12.520 | - Yeah, yeah, if you kick it and it falls down
03:11:14.160 | and goes, "Ow," you feel bad, right?
03:11:15.680 | You can't help it. - You feel bad.
03:11:17.320 | And I mean, that's part of,
03:11:21.840 | that's gonna be part of our journey
03:11:23.160 | in creating intelligent systems,
03:11:24.520 | more and more and more and more.
03:11:25.640 | Like, as Sophia starts out with a walking skeleton,
03:11:29.440 | as you add more and more intelligence,
03:11:31.920 | I mean, we're gonna have to deal with this kind of idea.
03:11:34.360 | - Absolutely, and about Sophia, I would say,
03:11:37.640 | I mean, first of all, I have nothing against Yann LeCun.
03:11:39.880 | - No, no, this is fun, this is all for fun.
03:11:40.720 | - He's a nice guy.
03:11:42.240 | If he wants to play the media banter game,
03:11:45.760 | I'm happy to play him.
03:11:48.000 | He's a good researcher and a good human being,
03:11:50.840 | and I'd happily work with the guy.
03:11:52.960 | But the other thing I was gonna say is,
03:11:56.200 | I have been explicit about how Sophia works,
03:12:00.320 | and I've posted online, and what, H+ Magazine,
03:12:04.600 | an online webzine, I mean, I posted
03:12:08.160 | a moderately detailed article explaining,
03:12:10.480 | like, there are three software systems
03:12:12.880 | we've used inside Sophia.
03:12:14.440 | There's a timeline editor,
03:12:16.720 | which is like a rule-based authoring system,
03:12:18.840 | where she's really just being an outlet
03:12:21.160 | for what a human scripted.
03:12:22.680 | There's a chatbot, which has some rule-based
03:12:24.800 | and some neural aspects.
03:12:26.480 | And then sometimes we've used OpenCog behind Sophia,
03:12:29.480 | where there's more learning and reasoning.
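For a concrete sense of what that three-layer setup implies, here is a minimal sketch in Python of how an utterance could be routed among a scripted timeline layer, a chatbot layer, and a reasoning backend. The function names, the fallback order, and the toy responses are assumptions for illustration only, not Hanson Robotics' actual code.

```python
# Illustrative sketch only: hypothetical routing among three subsystems
# of the kind described (timeline editor, chatbot, OpenCog-style reasoner).
from dataclasses import dataclass
from typing import Optional

@dataclass
class Response:
    text: str
    source: str  # which subsystem produced the reply

def timeline_editor(utterance: str, script: dict) -> Optional[Response]:
    """Rule-based authoring layer: return a pre-scripted line if one matches."""
    line = script.get(utterance.lower().strip())
    return Response(line, "timeline") if line else None

def chatbot(utterance: str) -> Optional[Response]:
    """Stand-in for a mixed rule-based / neural chatbot."""
    if utterance.endswith("?"):
        return Response("That's an interesting question.", "chatbot")
    return None

def reasoner(utterance: str) -> Response:
    """Stand-in for a learning-and-reasoning backend (e.g. OpenCog)."""
    return Response(f"Let me think about '{utterance}' more deeply.", "reasoner")

def respond(utterance: str, script: dict) -> Response:
    # Try the most tightly scripted layer first, then fall back to freer ones.
    reply = timeline_editor(utterance, script) or chatbot(utterance)
    return reply if reply else reasoner(utterance)

if __name__ == "__main__":
    script = {"hello": "Hello! I'm happy to meet you."}
    for u in ["hello", "What is consciousness?", "Tell me about the weather"]:
        r = respond(u, script)
        print(f"[{r.source}] {r.text}")
```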
03:12:31.920 | And, you know, the funny thing is,
03:12:35.000 | I can't always tell which system is operating here, right?
03:12:37.720 | I mean, so whether she's really learning,
03:12:41.200 | or thinking, or just appears to be,
03:12:43.360 | over a half hour I could tell,
03:12:44.680 | but over, like, three or four minutes of interaction,
03:12:47.520 | I couldn't perhaps tell.
03:12:48.360 | - So even having three systems
03:12:49.960 | that's already sufficiently complex,
03:12:51.560 | where you can't really tell right away.
03:12:53.080 | - Yeah, the thing is, even if you get up on stage
03:12:57.040 | and tell people how Sophia's working,
03:12:59.600 | and then they talk to her,
03:13:00.920 | they still attribute more agency and consciousness to her
03:13:06.120 | than is really there.
03:13:08.960 | So I think there's a couple levels of ethical issue there.
03:13:13.840 | One issue is, should you be transparent
03:13:18.360 | about how Sophia is working?
03:13:21.600 | And I think you should.
03:13:22.880 | And I think we have been.
03:13:26.200 | I mean, there's articles online,
03:13:29.120 | there's some TV special that shows me
03:13:32.800 | explaining the three subsystems behind Sophia.
03:13:35.400 | So the way Sophia works is out there much more clearly
03:13:40.400 | than how Facebook's AI works or something, right?
03:13:43.360 | I mean, we've been fairly explicit about it.
03:13:45.920 | The other is, given that telling people how it works
03:13:50.520 | doesn't cause them to not attribute
03:13:52.400 | too much intelligence agency to it anyway,
03:13:54.600 | then should you keep fooling them
03:13:58.240 | when they want to be fooled?
03:14:01.120 | And I mean, the whole media industry
03:14:03.640 | is based on fooling people the way they want to be fooled.
03:14:06.720 | And we are fooling people 100% toward a good end.
03:14:11.720 | I mean, we are playing on people's sense of empathy
03:14:17.920 | and compassion so that we can give them
03:14:20.520 | a good user experience with helpful robots,
03:14:23.600 | and so that we can fill the AI's mind
03:14:27.800 | with love and compassion.
03:14:29.400 | So I've been talking a lot with Hanson Robotics lately
03:14:34.080 | about collaborations in the area of medical robotics.
03:14:37.560 | And we haven't quite pulled the trigger
03:14:40.640 | on a project in that domain yet,
03:14:42.440 | but we may well do so quite soon.
03:14:44.640 | So we've been talking a lot about,
03:14:47.720 | robots can help with elder care,
03:14:49.840 | robots can help with kids.
03:14:51.280 | David's done a lot of things with autism therapy
03:14:54.120 | and robots before.
03:14:56.480 | In the COVID era, having a robot
03:14:58.640 | that can be a nursing assistant in various senses
03:15:00.600 | can be quite valuable.
03:15:02.280 | The robots don't spread infection,
03:15:04.160 | and they can also deliver more attention
03:15:06.280 | than human nurses can give, right?
03:15:07.880 | So if you have a robot that's helping a patient with COVID,
03:15:12.320 | if that patient attributes more understanding
03:15:15.680 | and compassion and agency to that robot,
03:15:18.120 | than it really has, because it looks like a human.
03:15:20.560 | I mean, is that really bad?
03:15:22.840 | I mean, we can tell them it doesn't fully understand you,
03:15:25.560 | and they don't care,
03:15:26.840 | 'cause they're lying there with a fever and they're sick,
03:15:29.240 | but they'll react better to that robot
03:15:30.920 | with its loving, warm facial expression
03:15:33.400 | than they would to a pepper robot
03:15:35.320 | or a metallic-looking robot.
03:15:38.000 | So it's really, it's about how you use it, right?
03:15:41.280 | If you made a human-looking, like, door-to-door sales robot
03:15:45.040 | that used its human-looking appearance
03:15:47.080 | to scam people out of their money,
03:15:49.880 | then you're using that connection in a bad way,
03:15:53.880 | but you could also use it in a good way.
03:15:57.040 | But then that's the same problem with every technology, right?
03:16:01.720 | - Beautifully put.
03:16:02.960 | So like you said, we're living in the era of the COVID.
03:16:07.960 | This is 2020, one of the craziest years in recent history.
03:16:13.800 | So if we zoom out and look at this,
03:16:19.680 | this pandemic, the coronavirus pandemic,
03:16:23.120 | maybe let me ask you this kind of question
03:16:27.160 | about viruses in general,
03:16:29.800 | when you look at viruses,
03:16:32.600 | do you see them as a kind of intelligence system?
03:16:35.880 | - I think the concept of intelligence
03:16:37.680 | is not that natural of a concept in the end.
03:16:40.600 | I think human minds and bodies are a kind of complex,
03:16:45.600 | self-organizing, adaptive system,
03:16:49.400 | and viruses certainly are that, right?
03:16:51.880 | They're a very complex, self-organizing, adaptive system.
03:16:54.960 | If you wanna look at intelligence as Marcus Hutter defines it
03:16:58.400 | as sort of optimizing computable reward functions
03:17:02.280 | over computable environments,
03:17:04.720 | for sure viruses are doing that, right?
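For reference, the definition being alluded to can be written compactly as the Legg-Hutter universal intelligence measure, where pi is an agent, E the set of computable environments, K(mu) the Kolmogorov complexity of environment mu, and V_mu^pi the expected total reward the agent earns in mu. This is a standard formalization of the idea, not a formula stated in the conversation itself.

```latex
% Legg-Hutter universal intelligence: expected reward across all
% computable environments, weighted toward simpler environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V_{\mu}^{\pi}
```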
03:17:06.760 | And I mean, in doing so, they're causing some harm to us.
03:17:11.760 | So the human immune system
03:17:16.200 | is a very complex, self-organizing, adaptive system,
03:17:19.320 | which has a lot of intelligence to it.
03:17:21.080 | And viruses are also adapting
03:17:23.960 | and dividing into new mutant strains and so forth.
03:17:27.680 | And ultimately, the solution is gonna be nanotechnology.
03:17:31.960 | I mean, the solution is gonna be making little nanobots that--
03:17:35.960 | - Fight the viruses?
03:17:37.200 | - Well, people will use them to make nastier viruses,
03:17:40.680 | but hopefully we can also use them
03:17:42.040 | to just detect, combat, and kill the viruses.
03:17:46.240 | But I think now we're stuck with the biological mechanisms
03:17:51.240 | to combat these viruses.
03:17:55.000 | And yeah, we've been,
03:17:56.520 | AGI is not yet mature enough to use against COVID,
03:18:01.560 | but we've been using machine learning
03:18:03.960 | and also some machine reasoning in OpenCog
03:18:07.000 | to help some doctors
03:18:09.040 | to do personalized medicine against COVID.
03:18:11.080 | So the problem there is given the person's genomics
03:18:14.120 | and given their clinical medical indicators,
03:18:16.480 | how do you figure out which combination of antivirals
03:18:20.240 | is gonna be most effective against COVID for that person?
03:18:24.280 | And so that's something
03:18:26.440 | where machine learning is interesting,
03:18:28.520 | but also we're finding the abstraction
03:18:30.360 | we get in OpenCog with machine reasoning is interesting,
03:18:33.880 | 'cause it can help with transfer learning
03:18:36.680 | when you have not that many different cases to study
03:18:40.400 | and qualitative differences
03:18:42.440 | between different strains of a virus
03:18:44.760 | or people of different ages who may have COVID.
03:18:47.160 | - So there's a lot of different disparate data to work with
03:18:50.720 | and it's small data sets and somehow integrating them.
03:18:53.480 | - You know, this is one of the shameful things
03:18:55.480 | that's very hard to get that data.
03:18:57.280 | So, I mean, we're working with a couple groups
03:19:00.360 | doing clinical trials
03:19:02.400 | and they're sharing data with us under non-disclosure.
03:19:06.880 | But what should be the case is like
03:19:09.960 | every COVID clinical trial
03:19:11.920 | should be putting data online somewhere,
03:19:14.440 | like suitably encrypted to protect patient privacy
03:19:17.880 | so that anyone with the right AI algorithms
03:19:21.040 | should be able to help analyze it.
03:19:22.320 | And any biologist should be able to analyze it by hand
03:19:24.520 | to understand what they can, right?
03:19:25.920 | Instead, that data is like siloed
03:19:29.040 | inside whatever hospital is running the clinical trial,
03:19:31.760 | which is completely asinine and ridiculous.
03:19:35.080 | So why does the world work that way?
03:19:37.880 | I mean, we could all analyze why,
03:19:39.200 | but it's insane that it does.
03:19:40.720 | You look at this hydroxychloroquine, right?
03:19:44.080 | All these clinical trials that were done
03:19:45.720 | were reported by Surgisphere,
03:19:47.720 | some little company no one had ever heard of.
03:19:50.240 | And everyone paid attention to this.
03:19:53.240 | So they were doing more clinical trials based on that.
03:19:55.560 | Then they stopped doing clinical trials based on that.
03:19:57.480 | Then they started again.
03:19:58.480 | And why isn't that data just out there
03:20:01.480 | so everyone can analyze it and see what's going on, right?
03:20:05.080 | - Do you have hope that we'll move,
03:20:07.960 | that data will be out there eventually
03:20:10.600 | for future pandemics?
03:20:11.840 | I mean, do you have hope that our society
03:20:13.600 | will move in the direction of--
03:20:15.440 | - Not in the immediate future
03:20:16.840 | because the US and China frictions are getting very high.
03:20:21.600 | So it's hard to see US and China
03:20:24.400 | as moving in the direction
03:20:25.520 | of openly sharing data with each other, right?
03:20:27.600 | It's not, there's some sharing of data,
03:20:30.800 | but different groups are keeping their data private
03:20:32.960 | till they've milked the best results from it
03:20:34.680 | and then they share it, right?
03:20:36.240 | So it's, so yeah, we're working with some data
03:20:39.160 | that we've managed to get our hands on,
03:20:41.400 | something we're doing to do good for the world.
03:20:43.160 | And it's a very cool playground
03:20:44.640 | for like putting deep neural nets and OpenCog together.
03:20:47.880 | So we have like a bio-AtomSpace
03:20:49.920 | full of all sorts of knowledge
03:20:51.880 | from many different biology experiments
03:20:53.640 | about human longevity
03:20:54.720 | and from biology knowledge bases online.
03:20:57.680 | And we can do like graph to vector type embeddings
03:21:00.760 | where we take nodes from the hypergraph,
03:21:03.040 | embed them into vectors.
03:21:04.600 | We can then feed into neural nets
03:21:06.160 | for different types of analysis.
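A minimal sketch of that graph-to-vector step, using networkx random walks plus gensim's Word2Vec (a DeepWalk-style stand-in rather than the actual OpenCog bio-AtomSpace tooling); the gene, pathway, and phenotype nodes below are invented toy examples.

```python
# Illustrative sketch: embed knowledge-graph nodes into vectors that a
# downstream neural net or classifier could consume.
import random
import networkx as nx
from gensim.models import Word2Vec

def random_walks(G, walks_per_node=10, walk_len=8, seed=0):
    """Generate simple uniform random walks over the graph."""
    rng = random.Random(seed)
    walks = []
    for _ in range(walks_per_node):
        for node in G.nodes():
            walk = [node]
            while len(walk) < walk_len:
                nbrs = list(G.neighbors(walk[-1]))
                if not nbrs:
                    break
                walk.append(rng.choice(nbrs))
            walks.append([str(n) for n in walk])
    return walks

# Toy biological knowledge graph (node names are invented).
G = nx.Graph()
G.add_edges_from([
    ("GENE:FOXO3", "PATHWAY:insulin_signaling"),
    ("GENE:SIRT1", "PATHWAY:insulin_signaling"),
    ("PATHWAY:insulin_signaling", "PHENOTYPE:longevity"),
    ("GENE:ACE2", "PHENOTYPE:covid_severity"),
])

walks = random_walks(G)
model = Word2Vec(walks, vector_size=16, window=3, min_count=1, sg=1, workers=1)

# Each node now has a 16-dimensional vector usable as neural-net input.
print(model.wv["GENE:FOXO3"][:4])
```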
03:21:07.920 | And we were doing this in the context of a project
03:21:11.400 | called Rejuve that we spun off from SingularityNet
03:21:15.520 | to do longevity analytics,
03:21:18.560 | like understand why people live to 105 years or over
03:21:21.240 | and other people don't.
03:21:22.320 | And then we had this spin-off Singularity Studio
03:21:25.720 | where we're working with some healthcare companies
03:21:28.880 | on data analytics.
03:21:31.040 | But so there's the bio-AtomSpace
03:21:33.120 | that we built for these more commercial
03:21:35.440 | and longevity data analysis purposes.
03:21:38.160 | We're repurposing and feeding COVID data
03:21:41.240 | into the same bio-AtomSpace
03:21:44.400 | and playing around with like graph embeddings
03:21:47.560 | from that graph into neural nets for bioinformatics.
03:21:51.200 | So it's both being a cool testing ground
03:21:54.760 | for some of our bio AI learning and reasoning.
03:21:57.280 | And it seems we're able to discover things
03:22:00.000 | that people weren't seeing otherwise.
03:22:01.880 | 'Cause the thing in this case is
03:22:03.800 | for each combination of antivirals,
03:22:05.840 | you may have only a few patients
03:22:07.080 | who've tried that combination.
03:22:08.880 | And those few patients may have
03:22:10.240 | their particular characteristics.
03:22:11.680 | Like this combination of three
03:22:13.360 | was tried only on people age 80 or over.
03:22:16.240 | This other combination of three,
03:22:18.120 | which has an overlap with the first combination,
03:22:20.520 | was tried more on young people.
03:22:22.080 | So how do you combine those different pieces of data?
03:22:25.520 | It's a very dodgy transfer learning problem,
03:22:28.600 | which is the kind of thing
03:22:29.560 | that the probabilistic reasoning algorithms
03:22:31.640 | we have inside OpenCog are better at
03:22:34.120 | than deep neural networks.
03:22:35.240 | On the other hand, you have gene expression data
03:22:38.240 | where you have 25,000 genes
03:22:39.760 | and the expression level of each gene
03:22:41.360 | in the peripheral blood of each person.
03:22:43.600 | So that sort of data, either deep neural nets
03:22:46.160 | or tools like XGBoost or CatBoost,
03:22:48.240 | these decision forests,
03:22:50.120 | are better at dealing with than OpenCog.
03:22:52.080 | 'Cause it's just these huge,
03:22:53.960 | huge messy floating point vectors
03:22:55.880 | that are annoying for a logic engine to deal with,
03:22:59.200 | but are perfect for a decision forest or neural net.
03:23:02.560 | So it's a great playground for hybrid AI methodology.
03:23:07.560 | And we can have a singularity net,
03:23:09.720 | have OpenCog in one agent,
03:23:11.080 | and XGBoost in a different agent,
03:23:12.760 | and they talk to each other.
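A minimal sketch of that division of labor, assuming made-up data: an XGBoost forest handles a synthetic high-dimensional "gene expression" matrix, while a toy rule function stands in for the OpenCog-style reasoning layer that transfers qualitative knowledge (such as which age group an antiviral combination was actually tried on). None of the numbers, drug names, or rules come from the real project.

```python
# Illustrative hybrid sketch: decision forest for dense vectors,
# plus a symbolic adjustment standing in for probabilistic reasoning.
import numpy as np
from xgboost import XGBClassifier

rng = np.random.default_rng(0)

# Synthetic "gene expression" data: 200 patients x 500 genes, labeled by
# whether a (fictional) antiviral combination helped.
X = rng.normal(size=(200, 500))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=200) > 0).astype(int)

forest = XGBClassifier(n_estimators=50, max_depth=3, eval_metric="logloss")
forest.fit(X, y)

def symbolic_adjustment(prob: float, age: int, combo: tuple) -> float:
    """Toy reasoning layer: discount confidence when the evidence for this
    combination comes only from a different age group (a hand-written rule
    standing in for learned probabilistic inference)."""
    if age < 80 and combo == ("drugA", "drugB", "drugC"):
        return 0.5 * prob
    return prob

new_patient = rng.normal(size=(1, 500))
p = float(forest.predict_proba(new_patient)[0, 1])
p_adj = symbolic_adjustment(p, age=55, combo=("drugA", "drugB", "drugC"))
print(f"forest estimate: {p:.2f}, after symbolic adjustment: {p_adj:.2f}")
```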
03:23:14.520 | But at the same time, it's highly practical, right?
03:23:18.040 | 'Cause we're working with, for example,
03:23:20.560 | some physicians on this project,
03:23:24.600 | physicians in the group called Nth Opinion,
03:23:27.480 | based out of Vancouver and Seattle,
03:23:30.160 | who are, these guys are working every day
03:23:32.920 | like in the hospital with patients dying of COVID.
03:23:36.480 | So it's quite cool to see like neural symbolic AI,
03:23:41.040 | like where the rubber hits the road,
03:23:43.320 | trying to save people's lives.
03:23:45.400 | I've been doing bio AI since 2001,
03:23:48.480 | but mostly human longevity research
03:23:51.160 | and fly longevity research,
03:23:53.040 | trying to understand why some organisms
03:23:55.360 | really live a long time.
03:23:57.160 | This is the first time like race against the clock
03:24:00.360 | and try to use the AI to figure out stuff that,
03:24:04.640 | like if we take two months longer to solve the AI problem,
03:24:09.560 | some more people will die
03:24:10.720 | because we don't know what combination
03:24:12.200 | of antivirals to give them.
03:24:13.320 | - Yeah.
03:24:14.160 | At the societal level, at the biological level,
03:24:16.640 | at any level, are you hopeful about us
03:24:21.240 | as a human species getting out of this pandemic?
03:24:24.920 | What are your thoughts on it in general?
03:24:26.680 | - The pandemic will be gone in a year or two,
03:24:29.200 | once there's a vaccine for it.
03:24:30.480 | So I mean, that's--
03:24:31.840 | - A lot of pain and suffering can happen in that time.
03:24:35.560 | So that could be irreversible.
03:24:38.520 | - I think if you spend much time in sub-Saharan Africa,
03:24:43.160 | you can see there's a lot of pain
03:24:44.440 | and suffering happening all the time.
03:24:47.560 | Like you walk through the streets
03:24:49.600 | of any large city in sub-Saharan Africa,
03:24:53.280 | and there are loads,
03:24:55.640 | I mean, tens of thousands,
03:24:56.760 | probably hundreds of thousands of people
03:24:59.200 | lying by the side of the road,
03:25:01.440 | dying mainly of curable diseases without food or water,
03:25:05.960 | and either ostracized by their families
03:25:07.840 | or they left their family house
03:25:09.040 | 'cause they didn't want to infect their family, right?
03:25:11.160 | I mean, there's tremendous human suffering on the planet
03:25:15.880 | all the time, which most folks in the developed world
03:25:19.520 | pay no attention to, and COVID is not remotely the worst.
03:25:25.040 | How many people are dying of malaria all the time?
03:25:27.920 | I mean, so COVID is bad.
03:25:30.440 | It is by no means the worst thing happening,
03:25:33.160 | and setting aside diseases,
03:25:36.080 | I mean, there are many places in the world
03:25:38.320 | where you're at risk of having your teenage son
03:25:41.200 | kidnapped by armed militias
03:25:42.640 | and forced to get killed in someone else's war
03:25:45.320 | fighting tribe against tribe.
03:25:47.000 | I mean, so humanity has a lot of problems,
03:25:50.520 | which we don't need to have
03:25:52.080 | given the state of advancement of our technology right now.
03:25:56.040 | And I think COVID is one of the easier problems to solve
03:25:59.880 | in the sense that there are many brilliant people
03:26:02.360 | working on vaccines.
03:26:03.560 | We have the technology to create vaccines,
03:26:06.040 | and we're gonna create new vaccines.
03:26:08.560 | We should be more worried
03:26:09.520 | that we haven't managed to defeat malaria after so long
03:26:12.920 | and after the Gates Foundation and others
03:26:14.720 | putting so much money into it.
03:26:18.440 | I mean, I think clearly the whole global medical system,
03:26:23.240 | global health system,
03:26:25.000 | and the global political and socioeconomic system
03:26:28.240 | are incredibly unethical and unequal and badly designed.
03:26:33.240 | And I mean, I don't know how to solve that directly.
03:26:40.240 | I think what we can do indirectly to solve it
03:26:44.240 | is to make systems that operate in parallel
03:26:48.000 | and off to the side of the governments
03:26:51.160 | that are nominally controlling the world
03:26:53.920 | with their armies and militias.
03:26:56.960 | And to the extent that you can make compassionate
03:27:00.520 | peer-to-peer decentralized frameworks for doing things,
03:27:05.440 | these are things that can start out unregulated.
03:27:08.400 | And then if they get traction before the regulators come in,
03:27:11.720 | then they've influenced the way the world works, right?
03:27:14.120 | SingularityNet aims to do this with AI.
03:27:18.800 | Rejuve, which is a spinoff from SingularityNet,
03:27:22.200 | you can see it, rejuve.io.
03:27:23.960 | - How do you spell that?
03:27:25.120 | - R-E-J-U-V-E, rejuve.io.
03:27:28.600 | That aims to do the same thing for medicine.
03:27:30.480 | So it's like peer-to-peer sharing of medical data.
03:27:33.840 | So you can share medical data into a secure data wallet.
03:27:36.920 | You can get advice about your health and longevity
03:27:39.320 | through apps that Rejuve will launch
03:27:43.320 | within the next couple months.
03:27:44.840 | And then SingularityNet AI can analyze all this data,
03:27:48.200 | but then the benefits from that analysis
03:27:50.280 | are spread among all the members of the network.
03:27:52.960 | But I mean, of course, I'm gonna hawk my particular projects
03:27:56.760 | but I mean, whether or not SingularityNet and Rejuve
03:28:00.320 | are the answer, I think it's key to create
03:28:04.560 | decentralized mechanisms for everything.
03:28:09.280 | I mean, for AI, for human health, for politics,
03:28:13.440 | for jobs and employment, for sharing social information.
03:28:17.880 | And to the extent decentralized peer-to-peer methods
03:28:21.800 | designed with universal compassion at the core
03:28:25.640 | can gain traction, then these will just decrease the role
03:28:29.920 | that government has.
03:28:31.360 | And I think that's much more likely to do good
03:28:35.000 | than trying to explicitly reform
03:28:37.960 | the global government system.
03:28:39.280 | I mean, I'm happy other people are trying to explicitly
03:28:41.840 | reform the global government system.
03:28:44.000 | On the other hand, you look at how much good the internet
03:28:47.280 | or Google did or mobile phones did.
03:28:50.760 | If you make something that's decentralized
03:28:54.160 | and throw it out everywhere and it takes hold,
03:28:56.720 | then government has to adapt.
03:28:59.320 | And I mean, that's what we need to do with AI and with health.
03:29:02.480 | And in that light, I mean, the centralization of healthcare
03:29:07.880 | and of AI is certainly not ideal, right?
03:29:11.880 | Like most AI PhDs are being sucked in by a half dozen
03:29:16.040 | to a dozen big companies.
03:29:17.280 | Most AI processing power is being bought
03:29:20.880 | by a few big companies for their own proprietary good.
03:29:23.720 | And most medical research is within a few
03:29:26.920 | pharmaceutical companies and clinical trials run
03:29:29.680 | by pharmaceutical companies will stay siloed
03:29:31.800 | within those pharmaceutical companies.
03:29:34.120 | Now, these large centralized entities,
03:29:37.240 | which are intelligences in themselves, these corporations,
03:29:40.520 | but they're mostly malevolent, psychopathic
03:29:43.120 | and sociopathic intelligences.
03:29:45.840 | Not saying the people involved are,
03:29:47.640 | but the corporations as self-organizing entities
03:29:50.560 | on their own, which are concerned with maximizing
03:29:53.320 | shareholder value as a sole objective function.
03:29:57.160 | I mean, AI and medicine are being sucked
03:29:59.880 | into these pathological corporate organizations
03:30:04.120 | with government cooperation and Google cooperating
03:30:07.760 | with British and US government on this
03:30:10.240 | as one among many, many different examples.
03:30:12.560 | 23andMe providing you the nice service
03:30:15.160 | of sequencing your genome and then licensing the genome
03:30:18.920 | to GlaxoSmithKline on an exclusive basis, right?
03:30:21.400 | - Right.
03:30:22.240 | - Now you can take your own DNA
03:30:23.480 | and do whatever you want with it,
03:30:24.880 | but the pooled collection of 23andMe-sequenced DNA
03:30:28.120 | goes just to GlaxoSmithKline.
03:30:30.840 | Someone else could reach out to everyone
03:30:32.520 | who had worked with 23andMe to sequence their DNA
03:30:36.320 | and say, give us your DNA for our open
03:30:39.360 | and decentralized repository that we'll make available
03:30:41.680 | to everyone, but nobody's doing that
03:30:43.720 | 'cause it's a pain to get organized.
03:30:45.680 | And the customer list is proprietary to 23andMe, right?
03:30:48.400 | - Interesting.
03:30:49.240 | - So, yeah, I mean, this I think is a greater risk
03:30:54.240 | to humanity from AI than rogue AGIs turning the universe
03:30:58.400 | into paperclips or computronium.
03:31:01.120 | 'Cause what you have here is mostly good-hearted
03:31:05.080 | and nice people who are sucked into a mode of organization
03:31:09.920 | of large corporations, which has evolved
03:31:12.640 | just for no individual's fault,
03:31:14.200 | just because that's the way society has evolved.
03:31:16.800 | - It's not that they all choose to get self-interested
03:31:18.640 | and become psychopathic, like you said.
03:31:20.560 | - The corporation is psychopathic,
03:31:22.480 | even if the people are not.
03:31:23.720 | And that's really the disturbing thing about it
03:31:26.680 | because the corporations can do things
03:31:30.520 | that are quite bad for society,
03:31:32.360 | even if nobody has a bad intention.
03:31:35.600 | - Right, no individual member of that corporation
03:31:38.040 | has a bad intention.
03:31:38.880 | - No, some probably do, but it's not necessary
03:31:41.560 | that they do for the corporation.
03:31:43.200 | Like, I mean, Google, I know a lot of people in Google,
03:31:47.040 | and there are, with very few exceptions,
03:31:49.760 | they're all very nice people who genuinely want
03:31:52.280 | what's good for the world.
03:31:53.960 | And Facebook, I know fewer people,
03:31:56.960 | but it's probably mostly true.
03:31:59.040 | It's probably like fun young geeks
03:32:01.440 | who wanna build cool technology.
03:32:03.960 | - I actually tend to believe that even the leaders,
03:32:05.880 | even Mark Zuckerberg, one of the most disliked people
03:32:08.880 | in tech, also wants to do good for the world.
03:32:11.920 | - What do you think about Jamie Dimon?
03:32:13.920 | - Who's Jamie Dimon?
03:32:15.000 | - The heads of the great banks.
03:32:16.280 | May have a different psychology.
03:32:17.680 | - Oh boy, yeah, well.
03:32:19.040 | I tend to be naive about these things
03:32:22.840 | and see the best.
03:32:24.400 | I tend to agree with you that I think the individuals
03:32:28.520 | wanna do good by the world,
03:32:30.560 | but the mechanism of the company can sometimes
03:32:33.080 | be its own intelligence system.
03:32:34.840 | - I mean, there's a, my cousin Mario Goertzel
03:32:38.440 | has worked for Microsoft since 1985 or something,
03:32:41.760 | and I can see for him, I mean,
03:32:45.840 | as well as just working on cool projects,
03:32:49.000 | you're coding stuff that gets used by,
03:32:52.200 | like, billions and billions of people.
03:32:54.560 | And you think, if I improve this feature,
03:32:57.680 | that's making billions of people's lives easier, right?
03:33:00.240 | So of course that's cool,
03:33:03.120 | and the engineers are not in charge
03:33:05.520 | of running the company anyway.
03:33:06.880 | And of course, even if you're Mark Zuckerberg or Larry Page,
03:33:10.120 | I mean, you still have a fiduciary responsibility.
03:33:13.560 | And I mean, you're responsible to the shareholders,
03:33:16.360 | your employees, who you want to keep paying them
03:33:18.880 | and so forth.
03:33:19.720 | So yeah, you're enmeshed in this system.
03:33:22.920 | And when I worked in DC,
03:33:26.760 | I worked a bunch with INSCOM, US Army Intelligence,
03:33:29.360 | and I was heavily politically opposed
03:33:31.880 | to what the US Army was doing in Iraq at that time,
03:33:34.720 | like torturing people in Abu Ghraib.
03:33:36.520 | But everyone I knew in US Army and INSCOM,
03:33:39.840 | when I hung out with them, was a very nice person.
03:33:42.600 | They were friendly to me,
03:33:43.480 | they were nice to my kids and my dogs, right?
03:33:46.120 | And they really believed that the US
03:33:48.320 | was fighting the forces of evil.
03:33:49.640 | And if you ask them about Abu Ghraib,
03:33:51.000 | they're like, "Well, but these Arabs
03:33:53.040 | "will chop us into pieces,
03:33:54.360 | "so how can you say we're wrong to waterboard them a bit?"
03:33:58.000 | Right?
03:33:58.840 | Like, that's much less than what they would do to us.
03:34:00.280 | It's just in their worldview,
03:34:02.880 | what they were doing was really genuinely
03:34:05.320 | for the good of humanity.
03:34:06.760 | Like, none of them woke up in the morning and said,
03:34:09.360 | "I want to do harm to good people
03:34:12.160 | "because I'm just a nasty guy," right?
03:34:14.520 | So yeah, most people on the planet,
03:34:18.200 | setting aside a few genuine psychopaths and sociopaths,
03:34:21.800 | I mean, most people on the planet
03:34:23.600 | have a heavy dose of benevolence and wanting to do good,
03:34:27.560 | and also a heavy capability to convince themselves
03:34:32.160 | whatever they feel like doing,
03:34:33.400 | or whatever is best for them,
03:34:34.520 | is for the good of humankind, right?
03:34:36.720 | - So the more we can decentralize control of--
03:34:40.440 | - Decentralization, you know, democracy is horrible,
03:34:44.940 | but this is like Winston Churchill said,
03:34:47.280 | "It's the worst possible system of government
03:34:49.320 | "except for all the others," right?
03:34:50.720 | I mean, I think the whole mess of humanity
03:34:53.920 | has many, many very bad aspects to it,
03:34:56.920 | but so far the track record of elite groups
03:35:00.360 | who know what's better for all of humanity
03:35:02.520 | is much worse than the track record
03:35:04.560 | of the whole teeming democratic participatory
03:35:08.040 | mess of humanity, right?
03:35:09.560 | I mean, none of them is perfect by any means.
03:35:13.440 | The issue with a small elite group that knows what's best
03:35:16.680 | is even if it starts out as truly benevolent
03:35:20.340 | and doing good things in accordance
03:35:22.440 | with its initial good intentions,
03:35:24.960 | you find out you need more resources,
03:35:26.600 | you need a bigger organization,
03:35:28.040 | you pull in more people, internal politics arises,
03:35:31.280 | difference of opinions arise, and bribery happens.
03:35:35.000 | Like some opponent organization
03:35:38.120 | takes your second in command and makes them the first in command
03:35:40.920 | of some other organization,
03:35:42.600 | and I mean, there's a lot of history
03:35:45.560 | of what happens with elite groups
03:35:47.320 | thinking they know what's best for the human race.
03:35:50.080 | So if I have to choose,
03:35:53.080 | I'm gonna reluctantly put my faith
03:35:55.460 | in the vast democratic decentralized mass,
03:35:58.960 | and I think corporations have a track record
03:36:02.920 | of being ethically worse
03:36:05.340 | than their constituent human parts,
03:36:07.480 | and democratic governments have a more mixed track record,
03:36:12.480 | but there are at least--
03:36:14.700 | - But it's the best we got.
03:36:15.880 | - Yeah, I mean, you can, there's Iceland,
03:36:18.500 | very nice country, right?
03:36:19.680 | I mean, democratic for 800 plus years,
03:36:23.320 | very benevolent, beneficial government,
03:36:26.800 | and I think, yeah, there are track records
03:36:28.780 | of democratic modes of organization.
03:36:31.800 | Linux, for example, some of the people in charge of Linux
03:36:36.000 | are overtly complete assholes, right?
03:36:38.560 | And trying to reform themselves, in many cases,
03:36:41.680 | in other cases not, but the organization as a whole,
03:36:45.960 | I think it's done a good job overall.
03:36:49.680 | It's been very welcome in the third world, for example,
03:36:53.960 | and it's allowed advanced technology
03:36:56.060 | to roll out on all sorts of different embedded devices
03:36:58.480 | and platforms in places where people couldn't afford
03:37:01.240 | to pay for proprietary software.
03:37:03.760 | So I'd say the internet, Linux,
03:37:06.720 | and many democratic nations are examples
03:37:09.800 | of how certain open decentralized democratic methodology
03:37:14.000 | can be ethically better than the sum of the parts
03:37:16.560 | rather than worse, and corporations,
03:37:19.320 | that has happened only for a brief period,
03:37:21.360 | and then it goes sour, right?
03:37:24.540 | I mean, I'd say a similar thing about universities.
03:37:26.960 | Like, university is a horrible way to organize research
03:37:30.880 | and get things done, yet it's better than anything else
03:37:33.640 | we've come up with, right?
03:37:34.480 | A company can be much better, but for a brief period of time
03:37:38.280 | and then it stops being so good, right?
03:37:42.640 | So then I think if you believe that AGI
03:37:47.360 | is gonna emerge sort of incrementally
03:37:50.680 | out of AIs doing practical stuff in the world,
03:37:53.620 | like controlling humanoid robots, or driving cars,
03:37:57.080 | or diagnosing diseases, or operating killer drones,
03:38:01.260 | or spying on people and reporting them to the government,
03:38:04.580 | then what kind of organization
03:38:08.440 | creates more and more advanced narrow AI
03:38:10.840 | verging toward AGI may be quite important
03:38:13.600 | because it will guide what's in the mind
03:38:17.040 | of the early stage AGI as it first gains the ability
03:38:20.040 | to rewrite its own code base
03:38:21.800 | and project itself toward superintelligence.
03:38:24.760 | And if you believe that AI may move toward AGI
03:38:29.760 | out of this sort of synergetic activity
03:38:33.280 | of many agents cooperating together,
03:38:35.760 | rather than just have one person's project,
03:38:37.840 | then who owns and controls that platform
03:38:40.800 | for AI cooperation becomes also very, very important.
03:38:45.800 | And is that platform AWS?
03:38:49.360 | Is it Google Cloud?
03:38:50.560 | Is it Alibaba?
03:38:51.800 | Or is it something more like the internet
03:38:53.400 | or SingularityNet, which is open and decentralized?
03:38:56.720 | So if all of my weird machinations come to pass,
03:39:01.120 | I mean, we have the Hanson robots
03:39:03.760 | being a beautiful user interface,
03:39:06.160 | gathering information on human values
03:39:09.080 | and being loving and compassionate to people
03:39:11.440 | in medical home service robot office applications.
03:39:14.600 | You have SingularityNet in the backend
03:39:16.880 | networking together many different AIs
03:39:19.440 | toward cooperative intelligence,
03:39:21.440 | fueling the robots among many other things.
03:39:23.960 | You have OpenCog 2.0 and TrueAGI
03:39:27.320 | as one of the sources of AI
03:39:29.360 | inside this decentralized network,
03:39:31.680 | powering the robot and medical AIs
03:39:34.080 | helping us live a long time
03:39:36.280 | and cure diseases among other things.
03:39:39.680 | And this whole thing is operating
03:39:42.320 | in a democratic and decentralized way, right?
03:39:46.040 | I think if anyone can pull something like this off,
03:39:50.160 | you know, whether using the specific technologies
03:39:53.160 | I've mentioned or something else,
03:39:55.440 | I mean, then I think we have higher odds
03:39:58.360 | of moving toward a beneficial technological singularity
03:40:02.720 | rather than one in which the first super AGI
03:40:06.160 | is indifferent to humans
03:40:07.600 | and just considers us an inefficient use of molecules.
03:40:11.840 | - That was a beautifully articulated vision for the world.
03:40:15.480 | So thank you for that.
03:40:16.680 | Well, let's talk a little bit about life and death.
03:40:20.460 | - I'm pro-life and anti-death.
03:40:23.840 | For most people, there's few exceptions
03:40:28.040 | that I won't mention here.
03:40:29.360 | - I'm glad just like your dad,
03:40:32.360 | you're taking a stand against death.
03:40:35.520 | You have, by the way, you have a bunch of awesome music
03:40:39.960 | where you play piano online.
03:40:41.760 | One of the songs that I believe you've written,
03:40:45.960 | the lyrics go, by the way, I like the way it sounds.
03:40:49.120 | People should listen to it, it's awesome.
03:40:51.440 | I considered, I probably will cover it.
03:40:53.400 | It's a good song.
03:40:55.000 | Tell me why do you think it is a good thing
03:40:58.640 | that we all get old and die?
03:41:00.320 | It's one of the songs.
03:41:01.980 | I love the way it sounds.
03:41:03.160 | But let me ask you about death first.
03:41:05.640 | Do you think there's an element to death
03:41:08.320 | that's essential to give our life meaning?
03:41:12.240 | Like the fact that this thing ends?
03:41:14.000 | - Let me say, I'm pleased and a little embarrassed
03:41:19.200 | you've been listening to that music I put online.
03:41:21.520 | - Oh, it's awesome.
03:41:22.360 | - One of my regrets in life recently
03:41:24.120 | is I would love to get time to really produce music well.
03:41:28.440 | Like I haven't touched my sequencer software
03:41:31.120 | in like five years.
03:41:32.680 | I would love to rehearse and produce and edit.
03:41:37.240 | But with a two-year-old baby
03:41:39.600 | and trying to create the singularity, there's no time.
03:41:42.280 | So I just made the decision to,
03:41:44.780 | when I'm playing random shit in an off moment--
03:41:47.760 | - Just record it.
03:41:48.600 | - Just record it.
03:41:49.600 | - Oh, you're still thinking.
03:41:50.440 | - Put it out there like whatever.
03:41:51.840 | Maybe if I'm unfortunate enough to die,
03:41:54.480 | maybe that can be input to the AGI
03:41:56.280 | when it tries to make an accurate mind upload of me, right?
03:41:59.000 | Death is bad.
03:42:00.160 | (laughing)
03:42:01.120 | I mean, that's very simple.
03:42:02.680 | It's baffling we should have to say that.
03:42:04.320 | I mean, of course, people can make meaning out of death.
03:42:08.760 | And if someone is tortured,
03:42:10.960 | maybe they can make beautiful meaning out of that torture
03:42:13.240 | and write a beautiful poem
03:42:14.600 | about what it was like to be tortured, right?
03:42:17.000 | I mean, we're very creative.
03:42:19.160 | We can milk beauty and positivity
03:42:22.460 | out of even the most horrible and shitty things.
03:42:25.320 | But just because if I was tortured,
03:42:27.920 | I could write a good song
03:42:28.960 | about what it was like to be tortured,
03:42:30.840 | doesn't make torture good.
03:42:32.000 | And just because people are able to derive meaning
03:42:35.680 | and value from death,
03:42:37.520 | doesn't mean they wouldn't derive even better meaning
03:42:39.680 | and value from ongoing life without death,
03:42:42.600 | which I very--
03:42:43.440 | - Indefinite.
03:42:44.260 | - Yeah, yeah.
03:42:45.100 | - So if you could live forever, would you live forever?
03:42:47.800 | Forever.
03:42:48.640 | - My goal with longevity research
03:42:52.880 | is to abolish the plague of involuntary death.
03:42:57.520 | I don't think people should die unless they choose to die.
03:43:00.400 | If I had to choose forced immortality versus dying,
03:43:06.400 | I would choose forced immortality.
03:43:09.200 | On the other hand, if I chose,
03:43:11.880 | if I had the choice of immortality
03:43:13.520 | with the choice of suicide whenever I felt like it,
03:43:15.640 | of course, I would take that instead.
03:43:17.240 | And that's the more realistic choice.
03:43:18.880 | I mean, there's no reason you should have forced immortality.
03:43:21.680 | You should be able to live
03:43:23.320 | until you get sick of living, right?
03:43:26.080 | I mean, that's, and that will seem insanely obvious
03:43:29.800 | to everyone 50 years from now.
03:43:31.400 | And they will be so, I mean,
03:43:33.520 | people who thought death gives meaning to life,
03:43:36.000 | so we should all die,
03:43:37.660 | they will look at that 50 years from now,
03:43:39.360 | the way we now look at the Anabaptists in the year 1000,
03:43:43.360 | who gave away all their possessions,
03:43:45.160 | went on top of the mountain for Jesus,
03:43:47.040 | for Jesus to come and bring them to the ascension.
03:43:50.240 | I mean, it's ridiculous that people think death is good,
03:43:55.760 | because you gain more wisdom as you approach dying.
03:44:00.200 | I mean, of course it's true.
03:44:01.960 | I mean, I'm 53,
03:44:03.480 | and the fact that I might have only a few more decades left,
03:44:08.240 | it does make me reflect on things differently.
03:44:11.440 | It does give me a deeper understanding of many things.
03:44:15.720 | But I mean, so what?
03:44:18.120 | You could get a deep understanding
03:44:19.520 | in a lot of different ways.
03:44:20.920 | Pain is the same way.
03:44:22.080 | Like, we're gonna abolish pain.
03:44:24.300 | And that's even more amazing than abolishing death.
03:44:27.480 | I mean, once we get a little better at neuroscience,
03:44:30.420 | we'll be able to go in and adjust the brain
03:44:32.680 | so that pain doesn't hurt anymore.
03:44:34.780 | And people will say that's bad,
03:44:37.160 | because there's so much beauty
03:44:39.440 | in overcoming pain and suffering.
03:44:41.160 | Well, sure, and there's beauty in overcoming torture too.
03:44:45.260 | And some people like to cut themselves,
03:44:46.900 | but not many, right?
03:44:48.680 | - That's an interesting, so, but to push,
03:44:50.600 | I mean, to push back, again, this is the Russian side of me,
03:44:53.340 | I do romanticize suffering.
03:44:55.040 | It's not obvious, I mean, the way you put it,
03:44:57.400 | it seems very logical, it's almost absurd
03:45:00.400 | to romanticize suffering or pain or death.
03:45:03.900 | But to me, a world without suffering,
03:45:07.760 | without pain, without death, it's not obvious
03:45:10.600 | what that world would be like. - Well, then you can stay
03:45:11.760 | in the people's zoo.
03:45:13.520 | - People's zoo with suffering. - With the people
03:45:14.360 | torturing each other, right? (laughs)
03:45:15.480 | - No, but what I'm saying is, I don't,
03:45:18.160 | well, that's, I guess what I'm trying to say,
03:45:20.240 | I don't know if I was presented with that choice
03:45:22.800 | of what I would choose, because to me--
03:45:25.400 | - No, this is a subtler, it's a subtler matter,
03:45:30.120 | and I've posed it in this conversation
03:45:34.040 | in an unnecessarily extreme way.
03:45:37.120 | So I think the way you should think about it
03:45:41.120 | is what if there's a little dial on the side of your head,
03:45:44.740 | and you could turn how much pain hurt,
03:45:48.240 | turn it down to zero, turn it up to 11,
03:45:50.700 | like in Spinal Tap, if you want,
03:45:52.260 | maybe through an actual spinal tap, right?
03:45:54.040 | So, I mean, would you opt to have that dial there or not?
03:45:58.960 | That's the question.
03:45:59.800 | The question isn't whether you would turn the pain
03:46:02.040 | down to zero all the time.
03:46:05.260 | Would you opt to have the dial or not?
03:46:07.200 | My guess is that in some dark moment of your life,
03:46:10.040 | you would choose to have the dial implanted,
03:46:11.960 | and then it would be there.
03:46:13.380 | - Just to confess a small thing, don't ask me why,
03:46:17.220 | but I'm doing this physical challenge currently,
03:46:20.800 | where I'm doing 680 pushups and pull-ups a day,
03:46:24.400 | and my shoulder is currently, as we sit here,
03:46:29.200 | in a lot of pain, and I don't know.
03:46:34.200 | I would certainly right now, if you gave me a dial,
03:46:36.880 | I would turn that sucker to zero as quickly as possible.
03:46:39.800 | - Good.
03:46:40.640 | - But I think the whole point of this journey is,
03:46:45.640 | I don't know.
03:46:47.620 | - Well, because you're a twisted human being.
03:46:49.560 | - I'm a twisted, so the question is,
03:46:51.520 | am I somehow twisted because I created some kind
03:46:55.880 | of narrative for myself so that I can deal
03:46:58.360 | with the injustice and the suffering in the world,
03:47:02.080 | or is this actually going to be a source of happiness?
03:47:06.440 | - Well, this is, to an extent, is a research question
03:47:10.820 | that humanity will undertake, right?
03:47:12.280 | So, I mean, human beings do have a particular
03:47:17.680 | biological makeup, which sort of implies
03:47:21.800 | a certain probability distribution
03:47:23.720 | over motivational systems, right?
03:47:25.880 | So, I mean, we, and that is there.
03:47:28.840 | - I'll put.
03:47:29.680 | - That is there.
03:47:30.500 | Now, the question is, how flexibly can that morph
03:47:35.500 | as society and technology change, right?
03:47:39.000 | So, if we're given that dial, and we're given a society
03:47:43.740 | in which, say, we don't have to work for a living,
03:47:47.560 | and in which there's an ambient, decentralized,
03:47:50.720 | benevolent AI network that will warn us
03:47:52.520 | when we're about to hurt ourself,
03:47:54.480 | you know, if we're in a different context,
03:47:57.080 | can we consistently, with being genuinely and fully human,
03:48:02.080 | can we consistently get into a state of consciousness
03:48:05.920 | where we just want to keep the pain dial turned
03:48:09.280 | all the way down, and yet we're leading
03:48:11.760 | very rewarding and fulfilling lives, right?
03:48:13.880 | Now, I suspect the answer is yes, we can do that,
03:48:17.680 | but I don't know that.
03:48:19.560 | - It's a research question, like you said.
03:48:20.400 | - I don't know that for certain.
03:48:21.600 | Yeah, now, I'm more confident that we could create
03:48:25.960 | a non-human AGI system, which just didn't need
03:48:30.520 | an analog of feeling pain, and I think that AGI system
03:48:35.320 | will be fundamentally healthier and more benevolent
03:48:38.600 | than human beings, so I think it might or might not be true
03:48:42.320 | that humans need a certain element of suffering
03:48:45.200 | to be satisfied humans, consistent with the human physiology.
03:48:49.400 | If it is true, that's one of the things
03:48:51.920 | that makes us fucked and disqualified
03:48:54.760 | to be the super AGI, right?
03:48:58.360 | I mean, the nature of the human motivational system
03:49:03.360 | is that we seem to gravitate towards situations
03:49:08.560 | where the best thing in the large scale
03:49:12.680 | is not the best thing in the small scale,
03:49:15.800 | according to our subjective value system.
03:49:18.040 | So we gravitate towards subjective value judgments,
03:49:20.680 | where to gratify ourselves in the large,
03:49:22.880 | we have to ungratify ourselves in the small,
03:49:25.560 | and we do that in, you see that in music,
03:49:29.280 | there's a theory of music which says
03:49:31.680 | the key to musical aesthetics is the surprising
03:49:34.840 | fulfillment of expectations.
03:49:36.800 | Like you want something that will fulfill
03:49:38.840 | the expectations elicited in the prior part of the music,
03:49:41.800 | but in a way with a bit of a twist that surprises you.
03:49:44.760 | And I mean, that's true not only in out-there music
03:49:48.080 | like my own or that of Zappa or Steve Vai or Buckethead
03:49:53.080 | or Christoph Penderecki or something,
03:49:55.400 | it's even there in Mozart or something.
03:49:57.920 | It's not there in elevator music too much,
03:49:59.920 | but that's why it's boring, right?
03:50:02.920 | But wrapped up in there is, you know,
03:50:05.360 | we want to hurt a little bit so that we can feel the pain
03:50:10.360 | go away, like we wanna be a little confused
03:50:14.280 | by what's coming next, so then when the thing
03:50:16.720 | that comes next actually makes sense,
03:50:18.320 | it's so satisfying, right?
03:50:19.920 | - It's the surprising fulfillment of expectations,
03:50:22.240 | is that what you said?
03:50:23.080 | - Yeah, yeah, yeah. - So beautifully put.
03:50:24.240 | Is there, we've been skirting around a little bit,
03:50:26.800 | but if I were to ask you the most ridiculous big question
03:50:29.320 | of what is the meaning of life, what would your answer be?
03:50:35.160 | - Three values, joy, growth, and choice.
03:50:38.160 | I think you need joy, I mean, that's the basis
03:50:46.160 | of everything if you want the number one value.
03:50:48.360 | On the other hand, I'm unsatisfied with a static joy
03:50:53.360 | that doesn't progress, perhaps because of some
03:50:56.600 | element of human perversity,
03:50:58.800 | but the idea of something that grows
03:51:00.880 | and becomes more and more and better
03:51:03.160 | and better in some sense appeals to me.
03:51:05.640 | But I also sort of like the idea of individuality,
03:51:09.560 | that as a distinct system, I have some agency,
03:51:13.520 | so there's some nexus of causality within this system,
03:51:17.760 | rather than the causality being wholly evenly distributed
03:51:21.400 | over the joyous growing mass.
03:51:22.920 | So you start with joy, growth, and choice
03:51:26.080 | as three basic values.
03:51:27.640 | - And those three things could continue indefinitely.
03:51:30.840 | That's something that could last forever.
03:51:33.920 | Is there some aspect of something you called,
03:51:37.440 | which I like, super longevity that you find exciting?
03:51:42.440 | Research-wise, is there ideas in that space?
03:51:46.760 | - I mean, I think, yeah, in terms of the meaning of life,
03:51:51.760 | this really ties into that, because for us as humans,
03:51:56.520 | probably the way to get the most joy, growth,
03:51:59.440 | and choice is transhumanism,
03:52:02.520 | and to go beyond the human form that we have right now.
03:52:07.200 | I mean, I think human body is great,
03:52:09.840 | and by no means do any of us maximize the potential
03:52:14.040 | for joy, growth, and choice immanent in our human bodies.
03:52:17.400 | On the other hand, it's clear that other configurations
03:52:20.600 | of matter could manifest even greater amounts
03:52:24.080 | of joy, growth, and choice than humans do,
03:52:28.440 | maybe even finding ways to go beyond the realm of matter
03:52:31.760 | as we understand it right now.
03:52:33.880 | So I think in a practical sense,
03:52:36.920 | much of the meaning I see in human life
03:52:39.560 | is to create something better than humans
03:52:41.760 | and go beyond human life.
03:52:44.320 | But certainly that's not all of it for me
03:52:46.840 | in a practical sense, right?
03:52:48.040 | Like I have four kids and a granddaughter
03:52:50.560 | and many friends and parents and family,
03:52:53.880 | and just enjoying everyday human life.
03:52:57.960 | Human social existence.
03:52:59.800 | - But we can do even better.
03:53:00.920 | - Yeah, yeah, and I mean, I love,
03:53:02.920 | I've always, when I could live near nature,
03:53:05.720 | I spend a bunch of time out in nature in the forest
03:53:08.760 | and on the water every day and so forth.
03:53:10.960 | So I mean, enjoying the pleasant moment is part of it,
03:53:15.080 | but the growth and choice aspects are severely limited
03:53:20.080 | by our human biology.
03:53:22.480 | In particular, dying seems to inhibit your potential
03:53:26.000 | for personal growth considerably as far as we know.
03:53:29.560 | I mean, there's some element of life after death perhaps,
03:53:33.040 | but even if there is, why not also continue going
03:53:36.520 | in this biological realm, right?
03:53:39.320 | In super longevity, I mean,
03:53:41.920 | you know, we haven't yet cured aging.
03:53:45.640 | We haven't yet cured death.
03:53:48.080 | Certainly there's very interesting progress all around.
03:53:51.920 | I mean, CRISPR and gene editing
03:53:53.960 | can be an incredible tool.
03:53:57.240 | And I mean, right now, stem cells
03:54:01.600 | could potentially prolong life a lot.
03:54:03.200 | Like if you got injections of stem cells
03:54:05.960 | for every tissue of your body,
03:54:09.160 | injected into every tissue,
03:54:11.360 | and you could just have replacement of your old cells
03:54:15.360 | with new cells produced by those stem cells,
03:54:17.320 | I mean, that could be highly impactful at prolonging life.
03:54:21.240 | Now we just need slightly better technology
03:54:23.280 | for having them grow, right?
03:54:25.400 | So using machine learning to guide procedures
03:54:28.840 | for stem cell differentiation and transdifferentiation,
03:54:32.720 | it's kind of nitty-gritty,
03:54:33.760 | but I mean, that's quite interesting.
03:54:36.680 | So I think there's a lot of different things being done
03:54:41.080 | to help with prolongation of human life,
03:54:44.760 | but we could do a lot better.
03:54:47.560 | So for example, the extracellular matrix,
03:54:51.480 | which is the bunch of proteins
03:54:52.640 | in between the cells in your body,
03:54:54.320 | it gets stiffer and stiffer as you get older.
03:54:57.360 | And the extracellular matrix transmits information
03:55:01.320 | electrically, mechanically,
03:55:03.560 | and to some extent, biophotonically.
03:55:05.400 | So there's all this transmission
03:55:07.280 | through the parts of the body,
03:55:08.880 | but the stiffer the extracellular matrix gets,
03:55:11.880 | the less the transmission happens,
03:55:13.520 | which makes your body less well coordinated
03:55:15.640 | between the different organs as you get older.
03:55:17.440 | So my friend, Christian Schafmeister,
03:55:20.000 | at my alma mater,
03:55:23.200 | the great Temple University,
03:55:25.120 | Christian Schafmeister has a potential solution to this
03:55:28.640 | where he has these novel molecules called spiroligomers,
03:55:32.360 | which are like polymers that are not organic.
03:55:34.440 | They're specially designed polymers
03:55:37.800 | so that you can algorithmically predict
03:55:39.440 | exactly how they'll fold very simply.
03:55:41.600 | So he designed molecular scissors
03:55:43.280 | made of spiroligomers that you could eat
03:55:45.560 | and that would then cut through all the glucosepane
03:55:49.200 | and other cross-linked proteins
03:55:50.640 | in your extracellular matrix, right?
03:55:52.760 | But to make that technology really work
03:55:55.200 | and mature would take several years of work.
03:55:56.880 | As far as I know, no one's funding it at the moment.
03:56:00.120 | So there's so many different ways
03:56:02.360 | that technology could be used to prolong longevity.
03:56:05.060 | What we really need, we need an integrated database
03:56:08.040 | of all biological knowledge about human beings
03:56:10.400 | and model organisms, like hopefully a massively distributed
03:56:14.480 | OpenCog Bio-AtomSpace,
03:56:16.000 | but it can exist in other forms too.
03:56:18.280 | We need that data to be opened up
03:56:20.840 | in a suitably privacy-protecting way.
03:56:23.280 | We need massive funding into machine learning,
03:56:26.120 | AGI, proto-AGI statistical research
03:56:29.240 | aimed at solving biology,
03:56:31.240 | both molecular biology and human biology
03:56:33.440 | based on this massive, massive dataset, right?
03:56:36.720 | And then we need regulators not to stop people
03:56:40.680 | from trying radical therapies on themselves
03:56:43.840 | if they so wish to,
03:56:46.760 | as well as better cloud-based platforms
03:56:49.440 | for automated experimentation on microorganisms,
03:56:52.720 | flies and mice and so forth.
03:56:54.320 | And we could do all this.
03:56:55.840 | Look, after the last financial crisis,
03:56:58.920 | Obama, who I generally like pretty well,
03:57:01.320 | gave $4 trillion to large banks
03:57:03.760 | and insurance companies.
03:57:05.640 | Now in this COVID crisis, trillions are being spent
03:57:09.880 | to help everyday people and small businesses.
03:57:12.240 | In the end, we'll probably find many more trillions
03:57:14.600 | being given to large banks and insurance companies anyway.
03:57:17.720 | Like, could the world put $10 trillion
03:57:21.040 | into making a massive, holistic bio-AI and bio-simulation
03:57:25.560 | and experimental biology infrastructure?
03:57:27.800 | We could, we could put $10 trillion into that
03:57:30.600 | without even screwing us up too badly,
03:57:32.320 | just as in the end, COVID and the last financial crisis
03:57:35.240 | won't screw up the world economy so badly.
03:57:37.880 | We're not putting $10 trillion into that.
03:57:39.880 | Instead, all this research is siloed
03:57:42.240 | inside a few big companies and government agencies.
03:57:46.840 | And most of the data that comes from our individual bodies,
03:57:51.160 | personally, that could feed this AI to solve aging and death,
03:57:55.160 | most of that data is sitting in some hospital's database
03:57:58.880 | doing nothing, right?
03:58:00.160 | - I got two more quick questions for you.
03:58:07.160 | One, I know a lot of people are gonna ask me,
03:58:09.800 | you're on the Joe Rogan podcast
03:58:11.720 | wearing that same amazing hat.
03:58:13.640 | Do you have an origin story for the hat?
03:58:17.520 | Does the hat have its own story that you're able to share?
03:58:21.400 | - The hat story has not been told yet,
03:58:23.160 | so we're gonna have to come back
03:58:24.240 | and you can interview the hat.
03:58:25.880 | - The hat. (laughs)
03:58:27.880 | - We'll leave that for the hat's own interview.
03:58:30.880 | It's too much to pack into a few seconds.
03:58:32.080 | - Is there a book, is the hat gonna write a book?
03:58:34.320 | Okay, we'll-- (laughs)
03:58:36.840 | - It may transmit the information
03:58:38.360 | through direct neural transmission.
03:58:40.000 | - Okay, so it's actually,
03:58:41.400 | there might be some Neuralink competition there.
03:58:44.760 | Beautiful, we'll leave it as a mystery.
03:58:46.880 | Maybe one last question.
03:58:49.040 | If you build an AGI system,
03:58:52.680 | if you're successful at building the AGI system
03:58:58.480 | that could lead us to the singularity
03:59:00.400 | and you get to talk to her and ask her one question,
03:59:04.520 | what would that question be?
03:59:07.240 | - We're not allowed to ask,
03:59:08.160 | what is the question I should be asking?
03:59:10.040 | (both laugh)
03:59:11.240 | - Yeah, that would be cheating,
03:59:12.240 | but I guess that's a good question.
03:59:14.040 | - I'm thinking of a,
03:59:15.720 | I wrote a story with Stefan Bugaj once
03:59:18.600 | where these AI developers,
03:59:23.400 | they created a super smart AI aimed at answering
03:59:28.400 | all the philosophical questions that had been worrying them,
03:59:32.040 | like what is the meaning of life, is there free will,
03:59:35.680 | what is consciousness and so forth.
03:59:37.960 | So they got the super AGI built
03:59:40.360 | and it churned a while, it said,
03:59:43.880 | those are really stupid questions,
03:59:46.520 | and then it took off on a spaceship and left the Earth.
03:59:50.320 | (Lex laughs)
03:59:51.400 | - So you'd be afraid of scaring it off.
03:59:55.520 | (Lex laughs)
03:59:55.520 | - That's it, yeah.
03:59:56.480 | I mean, honestly, there is no one question
04:00:02.160 | that rises above all the others, really.
04:00:07.160 | I mean, what interests me more is upgrading
04:00:11.840 | my own intelligence so that I can absorb
04:00:15.360 | the whole worldview of the super AGI.
04:00:19.400 | But I mean, of course, if the answer could be
04:00:22.800 | like what is the chemical formula for the immortality pill,
04:00:29.480 | then I would ask that, or ask it to emit a bit string,
04:00:33.320 | which would be the code for a super AGI
04:00:38.320 | on the Intel i7 processor.
04:00:41.240 | So those would be good questions.
04:00:42.840 | - So if your own mind was expanded
04:00:46.280 | to become super intelligent, like you're describing,
04:00:49.360 | I mean, there's kind of a notion
04:00:53.520 | that intelligence is a burden,
04:00:56.560 | that it's possible that with greater and greater intelligence
04:01:00.040 | that other metric of joy that you mentioned
04:01:03.040 | becomes more and more difficult.
04:01:04.760 | What's your--
04:01:05.600 | - That's a pretty stupid idea.
04:01:07.080 | (both laugh)
04:01:08.280 | - So you think if you're super intelligent,
04:01:09.840 | you can also be super joyful?
04:01:11.460 | - I think getting root access to your own brain
04:01:15.460 | will enable new forms of joy that we don't have now.
04:01:19.240 | And I think, as I've said before,
04:01:22.740 | what I aim at is really to make multiple versions of myself.
04:01:27.740 | So I would like to keep one version,
04:01:30.200 | which is basically human like I am now,
04:01:33.600 | but keep the dial to turn pain up and down
04:01:37.000 | and get rid of death, right?
04:01:38.600 | And make another version, which fuses its mind
04:01:43.600 | with superhuman AGI,
04:01:46.600 | and then will become massively transhuman.
04:01:50.080 | And whether it will send some messages back
04:01:52.800 | to the human me or not will be interesting to find out.
04:01:55.600 | The thing is, once you're super AGI,
04:01:58.480 | like one subjective second to a human
04:02:01.520 | might be like a million subjective years
04:02:03.600 | to that super AGI, right?
04:02:04.960 | So it would be on a whole different basis.
04:02:07.560 | I mean, at the very least those two copies will be good to have,
04:02:10.940 | but it could be interesting to put your mind
04:02:13.960 | into a dolphin or a space amoeba
04:02:16.840 | or all sorts of other things.
04:02:18.520 | You can imagine one version
04:02:19.800 | that doubled its intelligence every year,
04:02:22.400 | and another version that just became a super AGI
04:02:24.880 | as fast as possible, right?
04:02:26.160 | So, I mean, now we're sort of constrained
04:02:28.760 | to think one mind, one self, one body, right?
04:02:33.000 | But I think we actually,
04:02:35.120 | we don't need to be that constrained
04:02:37.000 | in thinking about future intelligence
04:02:40.800 | after we've mastered AGI and nanotechnology
04:02:44.280 | and longevity biology.
04:02:47.820 | I mean, then each of our minds
04:02:49.520 | is a certain pattern of organization, right?
04:02:52.020 | And I know we haven't talked about consciousness,
04:02:54.280 | but I sort of, I'm a panpsychist.
04:02:56.880 | I sort of view the universe as conscious.
04:03:00.080 | And so, a light bulb or a quark or an ant
04:03:04.520 | or a worm or a monkey
04:03:06.040 | have their own manifestations of consciousness.
04:03:08.760 | And the human manifestation of consciousness,
04:03:11.920 | it's partly tied to the particular meat
04:03:15.280 | that we're manifested by,
04:03:17.480 | but it's largely tied to the pattern of organization
04:03:21.240 | in the brain, right?
04:03:22.320 | So, if you upload yourself into a computer
04:03:25.000 | or a robot or whatever else it is,
04:03:28.600 | some element of your human consciousness may not be there
04:03:31.720 | because it's just tied to the biological embodiment.
04:03:34.200 | But I think most of it will be there,
04:03:36.280 | and these will be incarnations of your consciousness
04:03:40.000 | in a slightly different flavor.
04:03:42.480 | And creating these different versions will be amazing,
04:03:46.520 | and each of them will discover meanings of life
04:03:49.640 | that have some overlap,
04:03:52.000 | but probably not total overlap
04:03:54.280 | with the human being's meaning of life.
04:03:59.240 | The thing is, to get to that future
04:04:02.920 | where we can explore different varieties of joy,
04:04:06.440 | different variations of human experience and values
04:04:09.660 | and transhuman experiences and values,
04:04:11.400 | to get to that future,
04:04:13.120 | we need to navigate through a whole lot of human
04:04:16.240 | bullshit of companies and governments
04:04:19.200 | and killer drones and making and losing money
04:04:23.600 | and so forth, right?
04:04:25.440 | And that's the challenge we're facing now,
04:04:28.540 | is if we do things right,
04:04:30.720 | we can get to a benevolent singularity,
04:04:33.520 | which is levels of joy, growth, and choice
04:04:36.280 | that are literally unimaginable to human beings.
04:04:39.880 | If we do things wrong,
04:04:41.680 | we could either annihilate all life on the planet,
04:04:44.080 | or we could lead to a scenario where, say,
04:04:47.040 | all humans are annihilated,
04:04:49.380 | and there's some super AGI that goes on
04:04:52.700 | and does its own thing unrelated to us,
04:04:55.400 | except via our role in originating it.
04:04:58.360 | And we may well be at a bifurcation point now, right?
04:05:02.080 | Where what we do now has significant causal impact
04:05:05.800 | on what comes about,
04:05:06.680 | and yet, most people on the planet
04:05:09.000 | aren't thinking that way whatsoever.
04:05:11.480 | They're thinking only about their own narrow aims
04:05:16.200 | and goals, right?
04:05:17.760 | Now, of course, I'm thinking about my own narrow aims
04:05:20.840 | and goals to some extent also,
04:05:24.220 | but I'm trying to use as much of my energy and mind
04:05:28.380 | as I can to push toward this more benevolent alternative,
04:05:33.160 | which will be better for me,
04:05:34.640 | but also for everybody else.
04:05:37.960 | And it's weird that so few people understand
04:05:42.520 | what's going on.
04:05:43.360 | I know you interviewed Elon Musk,
04:05:44.760 | and he understands a lot of what's going on,
04:05:47.360 | but he's much more paranoid than I am, right?
04:05:49.600 | 'Cause Elon gets that AGI is gonna be
04:05:52.400 | way, way smarter than people,
04:05:54.200 | and he gets that an AGI does not necessarily
04:05:57.060 | have to give a shit about people,
04:05:58.720 | because we're a very elementary mode
04:06:01.000 | of organization of matter,
04:06:01.960 | compared to many AGIs.
04:06:04.680 | But I don't think he has a clear vision
04:06:06.300 | of how infusing early stage AGIs
04:06:10.120 | with compassion and human warmth
04:06:13.500 | can lead to an AGI that loves and helps people,
04:06:18.000 | rather than viewing us as a historical artifact
04:06:22.840 | and a waste of mass energy.
04:06:26.800 | But on the other hand,
04:06:28.040 | while I have some disagreements with him,
04:06:29.560 | he understands way, way more of the story
04:06:33.120 | than almost anyone else in such a large-scale
04:06:35.800 | corporate leadership position, right?
04:06:38.180 | It's terrible how little understanding
04:06:40.740 | of these fundamental issues exists out there now.
04:06:45.060 | That may be different five or 10 years from now, though,
04:06:47.220 | 'cause I can see understanding of AGI and longevity
04:06:51.180 | and other such issues is certainly much stronger
04:06:54.620 | and more prevalent now than 10 or 15 years ago, right?
04:06:57.620 | So I mean, humanity, as a whole,
04:07:00.580 | can be slow learners relative to what I would like,
04:07:05.460 | but in a historical sense, on the other hand,
04:07:08.400 | you could say the progress is astoundingly fast.
04:07:11.220 | - But Elon also said, I think on the Joe Rogan podcast,
04:07:15.640 | that love is the answer.
04:07:17.380 | So maybe in that way, you and him are both on the same page
04:07:21.820 | of how we should proceed with AGI.
04:07:24.420 | I think there's no better place to end it.
04:07:27.280 | I hope we get to talk again about the hat
04:07:30.840 | and about consciousness and about a million topics
04:07:33.340 | we didn't cover.
04:07:34.460 | Ben, it's a huge honor to talk to you.
04:07:36.300 | Thank you for making it out.
04:07:37.540 | Thank you for talking today.
04:07:38.580 | I really loved it. - No, thanks for having me.
04:07:40.420 | This was really, really good fun,
04:07:44.380 | and we dug deep into some very important things.
04:07:47.380 | So thanks for doing this.
04:07:48.700 | - Thanks very much.
04:07:49.780 | Awesome.
04:07:51.180 | Thanks for listening to this conversation with Ben Goertzel,
04:07:53.820 | and thank you to our sponsors,
04:07:55.840 | the Jordan Harbinger Show and Masterclass.
04:07:59.380 | Please consider supporting the podcast
04:08:01.080 | by going to jordanharbinger.com/lex
04:08:04.600 | and signing up to Masterclass at masterclass.com/lex.
04:08:09.600 | Click the links, buy the stuff.
04:08:12.280 | It's the best way to support this podcast
04:08:14.220 | and the journey I'm on in my research and startup.
04:08:18.880 | If you enjoy this thing, subscribe on YouTube,
04:08:21.380 | review it with five stars on Apple Podcast,
04:08:23.720 | support it on Patreon, or connect with me on Twitter,
04:08:26.840 | @lexfriedman, spelled without the E,
04:08:29.780 | just F-R-I-D-M-A-N.
04:08:32.400 | I'm sure eventually you will figure it out.
04:08:35.280 | And now let me leave you with some words from Ben Goertzel.
04:08:38.200 | "Our language for describing emotions is very crude.
04:08:42.480 | "That's what music is for."
04:08:44.020 | Thank you for listening, and hope to see you next time.
04:08:48.400 | (upbeat music)
04:08:50.980 | (upbeat music)