back to indexDemis Hassabis: DeepMind - AI, Superintelligence & the Future of Humanity | Lex Fridman Podcast #299
Chapters
0:0 Introduction
1:1 Turing Test
8:27 Video games
30:2 Simulation
32:13 Consciousness
37:13 AlphaFold
50:53 Solving intelligence
63:12 Open sourcing AlphaFold & MuJoCo
73:18 Nuclear fusion
77:22 Quantum simulation
80:30 Physics
83:57 Origin of life
88:36 Aliens
96:43 Intelligent life
99:52 Conscious AI
113:7 Power
117:37 Advice for young people
125:43 Meaning of life
00:00:00.000 |
The following is a conversation with Demis Hassabis, 00:00:08.600 |
some of the most incredible artificial intelligence systems 00:00:14.120 |
including AlphaZero that learned all by itself 00:00:18.040 |
to play the game of Go better than any human in the world, 00:00:25.760 |
Both tasks considered nearly impossible for a very long time. 00:00:30.320 |
Demis is widely considered to be one of the most brilliant 00:00:41.240 |
This was truly an honor and a pleasure for me 00:00:44.600 |
to finally sit down with him for this conversation, 00:00:47.360 |
and I'm sure we will talk many times again in the future. 00:00:56.760 |
And now, dear friends, here's Demis Hassabis. 00:01:00.560 |
Let's start with a bit of a personal question. 00:01:04.000 |
Am I an AI program you wrote to interview people 00:01:20.360 |
Is that a good thing to tell a language model 00:01:29.640 |
Probably it would be a good idea not to tell you 00:01:34.720 |
- Heisenberg uncertainty principle situation. 00:01:39.040 |
Maybe that's what's happening with us, of course. 00:01:59.960 |
you've talked about the benchmark for solving intelligence. 00:02:11.440 |
but I still return to the Turing test as a compelling test. 00:02:14.400 |
The spirit of the Turing test is a compelling test. 00:02:22.120 |
but I think if you look back at the 1950 papers, 00:02:30.880 |
I think it was more like a thought experiment, 00:02:36.440 |
And you can see he didn't specify it very rigorously. 00:02:38.680 |
So for example, he didn't specify the knowledge 00:02:44.040 |
how much time would they have to investigate this. 00:02:49.480 |
if you were gonna make it a true sort of formal test. 00:02:59.040 |
a decade ago, I remember someone claiming that 00:03:00.920 |
with a kind of very bog standard normal logic model, 00:03:08.440 |
So the judges thought that the machine was a child. 00:03:13.280 |
So that would be very different from an expert AI person 00:03:17.600 |
interrogating machine and knowing how it was built 00:03:20.720 |
So I think, we should probably move away from that 00:03:24.560 |
as a formal test and move more towards a general test 00:03:28.800 |
where we test the AI capabilities on a range of tasks 00:03:32.040 |
and see if it reaches human level or above performance 00:03:39.200 |
and cover the entire sort of cognitive space. 00:03:44.120 |
it was an amazing thought experiment and also 1950s, 00:03:46.960 |
obviously it was barely the dawn of the computer age. 00:03:59.680 |
but I think it's also possible as systems like God or show 00:04:04.560 |
that eventually that might map right back to language. 00:04:08.320 |
So you might be able to demonstrate your ability 00:04:10.800 |
to generalize across tasks by then communicating 00:04:17.080 |
which is kind of what we do through conversation anyway, 00:04:20.800 |
Ultimately what's in there in that conversation 00:04:27.000 |
it's you moving around like these entirely different 00:04:30.360 |
modalities of understanding that ultimately map 00:04:38.920 |
in all of these domains, which you can think of as tasks. 00:04:44.680 |
use language as our main generalization communication tool. 00:04:58.000 |
in which to explain the system, to explain what it's doing. 00:05:03.000 |
But I don't think it's the only modality that matters. 00:05:09.960 |
there's a lot of different ways to express capabilities 00:05:19.040 |
Yeah, action is the interactive aspect of all that. 00:05:26.800 |
it's sort of pushing prediction to the maximum 00:05:30.240 |
in terms of like, mapping arbitrary sequences 00:05:33.440 |
to other sequences and sort of just predicting 00:05:36.440 |
So prediction seems to be fundamental to intelligence. 00:05:41.040 |
- And what you're predicting doesn't so much matter. 00:05:44.160 |
- Yeah, it seems like you can generalize that quite well. 00:05:46.840 |
So obviously language models predict the next word. 00:05:49.640 |
GATO predicts potentially any action or any token. 00:05:55.320 |
It's our most general agent one could call it so far. 00:05:58.080 |
But that itself can be scaled up massively more 00:06:02.160 |
And obviously we're in the middle of doing that. 00:06:06.880 |
is creating benchmarks that help us get closer and closer. 00:06:11.020 |
Sort of creating benchmarks that test the generalizability. 00:06:14.880 |
And it's just still interesting that this fella, 00:06:33.960 |
And I still think something like the Turing test 00:06:42.500 |
So that you can have a close friend who's in the AI system. 00:06:48.300 |
they're going to have to be able to play StarCraft. 00:06:53.120 |
And they're gonna have to do all of these tasks. 00:07:02.060 |
Use language, humor, all of those kinds of things. 00:07:04.760 |
But that ultimately can boil down to language. 00:07:07.980 |
It feels like, not in terms of the AI community, 00:07:25.780 |
the philosophy behind it, which is the idea of, 00:07:28.860 |
can a machine mimic the behaviors of a human? 00:07:34.980 |
And I would say wider than just language and text. 00:07:38.700 |
Then, in terms of actions and everything else, 00:07:51.660 |
I think he did formulate the right kind of setup. 00:07:55.980 |
- I just, I think there'll be a kind of humor 00:08:09.380 |
they would know which year they were finally able 00:08:11.900 |
to sort of cross the threshold of human-level intelligence, 00:08:18.820 |
were still confused about this whole problem. 00:08:30.300 |
when did you fall in love with programming first? 00:08:35.940 |
So I started off, actually, games was my first love, 00:08:40.860 |
so starting to play chess when I was around four years old, 00:08:50.820 |
It was a ZX Spectrum, which was hugely popular 00:08:56.540 |
because I think it trained a whole generation 00:08:59.320 |
of programmers in the UK, because it was so accessible. 00:09:06.660 |
And my parents didn't really know anything about computers, 00:09:10.460 |
but because it was my money from a chess competition, 00:09:23.500 |
And then, of course, once you start doing that, 00:09:26.460 |
you start adjusting it, and then making your own games. 00:09:29.140 |
And that's when I fell in love with computers 00:09:30.840 |
and realised that they were a very magical device. 00:09:53.080 |
So, I mean, all machines do that to some extent. 00:09:57.660 |
Obviously, cars make us, allow us to move faster 00:10:00.120 |
than we can run, but this was a machine to extend the mind. 00:10:04.560 |
And then, of course, AI is the ultimate expression 00:10:08.520 |
of what a machine may be able to do or learn. 00:10:23.600 |
I think it was just basic on the ZX Spectrum. 00:10:33.820 |
- So, yeah, well, lots of my friends had Atari STs, 00:10:37.860 |
It was a bit more powerful, and that was incredible. 00:10:43.820 |
and also Amos Basic, this specific form of basic. 00:10:53.060 |
So, when did you first start to gain an understanding 00:11:08.800 |
Sort of a thing that can figure out something 00:11:11.840 |
more complicated than a simple mathematical operation. 00:11:24.860 |
At the time, when I was about maybe 10, 11 years old, 00:11:27.500 |
I was gonna become a professional chess player. 00:11:35.540 |
- Yeah, so I was, when I was about 12 years old, 00:11:39.180 |
I got to Master Standard, and I was second highest rated 00:11:42.740 |
who obviously ended up being an amazing chess player, 00:11:52.780 |
you're trying to improve your own thinking processes. 00:11:55.140 |
So, that leads you to thinking about thinking. 00:11:58.120 |
How is your brain coming up with these ideas? 00:12:06.380 |
it was just the beginning, this was like in the early 80s, 00:12:17.020 |
And I think Kasparov had a branded version of it 00:12:29.060 |
to try and improve your openings and other things. 00:12:31.460 |
And so I remember, I think I probably got my first one, 00:12:45.660 |
which was called The Chess Computer Handbook by David Levy. 00:12:50.700 |
so I must have got it when I was about 11, 12. 00:12:52.380 |
And it explained fully how these chess programs were made. 00:13:00.420 |
it couldn't, it wasn't powerful enough to play chess, 00:13:04.220 |
but I wrote a program for it to play Othello, 00:13:06.620 |
or Reversi, it's sometimes called, I think, in the US. 00:13:11.780 |
but I used all of the principles that chess programs had, 00:13:17.420 |
I remember that very well, I was around 12 years old. 00:13:25.540 |
and I was writing games professionally, designing games, 00:13:35.660 |
And it sold millions of copies around the world, 00:13:40.980 |
even though it was relatively simple by today's AI standards, 00:13:44.460 |
was reacting to the way you as the player played it. 00:13:49.220 |
so it was one of the first types of games like that, 00:13:52.660 |
and it meant that every game you played was unique. 00:14:02.180 |
from a game design, human enjoyment perspective, 00:14:06.540 |
really impressive AI that you've seen in games, 00:14:09.660 |
and maybe what does it take to create AI system, 00:14:14.220 |
So a million questions, just as a brief tangent. 00:14:18.340 |
- Well, look, I think games have been significant 00:14:26.100 |
and training myself on games when I was a kid. 00:14:28.780 |
Then I went through a phase of designing games 00:14:40.060 |
and the reason I was doing that in games industry 00:14:47.180 |
So whether it was graphics with people like John Carmack 00:14:53.060 |
I think actually all the action was going on in games. 00:14:56.140 |
And we're still reaping the benefits of that, 00:14:58.460 |
even with things like GPUs, which I find ironic, 00:15:01.500 |
was obviously invented for graphics, computer graphics, 00:15:03.700 |
but then turns out to be amazingly useful for AI. 00:15:06.220 |
It just turns out everything's a matrix multiplication, 00:15:11.140 |
So I think games at the time had the most cutting edge AI, 00:15:15.780 |
and a lot of the games, I was involved in writing, 00:15:19.780 |
so there was a game called "Black and White," 00:15:24.820 |
is the most impressive example of reinforcement learning 00:15:30.500 |
So in that game, you trained a little pet animal. 00:15:37.620 |
So if you treated it badly, then it became mean, 00:15:47.220 |
But if you were kind to it, then it would be kind. 00:15:49.380 |
And people were fascinated by how that worked, 00:16:02.300 |
in the choices you make, can define where you end up, 00:16:07.300 |
and that means all of us are capable of the good, evil. 00:16:15.260 |
along the trajectory to those places that you make. 00:16:19.060 |
I mean, games can do that philosophically to you, 00:16:22.540 |
- Yeah, well, games are, I think, a unique medium, 00:16:26.580 |
you're not just passively consuming the entertainment, 00:16:30.080 |
right, you're actually actively involved as an agent. 00:16:34.300 |
So I think that's what makes it, in some ways, 00:16:40.020 |
So the second, so that was designing AI in games, 00:16:50.940 |
for proving out AI algorithms and developing AI algorithms. 00:16:55.020 |
And that was a sort of a core component of our vision 00:17:03.220 |
as our main testing ground, certainly to begin with, 00:17:08.600 |
and also, you know, it's very easy to have metrics 00:17:15.900 |
and whether you're making incremental improvements. 00:17:20.420 |
in something that humans did for a long time beforehand, 00:17:36.860 |
we've been playing it for thousands of years, 00:17:39.760 |
and often they have scores or at least win conditions. 00:17:43.340 |
So it's very easy for reward learning systems 00:17:46.500 |
It's very easy to specify what that reward is. 00:17:49.340 |
And also at the end, it's easy to test externally, 00:17:58.140 |
the world's strongest players at those games. 00:18:08.260 |
So I think there's a huge reason why we were so successful 00:18:14.740 |
how come we were able to progress so quickly, 00:18:18.880 |
And, you know, at the beginning of "DeepMind," 00:18:24.580 |
who I knew from my previous lives in the games industry, 00:18:28.020 |
and that helped to bootstrap us very quickly. 00:18:33.860 |
almost at a philosophical level of man versus machine 00:18:41.220 |
And especially given that the entire history of AI 00:18:43.620 |
is defined by people saying it's gonna be impossible 00:18:45.980 |
to make a machine that beats a human being in chess. 00:18:50.980 |
And then once that happened, people were certain 00:18:54.580 |
when I was coming up in AI that Go is not a game 00:18:58.020 |
that can be solved because of the combinatorial complexity. 00:19:10.180 |
And so then there's something compelling about facing, 00:19:14.900 |
sort of taking on the impossibility of that task 00:19:18.140 |
from the AI researcher perspective, engineer perspective, 00:19:23.140 |
and then as a human being just observing this whole thing, 00:19:27.020 |
your beliefs about what you thought was impossible 00:19:40.500 |
It's humbling to realize that the things we think 00:19:43.140 |
are impossible now perhaps will be done in the future. 00:19:46.980 |
There's something really powerful about a game, 00:20:08.780 |
both as the AI, you know, creators of the AI, 00:20:25.340 |
and being obviously heavily, heavily involved in that. 00:20:28.500 |
But, you know, as you say, chess has been the, 00:20:39.500 |
and I think he's right, because chess has been 00:20:49.580 |
starting with Turing and Claude Shannon and all those, 00:20:58.820 |
I've got original edition of Claude Shannon's 00:21:03.980 |
the original sort of paper, and they all did that. 00:21:13.740 |
So he had to run, he had to be the computer, right? 00:21:16.020 |
So he literally, I think, spent two or three days 00:21:18.900 |
running his own program by hand with pencil and paper 00:21:21.340 |
and playing a friend of his with his chess program. 00:21:24.940 |
So of course, Deep Blue was a huge moment beating Kasparov. 00:21:31.820 |
I remember that very, very vividly, of course, 00:21:36.580 |
all the things I loved, and I was at college at the time. 00:21:43.020 |
than I was by Deep Blue, because here was Kasparov 00:21:46.420 |
with his human mind, not only could he play chess 00:21:55.780 |
ride a bike, talk many languages, do politics, 00:21:58.140 |
all the rest of the amazing things that Kasparov does. 00:22:00.860 |
And so with the same brain, and yet Deep Blue, 00:22:17.820 |
Like, it couldn't even play a strictly simpler game 00:22:21.300 |
So something to me was missing from intelligence 00:22:25.900 |
from that system that we would regard as intelligence. 00:22:32.180 |
So, and that's obviously what we tried to do with AlphaGo. 00:22:36.140 |
- Yeah, with AlphaGo and AlphaZero and MuZero 00:22:50.180 |
you've proposed that from a game design perspective, 00:22:53.420 |
the thing that makes chess compelling as a game 00:23:13.460 |
and actually a lot of even amazing chess players 00:23:23.100 |
And I think a critical reason is the dynamicness 00:23:27.580 |
of the different kind of chess positions you can have, 00:23:29.980 |
whether they're closed or open and other things 00:23:36.500 |
the capabilities of the bishop and knight are 00:23:46.100 |
So they're both roughly worth three points each. 00:23:48.740 |
- So you think that dynamics is always there, 00:23:57.700 |
But the fact that it's got to this beautiful equilibrium 00:24:22.100 |
but now you aim for a different type of position. 00:24:24.020 |
If you have the knight, you want a closed position. 00:24:26.060 |
If you have the bishop, you want an open position. 00:24:30.940 |
- So some kind of controlled creative tension. 00:24:35.980 |
do you think AI systems could eventually design games 00:24:42.940 |
Sometimes I get asked about AI and creativity, 00:24:45.980 |
and the way I answer that is relevant to that question, 00:24:48.860 |
which is that I think there are different levels 00:24:59.300 |
then I think the kind of lowest level of creativity 00:25:11.380 |
and then you say, "Give me an average-looking cat," right? 00:25:20.420 |
So AlphaGo played millions of games of Go against itself, 00:25:32.820 |
even though we've played it for thousands of years 00:25:41.060 |
which is, you know, you could call out-of-the-box thinking 00:25:44.020 |
or true innovation, which is, could you invent Go, right? 00:25:48.500 |
And not just come up with a brilliant chess move 00:25:58.900 |
but what's missing is how would you even specify that task 00:26:05.420 |
if I was telling a human to do it or a games designer, 00:26:27.500 |
And also it can be completed in three or four hours 00:26:30.820 |
of gameplay time, which is, you know, useful for our, 00:26:35.980 |
And so you might specify these sort of high-level concepts 00:26:43.460 |
one could imagine that Go satisfies those constraints. 00:26:53.740 |
high-level abstract notions like that yet to our AI systems. 00:26:57.500 |
And I think there's still something missing there 00:26:59.500 |
in terms of high-level concepts or abstractions 00:27:03.700 |
and that are, you know, combinable and compositional. 00:27:18.500 |
with complicated objectives around those rule sets, 00:27:21.220 |
we can't currently do, but you could take a specific rule set 00:27:28.900 |
to see how long, just observe how an AI system 00:27:32.580 |
from scratch learns, how long is that journey of learning? 00:27:36.540 |
And maybe if it satisfies some of those other things 00:27:39.700 |
you mentioned in terms of quickness to learn and so on, 00:27:46.820 |
then you could say that this is a promising game. 00:27:49.860 |
But it would be nice to do almost like alpha codes 00:27:55.740 |
that automate even that part of the generation of rules. 00:28:05.660 |
if you could have a system that takes your game, 00:28:09.180 |
plays it tens of millions of times, maybe overnight, 00:28:13.820 |
So it tweaks the rules and maybe the equations 00:28:18.060 |
and the parameters so that the game is more balanced, 00:28:22.700 |
the units in the game or some of the rules could be tweaked. 00:28:30.780 |
or something like that to sort of explore it. 00:28:33.380 |
And I think that would be super powerful tool actually 00:28:42.140 |
from hundreds of games, human games testers normally 00:28:57.260 |
you know, you might better do that like overnight. 00:29:10.740 |
It's only the sort of game I would love to make is, 00:29:13.060 |
and I've tried, you know, in my games career, 00:29:17.460 |
my first big game was designing a theme park, 00:29:23.700 |
I tried to, you know, have games where we designed 00:29:36.140 |
- Sim Earth, I haven't actually played that one. 00:29:37.580 |
So what is it, does it incorporate of evolution or? 00:29:42.660 |
it tries to, it sort of treats it as an entire biosphere, 00:29:48.100 |
- It'd be nice to be able to sort of zoom in, 00:29:51.740 |
So obviously it couldn't do, that was in the, 00:29:54.940 |
So it couldn't, you know, it wasn't able to do that. 00:30:01.460 |
- On that topic, do you think we're living in a simulation? 00:30:06.620 |
- We're gonna jump around from the absurdly philosophical 00:30:13.780 |
is a little bit complex because there is simulation theory, 00:30:24.700 |
So in the sense that are we in some sort of computer game 00:30:38.420 |
I think that, but I do think that we might be, 00:30:43.340 |
that the best way to understand physics and the universe 00:30:49.260 |
So understanding it as an information universe 00:30:52.380 |
and actually information being the most fundamental 00:30:55.700 |
unit of reality rather than matter or energy. 00:31:07.340 |
I'd actually say information, which of course itself 00:31:13.540 |
Matter is actually just, we're just out the way our bodies 00:31:16.860 |
and all the molecules in our body are arranged 00:31:19.700 |
So I think information may be the most fundamental way 00:31:23.060 |
to describe the universe and therefore you could say 00:31:26.580 |
we're in some sort of simulation because of that. 00:31:43.420 |
- And you just mean treating the universe as a computer 00:31:52.180 |
is a good way to solve the problems of physics, 00:31:54.820 |
of chemistry, of biology and perhaps of humanity and so on. 00:32:02.180 |
in terms of information theory might be the best way 00:32:09.340 |
- From our understanding of a universal Turing machine, 00:32:35.740 |
with Sir Roger Penrose and obviously he's famously, 00:32:45.220 |
and they were pretty influential in the '90s. 00:32:55.740 |
I think about what we're doing actually at DeepMind 00:33:05.920 |
What are the limits of what classical computing can do? 00:33:09.340 |
Now, and at the same time, I've also studied neuroscience 00:33:19.160 |
from a neuroscience or biological perspective? 00:33:24.460 |
and most mainstream biologists and neuroscientists 00:33:26.380 |
would say there's no evidence of any quantum systems 00:33:30.740 |
As far as we can see, it can be mostly explained 00:33:40.620 |
And then at the same time, there's the raising of the water, 00:33:44.300 |
the bar, from what classical Turing machines can do. 00:33:54.000 |
I think AI, especially in the last decade plus, 00:34:14.760 |
to bet against how far the universal Turing machine 00:34:23.400 |
And my betting would be that all of, certainly, 00:34:27.280 |
what's going on in our brain can probably be mimicked 00:34:34.720 |
not requiring something metaphysical or quantum. 00:34:38.400 |
- And we'll get there with some of the work with AlphaFold, 00:35:06.200 |
- Well, look, I think it's, you say it's a few, 00:35:12.760 |
it is just a few pounds of mush in our skulls, 00:35:15.040 |
and yet it's also, our brains are the most complex objects 00:35:40.440 |
I think by building an intelligent artifact like AI, 00:35:50.960 |
that we've always wondered about since the dawn of history, 00:35:53.480 |
like consciousness, dreaming, creativity, emotions. 00:36:00.760 |
We've wondered about them since the dawn of humanity, 00:36:05.940 |
and, you know, I love philosophy and philosophy of mind, 00:36:09.920 |
is there haven't been the tools for us to really, 00:36:19.360 |
But now, suddenly we have a plethora of tools. 00:36:21.720 |
Firstly, we have all of the neuroscience tools, 00:36:23.240 |
fMRI machines, single-cell recording, all of this stuff, 00:36:25.900 |
but we also have the ability, computers and AI, 00:36:34.720 |
I think it is amazing what the human mind does, 00:36:41.120 |
and I think it's amazing that, without human minds, 00:36:49.880 |
I think that's also a testament to the human mind. 00:37:05.760 |
maybe we're the mechanism by which the universe 00:37:13.360 |
So let's go to the basic building blocks of biology 00:37:19.400 |
at which you can start to understand the human mind, 00:37:30.480 |
you can construct bigger and bigger, more complex systems, 00:37:33.080 |
maybe one day the entirety of the human biology. 00:37:39.680 |
to be impossible to solve, which is protein folding. 00:37:50.320 |
I think it's one of the biggest breakthroughs, 00:37:53.400 |
certainly in the history of structural biology, 00:38:04.840 |
And then we can ask some fascinating questions after. 00:38:15.780 |
which is, proteins are essential to all life. 00:38:18.840 |
Every function in your body depends on proteins. 00:38:21.520 |
Sometimes they're called the workhorses of biology. 00:38:26.640 |
I've been researching proteins and structural biology 00:38:31.760 |
they're amazing little bio-nano-machines proteins. 00:38:39.000 |
And proteins are specified by their genetic sequence, 00:38:44.280 |
So you can think of it as their genetic makeup. 00:39:08.560 |
And also, if you're interested in drugs or disease, 00:39:17.040 |
about to block something the protein's doing, 00:39:34.840 |
And that's, in essence, the protein folding problem is, 00:39:47.160 |
And this has been a grand challenge in biology 00:39:53.200 |
by Christian Anfiensen, a Nobel Prize winner in 1972, 00:39:59.280 |
And he just speculated this should be possible 00:40:01.900 |
to go from the amino acid sequence to the 3D structure. 00:40:12.320 |
- You should, as somebody that very well might win 00:40:30.640 |
It should be, I'll have to remember that for future. 00:40:33.520 |
So yeah, so he set off with this one throwaway remark, 00:40:36.280 |
just like Fermat, he set off this whole 50-year field, 00:40:46.240 |
They hadn't really got very far with doing this. 00:40:52.500 |
this is done experimentally, very painstakingly. 00:40:59.840 |
Some proteins can't be crystallized like membrane proteins. 00:41:03.080 |
And then you have to use very expensive electron microscopes 00:41:08.200 |
really painstaking work to get the 3D structure 00:41:26.400 |
And so over Christmas, we did the whole human proteome, 00:41:30.240 |
or every protein in the human body, all 20,000 proteins. 00:41:47.960 |
Should they put all of their effort in or not? 00:41:53.280 |
- And so there's a data set on which it's trained 00:41:58.800 |
First of all, it's incredible that a protein, 00:42:03.760 |
in some kind of distributed way and do it very quickly. 00:42:08.840 |
And they evolved that way 'cause in the beginning, 00:42:18.900 |
like they evolved to have many of these proteins. 00:42:22.720 |
And those proteins figure out how to be computers themselves 00:42:28.520 |
that can interact in complex ways with each other 00:42:32.620 |
I mean, it's a weird system that they figured it out. 00:42:36.320 |
I mean, maybe we should talk about the origins of life too. 00:42:38.960 |
But proteins themselves, I think, are magical 00:42:41.140 |
and incredible, as I said, little bio-nano machines. 00:42:45.760 |
And actually, Leventhal, who is another scientist, 00:42:51.000 |
a contemporary of Amundsen, he coined this Leventhal, 00:43:11.480 |
So there's 10 to the power 300 different ways 00:43:14.800 |
And yet somehow, in nature, physics solves this 00:43:25.600 |
So physics is somehow solving that search problem. 00:43:29.080 |
- And just to be clear, in many of these cases, 00:43:33.040 |
there's often a unique way for that sequence to form itself. 00:43:43.560 |
in some cases, there might be a misfunction, so on, 00:43:47.780 |
which leads to a lot of the disorders and stuff like that. 00:44:01.880 |
And as you say, in disease, so for example, Alzheimer's, 00:44:05.440 |
one conjecture is that it's because of misfolded protein, 00:44:09.040 |
a protein that folds in the wrong way, amyloid beta protein. 00:44:12.040 |
So, and then because it folds in the wrong way, 00:44:27.600 |
Of course, the next step is sometimes proteins change shape 00:44:32.160 |
So they're not just static, necessarily, in biology. 00:44:46.160 |
Because unlike games, this is real physical systems 00:44:51.160 |
that are less amenable to self-play type of mechanisms. 00:44:59.760 |
so you have to be very clever about certain things. 00:45:06.680 |
and what are some beautiful aspects about the solution? 00:45:09.920 |
- Yeah, I would say alpha fold is the most complex 00:45:15.860 |
So it's been an amazing time actually in the last, 00:45:18.400 |
you know, two, three years to see that come through 00:45:30.400 |
not just to crack games, it was just to build, 00:45:33.160 |
use them to bootstrap general learning systems 00:45:35.320 |
we could then apply to real world challenges. 00:45:37.460 |
Specifically, my passion is scientific challenges 00:45:49.000 |
and the amount of innovations that had to go into it, 00:45:53.060 |
different component algorithms needed to be put together 00:46:00.800 |
kind of building in some hard coded constraints 00:46:07.760 |
to constrain sort of things like the bond angles 00:46:18.040 |
So still allowing the system to be able to learn 00:46:21.000 |
the physics itself from the examples that we had. 00:46:37.120 |
which is much less than normally we would like to use. 00:46:41.140 |
But using various tricks, things like self-distillation. 00:46:51.000 |
we put them back into the training set, right? 00:46:58.400 |
So there was actually a huge number of different 00:47:06.080 |
Alpha fold one, what it produced was a distogram. 00:47:09.720 |
So a kind of a matrix of the pair wise distances 00:47:17.760 |
And then there had to be a separate optimization process 00:47:26.900 |
So we went straight from the amino acid sequence of bases 00:47:33.880 |
without going through this intermediate step. 00:47:36.080 |
And in machine learning, what we've always found is that 00:47:39.040 |
the more end to end you can make it, the better the system. 00:47:48.520 |
than we are as the human designers of specifying it. 00:47:55.360 |
you're really looking for, in this case, the 3D structure, 00:47:58.400 |
you're better off than having this intermediate step, 00:48:00.520 |
which you then have to handcraft the next step for. 00:48:03.320 |
So it's better to let the gradients and the learning 00:48:06.120 |
flow all the way through the system from the endpoint, 00:48:11.960 |
I mean, you problem handcraft a bunch of stuff, 00:48:18.640 |
or a small learning piece and grow that learning piece 00:48:39.640 |
but it was specifically trained to only play Go, right? 00:48:57.960 |
from random starting point from the beginning. 00:49:00.280 |
So that removed the need for human knowledge about Go. 00:49:03.720 |
And then finally AlphaZero then generalized it 00:49:05.960 |
so that any things we had in there, the system, 00:49:08.920 |
including things like symmetry of the Go board were removed. 00:49:19.640 |
was then extending it so that you didn't even have to give it 00:49:27.760 |
- So that line of AlphaGo, AlphaGo Zero, AlphaZero, MuZero, 00:49:31.840 |
that's the full trajectory of what you can take 00:49:34.200 |
from imitation learning to full self-supervised learning. 00:49:41.640 |
And learning the entire structure of the environment 00:49:47.640 |
And bootstrapping it through self-play yourself. 00:49:51.840 |
But the thing is it would have been impossible, I think, 00:49:53.720 |
or very hard for us to build AlphaZero or MuZero first 00:49:58.600 |
- Even psychologically, because you have to believe 00:50:04.640 |
'cause a lot of people say that it's impossible. 00:50:06.720 |
- Exactly, so it was hard enough just to do Go. 00:50:08.640 |
As you were saying, everyone thought that was impossible 00:50:10.760 |
or at least a decade away from when we did it back in 2015, 00:50:17.320 |
And so, yes, it would have been psychologically 00:50:20.960 |
probably very difficult as well as the fact that, 00:50:23.080 |
of course, we learn a lot by building AlphaGo first. 00:50:29.920 |
It's one of the most fascinating science disciplines, 00:50:32.280 |
but it's also an engineering science in the sense that, 00:50:34.680 |
unlike natural sciences, the phenomenon you're studying 00:50:42.480 |
and then you can study and pull it apart and how it works. 00:50:50.000 |
'cause you probably will say it's everything, 00:50:54.360 |
because you're in a very interesting position 00:50:56.480 |
where DeepMind is a place of some of the most brilliant ideas 00:51:01.760 |
but it's also a place of brilliant engineering. 00:51:08.040 |
this big goal for DeepMind, how much of it is science? 00:51:16.160 |
How much is the hardware compute infrastructure? 00:51:19.840 |
How much is it the software compute infrastructure? 00:51:27.240 |
And like just the humans interacting certain kinds of ways. 00:51:40.680 |
like if we go forward 200 years and look back, 00:51:43.200 |
what was the key thing that solved intelligence? 00:52:05.120 |
I don't know if you remember back to your MIT days, 00:52:14.200 |
We tried this hard in the '90s at places like MIT, 00:52:17.040 |
mostly using logic systems and old-fashioned sort of, 00:52:34.740 |
because at least you know you're on a unique track 00:52:37.860 |
Even if all of your professors are telling you you're mad. 00:52:48.940 |
given that it's the biggest sort of buzzword in VCs 00:52:51.540 |
and fundraising's easy and all these kinds of things today. 00:53:01.080 |
what were the sort of founding tenets of DeepMind, 00:53:08.680 |
So deep learning, you know, Jeff Hinton and co. 00:53:20.160 |
had advanced quite a lot in the decade prior, 00:53:29.700 |
and sort of representations maybe that the brain uses. 00:53:33.420 |
So at a systems level, not at a implementation level. 00:53:36.900 |
And then the other big things were compute and GPUs, right? 00:53:41.060 |
So we could see a compute was gonna be really useful 00:53:44.180 |
and it got to a place where it become commoditized, 00:53:50.780 |
And then the final thing was also mathematical 00:53:57.620 |
which Shane worked on with his supervisor, Marcus Hutter, 00:54:00.220 |
which is this sort of theoretical proof really 00:54:05.340 |
which is actually a reinforcement learning system. 00:54:07.940 |
In the limit, I mean, it assumes infinite compute 00:54:12.960 |
But I was also waiting to see something like that too, 00:54:15.900 |
to, you know, like Turing machines and computation theory 00:54:19.500 |
that people like Turing and Shannon came up with 00:54:24.820 |
You know, I was waiting for a theory like that 00:54:32.180 |
that to me was a sort of final piece of the jigsaw. 00:54:36.420 |
I would say that ideas were the most important, you know, 00:54:48.100 |
three or four from, if you think from 2010 till now, 00:55:01.680 |
I think engineering becomes more and more important 00:55:05.180 |
and data because scale and of course the recent, 00:55:08.360 |
you know, results of GPT-3 and all the big language models 00:55:18.660 |
but perhaps not sufficient part of an AGI solution. 00:55:29.300 |
is sticking by ideas like reinforcement learning, 00:55:41.980 |
but proudly having the best researchers in the world 00:55:51.460 |
AGI or something like this, that speaking of MIT, 00:56:09.880 |
very small scale, not very ambitious projects. 00:56:21.780 |
and believing you can, that's really, really powerful. 00:56:28.020 |
to have great engineers, build great systems, 00:56:42.620 |
it was used to be solving step one, solve intelligence, 00:56:47.840 |
So if you can imagine pitching that to VC in 2010, 00:56:52.680 |
we managed to find a few kooky people to back us, 00:56:57.680 |
And I got to the point where we wouldn't mention it 00:57:05.920 |
So it was, there's a lot of things that we had to do, 00:57:13.240 |
one reason I've always believed in reinforcement learning 00:57:19.160 |
that is the way that the primate brain learns. 00:57:22.720 |
One of the main mechanisms is the dopamine system 00:57:26.440 |
It was a very famous result in the late '90s, 00:57:36.800 |
this is what I think you can use neuroscience for, 00:57:44.560 |
and you're, you know, it's blue sky research, 00:57:54.280 |
or give you confidence you're going in the right direction. 00:57:56.680 |
So that was one reason we pushed so hard on that. 00:58:03.160 |
the other big thing that I think we innovated with 00:58:05.360 |
at DeepMind to encourage invention and innovation 00:58:10.320 |
was the multidisciplinary organization we built, 00:58:16.680 |
of the most cutting edge knowledge in neuroscience 00:58:19.400 |
with machine learning, engineering, and mathematics, right? 00:58:24.120 |
And then since then, we've built that out even further. 00:58:26.760 |
So we have philosophers here and, you know, ethicists, 00:58:30.280 |
but also other types of scientists, physicists, and so on. 00:58:35.120 |
I tried to build a sort of new type of Bell Labs, 00:58:43.800 |
to try and foster this incredible sort of innovation machine. 00:58:55.560 |
coming together to try and build these learning systems. 00:58:58.880 |
- If we return to the big ambitious dream of AlphaFold 00:59:14.120 |
can you use to predict the structure and function 00:59:26.840 |
that eventually simulate something like the human brain 00:59:32.480 |
the mess of the beautiful, resilient mess of biology. 00:59:40.360 |
I think, you know, if you think about what are the things, 00:59:47.640 |
biology and curing diseases and understanding biology 00:59:52.160 |
was right up there, you know, top of my list. 00:59:54.080 |
That's one of the reasons I personally pushed that myself 01:00:02.960 |
And I hope it's evidence of what could be done 01:00:08.780 |
So, you know, AlphaFold solved this huge problem 01:00:12.160 |
of the structure of proteins, but biology is dynamic. 01:00:18.620 |
is protein-protein interaction, protein-ligand binding, 01:00:32.640 |
And I've been talking actually to a lot of biologists, 01:00:34.520 |
friends of mine, Paul Nurse, who runs the Crick Institute, 01:00:36.760 |
amazing biologist, Nobel Prize-winning biologist. 01:00:39.100 |
We've been discussing for 20 years now virtual cells. 01:00:42.100 |
Could you build a virtual simulation of a cell? 01:00:49.500 |
on the virtual cell, and then only at the last stage, 01:00:53.920 |
So you could, you know, in terms of the search space 01:01:18.360 |
of different parts of biology and the interactions. 01:01:20.780 |
And so, you know, every few years we talk about this, 01:01:27.860 |
I said, now's the time, we can finally go for it. 01:01:33.820 |
And he's very excited, and we have some collaborations 01:01:35.940 |
with his lab, they're just across the road actually 01:01:38.460 |
from us, it's just, you know, wonderful being here 01:01:40.380 |
in Kings Cross with the Crick Institute across the road. 01:01:45.980 |
I think there's gonna be some amazing advances in biology 01:01:50.980 |
We're already seeing that with the community doing that 01:02:02.300 |
is the perfect description language for physics. 01:02:09.180 |
because biology is so messy, it's so emergent, 01:02:15.220 |
I think, I find it very hard to believe we'll ever get 01:02:17.460 |
to something as elegant as Newton's laws of motions 01:02:26.060 |
- You have to start at the basic building blocks 01:02:51.860 |
But in this case, the rules are very difficult 01:03:09.100 |
and that's exactly the type of systems that we're building. 01:03:11.820 |
- So you mentioned you've open sourced AlphaFold 01:03:19.940 |
and a big thank you for open sourcing with JoCo, 01:03:24.860 |
that's often used for robotics research and so on. 01:03:35.140 |
very few companies or people do that kind of thing. 01:03:43.980 |
we felt that was the maximum benefit to humanity to do that 01:03:49.420 |
In one case, the robotics physics community with Mojoco. 01:03:55.540 |
- Yes, we purchased it for the express principle 01:04:05.740 |
and mostly we did it because the person building it 01:04:08.140 |
was not able to cope with supporting it anymore 01:04:13.540 |
He's an amazing professor who built it in the first place. 01:04:18.180 |
And then with AlphaFold, even bigger, I would say, 01:04:21.900 |
we decided that there were so many downstream applications 01:04:25.460 |
of AlphaFold that we couldn't possibly even imagine 01:04:34.300 |
and also fundamental research would be to give 01:04:45.220 |
what people have done that within just one year, 01:04:49.180 |
And it's being used by over 500,000 researchers have used it. 01:04:54.100 |
We think that's almost every biologist in the world. 01:04:56.500 |
I think there's roughly 500,000 biologists in the world, 01:04:59.940 |
have used it to look at their proteins of interest. 01:05:03.260 |
We've seen amazing fundamental research done. 01:05:08.980 |
there was a whole special history of science, 01:05:13.940 |
which is one of the biggest proteins in the body. 01:05:15.740 |
The nuclear pore complex is a protein that governs 01:05:18.900 |
all the nutrients going in and out of your cell nucleus. 01:05:31.620 |
And they've been looking to try and figure out 01:05:37.100 |
but it's too low resolution, there's bits missing. 01:05:39.540 |
And they were able to, like a giant Lego jigsaw puzzle, 01:05:43.060 |
use alpha fold predictions plus experimental data 01:05:46.140 |
and combined those two independent sources of information, 01:05:49.740 |
actually four different groups around the world 01:05:51.220 |
were able to put it together more or less simultaneously 01:06:01.420 |
has said that their teams are using alpha fold 01:06:03.740 |
to accelerate whatever drugs they're trying to discover. 01:06:08.020 |
So I think the knock on effect has been enormous 01:06:11.420 |
in terms of the impact that alpha fold has made. 01:06:15.220 |
- And it's probably bringing in, it's creating biologists, 01:06:24.580 |
And it's almost like a gateway drug to biology. 01:06:29.620 |
And to get more computational people involved too, hopefully. 01:06:32.660 |
And I think for us, the next stage, as I said, 01:06:35.980 |
future we have to have other considerations too. 01:06:49.020 |
to actually get the most resources and impact behind it. 01:06:51.740 |
In other ways, some other projects we'll do non-profit style. 01:06:55.300 |
And also we have to consider for future things as well, 01:06:58.540 |
safety and ethics as well, like synthetic biology, 01:07:01.620 |
there is dual use and we have to think about that as well. 01:07:05.100 |
With alpha fold, we consulted with 30 different bioethicists 01:07:10.260 |
to make sure it was safe before we released it. 01:07:13.300 |
So there'll be other considerations in future. 01:07:15.300 |
But for right now, I think alpha fold is a kind of a gift 01:07:20.860 |
- So I'm pretty sure that something like alpha fold 01:07:29.140 |
But us humans, of course, are horrible with credit assignment 01:07:34.540 |
Do you think there will be a day when AI system 01:07:39.380 |
can't be denied that it earned that Nobel prize? 01:07:45.140 |
Do you think we will see that in 21st century? 01:07:47.460 |
- It depends what type of AIs we end up building, 01:07:53.580 |
who specifies the goals, who comes up with the hypotheses, 01:08:00.860 |
- And tweets about it, announcement of the results. 01:08:02.420 |
- Yes, it's announced the results exactly as part of it. 01:08:07.900 |
it's amazing human ingenuity that's behind these systems 01:08:12.180 |
and then the system in my opinion is just a tool. 01:08:21.140 |
I mean, it's clearly Galileo building the tool 01:08:27.340 |
even though these tools learn for themselves. 01:08:32.940 |
and the things we're building as the ultimate tools 01:08:38.580 |
to help us as scientists acquire new knowledge. 01:08:46.340 |
or come up with something like general relativity 01:08:48.780 |
of its own bat, not just by averaging everything 01:08:52.020 |
on the internet or averaging everything on PubMed. 01:08:58.500 |
So that to me is a bit like our earlier debate 01:09:03.220 |
rather than just coming up with a good Go move. 01:09:15.740 |
and sort of invent that new conjecture out of the blue 01:09:19.300 |
rather than being specified by the human scientists 01:09:23.580 |
So I think right now it's definitely just a tool. 01:09:27.900 |
by averaging everything on the internet, like you said, 01:09:33.140 |
as you're always standing on the shoulders of giants. 01:09:35.620 |
And the question is how much are you really reaching 01:09:42.020 |
Maybe it's just a simulating different kinds of results 01:09:46.260 |
of the past with ultimately this new perspective 01:09:54.380 |
in the way that it can't be already discovered 01:09:56.980 |
Maybe the Nobel prizes of the next hundred years 01:10:00.060 |
are already all there on the internet to be discovered. 01:10:04.540 |
I mean, I think this is one of the big mysteries I think 01:10:08.940 |
is that, first of all, I believe a lot of the big, 01:10:14.380 |
in the next few decades, and even in the last decade, 01:10:20.140 |
where there'll be some new connection that's found 01:10:26.140 |
And one can even think of DeepMind, as I said earlier, 01:10:28.780 |
as a sort of interdisciplinary between neuroscience ideas 01:10:37.900 |
And then one of the things we can't imagine today is, 01:10:41.660 |
we were so surprised by how well large models worked, 01:10:44.380 |
is that actually it's very hard for our human minds, 01:10:49.380 |
what it would be like to read the whole internet, right? 01:10:58.420 |
And I think our minds can just about comprehend 01:11:01.860 |
but the whole internet is beyond comprehension. 01:11:04.380 |
So I think we just don't understand what it would be like 01:11:07.380 |
to be able to hold all of that in mind, potentially, right? 01:11:19.220 |
But I do think there is this other type of creativity, 01:11:26.620 |
can't be averaged from things that are known, 01:11:35.380 |
but just a unique way of putting those things together. 01:11:38.260 |
I think some of the greatest scientists in history 01:11:42.180 |
although it's very hard to know, going back to their time, 01:11:45.060 |
what was exactly known when they came up with those things. 01:12:07.020 |
I don't think you can visually, truly comprehend 01:12:20.020 |
You can probably construct thought experiments 01:12:22.060 |
based on that, like simulate different ideas. 01:12:25.860 |
So if this is true, let me run this thought experiment, 01:12:40.100 |
but Einstein would do the same kind of things 01:12:43.700 |
- Yeah, one could imagine doing that systematically 01:12:52.980 |
to be discovered like that that are hugely useful. 01:12:57.580 |
in material science, like room temperature superconductors 01:13:01.500 |
that I'd like to have an AI system to help build, 01:13:17.100 |
- So speaking of which, you have a paper on nuclear fusion, 01:13:24.700 |
So you're seeking to solve nuclear fusion with deep RL. 01:13:29.780 |
So it's doing control of high temperature plasmas. 01:13:36.340 |
- (laughs) It's been very fun last year or two, 01:13:39.380 |
and very productive because we've been taking off 01:14:01.740 |
as being one of the biggest places I think AI can help with. 01:14:09.220 |
And fusion is one area I think AI can help with. 01:14:22.700 |
and whenever we go into a new field to apply our systems, 01:14:36.340 |
the Swiss Technical Institute, who are amazing. 01:14:46.060 |
I was impressed they managed to persuade them 01:14:49.060 |
And it's an amazing test reactor they have there. 01:14:53.380 |
And they try all sorts of pretty crazy experiments on it. 01:15:06.980 |
that are still stopping fusion working today? 01:15:09.260 |
And then we look at, we get a fusion expert to tell us, 01:15:14.580 |
which ones are amenable to our AI methods today. 01:15:18.940 |
- And would be interesting from a research perspective, 01:15:22.220 |
from our point of view, from an AI point of view. 01:15:24.420 |
And that would address one of their bottlenecks. 01:15:26.740 |
And in this case, plasma control was perfect. 01:15:29.700 |
So, the plasma, it's a million degrees Celsius, 01:15:34.660 |
And there's obviously no material that can contain it. 01:15:37.660 |
So they have to be containing these magnetic, 01:15:39.460 |
very powerful superconducting magnetic fields. 01:15:42.540 |
But the problem is plasma, it's pretty unstable, 01:15:45.700 |
You're kind of holding a mini sun, mini star in a reactor. 01:15:49.340 |
So, you kind of want to predict ahead of time 01:15:54.060 |
so you can move the magnetic field within a few milliseconds 01:15:58.140 |
to basically contain what it's gonna do next. 01:16:00.940 |
So it seems like a perfect problem, if you think of it, 01:16:03.140 |
for like a reinforcement learning prediction problem. 01:16:11.380 |
they were doing it with traditional operational 01:16:18.300 |
And the problem is, of course, they can't react in the moment 01:16:23.020 |
And again, knowing that that's normally our go-to solution 01:16:27.940 |
And they also had a simulator of these plasma. 01:16:34.740 |
- So, can AI eventually solve nuclear fusion? 01:16:39.740 |
and we published it in a Nature paper last year, 01:16:46.180 |
So actually, it's almost like carving the plasma 01:16:52.860 |
So, that's one of the problems of fusion sort of solved. 01:17:05.780 |
for the energy productions called droplets and so on. 01:17:19.380 |
- So, another fascinating place in a paper titled, 01:17:23.020 |
"Pushing the Frontiers of Density Functionals 01:17:30.900 |
the quantum mechanical behavior of electrons. 01:17:39.260 |
and simulate arbitrary quantum mechanical systems 01:17:42.420 |
- Yeah, so this is another problem I've had my eye on 01:17:47.180 |
which is sort of simulating the properties of electrons. 01:17:51.220 |
If you can do that, you can basically describe 01:17:54.300 |
how elements and materials and substances work. 01:18:13.220 |
to these functionals and kind of come up with descriptions 01:18:18.220 |
of the electron clouds, where they're gonna go, 01:18:26.780 |
learn a functional that will describe more chemistry, 01:18:31.780 |
So, until now, you can run expensive simulations, 01:18:35.580 |
but then you can only simulate very small molecules, 01:18:45.780 |
And we're building up towards building functionals 01:18:51.220 |
and then allow you to describe what the electrons are doing. 01:18:55.580 |
And all material sort of science and material properties 01:18:58.420 |
are governed by the electrons and how they interact. 01:19:01.340 |
- So, have a good summarization of the simulation 01:19:11.340 |
to what the actual simulation would come out with. 01:19:23.260 |
from the initial conditions and the parameters 01:19:25.180 |
of the simulation, learning what the functional would be? 01:19:31.260 |
the nice thing is we can run a lot of the simulations, 01:19:35.100 |
the molecular dynamic simulations on our compute clusters. 01:19:44.700 |
And that's why we use games, it's simulator generated data. 01:19:48.020 |
And we can kind of create as much of it as we want really. 01:19:55.180 |
we just run some of these calculations, right? 01:20:03.460 |
Simulations and protein simulations and other things. 01:20:06.180 |
And so, you know, when you're not searching on YouTube 01:20:11.260 |
we're using those computers usefully in quantum chemistry. 01:20:16.940 |
And then, yeah, and then all of that computational data 01:20:20.780 |
we can then try and learn the functionals from that, 01:20:30.540 |
- Do you think one day AI may allow us to do something like 01:20:36.340 |
So, do something like travel faster than the speed of light? 01:20:39.460 |
- My ultimate aim has always been with AI is, 01:20:42.940 |
the reason I am personally working on AI for my whole life, 01:20:46.260 |
it was to build a tool to help us understand the universe. 01:20:50.300 |
So I wanted to, and that means physics really, 01:21:10.020 |
This is one thing I find fascinating about science, 01:21:12.300 |
and as a huge proponent of the scientific method 01:21:15.060 |
as being one of the greatest ideas humanity's ever had 01:21:17.860 |
and allowed us to progress with our knowledge. 01:21:21.980 |
I think what you find is the more you find out, 01:21:41.740 |
these are all the fundamental things of nature. 01:21:47.340 |
- To live life, we pin certain assumptions on them 01:21:51.500 |
and kind of treat our assumptions as if they're fact. 01:22:06.780 |
and realize like, no, we have a bunch of assumptions. 01:22:11.540 |
There's a lot of uncertainty about exactly what is time. 01:22:17.500 |
You know, there's a lot of fundamental questions 01:22:21.180 |
And maybe AI allows you to not put anything on the shelf. 01:22:32.060 |
- Exactly, I think we should be truly open-minded about that 01:22:34.660 |
and exactly that, not be dogmatic to a particular theory. 01:22:51.780 |
at the beginning about the computational nature 01:22:53.500 |
of the universe, how one might, if that was true, 01:23:03.180 |
and others about, you know, how much information 01:23:05.740 |
can a specific Planck unit of space and time contain, right? 01:23:10.140 |
So one might be able to think about testing those ideas 01:23:36.700 |
much like you're doing in the quantum simulation work. 01:23:55.060 |
What do you, do you think AI will allow us to, 01:23:59.740 |
It's trying to understand the origin of life. 01:24:08.780 |
- Yeah, well, maybe I'll come to that in a second, 01:24:13.820 |
is to kind of use it to accelerate science to the maximum. 01:24:18.100 |
So I think of it a little bit like the tree of all knowledge. 01:24:25.820 |
And we sort of barely scratched the surface of that so far 01:24:38.620 |
And I want to explore as much of that tree of knowledge 01:24:49.660 |
but also potentially designing and building new tools, 01:24:54.840 |
and also running simulations and learning simulations, 01:25:00.220 |
we're sort of doing at a baby steps level here. 01:25:04.980 |
But I can imagine that in the decades to come 01:25:08.540 |
as what's the full flourishing of that line of thinking. 01:25:19.580 |
tree of knowledge for humans is much smaller. 01:25:21.980 |
In the set of all possible trees of knowledge, 01:25:35.740 |
we still won't be able to understand a lot of things. 01:25:41.140 |
might be able to reach farther, not just as tools, 01:25:51.780 |
that are sort of encapsulated in what you just said there. 01:25:55.020 |
I think, first of all, there's two different things. 01:26:08.620 |
you can think of them as three larger and larger trees 01:26:12.900 |
And I think with AI, we're gonna explore that whole lot. 01:26:19.140 |
what is the totality of what could be understood, 01:26:21.860 |
there may be some fundamental physics reasons 01:26:26.340 |
like what's outside a simulation or outside the universe. 01:26:29.040 |
Maybe it's not understandable from within the universe. 01:26:32.360 |
So there may be some hard constraints like that. 01:26:40.540 |
Our human brains are really used to this idea 01:26:47.820 |
They wouldn't have that limitation necessary. 01:26:49.780 |
They could think in 11 dimensions, 12 dimensions, 01:27:01.460 |
or we've talked about chess and these kinds of things, 01:27:07.580 |
you can't come up with the move Garry comes up with 01:27:14.220 |
- And you can understand post hoc the reasoning. 01:27:16.740 |
So I think there's an even further level of like, 01:27:19.460 |
well, maybe you couldn't have invented that thing, 01:27:24.340 |
perhaps you can understand and appreciate that. 01:27:27.100 |
Same way that you can appreciate Vivaldi or Mozart 01:27:30.220 |
or something without, you can appreciate the beauty of that 01:27:32.740 |
without being able to construct it yourself, right? 01:27:42.480 |
but you can imagine also one sign of intelligence 01:27:45.860 |
is the ability to explain things clearly and simply, right? 01:27:50.460 |
another one of my old time heroes used to say that, right? 01:27:52.420 |
If you can't, if you can explain it something simply, 01:27:55.620 |
then that's the best sign, a complex topic simply, 01:27:58.640 |
then that's one of the best signs of you understanding it. 01:28:01.520 |
I can see myself talking trash in the AI system in that way. 01:28:09.900 |
I was like, well, that means you're not intelligent 01:28:14.420 |
- Yeah, of course, there's also the other option, 01:28:16.700 |
of course, we could enhance ourselves and without devices. 01:28:19.580 |
We are already sort of symbiotic with our compute devices, 01:28:24.640 |
And there's stuff like Neuralink and Acceptra 01:28:30.020 |
So I think there's lots of really amazing possibilities 01:28:39.900 |
do you think there's a lot of alien civilizations out there? 01:28:51.460 |
and it's one of my hobbies, physics, I guess. 01:28:56.980 |
and talk to a lot of experts on and read a lot of books on. 01:29:00.860 |
And I think my feeling currently is that we are alone. 01:29:08.860 |
So, and the reasoning is I think that we've tried 01:29:16.220 |
and I guess since the dawning of the space age, 01:29:23.400 |
And if you think about and try to detect signals. 01:29:27.360 |
Now, if you think about the evolution of humans on earth, 01:29:30.220 |
we could have easily been a million years ahead 01:29:34.040 |
of our time now or a million years behind, right? 01:29:36.560 |
Easily with just some slightly different quirk 01:29:39.520 |
thing happening hundreds of thousands of years ago, 01:29:43.740 |
If the meteor had hit the dinosaurs a million years earlier, 01:29:48.200 |
We'd be a million years ahead of where we are now. 01:29:51.000 |
So what that means is if you imagine where humanity will be 01:29:54.160 |
in a few hundred years, let alone a million years, 01:29:59.920 |
solve things like climate change and other things, 01:30:02.280 |
and we continue to flourish and we build things like AI 01:30:05.720 |
and we do space traveling and all of the stuff 01:30:13.340 |
We will be spreading across the stars, right? 01:30:16.820 |
And Voigt-Neumann famously calculated, you know, 01:30:20.860 |
if you send out Voigt-Neumann probes to the nearest, 01:30:28.380 |
two more versions of themselves and set those two out 01:30:44.340 |
have thought about constructing Dyson spheres around stars 01:30:47.320 |
to collect all the energy coming out of the star. 01:30:49.840 |
You know, that, there would be constructions like that 01:31:00.780 |
that have gone out since the, you know, 30s and 40s, 01:31:06.780 |
And now hundreds of civilizations doing that. 01:31:12.240 |
we got technologically sophisticated enough in the space age, 01:31:19.160 |
We should have joined that cacophony of voices. 01:31:20.920 |
And what we did, we opened our ears and we heard nothing. 01:31:24.520 |
And many people who argue that there are aliens would say, 01:31:27.560 |
well, we haven't really done exhaustive search yet. 01:31:33.800 |
and we wouldn't notice what an alien form was like 01:31:36.120 |
'cause it'd be so different to what we're used to. 01:31:50.620 |
You know, there should be a lot of evidence for those things. 01:31:59.400 |
And we're, you know, there's some kind of global, 01:32:04.580 |
But like, look, we can't even coordinate humans 01:32:10.060 |
What is the chance that of all of these different 01:32:12.420 |
human civilization, you know, alien civilizations, 01:32:16.740 |
and agree across, you know, these kinds of matters. 01:32:21.860 |
and we were in some sort of safari for our own good, 01:32:27.620 |
Because what does it mean, the simulation hypothesis? 01:32:31.340 |
it means what we're seeing is not quite reality, right? 01:32:34.940 |
It's something, there's something more deeper underlying it, 01:32:42.540 |
and everything we were seeing was a hologram, 01:32:44.420 |
and it was projected by the aliens or whatever, 01:32:46.460 |
that to me is not much different than thinking 01:32:50.220 |
'Cause we still can't see true reality, right? 01:32:55.060 |
It could be that the way they're communicating 01:33:01.180 |
the much better methods of communication they have. 01:33:16.020 |
- The, I mean, it sounds like very kind of wild, 01:33:34.860 |
- It could be, but I don't see any sensible argument 01:33:37.340 |
to the why would all of the alien species behave this way? 01:33:48.700 |
Some would be aggressive, some would be, you know, 01:33:50.940 |
curious, others would be very historical and philosophical. 01:33:55.380 |
Because, you know, maybe they're a million years 01:33:57.140 |
older than us, but it's not, it shouldn't be like, 01:34:00.140 |
I mean, one alien civilization might be like that, 01:34:10.020 |
- It could be a violent dictatorship that the people, 01:34:13.020 |
the alien civilizations that become successful, 01:34:36.940 |
There's a question there, which was the hardest, 01:34:45.220 |
and maybe many others have reached this level, 01:34:47.680 |
the great filter that's prevented them from going farther, 01:34:57.740 |
And those are really important questions for us, 01:35:00.260 |
whether there's other alien civilizations out there or not, 01:35:12.260 |
- Yeah, well, you know, these are big questions, 01:35:15.620 |
but the interesting thing is that if we're alone, 01:35:19.860 |
that's somewhat comforting from the great filter perspective, 01:35:22.280 |
because it probably means the great filters are past us, 01:35:26.540 |
So going back to your origin of life question, 01:35:32.060 |
Like obviously the first life form from chemical soup, 01:35:39.020 |
I wouldn't be that surprised if we saw single cell 01:35:42.460 |
sort of life forms elsewhere, bacteria type things. 01:35:45.740 |
But multicellular life seems incredibly hard, 01:35:50.500 |
and then sort of using that as part of yourself, 01:35:54.260 |
- Would you say that's the biggest, the most, 01:36:08.540 |
- I think that was probably the one that's the biggest. 01:36:11.860 |
"The 10 Great Inventions of Evolution" by Nick Lane, 01:36:37.140 |
but there's certainly for the early candidates, 01:36:47.960 |
Is it that we started cooking meat over fire? 01:37:12.060 |
to defeat the dictator, the authoritarian alpha male 01:37:31.300 |
So that's clearly important for energy efficiency, 01:37:46.340 |
I think you're right about the tribal cooperation aspects 01:38:08.340 |
especially in the Americas when humans arrived. 01:38:11.360 |
So you can imagine once you discover tool usage, 01:38:18.020 |
So I think all of those could have been explanations for it. 01:38:22.700 |
that it's a bit like general intelligence too, 01:38:24.560 |
is it's very costly to begin with to have a brain 01:38:44.060 |
Just playing a game of serious high-level chess, 01:38:46.380 |
which you wouldn't think, just sitting there. 01:38:52.080 |
So in order for an animal or an organism to justify that, 01:39:00.320 |
or half intelligence, say an IQs of like a monkey brain, 01:39:05.320 |
it's not clear you can justify that evolutionary 01:39:15.640 |
which is why I think it's only been done once 01:39:17.200 |
from the sort of specialized brains that you see in animals 01:39:25.180 |
And which allows us to invent the modern world. 01:39:33.660 |
And I think we've seen the same with AI systems, 01:39:38.260 |
it's always been easier to craft a specific solution 01:39:42.380 |
than it has been to build a general learning system 01:39:46.340 |
'Cause initially, that system will be way worse 01:39:49.520 |
than less efficient than the specialized system. 01:39:52.160 |
- So one of the interesting quirks of the human mind 01:39:55.880 |
of this evolved system is that it appears to be conscious. 01:40:08.760 |
that it feels like something to eat a cookie, 01:40:17.940 |
we also need to solve consciousness along the way? 01:40:20.700 |
Do you think AGI systems need to have consciousness 01:40:27.980 |
- Yeah, we thought about this a lot actually. 01:40:29.640 |
And I think that my guess is that consciousness 01:40:35.800 |
So you can have one without the other both ways. 01:40:38.360 |
And I think you can see that with consciousness in that, 01:40:44.160 |
if you have a pet dog or something like that, 01:40:50.240 |
have self-awareness and are very sociable, seem to dream. 01:40:55.240 |
Those kinds of, a lot of the traits one would regard 01:41:05.120 |
So they're not that intelligent by say IQ standards 01:41:08.960 |
- Yeah, it's also possible that our understanding 01:41:11.120 |
of intelligence is flawed, like putting an IQ to it. 01:41:17.360 |
is actually gone very far along the path of intelligence 01:41:24.840 |
- Right, but if we go back to the idea of AGI 01:41:27.040 |
and general intelligence, dogs are very specialized, right? 01:41:32.360 |
but they're like kind of elite sports people or something. 01:41:41.920 |
of the human population to feed them and service them. 01:41:46.440 |
- Yes, exactly, well we co-evolved to some crazy degree, 01:41:52.320 |
even wag their tails and twitch their noses, right? 01:41:57.520 |
But I think you can also see intelligence on the other side. 01:42:01.880 |
So systems like artificial systems that are amazingly smart 01:42:06.200 |
at certain things, like maybe playing Go and chess 01:42:11.760 |
in any shape or form conscious in the way that, 01:42:21.320 |
these intelligent constructs, is one of the best ways 01:42:25.440 |
to explore the mystery of consciousness, to break it down. 01:42:28.040 |
Because we're gonna have devices that are pretty smart 01:42:33.040 |
at certain things or capable of certain things, 01:42:40.800 |
And in fact, I would advocate, if there's a choice, 01:42:43.880 |
building systems in the first place, AI systems, 01:42:48.640 |
are just tools until we understand them better 01:42:53.960 |
- So on that topic, just not as the CEO of DeepMind, 01:42:58.320 |
just as a human being, let me ask you about this 01:43:01.480 |
one particular anecdotal evidence of the Google engineer 01:43:05.320 |
who made a comment or believed that there's some aspect 01:43:09.880 |
of a language model, the Lambda language model, 01:43:15.960 |
So you said you believe there might be a responsibility 01:43:21.120 |
And this experience of a particular engineer, 01:43:23.520 |
I think, I'd love to get your general opinion 01:43:25.880 |
on this kind of thing, but I think it will happen 01:43:28.000 |
more and more and more, which not when engineers, 01:43:40.600 |
deep, impactful interactions with us in a way 01:43:47.920 |
And we sure as heck feel like they're living entities, 01:43:51.960 |
self-aware entities, and maybe even we project sentience 01:43:56.040 |
So what's your thought about this particular system? 01:43:59.960 |
Have you ever met a language model that's sentient? 01:44:06.320 |
- What do you make of the case of when you kind of feel 01:44:10.160 |
that there's some elements of sentience to this system? 01:44:17.760 |
So first thing to say is I think that none of the systems 01:44:20.760 |
we have today, I would say, even have one iota 01:44:26.320 |
That's my personal feeling, interacting with them every day. 01:44:29.720 |
So I think this way premature to be discussing 01:44:34.160 |
I think at the moment it's more of a projection 01:44:36.480 |
of the way our own minds work, which is to see 01:44:39.080 |
sort of purpose and direction in almost anything that we, 01:44:44.360 |
you know, our brains are trained to interpret agency, 01:44:48.200 |
basically, in things, even inanimate things sometimes. 01:44:54.880 |
'cause language is so fundamental to intelligence, 01:44:57.080 |
it's gonna be easy for us to anthropomorphize that. 01:45:00.440 |
I mean, back in the day, even the first, you know, 01:45:23.240 |
the Turing test is a little bit flawed as a formal test 01:45:25.400 |
because it depends on the sophistication of the judge, 01:45:29.200 |
whether or not they are qualified to make that distinction. 01:45:38.280 |
people like Daniel Dennett and David Chalmers 01:45:43.640 |
Of course, consciousness itself hasn't been well, 01:45:51.040 |
you know, I kind of, the working definition I like is, 01:45:55.080 |
it's the way information feels when, you know, 01:46:07.760 |
I think we can obviously see from neuroscience 01:46:18.120 |
and set of coherent preferences that are coherent over time. 01:46:29.320 |
But the reason, the difficult thing I think for us 01:46:31.800 |
when we get, and I think this is a really interesting 01:46:33.400 |
philosophical debate, is when we get closer to AGI 01:46:37.280 |
and, you know, and much more powerful systems 01:46:52.080 |
that a human sentient or a sentient being would exhibit? 01:47:05.760 |
that makes us as humans regard each other as sentient. 01:47:15.600 |
which is that we're running on the same substrate. 01:47:18.040 |
Right, so if we're exhibiting the same behavior, 01:47:26.200 |
the squishy, you know, few pounds of flesh in our skulls, 01:47:29.560 |
then the most parsimonious, I think, explanation 01:47:32.800 |
is that you're feeling the same thing as I'm feeling. 01:47:35.040 |
Right, but we will never have that second part, 01:47:40.680 |
Right, so we will have to only judge based on the behavior. 01:47:45.920 |
is a critical part of why we make assumptions 01:47:51.040 |
high-level animals, why we think they might be, 01:47:52.680 |
'cause they're exhibiting some of the behaviors 01:47:58.680 |
So we're gonna have to come up with explanations 01:48:02.880 |
or models of the gap between substrate differences 01:48:16.080 |
When you have millions, perhaps billions of people 01:48:20.840 |
believing what that Google engineer believed, 01:48:48.240 |
with a system that's faking it before it makes it. 01:49:02.720 |
- Why are we constraining AIs to always be tools 01:49:09.560 |
these are fantastic questions and also critical ones. 01:49:19.800 |
and however remote that looked like back in 2010. 01:49:24.800 |
And we've always had sort of these ethical considerations 01:49:28.680 |
And my current thinking on the language models 01:49:36.720 |
And in terms of analysis tools and guardrails, 01:49:44.000 |
Because I think there are big, still ethical questions, 01:49:51.840 |
What do you do about answering those philosophical questions 01:49:55.800 |
about the feelings people may have about AI systems, 01:50:06.320 |
before you can responsibly deploy these systems at scale. 01:50:09.400 |
That will be at least be my current position. 01:50:12.320 |
Over time, I'm very confident we'll have those tools, 01:50:23.480 |
I think there it's important to look beyond just science. 01:50:28.480 |
That's why I think philosophy, social sciences, 01:50:31.720 |
even theology, other things like that come into it. 01:50:40.320 |
and to enhance that and the human condition, right? 01:50:51.640 |
solve many scientific problems, solve disease. 01:50:55.240 |
this is the amazing era I think we're heading into 01:51:00.800 |
We've already seen with things like social media, 01:51:05.920 |
firstly, by bad actors or naive actors or crazy actors, right? 01:51:10.920 |
So there's that set of just the common or garden misuse 01:51:18.000 |
And then of course, there's an additional thing 01:51:28.720 |
So I think these questions have to be approached 01:51:31.480 |
very carefully using the scientific method, I would say, 01:51:35.360 |
in terms of hypothesis generation, careful control testing, 01:51:44.400 |
if something goes wrong, it may cause a lot of harm 01:51:52.040 |
where if something goes wrong, it's relatively easy to fix 01:52:02.600 |
of like with a lot of power comes a lot of responsibility. 01:52:05.280 |
And I think that's the case here with things like AI 01:52:07.840 |
given the enormous opportunity in front of us. 01:52:14.120 |
and as many inputs into things like the design of the systems 01:52:22.360 |
I think as wide a group of voices as possible 01:52:26.760 |
to input into that and to have a say in that, 01:52:29.080 |
especially when it comes to deployment of these systems, 01:52:31.800 |
which is when the rubber really hits the road, 01:52:33.440 |
it really affects the general person in the street 01:52:37.360 |
And that's why I say, I think as a first step, 01:52:55.800 |
in order for us to allow us to carefully experiment 01:53:01.000 |
- So the leap between tool to sentient entity being 01:53:13.480 |
in the AI community, also one of the most kind 01:53:16.800 |
and if I may say sort of loved people in the community. 01:53:20.860 |
That said, creation of a super intelligent AI system 01:53:25.860 |
would be one of the most powerful things in the world, 01:53:37.560 |
power corrupts and absolute power corrupts absolutely. 01:53:53.240 |
Do you think about the corrupting nature of power 01:53:59.540 |
that as all dictators and people have caused atrocities 01:54:07.780 |
But they don't do good because the powers polluted 01:54:10.940 |
their mind about what is good and what is evil. 01:54:18.700 |
And I think what are the defenses against that? 01:54:24.820 |
and sort of humble no matter what you do or achieve. 01:54:38.100 |
I've always, I think trying to be a multidisciplinary person 01:54:43.780 |
because no matter how good you are at one topic, 01:54:47.620 |
And always relearning a new topic again from scratch 01:54:53.380 |
So for me, that's been biology over the last five years. 01:55:07.660 |
amazing set of people around you at your company 01:55:10.820 |
or your organization who are also very ethical 01:55:13.660 |
and grounded themselves and help to keep you that way. 01:55:16.840 |
And then ultimately, just to answer your question, 01:55:18.880 |
I hope we're gonna be a big part of birthing AI 01:55:22.020 |
and that being the greatest benefit to humanity 01:55:26.820 |
and getting us into a world of radical abundance 01:55:32.100 |
and solving many of the big challenges we have 01:55:38.260 |
to travel the stars and find those aliens if they are there. 01:55:41.180 |
And if they're not there, find out why they're not there, 01:55:54.780 |
there'll be a certain set of pioneers who get there first. 01:56:02.460 |
which cultures they come from and what values they have, 01:56:09.300 |
is gonna learn for itself most of its knowledge, 01:56:11.580 |
there'll be a residue in the system of the culture 01:56:14.780 |
and the values of the creators of that system. 01:56:21.580 |
different cultures as we're in a more fragmented world 01:56:29.220 |
where we can't seem to get our act together globally 01:56:35.580 |
Perhaps if we get to an era of radical abundance, 01:56:44.300 |
- It's true that in terms of power corrupting 01:56:50.020 |
it seems that some of the atrocities of the past 01:56:52.780 |
happen when there's a significant constraint on resources. 01:56:58.380 |
I think scarcity is one thing that's led to competition, 01:57:03.980 |
I would like us to all be in a positive sum world. 01:57:06.080 |
And I think for that, you have to remove scarcity. 01:57:08.460 |
I don't think that's enough unfortunately to get world peace 01:57:12.780 |
like wanting power over people and this kind of stuff, 01:57:15.460 |
which is not necessarily satisfied by just abundance, 01:57:20.240 |
But I think ultimately AI is not gonna be run 01:57:29.580 |
And I think there'll be many ways this will happen. 01:57:33.100 |
And ultimately, everybody should have a say in that. 01:57:45.820 |
or interested in having a big impact on the world, 01:57:50.700 |
what they should do to have a career they can be proud of 01:57:55.060 |
- I love giving talks to the next generation. 01:57:59.180 |
I think the most important things to learn about 01:58:04.540 |
is what are your true passions is first of all, 01:58:11.860 |
the way to do that is to explore as many things as possible 01:58:19.180 |
I would also encourage people to look at the, 01:58:21.100 |
finding the connections between things in a unique way. 01:58:24.620 |
I think that's a really great way to find a passion. 01:58:27.300 |
Second thing I would say, advise is know yourself. 01:58:30.660 |
So spend a lot of time understanding how you work best. 01:58:39.900 |
What are your, how do you deal with pressure? 01:58:47.260 |
but also find out what your unique skills and strengths are 01:58:57.860 |
and find passions that you're genuinely excited about, 01:59:01.220 |
that intersect with what your unique strong skills are, 01:59:07.860 |
And I think you can make a huge difference in the world. 01:59:14.420 |
Quick questions about day in the life, the perfect day, 01:59:18.140 |
the perfect productive day in the life of Demis' house. 01:59:21.180 |
Maybe these days you're, there's a lot involved. 01:59:29.020 |
where you could focus on a single project maybe. 01:59:39.180 |
How many dozens of cups of coffees do you drink a day? 02:00:02.660 |
Back 10, 20 years ago, it would have been a whole day 02:00:06.300 |
of research, individual research or programming, 02:00:18.380 |
reading science fiction books or playing some games. 02:00:28.340 |
on whether it's programming or reading research papers. 02:00:38.060 |
five to 10 years, I've actually got quite a structure 02:00:42.300 |
which is that I'm a complete night owl, always have been. 02:00:52.540 |
and sort of do work till about seven in the office. 02:01:00.900 |
And with as many, meet as many people as possible. 02:01:03.180 |
So that's my collaboration management part of the day. 02:01:06.460 |
Then I go home, spend time with the family and friends, 02:01:15.220 |
I call it my second day of work around 10 p.m., 11 p.m. 02:01:18.500 |
And that's the time till about the small hours 02:01:22.540 |
where I will do my thinking and reading research, 02:01:30.980 |
but it's not efficient to do that these days, 02:01:37.140 |
But that's when I do, maybe do the long kind of stretches 02:01:42.460 |
And then probably, using email or other things, 02:02:01.060 |
- I still like pencil and paper best for working out things, 02:02:15.980 |
when you're still using physical pen and pencil and paper. 02:02:22.420 |
and also whole stacks of notebooks that I use at home, yeah. 02:02:27.420 |
- On some of these most challenging next steps, 02:02:35.540 |
there's some deep thinking required there, right? 02:02:41.260 |
Because you're gonna have to invest a huge amount of time 02:02:55.020 |
Do we need to construct a benchmark from scratch? 02:02:58.140 |
- Yes, so I think of all those kind of things 02:03:03.420 |
I find I've always found the quiet hours of the morning 02:03:07.620 |
when everyone's asleep, it's super quiet outside. 02:03:13.380 |
like between like one and three in the morning. 02:03:21.580 |
So that's when I would read my philosophy books 02:03:24.220 |
and Spinoza's my recent favorite, Kant, all these things. 02:03:28.820 |
And I read about a great scientist of history, 02:03:33.660 |
how they did things, how they thought things. 02:03:41.780 |
you do your sort of creative thinking in one block. 02:03:48.540 |
'cause obviously no one else is up at those times. 02:03:57.580 |
The other nice thing about doing it night time wise 02:04:06.860 |
and I'll go into six in the morning, whatever, 02:04:10.780 |
So I'll be a bit tired and I won't be my best, 02:04:13.900 |
I can decide, looking at my schedule the next day 02:04:16.660 |
that I'm given where I'm at with this particular thought 02:04:19.380 |
or creative idea that I'm gonna pay that cost the next day. 02:04:32.660 |
starts at breakfast, you know, 8 a.m., whatever, 02:04:36.860 |
you have to reschedule a day if you're in flow. 02:04:38.980 |
- Yeah, that could be a true special thread of thoughts 02:04:47.740 |
is when you just lose yourself late into the night. 02:04:56.500 |
So you have to do some kind of first principles thinking 02:05:03.140 |
- You have to get really good at context switching, 02:05:09.020 |
if you include all the scientific things we do, 02:05:12.580 |
these are entire complex fields in themselves, 02:05:15.380 |
and you have to sort of keep abreast of that. 02:05:20.020 |
I've always been a sort of generalist in a way, 02:05:33.900 |
So I've always been that way, inclined, multidisciplinary, 02:05:36.940 |
and there's too many interesting things in the world 02:05:43.260 |
gotta ask the big, ridiculously big question about life. 02:05:47.660 |
What do you think is the meaning of this whole thing? 02:06:08.100 |
to gain knowledge and understand the universe. 02:06:29.140 |
and more understanding yourself and more tolerant 02:06:32.060 |
and all these, I think all these other things 02:06:34.740 |
And to me, understanding the nature of reality, 02:06:38.740 |
What is going on here is sometimes the colloquial way I say, 02:06:49.980 |
the universe seems to be structured in a way, 02:06:53.100 |
why is it structured in a way that science is even possible? 02:06:59.300 |
It feels like it's almost structured in a way 02:07:05.020 |
So I feel like, and why should computers be even possible? 02:07:08.020 |
Isn't that amazing that computational electronic devices 02:07:23.820 |
So a lot of things are kind of slightly suspicious to me. 02:07:27.740 |
this puzzle sure as heck sounds like something 02:07:30.740 |
what it takes to design a game that's really fun to play 02:07:36.620 |
And it does seem like this puzzle, like you mentioned, 02:07:46.060 |
but excites you by the possibility of learning more. 02:07:49.060 |
It's one heck of a puzzle we got going on here. 02:07:53.600 |
So like I mentioned, of all the people in the world, 02:07:56.460 |
you're very likely to be the one who creates the AGI system 02:08:01.460 |
that achieves human level intelligence and goes beyond it. 02:08:26.000 |
'cause maybe it would be 42 or something like that. 02:08:30.980 |
- And then there'll be a deep sigh from the systems, 02:08:34.800 |
like, all right, how do I explain to this human? 02:08:37.440 |
All right, let me, I don't have time to explain. 02:08:52.760 |
- What would you think the answer could possibly look like? 02:09:07.740 |
as to what one would do to maybe prove those things out. 02:09:25.500 |
- A much deeper, maybe simpler explanation of things, 02:09:31.900 |
which we know doesn't work, but we still keep adding to. 02:09:38.940 |
And it would start encompassing many of the mysteries 02:09:41.260 |
that we have wondered about for thousands of years, 02:09:52.620 |
Well, Demis, you're one of the special human beings 02:09:59.060 |
and it's a huge honor that you would take a pause 02:10:01.020 |
from the bigger puzzle to solve this small puzzle 02:10:06.260 |
Thank you so much. - Thank you for having me. 02:10:13.180 |
please check out our sponsors in the description. 02:10:26.340 |
Thank you for listening, and hope to see you next time.