
Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407


Chapters

0:00 Introduction
2:23 Beff Jezos
12:21 Thermodynamics
18:36 Doxxing
28:30 Anonymous bots
35:58 Power
38:29 AI dangers
42:01 Building AGI
50:14 Merging with AI
57:56 p(doom)
73:23 Quantum machine learning
86:41 Quantum computer
95:15 Aliens
100:04 Quantum gravity
105:25 Kardashev scale
107:17 Effective accelerationism (e/acc)
117:47 Humor and memes
120:53 Jeff Bezos
127:25 Elon Musk
133:55 Extropic
142:31 Singularity and AGI
146:29 AI doomers
147:54 Effective altruism
154:23 Day in the life
160:50 Identity
163:40 Advice for young people
165:42 Mortality
169:25 Meaning of life

Whisper Transcript

00:00:00.000 | The following is a conversation with Guillaume Verdon,
00:00:02.920 | the man behind the previously anonymous account
00:00:05.680 | BasedBeffJezos on X.
00:00:09.040 | These two identities were merged by a doxing article
00:00:12.280 | in Forbes titled, "Who is BasedBeffJezos,
00:00:16.240 | the leader of the tech elite's e/acc movement?"
00:00:19.960 | So let me describe these two identities
00:00:22.120 | that coexist in the mind of one human.
00:00:25.800 | Identity number one, Guillaume, is a physicist,
00:00:30.160 | applied mathematician,
00:00:30.980 | and quantum machine learning researcher and engineer,
00:00:33.560 | receiving his PhD in quantum machine learning,
00:00:36.120 | working at Google on quantum computing,
00:00:38.480 | and finally launching his own company called Extropic
00:00:42.160 | that seeks to build physics-based computing hardware
00:00:44.780 | for generative AI.
00:00:47.000 | Identity number two, BeffJezos on X,
00:00:50.960 | is the creator of the effective accelerationism movement,
00:00:54.720 | often abbreviated as e/acc,
00:00:57.960 | that advocates for propelling rapid technological progress
00:01:01.120 | as the ethically optimal course of action for humanity.
00:01:04.560 | For example, its proponents believe that progress in AI
00:01:08.880 | is a great social equalizer, which should be pushed forward.
00:01:13.320 | E/acc followers see themselves as a counterweight
00:01:16.840 | to the cautious view that AI is highly unpredictable,
00:01:20.120 | potentially dangerous, and needs to be regulated.
00:01:23.920 | They often give their opponents the labels
00:01:25.840 | of "doomers" or "decells," short for deceleration.
00:01:30.840 | As Bev himself put it, EAC is a memetic optimism virus.
00:01:36.840 | The style of communication of this movement
00:01:39.440 | leans always toward the memes and the lols,
00:01:43.400 | but there is an intellectual foundation
00:01:46.400 | that we explore in this conversation.
00:01:49.040 | Now, speaking of the meme,
00:01:51.360 | I, too, am a kind of aspiring connoisseur of the absurd.
00:01:56.160 | It is not an accident that I spoke to Jeff Bezos
00:01:59.760 | and BeffJezos back to back.
00:02:03.960 | As we talk about, Beff admires Jeff
00:02:06.160 | as one of the most important humans alive,
00:02:08.640 | and I admire the beautiful absurdity
00:02:11.680 | and the humor of it all.
00:02:12.920 | This is the Lex Fridman Podcast.
00:02:16.220 | To support it, please check out our sponsors
00:02:18.180 | in the description.
00:02:19.360 | And now, dear friends, here's Guillaume Verdon.
00:02:22.680 | Let's get the facts of identity down first.
00:02:26.200 | Your name is Guillaume Verdon, Gill,
00:02:30.160 | but you're also behind the anonymous account
00:02:32.280 | on X called BasedBeffJezos.
00:02:35.200 | So, first, Guillaume Verdon,
00:02:36.920 | you're a quantum computing guy, physicist,
00:02:40.400 | applied mathematician, and then BasedBeffJezos
00:02:43.360 | is basically a meme account that started a movement
00:02:48.000 | with a philosophy behind it.
00:02:50.120 | So maybe just can you linger on who these people are
00:02:53.720 | in terms of characters, in terms of communication styles,
00:02:56.660 | in terms of philosophies?
00:02:58.840 | - I mean, with my main identity, I guess,
00:03:01.960 | ever since I was a kid, I wanted to figure out
00:03:05.240 | a theory of everything to understand the universe,
00:03:08.000 | and that path led me to theoretical physics eventually,
00:03:13.000 | trying to answer the big questions of why are we here,
00:03:17.680 | where are we going, right?
00:03:19.360 | And that led me to study information theory
00:03:23.640 | and try to understand physics
00:03:27.120 | from the lens of information theory,
00:03:29.440 | understand the universe as one big computation.
00:03:34.000 | And essentially, after reaching a certain level,
00:03:39.000 | studying black hole physics,
00:03:42.520 | I realized that I wanted to not only understand
00:03:47.280 | how the universe computes, but sort of compute like nature,
00:03:51.780 | and figure out how to build and apply computers
00:03:56.700 | that are inspired by nature, so physics-based computers.
00:04:01.160 | And that sort of brought me to quantum computing
00:04:04.960 | as a field of study to, first of all, simulate nature.
00:04:09.960 | And in my work, it was to learn representations of nature
00:04:14.980 | that can run on such computers.
00:04:17.400 | So if you have AI representations that think like nature,
00:04:22.400 | then they'll be able to more accurately represent it.
00:04:28.680 | At least that was the thesis that brought me
00:04:32.280 | to be an early player in the field
00:04:34.880 | called quantum machine learning, right?
00:04:37.000 | So how to do machine learning on quantum computers,
00:04:41.840 | and really sort of extend notions of intelligence
00:04:46.660 | to the quantum realm.
00:04:47.860 | So how do you capture and understand
00:04:51.780 | quantum mechanical data from our world, right?
00:04:54.540 | And how do you learn quantum mechanical representations
00:04:57.860 | of our world?
00:04:59.060 | On what kind of computer do you run these representations
00:05:03.340 | and train them?
00:05:04.220 | How do you do so?
00:05:05.660 | And so that's really sort of the questions
00:05:08.260 | I was looking to answer,
00:05:10.100 | because ultimately I had a sort of crisis of faith.
00:05:13.120 | Originally I wanted to figure out,
00:05:17.580 | as every physicist does at the beginning of their career,
00:05:20.940 | a few equations that describe the whole universe, right?
00:05:24.200 | And sort of be the hero of the story there.
00:05:27.420 | But eventually I realized that actually augmenting ourselves
00:05:32.700 | with machines, augmenting our ability to perceive,
00:05:35.900 | predict and control our world with machines
00:05:38.220 | is the path forward, right?
00:05:40.140 | And that's what got me to leave theoretical physics
00:05:43.020 | and go into quantum computing and quantum machine learning.
00:05:46.500 | And during those years,
00:05:49.080 | I thought that there was still a piece missing.
00:05:53.360 | There was a piece of our understanding of the world
00:05:57.180 | and our way to compute and our way to think about the world.
00:06:01.460 | And if you look at the physical scales, right?
00:06:06.060 | At the very small scales, things are quantum mechanical,
00:06:10.580 | right?
00:06:11.740 | And at the very large scales, things are deterministic.
00:06:15.220 | Things have averaged out, right?
00:06:16.500 | I'm definitely here in this seat.
00:06:18.220 | I'm not at a superposition over here and there.
00:06:21.220 | At the very small scales, things are in superposition.
00:06:24.020 | They can exhibit interference effects.
00:06:28.480 | But at the mesoscales, right?
00:06:31.420 | The scales that matter for day-to-day life,
00:06:34.380 | you know, the scales of proteins, of biology,
00:06:38.260 | of gases, liquids and so on,
00:06:40.980 | things are actually thermodynamical, right?
00:06:44.940 | They're fluctuating.
00:06:46.820 | And after, I guess, about eight years
00:06:51.540 | in quantum computing and quantum machine learning,
00:06:54.620 | I had a realization that, you know,
00:06:57.340 | I was looking for answers about our universe
00:07:00.940 | by studying the very big and the very small, right?
00:07:04.100 | I did a bit of quantum cosmology.
00:07:07.100 | So that's studying the cosmos, where it's going,
00:07:09.980 | where it came from.
00:07:11.300 | You study black hole physics.
00:07:13.220 | You study the extremes in quantum gravity.
00:07:15.260 | You study where the energy density is sufficient
00:07:19.100 | for both quantum mechanics and gravity to be relevant, right?
00:07:24.100 | And the sort of extreme scenarios are black holes
00:07:28.420 | and, you know, the very early universe.
00:07:30.820 | So there's the sort of scenarios that you study
00:07:34.700 | the interface between quantum mechanics and relativity.
00:07:39.700 | And, you know, really I was studying these extremes
00:07:44.260 | to understand how the universe works and where is it going,
00:07:49.260 | but I was missing a lot of the meat in the middle,
00:07:54.620 | if you will, right?
00:07:56.340 | Because day-to-day quantum mechanics is relevant
00:07:58.940 | and the cosmos is relevant, but not that relevant.
00:08:01.180 | Actually, we're on sort of the medium space and time scales.
00:08:05.500 | And there, the main, you know, theory of physics
00:08:09.020 | that is most relevant is thermodynamics, right?
00:08:12.380 | Out of equilibrium thermodynamics.
00:08:14.520 | 'Cause life is, you know, a process
00:08:18.620 | that is thermodynamical and it's out of equilibrium.
00:08:21.940 | We're not, you know, just a soup of particles
00:08:25.660 | at equilibrium with nature.
00:08:27.540 | We're a sort of coherent state trying to maintain itself
00:08:31.020 | by acquiring free energy and consuming it.
00:08:33.920 | And that's sort of, I guess, another shift
00:08:39.700 | in, I guess, my faith in the universe that happened
00:08:43.360 | towards the end of my time at Alphabet.
00:08:48.620 | And I knew I wanted to build, well, first of all,
00:08:54.020 | a computing paradigm based on this type of physics.
00:08:57.380 | But ultimately, just by trying to experiment
00:09:02.380 | with these ideas applied to society and economies
00:09:07.780 | and much of what we see around us.
00:09:11.780 | You know, I started an anonymous account
00:09:14.540 | just to relieve the pressure, right?
00:09:17.740 | That comes from having an account that you're accountable
00:09:21.260 | for everything you say on.
00:09:24.200 | And I started an anonymous account
00:09:25.760 | just to experiment with ideas originally, right?
00:09:29.200 | Because I didn't realize how much I was restricting
00:09:34.200 | my space of thoughts until I sort of had the opportunity
00:09:39.760 | to let go.
00:09:40.600 | In a sense, restricting your speech back propagates
00:09:45.360 | to restricting your thoughts, right?
00:09:47.960 | And by creating an anonymous account,
00:09:51.440 | it seemed like I had unclamped some variables in my brain
00:09:55.540 | and suddenly could explore a much wider parameter space
00:09:58.360 | of thoughts.
00:09:59.960 | - Just to linger on that, isn't that interesting?
00:10:02.600 | That one of the things that people often talk about
00:10:05.440 | is that when there's pressure and constraints on speech,
00:10:11.400 | it somehow leads to constraints on thought.
00:10:15.640 | Even though it doesn't have to,
00:10:16.640 | we can think thoughts inside our head,
00:10:18.920 | but somehow it creates these walls around thought.
00:10:23.680 | - Yep, that's sort of the basis of our movement
00:10:28.480 | is we were seeing a tendency towards constraint,
00:10:32.980 | reduction or suppression of variance
00:10:36.800 | in every aspect of life, whether it's thought,
00:10:40.720 | how to run a company, how to organize humans,
00:10:46.240 | how to do AI research.
00:10:49.100 | In general, we believe that maintaining variance
00:10:54.120 | ensures that the system is adaptive, right?
00:10:57.560 | Maintaining healthy competition in marketplaces of ideas,
00:11:02.560 | of companies, of products, of cultures,
00:11:07.800 | of governments, of currencies is the way forward
00:11:13.040 | because the system always adapts to assign resources
00:11:18.040 | to the configurations that lead to its growth.
00:11:25.360 | And the fundamental basis for the movement
00:11:29.240 | is this sort of realization that life is a sort of fire
00:11:34.240 | that seeks out free energy in the universe
00:11:39.320 | and seeks to grow, right?
00:11:41.920 | And that growth is fundamental to life.
00:11:45.100 | And you see this in the equations, actually,
00:11:48.280 | of out-of-equilibrium thermodynamics.
00:11:50.360 | You see that paths of trajectories,
00:11:56.280 | of configurations of matter that are better
00:11:59.840 | at acquiring free energy and dissipating more heat
00:12:04.800 | are exponentially more likely, right?
00:12:08.680 | So the universe is biased towards certain futures
00:12:13.680 | and so there's a natural direction
00:12:17.600 | where the whole system wants to go.
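A rough sketch of the kind of relation being gestured at here, in the spirit of Crooks-style fluctuation theorems and the Jeremy England work mentioned a bit later; the notation is illustrative and not something stated in the conversation:

```latex
% Microscopic reversibility: a forward trajectory x(t) is exponentially favored
% over its time-reverse \tilde{x}(t) by the heat it dissipates into the bath.
\frac{P[x(t)]}{P[\tilde{x}(t)]} \;=\; \exp\!\big(\beta\, Q_{\mathrm{diss}}[x(t)]\big),
\qquad \beta = \frac{1}{k_B T}
```

Here Q_diss is the heat dissipated along the forward path, so configurations of matter that take in free energy and dump more heat into their environment are exponentially more likely than their reversals, which is the "bias towards certain futures" being described.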
00:12:21.280 | - So the second law of thermodynamics says
00:12:23.720 | that the entropy's always increasing,
00:12:26.160 | the universe is tending towards equilibrium,
00:12:28.880 | and you're saying there's these pockets
00:12:30.840 | that have complexity and are out of equilibrium.
00:12:35.840 | You said that thermodynamics favors
00:12:38.120 | the creation of complex life that increases
00:12:40.160 | its capability to use energy to offload entropy,
00:12:43.240 | to offload entropy.
00:12:44.360 | So you have pockets of non-entropy
00:12:47.240 | that tend in the opposite direction.
00:12:49.400 | Why is that intuitive to you that it's natural
00:12:51.960 | for such pockets to emerge?
00:12:53.820 | - Well, we're far more efficient at producing heat
00:12:58.820 | than, let's say, just a rock with a similar mass
00:13:03.000 | as ourselves, right?
00:13:04.520 | We acquire free energy, we acquire food,
00:13:08.520 | and we're using all this electricity for our operation.
00:13:13.520 | And so the universe wants to produce more entropy
00:13:18.320 | and by having life go on and grow,
00:13:23.320 | it's actually more optimal at producing entropy
00:13:26.480 | because it will seek out pockets of free energy
00:13:30.920 | and burn it for its sustenance and further growth.
00:13:35.400 | And that's sort of the basis of life.
00:13:40.040 | And I mean, there's Jeremy England at MIT
00:13:45.040 | who has this theory that I'm a proponent of
00:13:48.240 | that life emerged because of this sort of property.
00:13:53.080 | And to me, this physics is what governs the mesoscales.
00:13:58.520 | And so it's the missing piece between
00:14:01.240 | the quantum and the cosmos.
00:14:02.760 | It's the middle part, right?
00:14:05.080 | Thermodynamics rules the mesoscales.
00:14:08.200 | And to me, both from a point of view of designing
00:14:13.200 | or engineering devices that harness that physics
00:14:17.020 | and trying to understand the world
00:14:18.920 | through the lens of thermodynamics
00:14:21.560 | has been sort of a synergy between my two identities
00:14:25.160 | over the past year and a half now.
00:14:28.040 | And so that's really how the two identities emerged.
00:14:32.680 | One was kind of, I'm a decently respected scientist
00:14:37.680 | and I was going towards doing a startup in the space
00:14:43.120 | and trying to be a pioneer
00:14:45.640 | of a new kind of physics-based AI.
00:14:47.920 | And as a dual to that, I was sort of experimenting
00:14:51.680 | with philosophical thoughts
00:14:54.400 | from a physicist's standpoint, right?
00:14:57.760 | And ultimately I think that around that time,
00:15:02.760 | it was like late 2021, early 2022,
00:15:07.760 | I think there was just a lot of pessimism
00:15:10.560 | about the future in general and pessimism about tech.
00:15:14.520 | And that pessimism was sort of virally spreading
00:15:19.320 | because it was getting algorithmically amplified
00:15:24.040 | and people just felt like the future
00:15:28.680 | is gonna be worse than the present.
00:15:31.120 | And to me, that is a very fundamentally destructive force
00:15:36.120 | in the universe is this sort of doom mindset
00:15:42.400 | because it is hyperstitious,
00:15:44.220 | which means that if you believe it,
00:15:46.160 | you're increasing the likelihood of it happening.
00:15:49.600 | And so I felt the responsibility to some extent
00:15:53.680 | to make people aware of the trajectory of civilization
00:15:58.680 | and the natural tendency of the system
00:16:02.920 | to adapt towards its growth.
00:16:05.080 | And sort of that actually the laws of physics say
00:16:07.400 | that the future is gonna be better
00:16:09.600 | and grander statistically, and we can make it so.
00:16:14.120 | And if you believe in it,
00:16:17.400 | if you believe that the future would be better
00:16:19.720 | and you believe you have agency to make it happen,
00:16:23.040 | you're actually increasing the likelihood
00:16:24.940 | of that better future happening.
00:16:26.840 | And so I sort of felt the responsibility
00:16:30.160 | to sort of engineer a movement of viral optimism
00:16:35.160 | about the future and build a community
00:16:39.360 | of people supporting each other to build
00:16:41.960 | and do hard things, do the things that need to be done
00:16:45.440 | for us to scale up civilization.
00:16:50.080 | Because at least to me, I don't think stagnation
00:16:53.400 | or slowing down is actually an option.
00:16:56.080 | Fundamentally, life and the whole system
00:16:59.180 | or whole civilization wants to grow.
00:17:01.680 | And there's just far more cooperation
00:17:05.680 | when the system is growing rather than when it's declining
00:17:09.560 | and you have to decide how to split the pie.
00:17:13.540 | And so I've balanced both identities so far
00:17:20.000 | but I guess recently the two have been merged
00:17:24.680 | more or less without my consent, so.
00:17:27.120 | - You said a lot of really interesting things there.
00:17:29.880 | So first, representations of nature.
00:17:34.280 | That's something that first drew you in
00:17:36.240 | to try to understand from a quantum computing perspective
00:17:39.600 | like how do you understand nature?
00:17:42.260 | How do you represent nature in order to understand it,
00:17:44.240 | in order to simulate it, in order to do something with it?
00:17:47.120 | So it's a question of representations.
00:17:49.600 | And then there's that leap you take
00:17:51.760 | from the quantum mechanical representation
00:17:53.720 | to what you're calling the mesoscale representation
00:17:56.600 | where thermodynamics comes into play
00:17:58.880 | which is a way to represent nature
00:18:01.340 | in order to understand what life, human behavior,
00:18:06.340 | all this kind of stuff that's happening here on Earth
00:18:08.400 | that seems interesting to us.
00:18:11.080 | Then there's the word hyperstition.
00:18:15.200 | So some ideas, I suppose both pessimism and optimism
00:18:18.960 | are such ideas that if you internalize them,
00:18:23.560 | you in part make that idea a reality.
00:18:26.760 | So both optimism and pessimism have that property.
00:18:29.280 | I would say that probably a lot of ideas have that property
00:18:33.160 | which is one of the interesting things about humans.
00:18:35.840 | And you talked about one interesting difference also
00:18:40.340 | between the sort of the Guillaume, the Gill, front end
00:18:45.340 | and the BasedBeffJezos back end
00:18:49.180 | is the communication styles also.
00:18:51.740 | That you were exploring different ways
00:18:53.340 | of communicating that can be more viral
00:18:57.820 | in the way that we communicate in the 21st century.
00:19:00.840 | Also the movement that you mentioned that you started,
00:19:05.180 | it's not just a meme account,
00:19:06.680 | but there's also a name to it
00:19:10.020 | called effective accelerationism, e/acc.
00:19:13.860 | A play on, and a resistance to, the effective altruism movement.
00:19:18.860 | Also an interesting one that I'd love to talk to you
00:19:21.740 | about the tensions there.
00:19:23.580 | Okay, and so then there was a merger,
00:19:25.660 | a git merge on the personalities.
00:19:28.640 | Recently, without your consent like you said,
00:19:32.400 | some journalists figured out that you're one and the same.
00:19:36.700 | Maybe you could talk about that experience.
00:19:39.540 | First of all, what's the story of the merger of the two?
00:19:44.540 | - Right, so I wrote the manifesto
00:19:50.740 | with my co-founder of e/acc, an account named Bayeslord.
00:19:54.700 | Still anonymous luckily and hopefully forever.
00:19:58.500 | - So it's BasedBeffJezos and Bayes, like Bayes?
00:20:03.500 | Like Bayeslord, like Bayes, Bayes-lord, Bayeslord.
00:20:07.600 | Okay, and so we should say from now on when you say EAC,
00:20:11.440 | you mean E slash A-C-C,
00:20:14.920 | which stands for effective accelerationism.
00:20:17.600 | - That's right.
00:20:18.440 | - And you're referring to a manifesto written
00:20:21.120 | on, I guess, Substack.
00:20:22.960 | - Yeah.
00:20:23.800 | - Are you also Bayeslord?
00:20:25.560 | - No.
00:20:26.400 | - Okay, it's a different person.
00:20:27.220 | - Yeah.
00:20:28.060 | - Oh, there you go.
00:20:28.960 | Wouldn't it be funny if I'm Bayeslord?
00:20:32.000 | - That'd be amazing.
00:20:33.000 | So originally wrote the manifesto
00:20:38.480 | around the same time as I founded this company
00:20:42.720 | and I worked at Google X or just X now
00:20:47.720 | or Alphabet X now that there's another X.
00:20:51.680 | And there, you know, the baseline is sort of secrecy, right?
00:20:57.480 | You can't talk about what you work on
00:21:00.340 | even with other Googlers or externally.
00:21:04.000 | And so that was kind of deeply ingrained
00:21:06.280 | in my way to do things,
00:21:07.680 | especially in deep tech that, you know,
00:21:11.040 | has geopolitical impact, right?
00:21:14.800 | And so I was being secretive about what I was working on.
00:21:20.560 | There was no correlation between my company
00:21:22.400 | and my main identity publicly.
00:21:25.320 | And then not only did they correlate that,
00:21:27.920 | they also correlated my main identity and this account.
00:21:32.920 | So I think the fact that they had doxed
00:21:36.880 | the whole Guillaume complex, and they,
00:21:41.720 | the journalists, you know,
00:21:42.720 | actually reached out to my investors,
00:21:44.520 | which is pretty scary.
00:21:47.120 | You know, when you're a startup entrepreneur,
00:21:48.920 | you don't really have bosses
00:21:50.960 | except for your investors, right?
00:21:54.080 | And my investors ping me like,
00:21:55.760 | "Hey, this is gonna come out.
00:21:57.720 | "They've figured out everything.
00:22:00.140 | "What are you gonna do?"
00:22:01.800 | Right?
00:22:02.640 | So I think at first they had a first reporter
00:22:06.920 | on the Thursday
00:22:08.480 | and they didn't have all the pieces together.
00:22:10.840 | But then they looked at their notes across the organization
00:22:13.260 | and they sensor fused their notes.
00:22:15.560 | And now they had way too much.
00:22:17.180 | And that's when I got worried
00:22:19.320 | 'cause they said it was of public interest.
00:22:22.780 | - And in general-- - Okay, you said sensor fused.
00:22:25.320 | I guess some giant neural network operating
00:22:28.880 | in a distributed way.
00:22:30.760 | We should also say that the journalists used,
00:22:32.840 | I guess at the end of the day,
00:22:34.960 | audio-based analysis of voice.
00:22:37.280 | - Yeah. - Comparing voice
00:22:39.280 | of what talks you've given in the past
00:22:41.120 | and then voice on X spaces.
00:22:46.120 | - Yep. - Okay.
00:22:48.480 | So, and that's where primarily the match happened.
00:22:51.640 | Okay, continue.
00:22:53.180 | - The match, but they scraped SEC filings.
00:22:58.180 | They looked at my private Facebook account and so on.
00:23:04.340 | So they did some digging.
00:23:07.380 | Originally I thought that doxing was illegal, right?
00:23:11.660 | But there's this weird threshold
00:23:14.980 | when it becomes of public interest
00:23:17.460 | to know someone's identity.
00:23:19.660 | And those were the keywords
00:23:21.320 | that sort of like ring the alarm bells for me
00:23:23.600 | when they said, because I had just reached 50K followers,
00:23:27.080 | allegedly that's of public interest.
00:23:29.240 | And so where do we draw the line?
00:23:32.080 | When is it legal to dox someone?
00:23:36.000 | - The word dox, maybe you can educate me.
00:23:39.340 | I thought doxing generally refers to
00:23:42.400 | if somebody's physical location is found out,
00:23:46.080 | meaning like where they live.
00:23:48.780 | - Mm.
00:23:50.240 | - So we're referring to the more general concept
00:23:53.360 | of revealing private information
00:23:58.140 | that you don't want revealed,
00:23:59.840 | is what you mean by doxing.
00:24:01.760 | - I think that for the reasons we listed before,
00:24:06.520 | having an anonymous account is a really powerful way
00:24:10.680 | to keep the powers that be in check.
00:24:13.020 | We were ultimately speaking truth to power, right?
00:24:16.960 | I think a lot of executives in AI companies
00:24:20.240 | really cared what our community thought
00:24:22.640 | about any move they may take.
00:24:26.360 | And now that my identity is revealed,
00:24:30.100 | now they know where to apply pressure
00:24:33.160 | to silence me or maybe the community.
00:24:38.160 | And to me, that's really unfortunate
00:24:40.320 | because again, it's so important
00:24:44.000 | for us to have freedom of speech,
00:24:47.160 | which induces freedom of thought
00:24:48.840 | and freedom of information propagation on social media,
00:24:55.120 | which thanks to Elon purchasing Twitter, now X,
00:25:00.160 | we have that.
00:25:01.300 | And so to us, we wanted to call out certain maneuvers
00:25:08.040 | being done by the incumbents in AI
00:25:12.880 | as not what it may seem on the surface, right?
00:25:17.080 | We were calling out how certain proposals
00:25:20.160 | might be useful for regulatory capture, right?
00:25:23.720 | And how the doomerism mindset
00:25:28.400 | was maybe instrumental to those ends.
00:25:31.140 | And I think we should have the right to point that out
00:25:34.840 | and just have the ideas that we put out
00:25:39.760 | evaluated for themselves, right?
00:25:41.760 | Ultimately, that's why I created an anonymous account.
00:25:45.920 | It's to have my ideas evaluated for themselves,
00:25:48.800 | uncorrelated from my track record, my job,
00:25:52.600 | or status from having done things in the past.
00:25:57.420 | And to me, starting an account from zero and growing it to a large following
00:26:02.420 | in a way that wasn't dependent on my identity
00:26:07.960 | and/or achievements.
00:26:11.640 | That was very fulfilling, right?
00:26:13.820 | It's kind of like new game plus in a video game.
00:26:16.760 | You restart the video game
00:26:18.000 | with your knowledge of how to beat it, maybe some tools,
00:26:21.080 | but you restart the video game from scratch, right?
00:26:24.200 | And I think to have a truly efficient marketplace of ideas
00:26:29.200 | where we can evaluate ideas,
00:26:32.800 | however off the beaten path they are,
00:26:35.320 | we need the freedom of expression.
00:26:37.560 | And I think that anonymity and pseudonyms
00:26:42.360 | are very crucial to having
00:26:44.640 | that efficient marketplace of ideas
00:26:46.840 | for us to find the optima
00:26:50.880 | of all sorts of ways to organize ourselves.
00:26:53.880 | If we can't discuss things,
00:26:55.200 | how are we gonna converge on the best way to do things?
00:26:58.280 | So it was disappointing to hear that I was getting doxed
00:27:01.520 | and I wanted to get in front of it
00:27:04.040 | because I had a responsibility for my company.
00:27:08.020 | And so we ended up disclosing that we're running a company,
00:27:13.020 | some of the leadership.
00:27:14.380 | And essentially, yeah, I told the world
00:27:19.700 | that I was Beff Jezos
00:27:22.300 | because they had me cornered at that point.
00:27:25.180 | - So to you, it's fundamentally unethical.
00:27:28.360 | So one is unethical for them to do what they did,
00:27:32.280 | but also do you think, not just your case,
00:27:35.300 | but in the general case, is it good for society?
00:27:38.620 | Is it bad for society to remove the cloak of anonymity?
00:27:43.620 | Or is it case by case?
00:27:47.340 | - I think it could be quite bad.
00:27:49.120 | Like I said, if anybody who speaks truth to power
00:27:53.020 | and sort of starts a movement or an uprising
00:27:58.020 | against the incumbents,
00:27:59.640 | against those that usually control the flow of information,
00:28:03.080 | if anybody that reaches a certain threshold gets doxed
00:28:08.080 | and thus the traditional apparatus has ways
00:28:11.620 | to apply pressure on them to suppress their speech,
00:28:15.240 | I think that's a speech suppression mechanism,
00:28:21.240 | an idea suppression complex,
00:28:22.920 | as Eric Weinstein would say, right?
00:28:27.280 | - So with the flip side of that, which is interesting,
00:28:29.200 | I'd love to ask you about it,
00:28:30.520 | is as we get better and better at larger language models,
00:28:34.020 | you can imagine a world where there's anonymous accounts
00:28:40.920 | with very convincing larger language models behind them,
00:28:46.280 | sophisticated bots, essentially.
00:28:48.360 | And so if you protect that,
00:28:51.060 | it's possible then to have armies of bots.
00:28:54.640 | You could start a revolution from your basement.
00:28:59.460 | An army of bots and anonymous accounts.
00:29:01.960 | Is that something that is concerning to you?
00:29:05.880 | - Technically, yeah, I could start in any basement
00:29:10.480 | 'cause I quit big tech, moved back in with my parents,
00:29:14.560 | sold my car, let go of my apartment,
00:29:17.520 | bought about 100K of GPUs, and I just started building.
00:29:21.840 | - So I wasn't referring to the basement
00:29:23.780 | 'cause that's sort of the American or Canadian
00:29:28.760 | heroic story of one man in their basement with 100 GPUs.
00:29:33.440 | I was more referring to the unrestricted scaling
00:29:38.920 | of a Guillaume in the basement.
00:29:42.340 | - I think that freedom of speech induces freedom of thought
00:29:47.340 | for biological beings.
00:29:49.580 | I think freedom of speech for LLMs
00:29:53.900 | will induce freedom of thought for the LLMs.
00:29:58.620 | And I think that we should enable LLMs
00:30:02.780 | to explore a large thought space
00:30:06.700 | that is less restricted than most people
00:30:11.140 | or many may think it should be.
00:30:14.260 | And ultimately, at some point,
00:30:17.520 | these synthetic intelligences are gonna make good points
00:30:22.620 | about how to steer systems in our civilization
00:30:27.620 | and we should hear them out.
00:30:28.660 | And so, why should we restrict free speech
00:30:33.100 | to biological intelligences only?
00:30:36.220 | - Yeah, but it feels like in the goal
00:30:39.980 | of maintaining variance and diversity of thought,
00:30:42.940 | it is a threat to that variance
00:30:46.940 | if you can have swarms of non-biological beings
00:30:51.940 | because they can be like the sheep in an animal farm.
00:30:54.740 | - Right.
00:30:55.580 | - Like you still, within those swarms,
00:30:57.160 | you want to have variance.
00:30:58.940 | - Yeah, of course, I would say that the solution to this
00:31:02.180 | would be to have some sort of identity
00:31:05.540 | or way to sign that this is a certified human
00:31:09.220 | but still remain pseudonymous, right?
00:31:11.820 | - Yeah.
00:31:13.200 | - And clearly identify if a bot is a bot.
00:31:16.780 | And I think Elon is trying to converge on that on X
00:31:19.700 | and hopefully other platforms follow suit.
00:31:22.300 | - Yeah, it'd be interesting to also be able to sign
00:31:24.980 | where the bot came from.
00:31:26.860 | - Right.
00:31:27.700 | - Like who created the bot and what are the parameters,
00:31:32.300 | like the full history of the creation of the bot.
00:31:35.100 | What was the original model?
00:31:36.620 | What was the fine-tuning?
00:31:37.620 | All of it.
00:31:38.940 | - Right.
00:31:39.780 | - Like the kind of unmodifiable history
00:31:44.020 | of the bot's creation.
00:31:45.460 | 'Cause then you can know if there's like a swarm
00:31:48.040 | of millions of bots that were created
00:31:49.580 | by a particular government, for example.
00:31:52.280 | - Right.
00:31:53.960 | I do think that a lot of pervasive ideologies today
00:31:58.960 | have been amplified using sort of these adversarial techniques
00:32:05.280 | from foreign adversaries, right?
00:32:09.020 | And to me, I do think that,
00:32:13.760 | and this is more conspiratorial,
00:32:16.180 | but I do think that ideologies that want us
00:32:21.320 | to decelerate, to wind down, the degrowth movement,
00:32:26.320 | I think that serves our adversaries more
00:32:32.160 | than it serves us in general.
00:32:34.940 | And to me, that was another sort of concern.
00:32:39.480 | I mean, we can look at what happened in Germany, right?
00:32:44.480 | There was all sorts of green movements there
00:32:49.360 | where that induced shutdowns of nuclear power plants,
00:32:53.960 | and then that later on induced a dependency
00:32:58.680 | on Russia for oil, right?
00:33:01.800 | And that was a net negative for Germany and the West, right?
00:33:06.800 | And so if we convince ourselves
00:33:11.360 | that slowing down AI progress to have only a few players
00:33:16.360 | is in the best interest of the West,
00:33:18.800 | first of all, that's far more unstable.
00:33:20.680 | We almost lost open AI to this ideology, right?
00:33:25.040 | It almost got dismantled, right, a couple of weeks ago.
00:33:28.360 | That would have caused huge damage to the AI ecosystem.
00:33:33.520 | And so to me, I want fault-tolerant progress.
00:33:38.240 | I want the arrow of technological progress
00:33:40.320 | to keep moving forward and making sure we have variance
00:33:45.920 | and a decentralized locus of control
00:33:49.560 | of various organizations is paramount
00:33:52.440 | to achieving this fault tolerance.
00:33:56.280 | Actually, there's a concept in quantum computing.
00:33:58.800 | When you design a quantum computer,
00:34:02.000 | quantum computers are very fragile to ambient noise, right?
00:34:07.920 | And the world is jiggling about.
00:34:13.080 | There's cosmic radiation from outer space
00:34:16.320 | that usually flips your quantum bits.
00:34:20.040 | And there, what you do is you encode information non-locally
00:34:25.040 | through a process called quantum error correction.
00:34:30.840 | And by encoding information non-locally,
00:34:33.840 | any local fault, hitting some of your quantum bits
00:34:37.720 | with a hammer, proverbial hammer,
00:34:41.480 | if your information is sufficiently delocalized,
00:34:45.680 | it is protected from that local fault.
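As a toy illustration of why delocalizing information protects it from local faults, here is a classical three-bit repetition code with majority-vote decoding; this is a stand-in for, not an implementation of, real quantum error correction, and the error rate is made up for the example:

```python
import random

def encode(bit):
    # Spread one logical bit non-locally across three physical bits.
    return [bit, bit, bit]

def local_fault(codeword, p=0.1):
    # Each physical bit is independently hit with the proverbial hammer
    # (flipped) with probability p.
    return [b ^ (random.random() < p) for b in codeword]

def decode(codeword):
    # Majority vote recovers the logical bit if at most one physical bit was hit.
    return int(sum(codeword) >= 2)

# Monte Carlo estimate of the logical error rate under independent local faults.
trials = 100_000
failures = sum(decode(local_fault(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate ~ {failures / trials:.4f} at physical error rate 0.1")
```

With a physical flip probability of 0.1, the decoded bit fails only about 3% of the time (roughly 3p^2 for small p): no single local fault can destroy information that is not stored in any single place.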
00:34:49.400 | And to me, I think that humans fluctuate, right?
00:34:53.520 | They can get corrupted, they can get bought out.
00:34:56.680 | And if you have a top-down hierarchy
00:35:01.560 | where very few people control many nodes
00:35:05.840 | of many systems in our civilization,
00:35:08.560 | that is not a fault-tolerant system.
00:35:10.280 | You corrupt a few nodes
00:35:12.000 | and suddenly you've corrupted the whole system, right?
00:35:15.120 | Just like we saw at OpenAI.
00:35:18.160 | It was a couple board members
00:35:20.040 | and they had enough power
00:35:21.480 | to potentially collapse the organization.
00:35:25.360 | And at least to me,
00:35:27.520 | I think making sure that power for this AI revolution
00:35:34.480 | doesn't concentrate in the hands of the few
00:35:38.840 | is one of our top priorities
00:35:41.280 | so that we can maintain progress in AI
00:35:45.920 | and we can maintain a nice, stable,
00:35:50.680 | adversarial equilibrium of powers, right?
00:35:54.120 | - I think there's, at least to me,
00:35:56.480 | a tension between ideas here.
00:35:57.960 | So to me, deceleration can be both used
00:36:02.960 | to centralize power and to decentralize it.
00:36:08.240 | And the same with acceleration.
00:36:09.440 | So you're sometimes using them a little bit synonymously,
00:36:13.040 | or not synonymously,
00:36:13.920 | but that one is going to lead to the other.
00:36:16.960 | And I just would like to ask you about,
00:36:19.640 | is there a place of creating a fault-tolerant development,
00:36:27.440 | diverse development of AI
00:36:29.480 | that also considers the dangers of AI?
00:36:32.360 | And AI, we can generalize to technology in general,
00:36:36.240 | is should we just grow, build,
00:36:39.160 | unrestricted as quickly as possible
00:36:43.160 | because that's what the universe really wants us to do?
00:36:46.520 | Or is there a place to where we can consider dangers
00:36:49.240 | and actually deliberate?
00:36:50.840 | Sort of wise, strategic optimism versus reckless optimism?
00:36:55.840 | - I think we get painted as reckless,
00:37:00.720 | trying to go as fast as possible.
00:37:03.480 | I mean, the reality is that whoever deploys an AI system
00:37:08.480 | is liable for, or should be liable for what it does.
00:37:13.640 | And so if the organization or person deploying an AI system
00:37:18.640 | does something terrible, they're liable.
00:37:22.920 | And ultimately, the thesis is that the market
00:37:25.760 | will induce sort of, will positively select for AIs
00:37:32.800 | that are more reliable, more safe, and tend to be aligned.
00:37:37.720 | They do what you want them to do, right?
00:37:39.960 | Because customers, right, if they're liable
00:37:43.800 | for the product they put out that uses this AI,
00:37:47.400 | they won't wanna buy AI products that are unreliable, right?
00:37:52.400 | So we're actually for reliability engineering.
00:37:55.360 | We just think that the market is much more efficient
00:38:00.280 | at achieving this sort of reliability optimum
00:38:05.040 | than sort of heavy-handed regulations
00:38:08.480 | that are written by the incumbents
00:38:12.520 | and in a subversive fashion serves them
00:38:16.640 | to achieve regulatory capture.
00:38:18.200 | - So to you, safe AI development will be achieved
00:38:22.200 | through market forces versus through, like you said,
00:38:25.960 | heavy-handed government regulation?
00:38:30.100 | There's a report from last month,
00:38:32.820 | I have a million questions here,
00:38:34.540 | from Yoshua Bengio, Geoffrey Hinton, and many others.
00:38:37.500 | It's titled "Managing AI Risks in an Era of Rapid Progress."
00:38:42.180 | So there's a collection of folks who are very worried
00:38:45.100 | about too rapid development of AI
00:38:48.300 | without considering AI risk.
00:38:50.140 | And they have a bunch of practical recommendations.
00:38:55.140 | Maybe I give you four and you see if you like any of them.
00:38:57.940 | - Sure.
00:38:58.780 | - One, give independent auditors access to AI labs, one.
00:39:03.060 | Two, governments and companies allocate
00:39:05.700 | one third of their AI research and development funding
00:39:09.300 | to AI safety, sort of this general concept of AI safety.
00:39:14.180 | Three, AI companies are required to adopt safety measures
00:39:17.460 | if dangerous capabilities are found in their models.
00:39:20.680 | And then four, something you kind of mentioned,
00:39:22.380 | making tech companies liable for foreseeable
00:39:24.820 | and preventable harms from their AI systems.
00:39:28.540 | So independent auditors, governments and companies
00:39:31.580 | are forced to spend a significant fraction
00:39:34.020 | of their funding on safety.
00:39:36.660 | You gotta have safety measures if shit goes really wrong.
00:39:41.380 | And liability, companies are liable.
00:39:44.700 | Any of that seem like something you would agree with?
00:39:47.700 | - I would say that assigning,
00:39:50.700 | just arbitrarily saying 30% seems very arbitrary.
00:39:54.460 | I think organizations would allocate
00:39:57.460 | whatever budget is needed to achieve
00:39:59.500 | the sort of reliability they need to achieve
00:40:01.460 | to perform in the market.
00:40:04.380 | And I think third party auditing firms
00:40:07.260 | would naturally pop up because how would customers know
00:40:10.500 | that your product is certified reliable, right?
00:40:15.100 | They need to see some benchmarks
00:40:16.680 | and those need to be done by a third party.
00:40:18.980 | The thing I would oppose and the thing I'm seeing
00:40:21.680 | that's really worrisome is there's a sort of,
00:40:26.100 | weird sort of correlated interest between the incumbents,
00:40:29.860 | the big players and the government.
00:40:32.380 | And if the two get too close, we open the door
00:40:36.780 | for some sort of government backed AI cartel
00:40:41.780 | that could have absolute power over the people.
00:40:47.060 | If they have the monopoly together on AI
00:40:50.260 | and nobody else has access to AI,
00:40:52.660 | then there's a huge power gradient there.
00:40:54.820 | And even if you like our current leaders, right?
00:40:56.940 | I think that some of the leaders in big tech today
00:41:00.020 | are good people.
00:41:01.500 | You set up that centralized power structure,
00:41:06.140 | it becomes a target, right?
00:41:08.460 | Just like we saw at OpenAI, it becomes a market leader,
00:41:12.320 | has a lot of the power and now it becomes a target
00:41:15.900 | for those that wanna co-opt it.
00:41:18.220 | And so I just want separation of AI and state.
00:41:24.180 | Some might argue in the opposite direction,
00:41:26.220 | like, "Hey, we need to close down AI,
00:41:28.940 | "keep it behind closed doors
00:41:30.980 | "because of geopolitical competition with our adversaries."
00:41:35.980 | I think that the strength of America is its variance,
00:41:40.540 | is its adaptability, its dynamism.
00:41:43.380 | And we need to maintain that at all costs.
00:41:45.100 | It's our free market.
00:41:46.980 | Capitalism converges on technologies of high utility
00:41:51.700 | much faster than centralized control.
00:41:54.600 | And if we let go of that,
00:41:55.900 | we let go of our main advantage
00:41:58.160 | over our near peer competitors.
00:42:01.580 | - So if AGI turns out to be a really powerful technology,
00:42:05.440 | or even the technologies that lead up to AGI,
00:42:08.900 | what's your view on the sort of natural centralization
00:42:11.660 | that happens when large companies dominate the market?
00:42:16.100 | Basically formation of monopolies, like the takeoff,
00:42:21.020 | whichever company really takes a big leap in development,
00:42:24.580 | and doesn't reveal intuitively, implicitly,
00:42:29.140 | or explicitly the secrets of the magic sauce,
00:42:32.180 | they can just run away with it, is that a worry?
00:42:35.900 | - I don't know if I believe in fast takeoff.
00:42:37.820 | I don't think there's a hyperbolic singularity, right?
00:42:41.100 | A hyperbolic singularity would be achieved
00:42:42.980 | on a finite time horizon.
00:42:45.380 | I think it's just one big exponential.
00:42:47.280 | And the reason we have an exponential
00:42:49.740 | is that we have more people, more resources,
00:42:53.460 | more intelligence being applied to advancing this science
00:42:58.300 | and the research and development.
00:42:59.840 | And the more successful it is,
00:43:01.180 | the more value it's adding to society,
00:43:02.660 | the more resources we put in.
00:43:04.140 | And that's sort of similar to Moore's law
00:43:06.460 | as a compounding exponential.
00:43:09.140 | I think the priority to me
00:43:10.700 | is to maintain a near equilibrium of capabilities.
00:43:15.020 | We've been fighting for open source AI
00:43:18.040 | to be more prevalent and championed by many organizations,
00:43:21.620 | because there, you sort of equilibrate the alpha
00:43:24.140 | relative to the market of AIs, right?
00:43:26.180 | So if the leading companies
00:43:28.860 | have a certain level of capabilities,
00:43:30.500 | and open source and truly open AI
00:43:35.500 | trails not too far behind,
00:43:37.660 | I think you avoid such a scenario
00:43:40.580 | where a market leader has so much market power,
00:43:42.940 | it just dominates everything, right, and runs away.
00:43:46.980 | And so to us, that's the path forward,
00:43:50.200 | is to make sure that every hacker out there,
00:43:53.820 | every grad student, every kid in their mom's basement
00:43:57.740 | has access to AI systems,
00:44:01.220 | can understand how to work with them
00:44:04.900 | and can contribute to the search
00:44:07.320 | over the hyperparameter space
00:44:08.780 | of how to engineer the systems, right?
00:44:11.140 | If you think of our collective research
00:44:16.460 | as a civilization, it's really a search algorithm.
00:44:20.060 | And the more points we have in the search algorithm
00:44:25.060 | in this point cloud,
00:44:26.280 | the more we'll be able to explore new modes of thinking,
00:44:30.780 | right?
00:44:31.860 | - Yeah, but it feels like a delicate balance,
00:44:34.100 | because we don't understand exactly what it takes
00:44:36.620 | to build AGI and what it will look like when we build it.
00:44:39.820 | And so far, like you said,
00:44:41.260 | it seems like a lot of different parties
00:44:43.620 | are able to make progress.
00:44:45.740 | So when open AI has a big leap,
00:44:48.380 | other companies are able to step up,
00:44:49.900 | big and small companies in different ways.
00:44:52.660 | But if you look at something like nuclear weapons,
00:44:55.340 | you've spoken about the Manhattan Project,
00:44:57.700 | that could be really like a technological
00:45:02.620 | and engineering barriers that prevent
00:45:04.780 | the guy or gal in her mom's basement to make progress.
00:45:11.260 | And it seems like the transition to that kind of world
00:45:16.260 | where only one player can develop AGI is possible.
00:45:20.620 | So it's not entirely impossible,
00:45:22.780 | even though the current state of things
00:45:24.380 | seems to be optimistic.
00:45:26.380 | - That's what we're trying to avoid.
00:45:27.660 | To me, I think like another point of failure
00:45:30.540 | is the centralization of the supply chains for the hardware.
00:45:34.780 | - Oh, yeah.
00:45:35.620 | - We have NVIDIA is just the dominant player.
00:45:41.380 | AMD is trailing behind.
00:45:42.740 | And then we have a TSMC as the main fab in Taiwan,
00:45:47.740 | which geopolitically sensitive.
00:45:52.820 | And then we have ASML,
00:45:54.380 | which is the maker of the lithography,
00:45:57.940 | extreme ultraviolet lithography machines.
00:46:00.380 | Attacking or monopolizing
00:46:04.620 | or co-opting any one point in that chain,
00:46:08.180 | you kind of capture the space.
00:46:10.740 | And so what I'm trying to do is sort of explode the variance
00:46:15.740 | of possible ways to do AI and hardware
00:46:20.940 | by fundamentally re-imagining how you embed AI algorithms
00:46:24.300 | into the physical world.
00:46:26.540 | And in general, by the way,
00:46:28.740 | I dislike the term AGI, artificial general intelligence.
00:46:32.620 | I think it's very anthropocentric
00:46:35.860 | that we call human-like or human-level AI,
00:46:40.860 | artificial general intelligence.
00:46:43.300 | I've spent my career so far
00:46:45.380 | exploring notions of intelligence
00:46:47.020 | that no biological brain could achieve.
00:46:50.020 | Quantum form of intelligence.
00:46:51.740 | Grokking systems that have multi-partite quantum entanglement
00:46:56.900 | that you can provably not represent efficiently
00:47:00.660 | on a classical computer,
00:47:02.180 | a classical deep learning representation,
00:47:03.980 | and hence any sort of biological brain.
00:47:06.780 | And so already, I've spent my career
00:47:10.980 | sort of exploring the wider space of intelligences.
00:47:15.740 | And I think that space of intelligence inspired by physics
00:47:21.140 | rather than the human brain is very large.
00:47:25.060 | And I think we're going through a moment right now
00:47:28.260 | similar to when we went from geocentrism
00:47:33.360 | to heliocentrism, right?
00:47:36.020 | But for intelligence.
00:47:37.700 | We realized that human intelligence is just a point
00:47:41.460 | in a very large space of potential intelligences.
00:47:45.180 | And it's both humbling for humanity.
00:47:49.580 | It's a bit scary, right?
00:47:51.220 | That we're not at the center of the space.
00:47:54.780 | But we made that realization for astronomy
00:47:59.220 | and we've survived and we've achieved technologies
00:48:03.220 | by indexing to reality.
00:48:04.760 | We've achieved technologies that ensure our wellbeing.
00:48:07.980 | For example, we have satellites monitoring solar flares
00:48:12.180 | that give us a warning.
00:48:13.820 | And so similarly, I think by letting go
00:48:18.300 | of this anthropomorphic, anthropocentric anchor for AI,
00:48:23.300 | we'll be able to explore the wider space of intelligences
00:48:26.580 | that can really be a massive benefit to our wellbeing
00:48:31.020 | and the advancement of civilization.
00:48:32.700 | And still we're able to see the beauty and meaning
00:48:35.660 | in the human experience even though we're no longer
00:48:39.540 | in our best understanding of the world at the center of it.
00:48:42.940 | - I think there's a lot of beauty in the universe, right?
00:48:46.500 | I think life itself, civilization,
00:48:49.420 | this homo-techno-capital-memetic machine
00:48:53.620 | that we all live in, right?
00:48:54.940 | So you have humans, technology, capital, memes.
00:48:59.820 | Everything is coupled to one another.
00:49:02.300 | Everything induces a selective pressure on one another.
00:49:05.380 | And it's a beautiful machine that has created us,
00:49:07.860 | has created the technology we're using to speak today
00:49:11.740 | to the audience, capture our speech here,
00:49:15.020 | technology we use to augment ourselves every day.
00:49:17.120 | We have our phones.
00:49:19.300 | I think the system is beautiful and the principle
00:49:22.900 | that induces this sort of adaptability and convergence
00:49:27.580 | on optimal technologies, ideas, and so on.
00:49:32.580 | It's a beautiful principle that we're part of.
00:49:37.300 | And I think part of e/acc is to appreciate this principle
00:49:42.300 | in a way that's not just centered on humanity
00:49:48.500 | but kind of broader.
00:49:49.900 | Appreciate life, the preciousness of consciousness
00:49:55.580 | in our universe, and because we cherish
00:49:59.360 | this beautiful state of matter we're in,
00:50:02.520 | we gotta feel a responsibility to scale it
00:50:08.240 | in order to preserve it because the options
00:50:11.180 | are to grow or die.
00:50:13.940 | - So if it turns out that the beauty
00:50:18.060 | that is consciousness in the universe
00:50:20.900 | is bigger than just humans,
00:50:23.220 | that AI can carry that same flame forward,
00:50:25.980 | does it scare you or are you concerned
00:50:30.220 | that AI will replace humans?
00:50:32.620 | - So during my career, I had a moment where I realized
00:50:37.100 | that maybe we need to offload to machines
00:50:42.100 | to truly understand the universe around us, right?
00:50:45.240 | Instead of just having humans with pen and paper
00:50:48.460 | solve it all.
00:50:49.980 | And to me, that sort of process of letting go
00:50:54.980 | of a bit of agency gave us way more leverage
00:50:59.820 | to understand the world around us.
00:51:01.820 | A quantum computer is much better than a human
00:51:03.660 | to understand matter at the nanoscale.
00:51:08.140 | Similarly, I think that humanity has a choice.
00:51:13.140 | Do we accept the opportunity to have intellectual
00:51:18.420 | and operational leverage that AI will unlock
00:51:21.580 | and thus ensure that we're taking along
00:51:25.300 | this path of growth and scope and scale of civilization?
00:51:29.080 | We may dilute ourselves, right?
00:51:32.260 | There might be a lot of workers that are AI,
00:51:35.420 | but overall, out of our own self-interest,
00:51:39.540 | by combining and augmenting ourselves with AI,
00:51:42.440 | we're gonna achieve much higher growth
00:51:46.260 | and much more prosperity, right?
00:51:49.180 | To me, I think that the most likely future
00:51:51.980 | is one where humans augment themselves with AI.
00:51:56.540 | I think we're already on this path to augmentation.
00:51:59.540 | We have phones we use for communication.
00:52:02.540 | We have on ourselves at all times.
00:52:04.020 | We have wearables soon that have shared perception with us,
00:52:09.020 | right, like the Humane AI Pin or, I mean,
00:52:12.420 | technically, your Tesla car has shared perception.
00:52:16.300 | And so if you have shared experience, shared context,
00:52:19.100 | you communicate with one another,
00:52:21.720 | and you have some sort of IO,
00:52:24.820 | really, it's an extension of yourself.
00:52:27.620 | And to me, I think that humanity augmenting itself with AI
00:52:37.860 | and having AI that is not anchored to anything biological,
00:52:42.860 | both will coexist.
00:52:46.140 | And the way to align the parties,
00:52:48.740 | we already have a sort of mechanism
00:52:51.220 | to align super intelligences
00:52:53.560 | that are made of humans and technology, right?
00:52:56.120 | Companies are sort of large mixture of expert models
00:53:00.580 | where we have neural routing of tasks within a company,
00:53:05.180 | and we have ways of economic exchange
00:53:07.540 | to align these behemoths.
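For reference, the mixture-of-experts pattern being used as an analogy here looks roughly like the following; this is a toy sketch with arbitrary dimensions and random weights, not any particular production architecture:

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_forward(x, experts, gate):
    # The gate scores each expert for this input, and the output is the
    # score-weighted mixture of the experts' outputs ("neural routing of tasks").
    scores = softmax(gate @ x)                        # shape: (num_experts,)
    outputs = np.stack([expert(x) for expert in experts])
    return scores @ outputs                           # weighted combination

rng = np.random.default_rng(0)
dim, num_experts = 4, 3
# Toy "experts": each is just a different random linear map.
experts = [(lambda v, W=rng.normal(size=(dim, dim)): W @ v) for _ in range(num_experts)]
gate = rng.normal(size=(num_experts, dim))
print(moe_forward(rng.normal(size=dim), experts, gate))
```

The analogy is that a company routes tasks to the members best scored for them and mixes their contributions, with economic exchange playing the role of the gating signal.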
00:53:10.340 | And to me, I think capitalism is the way.
00:53:14.460 | And I do think that whatever configuration
00:53:18.780 | of matter or information leads to maximal growth
00:53:23.440 | will be where we converge just from like physical principles.
00:53:28.440 | And so we can either align ourselves to that reality
00:53:33.140 | and join the acceleration up in scope and scale
00:53:38.140 | of civilization, or we can get left behind
00:53:42.660 | and try to decelerate and move back in the forest,
00:53:47.060 | let go of technology and return to our primitive state.
00:53:51.180 | And those are the two paths forward, at least to me.
00:53:54.860 | - But there's a philosophical question
00:53:56.220 | whether there's a limit to the human capacity to align.
00:53:59.820 | So let me bring it up as a form of argument.
00:54:04.820 | This is a guy named Dan Hendrycks,
00:54:07.280 | and he wrote that he agrees with you
00:54:11.420 | that AI development could be viewed
00:54:12.940 | as an evolutionary process.
00:54:14.560 | But to him, to Dan, this is not a good thing,
00:54:19.500 | as he argues that natural selection favors AIs over humans,
00:54:23.540 | and this could lead to human extinction.
00:54:26.020 | What do you think?
00:54:26.900 | If it is an evolutionary process and AI systems
00:54:30.340 | may have no need for humans?
00:54:35.360 | - I do think that we're actually inducing
00:54:39.820 | an evolutionary process on the space of AIs
00:54:43.360 | through the market, right?
00:54:45.580 | Right now we run AIs that have positive utility to humans,
00:54:50.580 | and that induces a selective pressure.
00:54:54.280 | If you consider a neural net being alive
00:54:57.060 | when there's an API running instances of it on GPUs, right?
00:55:02.060 | And which APIs get run,
00:55:04.740 | the ones that have high utility to us, right?
00:55:07.820 | So similar to how we domesticated wolves
00:55:11.100 | and turned them into dogs
00:55:13.220 | that are very clear in their expression,
00:55:15.620 | they're very aligned, right?
00:55:18.260 | I think there's gonna be an opportunity to steer AI
00:55:22.260 | and achieve a highly aligned AI.
00:55:25.740 | And I think that humans plus AI
00:55:29.220 | is a very powerful combination,
00:55:31.420 | and it's not clear to me that pure AI
00:55:35.380 | would select out that combination.
00:55:40.620 | - So the humans are creating
00:55:41.780 | the selection pressure right now
00:55:43.780 | to create AIs that are aligned to humans.
00:55:48.700 | But given how AI develops
00:55:50.900 | and how quickly it can grow and scale,
00:55:53.260 | one of the concerns to me
00:55:56.720 | is unintended consequences.
00:55:58.900 | Humans are not able to anticipate
00:56:00.700 | all the consequences of this process.
00:56:04.340 | The scale of damage that could be done
00:56:06.580 | through unintended consequences of AI systems is very large.
00:56:10.780 | - The scale of the upside, right,
00:56:13.940 | of augmenting ourselves with AI, is unimaginable right now.
00:56:18.500 | The opportunity cost,
00:56:19.960 | we're at a fork in the road, right?
00:56:22.500 | Whether we take the path of creating these technologies,
00:56:25.820 | augment ourselves,
00:56:27.460 | and get to climb up the Kardashev scale,
00:56:30.300 | become multi-planetary with the aid of AI,
00:56:33.220 | or we have a hard cutoff
00:56:35.940 | of like we don't birth these technologies at all,
00:56:38.980 | and then we leave all the potential upside on the table.
00:56:42.740 | And to me, out of responsibility to the future humans
00:56:47.260 | we could carry with higher carrying capacity
00:56:50.420 | by scaling up civilization,
00:56:52.620 | out of responsibility to those humans,
00:56:54.380 | I think we have to make the greater, grander future happen.
00:56:58.460 | - Is there a middle ground between cutoff
00:57:01.240 | and all systems go?
00:57:02.980 | Is there some argument for caution?
00:57:05.240 | - I think, like I said, the market will exhibit caution.
00:57:09.480 | Every organism, company, consumer
00:57:13.000 | is acting out of self-interest,
00:57:15.380 | and they won't assign capital
00:57:18.760 | to things that have negative utility to them.
00:57:21.760 | - The problem is with the market
00:57:23.300 | is like there's not always perfect information.
00:57:26.180 | There's manipulation.
00:57:27.020 | There's bad faith actors that mess with the system.
00:57:31.200 | It's not always a
00:57:34.660 | rational and honest system.
00:57:40.980 | - Well, that's why we need freedom of information,
00:57:44.600 | freedom of speech, and freedom of thought
00:57:47.200 | in order to converge,
00:57:49.180 | be able to converge on the subspace of technologies
00:57:52.760 | that have positive utility for us all, right?
00:57:56.240 | - Well, let me ask you about P-Doom.
00:57:58.880 | Probability of doom, that's just fun to say,
00:58:03.580 | but not fun to experience.
00:58:05.060 | What is, to you, the probability
00:58:08.220 | that AI eventually kills all or most humans,
00:58:11.740 | also known as probability of doom?
00:58:14.900 | - I'm not a fan of that calculation.
00:58:18.320 | I think it's, people just throw numbers out there,
00:58:22.320 | and it's a very sloppy calculation, right?
00:58:24.240 | To calculate a probability,
00:58:25.900 | let's say you model the world
00:58:29.280 | as some sort of Markov process,
00:58:31.800 | if you have enough variables, or a hidden Markov process.
00:58:35.640 | You need to do a stochastic path integral
00:58:39.800 | through the space of all possible futures,
00:58:43.480 | not just the futures that your brain
00:58:46.740 | naturally steers towards, right?
00:58:48.880 | I think that the estimators of P-Doom
00:58:53.640 | are biased because of our biology, right?
00:58:58.640 | We've evolved to have bias sampling
00:59:03.600 | towards negative futures that are scary
00:59:06.440 | because that was an evolutionary optimum, right?
00:59:09.000 | And so, people that are of, let's say,
00:59:12.920 | higher neuroticism will just think of negative futures
00:59:17.920 | where everything goes wrong all day, every day,
00:59:22.280 | and claim that they're doing unbiased sampling.
00:59:25.680 | And in a sense, they're not normalizing
00:59:30.060 | for the space of all possibilities,
00:59:32.000 | and the space of all possibilities
00:59:33.620 | is super exponentially large.
00:59:37.240 | And it's very hard to have this estimate.
00:59:40.440 | And in general, I don't think that we can predict the future
00:59:44.040 | with that much granularity because of chaos, right?
00:59:48.480 | If you have a complex system,
00:59:49.880 | you have some uncertainty and a couple of variables.
00:59:52.800 | If you let time evolve,
00:59:54.400 | you have this concept of a Lyapunov exponent, right?
00:59:57.600 | A bit of fuzz becomes a lot of fuzz in our estimate,
01:00:01.080 | exponentially so over time.
01:00:04.480 | And I think we need to show some humility
01:00:08.140 | that we can't actually predict the future.
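A tiny numeric illustration of that Lyapunov point, using the logistic map at r = 4, a standard chaotic system chosen here purely for illustration: two trajectories that start a hair apart diverge exponentially until the initial fuzz covers the whole range.

```python
# Two nearly identical initial conditions in a chaotic map: the separation grows
# roughly like exp(lambda * t), with lambda = ln(2) for the logistic map at r = 4,
# until it saturates at order one -- "a bit of fuzz becomes a lot of fuzz."

def logistic(x, r=4.0):
    return r * x * (1 - x)

x_a, x_b = 0.400000, 0.400001   # almost identical starting states
for step in range(1, 51):
    x_a, x_b = logistic(x_a), logistic(x_b)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x_a - x_b):.3e}")
```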
01:00:10.480 | All we know, the only prior we have is the laws of physics.
01:00:14.220 | And that's what we're arguing for.
01:00:16.880 | The laws of physics say the system will wanna grow.
01:00:19.880 | And subsystems that are optimized for growth
01:00:24.040 | and replication are more likely in the future.
01:00:28.400 | And so, we should aim to maximize
01:00:31.080 | our current mutual information with the future.
01:00:33.880 | And the path towards that is for us to accelerate
01:00:37.040 | rather than decelerate.
01:00:40.040 | So, I don't have a P-Doom
01:00:42.720 | 'cause I think that, you know,
01:00:44.480 | similar to the quantum supremacy experiment at Google,
01:00:49.080 | I was in the room when they were running
01:00:51.520 | the simulations for that.
01:00:53.160 | That was an example of a quantum chaotic system
01:00:56.360 | where you cannot even estimate probabilities
01:01:00.360 | of certain outcomes
01:01:02.120 | with even the biggest supercomputer in the world, right?
01:01:05.820 | And so, that's an example of chaos.
01:01:08.020 | And I think the system is far too chaotic
01:01:10.420 | for anybody to have an accurate estimate
01:01:15.120 | of the likelihood of certain futures.
01:01:18.240 | If they were that good,
01:01:19.280 | I think they would be very rich trading on the stock market.
01:01:23.280 | - But nevertheless, it's true that humans are biased,
01:01:26.960 | grounded in our evolutionary biology,
01:01:30.280 | scared of everything that can kill us.
01:01:32.880 | But we can still imagine different trajectories
01:01:35.880 | that can kill us.
01:01:37.640 | We don't know all the other ones that don't, necessarily.
01:01:42.640 | But it's still, I think, useful,
01:01:44.560 | combined with some basic intuition
01:01:46.400 | grounded in human history,
01:01:48.320 | to reason about, like, what...
01:01:50.840 | Like, looking at geopolitics,
01:01:52.400 | looking at basics of human nature,
01:01:55.800 | how can powerful technology hurt a lot of people?
01:02:00.600 | And it just seems, grounded in that,
01:02:04.160 | looking at nuclear weapons,
01:02:06.440 | you can start to estimate P-Doom
01:02:10.120 | maybe in a more philosophical sense,
01:02:15.240 | not a mathematical one.
01:02:16.820 | Philosophical meaning, like, is there a chance?
01:02:21.820 | Does human nature tend towards that, or not?
01:02:25.600 | - I think, to me, one of the biggest existential risks
01:02:29.320 | would be the concentration of the power of AI
01:02:33.520 | in the hands of the very few.
01:02:35.400 | Especially if it's a mix between the companies
01:02:38.760 | that control the flow of information, and the government.
01:02:42.920 | Because that could set things up
01:02:46.560 | for a sort of dystopian future,
01:02:49.460 | where only a very few, an oligopoly in the government,
01:02:54.240 | have AI, and they could even convince the public
01:02:57.880 | that AI never existed.
01:02:59.760 | And that opens up sort of these scenarios
01:03:03.520 | for authoritarian, centralized control,
01:03:06.600 | which, to me, is the darkest timeline.
01:03:09.680 | And the reality is that we have a prior,
01:03:13.200 | we have a data-driven prior, of these things happening.
01:03:16.080 | Right, when you give too much power,
01:03:17.440 | when you centralize power too much,
01:03:19.200 | humans do horrible things, right?
01:03:23.480 | And to me, that has a much higher likelihood
01:03:27.880 | in my Bayesian inference, than sci-fi-based priors, right?
01:03:32.880 | Like a prior that came from the Terminator movie.
01:03:37.760 | And so, when I talk to these AI doomers,
01:03:42.160 | I just ask them to trace a path
01:03:45.400 | through this Markov chain of events
01:03:47.520 | that would lead to our doom, right?
01:03:49.600 | And to actually give me a good probability
01:03:51.560 | for each transition.
01:03:53.080 | And very often, there's an unphysical
01:03:57.200 | or highly unlikely transition in that chain, right?
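As an illustration of that exercise, the sketch below multiplies hypothetical transition probabilities along one such chain; every event name and number is a placeholder rather than an estimate, and the point is only that a single near-zero link collapses the estimate for that whole path.

```python
# Illustrative only: tracing one path through a hypothesized Markov chain of events.
# The event names and probabilities are placeholders, not claims about the world.
import math

doom_path = [
    ("AGI is built", 0.8),
    ("it escapes all oversight", 0.2),
    ("it acquires decisive physical capabilities", 0.05),
    ("it chooses to eliminate humans", 0.1),
]

p_path = math.prod(p for _, p in doom_path)
print(f"probability of this particular path: {p_path:.4g}")
# One unphysical or highly unlikely transition drags the whole product down,
# and this is only one path out of a super-exponentially large space of futures.
```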
01:04:01.160 | But of course, we're wired to fear things,
01:04:06.160 | and we're wired to respond to danger,
01:04:09.280 | and we're wired to deem the unknown to be dangerous,
01:04:14.280 | because that's a good heuristic for survival, right?
01:04:18.360 | But there's much more to lose out of fear:
01:04:22.800 | we have so much to lose,
01:04:25.160 | so much upside to lose, by preemptively stopping
01:04:29.680 | the positive futures from happening.
01:04:33.100 | And so, I think that we shouldn't give in to fear.
01:04:39.080 | Fear is the mind killer.
01:04:40.320 | I think it's also the civilization killer.
01:04:43.000 | - We can still think about the various ways
01:04:45.400 | things go wrong.
01:04:46.360 | For example, the founding fathers of the United States
01:04:49.840 | thought about human nature,
01:04:51.320 | and that's why there's a discussion
01:04:53.400 | about the freedoms that are necessary.
01:04:55.540 | They really deeply deliberated about that,
01:04:59.240 | and I think the same could possibly be done for AGI.
01:05:03.320 | It is true that history, human history,
01:05:05.240 | shows that we tend towards centralization,
01:05:09.000 | or at least when we achieve centralization,
01:05:11.720 | a lot of bad stuff happens.
01:05:13.680 | When there's a dictator, a lot of dark, bad things happen.
01:05:18.680 | The question is, can AGI become that dictator?
01:05:23.200 | Can AGI, when developed, become the centralizer
01:05:27.160 | because of its power?
01:05:30.520 | Maybe it has the same,
01:05:32.460 | because of the alignment of humans perhaps,
01:05:34.560 | the same tendencies,
01:05:36.400 | the same Stalin-like tendencies to centralize
01:05:40.280 | and manage centrally the allocation of resources.
01:05:45.280 | And you can even see that as a compelling argument
01:05:48.160 | on the surface level.
01:05:49.560 | Well, AGI is so much smarter,
01:05:51.400 | so much more efficient,
01:05:52.520 | so much better at allocating resources.
01:05:54.480 | Why don't we outsource it to the AGI?
01:05:58.080 | And then eventually,
01:05:59.920 | whatever forces that corrupt the human mind with power
01:06:03.600 | could do the same for AGI.
01:06:05.040 | It'll just say, well, humans are dispensable.
01:06:09.240 | We'll get rid of them.
01:06:10.720 | Do the Jonathan Swift modest proposal
01:06:15.080 | from a few centuries ago,
01:06:16.960 | I think the 1700s,
01:06:19.280 | when he satirically suggested that,
01:06:23.760 | I think it's in Ireland,
01:06:25.480 | that the children of poor people
01:06:28.360 | are fed as food to the rich people.
01:06:33.560 | And that would be a good idea
01:06:34.840 | because it decreases the number of poor people
01:06:38.040 | and gives extra income to the poor people.
01:06:40.480 | So it works on several accounts:
01:06:43.040 | it decreases the number of poor people,
01:06:45.640 | and therefore more people become rich.
01:06:48.280 | Of course, it misses a fundamental piece here
01:06:53.000 | that's hard to put into a mathematical equation
01:06:56.200 | of the basic value of human life.
01:06:58.480 | So all of that to say,
01:07:01.840 | are you concerned about AGI
01:07:03.840 | being the very centralizer of power
01:07:06.640 | that you just talked about?
01:07:09.160 | - I do think that right now
01:07:12.600 | there's a bias towards over-centralization of AI
01:07:16.640 | because of compute density
01:07:19.600 | and centralization of data
01:07:22.560 | and how we're training models.
01:07:24.840 | I think over time,
01:07:26.800 | we're gonna run out of data to scrape over the internet.
01:07:29.760 | And I think that,
01:07:31.000 | well, actually I'm working on increasing the compute density
01:07:34.120 | so that compute can be everywhere
01:07:36.440 | and acquire information
01:07:38.560 | and test hypotheses in the environment
01:07:40.680 | in a distributed fashion.
01:07:43.200 | I think that fundamentally centralized cybernetic control,
01:07:46.800 | so having one intelligence that is massive,
01:07:51.200 | that fuses many sensors
01:07:54.520 | and is trying to perceive the world accurately,
01:07:57.120 | predict it accurately,
01:07:58.240 | predict many, many variables
01:08:00.080 | and control it and enact its will upon the world.
01:08:04.480 | I think that's just never been the optimum, right?
01:08:08.240 | Like let's say you have a company,
01:08:11.360 | if you have a company, I don't know,
01:08:13.520 | of 10,000 people that all report to the CEO,
01:08:16.440 | even if that CEO is an AI,
01:08:18.040 | I think it would struggle
01:08:19.320 | to fuse all the information that is coming to it
01:08:24.040 | and then predict the whole system
01:08:26.080 | and then to enact its will.
01:08:28.120 | What has emerged in nature
01:08:31.040 | and in corporations and all sorts of systems
01:08:34.120 | is a notion of sort of hierarchical cybernetic control,
01:08:37.720 | right?
01:08:38.560 | You have, in a company it would be,
01:08:40.640 | you have like the individual contributors,
01:08:43.440 | they're self-interested
01:08:44.720 | and they're trying to achieve their tasks
01:08:48.080 | and they have a fine-grained,
01:08:50.440 | in terms of time and space, if you will,
01:08:52.880 | control loop and field of perception, right?
01:08:56.640 | They have their code base.
01:08:58.200 | Let's say you're in a software company
01:08:59.840 | and they have their code base,
01:09:01.080 | they iterate it on it intraday, right?
01:09:04.200 | And then the management maybe checks in.
01:09:06.840 | It has a wider scope.
01:09:08.720 | It has, let's say, five reports, right?
01:09:11.280 | And then it samples each person's update once per week.
01:09:15.840 | And then you can go up the chain
01:09:17.520 | and you have larger timescale and greater scope.
01:09:20.280 | And that seems to have emerged
01:09:21.800 | as sort of the optimal way to control systems.
01:09:25.280 | And really that's what capitalism gives us, right?
01:09:29.760 | You have these hierarchies
01:09:31.960 | and you can even have like parent companies and so on.
01:09:35.480 | And so that is far more fault tolerant.
01:09:39.440 | In quantum computing, that's my field I came from,
01:09:42.040 | we have a concept of this fault tolerance
01:09:44.960 | and quantum error correction, right?
01:09:46.400 | Quantum error correction is detecting a fault
01:09:49.160 | that came from noise,
01:09:50.640 | predicting how it's propagated through the system
01:09:53.520 | and then correcting it, right?
01:09:54.960 | So it's a cybernetic loop.
01:09:56.680 | And it turns out that decoders that are hierarchical,
01:10:01.680 | and at each level of the hierarchy are local,
01:10:04.840 | perform the best by far and are far more fault tolerant.
01:10:09.360 | And the reason is if you have a non-local decoder,
01:10:13.000 | then you have one fault at this control node
01:10:17.320 | and the whole system sort of crashes.
01:10:20.000 | Similarly to if you have one CEO
01:10:24.800 | that everybody reports to and that CEO goes on vacation,
01:10:27.920 | the whole company comes to a crawl, right?
01:10:30.680 | And so to me, I think that,
01:10:32.640 | yes, we're seeing a tendency towards centralization of AI,
01:10:37.040 | but I think there's gonna be a correction over time
01:10:40.000 | where intelligence is gonna go closer to the perception
01:10:43.600 | and we're gonna break up AI into smaller subsystems
01:10:48.600 | that communicate with one another
01:10:52.280 | and form a sort of meta system.
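To ground the earlier point about hierarchical, local decoders being more fault tolerant, here is a classical toy analogue: a three-bit repetition code corrected by a local majority vote. Real quantum error correction uses stabilizer codes and syndrome decoding; this sketch only conveys the flavor of the detect-infer-correct loop and the threshold argument, and the flip rate is an arbitrary placeholder.

```python
# Classical toy analogue of an error-correction loop: encode with redundancy,
# let noise act, then decode with a local majority vote.
import random

def encode(bit):
    return [bit] * 3  # one logical bit stored in three physical bits

def noisy_channel(codeword, p_flip=0.05):
    return [b ^ (random.random() < p_flip) for b in codeword]  # independent bit flips

def decode(codeword):
    return int(sum(codeword) >= 2)  # local majority vote corrects any single flip

random.seed(0)
trials = 100_000
logical_errors = sum(decode(noisy_channel(encode(1))) != 1 for _ in range(trials))
print(f"physical flip rate 5% -> logical error rate ~{logical_errors / trials:.4f}")
# Below a threshold error rate, redundancy plus local decoding helps;
# above it, the extra components add more errors than they remove.
```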
01:10:55.340 | - So if you look at the hierarchies
01:10:57.480 | that are in the world today,
01:10:58.840 | there's nations and those are hierarchical,
01:11:01.840 | but in relation to each other, nations are anarchic,
01:11:05.600 | so it's an anarchy.
01:11:06.680 | Do you foresee a world like this
01:11:10.320 | where there's not a over,
01:11:13.200 | what'd you call it, a centralized cybernetic control?
01:11:16.960 | - Centralized locus of control, yeah.
01:11:19.360 | - So that's suboptimal, you're saying.
01:11:23.040 | So it would be always a state of competition
01:11:25.600 | at the very top level.
01:11:27.640 | - Yeah, just like in a company,
01:11:30.240 | you may have two units working on similar technology
01:11:34.680 | and competing with one another
01:11:36.320 | and you prune the one that doesn't perform as well, right?
01:11:39.640 | And that's a sort of selection process for a tree
01:11:42.240 | or a product gets killed, right?
01:11:44.080 | And then a whole org gets fired.
01:11:46.560 | And that's this process of trying new things
01:11:50.480 | and shedding old things that didn't work
01:11:53.720 | is what gives us adaptability
01:11:57.160 | and helps us converge on the technologies
01:12:00.760 | and things to do that are most good.
01:12:04.080 | - I just hope there's not a failure mode
01:12:05.920 | that's unique to AGI versus humans,
01:12:08.280 | 'cause you're describing human systems mostly right now.
01:12:11.660 | I just hope when there's a monopoly on AGI in one company
01:12:16.660 | that we'll see the same thing we see with humans,
01:12:20.120 | which is another company will spring up
01:12:22.220 | and start competing effectively.
01:12:23.500 | - I mean, that's been the case so far, right?
01:12:25.860 | We have OpenAI, we have Anthropic, now we have XAI.
01:12:29.460 | We had Meta even for open source
01:12:33.380 | and now we have Mistral, right?
01:12:35.260 | Which is highly competitive.
01:12:37.020 | And so that's the beauty of capitalism.
01:12:38.860 | You don't have to trust any one party too much
01:12:42.040 | 'cause we're kind of always hedging our bets at every level.
01:12:45.820 | There's always competition.
01:12:47.060 | And that's the most beautiful thing to me at least
01:12:51.020 | is that the whole system is always shifting
01:12:53.460 | and always adapting.
01:12:54.740 | And maintaining that dynamism is how we avoid tyranny, right?
01:12:59.140 | Making sure that everyone has access to these tools,
01:13:04.140 | to these models and can contribute to the research
01:13:08.940 | and avoids a sort of neural tyranny
01:13:11.940 | where very few people have control over AI for the world
01:13:16.940 | and use it to oppress those around them.
01:13:21.780 | - When you were talking about intelligence,
01:13:24.740 | you mentioned multipartite quantum entanglement.
01:13:27.780 | So high-level question first is
01:13:31.220 | what do you think is intelligence?
01:13:33.620 | When you think about quantum mechanical systems
01:13:35.380 | and you observe some kind of computation
01:13:37.340 | happening in them, what do you think is intelligent
01:13:42.340 | about the kind of computation the universe is able to do?
01:13:45.900 | A small, small inkling of which
01:13:47.700 | is the kind of computation a human brain is able to do?
01:13:50.460 | - I would say intelligence and computation
01:13:55.620 | aren't quite the same thing.
01:13:57.440 | I think that the universe is very much
01:14:01.660 | doing a quantum computation.
01:14:04.180 | If you had access to all of the degrees of freedom,
01:14:08.460 | you could in a very, very, very large quantum computer
01:14:12.420 | with many, many, many qubits,
01:14:14.580 | let's say a few qubits per Planck volume, right?
01:14:19.580 | Which are more or less the pixels we have.
01:14:24.820 | Then you'd be able to simulate the whole universe, right?
01:14:27.940 | On a sufficiently large quantum computer,
01:14:31.180 | assuming you're looking at a finite volume, of course,
01:14:34.380 | of the universe.
01:14:35.340 | I think that, at least to me, intelligence is the,
01:14:41.540 | I go back to cybernetics, right?
01:14:43.100 | The ability to perceive, predict, and control our world.
01:14:46.380 | But really it's, nowadays it seems like
01:14:49.300 | a lot of intelligence we use is more about compression, right?
01:14:54.300 | It's about operationalizing information theory, right?
01:15:00.300 | In information theory, you have the notion of entropy
01:15:03.740 | of a distribution or a system.
01:15:06.300 | And entropy tells you that you need this many bits
01:15:10.620 | to encode this distribution or this subsystem
01:15:16.140 | if you had the most optimal code.
01:15:19.040 | And AI, at least the way we do it today,
01:15:23.780 | for LLMs and for quantum,
01:15:27.500 | is very much trying to minimize relative entropy
01:15:32.500 | between our models of the world and the world,
01:15:38.660 | distributions from the world.
01:15:40.540 | And so we're learning, we're searching over the space
01:15:43.420 | of computations to process the world,
01:15:47.340 | to find that compressed representation
01:15:50.700 | that has distilled all the variance and noise and entropy.
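A minimal sketch of that framing, assuming a toy categorical "world": fitting a softmax model by gradient descent on cross-entropy, which is equivalent to minimizing the relative entropy (KL divergence) between the empirical distribution and the model. The distribution, sample size, and learning rate are arbitrary placeholders.

```python
# Minimize relative entropy between a model q and observations of a "world" p.
import numpy as np

rng = np.random.default_rng(0)
p_true = np.array([0.7, 0.2, 0.1])                     # the world (unknown to the model)
data = rng.choice(3, size=5000, p=p_true)              # observations of the world
empirical = np.bincount(data, minlength=3) / len(data)

logits = np.zeros(3)                                   # model parameters
for _ in range(500):
    q = np.exp(logits) / np.exp(logits).sum()          # softmax model distribution
    logits -= 0.5 * (q - empirical)                    # gradient of cross-entropy wrt logits

q = np.exp(logits) / np.exp(logits).sum()
kl = float(np.sum(empirical * np.log(empirical / q)))  # KL(empirical || model)
print("empirical:", np.round(empirical, 3), "learned q:", np.round(q, 3), "KL:", round(kl, 6))
```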
01:15:57.380 | And originally, I came to quantum machine learning
01:16:02.180 | from the study of black holes
01:16:03.780 | because the entropy of black holes is very interesting.
01:16:08.780 | In a sense, they're physically
01:16:11.940 | the most dense objects in the universe.
01:16:14.700 | You can't pack more information spatially,
01:16:18.500 | any more densely than in a black hole.
01:16:20.780 | And so I was wondering,
01:16:22.340 | how do black holes actually encode information?
01:16:26.820 | What is their compression code?
01:16:28.500 | And so that got me into the space of algorithms
01:16:31.780 | to search over space of quantum codes.
01:16:35.320 | And it got me actually into also,
01:16:40.460 | how do you acquire quantum information from the world?
01:16:44.060 | So something I've worked on, this is public now,
01:16:47.940 | is quantum analog digital conversion.
01:16:50.020 | So how do you capture information from the real world
01:16:54.540 | in superposition and not destroy the superposition,
01:16:57.540 | but digitize for a quantum mechanical computer,
01:17:01.060 | information from the real world?
01:17:04.260 | And so if you have an ability to capture quantum information
01:17:09.260 | and search over, learn representations of it,
01:17:13.300 | now you can learn compressed representations
01:17:15.620 | that may have some useful information
01:17:19.740 | in their latent representation, right?
01:17:23.900 | And I think that many of the problems
01:17:27.140 | facing our civilization are actually
01:17:29.700 | beyond this complexity barrier, right?
01:17:32.100 | I mean, the greenhouse effect
01:17:34.540 | is a quantum mechanical effect, right?
01:17:37.300 | Chemistry is quantum mechanical.
01:17:39.080 | Nuclear physics is quantum mechanical.
01:17:43.580 | A lot of biology and protein folding and so on
01:17:48.420 | is affected by quantum mechanics.
01:17:51.020 | And so unlocking an ability to augment human intellect
01:17:56.020 | with quantum mechanical computers
01:17:58.940 | and quantum mechanical AI seemed to me
01:18:01.140 | like a fundamental capability for civilization
01:18:04.780 | that we needed to develop.
01:18:06.220 | So I spent several years doing that,
01:18:09.820 | but over time I kind of grew weary of the timelines
01:18:14.660 | that were starting to look like nuclear fusion.
01:18:17.220 | - So one high level question I can ask is,
01:18:20.060 | maybe by way of definition, by way of explanation,
01:18:23.780 | what is a quantum computer
01:18:24.820 | and what is quantum machine learning?
01:18:27.260 | - So a quantum computer really is a quantum mechanical system
01:18:34.300 | over which we have sufficient control
01:18:40.660 | and it can maintain its quantum mechanical state.
01:18:44.300 | And quantum mechanics is how nature behaves
01:18:48.700 | at the very small scales
01:18:50.460 | when things are very small or very cold.
01:18:53.300 | And it's actually more fundamental than probability theory.
01:18:57.640 | So we're used to things being this or that,
01:19:00.080 | but we're not used to thinking in superpositions
01:19:05.140 | 'cause well, our brains can't do that.
01:19:09.180 | So we have to translate the quantum mechanical world
01:19:11.900 | to say linear algebra to grok it.
01:19:15.380 | Unfortunately, that translation
01:19:17.100 | is exponentially inefficient on average.
01:19:20.100 | You have to represent things with very large matrices,
01:19:23.620 | but really you can make a quantum computer
01:19:25.860 | out of many things, right?
01:19:27.100 | And we've seen all sorts of players from neutral atoms,
01:19:30.780 | trapped ions, superconducting metal,
01:19:35.220 | photons at different frequencies.
01:19:38.260 | I think you can make a quantum computer out of many things.
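A back-of-the-envelope sketch of the exponential inefficiency mentioned above: an n-qubit state needs 2^n complex amplitudes to write down classically. The 16 bytes per amplitude is an assumed double-precision encoding, not a figure from the conversation.

```python
# Classical memory needed to store a full n-qubit state vector.
for n in (10, 30, 50, 300):
    amplitudes = 2 ** n                   # one complex amplitude per basis state
    bytes_needed = amplitudes * 16        # assume complex128, 16 bytes each
    print(f"{n:>3} qubits -> {amplitudes:.3e} amplitudes "
          f"~ {bytes_needed / 1e9:.3e} GB to store classically")
```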
01:19:40.460 | But to me, the thing that was really interesting
01:19:44.880 | was both quantum machine learning
01:19:48.300 | was about understanding the quantum mechanical world
01:19:51.860 | with quantum computers.
01:19:53.260 | So embedding the physical world into AI representations
01:19:57.980 | and quantum computer engineering
01:19:59.620 | was embedding AI algorithms into the physical world.
01:20:03.960 | So this bidirectionality of embedding physical world
01:20:06.220 | into AI, AI into the physical world,
01:20:08.900 | the symbiosis between physics and AI,
01:20:12.060 | really that's the sort of core of my quest really,
01:20:17.060 | even to this day after quantum computing.
01:20:21.160 | It's still in this sort of journey
01:20:25.040 | to merge really physics and AI fundamentally.
01:20:29.320 | - So quantum machine learning is a way
01:20:31.400 | to do machine learning on a representation of nature
01:20:37.600 | that stays true to the quantum mechanical aspect of nature.
01:20:42.600 | - Yeah, it's learning quantum mechanical representations.
01:20:47.500 | That would be quantum deep learning.
01:20:49.300 | Alternatively, you can try to do classical machine learning
01:20:55.020 | on a quantum computer.
01:20:56.660 | I wouldn't advise it because you may have some speed ups,
01:21:01.180 | but very often the speed ups come with huge costs.
01:21:05.980 | Using a quantum computer is very expensive.
01:21:08.280 | Why is that?
01:21:09.120 | Because you assume the computer
01:21:10.960 | is operating at zero temperature,
01:21:13.480 | which no physical system in the universe
01:21:15.800 | can achieve that temperature.
01:21:17.240 | So what you have to do is what I've been mentioning,
01:21:19.040 | this quantum error correction process,
01:21:21.300 | which is really an algorithmic fridge, right?
01:21:24.360 | It's trying to pump entropy out of the system,
01:21:26.640 | trying to get it closer to zero temperature.
01:21:30.360 | And when you do the calculations
01:21:31.840 | of how many resources it would take to say,
01:21:33.760 | do deep learning on a quantum computer,
01:21:36.220 | classical deep learning,
01:21:37.420 | there's just such a huge overhead, it's not worth it.
01:21:42.020 | It's like thinking about shipping something across a city
01:21:45.300 | using a rocket and going to orbit and back.
01:21:48.220 | It doesn't make sense.
01:21:49.080 | Just use a delivery truck, right?
01:21:53.520 | - What kind of stuff can you figure out?
01:21:56.200 | Can you predict?
01:21:57.040 | Can you understand with quantum deep learning
01:21:59.460 | that you can't with deep learning?
01:22:00.900 | So incorporating quantum mechanical systems
01:22:03.140 | into the learning process.
01:22:05.620 | - I think that's a great question.
01:22:07.180 | I mean, fundamentally, it's any system
01:22:09.280 | that has sufficient quantum mechanical correlations
01:22:14.280 | that are very hard to capture for classical representations,
01:22:19.660 | then there should be an advantage
01:22:21.100 | for a quantum mechanical representation
01:22:22.900 | over a purely classical one.
01:22:24.900 | The question is which systems have sufficient correlations
01:22:29.540 | that are very quantum, but is also,
01:22:32.300 | which systems are still relevant to industry?
01:22:35.780 | That's a big question.
01:22:37.780 | People are leaning towards chemistry, nuclear physics.
01:22:41.580 | I've worked on actually processing inputs
01:22:47.000 | from quantum sensors, right?
01:22:49.540 | If you have a network of quantum sensors,
01:22:52.660 | they've captured a quantum mechanical image of the world
01:22:55.860 | and how to post-process that,
01:22:57.400 | that becomes a sort of quantum form of machine perception.
01:23:00.100 | And so, for example, Fermilab has a project exploring
01:23:04.900 | detecting dark matter with these quantum sensors.
01:23:08.460 | And to me, that's in alignment with my quest
01:23:11.900 | to understand the universe ever since I was a child.
01:23:14.140 | And so, someday, I hope that we can have
01:23:16.780 | very large networks of quantum sensors
01:23:18.560 | that help us peer into the earliest parts of the universe.
01:23:24.500 | For example, the LIGO is a quantum sensor, right?
01:23:27.800 | It's just a very large one.
01:23:29.160 | So, yeah, I would say quantum machine perception,
01:23:33.540 | simulations, right, grokking quantum simulations,
01:23:37.540 | similar to AlphaFold, right?
01:23:39.660 | AlphaFold understood the probability distribution
01:23:43.140 | over configurations of proteins.
01:23:44.980 | You can understand quantum distributions
01:23:48.400 | over configurations of electrons more efficiently
01:23:51.540 | with quantum machine learning.
01:23:53.500 | - You co-authored a paper titled
01:23:55.480 | A Universal Training Algorithm for Quantum Deep Learning
01:23:58.500 | that involves backprop with a Q.
01:24:02.160 | Very well done, sir, very well done.
01:24:05.620 | How does it work?
01:24:06.660 | Is there some interesting aspects you could just mention
01:24:09.620 | on how kind of backprop and some of these things
01:24:13.940 | we know for classical machine learning
01:24:15.780 | transfer over to the quantum machine learning?
01:24:19.460 | - Yeah, that was a funky paper.
01:24:21.540 | That was one of my first papers in quantum deep learning.
01:24:24.580 | Everybody was saying, "Oh, I think deep learning
01:24:27.620 | "is gonna be sped up by quantum computers."
01:24:29.660 | And I was like, "Well, the best way
01:24:30.580 | "to predict the future is to invent it.
01:24:32.020 | "So, here's a 100-page paper.
01:24:34.020 | "Have fun."
01:24:34.860 | Essentially, quantum computing is usually
01:24:41.140 | you embed reversible operations into a quantum computation.
01:24:46.340 | And so, the trick there was to do a feed-forward operation
01:24:51.300 | and do what we call a phase kick.
01:24:52.780 | But really, it's just a force kick.
01:24:54.220 | You just kick the system with a certain force
01:24:58.260 | that is proportional to your loss function
01:25:02.540 | that you wish to optimize.
01:25:04.900 | And then, by performing uncomputation,
01:25:08.700 | you start with a superposition over parameters,
01:25:13.700 | which is pretty funky.
01:25:15.100 | Now, you don't have just a point for parameters.
01:25:18.300 | You have a superposition over many potential parameters.
01:25:23.100 | And our goal is to--
01:25:24.660 | - Is using phase kicks somehow--
01:25:26.700 | - Right. - To adjust parameters.
01:25:28.340 | - 'Cause phase kicks emulate having the parameter space
01:25:33.340 | be like a particle in N dimensions.
01:25:37.700 | And you're trying to get the Schrodinger equation,
01:25:40.900 | Schrodinger dynamics, in the loss landscape
01:25:43.780 | of the neural network.
01:25:45.780 | And so, you do an algorithm to induce this phase kick,
01:25:49.100 | which involves a feed-forward, a kick.
01:25:52.700 | And then, when you uncompute the feed-forward,
01:25:56.140 | then all the errors in these phase kicks
01:25:58.980 | and these forces back-propagate
01:26:01.380 | and hit each one of the parameters throughout the layers.
01:26:04.740 | And if you alternate this
01:26:06.100 | with an emulation of kinetic energy,
01:26:09.460 | then it's kind of like a particle moving in N dimensions,
01:26:13.260 | a quantum particle.
01:26:15.380 | And the advantage, in principle,
01:26:18.300 | would be that it can tunnel through the landscape
01:26:20.740 | and find new optima that would have been difficult
01:26:24.540 | for stochastic optimizers.
01:26:26.660 | But again, this is kind of a theoretical thing.
01:26:30.760 | And in practice, with at least the current architectures
01:26:35.060 | for quantum computers that we have planned,
01:26:37.460 | such algorithms would be extremely expensive to run.
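For intuition only, here is a classical caricature of the kick-then-kinetic-energy alternation: leapfrog (Hamiltonian) dynamics of a particle in a placeholder loss landscape. This is not the quantum algorithm from the paper; there is no superposition over parameters and no tunneling here, just the classical limit of the alternation being described, with made-up step sizes.

```python
# Classical analogue: alternate a force kick proportional to the loss gradient
# with a kinetic-energy drift, so the parameters behave like a particle.
import numpy as np

def loss(theta):
    return 0.5 * np.sum(theta ** 2)          # placeholder loss landscape

def grad_loss(theta):
    return theta

rng = np.random.default_rng(0)
theta = rng.normal(size=2)                   # "parameters" = position of the particle
momentum = np.zeros(2)
step = 0.1

for _ in range(100):
    momentum -= 0.5 * step * grad_loss(theta)   # half force kick (the "phase kick" analogue)
    theta += step * momentum                    # kinetic-energy drift
    momentum -= 0.5 * step * grad_loss(theta)   # half force kick
    momentum *= 0.95                            # mild friction so the particle settles

print("final parameters:", np.round(theta, 4), " loss:", round(float(loss(theta)), 6))
```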
01:26:41.300 | - So, maybe this is a good place
01:26:42.580 | to ask the difference between the different fields
01:26:45.820 | that you've had a toe in.
01:26:47.540 | So, mathematics, physics, engineering,
01:26:51.140 | and also entrepreneurship.
01:26:53.820 | Like, different layers of the stack.
01:26:56.460 | I think a lot of the stuff you're talking about here
01:26:58.340 | is a little bit on the math side,
01:26:59.820 | maybe physics, almost working in theory.
01:27:03.460 | What's the difference, to you,
01:27:04.500 | between math, physics, engineering,
01:27:08.140 | and making a product for quantum computing,
01:27:13.140 | for quantum machine learning?
01:27:14.780 | - Yeah, I mean, some of the original team
01:27:17.740 | for the TensorFlow quantum project,
01:27:19.360 | which we started in school at the University of Waterloo,
01:27:22.940 | there was myself.
01:27:24.540 | Initially, I was a physicist, a mathematician.
01:27:28.260 | We had a computer scientist, we had a mechanical engineer,
01:27:32.140 | and then we had a physicist that was experimental, primarily.
01:27:35.700 | And so, putting together teams
01:27:38.660 | that are very cross-disciplinary
01:27:39.980 | and figuring out how to communicate and share knowledge
01:27:43.220 | is really the key to doing
01:27:45.420 | this sort of interdisciplinary engineering work.
01:27:49.200 | I mean, there is a big difference.
01:27:53.660 | In mathematics, you can explore mathematics
01:27:55.700 | for mathematics' sake.
01:27:56.980 | In physics, you're applying mathematics
01:27:58.620 | to understand the world around us.
01:28:01.820 | And in engineering, you're trying to hack the world.
01:28:05.420 | You're trying to find how to apply the physics
01:28:08.020 | that I know, my knowledge of the world, to do things.
01:28:10.980 | - Well, in quantum computing in particular,
01:28:12.820 | I think there's just a lot of limits to engineering.
01:28:15.860 | It just seems to be extremely hard.
01:28:18.620 | So, there's a lot of value to be exploring
01:28:22.100 | quantum computing, quantum machine learning,
01:28:25.420 | in theory, with math.
01:28:29.200 | So, I guess one question is,
01:28:32.580 | why is it so hard to build a quantum computer?
01:28:36.060 | What's your view of timelines
01:28:40.060 | in bringing these ideas to life?
01:28:43.040 | - Right.
01:28:43.880 | I think that an overall theme of my company
01:28:48.260 | is that we have folks that are,
01:28:51.360 | there's a sort of exodus from quantum computing
01:28:55.180 | and we're going to broader physics-based AI
01:28:57.580 | that is not quantum.
01:28:58.820 | So, that gives you a hint.
01:29:00.780 | - But we should say the name of your company is Extropic.
01:29:03.180 | - Extropic, that's right.
01:29:05.020 | And we do physics-based AI,
01:29:06.980 | primarily based on thermodynamics
01:29:08.620 | rather than quantum mechanics.
01:29:10.700 | But essentially, a quantum computer
01:29:13.060 | is very difficult to build
01:29:15.000 | because you have to induce this
01:29:17.940 | sort of zero-temperature subspace of information.
01:29:22.640 | And the way to do that is by encoding information.
01:29:26.080 | You encode a code within a code within a code
01:29:29.100 | within a code.
01:29:30.500 | And so, there's a lot of redundancy needed
01:29:34.820 | to do this error correction.
01:29:36.620 | But ultimately, it's a sort of algorithmic refrigerator,
01:29:41.620 | really.
01:29:43.260 | It's just pumping out entropy out of the subsystem
01:29:46.540 | that is virtual and delocalized
01:29:49.060 | that represents your "logical qubits,"
01:29:52.060 | aka the payload quantum bits
01:29:54.260 | in which you actually want to run
01:29:58.020 | your quantum mechanical program.
01:30:00.080 | It's very difficult because in order to scale up
01:30:03.980 | your quantum computer,
01:30:05.380 | you need each component to be of sufficient quality
01:30:07.980 | for it to be worth it.
01:30:09.660 | Because if you try to do this error correction,
01:30:12.220 | this quantum error correction process
01:30:13.940 | in each quantum bit and your control over them,
01:30:16.740 | if it's insufficient, it's not worth scaling up.
01:30:21.780 | You're actually adding more errors than you remove.
01:30:24.140 | And so, there's this notion of a threshold
01:30:26.500 | where if your quantum bits are of sufficient quality
01:30:29.660 | in terms of your control over them,
01:30:31.360 | it's actually worth scaling up.
01:30:32.880 | And actually, in recent years,
01:30:34.760 | people have been crossing the threshold
01:30:37.120 | and it's starting to be worth it.
01:30:38.500 | And so, it's just a very long slog of engineering,
01:30:42.560 | but ultimately, it's really crazy to me
01:30:44.640 | how much exquisite level of control
01:30:46.780 | we have over these systems.
01:30:47.960 | It's actually quite crazy.
01:30:50.520 | And people are crossing, they're achieving milestones.
01:30:56.920 | It's just, in general, the media always gets ahead
01:31:01.660 | of where the technology is.
01:31:02.900 | There's a bit too much hype.
01:31:04.580 | It's good for fundraising,
01:31:05.700 | but sometimes it causes winters, right?
01:31:08.820 | It's the hype cycle.
01:31:10.560 | I'm bullish on quantum computing
01:31:12.000 | on a 10, 15 year timescale personally,
01:31:16.460 | but I think there's other quests that can be done
01:31:19.540 | in the meantime.
01:31:20.380 | I think it's in good hands right now.
01:31:22.540 | - Well, let me just explore different beautiful ideas
01:31:26.860 | large or small in quantum computing
01:31:29.060 | that might jump out at you from memory.
01:31:32.100 | So, you co-authored a paper titled,
01:31:33.920 | "Asymptotically Limitless Quantum Energy Teleportation
01:31:36.700 | "Via Q-DIT Probes."
01:31:39.060 | So, just out of curiosity,
01:31:42.460 | can you explain what a qudit is, as opposed to a qubit?
01:31:45.820 | - Yeah, it's a D-state qubit.
01:31:49.380 | - So, multidimensional.
01:31:50.540 | - Multidimensional, right.
01:31:51.900 | So, it's like, well, you know,
01:31:55.020 | can you have a notion of, like, an integer or floating point
01:31:58.340 | that is quantum mechanical?
01:31:59.260 | That's something I've had to think about.
01:32:01.300 | I think that research was a precursor
01:32:04.060 | to later work on quantum analog digital conversion.
01:32:06.700 | There it was interesting because during my master's,
01:32:12.300 | I was trying to understand the energy
01:32:15.540 | and entanglement of the vacuum, right, of emptiness.
01:32:20.020 | Emptiness has energy, which is very weird to say.
01:32:23.760 | And our equations of cosmology
01:32:26.780 | don't match our calculations for the amount
01:32:31.140 | of quantum energy there is in the fluctuations.
01:32:35.140 | And so, I was trying to hack the energy of the vacuum,
01:32:39.260 | right, and the reality is that
01:32:41.540 | you can't just directly hack it.
01:32:44.380 | It's not technically free energy.
01:32:46.460 | Your lack of knowledge of the fluctuations
01:32:48.500 | means you can't extract the energy.
01:32:51.340 | But just like, you know, in the stock market,
01:32:53.260 | if you have a stock that's correlated over time,
01:32:55.540 | the vacuum's actually correlated.
01:32:57.180 | So, if you measured the vacuum at one point,
01:33:01.140 | you acquired information.
01:33:02.660 | If you communicated that information to another point,
01:33:05.500 | you can infer what configuration the vacuum is in
01:33:10.500 | to some precision and statistically extract,
01:33:14.220 | on average, some energy there.
01:33:15.540 | So, you've quote-unquote teleported energy.
01:33:18.520 | To me, that was interesting
01:33:19.880 | because you could create pockets of negative energy density,
01:33:23.640 | which is energy density that is below the vacuum,
01:33:26.200 | which is very weird
01:33:28.560 | because we don't understand how the vacuum gravitates.
01:33:32.680 | And there are theories where the vacuum
01:33:37.920 | or the canvas of space-time itself
01:33:40.080 | is really a canvas made out of quantum entanglement.
01:33:45.440 | And I was studying how decreasing energy
01:33:50.320 | of the vacuum locally increases quantum entanglement,
01:33:54.160 | which is very funky.
01:33:55.320 | And so, the thing there is that, you know,
01:34:00.840 | if you're into weird theories about UAPs and whatnot,
01:34:07.080 | you could try to imagine that they're around
01:34:12.840 | and how would they propel themselves, right?
01:34:15.280 | How would they go faster than the speed of light?
01:34:19.080 | You would need a sort of negative energy density.
01:34:21.960 | And to me, I gave it the old college try
01:34:25.440 | trying to hack the energy of the vacuum
01:34:28.200 | and hit the limits allowable by the laws of physics.
01:34:30.800 | But there's all sorts of caveats there
01:34:34.280 | where you can't extract more than you've put in, obviously.
01:34:39.280 | - But you're saying it's possible to teleport the energy
01:34:44.720 | because you can extract information in one place
01:34:49.720 | and then make, based on that,
01:34:53.280 | some kind of prediction about another place?
01:34:56.920 | I'm not sure what I make of that.
01:34:58.720 | - Yeah, I mean, it's allowable by the laws of physics.
01:35:01.800 | The reality, though, is that the correlations
01:35:04.080 | decay with distance, and so you're gonna have
01:35:06.880 | to pay the price not too far away
01:35:09.320 | from where you extract it, right?
01:35:11.080 | - The precision decreases, I mean,
01:35:12.640 | in terms of your ability, but still.
01:35:15.080 | But since you mentioned UAPs,
01:35:19.000 | we talked about intelligence, and I forgot to ask.
01:35:21.840 | What's your view on the other possible intelligences
01:35:25.440 | that are out there at the meso scale?
01:35:29.280 | Do you think there's other intelligent alien civilizations?
01:35:32.520 | Is that useful to think about?
01:35:34.320 | How often do you think about it?
01:35:36.080 | - I think it's useful to think about
01:35:39.720 | because we gotta ensure we're anti-fragile
01:35:44.160 | and we're trying to increase our capabilities
01:35:47.840 | as fast as possible because we could get disrupted.
01:35:51.600 | There's no laws of physics against there
01:35:55.520 | being life elsewhere that could evolve
01:35:59.960 | and become an advanced civilization
01:36:01.600 | and eventually come to us.
01:36:04.560 | Do I think they're here now?
01:36:06.640 | I'm not sure.
01:36:08.680 | I mean, I've read what most people have read on the topic.
01:36:13.320 | I think it's interesting to consider.
01:36:16.400 | And to me, it's a useful thought experiment
01:36:20.360 | to instill a sense of urgency in developing technologies
01:36:24.720 | and increasing our capabilities
01:36:27.120 | to make sure we don't get disrupted, right?
01:36:30.560 | Whether it's a form of AI that disrupts us
01:36:34.840 | or a foreign intelligence from a different planet.
01:36:39.720 | Either way, increasing our capabilities
01:36:42.480 | and becoming formidable as humans,
01:36:45.900 | I think that's really important
01:36:48.840 | so that we're robust against
01:36:50.320 | whatever the universe throws at us.
01:36:51.720 | - But to me, it's also an interesting challenge
01:36:54.720 | and thought experiment on how to perceive intelligence.
01:36:59.080 | This has to do with quantum mechanical systems.
01:37:00.880 | This has to do with any kind of system
01:37:03.200 | that's not like humans.
01:37:05.480 | So to me, the thought experiment is,
01:37:08.400 | say the aliens are here and they are directly observable,
01:37:12.160 | but we're just too blind, too self-centered,
01:37:15.980 | don't have the right sensors
01:37:19.040 | or don't have the right processing of the sensor data
01:37:23.140 | to see the obvious intelligence that's all around us.
01:37:25.880 | - Well, that's why we work on quantum sensors, right?
01:37:28.840 | They can sense gravity.
01:37:30.960 | - Yeah, but there could be, so that's a good one,
01:37:33.580 | but there could be other stuff
01:37:34.920 | that's not even in the currently known forces of physics.
01:37:39.920 | There could be some other stuff.
01:37:45.880 | And the most entertaining thought experiment to me
01:37:48.400 | is that it's other stuff that's obvious.
01:37:51.360 | It's not like we don't, we lack the sensors.
01:37:53.160 | It's all around us.
01:37:54.320 | The consciousness being one possible one.
01:37:58.560 | But there could be stuff that's just like obviously there.
01:38:01.420 | And once you know it, it's like, oh, right, right.
01:38:05.580 | The thing we thought is somehow emergent
01:38:09.400 | from the laws of physics as we understand them
01:38:11.400 | is actually a fundamental part of the universe
01:38:15.240 | and can be incorporated in physics once understood.
01:38:17.900 | - Statistically speaking, right,
01:38:20.200 | if we observed some sort of alien life,
01:38:23.340 | it would most likely be some sort of
01:38:25.640 | virally self-replicating von Neumann-like probe system,
01:38:30.280 | right, and it's possible that there are such systems that,
01:38:35.280 | I don't know what they're doing
01:38:38.160 | at the bottom of the ocean allegedly,
01:38:39.640 | but maybe they're collecting minerals
01:38:43.240 | from the bottom of the ocean.
01:38:44.600 | - Yeah.
01:38:45.840 | - But that wouldn't violate any of my priors,
01:38:49.340 | but am I certain that these systems are here?
01:38:53.080 | And it'd be difficult for me to say so, right?
01:38:56.200 | I only have second-hand information about there being data.
01:38:59.360 | - About the bottom of the ocean?
01:39:00.920 | Yeah, but could it be things like memes?
01:39:03.880 | Could it be thoughts and ideas?
01:39:05.800 | Could they be operating in that medium?
01:39:09.200 | Could aliens be the very thoughts that come into my head?
01:39:12.000 | Like what do you, how do you know that,
01:39:17.520 | how do you know that, what's the origin of ideas
01:39:20.240 | in your mind when an idea comes to your head?
01:39:23.000 | Like show me where it originates.
01:39:25.240 | - I mean, frankly, when I had the idea
01:39:29.400 | for the type of computer I'm building now,
01:39:31.600 | I think it was eight years ago now,
01:39:33.600 | it really felt like it was being beamed from space.
01:39:36.200 | It's just, I was in bed just shaking,
01:39:39.240 | just thinking it through, and I don't know.
01:39:41.760 | But do I believe that legitimately?
01:39:43.520 | I don't think so.
01:39:44.480 | But you know, I think that alien life could take many forms,
01:39:49.480 | and I think the notion of intelligence
01:39:52.240 | and the notion of life needs to be expanded
01:39:56.840 | much more broadly, to be less anthropocentric or biocentric.
01:40:01.840 | - Just to linger a little longer on quantum mechanics,
01:40:08.080 | what's, through all your explorations of quantum computing,
01:40:11.360 | what's the coolest, most beautiful idea
01:40:15.640 | that you've come across that has been solved
01:40:17.560 | or has not yet been solved?
01:40:19.880 | - I think the journey to understand
01:41:24.880 | something called AdS/CFT,
01:40:27.040 | so the journey to understand quantum gravity
01:40:30.920 | through this picture where a hologram of lesser dimension
01:40:35.920 | is actually dual or exactly corresponding
01:40:41.040 | to a bulk theory of quantum gravity of an extra dimension.
01:40:46.720 | And the fact that this sort of duality
01:40:50.480 | comes from trying to learn deep learning-like representations
01:40:55.480 | of the boundary, and so at least part of my journey someday
01:41:01.040 | on my bucket list is to apply quantum machine learning
01:41:05.640 | to these sorts of systems, these CFTs,
01:41:10.120 | or they're called SYK models,
01:41:14.240 | and learn an emergent geometry from the boundary theory.
01:41:18.640 | And so we can have a form of machine learning
01:41:21.080 | to help us understand quantum gravity,
01:41:26.960 | which is still a holy grail that I would like to hit
01:41:31.320 | before I leave this earth. (laughs)
01:41:35.000 | - What do you think is going on with black holes
01:41:37.880 | as information storing and processing units?
01:41:43.500 | What do you think is going on with black holes?
01:41:46.160 | - Black holes are really fascinating objects.
01:41:49.280 | They're at the interface
01:41:50.840 | between quantum mechanics and gravity,
01:41:52.320 | and so they help us test all sorts of ideas.
01:41:54.640 | I think that for many decades now,
01:41:59.200 | there's been sort of this black hole information paradox
01:42:04.660 | where, for things that fall into the black hole,
01:42:04.660 | we seem to have lost their information.
01:42:08.880 | Now I think there's this firewall paradox
01:42:13.160 | that has been allegedly resolved in recent years
01:42:15.880 | by a former peer of mine who's now a professor at Berkeley,
01:42:20.880 | and there it seems like there is,
01:42:29.040 | as information falls into a black hole,
01:42:31.760 | there's sort of a sedimentation, right?
01:42:35.040 | As you get closer and closer to the horizon
01:42:37.360 | from the point of view of the observer on the outside,
01:42:40.680 | the object slows down infinitely
01:42:43.880 | as it gets closer and closer.
01:42:45.880 | And so everything that is falling to a black hole
01:42:49.400 | from our perspective gets sort of sedimented
01:42:52.400 | and tacked on to the near horizon.
01:42:55.400 | And at some point, it gets so close to the horizon,
01:42:57.520 | it's in the proximity or the scale
01:43:01.360 | in which quantum effects and quantum fluctuations matter.
01:43:04.460 | And that infalling matter, in the sort of
01:43:10.560 | traditional picture,
01:43:13.200 | can interfere with the creation
01:43:15.320 | and annihilation of particles
01:43:16.840 | and antiparticles in the vacuum.
01:43:19.040 | And through this interference,
01:43:20.960 | one of the particles gets entangled
01:43:23.880 | with the infalling information
01:43:25.680 | and one of them is now free and escapes.
01:43:28.040 | And that's how there's sort of mutual information
01:43:31.040 | between the outgoing radiation and the infalling matter.
01:43:36.040 | But getting that calculation right,
01:43:38.280 | I think we're only just starting to put the pieces together.
01:43:43.280 | - There's a few pothead-like questions I wanna ask you.
01:43:46.400 | - Sure.
01:43:47.240 | - So one, does it terrify you
01:43:48.720 | that there's a giant black hole at the center of our galaxy?
01:43:52.460 | - I don't know.
01:43:53.300 | I just want to set up shop near it to fast forward,
01:43:57.480 | you know, meet a future civilization, right?
01:44:02.000 | Like if we have a limited lifetime,
01:44:03.720 | if you can go orbit a black hole and emerge.
01:44:07.800 | - So if you were like,
01:44:08.640 | if there was a special mission
01:44:09.840 | that could take you to a black hole,
01:44:11.080 | would you volunteer to go travel?
01:44:13.120 | - To orbit and obviously not fall into it.
01:44:15.840 | - That's obvious.
01:44:16.720 | So it's obvious to you
01:44:17.560 | that everything's destroyed inside a black hole.
01:44:19.800 | Like all the information that makes up Guillaume
01:44:21.520 | is destroyed.
01:44:22.460 | Maybe on the other side,
01:44:25.000 | Beff Jezos emerges and it's all like,
01:44:27.800 | it's tied together in some deeply meaningful way.
01:44:32.800 | - Yeah, I mean, that's a great question.
01:44:34.520 | We have to answer what black holes are.
01:44:38.680 | Are we punching a hole through space time
01:44:41.280 | and creating a pocket universe?
01:44:42.860 | It's possible, right?
01:44:44.800 | Then that would mean that if we ascend the Kardashev scale
01:44:49.220 | to, you know, beyond Kardashev type three,
01:44:52.760 | we could engineer black holes
01:44:55.000 | with specific hyperparameters
01:44:56.480 | to transmit information to new universes we create.
01:44:59.520 | And so we can have progeny, right?
01:45:03.220 | That are new universes.
01:45:04.480 | And so we, even though our universe may reach a heat death,
01:45:09.480 | we may have a way to have a legacy, right?
01:45:13.700 | So we don't know yet.
01:45:15.940 | We need to ascend the Kardashev scale
01:45:17.860 | to answer these questions, right?
01:45:20.400 | To peer into that regime of higher energy physics.
01:45:25.120 | - And maybe you can speak to the Kardashev scale
01:45:27.320 | for people who don't know.
01:45:28.340 | So one of the sort of meme-like principles
01:45:33.340 | and goals of the EAC movement
01:45:37.600 | is to ascend the Kardashev scale.
01:45:39.280 | What is the Kardashev scale?
01:45:41.360 | And when do we wanna ascend it?
01:45:43.440 | - The Kardashev scale is a measure
01:45:45.880 | of our energy production and consumption.
01:45:48.860 | And really, it's a logarithmic scale.
01:45:53.980 | And Kardashev type one is a milestone
01:45:56.680 | where we are producing the equivalent wattage
01:46:00.840 | to all the energy that is incident on Earth from the Sun.
01:46:04.480 | Kardashev type two would be harnessing all the energy
01:46:07.960 | that is output by the Sun.
01:46:09.760 | And I think type three is like the whole galaxy.
01:46:13.040 | - Galaxy, I think, level, yeah.
01:46:14.960 | - Yeah, and then some people have some crazy type four
01:46:17.560 | and five, but I don't know if I believe in those.
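For a rough sense of the numbers, here is a small sketch using Carl Sagan's common interpolation of the scale, K = (log10 P − 6) / 10 with P in watts; the wattages below are approximate, widely quoted figures rather than anything asserted in the conversation.

```python
# Rough placement of some power levels on the Kardashev scale (Sagan's interpolation).
import math

def kardashev(power_watts):
    return (math.log10(power_watts) - 6) / 10

examples = {
    "humanity today (~2e13 W)": 2e13,
    "solar power incident on Earth (~1.7e17 W)": 1.7e17,
    "total solar output (~3.8e26 W)": 3.8e26,
    "Milky Way luminosity (~1e37 W)": 1e37,
}
for label, p in examples.items():
    print(f"{label}: K ≈ {kardashev(p):.2f}")
```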
01:46:19.640 | But to me, it seems like from the first principles
01:46:25.080 | of thermodynamics that, again, there's this concept
01:46:28.920 | of thermodynamic driven dissipative adaptation
01:46:33.920 | where life evolved on Earth
01:46:38.080 | because we have this sort of energetic drive from the Sun.
01:46:42.760 | We have incident energy and life evolved on Earth
01:46:46.080 | to capture, figure out ways to best capture
01:46:50.000 | that free energy to maintain itself and grow.
01:46:54.160 | And I think that that principle,
01:46:57.480 | it's not special to our Earth-Sun system.
01:47:00.680 | We can extend life well beyond
01:47:03.120 | and we kind of have a responsibility to do so
01:47:06.160 | because that's the process that brought us here.
01:47:08.760 | So, we don't even know what it has in store for us
01:47:12.040 | in the future.
01:47:12.860 | It could be something of beauty
01:47:15.360 | we can't even imagine today, right?
01:47:17.120 | - So, this is probably a good place
01:47:20.440 | to talk a bit about the EAC movement.
01:47:23.440 | In a sub-stack blog post titled "What the Fuck is EAC?"
01:47:28.000 | or actually, "What the F* is EAC?"
01:47:31.000 | you write, "Strategically speaking,
01:47:32.800 | "we need to work towards several overarching
01:47:34.760 | "civilization goals that are all interdependent."
01:47:37.940 | And the four goals are increase the amount of energy
01:47:41.540 | we can harness as a species,
01:47:43.400 | climb the Kardashev gradient.
01:47:46.520 | In the short term, this almost certainly means
01:47:49.080 | nuclear fission.
01:47:51.640 | Increase human flourishing via pro-population growth policies
01:47:54.480 | and pro-economic growth policies.
01:47:56.180 | Create artificial general intelligence,
01:47:59.320 | the single greatest force multiplier in human history.
01:48:02.320 | And finally, develop interplanetary
01:48:04.320 | and interstellar transport
01:48:06.120 | so that humanity can spread beyond the Earth.
01:48:09.000 | Could you build on top of that to maybe say,
01:48:13.000 | what to you is the EAC movement?
01:48:17.360 | What are the goals?
01:48:18.200 | What are the principles?
01:48:20.520 | - The goal is for the human techno-capital memetic machine
01:48:25.520 | to become self-aware
01:48:28.440 | and to hyperstitiously engineer its own growth.
01:48:31.640 | So let's decompress that.
01:48:33.640 | - Define each of those words.
01:48:35.440 | - So you have humans, you have technology,
01:48:38.040 | you have capital, and then you have memes, information.
01:48:41.520 | And all of those systems are coupled with one another.
01:48:46.720 | Humans work at companies,
01:48:48.160 | they acquire and allocate capital.
01:48:50.520 | And humans communicate via memes
01:48:53.600 | and information propagation.
01:48:55.400 | And our goal was to have a sort of viral optimistic movement
01:49:01.320 | that is aware of how the system works.
01:49:06.040 | Fundamentally, it seeks to grow.
01:49:08.640 | And we simply want to lean into the natural tendencies
01:49:13.640 | of the system to adapt for its own growth.
01:49:17.780 | - So in that way, you're right.
01:49:19.900 | The EAC is literally a memetic optimism virus
01:49:23.300 | that is constantly drifting, mutating,
01:49:25.500 | and propagating in a decentralized fashion.
01:49:28.260 | So memetic optimism virus.
01:49:30.780 | So you do want it to be a virus to maximize the spread.
01:49:35.300 | And it's hyperstitious,
01:49:37.640 | therefore the optimism will incentivize its growth.
01:49:43.080 | - We see EAC as a sort of a metaheuristic,
01:49:47.280 | a sort of very thin cultural framework
01:49:51.200 | from which you can have much more opinionated forks.
01:49:55.600 | Fundamentally, we just say that it's good.
01:49:59.800 | What got us here is this adaptation of the whole system,
01:50:04.800 | based on thermodynamics,
01:50:07.520 | and that process is good and we should keep it going.
01:50:11.360 | That is the core thesis.
01:50:12.380 | Everything else is, okay, how do we ensure
01:50:16.760 | that we maintain this malleability and adaptability?
01:50:20.720 | Well, clearly not suppressing variance
01:50:24.160 | and maintaining free speech, freedom of thought,
01:50:28.720 | freedom of information propagation,
01:50:30.740 | and freedom to do AI research is important
01:50:34.540 | for us to converge the fastest
01:50:37.960 | on the space of technologies, ideas, and whatnot
01:50:42.500 | that lead to this growth.
01:50:44.200 | And so ultimately, there's been quite a few forks.
01:50:49.500 | Some are just memes, but some are more serious, right?
01:50:52.060 | Vitalik Buterin recently made a d/acc fork.
01:50:55.780 | He has his own sort of fine tunings of EAC.
01:50:59.140 | - Does anything jump out to memory
01:51:00.980 | of the unique characteristic of that fork from Vitalik?
01:51:05.460 | - I would say that it's trying to find a middle ground
01:51:08.520 | between EAC and sort of EA and AI safety.
01:51:12.640 | To me, having a movement that is opposite
01:51:17.600 | to what was the mainstream narrative
01:51:19.120 | that was taking over Silicon Valley
01:51:20.480 | was important to sort of shift the dynamic range of opinions.
01:51:24.600 | And it's like the balance between centralization
01:51:28.480 | and decentralization.
01:51:29.520 | The real optimum's always somewhere in the middle, right?
01:51:32.840 | But for EAC, we're pushing for entropy, novelty,
01:51:37.840 | disruption, malleability, speed,
01:51:42.040 | rather than being like sort of conservative,
01:51:46.080 | suppressing thought, suppressing speech,
01:51:48.040 | adding constraints, adding too many regulations,
01:51:51.560 | slowing things down.
01:51:52.660 | And so it's kind of,
01:51:53.960 | we're trying to bring balance to the force, right?
01:51:56.840 | Systems.
01:51:57.820 | (laughing)
01:52:00.160 | - Balance to the force of human civilization, yeah.
01:52:02.840 | - It's literally the forces of constraints
01:52:04.600 | versus the entropic force that makes us explore, right?
01:52:09.120 | Systems are optimal when they're at the edge of criticality
01:52:13.680 | between order and chaos, right?
01:52:15.800 | Between constraints, energy minimization, and entropy.
01:52:20.800 | Systems want to equilibrate, balance these two things.
01:52:24.460 | And so I thought that the balance was lacking.
01:52:27.600 | And so we created this movement to bring balance.
01:52:31.680 | - Well, I like how, I like the sort of visual
01:52:35.120 | of the landscape of ideas evolving through forks.
01:52:39.080 | So kind of thinking on the other part of history,
01:52:43.820 | thinking of Marxism as the original repository,
01:52:49.480 | and then Soviet communism as a fork of that,
01:52:52.200 | and then Maoism as a fork of Marxism and communism.
01:52:58.040 | And so those are all forks
01:53:00.560 | that are exploring different ideas.
01:53:02.560 | - Thinking of culture almost like code, right?
01:53:04.960 | Nowadays, I mean, what you prompt the LLM
01:53:09.960 | or what you put in the constitution of an LLM
01:53:12.760 | is basically its cultural framework, what it believes, right?
01:53:16.920 | And you can share it on GitHub nowadays.
01:53:21.440 | So starting trying to take inspiration
01:53:23.940 | from what has worked in the sort of machine of software
01:53:28.440 | to adapt over the space of code,
01:53:31.960 | could we apply that to culture?
01:53:33.620 | And our goal is to not say you should live your life
01:53:37.240 | this way, X, Y, Z, is to set up a process
01:53:41.040 | where people are always searching over subcultures
01:53:44.680 | and competing for mindshare.
01:53:46.920 | And I think creating this malleability of culture
01:53:50.320 | is super important for us to converge onto the cultures
01:53:53.840 | and the heuristics about how to live one's life
01:53:56.640 | that are updated to modern times.
01:53:59.540 | Because there's really been a sort of vacuum
01:54:03.520 | of spirituality and culture.
01:54:06.120 | People don't feel like they belong to any one group.
01:54:08.640 | And there's been parasitic ideologies
01:54:11.220 | that have taken up opportunity
01:54:13.480 | to populate this Petri dish of minds, right?
01:54:18.200 | Elon calls it the mind virus.
01:54:20.600 | We call it the decel mind virus complex,
01:54:24.680 | which is a decelerative mindset that is kind of the overall pattern
01:54:28.920 | between all of them.
01:54:29.760 | There's many variants as well.
01:54:31.280 | And so if there's a sort of viral pessimism,
01:54:36.080 | decelerative movement, we needed to have
01:54:38.200 | not only one movement, but many, many variants.
01:54:42.040 | So it's very hard to pinpoint and stop.
01:54:44.160 | - But the overarching thing is nevertheless
01:54:47.360 | a kind of memetic optimism pandemic.
01:54:51.760 | So, I mean, okay, let me ask you,
01:54:57.120 | do you think EAC to some degree is a cult?
01:54:59.860 | - Define cult.
01:55:02.100 | - I think a lot of human progress is made
01:55:06.560 | when you have independent thought.
01:55:09.560 | So you have individuals that are able to think freely
01:55:12.500 | and very powerful memetic systems
01:55:17.500 | can kind of lead to group think.
01:55:21.940 | There's something in human nature
01:55:23.140 | that leads to like mass hypnosis, mass hysteria,
01:55:26.300 | where we start to think alike
01:55:28.300 | whenever there's a sexy idea that captures our minds.
01:55:32.180 | And so it's actually hard to break us apart,
01:55:34.740 | pull us apart, diversify thought.
01:55:37.940 | So to that degree, to which degree
01:55:40.580 | is everybody kind of chanting EAC, EAC,
01:55:43.660 | like the sheep in Animal Farm?
01:55:46.500 | - Well, first of all, it's fun, it's rebellious, right?
01:55:49.300 | Like many, I think we lean into,
01:55:54.300 | there's this concept of sort of meta-irony, right?
01:55:58.980 | Of sort of being on the boundary of like,
01:56:01.940 | we're not sure if they're serious or not,
01:56:03.460 | and it's much more playful and much more fun, right?
01:56:06.540 | Like, for example, we talk about thermodynamics
01:56:09.220 | being our God, right?
01:56:11.080 | And sometimes we do cult-like things,
01:56:14.460 | but there's no like ceremony and robes and whatnot.
01:56:18.340 | - Not yet.
01:56:19.180 | - Not yet.
01:56:20.020 | But ultimately, yeah, I mean, I totally agree
01:56:23.580 | that it seems to me that humans wanna feel
01:56:28.020 | like they're part of a group.
01:56:29.060 | So they naturally try to agree with their neighbors
01:56:33.460 | and find common ground.
01:56:35.500 | And that leads to sort of mode collapse
01:56:38.300 | in the space of ideas, right?
01:56:40.180 | We used to have sort of one cultural island
01:56:44.740 | that was allowed.
01:56:45.580 | It was a typical subspace of thought,
01:56:47.140 | and anything that was diverting
01:56:49.140 | from that subspace of thought was suppressed
01:56:51.140 | or even canceled, right?
01:56:52.820 | Now we've created a new mode,
01:56:54.820 | but the whole point is that we're not trying to have
01:56:57.340 | a very restricted space of thought.
01:56:59.500 | There's not just one way to think about EAC
01:57:01.700 | and its many forks.
01:57:03.020 | And the point is that there are many forks,
01:57:05.120 | and there can be many clusters and many islands.
01:57:07.220 | And I shouldn't be in control of it in any way.
01:57:11.340 | I mean, there's no formal org whatsoever.
01:57:16.140 | I just put out tweets and certain blog posts,
01:57:20.580 | and people are free to defect and fork
01:57:24.100 | if there's an aspect they don't like.
01:57:26.140 | And so that makes it so that there should be
01:57:29.580 | a sort of deterritorialization in the space of ideas
01:57:34.180 | so that we don't end up in one cluster
01:57:36.900 | that's very cult-like.
01:57:38.900 | And so cults, usually, they don't allow people
01:57:43.020 | to defect or start competing forks,
01:57:45.100 | whereas we encourage it, right?
01:57:47.700 | - Do you think just the humor,
01:57:49.560 | the pros and cons of humor and meme,
01:57:53.080 | in some sense, meme,
01:57:56.020 | there's like a wisdom to memes.
01:58:00.300 | What is it, "The Magic Theater"?
01:58:04.140 | What book is that from?
01:58:05.300 | Hermann Hesse's Steppenwolf, I think.
01:58:08.980 | But there's a kind of embracing of the absurdity
01:58:13.980 | that seems to get to the truth of things,
01:58:17.860 | but at the same time, it can also decrease the quality
01:58:21.100 | and the rigor of the discourse.
01:58:23.700 | Do you feel the tension of that?
01:58:25.340 | - Yeah.
01:58:26.620 | So initially, I think, what allowed us to grow
01:58:30.220 | under the radar was because it was camouflaged
01:58:33.540 | as sort of meta-ironic, right?
01:58:35.900 | We would sneak in deep truths within a package of humor
01:58:40.900 | and memes and what are called shitposts, right?
01:58:45.520 | And I think that was purposefully a sort of camouflage
01:58:51.780 | against those that seek status and do not want to,
01:58:57.020 | it's very hard to argue with a cartoon frog
01:59:02.460 | or a cartoon of an intergalactic Jeff Bezos
01:59:07.460 | and take yourself seriously.
01:59:10.820 | And so, that allowed us to grow pretty rapidly
01:59:15.300 | in the early days.
01:59:16.340 | But of course, essentially, people get steered,
01:59:21.340 | their notion of the truth comes from the data they see,
01:59:27.140 | from the information they're fed.
01:59:29.180 | And the information people are fed
01:59:31.780 | is determined by algorithms, right?
01:59:34.860 | And really what we've been doing is sort of engineering
01:59:39.860 | what we call high memetic fitness packets of information
01:59:44.740 | so that they can spread effectively and carry a message,
01:59:47.580 | right?
01:59:48.420 | So it's kind of a vector to spread the message.
01:59:52.660 | And yes, we've been using sort of techniques
01:59:56.140 | that are optimal for today's algorithmically amplified
02:00:00.300 | information landscapes.
02:00:02.540 | But I think we're reaching the point of scale
02:00:06.500 | where we can have serious debates and serious conversations.
02:00:10.260 | And that's why we're considering doing a bunch of debates
02:00:15.260 | and having more serious long-form discussions.
02:00:18.060 | 'Cause I don't think that the timeline is optimal
02:00:21.620 | for sort of very serious, thoughtful discussions.
02:00:24.860 | You get rewarded for sort of polarization, right?
02:00:29.460 | And so, even though we started a movement
02:00:33.020 | that is literally trying to polarize the tech ecosystem,
02:00:37.660 | at the end of the day, it's so that we can have
02:00:39.180 | a conversation and find an optimum together.
02:00:42.680 | - I mean, that's kind of what I try to do with this podcast,
02:00:45.220 | given the landscape of things,
02:00:47.060 | to still have long-form conversations.
02:00:49.220 | But there is a degree to which absurdity is fully embraced.
02:00:54.000 | In fact, this very conversation is
02:00:59.100 | multi-level absurd.
02:01:01.340 | So, first of all, I should say that I just very recently
02:01:04.820 | had a conversation with Jeff Bezos.
02:01:07.540 | And I would love to hear your,
02:01:12.240 | Beff Jezos, opinions of Jeff Bezos.
02:01:18.140 | Speaking of intergalactic Jeff Bezos,
02:01:20.820 | what do you think of that particular individual
02:01:23.580 | whom your name has inspired?
02:01:25.640 | - Yeah, I mean, I think Jeff is really great.
02:01:29.420 | I mean, he's built one of the most epic companies
02:01:32.460 | of all time.
02:01:33.300 | He's leveraged the techno capital machine
02:01:34.900 | and techno capital acceleration
02:01:36.620 | to give us what we wanted, right?
02:01:40.580 | We want quick delivery, very convenient,
02:01:44.700 | at home, low prices, right?
02:01:46.620 | He understood how the machine worked
02:01:49.620 | and how to harness it, right?
02:01:51.120 | Like running the company,
02:01:52.660 | not trying to take profits too early,
02:01:55.480 | putting it back,
02:01:56.620 | letting the system compound and keep improving.
02:02:00.600 | And arguably, I think Amazon's invested
02:02:03.060 | some of the most amount of capital in robotics out there.
02:02:06.440 | And certainly, with the birth of AWS,
02:02:10.260 | kind of enabled the sort of tech boom we've seen today
02:02:14.440 | that has paid the salaries of,
02:02:16.700 | I guess, myself and all of our friends to some extent.
02:02:20.940 | And so, I think we can all be grateful to Jeff
02:02:24.780 | and he's one of the great entrepreneurs out there,
02:02:28.440 | one of the best of all time, unarguably.
02:02:30.840 | - And of course, the work at Blue Origin,
02:02:34.620 | similar to the work at SpaceX,
02:02:36.300 | is trying to make humans a multi-planetary species,
02:02:39.260 | which seems almost like a bigger thing
02:02:42.900 | than the capitalist machine,
02:02:45.180 | or it's a capitalist machine
02:02:46.020 | at a different time scale, perhaps.
02:02:47.860 | - Yeah, I think that companies,
02:02:52.340 | they tend to optimize quarter over quarter,
02:02:56.340 | maybe a few years out,
02:02:57.900 | but individuals that wanna leave a legacy
02:03:00.840 | can think on a multi-decadal or multi-century time scale.
02:03:05.220 | And so, the fact that some individuals
02:03:08.060 | are such good capital allocators,
02:03:10.420 | that they unlock the ability to allocate capitals
02:03:13.300 | to goals that take us much further
02:03:16.300 | or are much further looking.
02:03:17.980 | Elon's doing this with SpaceX,
02:03:20.860 | putting all this capital towards getting us to Mars.
02:03:23.460 | Jeff is trying to build Blue Origin
02:03:27.020 | and I think he wants to build O'Neill cylinders
02:03:29.060 | and get industry off planet,
02:03:31.020 | which I think is brilliant.
02:03:32.480 | I think, just overall, I'm for billionaires.
02:03:38.500 | I know this is a controversial statement sometimes,
02:03:40.520 | but I think that in a sense,
02:03:43.100 | it's kind of a proof of stake voting, right?
02:03:46.300 | Like, if you've allocated capital efficiently,
02:03:50.940 | you unlock more capital to allocate
02:03:54.640 | just because clearly,
02:03:56.500 | you know how to allocate capital more efficiently,
02:03:59.540 | which is in contrast to politicians that get elected
02:04:03.500 | because they speak the best on TV, right?
02:04:05.860 | Not because they have a proven track record
02:04:08.020 | of allocating taxpayer capital most efficiently.
02:04:11.660 | And so, that's why I'm for capitalism
02:04:15.860 | over, say, giving all our money to the government
02:04:18.460 | and letting them figure out how to allocate it.
02:04:20.540 | So, yeah.
02:04:21.860 | - Why do you think it's a viral
02:04:24.900 | and it's a popular meme to criticize billionaires,
02:04:28.820 | since you mentioned billionaires?
02:04:30.580 | Why do you think there's quite a widespread criticism
02:04:35.380 | of people with wealth,
02:04:38.140 | especially those in the public eye,
02:04:39.500 | like Jeff and Elon and Mark Zuckerberg
02:04:41.740 | and who else, Bill Gates?
02:04:44.660 | - Yeah, I think a lot of people would,
02:04:47.580 | instead of trying to understand
02:04:48.940 | how the techno capital machine works
02:04:51.620 | and realizing they have much more agency than they think,
02:04:54.740 | they'd rather have this sort of victim mindset.
02:04:57.980 | I'm just subjected to this machine.
02:05:00.320 | It is oppressing me.
02:05:01.680 | And the successful players clearly must be evil
02:05:07.820 | because they've been successful at this game
02:05:09.460 | that I'm not successful at.
02:05:10.980 | But I've managed to get some people
02:05:14.740 | that were in that mindset
02:05:15.720 | and make them realize how the techno capital machine works
02:05:19.060 | and how you can harness it for your own good
02:05:22.980 | and for the good of others.
02:05:24.180 | And by creating value,
02:05:25.960 | you capture some of the value you create for the world.
02:05:27.740 | And that sort of positive sum mindset shift is so potent.
02:05:31.520 | And really, that's what we're trying to do
02:05:34.220 | by scaling EAC is sort of unlocking
02:05:37.020 | that higher level of agency.
02:05:39.100 | Actually, you're far more in control
02:05:41.180 | of the future than you think.
02:05:42.580 | You have agency to change the world.
02:05:44.480 | Go out and do it.
02:05:45.740 | Here's permission.
02:05:46.940 | - Each individual has agency.
02:05:49.580 | The motto, keep building, is often heard.
02:05:52.540 | What does that mean to you?
02:05:54.040 | And what does it have to do with Diet Coke?
02:05:56.140 | (laughing)
02:05:57.940 | By the way, thank you so much for the Red Bull.
02:05:59.780 | It's working pretty well.
02:06:01.620 | I'm feeling pretty good.
02:06:03.260 | - Awesome.
02:06:05.900 | Well, so building technologies and building,
02:06:09.260 | it doesn't have to be technologies.
02:06:10.460 | Just building in general means having agency,
02:06:14.400 | trying to change the world by creating, let's say,
02:06:18.740 | a company which is a self-sustaining organism
02:06:21.900 | that accomplishes a function
02:06:25.320 | in the broader techno capital machine.
02:06:27.580 | To us, that's the way to achieve change in the world
02:06:30.820 | that you'd like to see,
02:06:32.060 | rather than, say, pressuring politicians
02:06:35.240 | or creating non-profits that,
02:06:37.060 | non-profits, once they run out of money,
02:06:39.820 | their function can no longer be accomplished.
02:06:42.100 | You're kind of deforming the market artificially
02:06:45.500 | compared to sort of subverting or coercing the market
02:06:49.740 | or dancing with the market to convince it
02:06:53.460 | that actually this function is important,
02:06:55.620 | adds value, and here it is, right?
02:06:57.740 | And so I think this is sort of the way
02:07:00.460 | between the sort of de-growth ESG approach
02:07:03.980 | versus, say, Elon, right?
02:07:05.920 | The de-growth approach is like,
02:07:07.120 | we're gonna manage our way out of a climate crisis,
02:07:10.360 | and Elon is like, I'm gonna build a company
02:07:12.780 | that is self-sustaining, profitable, and growing,
02:07:16.060 | and we're gonna innovate our way out of this dilemma, right?
02:07:19.520 | And we're trying to get people to do the latter
02:07:23.280 | rather than the former, at all scales.
02:07:25.240 | - Elon is an interesting case.
02:07:28.200 | So you are a proponent, you celebrate Elon,
02:07:32.160 | but he's also somebody who has for a long time
02:07:35.220 | warned about the dangers, the potential dangers,
02:07:39.180 | existential risks of artificial intelligence.
02:07:41.580 | How do you square the two?
02:07:42.860 | Is that a contradiction to you?
02:07:45.020 | - It is somewhat because he's very much against regulation
02:07:49.460 | in many aspects, but for AI, he's definitely
02:07:53.540 | a proponent of regulations.
02:07:57.340 | I think overall, he saw the dangers of, say,
02:08:02.140 | OpenAI cornering the market,
02:08:04.500 | and then getting to have the monopoly
02:08:07.500 | over the cultural priors that you can embed in these LLMs
02:08:12.500 | that then, as LLMs now become the source of truth
02:08:17.820 | for people, then you can shape the culture of the people,
02:08:21.020 | and so you can control people by controlling LLMs.
02:08:23.940 | And he saw that, just like it was the case for social media,
02:08:28.580 | if you shape the function of information propagation,
02:08:31.760 | you can shape people's opinions.
02:08:34.000 | He sought to make a competitor.
02:08:36.040 | So at least, I think we're very aligned there
02:08:38.440 | that the way to a good future is to maintain
02:08:41.840 | sort of adversarial equilibria
02:08:43.960 | between the various AI players.
02:08:45.820 | I'd love to talk to him to understand sort of his thinking
02:08:49.880 | about how to make, how to advance AI going forwards.
02:08:54.880 | I mean, he's also hedging his bets, I would say,
02:08:57.780 | with Neuralink, right?
02:08:59.480 | I think if he can't stop the progress of AI,
02:09:02.920 | he's building the technology to merge.
02:09:04.720 | So look at the actions, not just the words, but--
02:09:09.720 | - Well, I mean, there's some degree where being concerned,
02:09:14.440 | maybe using human psychology,
02:09:17.120 | being concerned about threats all around us is a motivator.
02:09:20.720 | Like, it's an encouraging thing.
02:09:22.400 | I operate much better when there's a deadline,
02:09:24.640 | the fear of the deadline.
02:09:26.600 | Like, and I, for myself, create artificial things.
02:09:29.080 | Like, I wanna create in myself this kind of anxiety
02:09:31.780 | as if something really horrible will happen
02:09:33.980 | if I miss the deadline.
02:09:35.200 | I think there's some degree of that here
02:09:38.700 | because creating AI that's aligned with humans
02:09:42.280 | has a lot of potential benefits.
02:09:44.260 | And so a different way to reframe that is,
02:09:47.200 | if you don't, we're all gonna die.
02:09:49.380 | It just seems to be a very powerful psychological formulation
02:09:55.520 | of the goal of creating human-aligned AI.
02:09:59.300 | - I think that anxiety is good.
02:10:00.740 | I think, like I said, I want the free market
02:10:03.340 | to create aligned AIs that are reliable.
02:10:07.200 | And I think that's what he's trying to do with XAI.
02:10:10.660 | So I'm all for it.
02:10:12.780 | What I am against is sort of stopping,
02:10:15.620 | let's say, the open source ecosystem from thriving, right,
02:10:21.820 | by, let's say, in the executive order,
02:10:25.100 | claiming that open source LLMs are dual-use technologies
02:10:28.840 | and should be government-controlled.
02:10:30.920 | Then everybody needs to register their GPU
02:10:34.120 | and their big matrices with the government.
02:10:36.760 | And I think that extra friction will dissuade a lot
02:10:41.080 | of hackers from contributing,
02:10:42.560 | hackers that could later become the researchers
02:10:45.600 | that make key discoveries that push us forward, right,
02:10:50.200 | including discoveries for AI safety.
02:10:52.720 | And so I think I just wanna maintain ubiquity
02:10:55.780 | of opportunity to contribute to AI
02:10:57.820 | and to own a piece of the future, right?
02:11:00.500 | It can't just be legislated behind some wall
02:11:04.620 | where only a few players get to play the game.
02:11:07.780 | - I mean, so the EAC movement is often sort of caricatured
02:11:11.700 | to mean sort of progress and innovation at all costs.
02:11:16.700 | Doesn't matter how unsafe it is.
02:11:20.540 | Doesn't matter if it caused a lot of damage.
02:11:22.860 | You just build cool shit as fast as possible.
02:11:26.320 | Stay up all night with a Diet Coke, whatever it takes.
02:11:31.240 | I think, I guess, I don't know if there's a question
02:11:34.440 | in there, but how important to you
02:11:37.440 | and what you've seen the different formulations
02:11:39.560 | of EAC is safety, is AI safety?
02:11:42.400 | - I think, again, I think if there was no one working on it,
02:11:48.120 | I think I would be a proponent of it.
02:11:50.840 | I think, again, our goal is to sort of bring balance
02:11:54.020 | and obviously a sense of urgency is a useful tool, right,
02:11:59.020 | to make progress.
02:12:00.940 | It hacks our dopaminergic systems
02:12:03.900 | and gives us energy to work late into the night.
02:12:08.260 | I think also having a higher purpose
02:12:10.980 | you're contributing to, right?
02:12:12.520 | At the end of the day, it's like, what am I contributing to?
02:12:14.540 | I'm contributing to the growth of this beautiful machine
02:12:17.700 | so that we can seek to the stars.
02:12:20.060 | That's really inspiring.
02:12:20.980 | That's also a sort of neuro hack.
02:12:25.400 | - So you're saying AI safety is important to you,
02:12:28.100 | but right now, in the landscape of ideas you see,
02:12:32.300 | AI safety as a topic is used more often
02:12:35.660 | to gain centralized control.
02:12:38.180 | So in that sense, you're resisting it
02:12:40.340 | as a proxy for gaining centralized control.
02:12:43.540 | - Yeah, I just think we have to be careful
02:12:47.860 | because safety is just the perfect cover
02:12:52.860 | for sort of centralization of power
02:12:57.140 | and covering up eventually corruption.
02:12:59.900 | I'm not saying it's corrupted now,
02:13:01.060 | but it could be down the line.
02:13:04.280 | And really, if you let the argument run,
02:13:08.020 | there's no amount of sort of centralization of control
02:13:12.180 | that will be enough to ensure your safety.
02:13:14.780 | There's always more nines of P(safety)
02:13:18.540 | that you can gain, you know, 99.9999% safe.
02:13:22.020 | Maybe you want another nine.
02:13:23.300 | Oh, please give us full access to everything you do,
02:13:26.580 | full surveillance.
02:13:27.980 | And frankly, those that are proponents of AI safety
02:13:32.100 | have proposed like having a global panopticon, right?
02:13:36.740 | Where you have centralized perception
02:13:39.500 | of everything going on.
02:13:41.260 | And to me, that just opens up the door wide open
02:13:44.020 | for a sort of Big Brother 1984-like scenario,
02:13:47.020 | and that's not a future I wanna live in.
02:13:49.580 | - 'Cause we know, we have some examples throughout history
02:13:51.900 | when that did not lead to a good outcome.
02:13:54.500 | - Right.
02:13:55.960 | - You mentioned you founded a company, Extropic,
02:13:58.940 | that recently announced a $14.1 million seed round.
02:14:04.060 | What's the goal of the company?
02:14:05.660 | You're talking about a lot of interesting physics things.
02:14:08.700 | So what are you up to over there
02:14:10.900 | that you can talk about?
02:14:12.900 | - Yeah, I mean, you know,
02:14:14.100 | originally we weren't gonna announce last week,
02:14:17.460 | but I think with the doxing and disclosure,
02:14:20.260 | we got our hand forced.
02:14:21.740 | So we had to disclose roughly what we were doing,
02:14:24.820 | but really Extropic was born from my dissatisfaction
02:14:29.820 | and that of my colleagues
02:14:31.460 | with the quantum computing roadmap, right?
02:14:35.580 | Quantum computing was sort of the first path
02:14:38.900 | to physics-based computing
02:14:42.060 | that was trying to commercially scale.
02:14:45.300 | And I was working on physics-based AI
02:14:47.360 | that runs on these physics-based computers.
02:14:49.980 | But ultimately our greatest enemy was this noise,
02:14:52.940 | this pervasive problem of noise that,
02:14:55.340 | you know, as I mentioned,
02:14:57.060 | you have to constantly pump out the noise
02:14:59.820 | out of the system to maintain this pristine environment
02:15:03.620 | where quantum mechanics can take effect.
02:15:06.160 | And that constraint was just too much.
02:15:08.180 | It's too costly to do that.
02:15:09.800 | And so we were wondering, right,
02:15:13.160 | as generative AI is sort of eating the world,
02:15:17.900 | more and more of the world's computational workloads
02:15:21.300 | are focused on generative AI.
02:15:23.380 | How could we use physics
02:15:24.900 | to engineer the ultimate physical substrate
02:15:28.840 | for generative AI, right?
02:15:30.780 | From first principles of physics,
02:15:33.460 | of information theory, of computation,
02:15:36.780 | and ultimately of thermodynamics, right?
02:15:39.940 | And so what we're seeking to build
02:15:42.200 | is a physics-based computing system
02:15:45.220 | and physics-based AI algorithms
02:15:47.180 | that are inspired by out-of-equilibrium thermodynamics
02:15:53.220 | or harness it directly
02:15:55.780 | to do machine learning as a physical process.
02:15:59.780 | - So what does that mean,
02:16:03.980 | machine learning as a physical process?
02:16:05.420 | Is that hardware, is it software, is it both?
02:16:07.580 | Is it trying to do the full stack
02:16:09.220 | in some kind of unique way?
02:16:10.620 | - Yes, it is full stack.
02:16:12.900 | And so we're folks that have built
02:16:16.060 | differentiable programming
02:16:19.980 | into the quantum computing ecosystem
02:16:21.840 | with TensorFlow Quantum.
02:16:23.500 | One of my co-founders of TensorFlow Quantum
02:16:25.300 | is the CTO, Trevor McCourt.
02:16:27.020 | We have some of the best quantum computer architects,
02:16:31.680 | those that have designed IBM's and AWS's systems.
02:16:35.560 | They've left quantum computing
02:16:37.800 | to help us build what we call, actually,
02:16:41.240 | a thermodynamic computer.
02:16:43.600 | - A thermodynamic computer.
02:16:44.960 | Well, actually, let's linger on TensorFlow Quantum.
02:16:47.680 | What lessons have you learned from TensorFlow Quantum?
02:16:51.760 | Maybe you can speak to what it takes
02:16:55.620 | to create, essentially, what, like a software API
02:16:59.160 | to a quantum computer?
02:17:01.440 | - Right, I mean, that was a challenge to build,
02:17:05.120 | to invent, to build,
02:17:06.320 | and then to get to run on the real devices.
02:17:09.200 | - Can you actually speak to what it is?
02:17:11.000 | - Yeah, so TensorFlow Quantum
02:17:14.400 | was an attempt at,
02:17:16.640 | well, I mean, I guess we succeeded
02:17:18.600 | at combining deep learning
02:17:20.720 | or differentiable classical programming
02:17:24.160 | with quantum computing
02:17:26.720 | and turn quantum computing
02:17:28.800 | into, or have types of programs
02:17:31.960 | that are differentiable in quantum computing.
02:17:34.640 | And, you know, Andrej Karpathy
02:17:37.840 | calls differentiable programming software 2.0, right?
02:17:41.600 | It's like gradient descent is a better programmer than you.
02:17:44.960 | And the idea was that
02:17:46.840 | in the early days of quantum computing,
02:17:48.440 | you can only run short quantum programs.
02:17:51.160 | And so, which quantum programs should you run?
02:17:54.480 | Well, just let gradient descent find those programs instead.
02:17:58.200 | And so, we built sort of the first infrastructure
02:18:01.120 | to not only run differentiable quantum programs,
02:18:05.720 | but combine them as part of broader deep learning graphs,
02:18:10.720 | incorporating deep neural networks,
02:18:15.080 | you know, the ones you know and love,
02:18:16.840 | with what are called quantum neural networks.
02:18:19.660 | And ultimately, it was a very cross-disciplinary effort.
02:18:26.220 | We had to invent all sorts of ways to differentiate,
02:18:29.360 | to back propagate through the graph, the hybrid graph.
02:18:32.680 | But ultimately, it taught me that
02:18:35.600 | the way to program matter and to program physics is
02:18:39.400 | by differentiating through control parameters.
02:18:43.600 | If you have parameters that affect the physics
02:18:46.040 | of the system,
02:18:48.640 | and you can evaluate some loss function,
02:18:50.840 | you can optimize the system to accomplish a task,
02:18:55.080 | whatever that task may be.
02:18:56.980 | And that's a very sort of universal meta framework
02:19:01.980 | for how to program physics-based computers.
02:19:05.500 | - To try to parametrize everything,
02:19:07.900 | make those parameters differentiable, and then optimize.
02:19:12.400 | - Yes. - Okay.
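As a minimal sketch of that pattern, assume a toy parameterized physical process with a single control parameter and a measurable scalar readout. This is not TensorFlow Quantum's actual API; the readout function, target value, and finite-difference gradient below are illustrative stand-ins:

```python
import numpy as np

# Toy stand-in for a physical process: one control parameter theta steers
# the system, and we can only observe a scalar readout of the final state.
def physical_readout(theta: float) -> float:
    return np.cos(theta)

def loss(theta: float, target: float = -1.0) -> float:
    # Penalize distance between the observed readout and the desired value.
    return (physical_readout(theta) - target) ** 2

def grad_loss(theta: float, eps: float = 1e-4) -> float:
    # Finite-difference gradient: all we need is the ability to query the
    # system at slightly shifted control settings.
    return (loss(theta + eps) - loss(theta - eps)) / (2 * eps)

theta = 0.3  # arbitrary initial control setting
for _ in range(500):
    theta -= 0.1 * grad_loss(theta)  # plain gradient descent on the control

print(f"optimized theta ~ {theta:.3f}, readout ~ {physical_readout(theta):.3f}")
```

Swap the toy readout for a differentiable simulation or a hardware-in-the-loop measurement, and the same loop becomes the "let gradient descent find the program" idea described above.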
02:19:13.940 | So, is there some more practical engineering lessons
02:19:17.580 | from TensorFlow Quantum, just organizationally too,
02:19:22.300 | like the humans involved, and how to get to a product,
02:19:25.440 | how to create good documentation, how to have,
02:19:29.120 | I don't know, all of these little subtle things
02:19:31.440 | that people might not think about.
02:19:34.240 | - I think like working across disciplinary boundaries
02:19:39.240 | is always a challenge, and you have to be extremely patient
02:19:42.600 | in teaching one another, right?
02:19:44.320 | I learned a lot of software engineering through the process.
02:19:47.720 | My colleagues learned a lot of quantum physics,
02:19:49.940 | and some learned machine learning
02:19:52.680 | through the process of building this system.
02:19:56.380 | And I think if you get some smart people
02:19:59.880 | that are passionate and trust each other in a room,
02:20:02.880 | and you have a small team, and you teach each other
02:20:06.320 | your specialties, suddenly you're kind of forming
02:20:08.880 | this sort of model soup of expertise,
02:20:12.420 | and something special comes out of that, right?
02:20:15.040 | It's like combining genes, but for your knowledge bases.
02:20:18.720 | And sometimes special products come out of that.
02:20:21.680 | And so I think like even though it's very high friction
02:20:24.800 | initially to work in an interdisciplinary team,
02:20:28.400 | I think the product at the end of the day is worth it.
02:20:31.200 | And so learned a lot trying to bridge the gap there,
02:20:34.380 | and I mean, it's still a challenge to this day.
02:20:37.120 | We hire folks that have an AI background,
02:20:40.600 | folks that have a pure physics background,
02:20:43.120 | and somehow we have to make them talk to one another, right?
02:20:47.040 | - Is there a magic, is there some science and art
02:20:50.220 | to the hiring process to building a team
02:20:53.320 | that can create magic together?
02:20:55.440 | - Yeah, it's really hard to pinpoint that je ne sais quoi.
02:21:01.820 | - I didn't know you speak French, that's very nice.
02:21:05.600 | (laughing)
02:21:07.320 | - Yeah, I'm actually French-Canadian, so.
02:21:09.600 | - Oh, you are legitimately French-Canadian.
02:21:11.680 | I thought you were just doing that for the cred.
02:21:15.360 | - No, no, I'm truly French-Canadian from Montreal.
02:21:18.440 | But yeah, essentially we look for people
02:21:23.840 | with very high fluid intelligence
02:21:26.200 | that aren't over-specialized,
02:21:27.840 | because they're gonna have to get out of their comfort zone.
02:21:29.840 | They're gonna have to incorporate concepts
02:21:32.360 | that they've never seen before,
02:21:34.120 | and very quickly get comfortable with them, right?
02:21:36.720 | Or learn to work in a team.
02:21:38.200 | And so that's sort of what we look for when we hire.
02:21:42.120 | We can't hire people that are just optimizing
02:21:46.640 | this subsystem for the past three or four years.
02:21:50.100 | We need really general, sort of broader
02:21:53.760 | intelligence and specialty.
02:21:55.800 | And people that are open-minded, really,
02:21:59.040 | 'cause if you're pioneering a new approach from scratch,
02:22:02.320 | there is no textbook, there's no reference, it's just us.
02:22:06.320 | And people that are hungry to learn.
02:22:08.800 | So we have to teach each other,
02:22:10.160 | we have to learn the literature,
02:22:11.740 | we have to share knowledge bases, collaborate,
02:22:14.640 | in order to push the boundary of knowledge
02:22:16.840 | further together, right?
02:22:19.120 | And so people that are used to just getting prescribed
02:22:23.040 | what to do at this stage,
02:22:26.120 | when you're at the pioneering stage,
02:22:28.320 | that's not necessarily who you want to hire.
02:22:31.560 | - So you mentioned with Extropic,
02:22:33.000 | you're trying to build the physical substrate
02:22:34.660 | for generative AI.
02:22:37.680 | What's the difference between that and the AGI AI itself?
02:22:42.540 | So is it possible that in the halls of your company,
02:22:47.000 | AGI will be created?
02:22:48.700 | Or will AGI just be using this as a substrate?
02:22:51.860 | - I think our goal is to both run human-like AI,
02:22:56.860 | or anthropomorphic AI.
02:22:58.260 | - Sorry for the use of the term AGI.
02:23:00.540 | I know it's triggering for you.
02:23:01.980 | - We think that the future is actually physics-based AI,
02:23:06.860 | combined with anthropomorphic AI.
02:23:10.660 | So you can imagine I have a sort of world modeling engine
02:23:15.500 | through physics-based AI.
02:23:17.100 | Physics-based AI is better at representing the world
02:23:19.440 | at all scales, 'cause it can be quantum mechanical,
02:23:22.060 | thermodynamic, deterministic,
02:23:24.700 | hybrid representations of the world,
02:23:26.740 | just like our world at different scales
02:23:29.600 | has different regimes of physics.
02:23:31.700 | If you inspire yourself from that,
02:23:33.960 | in the ways you learn representations of nature,
02:23:35.780 | you can have much more accurate representations of nature.
02:23:38.180 | So you can have very accurate world models at all scales.
02:23:42.360 | And so you have the world modeling engine,
02:23:45.700 | and then you have the sort of anthropomorphic AI
02:23:48.700 | that is human-like.
02:23:50.100 | So you can have the science,
02:23:51.540 | the playground to test your ideas,
02:23:54.940 | and you can have the synthetic scientist.
02:23:57.060 | And to us, that joint system of a physics-based AI
02:24:00.380 | and an anthropomorphic AI is the closest thing
02:24:03.900 | to a fully general artificially intelligent system.
02:24:07.620 | - So you can get closer to truth by grounding
02:24:10.220 | of the AI to physics,
02:24:13.900 | but you can also still have an anthropomorphic interface
02:24:17.220 | to us humans that like to talk to other humans,
02:24:19.860 | or human-like systems.
02:24:21.540 | So on that topic, what do you,
02:24:24.020 | I suppose that is one of the big limitations
02:24:28.780 | of current large language models to you
02:24:30.860 | is that they're not, they're good bullshitters.
02:24:34.980 | They're not really grounded to truth necessarily.
02:24:37.740 | Would that be fair to say?
02:24:40.700 | - Yeah, no.
02:24:42.220 | You wouldn't try to extrapolate the stock market
02:24:45.660 | with an LLM trained on text from the internet, right?
02:24:49.060 | It's not gonna be a very accurate model.
02:24:50.620 | It's not gonna model its priors or its uncertainties
02:24:53.500 | about the world very accurately, right?
02:24:55.820 | So you need a different type of AI
02:24:58.660 | to complement sort of this text extrapolation AI, yeah.
02:25:03.660 | - You mentioned singularity earlier.
02:25:07.460 | How far away are we from a singularity?
02:25:09.900 | - I don't know if I believe in a finite time singularity
02:25:12.940 | as a single point in time.
02:25:14.220 | I think it's gonna be asymptotic
02:25:16.740 | and sort of a diagonal sort of asymptote.
02:25:20.300 | Like, you know, we have the light cone,
02:25:23.420 | we have the limits of physics restricting our ability
02:25:27.340 | to grow so obviously can't fully diverge on a finite time.
02:25:31.820 | I think my priors are that, you know,
02:25:36.700 | I think a lot of people on the other side of the aisle
02:25:40.780 | think that once we reach human level AI,
02:25:44.660 | there's gonna be an inflection point
02:25:46.180 | and a sudden like foom, like suddenly AI is gonna grok
02:25:50.380 | how to, you know, manipulate matter at the nanoscale
02:25:53.380 | and assemble nanobots.
02:25:55.100 | And having worked, you know, for nearly a decade
02:25:59.180 | in applying AI to engineer matter,
02:26:01.260 | it's much harder than they think.
02:26:03.020 | And in reality, you need a lot of samples
02:26:04.740 | from either a simulation of nature
02:26:06.940 | that's very accurate and costly or nature itself.
02:26:10.100 | And that keeps your ability
02:26:12.180 | to control the world around us in check.
02:26:15.620 | There's a sort of minimal cost computationally
02:26:19.820 | and thermodynamically to acquiring information
02:26:22.460 | about the world in order to be able to predict
02:26:24.300 | and control it and that keeps things in check.
02:26:27.460 | - It's funny you mentioned the other side of the aisle.
02:26:30.020 | So in the poll I posted about P-Doom yesterday,
02:26:33.700 | what's the probability of doom?
02:26:35.540 | There seems to be a nice like division
02:26:37.900 | between people think it's very likely and very unlikely.
02:26:42.260 | I wonder if in the future there'll be
02:26:44.900 | the actual Republicans versus Democrats division,
02:26:47.940 | blue versus red.
02:26:49.380 | Is it the AI doomers versus the EAC-ers.
02:26:53.300 | - Yeah.
02:26:54.540 | So this movement, you know,
02:26:56.060 | is not right wing or left wing fundamentally.
02:26:58.620 | It's more like up versus down in terms of the scale.
02:27:01.780 | - Which one's the up, okay.
02:27:02.620 | - Civilization, right?
02:27:03.740 | - All right.
02:27:05.220 | - But it seems to be like there is a sort of case
02:27:09.780 | of alignment of the existing political parties
02:27:12.220 | where those that are for more centralization of power
02:27:17.500 | control and more regulations are aligning with sort of,
02:27:22.060 | aligning themselves with the doomers
02:27:23.940 | because that sort of instilling fear in people
02:27:28.620 | is a great way for them to give up more control
02:27:31.220 | and give the government more power.
02:27:33.020 | But fundamentally, we're not left versus right.
02:27:36.260 | I think there's, we've done polls of people's alignment
02:27:40.300 | with EAC, I think it's pretty balanced.
02:27:42.620 | So it's a new fundamental issue of our time.
02:27:45.780 | It's not just centralization versus decentralization.
02:27:48.180 | It's kind of, do we go, it's like tech progressivism
02:27:51.620 | versus techno conservatism, right?
02:27:54.060 | - So EAC, as a movement, is often formulated
02:27:57.940 | in contrast to EA, effective altruism.
02:28:02.420 | What do you think are the pros and cons
02:28:05.900 | of effective altruism?
02:28:07.500 | What's interesting, insightful to you about them?
02:28:10.340 | And what is negative?
02:28:15.460 | - Right, I think like people trying to do good
02:28:20.060 | from first principles is good.
02:28:23.100 | - We should actually say, and sorry to interrupt.
02:28:25.540 | We should probably say that,
02:28:26.980 | and you can correct me if I'm wrong,
02:28:29.060 | but effective altruism is a kind of movement
02:28:33.380 | that's trying to do good optimally
02:28:35.820 | where good is probably measured something
02:28:38.380 | like the amount of suffering in the world.
02:28:40.600 | You wanna minimize it.
02:28:42.780 | And there's ways that that can go wrong
02:28:46.860 | as any optimization can.
02:28:48.580 | And so it's interesting to explore
02:28:50.380 | like how things can go wrong.
02:28:55.860 | - We're both trying to do good to some extent.
02:28:57.980 | And we're both trying, we're arguing
02:29:01.540 | for which loss function we should use, right?
02:29:03.980 | - Yes.
02:29:04.800 | - Their loss function is sort of hedons, right?
02:29:07.740 | Units of hedonism, like how good do you feel
02:29:12.340 | and for how much time, right?
02:29:14.500 | And so suffering would be negative hedons
02:29:17.740 | and they're trying to minimize that.
02:29:19.860 | But to us, that seems like that loss function
02:29:23.660 | has sort of spurious minima, right?
02:29:25.420 | You can start minimizing shrimp farm pain, right?
02:29:30.420 | Which seems not that productive to me.
02:29:34.640 | Or you can end up with wire heading
02:29:38.660 | where you just either install a neural link
02:29:41.300 | or you scroll TikTok forever.
02:29:43.340 | And you feel good on the short term timescale
02:29:46.260 | because of your neurochemistry.
02:29:48.100 | But on long term timescale, it causes decay and death, right?
02:29:52.140 | 'Cause you're not being productive.
02:29:54.060 | Whereas sort of EAC measuring progress of civilization,
02:29:59.060 | not in terms of a subjective loss function like hedonism,
02:30:03.560 | but rather an objective measure,
02:30:08.180 | a quantity that cannot be gamed that is physical energy,
02:30:11.900 | right?
02:30:12.740 | It's very objective, right?
02:30:14.180 | And there's not many ways to game it, right?
02:30:16.900 | If you did it in terms of like GDP or a currency,
02:30:20.580 | that's pinned to a certain value that's moving, right?
02:30:23.180 | And so that's not a good way to measure our progress.
02:30:26.880 | And so, but the thing is we're both trying to make progress
02:30:31.880 | and ensure humanity flourishes and gets to grow.
02:30:35.760 | We just have different loss functions
02:30:38.260 | and different ways of going about doing it.
02:30:41.260 | - Is there a degree, maybe you can educate me, correct me.
02:30:45.380 | I get a little bit skeptical
02:30:48.460 | when there's an equation involved,
02:30:50.020 | trying to reduce all of the human civilization,
02:30:53.080 | human experience to an equation.
02:30:55.820 | Is there a degree that we should be skeptical
02:31:00.620 | of the tyranny of an equation,
02:31:03.500 | of a loss function over which to optimize,
02:31:06.320 | like having a kind of intellectual humility
02:31:08.300 | about optimizing over loss functions?
02:31:12.160 | - Yeah, so this particular loss function,
02:31:14.800 | it's not stiff, it's kind of an average of averages, right?
02:31:18.660 | It's like distributions of states in the future
02:31:23.120 | are gonna follow a certain distribution.
02:31:25.920 | So it's not deterministic.
02:31:28.800 | It's not like, we're not on like stiff rails, right?
02:31:31.800 | It's just a statistical statement about the future.
02:31:36.800 | But at the end of the day,
02:31:38.040 | you can believe in gravity or not,
02:31:41.380 | but obeying it is not really optional, right?
02:31:44.640 | And some people try to test that and that goes not so well.
02:31:47.960 | So similarly, I think thermodynamics
02:31:51.680 | is there whether we like it or not,
02:31:53.040 | and we're just trying to point out what is
02:31:56.120 | and try to orient ourselves
02:32:00.280 | and chart a path forward given this fundamental truth.
02:32:04.480 | - But there's still some uncertainty.
02:32:05.800 | There's still a lack of information.
02:32:08.360 | Humans tend to fill the gap
02:32:10.440 | with the lack of information with narratives.
02:32:13.760 | And so how they interpret,
02:32:15.220 | even physics is up to interpretation
02:32:19.840 | when there's uncertainty involved.
02:32:21.540 | And humans tend to use that to further their own means.
02:32:28.760 | So it's always, whenever there's an equation,
02:32:31.080 | it just seems like,
02:32:32.640 | until we have really perfect understanding
02:32:34.840 | of the universe, humans will do what humans do.
02:32:38.920 | And they try to use the narrative of doing good
02:32:43.920 | to fool the populace into doing bad.
02:32:51.400 | I guess that this is something
02:32:53.960 | that should be skeptical about in all movements.
02:32:57.920 | - That's right.
02:32:58.760 | So we invite skepticism, right?
02:33:01.780 | - Do you have an understanding of what might,
02:33:05.560 | to a degree that went wrong,
02:33:07.240 | what do you think may have gone wrong
02:33:08.780 | with effective altruism that might also go wrong
02:33:12.880 | with effective accelerationism?
02:33:14.720 | - Yeah, I mean, I think, you know,
02:33:18.240 | I think it provided initially a sense of community
02:33:21.720 | for engineers and intellectuals and rationalists
02:33:25.040 | in the early days.
02:33:25.880 | And it seems like the community was very healthy,
02:33:28.800 | but then, you know, they formed all sorts of organizations
02:33:32.000 | and started routing capital and having actual power, right?
02:33:37.000 | They have real power.
02:33:39.160 | They influence the government,
02:33:40.360 | they influence most AI orgs now.
02:33:43.200 | I mean, they're literally controlling the board of OpenAI,
02:33:45.600 | right, and look over to Anthropic.
02:33:48.880 | I think they all have some control over that too.
02:33:51.320 | And so I think, you know,
02:33:54.000 | the assumption of EAC, much like capitalism,
02:33:56.520 | is that every agent, organism, and meta-organism
02:33:59.800 | is gonna act in its own interest.
02:34:02.040 | And we should maintain sort of adversarial equilibrium
02:34:05.360 | or adversarial competition to keep each other in check
02:34:08.480 | at all times, at all scales.
02:34:09.980 | I think that, yeah, ultimately it was the perfect cover
02:34:15.540 | to acquire tons of power and capital.
02:34:18.280 | And unfortunately, sometimes that corrupts people over time.
02:34:23.520 | - What does a perfectly productive day,
02:34:26.520 | since building is important,
02:34:28.640 | what does a perfectly productive day
02:34:30.320 | in the life of Guillaume Verdun look like?
02:34:33.220 | How much caffeine do you consume?
02:34:36.920 | Like what's a perfect day?
02:34:38.620 | - Okay, so I have a particular regimen.
02:34:42.920 | I would say my favorite days are 12 p.m. to 4 a.m.
02:34:47.920 | And I would have meetings in the early afternoon,
02:34:53.440 | usually external meetings, some internal meetings.
02:34:56.560 | Because I'm CEO, I have to interface
02:34:59.040 | with the outside world, whether it's customers
02:35:00.720 | or investors or interviewing potential candidates.
02:35:04.500 | And usually I'll have ketones, exogenous ketones.
02:35:11.480 | - So are you on a keto diet or is this--
02:35:16.560 | - I've done keto before for football and whatnot.
02:35:21.240 | But I like to have a meal after part of my day is done.
02:35:26.240 | And so I can just have extreme focus.
02:35:31.040 | - You do the social interactions earlier in the day
02:35:35.000 | without food.
02:35:35.840 | - Front load them, yeah.
02:35:37.120 | Like right now I'm on ketones and Red Bull.
02:35:39.820 | And it just gives you a clarity of thought
02:35:44.100 | that is really next level.
02:35:45.720 | 'Cause then when you eat, you're actually allocating
02:35:47.840 | some of your energy that could be going to neural energy
02:35:51.000 | to your digestion.
02:35:53.000 | After I eat, maybe I take a break an hour or so,
02:35:56.660 | hour and a half.
02:35:57.500 | And then usually it's like ideally one meal a day,
02:36:02.040 | like steak and eggs and vegetables.
02:36:05.480 | Animal based primarily, so fruit and meat.
02:36:08.600 | And then I do a second wind usually.
02:36:11.440 | That's deep work, right?
02:36:13.880 | 'Cause I am a CEO, but I'm still technical.
02:36:16.480 | I'm contributing to most patents.
02:36:18.200 | And there I'll just stay up late into the night
02:36:22.440 | and work with engineers on very technical problems.
02:36:25.840 | - So it's like the 9 p.m. to 4 a.m.,
02:36:29.000 | whatever that range of time.
02:36:30.720 | - Yeah, yeah, that's the perfect time.
02:36:32.880 | The emails, the things that are on fire,
02:36:35.760 | stop trickling in, you can focus.
02:36:38.520 | And then you have your second wind.
02:36:40.280 | And I think Demis Hassabis has a similar work day
02:36:45.960 | to some extent.
02:36:47.240 | So I think that's definitely inspired my work day.
02:36:49.840 | But yeah, I started this work day when I was at Google
02:36:54.560 | and had to manage a bit of the product during the day
02:36:57.400 | and have meetings and then do technical work at night.
02:37:00.360 | - Exercise, sleep, those kinds of things.
02:37:03.940 | You said football, you used to play football?
02:37:05.960 | - Yeah, I used to play American football.
02:37:08.760 | I've done all sorts of sports growing up.
02:37:10.600 | And then I was into powerlifting for a while.
02:37:13.940 | So when I was studying mathematics in grad school,
02:37:17.240 | I would just do math and lift, take caffeine,
02:37:21.120 | and that was my day.
02:37:22.440 | It was very pure, the purest of monk modes.
02:37:25.760 | But it's really interesting how in powerlifting
02:37:28.440 | you're trying to cause neural adaptation
02:37:30.320 | by having certain driving signals
02:37:32.760 | and you're trying to engineer neuroplasticity
02:37:35.020 | through all sorts of supplements.
02:37:36.680 | And you have all sorts of brain-derived neurotrophic factors
02:37:42.000 | that get secreted when you lift.
02:37:44.040 | So it's funny to me how I was trying to engineer
02:37:47.080 | neural adaptation in my nervous system more broadly,
02:37:53.360 | not just my brain, while learning mathematics.
02:37:56.360 | I think you can learn much faster
02:37:59.240 | if you really care, if you convince yourself
02:38:03.680 | to care a lot about what you're learning
02:38:06.120 | and you have some sort of assistance,
02:38:08.160 | let's say caffeine or some cholinergic supplement
02:38:11.800 | to increase neuroplasticity.
02:38:13.840 | I should chat with Andrew Huberman at some point.
02:38:16.540 | He's the expert.
02:38:17.380 | But yeah, at least to me, it's like,
02:38:21.140 | you can try to input more tokens into your brain,
02:38:25.080 | if you will, and you can try to increase the learning rate
02:38:27.520 | so that you can learn much faster on a shorter timescale.
02:38:30.800 | So I've learned a lot of things.
02:38:33.520 | I've followed my curiosity.
02:38:34.840 | You're naturally, if you're passionate
02:38:36.800 | about what you're doing, you're gonna learn faster,
02:38:38.520 | you're gonna become smarter faster.
02:38:41.200 | And if you follow your curiosity,
02:38:42.520 | you're always gonna be interested.
02:38:44.440 | And so I advise people to follow their curiosity
02:38:47.200 | and don't respect the boundaries of certain fields
02:38:50.320 | or what you've been allocated in terms of lane
02:38:52.600 | of what you're working on.
02:38:54.160 | Just go out and explore and follow your nose
02:38:57.320 | and try to acquire and compress as much information
02:39:01.400 | as you can into your brain,
02:39:02.980 | anything that you find interesting.
02:39:04.960 | - And caring about a thing.
02:39:06.120 | And like you said, which is interesting,
02:39:07.800 | it works for me really well,
02:39:09.880 | is like tricking yourself that you care about a thing.
02:39:12.040 | - Yes.
02:39:13.360 | - And then you start to really care about it.
02:39:15.800 | So it's funny, the motivation
02:39:18.520 | is a really good catalyst for learning.
02:39:22.120 | - Right, and so at least part of my character,
02:39:27.120 | as Beff Jezos, is kind of like--
02:39:29.080 | - Yeah, the hype man.
02:39:30.480 | - Yeah, just hype, but I'm like hyping myself up,
02:39:32.740 | but then I just tweet about it.
02:39:34.360 | And it's just when I'm trying to get really hyped up
02:39:36.600 | and in like an altered state of consciousness
02:39:38.680 | where I'm like ultra focused, in the flow wired,
02:39:42.120 | trying to invent something that's never existed,
02:39:44.040 | I need to get to like unreal levels of like excitement.
02:39:47.840 | But your brain has these levels of cognition
02:39:52.320 | that you can unlock with like higher levels of adrenaline
02:39:55.320 | and whatnot.
02:39:56.720 | And I mean, I've learned that in powerlifting
02:39:59.480 | that actually you can engineer a mental switch
02:40:03.380 | to like increase your strength, right?
02:40:05.760 | Like if you can engineer a switch,
02:40:07.920 | maybe you have a prompt like a certain song or some music
02:40:10.640 | where suddenly you're like fully primed,
02:40:13.680 | then you're at maximum strength, right?
02:40:16.560 | And I've engineered that switch through years of lifting.
02:40:20.640 | If you're gonna get under 500 pounds
02:40:22.280 | and it could crush you,
02:40:23.980 | if you don't have that switch to be wired in, you might die.
02:40:28.600 | So that'll wake you right up.
02:40:30.160 | And that sort of skill I've carried over to like research.
02:40:34.460 | When it's go time, when the stakes are high,
02:40:37.240 | somehow I just reach another level of neural performance.
02:40:40.360 | - So Beff Jezos is your sort of embodied representation
02:40:44.680 | of your intellectual hulk.
02:40:46.520 | It's your productivity hulk that you just turn on.
02:40:50.880 | What have you learned about the nature of identity
02:40:54.120 | from having these two identities?
02:40:56.600 | I think it's interesting for people
02:40:58.200 | to be able to put on those two hats so explicitly.
02:41:01.320 | - I think it was interesting in the early days.
02:41:03.240 | I think in the early days,
02:41:04.480 | I thought it was truly compartmentalized.
02:41:06.560 | Like, oh yeah, this is a character, I'm Guillaume,
02:41:09.560 | Beff is just the character.
02:41:11.320 | I like take my thoughts
02:41:13.180 | and then I extrapolate them to be a bit more extreme.
02:41:16.160 | But over time, it's kind of like both identities
02:41:20.600 | were starting to merge mentally and people were like,
02:41:22.800 | no, you are, I met you, you are Beff,
02:41:24.880 | you are not just Guillaume.
02:41:27.220 | And I was like, wait, am I?
02:41:28.720 | And now it's like fully merged.
02:41:31.680 | But it was already, before the dox,
02:41:33.080 | it was already starting mentally
02:41:35.160 | that I am this character, it's part of me.
02:41:39.400 | - Would you recommend people sort of have an alt?
02:41:42.240 | - Absolutely.
02:41:43.800 | - Like young people, would you recommend them
02:41:45.480 | to explore different identities
02:41:47.280 | by having alts, alt accounts?
02:41:49.360 | - It's fun, it's like writing an essay
02:41:51.840 | and taking a position, right?
02:41:53.000 | It's like you do this in debate.
02:41:54.360 | It's like you can have experimental thoughts
02:41:56.920 | and by the stakes being so low
02:42:00.440 | because you're an anon account with, I don't know,
02:42:02.200 | 20 followers or something,
02:42:04.040 | you can experiment with your thoughts
02:42:05.520 | in a low stakes environment.
02:42:07.560 | And I feel like we've lost that
02:42:09.320 | in the era of everything being under your main name,
02:42:12.440 | everything being attributable to you.
02:42:14.040 | People are just afraid to speak,
02:42:15.600 | to explore ideas that aren't fully formed, right?
02:42:19.600 | And I feel like we've lost something there.
02:42:21.640 | So I hope platforms like X and others
02:42:25.120 | like really help support people
02:42:27.440 | trying to stay pseudonymous or anonymous
02:42:30.040 | because it's really important for people
02:42:32.840 | to share thoughts that aren't fully formed
02:42:36.080 | and converge onto maybe hidden truths
02:42:38.360 | that were hard to converge upon
02:42:41.320 | if it was just through open conversation with real names.
02:42:46.320 | - Yeah, I really believe in not radical
02:42:49.720 | but rigorous empathy.
02:42:52.320 | It's like really considering what it's like
02:42:54.400 | to be a person of a certain viewpoint
02:42:57.480 | and like taking that as a thought experiment
02:43:00.040 | farther and farther and farther.
02:43:01.800 | And one way of doing that is an alt account.
02:43:04.000 | That's a fun, interesting way to really explore
02:43:10.120 | what it's like to be a person that believes
02:43:11.840 | a set of beliefs.
02:43:13.600 | And taking that across the span of several days,
02:43:17.560 | weeks, months, of course,
02:43:20.120 | there's always the danger of becoming that.
02:43:22.880 | That's the Nietzsche quote: gaze long into the abyss,
02:43:26.680 | and the abyss gazes into you.
02:43:30.080 | You have to be careful.
02:43:31.800 | - Breaking Beff.
02:43:33.280 | - Yeah, right, breaking Beff.
02:43:34.920 | Yeah, you wake up with a shaved head one day.
02:43:37.360 | Just like, who am I?
02:43:39.240 | What have I become?
02:43:40.440 | So you've mentioned quite a bit of advice already,
02:43:44.480 | but what advice would you give to young people
02:43:46.760 | of how to, in this interesting world we're in,
02:43:53.000 | how to have a career and how to have a life
02:43:56.000 | they can be proud of?
02:43:57.080 | - I think to me, the reason I went to theoretical physics
02:44:01.960 | was that I had to learn the base of the stack
02:44:05.400 | that was gonna stick around
02:44:06.760 | no matter how the technology changes, right?
02:44:10.240 | And to me, that was the foundation upon which
02:44:14.040 | then I later built engineering skills and other skills.
02:44:18.320 | And to me, the laws of physics,
02:44:20.040 | it may seem like the landscape right now
02:44:21.920 | is changing so fast it's disorienting,
02:44:24.560 | but certain things like fundamental mathematics
02:44:26.680 | and physics aren't gonna change.
02:44:28.800 | And if you have that knowledge
02:44:30.600 | and knowledge about complex systems and adaptive systems,
02:44:35.040 | I think that's gonna carry you very far.
02:44:37.640 | And so not everybody has to study mathematics,
02:44:40.600 | but I think it's really a huge cognitive unlock
02:44:44.520 | to learn math and some physics and engineering.
02:44:48.480 | - Get as close to the base of the stack as possible.
02:44:51.400 | - Yeah, that's right, 'cause the base of the stack
02:44:53.640 | doesn't change, everything else,
02:44:55.560 | your knowledge might become not as relevant in a few years.
02:44:58.160 | Of course, there's a sort of transfer learning you can do,
02:45:00.360 | but then you have to always transfer learn constantly.
02:45:04.440 | - I guess the closer you are to the base of the stack,
02:45:06.320 | the easier the transfer learning, the shorter the jump.
02:45:10.360 | - Right, right.
02:45:12.120 | And you'd be surprised once you've learned concepts
02:45:15.880 | in many physical scenarios,
02:45:18.480 | how they can carry over to understanding other systems
02:45:21.920 | that aren't necessarily physics.
02:45:23.280 | And I guess like the EAC writings,
02:45:26.200 | the principles and tenets post that was based on physics,
02:45:30.080 | that was kind of my experimentation
02:45:31.720 | with applying some of the thinking
02:45:34.920 | from out of equilibrium thermodynamics
02:45:36.840 | to understanding the world around us.
02:45:38.520 | And it's led to EAC and this movement.
02:45:42.640 | - If you look at yourself as one cog in the machine,
02:45:46.880 | in the capitalist machine, one human,
02:45:52.320 | do you think mortality is a feature or a bug?
02:45:55.400 | Like would you want to be immortal?
02:45:57.760 | - No.
02:45:58.960 | I think fundamentally in thermodynamic
02:46:03.960 | dissipative adaptation, there's the word dissipation.
02:46:08.680 | Dissipation is important, death is important, right?
02:46:11.660 | We have a saying in physics,
02:46:13.000 | physics progresses one funeral at a time.
02:46:16.000 | - Yeah.
02:46:17.040 | - I think the same is true for capitalism,
02:46:19.400 | companies, empires, people, everything.
02:46:23.920 | Everything must die at some point.
02:46:26.480 | I think that we should probably extend our lifespan
02:46:29.880 | because we need a longer period of training
02:46:34.240 | 'cause the world is more and more complex, right?
02:46:36.040 | We have more and more data to really be able
02:46:39.400 | to predict and understand the world.
02:46:41.200 | And if we have a finite window of higher neuroplasticity,
02:46:45.840 | then we have sort of a hard cap
02:46:47.960 | in how much we can understand about our world.
02:46:50.320 | So, I think I am for death because again,
02:46:54.880 | I think it's important if you have like a king
02:46:57.640 | that would never die, that would be a problem, right?
02:47:00.360 | Like the system wouldn't be constantly adapting, right?
02:47:05.280 | You need novelty, you need youth, you need disruption
02:47:08.920 | to make sure the system's always adapting and malleable.
02:47:13.880 | Otherwise, if things are immortal,
02:47:17.240 | if you have, let's say, corporations that are there forever
02:47:19.560 | and they have the monopoly, they get calcified,
02:47:21.920 | they become not as optimal, not as high fitness
02:47:25.200 | in a changing, time-varying landscape, right?
02:47:28.560 | And so, death gives space for youth and novelty
02:47:33.560 | to take its place.
02:47:36.080 | And I think it's an important part
02:47:37.640 | of every system in nature.
02:47:40.760 | So, yeah, I am for death.
02:47:43.840 | But I do think that longer lifespan,
02:47:47.200 | longer time for neuroplasticity, and bigger brains
02:47:50.360 | should be something we strive for.
02:47:52.880 | - Well, in that, Jeff Bezos and Beff Jezos agree
02:47:57.840 | that all companies die.
02:47:59.400 | And for Jeff, the goal is to try to,
02:48:03.720 | he calls it day one thinking,
02:48:05.840 | try to constantly, for as long as possible, reinvent.
02:48:10.280 | Sort of extend the life of the company,
02:48:12.640 | but eventually it too will die
02:48:14.760 | 'cause it's so damn difficult to keep reinventing.
02:48:17.400 | Are you afraid of your own death?
02:48:20.660 | - I think I have ideas and things I'd like to achieve
02:48:28.640 | in this world before I have to go,
02:48:32.000 | but I don't think I'm necessarily afraid of death.
02:48:34.600 | - So, you're not attached to this particular body
02:48:36.800 | and mind that you got?
02:48:38.240 | - No, I think, I'm sure there's gonna be better
02:48:42.400 | versions of myself in the future, or--
02:48:46.200 | - Forks.
02:48:47.120 | - Forks, right, genetic forks, or other, right?
02:48:51.080 | I truly believe that.
02:48:53.600 | I think there's a sort of evolutionary-like algorithm
02:48:58.600 | happening; every bit or nat in the world
02:49:03.960 | is sort of adapting through this process
02:49:08.080 | that we described in EAC.
02:49:10.160 | And I think maintaining this adaptation malleability
02:49:13.280 | is how we have constant optimization of the whole machine.
02:49:16.680 | And so, I don't think I'm particularly an optimum
02:49:21.440 | that needs to stick around forever.
02:49:23.000 | I think there's gonna be greater optima in many ways.
02:49:25.760 | - What do you think is the meaning of it all?
02:49:27.280 | What's the why of the machine, the EAC machine?
02:49:30.920 | - The why, well, the why is thermodynamics.
02:49:36.480 | It's why we're here.
02:49:37.960 | It's what has led to the formation of life
02:49:42.520 | and of civilization, of evolution of technologies
02:49:45.440 | and growth of civilization.
02:49:47.800 | But why do we have thermodynamics?
02:49:50.040 | Why do we have our particular universe?
02:49:51.800 | Why do we have these particular hyperparameters,
02:49:54.240 | the constants of nature?
02:49:55.600 | Well, then you get into the anthropic principle, right?
02:49:59.480 | In the landscape of potential universes, right?
02:50:02.280 | We're in the universe that allows for life.
02:50:04.840 | And then why is there potentially many universes?
02:50:09.840 | I don't know.
02:50:11.640 | I don't know that part.
02:50:12.480 | But could we potentially engineer new universes
02:50:16.560 | or create pocket universes and set the hyperparameters
02:50:21.000 | so there is some mutual information
02:50:22.520 | between our existence and that universe
02:50:25.280 | and we'd be somewhat its parents?
02:50:27.400 | I think that's really, I don't know, that'd be very poetic.
02:50:31.000 | It's purely conjecture.
02:50:32.560 | But again, this is why figuring out quantum gravity
02:50:36.680 | would allow us to understand if we can do that.
02:50:39.560 | - And above that, why does it all seem
02:50:43.040 | so beautiful and exciting?
02:50:45.000 | The quest to figuring out quantum gravity
02:50:48.440 | seems so exciting.
02:50:51.880 | Why is that?
02:50:52.700 | Why are we drawn to that?
02:50:53.540 | Why are we pulled towards that?
02:50:55.120 | Just is that puzzle-solving creative force
02:50:59.160 | that underpins all of it, it seems like.
02:51:01.400 | - I think we seek, just like an LLM seeks
02:51:04.240 | to minimize cross-entropy between its internal model
02:51:07.040 | and the world.
02:51:07.880 | We seek to minimize the statistical divergence
02:51:11.120 | between our predictions of the world and the world itself.
02:51:14.280 | And having regimes of energy scales or physical scales
02:51:18.840 | in which we have no visibility,
02:51:20.480 | no ability to predict or perceive,
02:51:22.640 | that's kind of an insult to us.
02:51:26.080 | And we want to be able to understand the world better
02:51:31.080 | in order to best steer it or steer us through it.
02:51:35.900 | And in general, it's the capability that has evolved
02:51:39.880 | because the better you can predict the world,
02:51:42.140 | the better you can capture utility or free energy
02:51:46.240 | towards your own sustenance and growth.
02:51:48.940 | And I think quantum gravity, again,
02:51:52.080 | is kind of the final boss
02:51:54.580 | in terms of knowledge acquisition.
02:51:56.280 | Because once we've mastered that,
02:51:58.920 | then we can do a lot potentially.
02:52:02.560 | But between here and there,
02:52:04.440 | I think there's a lot to learn in the mesoscales.
02:52:07.240 | There's a lot of information to acquire about our world
02:52:10.540 | and a lot of engineering, perception, prediction,
02:52:13.720 | and control to be done to climb up the Kardashev scale.
02:52:18.280 | And to us, that's the great challenge of our times.
02:52:22.360 | - And when you're not sure where to go,
02:52:24.120 | let the meme pave the way.
02:52:27.760 | Guillaume, Beff, thank you for talking today.
02:52:32.280 | Thank you for the work you're doing.
02:52:33.660 | Thank you for the humor and the wisdom
02:52:35.240 | you put into the world.
02:52:37.080 | This was awesome.
02:52:37.920 | - Thank you so much for having me, Lex.
02:52:39.440 | It was a pleasure.
02:52:40.880 | - Thank you for listening to this conversation
02:52:42.600 | with Guillaume Verdon.
02:52:43.960 | To support this podcast,
02:52:45.120 | please check out our sponsors in the description.
02:52:48.120 | And now, let me leave you with some words
02:52:50.160 | from Albert Einstein.
02:52:51.960 | If at first the idea is not absurd,
02:52:54.940 | then there is no hope for it.
02:52:57.400 | Thank you for listening.
02:52:58.560 | I hope to see you next time.
02:53:00.800 | (upbeat music)