Guillaume Verdon: Beff Jezos, E/acc Movement, Physics, Computation & AGI | Lex Fridman Podcast #407
Chapters
0:00 Introduction
2:23 Beff Jezos
12:21 Thermodynamics
18:36 Doxxing
28:30 Anonymous bots
35:58 Power
38:29 AI dangers
42:01 Building AGI
50:14 Merging with AI
57:56 p(doom)
73:23 Quantum machine learning
86:41 Quantum computer
95:15 Aliens
100:04 Quantum gravity
105:25 Kardashev scale
107:17 Effective accelerationism (e/acc)
117:47 Humor and memes
120:53 Jeff Bezos
127:25 Elon Musk
133:55 Extropic
142:31 Singularity and AGI
146:29 AI doomers
147:54 Effective altruism
154:23 Day in the life
160:50 Identity
163:40 Advice for young people
165:42 Mortality
169:25 Meaning of life
00:00:00.000 |
The following is a conversation with Guillaume Verdon, 00:00:02.920 |
the man behind the previously anonymous account 00:00:09.040 |
These two identities were merged by a doxing article 00:00:16.240 |
The leader of the tech elite's EAC movement." 00:00:25.800 |
Identity number one, Guillaume, is a physicist, 00:00:30.980 |
and quantum machine learning researcher and engineer, 00:00:33.560 |
receiving his PhD in quantum machine learning, 00:00:38.480 |
and finally launching his own company called Extropic 00:00:42.160 |
that seeks to build physics-based computing hardware 00:00:50.960 |
is the creator of the effective accelerationism movement, 00:00:57.960 |
that advocates for propelling rapid technological progress 00:01:01.120 |
as the ethically optimal course of action for humanity. 00:01:04.560 |
For example, its proponents believe that progress in AI 00:01:08.880 |
is a great social equalizer, which should be pushed forward. 00:01:13.320 |
EAC followers see themselves as a counterweight 00:01:16.840 |
to the cautious view that AI is highly unpredictable, 00:01:20.120 |
potentially dangerous, and needs to be regulated. 00:01:25.840 |
of "doomers" or "decells," short for deceleration. 00:01:30.840 |
As Beff himself put it, EAC is a memetic optimism virus. 00:01:51.360 |
I am, too, a kind of aspiring connoisseur of the absurd. 00:01:56.160 |
It is not an accident that I spoke to Jeff Bezos 00:02:19.360 |
And now, dear friends, here's Guillaume Verdon. 00:02:40.400 |
applied mathematician, and then BasedBeffJezos 00:02:43.360 |
is basically a meme account that started a movement 00:02:50.120 |
So maybe just can you linger on who these people are 00:02:53.720 |
in terms of characters, in terms of communication styles, 00:03:01.960 |
ever since I was a kid, I wanted to figure out 00:03:05.240 |
a theory of everything to understand the universe, 00:03:08.000 |
and that path led me to theoretical physics eventually, 00:03:13.000 |
trying to answer the big questions of why are we here, 00:03:29.440 |
understand the universe as one big computation. 00:03:34.000 |
And essentially, after reaching a certain level, 00:03:42.520 |
I realized that I wanted to not only understand 00:03:47.280 |
how the universe computes, but sort of compute like nature, 00:03:51.780 |
and figure out how to build and apply computers 00:03:56.700 |
that are inspired by nature, so physics-based computers. 00:04:01.160 |
And that sort of brought me to quantum computing 00:04:04.960 |
as a field of study to, first of all, simulate nature. 00:04:09.960 |
And in my work, it was to learn representations of nature 00:04:17.400 |
So if you have AI representations that think like nature, 00:04:22.400 |
then they'll be able to more accurately represent it. 00:04:37.000 |
So how to do machine learning on quantum computers, 00:04:41.840 |
and really sort of extend notions of intelligence 00:04:51.780 |
quantum mechanical data from our world, right? 00:04:54.540 |
And how do you learn quantum mechanical representations 00:04:59.060 |
On what kind of computer do you run these representations 00:05:10.100 |
because ultimately I had a sort of crisis of faith. 00:05:17.580 |
as every physicist does at the beginning of their career, 00:05:20.940 |
a few equations that describe the whole universe, right? 00:05:27.420 |
But eventually I realized that actually augmenting ourselves 00:05:32.700 |
with machines, augmenting our ability to perceive, 00:05:40.140 |
And that's what got me to leave theoretical physics 00:05:43.020 |
and go into quantum computing and quantum machine learning. 00:05:49.080 |
I thought that there was still a piece missing. 00:05:53.360 |
There was a piece of our understanding of the world 00:05:57.180 |
and our way to compute and our way to think about the world. 00:06:01.460 |
And if you look at the physical scales, right? 00:06:06.060 |
At the very small scales, things are quantum mechanical, 00:06:11.740 |
And at the very large scales, things are deterministic. 00:06:18.220 |
I'm not at a superposition over here and there. 00:06:21.220 |
At the very small scales, things are in superposition. 00:06:34.380 |
you know, the scales of proteins, of biology, 00:06:51.540 |
in quantum computing and quantum machine learning, 00:07:00.940 |
by studying the very big and the very small, right? 00:07:07.100 |
So that's studying the cosmos, where it's going, 00:07:15.260 |
You study where the energy density is sufficient 00:07:19.100 |
for both quantum mechanics and gravity to be relevant, right? 00:07:24.100 |
And the sort of extreme scenarios are black holes 00:07:30.820 |
So there's the sort of scenarios that you study 00:07:34.700 |
the interface between quantum mechanics and relativity. 00:07:39.700 |
And, you know, really I was studying these extremes 00:07:44.260 |
to understand how the universe works and where is it going, 00:07:49.260 |
but I was missing a lot of the meat in the middle, 00:07:56.340 |
Because day-to-day quantum mechanics is relevant 00:07:58.940 |
and the cosmos is relevant, but not that relevant. 00:08:01.180 |
Actually, we're on sort of the medium space and time scales. 00:08:05.500 |
And there, the main, you know, theory of physics 00:08:09.020 |
that is most relevant is thermodynamics, right? 00:08:18.620 |
that is thermodynamical and it's out of equilibrium. 00:08:21.940 |
We're not, you know, just a soup of particles 00:08:27.540 |
We're a sort of coherent state trying to maintain itself 00:08:39.700 |
and I guess my faith in the universe happened 00:08:48.620 |
And I knew I wanted to build, well, first of all, 00:08:54.020 |
a computing paradigm based on this type of physics. 00:09:02.380 |
with these ideas applied to society and economies 00:09:17.740 |
That comes from having an account that you're accountable 00:09:25.760 |
just to experiment with ideas originally, right? 00:09:29.200 |
Because I didn't realize how much I was restricting 00:09:34.200 |
my space of thoughts until I sort of had the opportunity 00:09:40.600 |
In a sense, restricting your speech back propagates 00:09:51.440 |
it seemed like I had unclamped some variables in my brain 00:09:55.540 |
and suddenly could explore a much wider parameter space 00:09:59.960 |
- Just to linger on that, isn't that interesting? 00:10:02.600 |
That one of the things that people often talk about 00:10:05.440 |
is that when there's pressure and constraints on speech, 00:10:18.920 |
but somehow it creates these walls around thought. 00:10:23.680 |
- Yep, that's sort of the basis of our movement 00:10:28.480 |
is we were seeing a tendency towards constraint, 00:10:36.800 |
in every aspect of life, whether it's thought, 00:10:40.720 |
how to run a company, how to organize humans, 00:10:49.100 |
In general, we believe that maintaining variance 00:10:57.560 |
Maintaining healthy competition in marketplaces of ideas, 00:11:07.800 |
of governments, of currencies is the way forward 00:11:13.040 |
because the system always adapts to assign resources 00:11:18.040 |
to the configurations that lead to its growth. 00:11:29.240 |
is this sort of realization that life is a sort of fire 00:11:59.840 |
at acquiring free energy and dissipating more heat 00:12:08.680 |
So the universe is biased towards certain futures 00:12:30.840 |
that have complexity and are out of equilibrium. 00:12:40.160 |
its capability to use energy to offload entropy, 00:12:49.400 |
Why is that intuitive to you that it's natural 00:12:53.820 |
- Well, we're far more efficient at producing heat 00:12:58.820 |
than, let's say, just a rock with a similar mass 00:13:08.520 |
and we're using all this electricity for our operation. 00:13:13.520 |
And so the universe wants to produce more entropy 00:13:23.320 |
it's actually more optimal at producing entropy 00:13:26.480 |
because it will seek out pockets of free energy 00:13:30.920 |
and burn it for its sustenance and further growth. 00:13:48.240 |
that life emerged because of this sort of property. 00:13:53.080 |
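As a rough sketch of the bookkeeping behind this "burn free energy, dissipate heat" picture (not something worked out in the conversation, just the standard second-law statement for a system coupled to a bath at temperature T):

```latex
\Delta S_{\text{total}} \;=\; \Delta S_{\text{system}} \;+\; \frac{Q_{\text{dissipated}}}{T} \;\ge\; 0
\quad\Longrightarrow\quad
Q_{\text{dissipated}} \;\ge\; -\,T\,\Delta S_{\text{system}}
```

A structure can hold or lower its own entropy only by dumping at least that much heat into its surroundings, which is the sense in which life "offloads entropy" by consuming free energy.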
And to me, this physics is what governs the mesoscales. 00:14:08.200 |
And to me, both from a point of view of designing 00:14:13.200 |
or engineering devices that harness that physics 00:14:21.560 |
has been sort of a synergy between my two identities 00:14:28.040 |
And so that's really how the two identities emerged. 00:14:32.680 |
One was kind of, I'm a decently respected scientist 00:14:37.680 |
and I was going towards doing a startup in the space 00:14:47.920 |
And as a dual to that, I was sort of experimenting 00:14:57.760 |
And ultimately I think that around that time, 00:15:10.560 |
about the future in general and pessimism about tech. 00:15:14.520 |
And that pessimism was sort of virally spreading 00:15:19.320 |
because it was getting algorithmically amplified 00:15:31.120 |
And to me, that is a very fundamentally destructive force 00:15:46.160 |
you're increasing the likelihood of it happening. 00:15:49.600 |
And so felt the responsibility to some extent 00:15:53.680 |
to make people aware of the trajectory of civilization 00:16:05.080 |
And sort of that actually the laws of physics say 00:16:09.600 |
and grander statistically, and we can make it so. 00:16:17.400 |
if you believe that the future would be better 00:16:19.720 |
and you believe you have agency to make it happen, 00:16:30.160 |
to sort of engineer a movement of viral optimism 00:16:41.960 |
and do hard things, do the things that need to be done 00:16:50.080 |
Because at least to me, I don't think stagnation 00:17:05.680 |
when the system is growing rather than when it's declining 00:17:20.000 |
but I guess recently the two have been merged 00:17:27.120 |
- You said a lot of really interesting things there. 00:17:36.240 |
to try to understand from a quantum computing perspective 00:17:42.260 |
How do you represent nature in order to understand it, 00:17:44.240 |
in order to simulate it, in order to do something with it? 00:17:53.720 |
to the what you're calling mesoscale representation 00:18:01.340 |
in order to understand what life, human behavior, 00:18:06.340 |
all this kind of stuff that's happening here on Earth 00:18:15.200 |
So some ideas, I suppose both pessimism and optimism 00:18:26.760 |
So both optimism and pessimism have that property. 00:18:29.280 |
I would say that probably a lot of ideas have that property 00:18:33.160 |
which is one of the interesting things about humans. 00:18:35.840 |
And you talked about one interesting difference also 00:18:40.340 |
between the sort of the Guillaume, the "Gill," front end 00:18:57.820 |
in the way that we communicate in the 21st century. 00:19:00.840 |
Also the movement that you mentioned that you started, 00:19:13.860 |
A play, a resistance to the effective altruism movement. 00:19:18.860 |
Also an interesting one that I'd love to talk to you 00:19:28.640 |
Recently, without your consent like you said, 00:19:32.400 |
some journalists figured out that you're one and the same. 00:19:39.540 |
First of all, what's the story of the merger of the two? 00:19:50.740 |
with my co-founder of EAC, an account named Bayeslord. 00:19:50.740 |
Still anonymous luckily and hopefully forever. 00:19:58.500 |
- So it's Based Beff Jezos and Bayes, like Bayesian? 00:20:03.500 |
Like Bayeslord, like Bayesian, Bayeslord. 00:20:07.600 |
Okay, and so we should say from now on when you say EAC, 00:20:18.440 |
- And you're referring to a manifesto written 00:20:38.480 |
around the same time as I founded this company 00:20:51.680 |
And there, you know, the baseline is sort of secrecy, right? 00:21:14.800 |
And so I was being secretive about what I was working on. 00:21:27.920 |
they also correlated my main identity and this account. 00:21:47.120 |
You know, when you're a startup entrepreneur, 00:22:02.640 |
So I think at first they had a first reporter 00:22:08.480 |
and they didn't have all the pieces together. 00:22:10.840 |
But then they looked at their notes across the organization 00:22:22.780 |
- And in general-- - Okay, you said sensor fused. 00:22:30.760 |
We should also say that the journalists used, 00:22:48.480 |
So, and that's where primarily the match happened. 00:22:58.180 |
They looked at my private Facebook account and so on. 00:23:07.380 |
Originally I thought that doxing was illegal, right? 00:23:21.320 |
that sort of like ring the alarm bells for me 00:23:23.600 |
when they said, because I had just reached 50K followers, 00:23:42.400 |
if somebody's physical location is found out, 00:23:50.240 |
- So we're referring to the more general concept 00:24:01.760 |
- I think that for the reasons we listed before, 00:24:06.520 |
having an anonymous account is a really powerful way 00:24:13.020 |
We were ultimately speaking truth to power, right? 00:24:48.840 |
and freedom of information propagation on social media, 00:24:55.120 |
which thanks to Elon purchasing Twitter, now X, 00:25:01.300 |
And so to us, we wanted to call out certain maneuvers 00:25:12.880 |
as not what it may seem on the surface, right? 00:25:20.160 |
might be useful for regulatory capture, right? 00:25:31.140 |
And I think we should have the right to point that out 00:25:41.760 |
Ultimately, that's why I created an anonymous account. 00:25:45.920 |
It's to have my ideas evaluated for themselves, 00:25:52.600 |
or status from having done things in the past. 00:25:57.420 |
And to me, start an account from zero to a large following 00:26:02.420 |
in a way that wasn't dependent on my identity 00:26:13.820 |
It's kind of like new game plus in a video game. 00:26:18.000 |
with your knowledge of how to beat it, maybe some tools, 00:26:21.080 |
but you restart the video game from scratch, right? 00:26:24.200 |
And I think to have a truly efficient marketplace of ideas 00:26:55.200 |
how are we gonna converge on the best way to do things? 00:26:58.280 |
So it was disappointing to hear that I was getting doxed 00:27:04.040 |
because I had a responsibility for my company. 00:27:08.020 |
And so we ended up disclosing that we're running a company, 00:27:28.360 |
So one is unethical for them to do what they did, 00:27:35.300 |
but in the general case, is it good for society? 00:27:38.620 |
Is it bad for society to remove the cloak of anonymity? 00:27:49.120 |
Like I said, if anybody who speaks truth to power 00:27:59.640 |
against those that usually control the flow of information, 00:28:03.080 |
if anybody that reaches a certain threshold gets doxed 00:28:11.620 |
to apply pressure on them to suppress their speech, 00:28:15.240 |
I think that's a speech suppression mechanism, 00:28:27.280 |
- So with the flip side of that, which is interesting, 00:28:30.520 |
is as we get better and better at larger language models, 00:28:34.020 |
you can imagine a world where there's anonymous accounts 00:28:40.920 |
with very convincing larger language models behind them, 00:28:54.640 |
You could start a revolution from your basement. 00:29:05.880 |
- Technically, yeah, I could start in any basement 00:29:10.480 |
'cause I quit big tech, moved back in with my parents, 00:29:17.520 |
bought about $100K of GPUs, and I just started building. 00:29:23.780 |
'cause that's sort of the American or Canadian 00:29:28.760 |
heroic story of one man in their basement with 100 GPUs. 00:29:33.440 |
I was more referring to the unrestricted scaling 00:29:42.340 |
- I think that freedom of speech induces freedom of thought 00:30:17.520 |
these synthetic intelligences are gonna make good points 00:30:22.620 |
about how to steer systems in our civilization 00:30:39.980 |
of maintaining variance and diversity of thought, 00:30:46.940 |
if you can have swarms of non-biological beings 00:30:51.940 |
because they can be like the sheep in an animal farm. 00:30:58.940 |
- Yeah, of course, I would say that the solution to this 00:31:05.540 |
or way to sign that this is a certified human 00:31:16.780 |
And I think Elon is trying to converge on that on X 00:31:22.300 |
- Yeah, it'd be interesting to also be able to sign 00:31:27.700 |
- Like who created the bot and what are the parameters, 00:31:32.300 |
like the full history of the creation of the bot. 00:31:45.460 |
'Cause then you can know if there's like a swarm 00:31:53.960 |
I do think that a lot of pervasive ideologies today 00:31:58.960 |
have been amplified using sort of these adversarial techniques 00:32:21.320 |
to decelerate, to wind down, the degrowth movement, 00:32:39.480 |
I mean, we can look at what happened in Germany, right? 00:32:49.360 |
where that induced shutdowns of nuclear power plants, 00:33:01.800 |
And that was a net negative for Germany and the West, right? 00:33:11.360 |
that slowing down AI progress to have only a few players 00:33:20.680 |
We almost lost OpenAI to this ideology, right? 00:33:25.040 |
It almost got dismantled, right, a couple of weeks ago. 00:33:28.360 |
That would have caused huge damage to the AI ecosystem. 00:33:33.520 |
And so to me, I want fault-tolerant progress. 00:33:40.320 |
to keep moving forward and making sure we have variance 00:33:56.280 |
Actually, there's a concept in quantum computing. 00:34:02.000 |
quantum computers are very fragile to ambient noise, right? 00:34:20.040 |
And there, what you do is you encode information non-locally 00:34:25.040 |
through a process called quantum error correction. 00:34:33.840 |
any local fault, hitting some of your quantum bits 00:34:41.480 |
if your information is sufficiently delocalized, 00:34:49.400 |
And to me, I think that humans fluctuate, right? 00:34:53.520 |
They can get corrupted, they can get bought out. 00:35:12.000 |
and suddenly you've corrupted the whole system, right? 00:35:27.520 |
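A toy classical analogue of the delocalized-encoding idea he is describing (a sketch only; real quantum codes protect superpositions without reading the bits out directly) is the repetition code: spread one logical bit over several carriers so that any single local fault can be outvoted.

```python
import random

def encode(bit):
    # Spread one logical bit over three physical carriers (repetition code).
    return [bit, bit, bit]

def local_fault(carriers):
    # A "local" fault flips only one carrier, chosen at random.
    i = random.randrange(len(carriers))
    carriers[i] ^= 1
    return carriers

def decode(carriers):
    # Majority vote recovers the logical bit despite any single flip.
    return int(sum(carriers) > len(carriers) / 2)

assert decode(local_fault(encode(1))) == 1  # the information survives the local fault
```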
I think making sure that power for this AI revolution 00:36:09.440 |
So you're sometimes using them a little bit synonymously, 00:36:19.640 |
is there a place of creating a fault-tolerant development, 00:36:32.360 |
And AI, we can generalize to technology in general, 00:36:43.160 |
because that's what the universe really wants us to do? 00:36:46.520 |
Or is there a place to where we can consider dangers 00:36:50.840 |
Sort of wise, strategic optimism versus reckless optimism? 00:37:03.480 |
I mean, the reality is that whoever deploys an AI system 00:37:08.480 |
is liable for, or should be liable for what it does. 00:37:13.640 |
And so if the organization or person deploying an AI system 00:37:22.920 |
And ultimately, the thesis is that the market 00:37:25.760 |
will induce sort of, will positively select for AIs 00:37:32.800 |
that are more reliable, more safe, and tend to be aligned. 00:37:43.800 |
for the product they put out that uses this AI, 00:37:47.400 |
they won't wanna buy AI products that are unreliable, right? 00:37:52.400 |
So we're actually for reliability engineering. 00:37:55.360 |
We just think that the market is much more efficient 00:38:00.280 |
at achieving this sort of reliability optimum 00:38:18.200 |
- So to you, safe AI development will be achieved 00:38:22.200 |
through market forces versus through, like you said, 00:38:34.540 |
from Yoshua Bengio, Geoff Hinton, and many others. 00:38:37.500 |
It's titled "Managing AI Risks in an Era of Rapid Progress." 00:38:42.180 |
So there's a collection of folks who are very worried 00:38:50.140 |
And they have a bunch of practical recommendations. 00:38:55.140 |
Maybe I give you four and you see if you like any of them. 00:38:58.780 |
- One, give independent auditors access to AI labs. 00:39:05.700 |
one third of their AI research and development funding 00:39:09.300 |
to AI safety, sort of this general concept of AI safety. 00:39:14.180 |
Three, AI companies are required to adopt safety measures 00:39:17.460 |
if dangerous capabilities are found in their models. 00:39:20.680 |
And then four, something you kind of mentioned, 00:39:28.540 |
So independent auditors, governments and companies 00:39:36.660 |
You gotta have safety measures if shit goes really wrong. 00:39:44.700 |
Any of that seem like something you would agree with? 00:39:50.700 |
just arbitrarily saying 30% seems very arbitrary. 00:40:07.260 |
would naturally pop up because how would customers know 00:40:10.500 |
that your product is certified reliable, right? 00:40:18.980 |
The thing I would oppose and the thing I'm seeing 00:40:21.680 |
that's really worrisome is there's a sort of, 00:40:26.100 |
weird sort of correlated interest between the incumbents, 00:40:32.380 |
And if the two get too close, we open the door 00:40:41.780 |
that could have absolute power over the people. 00:40:54.820 |
And even if you like our current leaders, right? 00:40:56.940 |
I think that some of the leaders in big tech today 00:41:08.460 |
Just like we saw at OpenAI, it becomes a market leader, 00:41:12.320 |
has a lot of the power and now it becomes a target 00:41:18.220 |
And so I just want separation of AI and state. 00:41:30.980 |
"because of geopolitical competition with our adversaries." 00:41:35.980 |
I think that the strength of America is its variance, 00:41:46.980 |
Capitalism converges on technologies of high utility 00:42:01.580 |
- So if AGI turns out to be a really powerful technology, 00:42:05.440 |
or even the technologies that lead up to AGI, 00:42:08.900 |
what's your view on the sort of natural centralization 00:42:11.660 |
that happens when large companies dominate the market? 00:42:16.100 |
Basically formation of monopolies, like the takeoff, 00:42:21.020 |
whichever company really takes a big leap in development, 00:42:29.140 |
or explicitly the secrets of the magic sauce, 00:42:32.180 |
they can just run away with it, is that a worry? 00:42:37.820 |
I don't think there's a hyperbolic singularity, right? 00:42:53.460 |
more intelligence being applied to advancing this science 00:43:10.700 |
is to maintain a near equilibrium of capabilities. 00:43:18.040 |
to be more prevalent and championed by many organizations, 00:43:21.620 |
because there, you sort of equilibrate the alpha 00:43:40.580 |
where a market leader has so much market power, 00:43:42.940 |
it just dominates everything, right, and runs away. 00:43:53.820 |
every grad student, every kid in their mom's basement 00:44:16.460 |
as a civilization, it's really a search algorithm. 00:44:20.060 |
And the more points we have in the search algorithm 00:44:26.280 |
the more we'll be able to explore new modes of thinking, 00:44:31.860 |
- Yeah, but it feels like a delicate balance, 00:44:34.100 |
because we don't understand exactly what it takes 00:44:36.620 |
to build AGI and what it will look like when we build it. 00:44:52.660 |
But if you look at something like nuclear weapons, 00:45:04.780 |
the guy or gal in her mom's basement to make progress. 00:45:11.260 |
And it seems like the transition to that kind of world 00:45:16.260 |
where only one player can develop AGI is possible. 00:45:30.540 |
is the centralization of the supply chains for the hardware. 00:45:35.620 |
- We have NVIDIA is just the dominant player. 00:45:42.740 |
And then we have TSMC as the main fab in Taiwan, 00:46:10.740 |
And so what I'm trying to do is sort of explode the variance 00:46:20.940 |
by fundamentally re-imagining how you embed AI algorithms 00:46:28.740 |
I dislike the term AGI, artificial general intelligence. 00:46:51.740 |
Grokking systems that have multi-partite quantum entanglement 00:46:56.900 |
that you can provably not represent efficiently 00:47:10.980 |
sort of exploring the wider space of intelligences. 00:47:15.740 |
And I think that space of intelligence inspired by physics 00:47:25.060 |
And I think we're going through a moment right now 00:47:37.700 |
We realized that human intelligence is just a point 00:47:41.460 |
in a very large space of potential intelligences. 00:47:59.220 |
and we've survived and we've achieved technologies 00:48:04.760 |
We've achieved technologies that ensure our wellbeing. 00:48:07.980 |
For example, we have satellites monitoring solar flares 00:48:18.300 |
of this anthropomorphic, anthropocentric anchor for AI, 00:48:23.300 |
we'll be able to explore the wider space of intelligences 00:48:26.580 |
that can really be a massive benefit to our wellbeing 00:48:32.700 |
And still we're able to see the beauty and meaning 00:48:35.660 |
in the human experience even though we're no longer 00:48:39.540 |
in our best understanding of the world at the center of it. 00:48:42.940 |
- I think there's a lot of beauty in the universe, right? 00:48:54.940 |
So you have humans, technology, capital, memes. 00:49:02.300 |
Everything induces a selective pressure on one another. 00:49:05.380 |
And it's a beautiful machine that has created us, 00:49:07.860 |
has created the technology we're using to speak today 00:49:15.020 |
technology we use to augment ourselves every day. 00:49:19.300 |
I think the system is beautiful and the principle 00:49:22.900 |
that induces this sort of adaptability and convergence 00:49:32.580 |
It's a beautiful principle that we're part of. 00:49:37.300 |
And I think part of EAC is to appreciate this principle 00:49:42.300 |
in a way that's not just centered on humanity 00:49:49.900 |
Appreciate life, the preciousness of consciousness 00:50:32.620 |
- So during my career, I had a moment where I realized 00:50:42.100 |
to truly understand the universe around us, right? 00:50:45.240 |
Instead of just having humans with pen and paper 00:50:49.980 |
And to me, that sort of process of letting go 00:51:01.820 |
A quantum computer is much better than a human 00:51:08.140 |
Similarly, I think that humanity has a choice. 00:51:13.140 |
Do we accept the opportunity to have intellectual 00:51:25.300 |
this path of growth and scope and scale of civilization? 00:51:39.540 |
by combining and augmenting ourselves with AI, 00:51:51.980 |
is one where humans augment themselves with AI. 00:51:56.540 |
I think we're already on this path to augmentation. 00:52:04.020 |
We have wearables soon that have shared perception with us, 00:52:12.420 |
technically, your Tesla car has shared perception. 00:52:16.300 |
And so if you have shared experience, shared context, 00:52:27.620 |
And to me, I think that humanity augmenting itself with AI 00:52:37.860 |
and having AI that is not anchored to anything biological, 00:52:53.560 |
that are made of humans and technology, right? 00:52:56.120 |
Companies are sort of large mixture of expert models 00:53:00.580 |
where we have neural routing of tasks within a company, 00:53:18.780 |
of matter or information leads to maximal growth 00:53:23.440 |
will be where we converge just from like physical principles. 00:53:28.440 |
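As a sketch of the mixture-of-experts analogy a few lines above (the names and numbers here are purely illustrative, not anything from the conversation): a router scores each "expert" against the incoming task and dispatches it to the best match, loosely like task routing inside a company.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def route(task, expert_keys, experts, top_k=1):
    # Score each expert against the task and forward it to the top match(es).
    weights = softmax(expert_keys @ task)
    chosen = np.argsort(weights)[-top_k:]
    return sum(weights[i] * experts[i](task) for i in chosen)

# Illustrative "experts": tiny functions standing in for teams or sub-models.
experts = [lambda x: 2 * x, lambda x: x + 1, lambda x: -x]
expert_keys = np.random.randn(3, 4)   # one key vector per expert
print(route(np.random.randn(4), expert_keys, experts))
```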
And so we can either align ourselves to that reality 00:53:33.140 |
and join the acceleration up in scope and scale 00:53:42.660 |
and try to decelerate and move back in the forest, 00:53:47.060 |
let go of technology and return to our primitive state. 00:53:51.180 |
And those are the two paths forward, at least to me. 00:53:56.220 |
whether there's a limit to the human capacity to align. 00:54:14.560 |
But to him, to Dan, this is not a good thing, 00:54:19.500 |
as he argues that natural selection favors AIs over humans, 00:54:26.900 |
If it is an evolutionary process and AI systems 00:54:45.580 |
Right now we run AIs that have positive utility to humans, 00:54:57.060 |
when there's an API running instances of it on GPUs, right? 00:55:04.740 |
the ones that have high utility to us, right? 00:55:18.260 |
I think there's gonna be an opportunity to steer AI 00:55:56.720 |
one of the concerns is unintended consequences. 00:56:06.580 |
through unintended consequences of AI systems is very large. 00:56:13.940 |
By augmenting ourselves with AI is unimaginable right now. 00:56:22.500 |
Whether we take the path of creating these technologies, 00:56:35.940 |
of like we don't birth these technologies at all, 00:56:38.980 |
and then we leave all the potential upside on the table. 00:56:42.740 |
And to me, out of responsibility to the future humans 00:56:54.380 |
I think we have to make the greater, grander future happen. 00:57:05.240 |
- I think, like I said, the market will exhibit caution. 00:57:18.760 |
to things that have negative utility to them. 00:57:23.300 |
is like there's not always perfect information. 00:57:27.020 |
There's bad faith actors that mess with the system. 00:57:40.980 |
- Well, that's why we need freedom of information, 00:57:49.180 |
be able to converge on the subspace of technologies 00:57:52.760 |
that have positive utility for us all, right? 00:58:18.320 |
I think it's, people just throw numbers out there, 00:58:31.800 |
if you have enough variables or hidden Markov process. 00:59:06.440 |
because that was an evolutionary optimum, right? 00:59:12.920 |
higher neuroticism will just think of negative futures 00:59:17.920 |
where everything goes wrong all day, every day, 00:59:22.280 |
and claim that they're doing unbiased sampling. 00:59:40.440 |
And in general, I don't think that we can predict the future 00:59:44.040 |
with that much granularity because of chaos, right? 00:59:49.880 |
you have some uncertainty and a couple of variables. 00:59:54.400 |
you have this concept of a Lyapunov exponent, right? 00:59:57.600 |
A bit of fuzz becomes a lot of fuzz in our estimate, 01:00:10.480 |
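The Lyapunov point can be made concrete with a two-line experiment (an illustration, not from the conversation): in a chaotic map, two initial conditions that differ by one part in a million separate exponentially fast.

```python
# Logistic map at r = 4 is chaotic: nearby trajectories diverge exponentially,
# so a tiny uncertainty ("a bit of fuzz") swamps any long-range prediction.
def logistic(x, r=4.0):
    return r * x * (1.0 - x)

x, y = 0.400000, 0.400001   # initial conditions differing by 1e-6
for step in range(1, 31):
    x, y = logistic(x), logistic(y)
    if step % 10 == 0:
        print(f"step {step:2d}: separation = {abs(x - y):.3e}")
# The separation grows from ~1e-6 to order 1 within a few dozen steps.
```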
All we know, the only prior we have is the laws of physics. 01:00:16.880 |
The laws of physics say the system will wanna grow. 01:00:24.040 |
and replication are more likely in the future. 01:00:31.080 |
our current mutual information with the future. 01:00:33.880 |
And the path towards that is for us to accelerate 01:00:44.480 |
similar to the quantum supremacy experiment at Google, 01:00:53.160 |
That was an example of a quantum chaotic system 01:01:02.120 |
with even the biggest supercomputer in the world, right? 01:01:19.280 |
I think they would be very rich trading on the stock market. 01:01:23.280 |
- But nevertheless, it's true that humans are biased, 01:01:32.880 |
But we can still imagine different trajectories 01:01:37.640 |
We don't know all the other ones that don't, necessarily. 01:01:55.800 |
how can powerful technology hurt a lot of people? 01:02:16.820 |
Philosophical meaning, like, is there a chance? 01:02:25.600 |
- I think, to me, one of the biggest existential risks 01:02:29.320 |
would be the concentration of the power of AI 01:02:35.400 |
Especially if it's a mix between the companies 01:02:38.760 |
that control the flow of information, and the government. 01:02:49.460 |
where only a very few, an oligopoly in the government, 01:02:54.240 |
have AI, and they could even convince the public 01:03:13.200 |
we have a data-driven prior, of these things happening. 01:03:27.880 |
in my Bayesian inference, than sci-fi-based priors, right? 01:03:32.880 |
Like my prior came from the Terminator movie. 01:03:57.200 |
or highly unlikely transition in that chain, right? 01:04:09.280 |
and we're wired to deem the unknown to be dangerous, 01:04:14.280 |
because that's a good heuristic for survival, right? 01:04:25.160 |
so much upside to lose by preemptively stopping 01:04:29.680 |
the positive futures from happening out of fear. 01:04:33.100 |
And so, I think that we shouldn't give in to fear. 01:04:46.360 |
For example, the founding fathers of the United States 01:04:59.240 |
and I think the same could possibly be done for AGI. 01:05:13.680 |
When there's a dictator, a lot of dark, bad things happen. 01:05:18.680 |
The question is, can AGI become that dictator? 01:05:23.200 |
Can AGI, when developed, become the centralizer 01:05:36.400 |
the same Stalin-like tendencies to centralize 01:05:40.280 |
and manage centrally the allocation of resources. 01:05:45.280 |
And you can even see that as a compelling argument 01:05:59.920 |
whatever forces that corrupt the human mind with power 01:06:05.040 |
It'll just say, well, humans are dispensable. 01:06:34.840 |
because it decreases the amount of poor people 01:06:48.280 |
Of course, it misses a fundamental piece here 01:06:53.000 |
that's hard to put into a mathematical equation 01:07:12.600 |
there's a bias towards over-centralization of AI 01:07:26.800 |
we're gonna run out of data to scrape over the internet. 01:07:31.000 |
well, actually I'm working on increasing the compute density 01:07:43.200 |
I think that fundamentally centralized cybernetic control, 01:07:54.520 |
and is trying to perceive the world accurately, 01:08:00.080 |
and control it and enact its will upon the world. 01:08:04.480 |
I think that's just never been the optimum, right? 01:08:19.320 |
to fuse all the information that is coming to it 01:08:34.120 |
is a notion of sort of hierarchical cybernetic control, 01:09:11.280 |
And then it samples each person's update once per week. 01:09:17.520 |
and you have larger timescale and greater scope. 01:09:21.800 |
as sort of the optimal way to control systems. 01:09:25.280 |
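A minimal sketch of that hierarchical control picture (the class and the update periods are made up for illustration): fast inner loops track local signals, while slower outer loops sample aggregated state and adjust targets over a wider scope.

```python
class Controller:
    """One layer of a control hierarchy with its own update period."""
    def __init__(self, period):
        self.period = period
        self.setpoint = 0.0

    def maybe_update(self, t, observation):
        if t % self.period == 0:
            # Nudge this layer's setpoint toward what it observes below it.
            self.setpoint += 0.1 * (observation - self.setpoint)
        return self.setpoint

worker = Controller(period=1)     # reacts every tick to local data
manager = Controller(period=7)    # samples the worker roughly "once per week"
director = Controller(period=30)  # adjusts targets on a longer timescale

signal = 1.0
for t in range(1, 91):
    w = worker.maybe_update(t, signal)
    m = manager.maybe_update(t, w)
    director.maybe_update(t, m)
print(worker.setpoint, manager.setpoint, director.setpoint)
```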
And really that's what capitalism gives us, right? 01:09:31.960 |
and you can even have like parent companies and so on. 01:09:39.440 |
In quantum computing, that's my field I came from, 01:09:46.400 |
Quantum error correction is detecting a fault 01:09:50.640 |
predicting how it's propagated through the system 01:09:56.680 |
And it turns out that decoders that are hierarchical 01:10:04.840 |
perform the best by far and are far more fault tolerant. 01:10:09.360 |
And the reason is if you have a non-local decoder, 01:10:24.800 |
that everybody reports to and that CEO goes on vacation, 01:10:32.640 |
yes, we're seeing a tendency towards centralization of AI, 01:10:37.040 |
but I think there's gonna be a correction over time 01:10:40.000 |
where intelligence is gonna go closer to the perception 01:10:43.600 |
and we're gonna break up AI into smaller subsystems 01:11:01.840 |
but in relation to each other, nations are anarchic, 01:11:13.200 |
what'd you call it, a centralized cybernetic control? 01:11:30.240 |
you may have two units working on similar technology 01:11:36.320 |
and you prune the one that performs not as well, right? 01:11:39.640 |
And that's a sort of selection process for a tree 01:12:08.280 |
'cause you're describing human systems mostly right now. 01:12:11.660 |
I just hope when there's a monopoly on AGI in one company 01:12:16.660 |
that we'll see the same thing we see with humans, 01:12:23.500 |
- I mean, that's been the case so far, right? 01:12:25.860 |
We have OpenAI, we have Anthropic, now we have XAI. 01:12:38.860 |
You don't have to trust any one party too much 01:12:42.040 |
'cause we're kind of always hedging our bets at every level. 01:12:47.060 |
And that's the most beautiful thing to me at least 01:12:54.740 |
And maintaining that dynamism is how we avoid tyranny, right? 01:12:59.140 |
Making sure that everyone has access to these tools, 01:13:04.140 |
to these models and can contribute to the research 01:13:11.940 |
where very few people have control over AI for the world 01:13:24.740 |
you mentioned multipartite quantum entanglement. 01:13:33.620 |
When you think about quantum mechanical systems 01:13:37.340 |
happening in them, what do you think is intelligent 01:13:42.340 |
about the kind of computation the universe is able to do? 01:13:47.700 |
is the kind of computation a human brain is able to do? 01:14:04.180 |
If you had access to all of the degrees of freedom, 01:14:08.460 |
you could in a very, very, very large quantum computer 01:14:14.580 |
let's say a few qubits per Planck volume, right? 01:14:24.820 |
Then you'd be able to simulate the whole universe, right? 01:14:31.180 |
assuming you're looking at a finite volume, of course, 01:14:35.340 |
I think that, at least to me, intelligence is the, 01:14:43.100 |
The ability to perceive, predict, and control our world. 01:14:49.300 |
a lot of intelligence we use is more about compression, right? 01:14:54.300 |
It's about operationalizing information theory, right? 01:15:00.300 |
In information theory, you have the notion of entropy 01:15:06.300 |
And entropy tells you that you need this many bits 01:15:10.620 |
to encode this distribution or this subsystem 01:15:27.500 |
is very much trying to minimize relative entropy 01:15:32.500 |
between our models of the world and the world, 01:15:40.540 |
And so we're learning, we're searching over the space 01:15:50.700 |
that has distilled all the variance and noise and entropy. 01:15:57.380 |
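For reference, the two quantities being gestured at have standard definitions (a sketch, in base-2 logs):

```latex
H(p) \;=\; -\sum_x p(x)\,\log_2 p(x)
\qquad \text{(average bits needed to encode samples from } p\text{)}

D_{\mathrm{KL}}(p\,\|\,q) \;=\; \sum_x p(x)\,\log_2 \frac{p(x)}{q(x)}
\qquad \text{(relative entropy between the world } p \text{ and our model } q\text{)}
```

Driving the relative entropy toward zero is the formal version of "making our model of the world match the world."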
And originally, I came to quantum machine learning 01:16:03.780 |
because the entropy of black holes is very interesting. 01:16:22.340 |
how do black holes actually encode information? 01:16:28.500 |
And so that got me into the space of algorithms 01:16:40.460 |
how do you acquire quantum information from the world? 01:16:44.060 |
So something I've worked on, this is public now, 01:16:50.020 |
So how do you capture information from the real world 01:16:54.540 |
in superposition and not destroy the superposition, 01:16:57.540 |
but digitize for a quantum mechanical computer, 01:17:04.260 |
And so if you have an ability to capture quantum information 01:17:09.260 |
and search over, learn representations of it, 01:17:43.580 |
A lot of biology and protein folding and so on 01:17:51.020 |
And so unlocking an ability to augment human intellect 01:18:01.140 |
like a fundamental capability for civilization 01:18:09.820 |
but over time I kind of grew weary of the timelines 01:18:14.660 |
that were starting to look like nuclear fusion. 01:18:20.060 |
maybe by way of definition, by way of explanation, 01:18:27.260 |
- So a quantum computer really is a quantum mechanical system 01:18:40.660 |
and it can maintain its quantum mechanical state. 01:18:53.300 |
And it's actually more fundamental than probability theory. 01:19:00.080 |
but we're not used to thinking in superpositions 01:19:09.180 |
So we have to translate the quantum mechanical world 01:19:20.100 |
You have to represent things with very large matrices, 01:19:27.100 |
And we've seen all sorts of players from neutral atoms, 01:19:38.260 |
I think you can make a quantum computer out of many things. 01:19:40.460 |
But to me, the thing that was really interesting 01:19:48.300 |
was about understanding the quantum mechanical world 01:19:53.260 |
So embedding the physical world into AI representations 01:19:59.620 |
was embedding AI algorithms into the physical world. 01:20:03.960 |
So this bidirectionality of embedding physical world 01:20:12.060 |
really that's the sort of core of my quest really, 01:20:25.040 |
to merge really physics and AI fundamentally. 01:20:31.400 |
to do machine learning on a representation of nature 01:20:37.600 |
that stays true to the quantum mechanical aspect of nature. 01:20:42.600 |
- Yeah, it's learning quantum mechanical representations. 01:20:49.300 |
Alternatively, you can try to do classical machine learning 01:20:56.660 |
I wouldn't advise it because you may have some speed ups, 01:21:01.180 |
but very often the speed ups come with huge costs. 01:21:17.240 |
So what you have to do is what I've been mentioning, 01:21:21.300 |
which is really an algorithmic fridge, right? 01:21:24.360 |
It's trying to pump entropy out of the system, 01:21:37.420 |
there's just such a huge overhead, it's not worth it. 01:21:42.020 |
It's like thinking about shipping something across a city 01:21:57.040 |
Can you understand with quantum deep learning 01:22:09.280 |
that has sufficient quantum mechanical correlations 01:22:14.280 |
that are very hard to capture for classical representations, 01:22:24.900 |
The question is which systems have sufficient correlations 01:22:32.300 |
which systems are still relevant to industry? 01:22:37.780 |
People are leaning towards chemistry, nuclear physics. 01:22:52.660 |
they've captured a quantum mechanical image of the world 01:22:57.400 |
that becomes a sort of quantum form of machine perception. 01:23:00.100 |
And so, for example, Fermilab has a project exploring 01:23:04.900 |
detecting dark matter with these quantum sensors. 01:23:11.900 |
to understand the universe ever since I was a child. 01:23:18.560 |
that help us peer into the earliest parts of the universe. 01:23:24.500 |
For example, the LIGO is a quantum sensor, right? 01:23:29.160 |
So, yeah, I would say quantum machine perception, 01:23:33.540 |
simulations, right, grokking quantum simulations, 01:23:39.660 |
AlphaFold understood the probability distribution 01:23:48.400 |
over configurations of electrons more efficiently 01:23:55.480 |
A Universal Training Algorithm for Quantum Deep Learning 01:24:06.660 |
Is there some interesting aspects you could just mention 01:24:09.620 |
on how kind of backprop and some of these things 01:24:15.780 |
transfer over to the quantum machine learning? 01:24:21.540 |
That was one of my first papers in quantum deep learning. 01:24:24.580 |
Everybody was saying, "Oh, I think deep learning 01:24:41.140 |
you embed reversible operations into a quantum computation. 01:24:46.340 |
And so, the trick there was to do a feed-forward operation 01:24:54.220 |
You just kick the system with a certain force 01:25:08.700 |
you start with a superposition over parameters, 01:25:15.100 |
Now, you don't have just a point for parameters. 01:25:18.300 |
You have a superposition over many potential parameters. 01:25:28.340 |
- 'Cause phase kicks emulate having the parameter space 01:25:37.700 |
And you're trying to get the Schrodinger equation, 01:25:45.780 |
And so, you do an algorithm to induce this phase kick, 01:25:52.700 |
And then, when you uncompute the feed-forward, 01:26:01.380 |
and hit each one of the parameters throughout the layers. 01:26:09.460 |
then it's kind of like a particle moving in N dimensions, 01:26:18.300 |
would be that it can tunnel through the landscape 01:26:20.740 |
and find new optima that would have been difficult 01:26:26.660 |
But again, this is kind of a theoretical thing. 01:26:30.760 |
And in practice, with at least the current architectures 01:26:37.460 |
such algorithms would be extremely expensive to run. 01:26:42.580 |
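Schematically, the phase-kick scheme described above can be written like this (the notation is a paraphrase, not lifted from the paper): the parameters live in a quantum register in superposition, the loss is imprinted as a phase, and a kinetic step lets the parameter wavepacket move and interfere.

```latex
|\Psi_0\rangle \;=\; \int d\theta\; \psi(\theta)\,|\theta\rangle
\qquad \text{(superposition over network parameters)}

|\Psi_{k+1}\rangle \;=\; \underbrace{e^{-i\,\gamma_k \hat{p}^{\,2}/2}}_{\text{kinetic spread}}\;
\underbrace{e^{-i\,\eta_k\, \hat{L}(\hat{\theta})}}_{\text{phase kick by the loss}}\; |\Psi_k\rangle
```

The kick plays the role of the force from the loss landscape, and alternating the two steps is the quantum analogue of momentum-style descent over a superposition of parameter settings.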
to ask the difference between the different fields 01:26:56.460 |
I think a lot of the stuff you're talking about here 01:27:19.360 |
which we started in school at the University of Waterloo, 01:27:24.540 |
Initially, I was a physicist, a mathematician. 01:27:28.260 |
We had a computer scientist, we had a mechanical engineer, 01:27:32.140 |
and then we had a physicist that was experimental, primarily. 01:27:39.980 |
and figuring out how to communicate and share knowledge 01:27:45.420 |
this sort of interdisciplinary engineering work. 01:28:01.820 |
And in engineering, you're trying to hack the world. 01:28:05.420 |
You're trying to find how to apply the physics 01:28:08.020 |
that I know, my knowledge of the world, to do things. 01:28:12.820 |
I think there's just a lot of limits to engineering. 01:28:32.580 |
why is it so hard to build a quantum computer? 01:28:51.360 |
there's a sort of exodus from quantum computing 01:29:00.780 |
- But we should say the name of your company is Extropic. 01:29:17.940 |
sort of zero-temperature subspace of information. 01:29:22.640 |
And the way to do that is by encoding information. 01:29:26.080 |
You encode a code within a code within a code 01:29:36.620 |
But ultimately, it's a sort of algorithmic refrigerator, 01:29:43.260 |
It's just pumping out entropy out of the subsystem 01:30:00.080 |
It's very difficult because in order to scale up 01:30:05.380 |
you need each component to be of sufficient quality 01:30:09.660 |
Because if you try to do this error correction, 01:30:13.940 |
in each quantum bit and your control over them, 01:30:16.740 |
if it's insufficient, it's not worth scaling up. 01:30:21.780 |
You're actually adding more errors than you remove. 01:30:26.500 |
where if your quantum bits are of sufficient quality 01:30:38.500 |
And so, it's just a very long slog of engineering, 01:30:50.520 |
And people are crossing, they're achieving milestones. 01:30:56.920 |
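The "sufficient quality" point is usually stated as the threshold theorem. A rough scaling often quoted for surface-code-style schemes (constants omitted; a sketch, not an exact formula for any particular device) is:

```latex
p_{\text{logical}} \;\sim\; A \left(\frac{p_{\text{physical}}}{p_{\text{threshold}}}\right)^{\lfloor (d+1)/2 \rfloor}
```

If the physical error rate is below threshold, increasing the code distance d suppresses logical errors exponentially; above threshold, adding more qubits only adds more errors, which is the regime he warns about.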
It's just, in general, the media always gets ahead 01:31:16.460 |
but I think there's other quests that can be done 01:31:22.540 |
- Well, let me just explore different beautiful ideas 01:31:33.920 |
"Asymptotically Limitless Quantum Energy Teleportation 01:31:42.460 |
can you explain what a qudit is, a generalization of a qubit? 01:31:55.020 |
can you have a notion of like an integer floating point 01:32:04.060 |
to later work on quantum analog digital conversion. 01:32:06.700 |
There it was interesting because during my master's, 01:32:15.540 |
and entanglement of the vacuum, right, of emptiness. 01:32:20.020 |
Emptiness has energy, which is very weird to say. 01:32:31.140 |
of quantum energy there is in the fluctuations. 01:32:35.140 |
And so, I was trying to hack the energy of the vacuum, 01:32:51.340 |
But just like, you know, in the stock market, 01:32:53.260 |
if you have a stock that's correlated over time, 01:33:02.660 |
If you communicated that information to another point, 01:33:05.500 |
you can infer what configuration the vacuum is in 01:33:19.880 |
because you could create pockets of negative energy density, 01:33:23.640 |
which is energy density that is below the vacuum, 01:33:28.560 |
because we don't understand how the vacuum gravitates. 01:33:40.080 |
is really a canvas made out of quantum entanglement. 01:33:50.320 |
of the vacuum locally increases quantum entanglement, 01:34:00.840 |
if you're into weird theories about UAPs and whatnot, 01:34:15.280 |
How would they go faster than the speed of light? 01:34:19.080 |
You would need a sort of negative energy density. 01:34:28.200 |
and hit the limits allowable by the laws of physics. 01:34:34.280 |
where you can't extract more than you've put in, obviously. 01:34:39.280 |
- But you're saying it's possible to teleport the energy 01:34:44.720 |
because you can extract information in one place 01:34:58.720 |
- Yeah, I mean, it's allowable by the laws of physics. 01:35:01.800 |
The reality, though, is that the correlations 01:35:04.080 |
decay with distance, and so you're gonna have 01:35:19.000 |
we talked about intelligence, and I forgot to ask. 01:35:21.840 |
What's your view on the other possible intelligences 01:35:29.280 |
Do you think there's other intelligent alien civilizations? 01:35:44.160 |
and we're trying to increase our capabilities 01:35:47.840 |
as fast as possible because we could get disrupted. 01:36:08.680 |
I mean, I've read what most people have read on the topic. 01:36:20.360 |
to instill a sense of urgency in developing technologies 01:36:34.840 |
or a foreign intelligence from a different planet. 01:36:51.720 |
- But to me, it's also an interesting challenge 01:36:54.720 |
and thought experiment on how to perceive intelligence. 01:36:59.080 |
This has to do with quantum mechanical systems. 01:37:08.400 |
say the aliens are here or they are directly observable 01:37:19.040 |
or don't have the right processing of the sensor data 01:37:23.140 |
to see the obvious intelligence that's all around us. 01:37:25.880 |
- Well, that's why we work on quantum sensors, right? 01:37:30.960 |
- Yeah, but there could be, so that's a good one, 01:37:34.920 |
that's not even in the currently known forces of physics. 01:37:45.880 |
And the most entertaining thought experiment to me 01:37:58.560 |
But there could be stuff that's just like obviously there. 01:38:01.420 |
And once you know it, it's like, oh, right, right. 01:38:09.400 |
from the laws of physics, we understand them, 01:38:11.400 |
is actually a fundamental part of the universe 01:38:15.240 |
and can be incorporated in physics, most understood. 01:38:25.640 |
virally self-replicating von Neumann-like probe system, 01:38:30.280 |
right, and it's possible that there are such systems that, 01:38:45.840 |
- But that wouldn't violate any of my priors, 01:38:49.340 |
but am I certain that these systems are here? 01:38:53.080 |
And it'd be difficult for me to say so, right? 01:38:56.200 |
I only have second-hand information about there being data. 01:39:09.200 |
Could aliens be the very thoughts that come into my head? 01:39:17.520 |
how do you know that, what's the origin of ideas 01:39:20.240 |
in your mind when an idea comes to your head? 01:39:33.600 |
it really felt like it was being beamed from space. 01:39:44.480 |
But you know, I think that alien life could take many forms, 01:39:56.840 |
much more broadly, to be less anthropocentric or biocentric. 01:40:01.840 |
- Just to linger a little longer on quantum mechanics, 01:40:08.080 |
what's, through all your explorations of quantum computing, 01:40:30.920 |
through this picture where a hologram of lesser dimension 01:40:41.040 |
to a bulk theory of quantum gravity of an extra dimension. 01:40:50.480 |
comes from trying to learn deep learning-like representations 01:40:55.480 |
of the boundary, and so at least part of my journey someday 01:41:01.040 |
on my bucket list is to apply quantum machine learning 01:41:14.240 |
and learn an emergent geometry from the boundary theory. 01:41:18.640 |
And so we can have a form of machine learning 01:41:26.960 |
which is still a holy grail that I would like to hit 01:41:35.000 |
- What do you think is going on with black holes 01:41:43.500 |
What do you think is going on with black holes? 01:41:46.160 |
- Black holes are really fascinating objects. 01:41:59.200 |
there's been sort of this black hole information paradox 01:42:13.160 |
that has been allegedly resolved in recent years 01:42:15.880 |
by a former peer of mine who's now a professor at Berkeley, 01:42:37.360 |
from the point of view of the observer on the outside, 01:42:45.880 |
And so everything that is falling to a black hole 01:42:55.400 |
And at some point, it gets so close to the horizon, 01:43:01.360 |
in which quantum effects and quantum fluctuations matter. 01:43:28.040 |
And that's how there's sort of mutual information 01:43:31.040 |
between the outgoing radiation and the infalling matter. 01:43:38.280 |
I think we're only just starting to put the pieces together. 01:43:43.280 |
- There's a few pothead-like questions I wanna ask you. 01:43:48.720 |
that there's a giant black hole at the center of our galaxy? 01:43:53.300 |
I just want to set up shop near it to fast forward, 01:44:17.560 |
that everything's destroyed inside a black hole. 01:44:19.800 |
Like all the information that makes up Guillaume 01:44:27.800 |
it's tied together in some deeply meaningful way. 01:44:44.800 |
Then that would mean that if we ascend the Kardashev scale 01:44:56.480 |
to transmit information to new universes we create. 01:45:04.480 |
And so we, even though our universe may reach a heat death, 01:45:20.400 |
To peer into that regime of higher energy physics. 01:45:25.120 |
- And maybe you can speak to the Kardashev scale 01:45:56.680 |
where we are producing the equivalent wattage 01:46:00.840 |
to all the energy that is incident on Earth from the Sun. 01:46:04.480 |
Kardashev type two would be harnessing all the energy 01:46:09.760 |
And I think type three is like the whole galaxy. 01:46:14.960 |
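Rough orders of magnitude for those levels, using the definitions he just gave (back-of-envelope numbers, not figures from the conversation):

```python
import math

solar_constant = 1361.0      # W/m^2 of sunlight at Earth's orbit
earth_radius = 6.371e6       # m
sun_luminosity = 3.8e26      # W, total output of the Sun

type_1 = solar_constant * math.pi * earth_radius**2   # sunlight intercepted by Earth
type_2 = sun_luminosity                               # harnessing the whole Sun
type_3 = 1e37                                         # ~a galaxy's worth (very rough)

print(f"Type I   ~ {type_1:.1e} W")   # ~1.7e17 W
print(f"Type II  ~ {type_2:.1e} W")
print(f"Type III ~ {type_3:.1e} W")
```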
- Yeah, and then some people have some crazy type four 01:46:17.560 |
and five, but I don't know if I believe in those. 01:46:19.640 |
But to me, it seems like from the first principles 01:46:25.080 |
of thermodynamics that, again, there's this concept 01:46:28.920 |
of thermodynamic driven dissipative adaptation 01:46:38.080 |
because we have this sort of energetic drive from the Sun. 01:46:42.760 |
We have incident energy and life evolved on Earth 01:46:50.000 |
that free energy to maintain itself and grow. 01:47:03.120 |
and we kind of have a responsibility to do so 01:47:06.160 |
because that's the process that brought us here. 01:47:08.760 |
So, we don't even know what it has in store for us 01:47:23.440 |
In a Substack blog post titled "What the Fuck is EAC?" 01:47:23.440 |
"civilization goals that are all interdependent." 01:47:37.940 |
And the four goals are increase the amount of energy 01:47:46.520 |
In the short term, this almost certainly means 01:47:51.640 |
Increase human flourishing via pro-population growth policies 01:47:59.320 |
the single greatest force multiplier in human history. 01:48:06.120 |
so that humanity can spread beyond the Earth. 01:48:20.520 |
- The goal is for the human techno-capital memetic machine 01:48:28.440 |
and to hyperstitiously engineer its own growth. 01:48:38.040 |
you have capital, and then you have memes, information. 01:48:41.520 |
And all of those systems are coupled with one another. 01:48:55.400 |
And our goal was to have a sort of viral optimistic movement 01:49:08.640 |
And we simply want to lean into the natural tendencies 01:49:19.900 |
The EAC is literally a memetic optimism virus 01:49:30.780 |
So you do want it to be a virus to maximize the spread. 01:49:37.640 |
therefore the optimism will incentivize its growth. 01:49:51.200 |
from which you can have much more opinionated forks. 01:49:59.800 |
What got us here is this adaptation of the whole system, 01:50:07.520 |
and that process is good and we should keep it going. 01:50:16.760 |
that we maintain this malleability and adaptability? 01:50:24.160 |
and maintaining free speech, freedom of thought, 01:50:37.960 |
on the space of technologies, ideas, and whatnot 01:50:44.200 |
And so ultimately, there's been quite a few forks. 01:50:49.500 |
Some are just memes, but some are more serious, right? 01:51:00.980 |
of the unique characteristic of that fork from Vitalik? 01:51:05.460 |
- I would say that it's trying to find a middle ground 01:51:20.480 |
was important to sort of shift the dynamic range of opinions. 01:51:24.600 |
And it's like the balance between centralization 01:51:29.520 |
The real optimum's always somewhere in the middle, right? 01:51:32.840 |
But for EAC, we're pushing for entropy, novelty, 01:51:48.040 |
adding constraints, adding too many regulations, 01:51:53.960 |
we're trying to bring balance to the force, right? 01:52:00.160 |
- Balance to the force of human civilization, yeah. 01:52:04.600 |
versus the entropic force that makes us explore, right? 01:52:09.120 |
Systems are optimal when they're at the edge of criticality 01:52:15.800 |
Between constraints, energy minimization, and entropy. 01:52:20.800 |
Systems want to equilibrate, balance these two things. 01:52:24.460 |
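That energy-versus-entropy balance is the same trade-off that picks out equilibrium in statistical mechanics (a standard sketch, not the guest's own formalism): minimizing free energy weighs the two terms against each other, with temperature setting the exchange rate.

```latex
F[p] \;=\; \langle E \rangle_p \;-\; T\,S[p]
\qquad\Longrightarrow\qquad
p^*(x) \;=\; \frac{e^{-E(x)/k_B T}}{Z},\qquad Z = \sum_x e^{-E(x)/k_B T}
```

Low temperature favors the constraint/energy-minimization side; high temperature favors the entropy/exploration side; criticality sits where neither term dominates.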
And so I thought that the balance was lacking. 01:52:27.600 |
And so we created this movement to bring balance. 01:52:31.680 |
- Well, I like how, I like the sort of visual 01:52:35.120 |
of the landscape of ideas evolving through forks. 01:52:39.080 |
So kind of thinking on the other part of history, 01:52:43.820 |
thinking of Marxism as the original repository, 01:52:52.200 |
and then Maoism as a fork of Marxism and communism. 01:53:02.560 |
- Thinking of culture almost like code, right? 01:53:12.760 |
is basically its cultural framework, what it believes, right? 01:53:23.940 |
from what has worked in the sort of machine of software 01:53:33.620 |
And our goal is to not say you should live your life 01:53:41.040 |
where people are always searching over subcultures 01:53:46.920 |
And I think creating this malleability of culture 01:53:50.320 |
is super important for us to converge onto the cultures 01:53:53.840 |
and the heuristics about how to live one's life 01:54:06.120 |
People don't feel like they belong to any one group. 01:54:24.680 |
which is a decelerative that is kind of the overall pattern 01:54:38.200 |
not only one movement, but many, many variants. 01:55:09.560 |
So you have individuals that are able to think freely 01:55:23.140 |
that leads to like mass hypnosis, mass hysteria, 01:55:28.300 |
whenever there's a sexy idea that captures our minds. 01:55:46.500 |
- Well, first of all, it's fun, it's rebellious, right? 01:55:54.300 |
there's this concept of sort of meta-irony, right? 01:56:03.460 |
and it's much more playful and much more fun, right? 01:56:06.540 |
Like, for example, we talk about thermodynamics 01:56:14.460 |
but there's no like ceremony and robes and whatnot. 01:56:20.020 |
But ultimately, yeah, I mean, I totally agree 01:56:29.060 |
So they naturally try to agree with their neighbors 01:56:54.820 |
but the whole point is that we're not trying to have 01:57:05.120 |
and there can be many clusters and many islands. 01:57:07.220 |
And I shouldn't be in control of it in any way. 01:57:16.140 |
I just put out tweets and certain blog posts, 01:57:29.580 |
a sort of deterritorialization in the space of ideas 01:57:38.900 |
And so cults, usually, they don't allow people 01:58:08.980 |
But there's a kind of embracing of the absurdity 01:58:17.860 |
but at the same time, it can also decrease the quality 01:58:26.620 |
So initially, I think, what allowed us to grow 01:58:30.220 |
under the radar was because it was camouflaged 01:58:35.900 |
We would sneak in deep truths within a package of humor 01:58:40.900 |
and memes and what are called shitposts, right? 01:58:45.520 |
And I think that was purposefully a sort of camouflage 01:58:51.780 |
against those that seek status and do not want to, 01:59:10.820 |
And so, that allowed us to grow pretty rapidly 01:59:16.340 |
But of course, essentially, people get steered, 01:59:21.340 |
their notion of the truth comes from the data they see, 01:59:34.860 |
And really what we've been doing is sort of engineering 01:59:39.860 |
what we call high memetic fitness packets of information 01:59:44.740 |
so that they can spread effectively and carry a message, 01:59:48.420 |
So it's kind of a vector to spread the message. 01:59:56.140 |
that are optimal for today's algorithmically amplified 02:00:02.540 |
But I think we're reaching the point of scale 02:00:06.500 |
where we can have serious debates and serious conversations. 02:00:10.260 |
And that's why we're considering doing a bunch of debates 02:00:15.260 |
and having more serious long-form discussions. 02:00:18.060 |
'Cause I don't think that the timeline is optimal 02:00:21.620 |
for sort of very serious, thoughtful discussions. 02:00:24.860 |
You get rewarded for sort of polarization, right? 02:00:33.020 |
that is literally trying to polarize the tech ecosystem, 02:00:37.660 |
at the end of the day, it's so that we can have 02:00:42.680 |
- I mean, that's kind of what I try to do with this podcast, 02:00:49.220 |
But there is a degree to which absurdity is fully embraced. 02:01:01.340 |
So, first of all, I should say that I just very recently 02:01:20.820 |
what do you think of that particular individual 02:01:25.640 |
- Yeah, I mean, I think Jeff is really great. 02:01:29.420 |
I mean, he's built one of the most epic companies 02:01:56.620 |
letting the system compound and keep improving. 02:02:03.060 |
some of the most capital in robotics out there. 02:02:10.260 |
kind of enabled the sort of tech boom we've seen today 02:02:16.700 |
I guess, myself and all of our friends to some extent. 02:02:20.940 |
And so, I think we can all be grateful to Jeff 02:02:24.780 |
and he's one of the great entrepreneurs out there, 02:02:36.300 |
is trying to make humans a multi-planetary species, 02:03:00.840 |
can think on a multi-decadal or multi-century time scale. 02:03:10.420 |
that they unlock the ability to allocate capital 02:03:20.860 |
putting all this capital towards getting us to Mars. 02:03:27.020 |
and I think he wants to build O'Neill cylinders 02:03:38.500 |
I know this is a controversial statement sometimes, 02:03:46.300 |
Like, if you've allocated capital efficiently, 02:03:56.500 |
you know how to allocate capital more efficiently, 02:03:59.540 |
which is in contrast to politicians that get elected 02:04:08.020 |
of allocating taxpayer capital most efficiently. 02:04:15.860 |
over, say, giving all our money to the government 02:04:18.460 |
and letting them figure out how to allocate it. 02:04:24.900 |
and it's a popular meme to criticize billionaires, 02:04:30.580 |
Why do you think there's quite a widespread criticism 02:04:51.620 |
and realizing they have much more agency than they think, 02:04:54.740 |
they'd rather have this sort of victim mindset. 02:05:01.680 |
And the successful players clearly must be evil 02:05:15.720 |
and make them realize how the techno capital machine works 02:05:25.960 |
you capture some of the value you create for the world. 02:05:27.740 |
And that sort of positive sum mindset shift is so potent. 02:05:57.940 |
By the way, thank you so much for the Red Bull. 02:06:10.460 |
Just building in general means having agency, 02:06:14.400 |
trying to change the world by creating, let's say, 02:06:18.740 |
a company which is a self-sustaining organism 02:06:27.580 |
To us, that's the way to achieve change in the world 02:06:39.820 |
their function can no longer be accomplished. 02:06:42.100 |
You're kind of deforming the market artificially 02:06:45.500 |
compared to sort of subverting or coercing the market 02:07:07.120 |
we're gonna manage our way out of a climate crisis, 02:07:12.780 |
that is self-sustaining, profitable, and growing, 02:07:16.060 |
and we're gonna innovate our way out of this dilemma, right? 02:07:19.520 |
And we're trying to get people to do the latter 02:07:32.160 |
but he's also somebody who has for a long time 02:07:35.220 |
warned about the dangers, the potential dangers, 02:07:39.180 |
existential risks of artificial intelligence. 02:07:45.020 |
- It is somewhat because he's very much against regulation 02:08:07.500 |
over the cultural priors that you can embed in these LLMs 02:08:12.500 |
that then, as LLMs now become the source of truth 02:08:17.820 |
for people, then you can shape the culture of the people, 02:08:21.020 |
and so you can control people by controlling LLMs. 02:08:23.940 |
And he saw that, just like it was the case for social media, 02:08:28.580 |
if you shape the function of information propagation, 02:08:36.040 |
So at least, I think we're very aligned there 02:08:45.820 |
I'd love to talk to him to understand sort of his thinking 02:08:49.880 |
about how to make, how to advance AI going forwards. 02:08:54.880 |
I mean, he's also hedging his bets, I would say, 02:09:04.720 |
So look at the actions, not just the words, but-- 02:09:09.720 |
- Well, I mean, there's some degree where being concerned, 02:09:17.120 |
being concerned about threats all around us is a motivator. 02:09:22.400 |
I operate much better when there's a deadline, 02:09:26.600 |
Like, and I, for myself, create artificial things. 02:09:29.080 |
Like, I wanna create in myself this kind of anxiety 02:09:38.700 |
because creating AI that's aligned with humans 02:09:49.380 |
It just seems to be a very powerful psychological formulation 02:10:07.200 |
And I think that's what he's trying to do with xAI. 02:10:15.620 |
let's say, the open source ecosystem from thriving, right, 02:10:25.100 |
claiming that open source LLMs are dual-use technologies 02:10:36.760 |
And I think that extra friction will dissuade a lot 02:10:42.560 |
hackers that could later become the researchers 02:10:45.600 |
that make key discoveries that push us forward, right, 02:10:52.720 |
And so I think I just wanna maintain ubiquity 02:11:04.620 |
where only a few players get to play the game. 02:11:07.780 |
- I mean, so the EAC movement is often sort of caricatured 02:11:11.700 |
to mean sort of progress and innovation at all costs. 02:11:22.860 |
You just build cool shit as fast as possible. 02:11:26.320 |
Stay up all night with a Diet Coke, whatever it takes. 02:11:31.240 |
I think, I guess, I don't know if there's a question 02:11:37.440 |
and what you've seen the different formulations 02:11:42.400 |
- I think, again, I think if there was no one working on it, 02:11:50.840 |
I think, again, our goal is to sort of bring balance 02:11:54.020 |
and obviously a sense of urgency is a useful tool, right, 02:12:03.900 |
and gives us energy to work late into the night. 02:12:12.520 |
At the end of the day, it's like, what am I contributing to? 02:12:14.540 |
I'm contributing to the growth of this beautiful machine 02:12:25.400 |
- So you're saying AI safety is important to you, 02:13:08.020 |
there's no amount of sort of centralization of control 02:13:14.780 |
There's always more nines of p(safety) 02:13:23.300 |
Oh, please give us full access to everything you do, 02:13:27.980 |
And frankly, those that are proponents of AI safety 02:13:32.100 |
have proposed like having a global panopticon, right? 02:13:41.260 |
And to me, that just opens up the door wide open 02:13:44.020 |
for a sort of Big Brother 1984-like scenario, 02:13:49.580 |
- 'Cause we know, we have some examples throughout history 02:13:55.960 |
- You mentioned you founded a company, Extropic, 02:13:58.940 |
that recently announced a $14.1 million seed round. 02:14:05.660 |
You're talking about a lot of interesting physics things. 02:14:14.100 |
originally we weren't gonna announce last week, 02:14:21.740 |
So we had to disclose roughly what we were doing, 02:14:24.820 |
but really Extropic was born from my dissatisfaction 02:14:49.980 |
But ultimately our greatest enemy was this noise, 02:14:59.820 |
out of the system to maintain this pristine environment 02:15:13.160 |
as generative AI is sort of eating the world, 02:15:17.900 |
more and more of the world's computational workloads 02:15:47.180 |
that are inspired by out-of-equilibrium thermodynamics 02:15:55.780 |
to do machine learning as a physical process. 02:16:05.420 |
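Extropic's actual hardware and algorithms aren't specified in this conversation, so the following is only a toy, software-level sketch of the general idea of treating noise as a resource rather than an enemy: overdamped Langevin dynamics, where the injected thermal noise is exactly what lets the system explore and equilibrate to a Boltzmann distribution over an energy landscape.

```python
import numpy as np

def energy(x):
    # Toy energy landscape: a double well with minima at x = -1 and x = +1.
    return (x**2 - 1.0)**2

def grad_energy(x):
    return 4.0 * x * (x**2 - 1.0)

def langevin_sample(steps=20000, dt=1e-3, temperature=0.2, seed=0):
    """Overdamped Langevin dynamics: drift down the energy gradient plus thermal noise.
    The stationary distribution is proportional to exp(-energy(x) / temperature),
    so the noise does useful computational work (sampling) instead of just corrupting the state."""
    rng = np.random.default_rng(seed)
    x, samples = 0.0, []
    for _ in range(steps):
        noise = rng.normal() * np.sqrt(2.0 * temperature * dt)
        x += -grad_energy(x) * dt + noise
        samples.append(x)
    return np.array(samples)

if __name__ == "__main__":
    s = langevin_sample()
    # Most of the probability mass should sit near the two wells at x = -1 and x = +1.
    print("fraction of samples near |x| = 1:", np.mean(np.abs(np.abs(s) - 1.0) < 0.3))
```

In an analog, physics-based computer, the role of the simulated `noise` term above would be played by the device's own thermal fluctuations, which is one way to read the shift described here from fighting noise to harnessing it.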
Is that hardware, is it software, is it both? 02:16:27.020 |
We have some of the best quantum computer architects, 02:16:31.680 |
those that have designed IBM's and AWS's systems. 02:16:44.960 |
Well, actually, that's nothing new around TensorFlow Quantum. 02:16:47.680 |
What lessons have you learned from TensorFlow Quantum? 02:16:55.620 |
to create, essentially, what, like a software API 02:17:01.440 |
- Right, I mean, that was a challenge to build, 02:17:31.960 |
that are differentiable in quantum computing. 02:17:37.840 |
calls differentiable programming software 2.0, right? 02:17:41.600 |
It's like gradient descent is a better programmer than you. 02:17:51.160 |
And so, which quantum programs should you run? 02:17:54.480 |
Well, just let gradient descent find those programs instead. 02:17:58.200 |
And so, we built sort of the first infrastructure 02:18:01.120 |
to not only run differentiable quantum programs, 02:18:05.720 |
but combine them as part of broader deep learning graphs, 02:18:16.840 |
with what are called quantum neural networks. 02:18:19.660 |
And ultimately, it was a very cross-disciplinary effort. 02:18:26.220 |
We had to invent all sorts of ways to differentiate, 02:18:29.360 |
to back propagate through the graph, the hybrid graph. 02:18:35.600 |
the way to program matter and to program physics is 02:18:39.400 |
by differentiating through control parameters. 02:18:43.600 |
If you have parameters that affect the physics 02:18:50.840 |
you can optimize the system to accomplish a task, 02:18:56.980 |
And that's a very sort of universal meta framework 02:19:07.900 |
make those parameters differentiable, and then optimize. 02:19:13.940 |
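As a minimal, library-free sketch of "make the control parameters differentiable, then optimize" (an illustration of the idea only, not the TensorFlow Quantum API): a single-qubit rotation RY(theta), with the gradient of an expectation value computed via the standard parameter-shift rule and fed to gradient descent.

```python
import numpy as np

def ry(theta):
    # Single-qubit Y-rotation gate.
    c, s = np.cos(theta / 2.0), np.sin(theta / 2.0)
    return np.array([[c, -s], [s, c]])

def expect_z(theta):
    """Expectation of Pauli-Z after applying RY(theta) to |0>; equals cos(theta)."""
    psi = ry(theta) @ np.array([1.0, 0.0])
    z = np.array([[1.0, 0.0], [0.0, -1.0]])
    return float(psi @ z @ psi)

def parameter_shift_grad(theta):
    """Gradient of <Z> with respect to theta via the parameter-shift rule,
    which is exact for gates generated by a single Pauli operator (like RY)."""
    return 0.5 * (expect_z(theta + np.pi / 2) - expect_z(theta - np.pi / 2))

theta, lr = 0.1, 0.4
for _ in range(50):
    theta -= lr * parameter_shift_grad(theta)  # minimize <Z>, i.e. steer the state toward |1>

print(round(theta, 3), round(expect_z(theta), 3))  # theta -> ~pi, <Z> -> ~-1
```

Frameworks like TensorFlow Quantum generalize this pattern: parameterized circuits sit inside larger differentiable graphs, and gradients flow through the quantum parts alongside the classical ones.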
So, is there some more practical engineering lessons 02:19:17.580 |
from TensorFlow Quantum, just organizationally too, 02:19:22.300 |
like the humans involved, and how to get to a product, 02:19:25.440 |
how to create good documentation, how to have, 02:19:29.120 |
I don't know, all of these little subtle things 02:19:34.240 |
- I think like working across disciplinary boundaries 02:19:39.240 |
is always a challenge, and you have to be extremely patient 02:19:44.320 |
I learned a lot of software engineering through the process. 02:19:47.720 |
My colleagues learned a lot of quantum physics, 02:19:59.880 |
that are passionate and trust each other in a room, 02:20:02.880 |
and you have a small team, and you teach each other 02:20:06.320 |
your specialties, suddenly you're kind of forming 02:20:12.420 |
and something special comes out of that, right? 02:20:15.040 |
It's like combining genes, but for your knowledge bases. 02:20:18.720 |
And sometimes special products come out of that. 02:20:21.680 |
And so I think like even though it's very high friction 02:20:24.800 |
initially to work in an interdisciplinary team, 02:20:28.400 |
I think the product at the end of the day is worth it. 02:20:31.200 |
And so learned a lot trying to bridge the gap there, 02:20:34.380 |
and I mean, it's still a challenge to this day. 02:20:43.120 |
and somehow we have to make them talk to one another, right? 02:20:47.040 |
- Is there a magic, is there some science and art 02:20:55.440 |
- Yeah, it's really hard to pinpoint that je ne sais quoi. 02:21:01.820 |
- I didn't know you speak French, that's very nice. 02:21:11.680 |
I thought you were just doing that for the cred. 02:21:15.360 |
- No, no, I'm truly French-Canadian from Montreal. 02:21:27.840 |
because they're gonna have to get out of their comfort zone. 02:21:34.120 |
and very quickly get comfortable with them, right? 02:21:38.200 |
And so that's sort of what we look for when we hire. 02:21:42.120 |
We can't hire people that are just optimizing 02:21:46.640 |
this subsystem for the past three or four years. 02:21:59.040 |
'cause if you're pioneering a new approach from scratch, 02:22:02.320 |
there is no textbook, there's no reference, it's just us. 02:22:11.740 |
we have to share knowledge bases, collaborate, 02:22:19.120 |
And so people that are used to just getting prescribed 02:22:33.000 |
you're trying to build the physical substrate 02:22:37.680 |
What's the difference between that and the AGI itself? 02:22:42.540 |
So is it possible that in the halls of your company, 02:22:48.700 |
Or will AGI just be using this as a substrate? 02:22:51.860 |
- I think our goal is to both run human-like AI, 02:23:01.980 |
- We think that the future is actually physics-based AI, 02:23:10.660 |
So you can imagine I have a sort of world modeling engine 02:23:17.100 |
Physics-based AI is better at representing the world 02:23:19.440 |
at all scales, 'cause it can be quantum mechanical, 02:23:33.960 |
in the ways you learn representations of nature, 02:23:35.780 |
you can have much more accurate representations of nature. 02:23:38.180 |
So you can have very accurate world models at all scales. 02:23:45.700 |
and then you have the sort of anthropomorphic AI 02:23:57.060 |
And to us, that joint system of a physics-based AI 02:24:00.380 |
and an anthropomorphic AI is the closest thing 02:24:03.900 |
to a fully general artificially intelligent system. 02:24:07.620 |
- So you can get closer to truth by grounding 02:24:13.900 |
but you can also still have an anthropomorphic interface 02:24:17.220 |
to us humans that like to talk to other humans, 02:24:30.860 |
is that they're not, they're good bullshitters. 02:24:34.980 |
They're not really grounded to truth necessarily. 02:24:42.220 |
You wouldn't try to extrapolate the stock market 02:24:45.660 |
with an LLM trained on text from the internet, right? 02:24:50.620 |
It's not gonna model its priors or its uncertainties 02:24:58.660 |
to complement sort of this text extrapolation AI, yeah. 02:25:09.900 |
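To make the stock-market point concrete, here is a toy sketch (my own illustration, with made-up numbers) of what "modeling your uncertainties" means: a stochastic generative model gives you a distribution over futures, not just a single extrapolated guess.

```python
import numpy as np

# Geometric Brownian motion as a stand-in stochastic model of a price series.
rng = np.random.default_rng(42)
s0, mu, sigma = 100.0, 0.05, 0.2          # start price, drift, volatility (illustrative values)
dt, horizon, n_paths = 1 / 252, 252, 10_000

# Simulate many possible one-year paths and look at the spread of terminal prices.
z = rng.normal(size=(n_paths, horizon))
log_returns = (mu - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z
terminal = s0 * np.exp(log_returns.sum(axis=1))

point_forecast = s0 * np.exp(mu)          # a single "best guess" extrapolation
lo, hi = np.percentile(terminal, [5, 95])
print(f"point forecast: {point_forecast:.1f}, 90% interval: [{lo:.1f}, {hi:.1f}]")
```

The interval is the part a plain next-token extrapolator does not give you calibrated access to, which is the complaint being made about LLMs here.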
- I don't know if I believe in a finite time singularity 02:25:23.420 |
we have the limits of physics restricting our ability 02:25:27.340 |
to grow, so obviously we can't fully diverge in finite time. 02:25:36.700 |
I think a lot of people on the other side of the aisle 02:25:46.180 |
and a sudden, like, foom, like suddenly AI is gonna grok 02:25:50.380 |
how to, you know, manipulate matter at the nanoscale 02:25:55.100 |
And having worked, you know, for nearly a decade 02:26:06.940 |
that's very accurate and costly, or nature itself. 02:26:15.620 |
There's a sort of minimal cost computationally 02:26:19.820 |
and thermodynamically to acquiring information 02:26:22.460 |
about the world in order to be able to predict 02:26:24.300 |
and control it and that keeps things in check. 02:26:27.460 |
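No specific bound is named in the conversation, but one well-established example of such a minimal thermodynamic cost of information processing is Landauer's limit on erasing a single bit:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature, K

# Landauer's principle: erasing one bit dissipates at least k_B * T * ln(2) of energy.
E_min = k_B * T * math.log(2)
print(f"Landauer bound at {T:.0f} K: {E_min:.2e} J per bit")  # ~2.9e-21 J
```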
- It's funny you mentioned the other side of the aisle. 02:26:30.020 |
So in the poll I posted about p(doom) yesterday, 02:26:37.900 |
between people think it's very likely and very unlikely. 02:26:44.900 |
the actual Republicans versus Democrats division, 02:26:56.060 |
is not right wing or left wing fundamentally. 02:26:58.620 |
It's more like up versus down in terms of the scale. 02:27:05.220 |
- But it seems to be like there is a sort of case 02:27:09.780 |
of alignment of the existing political parties 02:27:12.220 |
where those that are for more centralization of power 02:27:17.500 |
control and more regulations are aligning with sort of, 02:27:23.940 |
because that sort of instilling fear in people 02:27:28.620 |
is a great way for them to give up more control 02:27:33.020 |
But fundamentally, we're not left versus right. 02:27:36.260 |
I think there's, we've done polls of people's alignment 02:27:45.780 |
It's not just centralization versus decentralization. 02:27:48.180 |
It's kind of, do we go, it's like tech progressivism 02:27:54.060 |
- So EAC, as a movement, is often formulated 02:28:07.500 |
What's interesting, insightful to you about them? 02:28:15.460 |
- Right, I think like people trying to do good 02:28:23.100 |
- We should actually say, and sorry to interrupt. 02:28:55.860 |
- We're both trying to do good to some extent. 02:29:01.540 |
for which loss function we should use, right? 02:29:04.800 |
- Their loss function is sort of hedons, right? 02:29:19.860 |
But to us, that seems like that loss function 02:29:25.420 |
You can start minimizing shrimp farm pain, right? 02:29:43.340 |
And you feel good on the short term timescale 02:29:48.100 |
But on long term timescale, it causes decay and death, right? 02:29:54.060 |
Whereas sort of EAC measuring progress of civilization, 02:29:59.060 |
not in terms of a subjective loss function like hedonism, 02:30:08.180 |
a quantity that cannot be gamed that is physical energy, 02:30:16.900 |
If you did it in terms of like GDP or a currency, 02:30:20.580 |
that's pinned to a certain value that's moving, right? 02:30:23.180 |
And so that's not a good way to measure our progress. 02:30:26.880 |
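One standard way to turn "energy harnessed" into a single, hard-to-game number is Sagan's interpolation of the Kardashev scale (the formula is standard; the power figure below is a rough public order-of-magnitude estimate, not from the conversation):

```python
import math

def kardashev(power_watts):
    """Sagan's interpolation: K = (log10(P) - 6) / 10, with P in watts.
    Type I corresponds to ~1e16 W, Type II to ~1e26 W."""
    return (math.log10(power_watts) - 6.0) / 10.0

print(f"K for ~2e13 W (roughly current human civilization): {kardashev(2e13):.2f}")
```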
And so, but the thing is we're both trying to make progress 02:30:31.880 |
and ensure humanity flourishes and gets to grow. 02:30:41.260 |
- Is there a degree, maybe you can educate me, correct me. 02:30:50.020 |
trying to reduce all of the human civilization, 02:30:55.820 |
Is there a degree that we should be skeptical 02:31:14.800 |
it's not stiff, it's kind of an average of averages, right? 02:31:18.660 |
It's like distributions of states in the future 02:31:28.800 |
It's not like, we're not on like stiff rails, right? 02:31:31.800 |
It's just a statistical statement about the future. 02:31:41.380 |
but it's not necessarily an option to obey it, right? 02:31:44.640 |
And some people try to test that, and that doesn't go so well. 02:32:00.280 |
and chart a path forward given this fundamental truth. 02:32:10.440 |
with the lack of information with narratives. 02:32:21.540 |
And humans tend to use that to further their own means. 02:32:28.760 |
So it's always, whenever there's an equation, 02:32:34.840 |
of the universe, humans will do what humans do. 02:32:38.920 |
And they try to use the narrative of doing good 02:32:53.960 |
that we should be skeptical about in all movements. 02:33:01.780 |
- Do you have an understanding of what might, 02:33:08.780 |
with effective altruism that might also go wrong 02:33:18.240 |
I think it provided initially a sense of community 02:33:21.720 |
for engineers and intellectuals and rationalists 02:33:25.880 |
And it seems like the community was very healthy, 02:33:28.800 |
but then, you know, they formed all sorts of organizations 02:33:32.000 |
and started routing capital and having actual power, right? 02:33:43.200 |
I mean, they're literally controlling the board of OpenAI, 02:33:48.880 |
I think they all have some control over that too. 02:33:54.000 |
the assumption of EAC is more like capitalism 02:33:56.520 |
is that every agent, organism, and meta-organism 02:34:02.040 |
And we should maintain sort of adversarial equilibrium 02:34:05.360 |
or adversarial competition to keep each other in check 02:34:09.980 |
I think that, yeah, ultimately it was the perfect cover 02:34:18.280 |
And unfortunately, sometimes that corrupts people over time. 02:34:42.920 |
I would say my favorite days are 12 p.m. to 4 a.m. 02:34:47.920 |
And I would have meetings in the early afternoon, 02:34:53.440 |
usually external meetings, some internal meetings. 02:34:59.040 |
with the outside world, whether it's customers 02:35:00.720 |
or investors or interviewing potential candidates. 02:35:04.500 |
And usually I'll have ketones, exogenous ketones. 02:35:16.560 |
- I've done keto before for football and whatnot. 02:35:21.240 |
But I like to have a meal after part of my day is done. 02:35:31.040 |
- You do the social interactions earlier in the day 02:35:45.720 |
'Cause then when you eat, you're actually allocating 02:35:47.840 |
some of your energy that could be going to neural energy 02:35:53.000 |
After I eat, maybe I take a break an hour or so, 02:35:57.500 |
And then usually it's like ideally one meal a day, 02:36:18.200 |
And there I'll just stay up late into the night 02:36:22.440 |
and work with engineers on very technical problems. 02:36:40.280 |
And I think Demis Hassabis has a similar work day 02:36:47.240 |
So I think that's definitely inspired my work day. 02:36:49.840 |
But yeah, I started this work day when I was at Google 02:36:54.560 |
and had to manage a bit of the product during the day 02:36:57.400 |
and have meetings and then do technical work at night. 02:37:03.940 |
You said football, you used to play football? 02:37:10.600 |
And then I was into powerlifting for a while. 02:37:13.940 |
So when I was studying mathematics in grad school, 02:37:17.240 |
I would just do math and lift, take caffeine, 02:37:25.760 |
But it's really interesting how in powerlifting 02:37:32.760 |
and you're trying to engineer neuroplasticity 02:37:36.680 |
And you have all sorts of brain-derived neurotrophic factors 02:37:44.040 |
So it's funny to me how I was trying to engineer 02:37:47.080 |
neural adaptation in my nervous system more broadly, 02:37:53.360 |
not just my brain, while learning mathematics. 02:38:08.160 |
let's say caffeine or some cholinergic supplement 02:38:13.840 |
I should chat with Andrew Huberman at some point. 02:38:21.140 |
you can try to input more tokens into your brain, 02:38:25.080 |
if you will, and you can try to increase the learning rate 02:38:27.520 |
so that you can learn much faster on a shorter timescale. 02:38:36.800 |
about what you're doing, you're gonna learn faster, 02:38:44.440 |
And so I advise people to follow their curiosity 02:38:47.200 |
and don't respect the boundaries of certain fields 02:38:50.320 |
or what you've been allocated in terms of lane 02:38:57.320 |
and try to acquire and compress as much information 02:39:09.880 |
is like tricking yourself that you care about a thing. 02:39:13.360 |
- And then you start to really care about it. 02:39:22.120 |
- Right, and so at least part of my character, 02:39:30.480 |
- Yeah, just hype, but I'm like hyping myself up, 02:39:34.360 |
And it's just when I'm trying to get really hyped up 02:39:36.600 |
and in like an altered state of consciousness 02:39:38.680 |
where I'm like ultra focused, in the flow wired, 02:39:42.120 |
trying to invent something that's never existed, 02:39:44.040 |
I need to get to like unreal levels of like excitement. 02:39:52.320 |
that you can unlock with like higher levels of adrenaline 02:39:56.720 |
And I mean, I've learned that in powerlifting 02:39:59.480 |
that actually you can engineer a mental switch 02:40:07.920 |
maybe you have a prompt like a certain song or some music 02:40:16.560 |
And I've engineered that switch through years of lifting. 02:40:23.980 |
if you don't have that switch to be wired in, you might die. 02:40:30.160 |
And that sort of skill I've carried over to like research. 02:40:37.240 |
somehow I just reach another level of neural performance. 02:40:40.360 |
- So Beff Jezos is your sort of embodiment representation 02:40:46.520 |
It's your productivity hulk that you just turn on. 02:40:50.880 |
What have you learned about the nature of identity 02:40:58.200 |
to be able to put on those two hats so explicitly. 02:41:01.320 |
- I think it was interesting in the early days. 02:41:06.560 |
Like, oh yeah, this is a character, I'm Guillaume, 02:41:13.180 |
and then I extrapolate them to a bit more extreme. 02:41:16.160 |
But over time, it's kind of like both identities 02:41:20.600 |
were starting to merge mentally and people were like, 02:41:39.400 |
- Would you recommend people sort of have an alt? 02:41:43.800 |
- Like young people, would you recommend them 02:42:00.440 |
because you're an anon account with, I don't know, 02:42:09.320 |
in the era of everything being under your main name, 02:42:15.600 |
explore ideas that aren't fully formed, right? 02:42:41.320 |
if it was just through open conversation with real names. 02:43:04.000 |
That's a fun, interesting way to really explore 02:43:13.600 |
And taking that across the span of several days, 02:43:22.880 |
That's the Nietzsche gaze long into the abyss. 02:43:34.920 |
Yeah, you wake up with a shaved head one day. 02:43:40.440 |
So you've mentioned quite a bit of advice already, 02:43:44.480 |
but what advice would you give to young people 02:43:46.760 |
of how to, in this interesting world we're in, 02:43:57.080 |
- I think to me, the reason I went to theoretical physics 02:44:01.960 |
was that I had to learn the base of the stack 02:44:10.240 |
And to me, that was the foundation upon which 02:44:14.040 |
then I later built engineering skills and other skills. 02:44:24.560 |
but certain things like fundamental mathematics 02:44:30.600 |
and knowledge about complex systems and adaptive systems, 02:44:37.640 |
And so not everybody has to study mathematics, 02:44:40.600 |
but I think it's really a huge cognitive unlock 02:44:44.520 |
to learn math and some physics and engineering. 02:44:48.480 |
- Get as close to the base of the stack as possible. 02:44:51.400 |
- Yeah, that's right, 'cause the base of the stack 02:44:55.560 |
your knowledge might become not as relevant in a few years. 02:44:58.160 |
Of course, there's a sort of transfer learning you can do, 02:45:00.360 |
but then you have to always transfer learn constantly. 02:45:04.440 |
- I guess the closer you are to the base of the stack, 02:45:06.320 |
the easier the transfer learning, the shorter the jump. 02:45:12.120 |
And you'd be surprised once you've learned concepts 02:45:18.480 |
how they can carry over to understanding other systems 02:45:26.200 |
the principles and tenets post that was based on physics, 02:45:42.640 |
- If you look at your one cog in the machine, 02:45:52.320 |
do you think mortality is a feature or a bug? 02:46:03.960 |
dissipative adaptation, there's the word dissipation. 02:46:08.680 |
Dissipation is important, death is important, right? 02:46:26.480 |
I think that we should probably extend our lifespan 02:46:34.240 |
'cause the world is more and more complex, right? 02:46:41.200 |
And if we have a finite window of higher neuroplasticity, 02:46:47.960 |
in how much we can understand about our world. 02:46:54.880 |
I think it's important if you have like a king 02:46:57.640 |
that would never die, that would be a problem, right? 02:47:00.360 |
Like the system wouldn't be constantly adapting, right? 02:47:05.280 |
You need novelty, you need youth, you need disruption 02:47:08.920 |
to make sure the system's always adapting and malleable. 02:47:17.240 |
if you have, let's say, corporations that are there forever 02:47:19.560 |
and they have the monopoly, they get calcified, 02:47:21.920 |
they become not as optimal, not as high fitness 02:47:25.200 |
in a changing, time-varying landscape, right? 02:47:28.560 |
And so, death gives space for youth and novelty 02:47:47.200 |
and longer time for neuroplasticity, bigger brains, 02:47:50.360 |
which should be something we should strive for. 02:47:52.880 |
- Well, in that, Jeff Bezos and Beff Jezos agree 02:48:05.840 |
try to constantly, for as long as possible, reinvent. 02:48:14.760 |
'cause it's so damn difficult to keep reinventing. 02:48:20.660 |
- I think I have ideas and things I'd like to achieve 02:48:32.000 |
but I don't think I'm necessarily afraid of death. 02:48:34.600 |
- So, you're not attached to this particular body 02:48:38.240 |
- No, I think, I'm sure there's gonna be better 02:48:47.120 |
- Forks, right, genetic forks, or other, right? 02:48:53.600 |
I think there's a sort of evolutionary-like algorithm 02:49:10.160 |
And I think maintaining this adaptation malleability 02:49:13.280 |
is how we have constant optimization of the whole machine. 02:49:16.680 |
And so, I don't think I'm particularly an optimum 02:49:23.000 |
I think there's gonna be greater optima in many ways. 02:49:25.760 |
- What do you think is the meaning of it all? 02:49:27.280 |
What's the why of the machine, the EAC machine? 02:49:42.520 |
and of civilization, of evolution of technologies 02:49:51.800 |
Why do we have these particular hyperparameters, 02:49:55.600 |
Well, then you get into the anthropic principle, right? 02:49:59.480 |
In the landscape of potential universes, right? 02:50:04.840 |
And then why is there potentially many universes? 02:50:12.480 |
But could we potentially engineer new universes 02:50:16.560 |
or create pocket universes and set the hyperparameters 02:50:27.400 |
I think that's really, I don't know, that'd be very poetic. 02:50:32.560 |
But again, this is why figuring out quantum gravity 02:50:36.680 |
would allow us to understand if we can do that. 02:51:04.240 |
to minimize cross-entropy between its internal model 02:51:07.880 |
We seek to minimize the statistical divergence 02:51:11.120 |
between our predictions in the world and the world itself. 02:51:14.280 |
And having regimes of energy scales or physical scales 02:51:26.080 |
And we want to be able to understand the world better 02:51:31.080 |
in order to best steer it or steer us through it. 02:51:35.900 |
And in general, it's the capability that has evolved 02:51:39.880 |
because the better you can predict the world, 02:51:42.140 |
the better you can capture utility or free energy 02:52:04.440 |
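A minimal sketch of "minimize the cross-entropy between your internal model and the world," assuming a toy categorical world and a softmax model (my own illustration, not a setup the speakers specify):

```python
import numpy as np

p = np.array([0.5, 0.3, 0.15, 0.05])   # "the world": a fixed distribution over 4 outcomes
theta = np.zeros(4)                     # the agent's logits; starts as a uniform model

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def cross_entropy(p, q):
    return -np.sum(p * np.log(q))

lr = 1.0
for _ in range(200):
    q = softmax(theta)
    theta -= lr * (q - p)   # gradient of H(p, softmax(theta)) w.r.t. theta is (q - p)

q = softmax(theta)
print("model:", np.round(q, 3), "| cross-entropy:", round(cross_entropy(p, q), 4))
# The model converges toward p; the residual cross-entropy approaches the entropy of the world itself.
```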
I think there's a lot to learn in the mesoscales. 02:52:07.240 |
There's a lot of information to acquire about our world 02:52:10.540 |
and a lot of engineering, perception, prediction, 02:52:13.720 |
and control to be done to climb up the Kardashev scale. 02:52:18.280 |
And to us, that's the great challenge of our times. 02:52:27.760 |
Guillaume, Beff, thank you for talking today. 02:52:40.880 |
- Thank you for listening to this conversation 02:52:45.120 |
please check out our sponsors in the description.