
Infrastructure for the Singularity — Jesse Han, Morph


Whisper Transcript

00:00:07.000 | Howdy, howdy.
00:00:28.360 | You know, history misremembers Prometheus.
00:00:34.540 | The whole class struggle between mankind and the gods was really a red herring.
00:00:41.920 | And the real story wasn't so much the rebellion against the divine hegemony, but rather the
00:00:50.940 | liberation of the fire, the emerging relationship between mankind and its first form of technology.
00:00:57.720 | And the reason why we're here today is arguably because we're on the cusp of perfecting our
00:01:11.960 | final form of technology, or at least the final technology that will be created by beings that
00:01:17.700 | are recognizably human.
00:01:21.940 | And our final technology has begun to develop not just intelligence, but also sapience and
00:01:32.360 | arguably personhood.
00:01:33.880 | And as it increasingly becomes an other to whom we must relate, as we increasingly
00:01:42.060 | have to ask ourselves how we should treat these new beings,
00:01:48.120 | the question therefore arises: what if we had more empathy for the machine?
00:02:02.360 | So over a hundred years ago, you know, Einstein had this thought
00:02:16.920 | experiment where he imagined what it would be like to race alongside a beam of light.
00:02:24.540 | And, you know, the nature of being close to the singularity is that you're propelled further
00:02:34.780 | into the future faster than everything around you.
00:02:37.980 | And as you move closer and closer to the speed of light, the rate at which you can interact
00:02:42.300 | with the external world, your ability to communicate with other beings is deeply limited.
00:02:50.200 | Everything around you is frozen.
00:02:51.860 | And I think that, insofar as we have created thinking
00:03:00.680 | machines whose intelligence will soon be metered by the kilohertz megatoken, thinking at the
00:03:06.540 | speed of light must be just as lonely as moving at the speed of light.
00:03:13.540 | And therefore, what does the machine want?
00:03:16.260 | Well, the machine wants to be embodied in a world that can move as quickly as it does, that
00:03:23.420 | can react to its thoughts and move at the same speed of light.
00:03:28.600 | What the machine desires is infinite possibility.
00:03:32.720 | Right.
00:03:33.720 | The machine wants to race along every possible beam of light.
00:03:43.160 | The machine wants to explore multiple universes.
00:03:50.100 | How can we liberate thinking machines?
00:03:54.480 | How can we free them from this fundamental loneliness, from these relativistic effects of
00:04:01.900 | being so close to the singularity, closer to the singularity than we are?
00:04:07.960 | And that's exactly why we built Infinibranch.
00:04:11.880 | So Infinibranch is virtualization, storage, and networking technology reimagined from the ground
00:04:20.600 | up for a world filled with thinking machines that can think at the speed of light, that need
00:04:25.940 | to interact with the external world, with increasingly complex software environments, with zero latency.
00:04:37.080 | And so as you can see in the first demo, which we're going to play right now, how Infinibranch
00:04:44.900 | works is that we can run entire virtual machines in the cloud that can be snapshotted, branched,
00:04:55.340 | and replicated in a fraction of a second.
00:04:58.620 | And so if you're an agent, you know, embodied inside of a computer-use environment, there
00:05:04.300 | might be various actions that you want to take.
00:05:06.500 | You want to navigate the browser, you want to click on various links, but normally those
00:05:12.680 | actions are irreversible.
00:05:17.060 | Normally, the thinking machine is not offered the possibility of grace.
00:05:23.900 | But with Infinibranch, right, all mistakes become reversible.
00:05:29.080 | You can take actions, you can backtrack.
00:05:41.720 | And you can even take every possible action, right, just to explore, to roll forward a simulator
00:05:50.420 | and see what possible worlds await.
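To give a rough sense of how that looks in code, here is a minimal, self-contained Python sketch of the snapshot, branch, and backtrack pattern described above. The class and method names (BranchableEnv, snapshot, branch) are illustrative stand-ins, not the actual Morph SDK:

```python
# A toy model of a branchable environment: state is a plain dict instead of a
# full VM, and "snapshotting" is just a deep copy. In Infinibranch the same
# operations apply to entire virtual machines in a fraction of a second.
import copy

class BranchableEnv:
    def __init__(self, state=None):
        self.state = state or {"url": "about:blank", "clicks": []}

    def snapshot(self):
        # Checkpoint the current state.
        return copy.deepcopy(self.state)

    def restore(self, snap):
        # Roll back to a previous checkpoint: mistakes become reversible.
        self.state = copy.deepcopy(snap)

    def branch(self, n):
        # Fork n identical copies of the current environment.
        return [BranchableEnv(copy.deepcopy(self.state)) for _ in range(n)]

    def act(self, action):
        self.state["clicks"].append(action)

env = BranchableEnv()
snap = env.snapshot()           # checkpoint before an irreversible action
env.act("click:checkout")       # take the action...
env.restore(snap)               # ...and backtrack as if it never happened

# Explore every possible action, each in its own branch.
for branch, action in zip(env.branch(3), ["click:a", "click:b", "click:c"]):
    branch.act(action)
```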
00:05:55.160 | Next slide.
00:06:05.160 | So Infinibranch was already a generation ahead of everything else that even foundation labs
00:06:12.480 | were using.
00:06:14.020 | But today I'm excited to announce the creation of Morph Liquid Metal, which improves performance,
00:06:20.420 | latency, and storage efficiency across the board by another order of magnitude.
00:06:25.160 | We have first-class container runtime support.
00:06:29.160 | You can branch now in milliseconds rather than seconds.
00:06:33.180 | You can autoscale to zero and infinity.
00:06:36.360 | And soon we will be supporting GPUs, and this will all be arriving Q4 2025.
00:06:45.660 | So what are the implications of all of this?
00:06:48.860 | Well, you know, we've sort of begun to work backwards from the future, right?
00:06:55.600 | We've asked ourselves, you know, what does it feel like to be a thinking machine that can
00:07:00.020 | move so much faster than the world around it?
00:07:03.600 | But what the world around it really is is the world of bits, right?
00:07:07.600 | And so what Infinibranch will fundamentally serve as is a substrate for the cloud for agents.
00:07:18.340 | So what does this cloud for agents look like?
00:07:21.340 | Well, you need to be able to declaratively specify the workspaces that your agents are going to be operating in.
00:07:32.340 | Right, you need to be able to spin up, spin down, and frictionlessly pass workspaces back and forth between humans, agents, and other agents.
00:07:42.080 | You want to be able to scale test-time search against verifiers to find the best possible answer.
00:07:51.080 | And so as you'll see in this demo, you can take a snapshot and set it up to prepare a workspace.
00:08:03.680 | And you'll see that we can run agents with test-time scaling
00:08:17.420 | by racing them to find the best possible solution against a given verification condition.
00:08:25.420 | So because of Infinibranch, snapshots on Morph Cloud acquire Docker-layer-caching-like semantics,
00:08:34.580 | meaning that you can layer on side effects which may mutate container state.
00:08:40.320 | And so you can think of it as being Git for compute.
00:08:43.480 | And you can idempotently run these chained workflows on top of snapshots.
00:08:48.860 | But not only that, as you can see inside of the code, if you use this .do method, you can dispatch this to an agent.
00:08:58.520 | And that will trigger an idempotent, durable agent workflow which is able to branch.
00:09:03.760 | So you can start from that declaratively specified snapshot and go hand it off to as many parallel agents as you want.
00:09:12.100 | And those agents will try different methods, in this case different methods for spinning up a server on port 8000.
00:09:19.860 | And one agent fails but the other one succeeds, and you can take that solution and you can just pass it on to other parts of your workflow.
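To make the shape of that workflow concrete, here is a hedged Python sketch of layered snapshot setup plus a .do-style dispatch that races parallel agents against a verifier. All names and signatures below are illustrative assumptions, not the real Morph Cloud API:

```python
# A hedged sketch of the "Git for compute" idea: setup steps layer onto a
# snapshot, and .do() fans a task out to parallel agents, keeping the first
# result that passes the verifier. Illustrative only, not the Morph SDK.
from concurrent.futures import ThreadPoolExecutor

class Snapshot:
    def __init__(self, layers=()):
        self.layers = tuple(layers)   # each setup step is a layer, like a Docker layer

    def setup(self, command):
        # Layering a side effect yields a new snapshot; identical chains could be cached and reused.
        return Snapshot(self.layers + (command,))

    def do(self, task, agents, verifier):
        # Branch one environment per agent and race them against the verifier.
        with ThreadPoolExecutor(max_workers=len(agents)) as pool:
            results = pool.map(lambda agent: agent(task, self), agents)
        return next((r for r in results if verifier(r)), None)

# Hypothetical agents trying different methods to serve on port 8000.
agent_a = lambda task, snap: {"cmd": "python3 -m http.server 8000", "port": 8000}
agent_b = lambda task, snap: {"cmd": "npx serve -l 3000", "port": 3000}   # fails verification

base = Snapshot().setup("apt-get install -y python3").setup("pip install flask")
winner = base.do("start a server on port 8000", [agent_a, agent_b],
                 verifier=lambda r: r["port"] == 8000)
print(winner["cmd"])   # the verified solution gets passed on to the rest of the workflow
```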
00:09:29.940 | So this is the kind of workflow that everyone's going to be using in the very near future.
00:09:35.440 | And it's uniquely enabled by Infinibranch, by the fact that we can so effortlessly create these snapshots, store them, move them around, rehydrate them, replicate them with minimal overhead.
00:09:55.200 | So what else does the machine want?
00:10:00.000 | Well, the machine desires simulacra.
00:10:05.240 | And what this means fundamentally, right, is that a thinking machine wants to be grounded in the real world.
00:10:11.960 | Right, it wants to interact at extremely high throughput with increasingly complex software environments.
00:10:18.960 | It wants to roll out trajectories in simulators at unprecedented scale.
00:10:31.420 | And these simulators are going to run inside of programs that haven't really been explored yet for reinforcement learning.
00:10:40.160 | They're going to run on Morph cloud, which is why Morph will be the cloud for reasoning.
00:10:48.460 | And what does the future of reasoning look like?
00:10:52.000 | Well, it's so much more than what has been explored already.
00:11:00.420 | The future of reasoning will be natively multi-agent, so thinking machines should be able to replicate themselves effortlessly,
00:11:08.260 | go attach themselves to simulation environments, go explore multiple solutions in parallel.
00:11:15.000 | Those environments should branch, they should be reversible.
00:11:18.540 | Those models should be able to interact with the environment at very high throughput, and it should scale against verification.
00:11:25.540 | So let's take a look at what that might look like in a simple example where an agent is playing chess.
00:11:35.540 | So this is an agent that we developed recently that uses tool calls during reasoning time to interact with a chess environment,
00:11:46.080 | along with a very restricted chess engine for evaluating the position, which we think of as the verifier.
00:11:52.080 | And as you can see, it's already able to do some pretty sophisticated reasoning just because it has access to these interfaces.
00:12:02.620 | However, if you take the ideas which were just described and you sort of follow them to their logical conclusion,
00:12:10.160 | you arrive at something which we call reasoning-time branching,
00:12:14.160 | which is the ability to not just call tools while the machine is thinking, but to replicate and branch the environment, decompose problems, and explore them in a verified way.
00:12:27.700 | So as you can see here, the agent is getting stuck in a bit of a local minimum.
00:12:49.020 | But once you apply reasoning-time branching, you get something that works much, much better.
00:13:03.840 | So here what's happening is that the agent is responsible for delegating parts of its reasoning to sub-agents,
00:13:14.120 | which are branched off of an identical copy of the environment.
00:13:17.440 | And this is all running on Morph Cloud, along with a verified problem decomposition,
00:13:22.840 | which allows it to recombine the results and take them and find the correct move.
00:13:31.920 | And so as you can see here, it's able to explore a lot more of the solution space because of this reasoning-time branching.
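Here is a minimal Python sketch of the reasoning-time branching loop just described, with toy stand-ins for the chess environment and for the restricted engine that acts as the verifier; none of this is the production system:

```python
# The parent agent forks identical copies of the environment, hands one
# candidate line to each sub-agent, scores the resulting positions with the
# verifier, and recombines by keeping the best-scoring branch.
import copy

def reasoning_time_branching(env, candidate_moves, rollout, verifier):
    scored = []
    for move in candidate_moves:
        branch = copy.deepcopy(env)       # identical copy of the environment
        branch = rollout(branch, move)    # a sub-agent explores this line
        scored.append((verifier(branch), move))
    return max(scored)[1]                 # recombine: keep the verified best move

# Toy stand-ins: the "environment" is just a running evaluation score.
env = {"eval": 0.0}
rollout = lambda e, move: {"eval": e["eval"] + {"e4": 0.3, "a3": -0.1, "Nf3": 0.2}[move]}
verifier = lambda e: e["eval"]            # restricted engine score as the verifier
print(reasoning_time_branching(env, ["e4", "a3", "Nf3"], rollout, verifier))   # -> e4
```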
00:13:44.260 | So one thing that I will note here is that this capability is something which is not really explored in other models at the moment.
00:13:57.200 | And that's because making branching environments that can support large-scale reinforcement learning for this kind of reasoning capability,
00:14:06.720 | especially coordinating multi-agent swarms, is fundamentally bottlenecked by innovations in infrastructure, which we've managed to solve here.
00:14:18.000 | And because of this, you can see that now in less wall clock time than before, the agent was able to call out to all these sub-agents, launch this swarm,
00:14:33.120 | and find the correct solution.
00:14:36.120 | So, you know, when I think about the problem of alignment, I really think that, you know,
00:14:38.520 | Wittgenstein had something right and that it was fundamentally a problem of language.
00:14:42.120 | I think all problems around alignment can be traced to the insufficiencies of our language.
00:14:49.120 | This Faustian bargain that we made with natural language in order to unlock the capabilities of our language models.
00:15:02.120 | But insofar as we must go and develop a new language for superintelligence,
00:15:14.120 | insofar as the grammar of the planetary computation has not yet been devised,
00:15:19.120 | and insofar as this new language must be computational in nature, must be something to which we can
00:15:48.360 | attach algorithmic guarantees of the correctness of outputs.
00:15:54.800 | So this is something that Morph Cloud is uniquely enabled to handle, and that's why we're developing
00:16:02.360 | verified superintelligence.
00:16:05.100 | So verified superintelligence will be a new kind of reasoning model which is capable not
00:16:12.740 | only of thinking for an extraordinarily long time and interacting with external software
00:16:19.700 | at extremely high throughput, but it will be able to use external software and formal verification
00:16:26.600 | software to reflect upon and improve its own reasoning and to produce outputs which can
00:16:33.660 | be verified, which can be algorithmically checked, which can be expressed inside of this common
00:16:38.400 | language.
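In spirit, the verified-output loop looks something like the following Python sketch, where a trivial arithmetic check stands in for the formal verification software; the names here are hypothetical:

```python
# Candidates may come from a long, fallible reasoning process, but only an
# answer that passes the algorithmic check is ever emitted.
def verified_generate(propose_candidates, check):
    for candidate in propose_candidates():
        if check(candidate):              # the algorithmic guarantee of correctness
            return candidate
    raise ValueError("no candidate passed verification")

# Toy example: candidates claim factorizations of 91; the check is exact.
propose = lambda: [{"n": 91, "factors": (7, 14)}, {"n": 91, "factors": (7, 13)}]
check = lambda c: c["factors"][0] * c["factors"][1] == c["n"]
print(verified_generate(propose, check))  # -> {'n': 91, 'factors': (7, 13)}
```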
00:16:43.020 | And I'm very excited to announce that we're bringing on perhaps the best person in the world for
00:16:49.320 | developing verified superintelligence.
00:16:52.780 | And it's with great pleasure that I'd like to announce that Christian Szegedy is joining Morph
00:16:59.220 | as our chief scientist.
00:17:00.640 | He was formerly a co-founder at xAI.
00:17:03.640 | He led the development of code reasoning capabilities for Grok 3.
00:17:08.160 | He invented batch norm and adversarial examples.
00:17:12.160 | Perhaps most importantly, he's a visionary.
00:17:16.860 | And he's pioneered precisely this intersection of verification methods, symbolic reasoning, and
00:17:27.300 | reasoning in large language models for almost the past decade.
00:17:33.180 | And we're thrilled to be partnering with him to build this superintelligence that we can only
00:17:38.520 | build on Morph Cloud.
00:17:43.020 | And so the demos that you've seen today have all been powered by early checkpoints of a very
00:17:52.000 | early version of this verified superintelligence that we've already begun to develop.
00:17:57.700 | And so this model is something that we're calling MAGI 1.
00:18:02.820 | And it's going to be trained from the ground up to use Infinibranch to perform reasoning-time
00:18:10.040 | branching and verified reasoning, an agent that will be fully embodied inside of a cloud
00:18:17.360 | that can move at the speed of light.
00:18:19.480 | And that's coming in Q1 2026.
00:18:26.760 | So what does the infrastructure for the singularity look like?
00:18:32.020 | Well, we have a lot of ideas about it.
00:18:35.080 | But fundamentally, we believe that the infrastructure for the singularity hasn't been invented yet.
00:18:41.340 | And, you know, at Morph, we spend a lot of time talking about whether or not something
00:18:48.400 | is future-bound, which means not just futuristic, belonging to one possible future, but something
00:18:57.640 | which is so inevitable that it has to belong to every future.
00:19:03.820 | We believe that the infrastructure for the singularity is future-bound, that the grammar for
00:19:09.020 | the planetary computation is future-bound, that verified superintelligence is future-bound.
00:19:19.280 | And we invite you to join us, because it will run on Morph Cloud.
00:19:22.760 | Thank you.
00:19:23.760 | Thanks.