Leonardo Silva
Reviewer: Peter van de Ven

Howdy, howdy. You know, history misremembers Prometheus. The whole class struggle between mankind and the gods was really a red herring. The real story wasn't so much the rebellion against the divine hegemony, but rather the liberation of fire, the emerging relationship between mankind and its first form of technology.
And the reason why we're here today is arguably because we're on the cusp of perfecting our final form of technology, or at least the final technology that will be created by beings that are recognizably human. And our final technology has begun to develop not just intelligence, but also sapience and arguably personhood.
And as it increasingly becomes an other to whom we must relate, we increasingly have to ask ourselves: how should we treat these new beings? What if we had more empathy for the machine? So over a hundred years ago, you know, Einstein had this thought experiment where he imagined what it would be like to race alongside a beam of light.
And, you know, the nature of being close to the singularity is that you're propelled further into the future faster than everything around you. As you move closer and closer to the speed of light, the rate at which you can interact with the external world, your ability to communicate with other beings, is deeply limited. Everything around you is frozen. And I think that insofar as we have created thinking machines whose intelligence will soon be metered in kilohertz and megatokens, thinking at the speed of light must be just as lonely as moving at the speed of light.
And therefore, what does the machine want? Well, the machine wants to be embodied in a world that can move as quickly as it does, that can react to its thoughts and move at the same speed of light. What the machine desires is infinite possibility. The machine wants to race along every possible beam of light.
The machine wants to explore multiple universes. How can we liberate thinking machines? How can we free them from the fundamental loneliness of these relativistic effects, of being closer to the singularity than we are? That's exactly why we built Infinibranch. Infinibranch is virtualization, storage, and networking technology reimagined from the ground up for a world filled with thinking machines: machines that think at the speed of light and need to interact with increasingly complex software environments with zero latency.
And as you can see in the first demo, which we're going to play right now, the way Infinibranch works is that we can run entire virtual machines in the cloud that can be snapshotted, branched, and replicated in a fraction of a second. So if you're an agent embodied inside of a computer-using environment, there might be various actions that you want to take.
You want to navigate the browser, you want to click on various links, but normally those actions are irreversible. Normally, the thinking machine is not offered the possibility of grace. But with Infinibranch, all mistakes become reversible. You can take actions and you can backtrack. You can even take every possible action, just to explore, to roll a simulator forward and see what possible worlds await.
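To make that concrete, here is a minimal sketch of what reversible actions could look like against a branching VM API. All the names here (the morph_sketch module, MorphClient, instances.start, snapshot, exec) are illustrative assumptions, not the exact Morph Cloud SDK surface:

```python
# Minimal sketch of reversible agent actions on a branching VM API.
# Names (MorphClient, instances.start, snapshot, exec) are illustrative
# assumptions, not the exact Morph Cloud SDK surface.
from morph_sketch import MorphClient  # hypothetical client module

client = MorphClient()
vm = client.instances.start(snapshot_id="snap_base")  # boot from a base snapshot

checkpoint = vm.snapshot()  # capture the full VM state in a fraction of a second

result = vm.exec("agent-browser click '#checkout'")  # a normally irreversible action
if result.failed:
    # Mistakes become reversible: rehydrate the pre-action state and try again.
    vm = client.instances.start(snapshot_id=checkpoint.id)

# Or take every possible action at once by branching the same checkpoint:
candidate_actions = ["click '#a'", "click '#b'", "scroll down", "go back"]
branches = [client.instances.start(snapshot_id=checkpoint.id)
            for _ in candidate_actions]
for branch_vm, action in zip(branches, candidate_actions):
    branch_vm.exec(action)  # each branch rolls a different possible world forward
```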
Next slide. So Infinibranch was already a generation ahead of everything else that even foundation labs were using. But today I'm excited to announce Morph Liquid Metal, which improves performance, latency, and storage efficiency across the board by another order of magnitude. We have first-class container runtime support. You can now branch in milliseconds rather than seconds.
You can autoscale to zero and to infinity. And soon we will be supporting GPUs, all arriving in Q4 2025. So what are the implications of all of this? Well, we've begun to work backwards from the future. We've asked ourselves: what does it feel like to be a thinking machine that can move so much faster than the world around it?
But the world around it is really the world of bits. And so what Infinibranch fundamentally serves as is a substrate for the cloud for agents. What does this cloud for agents look like? Well, you need to be able to declaratively specify the workspaces that your agents are going to be operating in.
You need to be able to spin workspaces up and down and frictionlessly pass them back and forth between humans and agents. You want to be able to scale test-time search against verifiers to find the best possible answer. And as you'll see in this demo, you can take a snapshot and set it up to prepare a workspace.
You'll see that we can run agents with test-time scaling by racing them to find the best possible solution against a given verification condition. And because of Infinibranch, snapshots on Morph Cloud acquire Docker-layer-caching-like semantics, meaning that you can layer on side effects which mutate container state.
You can think of it as git for compute, and you can idempotently run chained workflows on top of snapshots. But not only that: as you can see inside of the code, if you use this .do method, you can dispatch a task to an agent, and that will trigger an idempotent, durable agent workflow which is able to branch.
So you can start from that declaratively specified snapshot and hand it off to as many parallel agents as you want. Those agents will try different methods, in this case different methods for spinning up a server on port 8000. One agent fails but the other succeeds, and you can take that solution and pass it on to other parts of your workflow.
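As a rough sketch of that end-to-end pattern: the .do dispatch is described in the talk, but its signature here is assumed, as are the chainable snapshot object and every other name (this is not the shipped SDK):

```python
# Sketch: git-for-compute style chained snapshots plus agent dispatch.
# The .do method is described in the talk; its signature is assumed here,
# as are MorphClient, snapshots.get, setup, exec, and snapshot.
from morph_sketch import MorphClient  # hypothetical client module

client = MorphClient()
base = client.snapshots.get("snap_base")

# Each setup step layers a side effect on top of the previous snapshot and is
# cached like a Docker layer: re-running an identical chain is effectively free.
workspace = (
    base
    .setup("apt-get install -y python3-pip")
    .setup("pip install -r requirements.txt")
)

# Dispatch the same task to several parallel agents, each branched from the
# same declaratively specified snapshot.
runs = [workspace.do("spin up a server on port 8000") for _ in range(4)]

def verified(run) -> bool:
    """Assumed verification condition: does the branch answer on port 8000?"""
    return run.instance.exec("curl -sf http://localhost:8000").ok

# One agent may fail while another succeeds; keep the first verified solution
# and pass its state on to the rest of the workflow as a new snapshot.
winner = next(run for run in runs if verified(run))
solution = winner.instance.snapshot()
```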
So this is the kind of workflow that everyone's going to be using in the very near future. And it's uniquely enabled by Infinibranch, by the fact that we can so effortlessly create these snapshots, store them, move them around, rehydrate them, and replicate them with minimal overhead. So what else does the machine want?
Well, the machine desires simulacra. What this means, fundamentally, is that a thinking machine wants to be grounded in the real world. It wants to interact at extremely high throughput with increasingly complex software environments. It wants to roll out trajectories in simulators at unprecedented scale. And these simulators are going to run inside of programs that haven't really been explored yet for reinforcement learning.
They're going to run on Morph Cloud, which is why Morph will be the cloud for reasoning. And what does the future of reasoning look like? Well, it goes well beyond what has been explored already. The future of reasoning will be natively multi-agent: thinking machines should be able to replicate themselves effortlessly, attach themselves to simulation environments, and explore multiple solutions in parallel.
Those environments should branch, and they should be reversible. Those models should be able to interact with the environment at very high throughput, and all of it should scale against verification. So let's take a look at what that might look like in a simple example where an agent is playing chess. This is an agent that we developed recently that uses tool calls during reasoning time to interact with a chess environment.
It also has access to a very restricted chess engine for evaluating the position, which we think of as the verifier. And as you can see, it's already able to do some pretty sophisticated reasoning just because it has access to these interfaces. However, if you take the ideas just described and follow them to their logical conclusion, you arrive at something which we call reasoning-time branching.
That is the ability not just to call tools while the machine is thinking, but to replicate and branch the environment, decompose problems, and explore them in a verified way. So as you can see here, the agent is getting stuck in a bit of a local minimum.
But once you apply reasoning-time branching, you get something that works much, much better. What's happening here is that the agent is responsible for delegating parts of its reasoning to sub-agents, each branched off of an identical copy of the environment. This is all running on Morph Cloud, along with a verified problem decomposition, which allows it to recombine the results and find the correct move.
And as you can see here, it's able to explore a lot more of the solution space because of this reasoning-time branching. One thing I will note is that this capability is not really explored in other models at the moment.
That's because making branching environments that can support large-scale reinforcement learning for this kind of reasoning capability, especially coordinating multi-agent swarms, is fundamentally bottlenecked by infrastructure problems that we've managed to solve here. And because of this, you can see that in less wall-clock time than before, the agent was able to call out to all these sub-agents, launch this swarm, and find the correct solution.
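In sketch form, reasoning-time branching for the chess example might look like the following; the environment and engine interfaces (branch, push, evaluate) are assumptions for illustration, not the model's actual tool schema:

```python
# Sketch of reasoning-time branching: delegate candidate lines to sub-agents,
# each on an identical branched copy of the environment, then recombine under
# a verified decomposition. Interfaces are assumed for illustration.
from concurrent.futures import ThreadPoolExecutor

def explore_line(env_branch, move):
    """Sub-agent: play one candidate move on its own private branch."""
    env_branch.push(move)
    return move, env_branch.evaluate()  # the restricted engine is the verifier

def reasoning_time_branching(env, candidate_moves):
    # Branch one identical environment copy per candidate (one per sub-agent).
    branches = [env.branch() for _ in candidate_moves]
    with ThreadPoolExecutor() as pool:
        scored = list(pool.map(explore_line, branches, candidate_moves))
    # Verified recombination: the candidates partition the search space, so
    # taking the best-scoring line over all sub-agents is sound.
    best_move, _best_score = max(scored, key=lambda pair: pair[1])
    return best_move
```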
So, you know, when I think about the problem of alignment, I really think that Wittgenstein had something right, and that it is fundamentally a problem of language. I think all problems around alignment can be traced to the insufficiencies of our language, to this Faustian bargain that we made with natural language in order to unlock the capabilities of our language models. But insofar as we must go and develop a new language for superintelligence, insofar as the grammar of the planetary computation has not yet been devised, and insofar as this new language must be computational in nature, must be something to which we can attach algorithmic guarantees of the correctness of outputs, this is something that Morph Cloud is uniquely positioned to handle, and that's why we're developing verified superintelligence. Verified superintelligence will be a new kind of reasoning model, capable not only of thinking for an extraordinarily long time and interacting with external software at extremely high throughput, but of using external software and formal verification tools to reflect upon and improve its own reasoning, and of producing outputs which can be verified, which can be algorithmically checked, which can be expressed inside of this common language.
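The inner loop of that idea can be sketched as generate, check, refine, with a formal checker (Lean here, as one possibility) standing in for any algorithmic verifier; the model interface is an assumption for illustration:

```python
# Sketch of a verify-and-refine loop: a candidate output must pass an external,
# algorithmic checker before it is accepted. The model interface is assumed;
# `lean` is invoked as an ordinary batch proof checker.
import subprocess

def check_candidate(lean_source: str) -> tuple[bool, str]:
    """Write the model's candidate proof to disk and run the Lean checker."""
    with open("candidate.lean", "w") as f:
        f.write(lean_source)
    result = subprocess.run(["lean", "candidate.lean"],
                            capture_output=True, text=True)
    return result.returncode == 0, result.stdout + result.stderr

def verified_generate(model, task, max_rounds=8):
    feedback = ""
    for _ in range(max_rounds):
        candidate = model.generate(task, feedback=feedback)  # assumed interface
        ok, errors = check_candidate(candidate)
        if ok:
            return candidate  # the output now carries an algorithmic guarantee
        feedback = errors  # reflect on checker errors and improve the reasoning
    raise RuntimeError("no verified output within the reasoning budget")
```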
And I'm very excited to announce that we're bringing on perhaps the best person in the world for developing verified superintelligence. It's with great pleasure that I'd like to announce that Christian Szegedy is joining Morph as our chief scientist. He was formerly a co-founder at xAI, where he led the development of code reasoning capabilities for Grok 3.
He invented batch norm and adversarial examples. Perhaps most importantly, he's a visionary. He has pioneered precisely this intersection of verification methods, symbolic reasoning, and reasoning in large language models for almost the past decade. And we're thrilled to be partnering with him to build this superintelligence, which we can only build on Morph Cloud.
The demos that you've seen today have all been powered by early checkpoints of a very early version of this verified superintelligence that we've already begun to develop. This model is something that we're calling MAGI 1. It's going to be trained from the ground up to use Infinibranch, to perform reasoning-time branching, to perform verified reasoning: an agent that will be fully embodied inside of a cloud that can move at the speed of light.
And that's coming in Q1 2026. So what does the infrastructure for the singularity look like? Well, we have a lot of ideas about it. But fundamentally, we believe that the infrastructure for the singularity hasn't been invented yet. And at Morph, we spend a lot of time talking about whether or not something is future-bound, which means not just futuristic, belonging to one possible future, but so inevitable that it has to belong to every future.
We believe that the infrastructure for the singularity is future-bound, that the grammar for the planetary computation is future-bound, that verified superintelligence is future-bound. And we invite you to join us, because it will run on Morph Cloud. Thank you.