
All things AI w @altcap @sama & @satyanadella. A Halloween Special. 🎃🔥BG2 w/ Brad Gerstner


Chapters

0:00 Intro
2:28 Microsoft’s Investment in OpenAI
3:19 The Nonprofit Structure and Its Impact
5:46 Health, AI Security, and Resilience
7:50 Models, Exclusivity, and Distribution
8:58 Revenue Sharing and AGI Milestones
11:38 OpenAI’s Growth and Compute Commitments
15:21 Compute Constraints and Scaling
21:27 The Future of AI Devices and Consumer Use
24:31 Regulation and the Patchwork Problem
28:01 Looking Ahead to 2026 and Beyond
37:10 Microsoft’s Strategic Value from OpenAI
57:15 The Economics of AI and SaaS
64:28 Productivity, Jobs, and the Age of AI
70:43 Reindustrialization of America

Transcript

I think this is really an amazing partnership through every phase. We had kind of no idea where it was all going to go when we started, as Satya said, but I don't think, I think this is one of the great tech partnerships ever. And without, certainly without Microsoft and particularly Satya's early conviction, we would not have been able to do that.

What a week, what a week. Great to see you both. Sam, how's the baby? Baby is great. That's the best thing ever. Every cliche is true and it is the best thing ever. Hey Satya, with all your time- Smile on Sam's face whenever he talks about his baby. It's just so different.

And compute, I guess, when he talks about compute and his baby. Well Satya, have you given any dad tips with all this time you guys have spent together? I said, just enjoy it. I mean, it's so awesome that, you know, we had our babies or our children so young and I wish I could redo it.

So in some sense, it's just the most precious time. And as they grow, it's just so wonderful. I'm so glad Sam is- I'm happy to be doing it older, but I do think sometimes, man, I wish I had the energy of when I was like 25. That part's harder.

No doubt about it. What's the average age at OpenAI, Sam? Any idea? It's young. It's not crazy young. Not like most Silicon Valley startups. I don't know, maybe low 30s average. Are babies trending positively or negatively? Babies trending positively. Oh, that's good. That's good. Yeah. Well, you guys, such a big week.

You know, I was thinking about it: the week started at NVIDIA's GTC, and NVIDIA, you know, just hit $5 trillion. Google, Meta, Microsoft... Satya, you had your earnings yesterday. You know, and we heard consistently: not enough compute, not enough compute, not enough compute. We got rate cuts on Wednesday. GDP is tracking near 4%.

And then I was just saying to Sam, you know, the president's cut these massive deals in Malaysia, South Korea, Japan, sounds like with China. You know, deals that really provide the financial firepower to re-industrialize America: $80 billion for new nuclear fission, all the things that you guys need to build more compute.

But certainly wasn't, what wasn't lost in all of this was you guys had a big announcement on Tuesday that clarified your partnership. Congrats on that. And I thought we'd just start there. I really want to just break down the deal in really simple, plain language to make sure I understand it and others.

But, you know, we'll just start with your investment, Satya. You know, Microsoft started investing in 2019 and has invested in the ballpark of $13 to $14 billion into OpenAI. And for that, you get 27% ownership in the business on a fully diluted basis. I think it was about a third.

And you took some dilution over the course of last year with all the investment. So does that sound about right in terms of ownership? Yeah, it does. But I would say before even our stake in it, Brad, I think what's pretty unique about OpenAI is the fact that as part of OpenAI's process of restructuring, one of the largest nonprofits gets created.

I mean, let's not forget that, you know, in some sense, I say at Microsoft, like, you know, we are very proud of the fact that we were associated with two of the largest nonprofits, the Gates Foundation and now the OpenAI Foundation. So that's, I think, the big news.

We obviously are thrilled. It's not what we thought. And as I said to somebody, it's not like when we first invested our billion dollars that, oh, this is going to be the hundred bagger that I'm going to be talking about to VCs about. But here we are. But we are very thrilled to be an investor and an early backer.

And it's a great, and it's really a testament to what Sam and team have done, quite frankly. I mean, they obviously had the vision early about what this technology could do. And they ran with it and just executed, you know, in a masterful way. I think this has really been an amazing partnership through every phase.

We had kind of no idea where it was all going to go when we started, as Satya said. But I don't think, I think this is one of the great tech partnerships ever. And without, certainly without Microsoft, and particularly Satya's early conviction, we would not have been able to do this.

I don't think there were a lot of other people that would have been willing to take that kind of a bet given what the world looked like at the time. We didn't know exactly how the tech was going to go. Well, not exactly. We didn't know at all how the tech was going to go.

We just had a lot of conviction in this one idea of pushing on deep learning and trusting that if we could do that, we'd figure out ways to make wonderful products and create a lot of value. And also, as Satya said, create what we believe will be the largest nonprofit ever.

And I think it's going to do amazingly great things. I really like the structure because it lets the nonprofit grow in value while the PBC is able to get the capital that it needs to keep scaling. I don't think the nonprofit would be able to be this valuable if we didn't come up with this structure and if we didn't have partners around the table that were excited for it to work this way.

But, you know, I think it's been six, more than six years since we first started this partnership. And a pretty crazy amount of achievement for six years. And I think much, much, much more to come. I hope that Satya makes a trillion dollars on the investment, not a hundred billion, you know, whatever it is.

Well, as part of the restructuring, you guys talked about it. You have this nonprofit on top and a public benefit corp below. It's pretty insane. The nonprofit is already capitalized with $130 billion, $130 billion of OpenAI stock. It's one of the largest in the world out of the gates.

It could end up being much, much larger. The California attorney general said they're not going to object to it. You already have this $130 billion dedicated to making sure that AGI benefits all of humanity. You announced that you're going to direct the first $25 billion to health and AI security and resilience, Sam.

First, let me just say, you know, as somebody who participates in the ecosystem, kudos to you both. It's incredible, this contribution to the future of AI. But Sam, talk to us a bit about the importance of the choice around health and resilience, and then help us understand how do we make sure that you get maximal benefit without it getting weighted down, as we've seen with so many nonprofits with its own political biases?

Yeah, first of all, the best way to create a bunch of value for the world is hopefully what we've already been doing, which is to make these amazing tools and just let people use them. And I think capitalism is great. I think companies are great. I think people are doing amazing work getting advanced AI into the hands of a lot of people and companies that are doing incredible things.

There are some areas where the, I think, market forces don't quite work for what's in the best interest of people, and you do need to do things in a different way. There are also some new things with this technology that just haven't existed before, like the potential to use AI to do science at a rapid clip, like really truly automated discovery.

And when we thought about the areas we wanted to first focus on, clearly, if we can cure a lot of disease and make the data and information for that broadly available, that would be a wonderful thing to do for the world. And then on this point of AI resilience, I do think some things may get a little strange and they won't all be addressed by companies doing their thing.

So as the world has to navigate through this transition, if we can fund some work to help with that, and that could be, you know, cyber defense, that could be AI safety research, that could be economic studies, all of these things, helping society get through this transition smoothly. We're very confident about how great it can be on the other side.

But, you know, I'm sure there will be some choppiness along the way. Let's keep busting through the deal. So models and exclusivity. Sam, OpenAI can distribute its models, its leading models, on Azure, but I don't think you can distribute them on any of the other big clouds for seven years, until 2032.

But that would end earlier if AGI is verified; we can come back to that. But you can distribute your open source models, Sora, agents, Codex, wearables, everything else on other platforms. So Sam, I assume this means no ChatGPT or GPT-6 on Amazon or Google. No, so...

First of all, we want to do lots of things together to help, you know, create value for Microsoft. We want them to do lots of things to create value for us. And there are many, many things that will happen in that category. We are keeping what Satya termed once, and I think it's a great phrase, stateless APIs on Azure exclusively through 2030, and everything else we're going to, you know, distribute elsewhere.

And that's obviously in Microsoft's interest too. So we'll put lots of products, lots of places, and then this thing we'll do on Azure, and people can get it there or via us. And I think that's great. And then the rev share: there's still a rev share that gets paid by OpenAI to Microsoft on all your revenues, and that also runs until 2032, or until AGI is verified.

So let's just assume for the sake of argument, I know this is pedestrian, but it's important, that the rev share is 15%. So that would mean if you had $20 billion in revenue, you're paying $3 billion to Microsoft, and that counts as revenue to Azure. Satya, does that sound about right?

Yeah, we have a rev share. And I think as you characterized it, it is either going to AGI or till the end of the term. And I actually don't know exactly where we count it, quite honestly, whether it goes into Azure or somewhere else. That's a good question. It's a good question for Amy.
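The arithmetic Brad sketches is simple enough to check directly; a minimal sketch, assuming his illustrative 15% rate and $20 billion revenue figure (neither is a confirmed deal term):

```python
# Illustrative rev-share arithmetic. The 15% rate and the $20B revenue
# figure are the hypothetical numbers from the conversation, not
# confirmed deal terms.
rev_share_rate = 0.15
openai_revenue_b = 20.0   # hypothetical annual OpenAI revenue, in $B

payment_to_msft_b = openai_revenue_b * rev_share_rate
print(payment_to_msft_b)  # 3.0, i.e. $3B counted toward Azure
```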

Given that both exclusivity and the rev share end early in the case AGI is verified, it seems to make AGI a pretty big deal. And as I understand it, you know, if OpenAI claimed AGI, it sounds like it goes to an expert panel and you guys basically select a jury who's got to make a relatively quick decision whether or not AGI has been reached.

Satya, you said on yesterday's earnings call that nobody's even close to getting to AGI and you don't expect it to happen anytime soon. You talked about this spiky and jagged intelligence. Sam, I've heard you perhaps sound a little bit more bullish on, you know, when we might get to AGI.

So I guess the question is to you both, do you worry that over the next two or three years, we're going to end up having to call in the jury to effectively make a call on whether or not we've hit AGI? I realize you've got to try to make some drama between us here.

I think putting a process in place for this is a good thing to do. I expect that the technology will take several surprising twists and turns and we will continue to be good partners to each other and figure out what makes sense. That's well said, I think. And that's one of the reasons why I think this process we put in place is a good one.

And at the end of the day, I'm a big believer in the fact that intelligence capability-wise is going to continue to improve. And our real goal, quite frankly, is that, which is how do you put that in the hands of people and organizations so that they can get the maximum benefits?

And that was the original mission of OpenAI that attracted me to OpenAI and Sam and team. And that's kind of what we plan to continue on. Brad, to say the obvious, if we had super intelligence tomorrow, we would still want Microsoft's help getting this product out into people's hands.

And we want them, like, yeah. Of course, of course. Yeah, no, again, I'm asking the questions I know that are on people's minds and that makes a ton of sense to me. Obviously, Microsoft is one of the largest distribution platforms in the world. You guys have been great partners for a long time, but I think it dispels some of the myths that are out there.

But let's shift gears a little bit. You know, obviously, OpenAI is one of the fastest growing companies in history. Satya, you said on the pod a year ago, this pod, that every new phase shift creates a new Google and the Google of this phase shift is already known and it's OpenAI.

And none of this would have been possible had you guys not made these huge bets. With all that said, you know, OpenAI's revenues are still a reported $13 billion in 2025. And Sam, on your live stream this week, you talked about this massive commitment to compute, right? $1.4 trillion over the next four or five years with, you know, big commitments: $500 billion to NVIDIA, $300 billion to AMD and Oracle, $250 billion to Azure.

So I think the single biggest question I've heard all week, and hanging over the market, is how, you know, how can a company with $13 billion in revenues make $1.4 trillion of spend commitments? And you've heard the criticism, Sam. We're doing well more revenue than that.

Second of all, Brad, if you want to sell your shares, I'll find you a buyer. I just, enough. Like, you know, I think there's a lot of people who would love to buy OpenAI shares. I don't, I don't think you would. Including myself. Including myself. People who talk with a lot of, like, breathless concern about our compute stuff or whatever would be thrilled to buy shares.

So I think we could sell, you know, your shares or anybody else's to some of the people who are making the most noise on Twitter, whatever about this very quickly. We do plan for revenue to grow steeply. Revenue is growing steeply. We are taking a forward bet that it's going to continue to grow.

And that not only will ChatGPT keep growing, but we will be able to become one of the important AI clouds. That our consumer device business will be a significant and important thing. That AI that can automate science will create huge value. So, you know, there are not many times that I want to be a public company, but one of the rare times it's appealing is when those people are writing these ridiculous "OpenAI is about to go out of business" takes and, you know, whatever.

I would love to tell them they could just short the stock, and I would love to see them get burned on that. But, you know, we carefully plan. We understand where the technology, where the capability, is going to go, and the products we can build around that, and the revenue we can generate.

We might screw it up. Like, this is the bet that we're making, and we're taking a risk along with that. The certain risk is, if we don't have the compute, we will not be able to generate the revenue or make the models at this kind of scale.

Exactly. And let me just say one thing, Brad, as both a partner and an investor: there has not been a single business plan that I've seen from OpenAI that they've put in and not beaten. So in some sense, this is the one place where, you know, in terms of their growth and just even the business, it's been unbelievable execution, quite frankly.

I mean, obviously, OpenAI, everyone talks about all the success and the usage and what have you. But even, I'd say all up, the business execution has been just pretty unbelievable. I heard Greg Brockman say on CNBC a couple of weeks ago, right, if we could 10x our compute, we might not have 10x more revenue, but we'd certainly have a lot more revenue.

Yeah, simply because of lack of compute. It's just, it's really wild when I just look at how much we are held back. And in many ways we have, you know, we've scaled our compute probably 10x over the past year. But if we had 10x more compute, I don't know if we'd have 10x more revenue, but I don't think it'd be that far off.

And we heard this from you as well last night, Satya, that you were compute constrained and growth would have been higher if you had more compute. So help us contextualize, Sam: maybe, like, how compute constrained do you feel today? And when you look at the build out over the course of the next two to three years, do you think you'll ever get to the point where you're not compute constrained?

We've talked a lot about this question of whether there is ever enough compute. I think the best way to think about this is like energy or something. You can talk about demand for energy at a certain price point, but you can't talk about demand for energy without talking about, you know, different demand at different price levels.

If the price of compute per like unit of intelligence or whatever, however you want to think about it, fell by a factor of a hundred tomorrow, you would see usage go up by much more than a hundred. And there'd be a lot of things that people would love to do with that compute that just make no economic sense at the current cost, but there would be new kind of demand.

So I think the, now on the other hand, as the models get even smarter and you can use these models to cure cancer or discover novel physics or drive a bunch of humanoid robots to construct a space station or whatever crazy thing you want, then maybe there's huge willingness to pay a much higher cost per unit of intelligence for a much higher level of intelligence. We don't know yet, but I would bet there will be.

So I, I think when you talk about capacity, it's, it's like a, you know, cost per unit and, you know, capability per unit. And without those curves, it's sort of a made-up number. It's not a super well-specified problem. Yeah. I mean, I think the one thing that, you know, Sam, you've talked about, which I think is the right way to think about it, is that if intelligence is whatever log of compute, then you try and really make sure you keep getting efficient.

And so that means the tokens per dollar per watt, and the economic value that society gets out of it, is what we should maximize while reducing the costs. And so that's where the Jevons paradox point is: you keep reducing it, commoditizing intelligence in some sense, so that it becomes the real driver of GDP growth all around.

Unfortunately, it's something closer to, uh, intelligence is log of compute, but we may figure out better scaling laws and we may figure out how to do this. Yeah. We heard from both Microsoft and Google yesterday; both said their cloud businesses would have been growing faster if they had more GPUs.
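The "intelligence is log of compute" framing in this exchange implies the inverse relationship: each fixed step up in capability costs a constant multiple of compute. A minimal sketch, assuming an illustrative base of 10 (not a measured scaling-law constant):

```python
import math

# If capability ~ log10(compute), each +1 step of capability needs
# 10x the compute. The base-10 constant is illustrative only.
def capability(compute):
    return math.log10(compute)

def compute_needed(level):
    # Inverse of the log relationship: exponential in capability.
    return 10 ** level

print(capability(1_000))                       # 3.0
print(compute_needed(4) / compute_needed(3))   # 10.0, i.e. 10x per step
```

This is why the efficiency gains Satya mentions (tokens per dollar per watt) matter so much: they shift the whole curve rather than climbing it.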

You know, I asked Jensen on this pod if there was any chance over the course of the next five years we would have a compute glut. And he said there's a virtually non-existent chance in the next two to three years. And I assume you guys would both agree with Jensen that while we can't see out five, six, seven years, certainly over the course of the next two to three years, for the reasons we just discussed, there's almost a non-existent chance that you have excess compute.

Well, I mean, I think the cycles of demand and supply in this particular case, you can't really predict, right? I mean, the point is, what's the secular trend? The secular trend is what Sam said, which is, at the end of the day, quite frankly, the biggest issue we are now having is not a compute glut; it's power, and it's sort of the ability to get the builds done fast enough close to power.

So if you can't do that, you may actually have a bunch of chips sitting in inventory that I can't plug in. And in fact, that is my problem today, right? It's not a supply issue of chips. It's actually the fact that I don't have warm shells to plug into.

And so how some supply chain constraints emerge is tough to predict, because the demand is just, you know, tough to predict, right? I mean, it's not like Sam and I would want to be sitting here saying, oh my God, we're short on compute because we just were not that good at being able to project out what the demand would really look like.

So I think that that's, and by the way, there's the worldwide side, right? It's one thing to sort of talk about one segment in one country, but it's about, you know, really getting it out to everywhere in the world. And so there will be constraints, and how we work through them is going to be the most important thing.

It won't be a linear path for sure. There will come a glut for sure. And whether that's like in two to three years or five to six, Satya and I can't tell you, but like, it's going to happen at some point, probably several points along the way. Like, there is something deep about human psychology here and bubbles.

And also, as Satya said, it's such a complex supply chain, weird stuff gets built. The technological landscape shifts in big ways. So, you know, if a very cheap form of energy comes online soon at mass scale, a lot of people are going to be extremely burned with existing contracts they've signed. If we can continue this unbelievable reduction in cost per unit of intelligence, let's say it's been averaging like 40x per year for a given level, you know, that's like a very scary exponent. I think that's really well said.

And you have to hold those two simultaneous truths. We had that happen in 2000, 2001, and yet the internet became much bigger and produced much greater outcomes for society than anybody estimated in that period of time. Yeah, but I think that the one thing that Sam said is not talked about enough, which is the, for example, the optimizations that OpenAI has done on the inference stack for a given GPU.

I mean, it's kind of like, it's, you know, we talk about the Moore's law improvement on one end, but the software improvements are much more exponential than that. I mean, someday we will make an incredible consumer device that can run a GPT-5 or GPT-6 capable model completely locally at a low power draw.

And this is like, so hard to wrap my head around. That will be incredible. And, you know, that's the type of thing I think that scares some of the people who are building, obviously, these large centralized compute stacks. And Satya, you've talked a lot about the distribution, both to the edge, as well as having inference capability distributed around the world.

Yeah, I mean, at least the way I've thought about it is more about really building a fungible fleet. I mean, when I look at the cloud infrastructure business, one of the key things is you have to have two things. One is, in this context, a very efficient token factory, and then high utilization.

That's it. There are two simple things that you need to achieve. And in order to have a high utilization, you have to have multiple workloads that can be scheduled, even on the training. I mean, if you look at the AI pipelines, there's pre-training, there's mid-training, there's post-training, there's RL, you want to be able to do all of those things.

So thinking about fungibility of the fleet is everything for a cloud provider. Okay, so Sam, you referenced, you know, and Reuters was reporting yesterday, that OpenAI may be planning to go public in late '26 or in '27. No, no, no, we don't have anything that specific. I'm a realist. I assume it will happen someday.

But that was, I don't know why people write these reports. We don't have, like, a date in mind. Great to know. I don't have a decision to do this or anything like that. I just assume it's where things will eventually go. But it does seem to me, if you guys were, you know, doing in excess of $100 billion of revenue in '28 or '29, that you at least would be in position.

What? How about '27? Yeah, '27, even better. You are in position to do an IPO at the rumored trillion dollars. Again, just to contextualize for listeners: if you guys went public at 10 times $100 billion in revenue, right, which would be, I think, a lower multiple than Facebook went public at, a lower multiple than a lot of other big consumer companies went public at, that would put you at a trillion dollars.

If you floated 10 to 20% of the company, that raises $100 to $200 billion, which seems like it would be a good path to fund a lot of the growth and a lot of the stuff that we just talked about. So you're, you're not opposed to it. You're not, but you guys are making the company with revenue growth, which is what I would like us to do.
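Brad's back-of-the-envelope IPO math, a 10x multiple on a hypothetical $100 billion of revenue followed by a 10 to 20% float, works out as follows; all figures are his illustrative assumptions, not company guidance:

```python
# Illustrative IPO arithmetic using Brad's hypothetical figures.
revenue_b = 100       # hypothetical annual revenue, in $B
multiple = 10         # assumed revenue multiple at IPO
valuation_b = revenue_b * multiple   # 1000, i.e. $1T

for float_fraction in (0.10, 0.20):
    raised_b = valuation_b * float_fraction
    print(f"float {float_fraction:.0%} -> raise ${raised_b:.0f}B")
# float 10% -> raise $100B
# float 20% -> raise $200B
```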

But no doubt about it. Well, I've also said, I think that this is such an important company. And, you know, there are so many people, including my kids who like to trade their little accounts and they use ChatGPT. And I think having retail investors have an opportunity to buy one of the most important and largest companies.

That would be nice. Honestly, that is probably the single most appealing thing about it to me. That would be really nice. So one of the things I've talked to you both about, shifting gears again: as part of the big, beautiful bill, you know, Senator Cruz had included federal preemption so that we wouldn't have this state patchwork, 50 different laws that mire the industry down in kind of needless compliance and regulation.

Unfortunately, it got killed at the last second by Senator Blackburn, because frankly, I think AI is pretty poorly understood in Washington. And there's a lot of doomerism, I think, that has gained traction in Washington. So now we have state laws like the Colorado AI Act, which goes into full effect in February, I believe, that creates this whole new class of litigants.

Anybody who claims any unfair impact from algorithmic discrimination in a chatbot. So somebody could claim harm for countless reasons. Sam, how worried are you that, you know, having this state patchwork of AI regulation, you know, poses real challenges to our ability to continue to accelerate and compete around the world?

I don't know how we're supposed to comply with that California, sorry, Colorado law. I would love them to tell us, you know; we'd like to be able to do it. But just from what I've read of it, I literally don't know what we're supposed to do.

I'm very worried about a 50 state patchwork. I think it's a big mistake. I think it's, there's a reason we don't usually do that for these sorts of things. I think it'd be bad. Yeah, I mean, I think the fundamental problem of, you know, this patchwork approach is, quite frankly, I mean, between OpenAI and Microsoft, we'll figure out a way to navigate this, right?

I mean, we can figure this out. The problem is anyone starting a startup and trying to, kind of, this just goes to the exact opposite of what I think the intent here is. Obviously safety is very important, making sure that the fundamental, you know, concerns people have are addressed, but there's a way to do that at the federal level.

So, I think if we don't do this, again, you know, the EU will do it, and then that'll cause its own issues. So, I think if the US leads, it's better as, you know, one regulatory framework. For sure. And to be clear, it's not that anyone is advocating for no regulation.

It's simply saying, let's have, you know, agreed upon regulation at the federal level, as opposed to 50 competing state laws, which certainly firebombs the AI startup industry. And I think it makes it super challenging, even for companies like yours, who can afford to defend all these cases. Yeah. And I would just say, quite frankly, my hope is that this time around, even across EU and the United States, like that'll be the dream, right?

Quite frankly, for any European startup. I don't think that's going to happen, Satya. What is that? That would be great. I don't, I wouldn't hold your breath for that one. That would be great. No, but I really think that if you think about it, right, if anyone in Europe is thinking about how they can participate in this AI economy with their companies, this should be the main concern there as well.

So therefore, that's, I hope there is some enlightened approach to it, but I agree with you that, you know, today I wouldn't bet on that. I do think that with Sacks as the AI czar, you at least have a president that I think might fight for that in terms of coordination of AI policy, using trade as a lever to make sure that, you know, we don't end up with overly restrictive European policy. But we shall see.

I think first things first, federal preemption in the United States is pretty critical. You know, we've been down in the weeds a little bit here, Sam. So I want to telescope out a little bit. You know, I've heard people on your team talk about all the great things coming up.

And as you start thinking about much more unlimited compute, ChatGPT-6 and beyond, robotics, physical devices, scientific research, as you look forward to 2026, what do you think surprises us the most? What are you most excited about in terms of what's on the drawing board?

You, I mean, you just hit on a lot of the key points there. I think Codex has been a very cool thing to watch this year. And as these go from multi-hour tasks to multi-day tasks, which I expect to happen next year, what people will be able to do to create software at an unprecedented rate, and really in fundamentally new ways.

I'm very excited for that. I think we'll see that in other industries too. I have like a bias towards coding. I understand that one better, but I think we'll see that really start to transform what people are capable of. I, I hope for very small scientific discoveries in 2026, but if we can get those very small ones, we'll get bigger ones in future years.

That's a really crazy thing to say, that, like, AI is going to make a novel scientific discovery in 2026, even a very small one. This is, like, this is a wildly important thing to be talking about. So I'm excited for that. Certainly robotics and new kinds of computers in future years.

That'll be, that'll be very important. But yeah, my personal bias is if we can really get AI to do science here, that is, I mean, that is super intelligence in some sense. Like if this is expanding the total sum of human knowledge, that is a crazy big deal. Yeah.

I mean, I think one of the things, to use your Codex example: I think the combination of the model capability, I mean, if you think about the magical moment that happened with ChatGPT, it was the UI that met intelligence that just took off, right? It's just, you know, an unbelievable form factor.

And some of it was also that the instruction-following piece of model capability was ready for chat. I think that that's what Codex and, you know, these coding agents are about to help us with, which is: the coding agent goes off for a long period of time, comes back, and then I'm dropped into what I should steer.

Like, one of the metaphors I think we're all sort of working towards is, I do this macro delegation and micro steering. What is that UI that meets this new intelligence capability? And you can see the beginnings of that with Codex, right? The way at least I use it inside GitHub Copilot is, like, you know, it's just a, it's just a different way than the chat interface.

And I think that would be a new way for the human-computer interface, quite frankly; it's probably bigger than that, it might be the departure. That's one reason I'm very excited that we're doing new form factors of computing devices, because computers were not built for that kind of workflow very well.

Certainly, a UI like ChatGPT is wrong for it. But this idea that you can have a device that is sort of always with you, but able to go off and do things and get micro steering from you when it needs it, and have like really good contextual awareness of your whole life and flow.

And I think that'd be cool. And what neither of you has talked about is the consumer use case, which I think a lot about. You know, again, we go onto this device, and we have to hunt and peck through 100 different applications and fill out little web forms, things that really haven't changed in 20 years.

But to just have, you know, a personal assistant, which those of us who actually have one perhaps take for granted, and to give a personal assistant for virtually free to billions of people around the world to improve their lives, whether it's ordering diapers for their kid, or booking their hotel, or making changes in their calendar. I think sometimes it's the pedestrian that's the most impactful.

And as we move from answers to memory and actions, and then the ability to interface with that through an earbud or some other device that doesn't require me to constantly be staring at this rectangular piece of glass, I think it's pretty extraordinary. I think that that's what Sam was teasing.

Yeah, yeah. All right. I got to drop off, unfortunately. Sam, it was great to see you. Thanks for joining us. Congrats again on this big step forward and we'll talk soon. Thanks for letting me crash. See you, Sam. Take care. See ya. As Sam well knows, we're certainly a buyer, not a seller.

But sometimes, you know, I think it's important because the world, you know, we're pretty small. We spend all day long thinking about this stuff, right? And so conviction, it comes from the 10,000 hours we've spent thinking about it. But the reality is we have to bring along the rest of the world and the rest of the world doesn't spend 10,000 hours thinking about this.

And frankly, they look at some things that appear overly ambitious, right? And get worried about whether or not we can pull those things off. So you took this idea to the board in 2019 to invest a billion dollars into OpenAI. Was it a no-brainer in the boardroom? You know, did you have to expend any political capital to get it done?

Dish for me a little bit, like, what that moment was like. Because I think it was such a pivotal moment, not just for Microsoft, not just for the country, but I really do think for the world. It's interesting when you look back. The journey, when I look at it, it's been, you know, we were involved even in 2016 when initially OpenAI started.

In fact, Azure was even the first sponsor, I think. And then they were doing a lot more reinforcement learning at that time. I remember the Dota 2 competition, I think, happened on Azure. And then they moved on to other things. And, you know, I was interested in RL, but quite frankly, you know, it speaks a little bit to your 10,000 hours or the prepared mind.

Microsoft, since 1995, was obsessed. I mean, Bill's obsession for the company was natural language, natural language. I mean, after all, we are a coding company, an information work company. So when Sam in 2019 started talking about text and natural language and transformers and scaling laws, that's when I said, wow, this is interesting. This was a team whose direction of travel was now clear.

It had a lot more overlap with our interest. So in that sense, it was a no brainer. Obviously, you go to the board and say, hey, I have an idea of taking a billion dollars and giving it to this crazy structure, which we don't even kind of understand. What is it?

It's a nonprofit, blah, blah, blah. And saying, go for it. There was a debate. Bill was, kind of rightfully so, skeptical. And then he came around once he saw the GPT-4 demo. That was the thing Bill's talked about publicly, where when he saw it, he said it's the best demo he'd seen since, you know, what Charles Simonyi showed him at Xerox PARC.

But, you know, quite honestly, none of us could. So the moment for me was that, you know, let's go give it a shot. Then seeing the early Codex inside of GitHub Copilot, and seeing just the code completions and seeing it work, that's when I felt like I could go from one to 10, because that was the big call, quite frankly.

One was controversial, but the one to 10 was what really made this entire era possible. And then obviously, the great execution by the team and the productization on their part, our part. I mean, if I think about it, right, the collective monetization reach of GitHub Copilot, ChatGPT, Microsoft 365 Copilot and Copilot, you add those four things, that is it, right?

That's the biggest sort of AI set of products out there on the planet. And that's, you know, what obviously has let us sustain all of this. And I think not many people know that your CTO, Kevin Scott, you know, an ex-Googler, lives down here in Silicon Valley. And to contextualize it, right, Microsoft had missed out on search, had missed out on mobile, and when you became CEO had almost missed out on the cloud, right?

You've described it as catching the last train out of town to capture the cloud. And I think you were pretty determined to have eyes and ears down here so you didn't miss the next big thing. So I assume that Kevin played a good role for you as well. Absolutely. He found DeepSeek and OpenAI.

Yeah, I mean, in fact, I would say it was Kevin's conviction, and Kevin was also skeptical, like that was the thing. I always watch for people who are skeptical, who change their opinion, because to me, that's a signal. So I'm always looking for someone who's a non-believer in something, and then suddenly changes.

And then they get excited about it; I have all the time for that, because I'm then curious, why? And so Kevin started, like all of us, kind of skeptical, right? No, I mean, in some sense, it defies the, you know, we've all gone to school and said, God, there must be an algorithm to crack this, versus just scaling laws and throwing compute at it.

But quite frankly, Kevin's conviction that this is worth going after is one of the big things that drove this. Well, we talk about, you know, that investment that it's now worth 130 billion, I suppose, could be worth a trillion someday, as Sam says. But it really, in many ways, understates the value of the partnership, right?

So you have the value in the rev share billions per year going to Microsoft. You have the profits you make off the $250 billion of the Azure compute commitment from OpenAI. And of course, you get huge sales from the exclusive distribution of the API. So talk to us how you think about the value across those domains, especially how this exclusivity has brought a lot of customers who may have been on AWS to Azure.

Yeah, no, absolutely. I mean, so to us, if I look at it, you know, aside from all the equity parts, the real strategic thing that comes together and that remains going forward, is that stateless API exclusivity on Azure that helps quite frankly, both OpenAI and us and our customers.

Because when somebody in the enterprise is trying to build an application, they want an API that's stateless, they want to mix it up with compute and storage, put a database underneath it to capture your state and build a full workload. And that's where, you know, Azure coming together with this API.

And so what we're doing even with Azure Foundry, right? Because in some sense, let's say you want to build an AI application. The key thing is, how do you make sure that the evals of what you're doing with AI are great? So that's where you need even a full app server in Foundry.

That's what we have done. And so therefore, I feel that that is the way we will go to market in our infrastructure business. The other side of the value capture for us is going to be incorporating all this IP. Not only do we have the exclusivity of the model in Azure, but we have access to the IP.

I mean, having royalty-free access, even forgetting all the know-how and the knowledge side of it, having royalty-free access all the way through seven more years gives us a lot of flexibility business-model-wise. It's kind of like having a frontier model for free. In some sense, if you're an MSFT shareholder, that's kind of where you should start from: to think about, we have a frontier model that we can then deploy, whether it's in GitHub, whether it's in M365, whether it's in our consumer Copilot, then add to it our own data, post-train it.

So that means we can have it embedded in the weights there. And so therefore, we are excited about the value creation on both the Azure and the infrastructure side, as well as in our high value domains, whether it is in health, whether it's in knowledge work, whether it's in coding or security.

You've been consolidating the losses from OpenAI. You know, I think you just reported earnings yesterday. I think you consolidated 4 billion of losses in the quarter. Do you think that investors are, I mean, they may even be attributing negative value, right? Because of the losses, you know, as they apply their multiple of earnings, Satya.

Whereas I hear this, and I think about all of those benefits we just described, not to mention the look through equity value that you own in a company that could be worth a trillion unto itself. You know, do you think that the market is kind of misunderstanding the value of OpenAI as a component of Microsoft?

Yeah, that's a good one. So I think the approach that Amy is going to take is full transparency, because at some level, I'm no accounting expert. So therefore, the best thing to do is to give all the transparency. I think that's why, this time around as well, we gave the non-GAAP and GAAP numbers, so that at least people can see the EPS numbers.

Because the common sense way I look at it, Brad, is simple. If you've invested, let's call it $13.5 billion, you can, of course, lose $13.5 billion. Right, right. But you can't lose more than $13.5 billion. At least the last time I checked, that's what you have at risk.

You could also say, hey, the $135 billion that is, you know, today, our equity stake, you know, is sort of illiquid, what have you. We don't plan to sell it. So therefore, it's got risk associated with it. But the real story I think you are pulling is all the other things that are happening.

What's happening with Azure growth, right? Would Azure be growing if we had not had the OpenAI partnership? To your point, the number of customers who came from other clouds for the first time, right? This is the thing that really we benefited from. What's happening with Microsoft 365?

In fact, one of the things about Microsoft 365 was, what was the next big thing after E5? Guess what? We found it in Copilot. It's bigger than any suite. Like, you know, we talk about penetration and usage and the pace. It's bigger than anything we have done in information work, which we've been at for decades.

And so we feel very, very good about the opportunity to create value for our shareholders. And then at the same time, be fully transparent so that people can look through the, what are the losses? I mean, who knows what the accounting rules are, but we will do whatever is needed and people will then be able to see what's happening.

But a year ago, Satya, there were a bunch of headlines that Microsoft was pulling back on AI infrastructure, right? Fair or unfair, they were out there. You know, and perhaps you guys were a little more conservative, a little more skeptical of what was going on. Amy said on the call last night, though, that you've been short power and infrastructure for many quarters, and she thought that you would catch up, but you haven't caught up because demand keeps increasing.

So I guess the question is, were you too conservative, you know, knowing what you know now and what's the roadmap from here? Yeah, it's a great question because, see, the thing that we realized, and I'm glad we did, is that the concept of building a fleet that truly was fungible, fungible for all the parts of the life cycle of AI, fungible across geographies, and fungible across generations, right?

So because one of the key things is, when you have, let's take even what Jensen and team are doing, right? I mean, they're at a pace. In fact, one of the things I like is the speed of light, right? We now have GB300s, you know, that we're bringing up.

So you don't want to have ordered a bunch of GB200s that are getting plugged in only to find the GB300s are in full production. So you kind of have to make sure you're continuously modernizing, you're spreading the fleet all over, you are really truly fungible by workload, and you're adding to that the software optimizations we talked about.

So to me, that is the decision we made. And we said, look, sometimes you may have to say no to some of the demand, including some of the OpenAI demand, right? Because sometimes, you know, Sam may say, hey, build me a dedicated, you know, big, multi-gigawatt data center in one location for training.

Makes sense from an OpenAI perspective. Doesn't make sense from a long-term infrastructure build-out for Azure. And that's where I thought we did the right thing to give them flexibility to go procure that from others, while maintaining, again, a significant book of business from OpenAI. But more importantly, giving ourselves the flexibility with other customers, our own 1P.

Remember, like, one of the things we don't want to be short on, you know, we talk about Azure. In fact, sometimes our investors are overly fixated on the Azure number. But remember, for me, the high-margin business is Copilot. It is Security Copilot.

It's GitHub Copilot. It's the healthcare Copilot. So we want to make sure we have a balanced way to approach the returns that the investors have. And so that's kind of one of the other misunderstood things, perhaps, in our investor base in particular, which I find pretty strange and funny, because I think they want to hold Microsoft because of the portfolio we have.

But man, are they fixated on the growth number of one little thing called Azure. On that point, Azure grew 39% in the quarter on a staggering $93 billion run rate. And, you know, I think that compares to GCP that grew at 32% and AWS closer to 20%. But, because you did give compute to 1P and because you did give compute to research, it sounds like Azure could have grown 41% or 42% had you had more compute to offer?

Absolutely. Absolutely. There's no question. There's no question. So that's why I think the internal thing is to balance out what we think, again, is in the long-term interests of our shareholders and also to serve our customers well. And also not to kind of, you know, one of the other things was, you know, people talk about concentration risk, right?

We obviously want a lot of OpenAI, but we also want other customers. And so we're shaping the demand here. You know, we are in a supply, you know, we're not demand-constrained, we're supply-constrained. So we are shaping the demand such that it matches the supply in the optimal way with a long-term view.

So to that point, Satya, you talked about $400 billion, an incredible number, of remaining performance obligations last night. You said that's your booked business today; it'll surely go up tomorrow as sales continue to come in. And you said your need to build out capacity just to serve that backlog is very high.

You know, how diversified is that backlog to your point? And how confident are you that that $400 billion does turn into revenue over the course of the next couple of years? Yeah, that $400 billion, it has a very short duration as Amy explained. It's the two-year duration on average.

So that's definitely our intent. That's one of the reasons why we're spending the capital: with high certainty, we just need to clear the backlog. And to your point, it's pretty diversified, both on the 1P and the 3P. Our own demand is, quite frankly, pretty high for our own first party.

And even amongst third party, one of the things we now are seeing is the rise of all the other companies building real workloads that are scaling. And so given that, I think we feel very good. I mean, obviously, that's one of the best things about RPO is you can be planful, quite frankly.

And so therefore, we feel very, very good about building. And then this doesn't include, obviously, the additional demand that we're already going to start seeing, including the $250 billion commitment, you know, which will have a longer duration, and we will build accordingly. Right. So there are a lot of new entrants, right, in this race to build out compute: Oracle, CoreWeave, Crusoe, et cetera.

And normally we think that will compete away margins, but you've somehow managed to build all this out while maintaining healthy operating margins at Azure. So I guess the question is for Microsoft, how do you compete in this world that is where people are levering up, taking lower margins while balancing that profit and risk?

And do you see any of those competitors doing deals that cause you to scratch your head and say, oh, we're just setting ourselves up for another boom and bust cycle? I mean, at some level, the good news for us has been competing even as a hyperscaler every day. You know, there's a lot of competition, right, between us and Amazon and Google on all of these, right?

I mean, it's sort of one of those interesting things, which is everything is a commodity, right? Compute, storage. I remember everybody saying, wow, how can there be a margin? Except at scale, nothing is a commodity. And so therefore, yes, we have to have a cost structure; our supply chain efficiency, our software efficiencies all have to continue to compound in order to make sure that there are margins at scale.

And to your point, one of the things that I really love about the OpenAI partnership is it's gotten us to scale, right? This is a scale game. When you have the biggest workload there is running on your cloud, that means not only are we going to learn faster on what it means to operate with scale, that means your cost structure is going to come down faster than anything else.

And guess what? That'll make us price competitive. And so I feel pretty confident about our ability to, you know, have margins. And this is where the portfolio helps. I've always said, you know, I've been forced into giving the Azure numbers, right? Because at some level, I've never thought of allocating compute.

I mean, my capital allocation is for the cloud, from whether it is Xbox cloud gaming, or Microsoft 365, or for Azure, it's one capital outlay. And then everything is a meter, as far as I'm concerned, from an MSFT perspective. It's a question of, hey, the blended average of that should match the operating margins we need as a company.

Because after all, otherwise, we're not a conglomerate. We're one company with one platform logic. It's not running five, six different businesses. We're in these five, six different businesses only to compound the returns on the cloud and AI investment. Yeah, I love that line. Nothing is a commodity at scale.

You know, there's been a lot of ink and time spent, even on this podcast with my partner Bill Gurley, talking about circular revenues, including Microsoft's Azure credits to OpenAI that were booked as revenue. Do you see anything going on like the AMD deal, you know, where they traded 10% of their equity for a deal, or the Nvidia deal?

Again, I don't want to be overly fixated on concern, but I do want to address head on what is being talked about every day on CNBC and Bloomberg. And there are a lot of these overlapping deals that are going on out there. Do you, do you, when you think about that in the context of Microsoft, does any of that worry you again, as to the sustainability or durability of the AI revenues that we see in the world?

Yeah. I mean, first of all, our investment of, let's say, that $13.5 billion, which was all the training investment, was not booked as revenue. That is the reason why we have the equity percentage. That's the reason why we have the 27%, or $135 billion.

So that was not something that somehow made it into Azure revenue. In fact, if anything, the Azure revenue was purely the consumption revenue of ChatGPT and anything else, and the APIs they put out that they monetized and we monetized. To your point about others, you know, to some degree, it's always been there in terms of vendor financing, right?

So it's not like a new concept, when someone's building something and they have a customer who's also building something, but they need financing. You know, it's taking some exotic forms, which obviously need to be scrutinized by the investment community.

But that said, you know, vendor financing is not a new concept. Interestingly enough, we have not had to do any of that, right? I mean, we really either invested in OpenAI and essentially got an equity stake in return for compute, or essentially sold them great pricing of compute in order to be able to sort of bootstrap them.

But, you know, others choose to do so differently. And I think circularity ultimately will be tested by demand, because all this will work as long as there is demand for the final output of it. And up to now, that has been the case. Certainly, certainly. Well, I want to shift. You know, as you said, over half your business is software applications, and I want to think about software and agents. You know, last year on this pod, you made a bit of a stir by saying that much of application software was this thin layer that sat on top of a CRUD database.

So the notion that business applications exist, that's probably where they will all collapse, right? In the agent era. Because if you think about it, right, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents. Public software companies are now trading at about 5.2 times forward revenue.

So that's below their 10 year average of seven times, despite the markets being at all time highs. And there's lots of concern that SaaS subscriptions and margins may be put at risk by AI. So how today is AI affecting the growth rates of your software products of, you know, those core products?

And specifically, as you think about database, Fabric, security, Office 365. And then the second question, I guess, is: what are you doing to make sure that software is not disrupted, but is instead superpowered by AI? Yeah, I think that's right. So the last time we talked about this, my point really was that the architecture of SaaS applications is changing, because this agent tier is replacing the old business logic tier.

Because if you think about it, the way we built SaaS applications in the past was you had the data, the logic tier, and the UI all tightly coupled. And AI, quite frankly, doesn't respect that coupling, because it requires you to be able to decouple. And yet the context engineering is going to be very important.

I mean, take, you know, something like Office 365. One of the things I love about our Microsoft 365 offering is it's low ARPU, high usage, right? I mean, if you think about it, right, Outlook or Teams or SharePoint, you pick Word or Excel, like people are using it all the time, creating lots and lots of data, which is going into the graph, and our ARPU is low.

So that's sort of what gives me real confidence that this AI tier, I can meet it by exposing all my data. In fact, one of the fascinating things that's happened, Brad, with both GitHub and Microsoft 365 is thanks to AI, we are seeing all time highs in terms of data that's going into the graph or the repo.

I mean, think about it: the more code that gets generated, whether it is Codex or Claude or wherever, where is it going? GitHub. More PowerPoints that get created, Excel models that get created, all these artifacts and chat conversations. Chat conversations are the new docs. They're all going into the graph, and all of that is needed, again, for grounding.

So that's what, you know, you turn into a forward index, into an embedding, and basically that semantics is what you use to ground any agent request. And so I think the next generation of SaaS applications will have to sort of, well, if you are high ARPU, low usage, then you have a little bit of a problem.
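As an aside, the grounding flow being described here, artifacts flowing into a store, getting embedded, and an agent request being matched against the nearest ones, can be illustrated with a toy example. Everything below (the bag-of-words stand-in for a learned embedding, the `ground` helper, the sample corpus) is hypothetical, not Microsoft's actual pipeline:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy embedding: bag-of-words term counts stand in for a learned vector.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[t] * b[t] for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def ground(request: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank stored artifacts (docs, chats, code) by similarity to the request
    # and return the top-k as grounding context for the agent.
    q = embed(request)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

corpus = [
    "Q3 revenue model in Excel with growth assumptions",
    "chat conversation about the launch plan for the new device",
    "pull request adding retry logic to the billing service",
]
context = ground("what were the growth assumptions in the revenue model", corpus, k=1)
```

A real system would use a learned embedding model and an approximate nearest-neighbor index rather than word counts, but the shape of the flow is the same: embed the request, rank stored artifacts by similarity, and hand the top hits to the agent as context.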

But if you are, we are the exact opposite. We are low ARPU, high usage. And I think that anyone who can structure that and then use this AI as in fact an accelerant, because I mean, like, if you look at the M365 Copilot price, I mean, it's higher than any other thing that we sell.

And yet it's getting deployed faster and with more usage. And so I feel very good. Oh, or coding, right? Who would have thought? In fact, take GitHub, right? What GitHub did in the first, I don't know, 10 or 15 years of its existence, it basically did again in the last year, just because coding is no longer a tool; it's more a substitute for wages.

And so it's a very different type of business model, even. Kind of thinking about the stack and where value gets distributed. So until very recently, right, clouds largely ran pre-compiled software, you didn't need a lot of GPUs, and most of the value accrued to the software layer, to the database, to the applications like CRM and Excel.

But it does seem in the future that these interfaces will only be valuable, right, if they're intelligent, right? If they're pre-compiled, they're kind of dumb. The software's got to be able to think and to act and to advise. And that requires, you know, the production of these tokens, you know, dealing with the ever-changing context.

And so in that world, it does seem like much more of the value will accrue to the AI factory, if you will, to, you know, Jensen, in producing, helping to produce these tokens at the lowest cost, and to the models. And maybe the agents or the software will accrue a little bit less of the value in the future than they've accrued in the past.

Steelman for me why that's wrong. Yeah. So I think there are two things that are necessary to drive the value of AI. One is what you described first, which is the token factory. And even if you unpack the token factory, it's the hardware, silicon, and system. But then it is about running it most efficiently with the system software, with all the fungibility, max utilization.

That's where the hyperscaler's role is, right? What is a hyperscaler? If you sort of said, hey, I want to run a hyperscaler, yeah, you could say, oh, it's simple, I'll buy a bunch of servers and wire them up and run it. It's not that, right?

I mean, if it was that simple, then there would have been more than three hyperscalers by now. So the hyperscaler is the know-how of running that max util and the token factories. And it's not, and by the way, it's going to be heterogeneous. Obviously, Jensen's super competitive. Lisa is going to come, you know, Hawk's going to produce things from Broadcom, we will all do our own.

So there's going to be a combination. So you want to run ultimately a heterogeneous fleet that is maximized for token throughput and efficiency and so on. So that's kind of one job. The next thing is what I call the agent factory. Remember that a SaaS application in the modern world is driving a business outcome.

It knows how to most efficiently use the tokens to create some business value. In fact, GitHub Copilot is a great example of it, right? If you think about it, the auto mode of GitHub Copilot is the smartest thing we've done, right? It chooses, based on the prompt, which model to use for a code completion or a task handoff, right?

And you do that not just by choosing in some round-robin fashion; you do it because of the feedback cycle you have, the evals, the data loops, and so on. So the new SaaS applications, as you rightfully said, are intelligent applications that are optimized for a set of evals and a set of outcomes, and then know how to use the token factory's output most efficiently.
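For illustration, a router in the spirit of the auto mode described above can be sketched in a few lines. The model names, the word-count heuristic, the 0.2 override threshold, and the moving-average update are all assumptions for the sketch, not GitHub Copilot's actual logic:

```python
from dataclasses import dataclass, field

@dataclass
class Router:
    # Hypothetical "auto mode" router: pick a model per prompt with a cheap
    # heuristic, then let observed eval feedback override it (not round-robin).
    scores: dict = field(
        default_factory=lambda: {"fast-small": 0.0, "frontier-large": 0.0}
    )

    def choose(self, prompt: str) -> str:
        # Heuristic: short completion-style prompts go to the small model;
        # long, multi-step task handoffs go to the large one.
        base = "fast-small" if len(prompt.split()) < 20 else "frontier-large"
        # Eval feedback overrides the heuristic once one model clearly wins.
        best = max(self.scores, key=self.scores.get)
        return best if self.scores[best] - self.scores[base] > 0.2 else base

    def record(self, model: str, passed: bool) -> None:
        # Exponential moving average of eval pass rate per model: this is the
        # "feedback cycle" of evals and data loops, in miniature.
        self.scores[model] = 0.9 * self.scores[model] + 0.1 * (1.0 if passed else 0.0)

r = Router()
picked = r.choose("complete this line of code")
```

The point of the sketch is the data loop: routing decisions get cheaper-than-frontier defaults from the heuristic, while the per-model eval scores accumulated via `record` steer future choices.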

Sometimes latency matters, sometimes performance matters. And knowing how to do that trade in a smart way is where the SaaS application value is. But overall, it is going to be true that there is a real marginal cost to software this time around. It was there in the cloud era too.

When we were doing, you know, CD-ROMs, there wasn't much of a marginal cost. You know, with the cloud, there was, and this time around, it's a lot more. And so therefore, the business models have to adjust, and you have to do these optimizations for the agent factory and the token factory separately.

You have a big search business that most people don't know about, you know, but it turns out that that's probably one of the most profitable businesses in the history of the world because people are running lots of searches, billions of searches, and the cost of completing a search, if you're Microsoft, is many fractions of a penny, right?

It doesn't cost very much to complete a search. But the comparable query or prompt stack today when you use a chatbot looks different, right? So I guess the question is, assume similar levels of revenue in the future for those two businesses, right? Do you ever get to a point where kind of that chat interaction has unit economics that are as profitable as search?

I think that's a great point because, see, search was pretty magical in terms of its ad unit and its cost economics, because there was the index, which was a fixed cost that you could then amortize in a much more efficient way. Whereas this one, you know, each chat, to your point, you have to burn a lot more GPU cycles, both with the intent and the retrieval.
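The cost asymmetry being described can be made concrete with back-of-the-envelope arithmetic. Every number below is a made-up assumption for illustration, not an actual Microsoft or Google figure:

```python
# Illustrative, assumed numbers: compare the per-query cost of an amortized
# search index against per-token chat inference.
index_fixed_cost = 2_000_000_000        # assumed yearly cost to build/refresh the index
searches_per_year = 2_000_000_000_000   # assumed query volume amortizing that fixed cost
search_serving_cost = 0.0002            # assumed marginal serving cost per search

cost_per_search = index_fixed_cost / searches_per_year + search_serving_cost

gpu_cost_per_1k_tokens = 0.01           # assumed blended inference cost
tokens_per_chat_answer = 1_500          # intent + retrieval + generation

cost_per_chat = gpu_cost_per_1k_tokens * tokens_per_chat_answer / 1_000
```

Under these assumptions a chat answer costs roughly ten times more to serve than a search query, which is consistent with the point that early chat monetization has leaned on freemium and subscriptions rather than the search ad unit.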

So the economics are different. So I think, you know, that's why a lot of the early economics of chat have been the freemium model and subscription, even on the consumer side. So we are yet to discover, whether it's agentic commerce or whatever is the ad unit, how it's going to be litigated.

But at the same time, the fact is, at this point, you know, I use search for very, very specific navigational queries. I used to say I use it a lot for commerce, but that's also shifting to my, you know, Copilot. Look at the Copilot mode in Edge and Bing, or Copilot.

Now they're blending in. So I think that, yes, there is going to be a real litigation, just like the SaaS disruption we talked about, where in the beginning the cheese is being moved a little in the consumer economics of that category. Right. I mean, and given that it's multi-trillion dollar, this is the thing that's driven all the economics of the internet, right?

When you move the economics of search for both you and Google, and it converges on something that looks more like a personal agent, a personal assistant chat, you know, that could end up being much, much bigger in terms of the total value delivered to humanity. But the unit economics, you're not just amortizing this one-time fixed index, right?

That's right. That's right. I think the consumer... Yeah, the consumer category, because you're pulling on a thread of something that I think a lot about, right? Which is, during these disruptions, you kind of have to have a real sense of what the category economics are. Is it winner take all?

And both matter, right? The problem in the consumer space always is that there's a finite amount of time. And so, if I'm not doing one thing, I'm doing something else. And if your monetization is predicated on human interaction in particular, if there were truly agentic stuff even on the consumer side, that could be different.

Whereas in the enterprise, one, it's not winner take all. And two, it is going to be a lot more friendly for agentic interaction. So take, for example, per seat versus consumption: the reality is agents are the new seats. And so you can think of the enterprise monetization as much clearer.

The consumer monetization, I think, is a little more murky. You know, we've seen a spate of layoffs recently, with Amazon announcing some big layoffs this week. You know, the Mag 7 has had little job growth over the last three years, despite really robust top lines. You know, you didn't really grow your headcount from '24 to '25.

It's around 225,000. You know, many attribute this to normal getting fit, you know, just getting more efficient coming out of COVID. And I think there's a lot of truth to that. But do you think part of this is due to AI? Do you think that AI is going to be a net job creator?

And do you see this being a long-term positive for Microsoft productivity? Like it feels to me like the pie grows, but you can do all these things much more efficiently, which either means your margins expand or it means you reinvest those margin dollars and you grow faster for longer.

I call it the golden age of margin expansion. I'm a firm believer that the productivity curve does and will bend, in the sense that we will start seeing the work, and the workflow in particular, change, right? There's going to be more agency for you at a task level to get to job complete because of the power of these tools in your hand.

And that I think is going to be the case. So that's why, even internally, for example, when you talked about our allocation of tokens, we want to make sure that everybody at Microsoft gets it standard issue, right? All of them have Microsoft 365 to the hilt, in the most unlimited way, and have GitHub Copilot, so that they can really be more productive.

But here is the other interesting thing, Brad, we're learning: there's a new way to even learn, right? Which is, you know, how to work with agents. So that's kind of like when Word, Excel, and PowerPoint all first showed up in Office, we learned how to rethink, let's say, how we did a forecast, right?

I mean, think about it, right? In the 80s, the forecasts were inter-office memos and faxes and what have you. And then suddenly, somebody said, Oh, here's an Excel spreadsheet, let's put in an email, send it around, people enter numbers, and there was a forecast. Similarly, right now, any planning, any execution starts with AI, you research with AI, you think with AI, you share with your colleagues, and what have you.

So there's a new artifact being created and a new workflow being created. And the pace of change of the business process matching the capability of AI, that's where the productivity efficiencies come from. And so organizations that can master that are going to be the biggest beneficiaries, whether it's in our industry or, quite frankly, in the real world.

And so is Microsoft benefiting from that? You know, so let's think about a couple years from now, five years from now. At the current growth rate it'll be sooner, but let's call it five years from now, your top line is twice as big as it is today, Satya.

How many more employees will you have if you grow revenue like that? Like, one of the best things right now is these examples that I'm hit with every day from the employees of Microsoft. There is this person who leads our network operations, right? I mean, think about the amount of fiber we have had to put in for, like, this two-gigawatt data center we just built out, Fairwater, right?

And the amount of fiber there, the AI WAN and what have you, it's just crazy, right? And it turns out this is a real-world asset. There are, I think, 400 different fiber operators we're dealing with worldwide. Every time something happens, we're literally going and dealing with all these DevOps pipelines.

The person who leads it, she basically said to me, you know what, there's no way I'll ever get the headcount to go do all this. And forget headcount, even if the budget were approved, I couldn't hire all these folks. So she did the next best thing. She just built herself a whole bunch of agents to automate the DevOps pipeline for handling the maintenance.

That is an example of, to your point, a team with AI tools being able to get more productivity. So to your question, I will say we will grow headcount. But the way I look at it is that the headcount we grow will come with a lot more leverage than the headcount we had pre-AI.

And that's the adjustment, I think, that structurally you're seeing first. You called it getting fit. I think of it as more getting to a place where everybody is now learning how to rethink how they work. And it's the how, not even the what. Even if the what remains constant, how you go about it has to be relearned.

And it's the unlearning and learning process that I think will take the next year or so. Then the headcount growth will come with max leverage. Yeah, I think we're on the verge of incredible economic productivity growth. It does feel like, when I talk to you or Michael Dell, that most companies aren't even really in the first inning, maybe the first batter of the first inning, in reworking those workflows to get maximum leverage from these agents.

But it sure feels like over the course of the next two to three years, that's where a lot of the gains are going to start coming from. And again, I certainly am an optimist. I think we're going to have net job gains from all of this. But for those companies, they'll just be able to grow their number of employees slower than their top line.

That is the productivity gain to the company. Aggregate all that up. That's the productivity gain to the economy. And then we'll just take that consumer surplus and invest it in creating a lot of things that didn't exist before. A hundred percent. A hundred percent. Even in software development, right?

One of the things I look at is, no one would say we're going to have a challenge having, you know, more software engineers contribute to our society. Because the reality is, look at the IT backlog in any organization. And so the question is, all these software agents are hopefully going to help us go and take a whack at all of the IT backlog we have.

And think of that dream of evergreen software. That's going to be true. And then think about the demand for software. So I think, to your point, the levels of abstraction at which knowledge work happens will change. We will adjust the work and the workflow to that, and that will then adjust the demand for the products of this industry.

I'm going to end on this, which is really around the reindustrialization of America. I've said that if you add up the $4 trillion of CapEx that you and so many of the big US tech companies are investing over the course of the next four or five years, it's about 10 times the size of the Manhattan Project on an inflation-adjusted or GDP-adjusted basis.

So it's a massive undertaking for America. But the president has made it a real priority of his administration to recut the trade deals, and it looks like we now have trillions of dollars coming in; the South Koreans committed $350 billion of investment into the United States just today. And when you think about what you see going on in power in the United States, both production, the grid, et cetera, and what you see going on in terms of this reindustrialization, how do you think this is all going?

And, you know, maybe just reflect on where we're landing the plane here and your level of optimism for the few years ahead. Yeah, I feel very, very optimistic, because in some sense, you know, Brad Smith was telling me about the economy around our Wisconsin data center.

It's fascinating. Most people think, oh, a data center, that's just going to be one big warehouse, and it's, you know, fully automated. A lot of that is true. But first of all, what went into the construction of that data center, and the local supply chain of that data center, that is in some sense the reindustrialization of the United States as well.

Even before you get to what is happening in Arizona with the TSMC plants or what was happening with Micron and their investments in memory or Intel and their fabs and what have you, right? There's a lot of stuff that we will want to start building. Doesn't mean we won't have trade deals that make sense for the United States with other countries.

But to your point, the reindustrialization for the new economy, and making sure we have all the skills and all that capacity from power on down, I think is very important for us. And the other thing that I also say, Brad, is important, and this is something I've had a chance to talk to President Trump as well as Secretary Lutnick and others about, is that it's important to recognize that we as hyperscalers of the United States are also investing around the world.

So in other words, the United States is the biggest investor of compute factories or token factories around the world. But not only are we attracting foreign capital to invest in our country so that we can reindustrialize, we are helping, whether it's in Europe or in Asia or elsewhere, in Latin America and in Africa, with our capital investments, bringing the best American tech to the world that they can then innovate on and trust.

And so both of those, I think, really bode well for the United States long term. I'm grateful for your leadership. Sam is really helping lead the charge at OpenAI for America. I think this is a moment where, as I look ahead, you know, you can see 4% GDP growth on the horizon.

We'll have our challenges, we'll have our ups and downs. These tend to be stairs, you know, stairs up rather than a line straight up and to the right. But I, for one, see a level of coordination going on between Washington and Silicon Valley, between big tech and the reindustrialization of America that gives me cause for incredible hope.

Watching what happened this week in Asia, led by the president and his team, and then watching what's happening here is super exciting. So thanks for making the time. We're big fans. Thanks, Satya. Thanks so much, Brad. Thank you. As a reminder to everybody, just our opinions, not investment advice.