Back to Index

Satya Nadella | BG2 w/ Bill Gurley & Brad Gerstner


Chapters

0:00 Intro
1:31 Becoming Microsoft CEO
6:42 Satya’s Memo to CEO Committee
10:42 Satya’s Advantage as a CEO
11:34 Advice for CEOs
15:01 Microsoft’s Investment in OpenAI
19:42 AI Arms Race
23:55 Legacy Search and Consumer AI
28:07 The Future of AI Agents
38:32 Near-Infinite Memory
39:47 Copilot Approach to AI Adoption
50:26 Leveraging AI within Microsoft
56:03 CapEx
1:00:20 The Cost of Model Scaling and Inference
1:15:15 OpenAI Conversion to For-Profit
1:18:05 Next Steps for OpenAI
1:19:43 Open vs. Closed and Safe AI

Transcript

I think the company of this generation has already been created, which is OpenAI, in some sense. It's kind of like the Google or the Microsoft or the Meta of this era. Well, it's great to be with you. When Bill and I were talking, Satya, and looking back at your tenure as CEO, it was really quite astonishing.

You know, you started at Microsoft in 1992, for those who may not know. You took over online in 2007. You launched Bing Search in 2009. You took over servers and launched Azure in 2011. And you became CEO in 2014. And it was just before that that a pretty now well-known essay entitled "The Irrelevance of Microsoft" had just been published.

Now, since then, you've taken Azure from $1 billion to $66 billion run rate. The total revenues of the business are up 2.5x. The total earnings are up over 3x. And the share price is up almost 10x. You've added almost $3 trillion of value to Microsoft shareholders. And as you reflect back on that over the course of the last decade, what's the single greatest change that you thought you could do then to unlock the value, to change the course of Microsoft, which has been just an extraordinary success?

Yeah. So the way I've always thought, Brad, about sort of that entire period of time is, in some sense, from '92 to now, it's just one continuous sort of period for me. Although, obviously, 2014 was a big event with the accountability that goes with it. But what I felt was essentially to pattern match when we were successful and when we were not, and do more of the former and less of the latter.

I mean, in some sense, it's as simple as that, because I've sort of lived through... When I joined in '92, that was just after Windows 3.1 was launched. I think Windows 3.1 was May of '92, and I joined in November of '92. In fact, I was working at Sun, and I was thinking of going to business school, and I got an offer at Microsoft.

And I said, "Ah, maybe I'll go to business school." And then I somehow or the other, the boss who was hiring me convinced me to just come to Microsoft. And it was like the best decision, because the thing that convinced me was the PDC of '91 in Moscone Center, when I went and saw the basically Windows NT, it was not called Windows NT at that time, and x86.

And I said, "God, this, what's happening in the client will happen on the server." And this is a platform company and a partner company, and they're going to ride the wave. And so that was sort of the calculus then. Then, of course, the web happened. We managed that transition.

We got a lot of things right. Like, for example, I mean, we recognized the browser. We competed and got that browser thing eventually right. We missed search, right? We sort of felt like, wow, the big thing there was the browser because it felt more like an operating system, but we didn't understand the new category, which is the organizing layer of the internet happened to be search.

Then we kind of were there in mobile, but we really didn't get it right. Obviously, the iPhone happened, and we got the cloud right. So if I look at it, and then we are here, we are on the fourth one on AI. In all of those cases, I think the lesson is not to do things just because somebody else got it and we feel we need to do the same.

Sometimes it's okay to fast follow, and it worked out, but you shouldn't do things out of envy. That was one of the hardest lessons I think we've learned. Do it because you have permission to do this, and you can do it better. Like, both of those matter to me, the brand permission.

Like, you know, Geoffrey Moore once said this to me, which is, "Hey, why don't you go do things which your customers expect you to do?" I love that, right, which is cloud was one such thing the customers expected of us. You know, in fact, when I first remember showing up in Azure, people would tell me, "Oh, it's a winner-take-all, it's all over, and Amazon's won it all." I never believed it, because after all, I'd competed against Oracle and IBM in the servers, and I always felt like, look, it's just never going to be winner-take-all when it comes to infrastructure.

And all you need to do is just get into the game with a value proposition. So, in some sense, a lot of these transitions for me have been about making sure you kind of recognize your structural position, you really get a good understanding of where you have permission from those partners and customers who want you to win, and go do those obvious things first.

And I think, you know, you could call it, "Hey, that's the basics of strategy," but that's sort of what I feel, I think, at least has been key. And, you know, there are things that we cultivated, to your point, Brad, which is, you know, there's the sense of purpose and mission, the culture that you need to have. All those are, I would say, the necessary conditions to even have a real chance for shots on goal.

But I would just say getting that strategy right by recognizing your structural position and permission is probably what I have, you know, hopefully done a reasonable job on. - Satya, before we move on to AI, I have a couple of questions about the transition and just echoing what Brad said.

I mean, I think it's definitive that you may be the best CEO hire of all time. I mean, $3 trillion is unmatched. So one, I read an article that suggested, and maybe this isn't true, so you tell us, that you wrote a 10-page memo to the committee that was choosing the CEO.

Is that true? And what was in the memo? - Yeah, it is true. Yeah, because I think our CEO process was pretty open out there. And at that time, quite frankly, it was definitely not obvious to me, because in the beginning, remember, I never thought that first, Bill would leave, and then second, Steve would leave, right?

It's not like you join Microsoft and think, oh yeah, you know, founders are going to retire and there's going to be a job opening and you can apply for it. I mean, that was not the mental model growing up at Microsoft. So when Steve decided to retire, I forget now, I think in August of 2013, there was a pretty big shock.

And you know, at that time I was running our server and tools business, as it was called, in which Azure was housed and so on, and I was having a lot of fun. And I didn't even put up my hand first, saying, oh, I want to be CEO, because it was not even like a thing that I was thinking that'll happen.

And then eventually the board came around and asked, and there were a lot of other candidates at that time, even internally at Microsoft. And so at some point in that process, they asked us to write. And quite frankly, it's fascinating. That memo, everything I said in it, right? You know, one of the terms I used in that memo, which I subsequently used even in the first piece of email I sent out to the company, was ambient intelligence and ubiquitous computing.

And I dumbed it down to mobile first, cloud first later, because, you know, my PR folks came and said, what the heck is this? Nobody will understand what ambient intelligence and ubiquitous computing is. But that was the mobile first, cloud first.

How do you really go where the secular shift is? Then understanding our structural position, thinking about Microsoft Cloud, what are the assets we have? Why is M365? In fact, one of the things I've always resisted is thinking of our cloud, the way the market segments it, right? The market segments it, oh, here is IaaS.

Even Brad, the way he described Azure, to me, I've never, I don't allocate my capital thinking here is the Azure capital, here is the M365 capital, here is, you know, gaming. I kind of think of, hey, there's a cloud infra. That's the core theory of the firm for me.

On top of it, I have a set of workloads. One of those workloads happens to be Azure. The other one is M365, Dynamics, gaming, what have you. And so in some sense, that was all in that memo and pretty much has played out. And one of the assumptions at that time was that this, you know, we had a 98%, 99% gross margin business in our servers and clients.

And people said, oh, you know, good news. You now can move to the cloud and maybe you'll have some margin. And so that was the transition. And my gut was, it is going to be less GM, but the TAM is bigger. You know, we'll sell more to small businesses.

We will sell more in aggregate in terms of even upsell, like the consumption would increase, right? Because, you know, we had sold a bit of exchange, but if you think about it, exchange, SharePoint, Teams, now everything expanded. So that was the basic arc that I had in that memo.

- Was there any element of cultural shift? I mean, there are CEO hires made in the world all the time and many of them fail. I mean, Intel's going through a second reboot here as we speak. And as Brad pointed out, there were people that were arguing, oh, Microsoft's the next IBM or DEC, that its better days are over.

So what did you do and what would you advise new CEOs that come on to kind of reboot the culture and get it moving in a different direction? - Yeah, one of the advantages I think I had was I was a consummate insider, right? I mean, having grown up pretty much all my professional career at Microsoft.

And so in some sense, if I would even criticize our culture, it was criticizing myself. So in a sense, the break I got was, it never felt like somebody from the outside coming in and criticizing these folks who are here; it was mostly about pointing the finger right back at me, because I was pretty much part of the culture, right?

I couldn't say anything that I was not part of. And so I felt like, to your point, Bill, I distinctly remember, I think the first time Microsoft became the largest market cap company, I remember walking around the campus, all of us, including me, we were all strutting around as if we were like, the best thing to humankind, right?

And it is all our brilliance that's finally reflected in the market cap. And somehow it stuck with me that, God, that is the culture that you want to avoid, right? Because as I always say, from sort of ancient Greece to modern Silicon Valley, there's only one thing that brings civilizations, countries, and companies down, which is hubris.

And so one of the greatest breaks is, my wife had introduced me to a book by Carol Dweck, a few years before I became CEO, which I read on "Growth Mindset," more in the context of my children's education and parenting and what have you. And I said, God, this thing is like the best.

You know, all of us are always talking about learning and learning cultures and so on. And this was the best cultural meme we could have ever picked. So I attribute a lot of our success culturally to that meme, because it is not, the other thing, nice thing about that, Bill, was it is not trademarked Microsoft, or it's not some new dogma from a CEO.

It's a thing that speaks to work and life. You can be a better parent, a better partner, a better friend, a neighbor, and a manager and a leader. So we picked that, and the pithy way I've always characterized it is, hey, go from being the know-it-alls to learn-it-alls, and it's a destination you never reach, because the day you say, I have a growth mindset, means you don't have a growth mindset by definition.

And so it has been very, very helpful for us. And it's like all cultural change, you got to give it time, oxygen, breathing space, and it's both top-down and bottom-up, and it middles out, right? Which is, there's not a single meeting that I do with the company, or even my executive staff, or what have you, where I don't start with mission culture.

Those are the two bookends. And I've been very, like, the other thing is, I've been super disciplined on my framework. To your point about that memo, pretty much for the last, now close to 11 years, the arc is the same. Mission culture, it's the worldview, right? That ambient intelligence, ubiquitous computing, and then the specific set of products/strategies.

That frame, I pick and choose every word. I'm very, very deliberate about it. I repeat it until I'm bored stiff, but I just stay on it. Well, speaking of that, you've, you know, you mentioned the phase shifts that we've been through, and I've heard you say that as a large platform company, most of the value capture, right, is determined in that first three or four years of the phase shift, when the market position is established, Satya.

You know, I've heard you say, you basically, you know, Microsoft was coming off of having missed search, having largely missed mobile, and I've heard you say, caught the last train out of town on cloud, right? So as you started thinking about the next big phase shift, it appears that you and others in the team, Kevin Scott, sniffed out pretty early that Google was likely ahead in AI with DeepMind.

You make the decision to invest in OpenAI. What convinced you of this direction, right, versus the internal AI research efforts that you had underway? - Yeah, it's a great point, because there are a couple of things there, right? One is we were at it on AI for a long, long time.

Obviously, you know, when Bill started MSR in 1995, I think, you know, the first group, I mean, he was always into this natural user interface. I think the first group was speech. You know, Rick Rashid came. There was, you know, in fact, Kai-Fu worked here, and, you know, we had a lot of, I would say, focus on trying to crack natural user interface.

Language was always something that we cared about, right? In fact, even Hinton worked, like some of the early work in DNNs happened when he was in residency in MSR, and then Google hired him. So we missed, I would say, even in the early 2010s, some of what could have been doubling down at around the same time that Google doubled down and bought even DeepMind, right?

And so that actually bothered me quite a bit, but I always wanted to focus. Like, for example, Skype Translate was one of the first things I focused on, because that was pretty cool. Like, that was the first time you could see transfer learning work, right, which is you could train it on one language pair, and it got better on another language, right?

That was the first place where we could say, "Wow, machine translation is also with DNNs, like, it's different." And so ever since I've been obsessed with language, along with Kevin, in fact, the first time, actually, Elon and Sam, they were looking for, obviously, Azure credits and what have you, and we gave them some credits.

And that time, they were more into RL and Dota 2 and what have you, and that was interesting. And then we stopped for, I forget even exactly what happened, and then they, I think, went to GCP. And then they came back to talk about sort of what they wanted to do with language.

That was the moment, right, which they talked about transformers and natural language. And because I always felt like, look, because that's, to me, our core business. And it goes back a little bit to how I think, which is what's our structural position? I knew always that if there was a way to have a nonlinear breakthrough in terms of some model architecture that sort of exhibited, you know, like one of the things that Bill, you know, he'd always say throughout our career here was there's only one category in digital.

It's called information management. The way he thought about it was you schematize the world, right, take people, places, things, you know, just build a schema, right? We went down many, you know, there was this very infamous project called WinFS at Microsoft, which was all about schematize everything. And then, you know, you'll make sense of all information.

And this was, it was just, it's just impossible to do. And so, therefore, you needed some breakthrough. And we said, maybe the way to do that is how we schematize. After all, the human brain does it through language and inner monologue and reasoning. And so, therefore, anyway, so that's what led me to OpenAI and quite frankly, the ambition that Sam and Greg and team had.

And that was the other thing, right? Scaling laws. In fact, I think the first memo, weirdly enough, I read on scaling was written by Dario when he was at OpenAI and Ilya. And that's sort of what, like I said, let's take a bet on this, right? Which is, hey, wow, if this is going to have exponential performance, why not go all in and give it a real shot?

And then, of course, once we started seeing it work on GitHub Copilot and so on, then it was pretty easy to double down. But that was the intuition. - One of the things that has happened, I think, in previous phase shifts is some of the incumbents don't get on board fast enough.

You even talked about Microsoft perhaps missing mobile or search or that kind of thing. I could argue, especially since I'm old and I've seen these shifts, that everyone's awake on this one. Or it's the most awake. Like it's heavily choreographed. Everyone's maybe at the starting line at almost the same time.

I'm curious if you agree with that and how you think about the key players in the race. Google, Amazon, Meta with Llama, Elon has entered the game. (laughs) - Yeah, it's an interesting one. To your point about... I always think about it, right? If you sort of say take the late '90s, there was Microsoft and there was daylight.

And then there was the rest. Interestingly enough, now people talk about the Mag 7. There is probably more than that, even to your point about everybody's awake to it. They all have amazing balance sheets. There are even, I think, I'll call it... If you think about OpenAI, in some sense, you could say it's Mag 8 because I think the company of this generation has already been created, which is OpenAI in some sense.

It's kind of like the Google or the Microsoft or the Meta of this era. And so there are a couple of things. So therefore, I think it's going to be very competitive. I also think that I don't think it's going to be winner-take-all, right? Because there may be some categories that may be winner-take-all.

For example, on the hyperscale side, absolutely not, right? I mean, the world will demand, even ex-China, multiple providers of frontier models distributed all over the world. In fact, one of the best structural positions that I think Microsoft has is... Because if you remember, the Azure structure is slightly different, right?

We built out Azure for enterprise workloads with a lot of data residency, with lots... We have 60-plus regions, more regions than others. So it was not like we built our cloud for one big app. We built cloud for a lot of heterogeneous enterprise workloads, which I think in the long run is where all the inference demand will be with nexus to data and the app server and what have you.

So I think there is going to be multiple winners at the infrastructure layer. There is going to be, even in the models, just the models and the app servers; each hyperscaler will have a bunch of models and there will be an app server around them... Like every app today, even including Copilot, it's just a multi-model app.

And so there's, in fact, a complete new app server. Like everyone, there was a mobile app server, there was a web app server, and guess what? There's an AI app server now. And for us, that's Foundry and we're building one and others will build. There'll be multiple of those.

Then in apps, I think there will be more folks... I would say network effects are always going to be at the software layer, right? So at the app layer. There'll be different network effects in consumer, in the enterprise and what have you. And so to your fundamental point, I think you have to analyze it structurally by layer.

And there is going to be fierce competition between the seven, eight, nine, 10 of us at different layers of the stack. And as I always say to our team, which is watch for the one who comes and adds to it, right? That's the game you're all in, where you're always looking at who is the new entrepreneur who will come out of the blue.

And at least I would say OpenAI is one such company, which at this point has escaped velocity. Yeah, if we think about the app layer for a second, start with consumer AI a little bit here, Satya. Bing's a very large business. You and I've discussed 10 blue links was maybe the best business model in the history of capitalism, but it's massively threatened by a new modality where consumers just want answers, right?

For example, my kids, they're like, why would I go to a search engine when I can just get answers? So do you think, first, can Google and Bing continue to grow the legacy search businesses in the age of answers? And then what does Bing need to do or your consumer efforts under Mustafa need to do in order to compete with ChatGPT, which really looks like it's broken out from a consumer perspective?

- Yeah, I mean, I think the first thing is what you said last, which is chat meets answers. And that's ChatGPT, both the brand, the product, and it's becoming stateful, right? I mean, like ChatGPT now is not just, in fact, search was stateless, there was search history, but I think more so these agents will be a lot more stateful.

So, in fact, so that's why I was so thrilled. Like I've been trying to get an Apple search deal for like 10 years. And so when Tim finally did a deal with Sam, I was like the most thrilled person, which is better, it's better to have ChatGPT get that deal than anybody else, because we have that commercial and investor relationship with OpenAI.

So to that point, the way I look at it and say, is at the same time, distribution matters, right? I mean, this is where Google has an enormous advantage, right? They have the distribution on Apple. They're the default. They are obviously the default on Android. They touch, so therefore I think, and the habits don't go away, right?

I mean, the number of times you just go to the browser URL and just type in your query, right? I mean, even now, my usage is mostly Copilot. And like, if I have to think about Bing versus Copilot, it's kind of interesting, right?

Some of the navigational stuff, I go to Bing, pretty much everything else, I go to Copilot, right? That shift, I think, is what's happening universally. And we are maybe one or two of these agents for shopping or travel away from even some of the commercial queries migrating. That's the time when the dam breaks, I think, on traditional search, when some of the commercial intent also migrates into the chat, right?

Now, mostly the business has withstood because the commercial intent has not migrated. But once commercial intent migrates, that's when it suddenly moves. And so I think, yes, this is a secular shift. The way we are managing it is, we have three properties in Mustafa's world, right? There is Bing, MSN, and Copilot.

So we think, in fact, he's got a crisp vision of what these three things are. They're all sort of one ecosystem. One is a feed, one is search in the traditional way, and then the other is this new agent interface. And they all have a social contract with content providers.

We need to drive traffic. We need to have paywalls, maybe. We need to have ad-supported models, all of those. And so that's what we're trying to manage. We have our own distribution. The one advantage we do still have is Windows. We get to relitigate. We lost the browser, right?

Even Chrome became the dominant browser, which is a real travesty because we had won against Netscape only to lose to Google. And we are getting it back now in an interesting way, both with Edge and with Copilot. Guess what? Now even Gemini has to earn it. Like the good news about Windows, for us at least, is it's the open system, right?

ChatGPT has a shot. Gemini has a shot. You don't have to call Microsoft. You can go do your best work and go over the top. But that also means we also get to, having lost it is great sometimes because you can win it all back. And so to me, even Windows distribution, I mean, I always say Google makes more money on Windows than all of Microsoft.

I mean, literally. I mean, and I say, wow, this is the best news for Microsoft shareholders that we lost so badly that we can now go contest it and win back some share. - Hey Satya, one thing is that everybody's talking about these agents, and if you just kind of think forward in your mind a bit, you can imagine all kinds of players wanting to take actions on other apps and other data that may be on a system.

And Microsoft's in an interesting position 'cause you control the Windows ecosystem, but you have apps on like the iPhone ecosystem or the Android ecosystem. And how do you think about, and this is partially a terms-of-service question, partially a partnership question, will Apple allow Microsoft to control other apps on iOS?

Will Microsoft let a ChatGPT instantiate apps and take data from apps on the Windows OS? I mean, you get the question. It goes all the way to when you start thinking about search and commerce, like will booking.com let Gemini run transactions on it without their permission or knowledge?

- Yeah, I think that this is the most interesting question. I mean, to some degree, it's unclear exactly how this will happen. There is a slight very old school way of thinking about some of this, which is if you remember, how did business applications of various kinds manage to do interop, right?

They did manage that interop using connectors and people had connector licenses. So there was a business model that emerged, right? I mean, SAP was one of the most classic ones where you could say, hey, you can access SAP data as long as you had connectors. So there's a part of me which says something like that will emerge as when agent to agent interface occurs, it's unclear exactly what happens in consumer because consumer, the value exchange was a lot of advertising and traffic and what have you.

Some of those things go away in an agentic world. So I think the business model is slightly unclear to me on the consumer side. But on the enterprise side, I think what will happen is everybody will say, hey, in order for you to either action into my action space or to get data out of my sort of schema, so to speak, there is some kind of an interface to my agent that is licensed, so to speak.

And I think that that's a reason, like today, for example, when I go to Copilot at Microsoft, I have connectors into Adobe, into my instance of SAP, obviously our instance of CRM, which is Dynamics. So it's fascinating. In fact, when was the last time any of us really went to a business application, right?

We licensed all these SaaS applications. We hardly use them. And somebody in the org is sort of inputting data into it. But in the AI age, the intensity goes up because all that data now is easy, right? You're a query away. I can literally say, hey, I'm meeting with Bill.

Tell me about all the companies that Benchmark's invested in. It's both taking the web, anything that's in my CRM database, collating it all together, giving me a note, what have you. So to some degree, all that, I think, can be monetized by us and by even these connectors. - But more explicitly, like the thing that could happen really quickly, 'cause there's been talk about it, like would you allow chat GPT on the Windows OS to just start opening random apps and take- - Now that's an interesting one, right?

So that over-the-top computer use, who is going to permit that, right? So which is, is it the user or is it the operating system? Like on Windows, there is quite frankly not anything I can do to prevent that other than some security guardrails, right? So I could sort of definitely, because I think if they became a secure, like one of my big fears is the security risk, right?

If some malware got downloaded and that malware started sort of actioning stuff, right? That's when it's really dangerous. So I think those are the ones that we will build into the OS itself, right? Which is some elevated access and privilege that this computer use stuff happens. But at the end of the day, the user will be in control on an open platform like Windows and I'm sure Apple and Google will have a lot more control so they won't allow it.

And so that's, in some sense, you could say that's an advantage they have or, you know, depending on how antitrust rules on all of those, you know, ultimately it'll be an interesting thing to watch. - Flip that around and then we can move on. But like, would you allow the Android OS or let's just call it the Android AI or the iOS AI to read email, you know, through a Microsoft client on that smartphone?

- Yeah, I mean, we kind of like, you know, for example, today, you know, one of the things I always think about is, I don't know whether that was value leaking or did it actually help us, right? Which is we licensed the sync for Outlook to Apple for Apple Mail.

It was kind of a, it was an interesting case. And I think that there was a lot of value leaked perhaps, but at the same time, I think that was one of the reasons why we were able to hold on to Exchange, right? It would have been doubly problematic - Understood.

- If we had not done that. And so one of the things I think is going to your point, Bill, if we are building out, the reason we are going to do this is we have to have a trust system around Microsoft 365. We just cannot sort of say, hey, any agent comes in and does anything because after all, first it's not our data, it's our customers' data, right?

It'll be, and so therefore the customer will have to permit it. The IT folks in the customer will have to permit it. It's not like some blanket flag I can set. And then the second thing is, it has to have a trust boundary. So I think what we will do is, it's kind of, it's an interesting way.

It's kind of like what Apple Intelligence is doing. Think of it as we will do that around M365. - I've played with it a lot today. I'd highly recommend people download it. It's super interesting. - Yeah. So Satya, clicking on this, Mustafa said that 2025 will be the year of infinite memory.

And Bill and I have talked a lot dating back to the start of this year, that we think the next 10X function, it sounds like you agree on ChatGPT, is really this persistent memory combined with being able to take some actions on our behalf. So we're already seeing the starts of memory and I'm pretty convinced as well that 2025, it seems like that one's pretty well solved.

But this question of actions, when am I gonna be able to say to ChatGPT, book me the Four Seasons in Seattle next Tuesday at the lowest price, right? And Bill and I have gone back and forth on this one and it would seem that computer use is the early test case for that.

But do you have any sense, is it, you know, does that seem like a hard one from here to you? - Yeah, I mean, the most open-ended action space is still hard. But to your point, there are two things or maybe three things that are really exciting. Beyond I'll just say, I'm sure we'll talk about it, the scaling laws itself and capabilities of the raw models.

One is memory. The other is tools use or actions. And the other one I would say is even entitlements, right? Which is, you know, what can you like, you know, one of the most interesting products we have even is Purview inside of Microsoft because increasingly, what do you have permissions to?

What can you get? You know, you have to be able to access things in a safe way. Somebody needs to have governance on it and what have you. So if you put all those three things together and this agent is going to then be more governable and when it comes to actions, it is verifiable and then it has memory, then I think you're off to a very different place for doing more autonomous work, so to speak.

I still think, one of the things I always think is, I like this Copilot as the UI for AI because even in a fully autonomous world, from time to time, you'll raise exceptions, you will ask for permission, you will ask for invocation, what have you. And so therefore, this UI layer will be the organizing layer.

In fact, that's kind of why we think of Copilot as the organizing layer for work, work artifacts and workflow. But to your fundamental point, I don't think the models, I take even 4o, right? Not even going to o1. 4o is pretty good with function calling. So you can do significantly more in the enterprise setting, more so than consumer, because consumer web function calling is just hard, where at least in an open-ended web, you can do it for a couple of websites.

But once you say, hey, let's go do a book me a ticket on anything and it just, and if there's schema changes on the backend and so on, it'll trip over. You can teach it that, that's where I think o1 can get better if it's a verifiable, auto-gradable sort of process on rails.
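A minimal sketch of the function-calling pattern being described here, using the OpenAI Python SDK; the model name, the update_inventory tool, and its parameters are hypothetical placeholders for illustration, not anything Microsoft or OpenAI ships.

```python
# Minimal sketch of function calling: the model is handed a typed tool schema,
# decides whether to call it, and the app executes the call it proposes.
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

tools = [{
    "type": "function",
    "function": {
        "name": "update_inventory",  # hypothetical enterprise action
        "description": "Adjust on-hand inventory for a SKU in the ERP system.",
        "parameters": {
            "type": "object",
            "properties": {
                "sku": {"type": "string"},
                "delta": {"type": "integer", "description": "Units to add or remove."},
            },
            "required": ["sku", "delta"],
        },
    },
}]

response = client.chat.completions.create(
    model="gpt-4o",
    messages=[{"role": "user", "content": "Reduce SKU A-123 by 40 units."}],
    tools=tools,
)

# If the backend schema changes (a renamed field, a new required parameter),
# this tool definition has to change with it, which is the fragility noted above.
for call in response.choices[0].message.tool_calls or []:
    print(call.function.name, call.function.arguments)
```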

But I think we are maybe a year, year to two years away from doing more and more, but I think at least from an enterprise perspective, going and doing, here's my sales agent, here's my marketing agent, here's my supply chain agent, which can do more of these autonomous tasks.

We built 10 or 15 of them into Dynamics, right? Even looking into sort of my supplier communications and automatically handling my supplier communications, updating my databases, changing my inventories, my supply. Those are the kinds of things that you can do today, I would say. - Mustafa made this comment about near-infinite memory and I'm sure you heard it or hear it internally.

Is there any clarification you can offer about that or is that more to come? - I think that, I mean, at some level, the idea that you have essentially a type system for your memory, right? That's the thing, right? Which is, it's not like every time I start, you have to organize.

- I get the idea. He made it sound like you guys had an internal technical breakthrough on this front. - Yeah, I mean, there's an open source project even. I think it's, I forget, it's the same set of folks who did all the TypeScript stuff who are working on this.

So what we're trying to do is essentially take memory and schematize it and sort of make it available such that you can go. Like each time I start, let's just imagine I'm prompting on some new prompt, I know how to cluster based on everything else I've done. And then that type matching and so on, I think is a good way for us to build up a memory system.
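As a rough, hypothetical illustration of what a "type system for memory" could mean in practice (the open-source project referenced above is not named here, and this sketch is not it): each remembered item gets a schema type, and retrieval filters by type before matching, rather than replaying raw history.

```python
# Hypothetical illustration of a "type system for memory": each remembered item
# is a typed, schematized record, and a new prompt pulls in only the matching
# cluster instead of replaying raw chat history.
from dataclasses import dataclass
from typing import Literal

MemoryType = Literal["preference", "project", "contact", "decision"]

@dataclass
class MemoryRecord:
    kind: MemoryType
    subject: str
    detail: str

store: list[MemoryRecord] = [
    MemoryRecord("preference", "travel", "prefers aisle seats and no red-eyes"),
    MemoryRecord("project", "board deck", "Q3 numbers due Friday"),
]

def recall(prompt: str, kinds: tuple[MemoryType, ...]) -> list[MemoryRecord]:
    """Naive retrieval: filter by memory type, then by keyword overlap with the prompt."""
    words = set(prompt.lower().split())
    return [
        m for m in store
        if m.kind in kinds
        and (words & set(m.subject.lower().split()) or words & set(m.detail.lower().split()))
    ]

# A travel-related prompt only pulls in travel-typed memories, not everything.
print(recall("book my travel to Seattle", kinds=("preference",)))
```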

- So shifting maybe to enterprise AI, Satya, you know, the Microsoft AI business has already been reported to be about $10 billion. You've said that it's all inference and that you're not actually renting raw GPUs to others to train on because your inference demand is so high. So as we think about this, there's a lot of, I think, skepticism out there in the world as to whether or not major workloads are moving, you know?

And so if you think about the key revenue products that people are using today and how it's driving that inference revenue for you today and how that may be similar or different from Amazon or Google, I'd be interested in that. - Yeah, I think that's a good, so the way for us this thing has played out is, you've got to remember most of our training stuff with OpenAI is sort of more investment logic, right?

So it's sort of not in our quarterly results, it's more in the other income based on our investment. So that means the only thing that shows up in revenue- - Or loss, other income or loss, right? - That is right, that is right. Right now, that's how it shows up.

And so most of the revenue or all the revenue is pretty much our API business, or in fact, to your point, ChatGPT's inference costs are there, right? So that's a different piece. And so the fact is the big hit apps of this era are what? ChatGPT, Copilot, GitHub Copilot, and the APIs of OpenAI and Azure OpenAI, right?

So in some sense, if you had to list out the 10 most sort of hit apps, these would probably be in the four or five of them. And so therefore that's the biggest driver. The advantage we have had and OpenAI has had, which is we've had two years of runway, right?

Pretty much uncontested. To your point, Bill made the point about, hey, everybody's awake, but, and it might be, I don't think there will be ever again maybe a two-year lead like this. Who knows? You say that and somebody else drops some sample and suddenly blows the world away. But that said, I think it's unlikely that that type of lead could be established with some foundation model.

But we have that advantage. That was the great advantage we've had with OpenAI. OpenAI was able to really build out this escape velocity with ChatGPT. But on the API side, the biggest thing that we were able to gain was, you know, take Shopify or Stripe or Spotify. These were not customers of Azure.

They were all customers of GCP or they were customers of AWS. So suddenly we got access to many, many more logos who are all quote-unquote digital natives who are using Azure in some shape or fashion and so on. So that's sort of one. And when it comes to the traditional enterprise, I think it's scaling.

Like, I mean, literally it is, you know, people are playing with Copilot on one end and then are building agents on the other end using Foundry. But like these things are design wins and project wins and they're slow, but they're starting to scale. And again, the fact that we've had two years of runway on it, I think I like that business a lot more.

And that's one of the reasons why the adverse selection problem here would have been lots of tech startups all looking for their H100 allocation in small batches, right? That, you know, having watched what happened to Sun Microsystems in the sort of dot-com era, I always worry about that, which is, whoa, if, you know, you just can't chase everybody building models.

In fact, even on the investor side, I think the sentiment is changing, which is now people are wanting to be more capital light and build on top of other people's models and so on and so forth. And if that's the case, you know, everybody who was looking for an H100 will not, you know, want to look for it anymore.

So that's kind of what we've been selective on. - And your sense is that for the others, that training of those models and those model clusters was a much bigger part of their AI revenue versus yours. - I don't know. I mean, this is where I'm speaking for other people's results.

I don't know. I mean, it's just, I go back and say, what are the other big hit apps, right? I don't know what they are. Like, I mean, where do they, like, what models do they run? Where do they run them? And I would, like, that's kind of, I'm not, I mean, obviously Google's Gemini.

I don't know, when I look at the DAU numbers of any of these AI products, there is ChatGPT. - Right. - And then there is, you know, like even Gemini, I'm very surprised at the Gemini numbers. I mean, obviously I think it will grow, you know, because of all the inherent distribution, but it's kind of interesting to say that they're not that many.

In fact, we talk a lot more about AI scale, but there are not that many hit apps, right? There is ChatGPT, GitHub Copilot, there's Copilot, and there's Gemini. I think those are the four, I would say, in DAU terms. Like, is there anything else that comes to your mind?

- Well, I think there, you know, there are a lot of these startup use cases that I think are starting to get some traction, kind of bottoms up. A lot of them build on top of Llama, but, you know. - But if you said, oh, and there's Meta. - Right, right.

- But if you said there are 10 more, what are the apps that have more than 5 million DAU, right? I think it's going to be interesting. - I think Zuckerberg would argue Meta AI, certainly, you know, has more, et cetera. But I think you're right, in terms of the non-affiliated apps, you named them.

- And Zuck's stuff all runs on his own cloud. I mean, he's not running a public cloud. - That's right, yeah. - Satya, on the enterprise side, obviously the coding space is off to the races, and you guys are doing well, and there's a lot of venture-backed players there.

On some of the productivity apps, I have a question about the Copilot approach. And I guess Marc Benioff's been kind of obnoxiously critical on this front and called it Clippy 2 or whatever. Do you worry that someone might build kind of a first-principles AI product from the ground up, and that some of the infrastructure, say in an Excel spreadsheet, isn't necessary, you know, if you did an AI-first product?

And the same thing, by the way, could be said about the CRM, right? There's a bunch of fields and tasks that may be able to be obfuscated for the user. - Yeah, I mean, it's a very, very, very important question. The SaaS applications or biz apps, so let me just speak of our own dynamics thing.

The approach at least we're taking is, I think the notion that business applications exist, that's probably where they'll all collapse, right? In the agent era. Because if you think about it, right, they are essentially CRUD databases with a bunch of business logic. The business logic is all going to these agents.

And these agents are going to be multi-repo CRUD, right? So they're not going to discriminate between what the backend is. They're going to update multiple databases and all the logic will be in the AI tier, so to speak. And once the AI tier becomes the place where all the logic is, then people will start replacing the backends, right?

We have people, that's what, in fact, it's interesting. As we speak, I think we are seeing pretty high rates of wins on Dynamics backends and the agent use. And we are going to go pretty aggressively and try and collapse it all, right? Whether it's in customer service, whether it is in, by the way, the other fascinating thing that's increasing is just not CRM, but even what we call finance and operations.

Because people want more AI native biz apps, right? That means the logic tier can be orchestrated by AI and AI agents. So in other words, Copilot to agent to my business application should be very seamless. Now, in the same way, and you could even say, "Hey, why do I need Excel?" Like, interestingly enough, one of the most exciting things for me is Excel with Python is like GitHub with Copilot, right?

That's essential. So what we have done is when you have Excel, like this, by the way, it would be fun for you guys, right? Which is, you should just bring up Excel, bring up Copilot, and start playing with it because it's no longer like, oh, you know, it is like having a data analyst.

And so it's no longer just making sense of the numbers that you have. It will do the plan for you, right? It will literally, like how GitHub Copilot Workspace creates the plan, and then it executes the plan. This is like a data analyst who is using Excel as a sort of row-column visualization and analysis scratch pad.

So it's kind of tool use. So the Copilot is using Excel as a tool with all of its action space, because it can generate, and it has a Python interpreter. That is, in fact, a great way to reconceptualize Excel. And at some point you could say, "Hey, I'll generate all of Excel." And that is also true.
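A rough sketch of what that "data analyst with a Python interpreter driving the spreadsheet" pattern can look like, assuming pandas with an Excel reader installed; the file, sheet, and column names are invented for illustration and are not a Microsoft API.

```python
# Hypothetical sketch of an assistant-style "plan" executed over spreadsheet data
# with pandas, the step-by-step analysis compared above to having a data analyst.
import pandas as pd

# Step 1 of the plan: load the worksheet (file and sheet names are placeholders).
sales = pd.read_excel("regional_sales.xlsx", sheet_name="FY25")

# Step 2: summarize revenue by region and quarter.
summary = sales.pivot_table(index="region", columns="quarter",
                            values="revenue", aggfunc="sum")

# Step 3: flag regions whose latest quarter is below the prior quarter.
declining = summary[summary.iloc[:, -1] < summary.iloc[:, -2]]

# Step 4: write the results back, so the row/column canvas stays the shared artifact.
with pd.ExcelWriter("regional_sales_analysis.xlsx") as writer:
    summary.to_excel(writer, sheet_name="summary")
    declining.to_excel(writer, sheet_name="declining_regions")
```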

After all, there's a code interpreter, right? So therefore you can generate anything. And so, yes, I think there will be disruption, but so the way we are approaching at least our M365 stuff is one is build Copilot as that organizing layer, UI for AI, get all agents, including our own agents.

You can say the Excel is an agent to my Copilot. Word is an agent. It's kind of specialized canvases, which is I'm doing a legal document. Let me take it into Pages and then to Word, and then have the Copilot go with it. Go into Excel and have the Copilot go with it.

And so that's sort of a new way to think about the work and workflow. You know, one of the questions I hear people wringing their hands about a lot today, Satya, is the ROI people are making on these investments. You know, you have over 225,000 employees. Are you leveraging AI to increase productivity, reduce costs, drive revenues in your own business?

If so, kind of what are the biggest examples there? You know, and maybe to a finer point on that, you know, when we had Jensen on, I asked him, you know, when he 2 or 3x'd his top line, what did he expect his headcount to increase by? And he said 25%.

And when asked why, he said, "Well, I'll have 100,000 agents helping us do the work." So when you 2 or 3x your revenue for Azure, you know, do you expect to see that similar type of leverage on headcount? - Yeah, I mean, it's top of mind, and top of mind for both us at Microsoft as well as our customers.

Here's the way I come at it. I love this thing of, I've been going to school on learning a lot about what happened in industrial companies with lean, right? I mean, it's fascinating, right? They're all GDP plus growers. It's unbelievable. Like, I mean, the discipline they have in how the good industrials can literally say, hey, I'll add 200 to 300 basis points of tailwind just by lean, which is increased value, reduced waste, right?

That's the practice. So I think of AI as the lean for knowledge work. You know, we are really going to school on it, like, which is how do we really go look at, that's why I think, you know, the good old, you know, we remember in the '90s, we had all this business process re-engineering.

I think it's back in a new way where people who can think end-to-end process flows and say, hey, what's the way to think about the process efficiency? What can be automated? What can be made more efficient? So that's a little bit of, I think. So customer service is the obvious one.

Like, we are on course. We spend around $4 billion or so. This is everything from Xbox support to Azure support. This is really, I mean, this has come down seriously in one year because of the deflection rate on the front end. Then the biggest benefit is the agent efficiency, right? Where the agent is happier, the customer is happier, and our costs are going down.

And so that's, I think, the most obvious place and that we have in our contact center application that's also doing super well. The other one is obviously GitHub Copilot. That's the other, and with GitHub Copilot Workspace, that's the first place where even this, what is agentic sort of side comes in, right?

You go from an issue to a plan or to a spec to a plan and then multi-file edit, right? So it just completely changes the workflow for the eng team. As I said, and then the O365 is the catch-all, right? So the M365 Copilot is where, I mean, just to give you a feel, like even my own, right?

Every time I'm meeting a customer, I would say the workflow of the prep of the CEO office has not changed since 1990, right? Basically, I mean, in fact, one of the ways I look at it is just imagine how did forecasting happen pre-PCs and post-PCs, right? There were faxes, then inter-office memos, and then PCs became a thing and people said, "Hey, I'm just gonna put an Excel spreadsheet in email "and send it around and people will enter numbers "and we will have a forecast." The same thing is happening in the AI era right now all over the place, right?

I prep for a customer meeting where I literally go into Copilot and I say, "Tell me everything I need to know about the customer." It tells me everything from my CRM, my emails, my team's meetings, and the web, right? It grounds it. I put it into pages, share it with my account team in real time.

So just imagine the hierarchy, this entire thing of, "Oh, let me prepare a brief for the CEO," goes away. It's just a query away. I generate a query, share a page. If they wanna annotate it, so I'm reasoning with AI and collaborating with my colleagues, right? That's the new workflow.

And that's happening all over the place. Somebody gave me this example from supply chain. Like somebody said, "Supply chain is like a trading desk, "except it doesn't have real-time information," right? That's kind of what it is. So it's like you wait for the quarter to end and then the CFO comes and bangs you on the head and saying all the mistakes you made.

What if that financial analyst essentially can be in real time, be available to you and giving you like, "Oh, you're doing this contract "for this data center in this region. "You should think about these terms." All that intelligence in real time is changing the workflow and work artifacts. So lots and lots of use cases all around.

And I think to your fundamental point, our goal is to kind of create operating leverage through AI. So I think headcount will, in fact, one of the ways I look at it and say is our total people costs will go down. Our cost per head will go up and my GPU per researcher will go up.

That's kind of the way I look at it. - That makes sense. Hey, let's shift ahead to something that you referenced earlier, just around what we're seeing out of model scaling and CapEx generally. You know, I've heard you talk about, you know, Microsoft's CapEx. I imagine in 2014, when you took over, you had no idea that the CapEx would look like it does today.

In fact, you've said that increasingly these companies look more like industrial companies in their CapEx than traditional software companies. Your CapEx has gone from about $20 billion in 2020 to maybe as high as $70 billion in 2025. You know, you've earned a pretty consistent return on that CapEx, right? So there's actually a very high correlation when you look at your CapEx to revenue.

Some people are worried that that correlation will break and even you have said, you know, maybe at some point there's going to be, you know, CapEx is going to have to be spent ahead of the revenue. You know, there may be an air pocket. We have to build for this resiliency.

So how do you feel about the level of CapEx? Does it cause you any sleepless nights? And when does it begin to taper off, you know, in terms of this rate of growth? Yeah, I mean, a couple of different things, right? One is, this is where being a hyperscaler, I think structurally super helpful because in some sense, we've been practicing this for a long time, right?

Which is, you know, hey, data centers have 20-year life cycles, power you pay for only when you use it, the kit is on a six-year cycle, you know how to sort of drive utilization up. And the good news here is it's kind of like capital intensive but it's also software intensive and you use software to bring the ROIC of your capital higher, right?

That's kind of like when people even in the early days said, hey, how can a hyperscaler ever make money? Because what's the difference between the old hosters and the new hyperscalers? It is software, right? And that I think is what's going to apply even to this GPU physics, right?

Which is, hey, you buy leading, you build it out. In fact, one of the things that's happening right now is what I'll call catch up, right? Which is we built after all, over the last 15 years, the cloud. Suddenly a new meter showed up in the cloud. It's called the AI accelerator because every app now needs a database, a Kubernetes cluster and a model that runs on an AI accelerator, right?

So if you sort of say, oh, I need all three, you suddenly have to build up these AI accelerators in order to be able to provision for all of these applications. So that will normalize. So the first thing is the build out will happen, the workloads will normalize and then it will be, you will just keep growing like the cloud has grown.

So that's sort of the one side of it. And that's where avoiding some of these adverse selection issues, making sure it's not just all supply side, everybody's sort of building only hoping demand will come, just making sure that there is real diverse demand all over the world, all over the segments, I watch for all of that.

So I think that that's, I think, the way to manage the ROIC. And by the way, the margins will be different, right? This goes back to the very early dialogue we had on, when I think about the Microsoft cloud, the margin profile of a raw GPU versus the margin profile of Fabric plus GPU or Foundry plus GPU or the GitHub Copilot add-on to M365.

So they're all gonna be different. And so having a portfolio matters here, right? Because if I look at even the Microsoft, why does Microsoft have a premium today in the cloud? We are bigger than Amazon, growing faster than Amazon with better margins than Amazon because we have all these layers.

And that's kind of what we wanna do even in the AI era. - Satya, there's been a lot of talk about model scaling. And obviously there was talk historically about kind of 10Xing the cluster size that you might do over and over again, not once and then twice. And xAI is still making noise about going in that direction.

There was a podcast recently where they kind of flipped everything on their head and they said, well, if we're not doing that anymore, it's way better because we can just move on to inference, which is getting cheaper and you won't have to spend all this capex. I'm curious, those are two kind of views of the same coin, but what's your view on large LLM model scaling and training costs and where we're headed in the future?

- Yeah, I mean, I'm a big believer in scaling laws, I'll sort of first say. In fact, if anything, the bet we placed in 2019 was on scaling laws and I stay on that, which is in other words, don't bet against scaling laws. But at the same time, let's also be grounded on a couple of different things.

One is these exponentials on scaling laws will become harder just because as the clusters become harder, everything. I mean, the distributed computing problem of doing large scale training becomes harder. And so that's kind of one side of it. So there is, but I would just still say, and I'll let the OpenAI folks speak for what they're doing, but they are continuing to, pre-training I think is not over, it sort of continues.

But the exciting thing, which again, OpenAI has talked about and Sam has talked about, is what they've done with o1, right? So this chain of thought with autograding is just fantastic. In fact, basically it is test-time compute, or inference-time compute, as another scaling law, right? So you have pre-training and then you have effectively this test-time sampling that then creates the tokens that can go back into pre-training, creating even more powerful models that then are running on your inference, right?

So therefore that's, I think, a fantastic way to increase model capability. So the good news of test-time or inference-time compute is, when you're running those o1 models, there's two separate things. Sampling is kind of like training when you're using it to generate tokens for training, for your pre-training, but also customers, when they are using o1, they're using more of your meters and so you are getting paid for it.
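A toy sketch of that two-sided idea: spend more samples per query at inference, grade them, serve the best one, and keep high-scoring traces as candidate training data. The generate and grade functions below are stand-ins invented for illustration, not any real model or OpenAI API.

```python
# Toy illustration of inference-time compute as a second scaling knob:
# spend more samples per query, keep the best answer, and recycle good traces.
import random

def generate(prompt: str) -> str:
    """Stand-in for sampling one chain-of-thought answer from a model."""
    return f"candidate-{random.randint(0, 9999)} for: {prompt}"

def grade(answer: str) -> float:
    """Stand-in for an autograder or verifier on a checkable task."""
    return random.random()

# High-scoring traces that could be fed back into pre-training or fine-tuning.
training_pool: list[tuple[str, float]] = []

def answer_with_test_time_compute(prompt: str, n_samples: int = 16,
                                  keep_above: float = 0.9) -> str:
    scored = []
    for _ in range(n_samples):  # more samples means more inference meters billed
        candidate = generate(prompt)
        scored.append((candidate, grade(candidate)))
    training_pool.extend((ans, s) for ans, s in scored if s >= keep_above)
    best_answer, _ = max(scored, key=lambda pair: pair[1])
    return best_answer

print(answer_with_test_time_compute("Prove the sum of two even numbers is even."))
```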

And so therefore there is more of an economic model, right? So therefore I like it. In fact, that's where I said I have a good structural position with 60 plus data centers all over the world. - Right, it's a different hardware architecture for one of those scaling versus the other, for the pre-training versus.

- Exactly, and I think the best way to think about it is it's a ratio, right? So going back to sort of Brad's thing about ROIC, this is where I think you have to sort of really establish a stable state. In fact, whenever I've talked to Jensen, I think he's got it right, which is look, you kind of wanna buy some every year, not buy, like think about it, right?

When you depreciate something over six years, the best way is what we have always done, which is you buy a little every year and you age it, you age it, you age it, right? You use the leading node for training and then the next year it makes it, goes into inference.
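A back-of-envelope sketch of the "buy a little every year and age it through the fleet" arithmetic, with straight-line depreciation over six years; the annual spend figure and the training/inference split are invented for illustration.

```python
# Hypothetical fleet-rotation math: an equal GPU purchase each year, depreciated
# straight-line over six years, newest cohort on training, older cohorts on inference.
ANNUAL_SPEND_B = 10.0  # $B per year, an invented figure for illustration
LIFE_YEARS = 6

def fleet_snapshot(years_elapsed: int) -> list[tuple[int, float, str]]:
    cohorts = []
    for age in range(min(years_elapsed, LIFE_YEARS)):
        remaining_value = ANNUAL_SPEND_B * (LIFE_YEARS - age) / LIFE_YEARS
        role = "training (leading node)" if age == 0 else "inference"
        cohorts.append((age, round(remaining_value, 2), role))
    return cohorts

# After six years the fleet is in steady state: one cohort retires as one arrives,
# and total annual depreciation settles at ANNUAL_SPEND_B (6 cohorts x 10/6 each).
for age, value, role in fleet_snapshot(years_elapsed=6):
    print(f"cohort age {age}: book value ${value}B, role: {role}")
```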

And that's sort of the stable state I think we will get into across the fleet for both utilization and the ROIC and then the demand meets supply. And like basically to your point about everybody saying, oh, wow, have the exponentials stopped? One of the other things is the economic realities will also sort of set in, right?

I mean, at some point everybody will look and say, what's the economically rational thing to do? - I agree. - Which is, hey, even if I double capability every year, what if I'm not able to sell that inventory? And the other problem is the winner's curse, right? Which is, you don't even have to publish a paper.

The other folks just have to look at your capability and do a distillation. It's just impossible to stop. It's kind of like piracy, right? I mean, you can put in all kinds of terms of use, but it's impossible to control distillation. That's one. Second thing is, you don't even have to do that.

You just have to reverse engineer that capability, and you can do it in a more compute-efficient way. And so given all this, I think there will be a governor on how much people will chase this. Right now, everybody wants to be first, a little bit. That's great, but at some point the economic reality will set in on everyone.
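
To make the distillation point above concrete, here is a minimal, hypothetical sketch: a smaller student model is fine-tuned on answers harvested from a stronger teacher, which requires nothing more than ordinary query access. The teacher_answer and train_student functions are placeholders, not real APIs; the point is only to show why this is hard to police.

```python
# Hypothetical sketch of distillation: a student model imitates a stronger
# teacher using nothing but ordinary query access. teacher_answer and
# train_student are placeholders, not real APIs.

def teacher_answer(prompt: str) -> str:
    """Query the stronger model through its public interface (placeholder)."""
    raise NotImplementedError

def train_student(student, dataset):
    """Supervised fine-tuning on (prompt, answer) pairs (placeholder)."""
    raise NotImplementedError

def distill(student, prompts):
    # Harvest the teacher's behavior; no weights or papers required.
    dataset = [(p, teacher_answer(p)) for p in prompts]
    # Imitate it with a cheaper model.
    return train_student(student, dataset)
```
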

And the network effects are in the app layer. So why would I want to spend a lot on some model capability if the network effects are all on the app layer? - What I heard you say, I believe, you know, so Elon has said that he's going to build a million GPU cluster.

I think Meta has said the same thing. I think the pre-training-- - I think he said 200,000 and then he kind of joked about a million, but I-- - I think he joked about a billion. But, you know, the fact of the matter is: versus the start of the year, Satya, based on what you've seen around pre-training and scaling, have you changed your infrastructure plans around that?

And then I have a separate question with regard to O1. - I am building toward what I would say is, a little bit to the 10x point, right? Which is, hey, we can argue the duration, like is it every two years? Is it every three years, every four years?

There is an economic model. And this is where I think you need a little bit of a disciplined way of thinking about how you clear your inventory such that it makes sense, right? Or, the other way to think about it is the depreciation cycle of your kit, right? There is no way you can just keep buying, unless the physics of the GPU works out such that it flows through my P&L at the same or better margin than a hyperscaler.

That's simple. So that's kind of what I'm gonna do. I'm gonna keep going and building, basically to, hey, how do I drive inference demand? And then keep increasing my capability and be efficient at it. And Sam may have a different objective, and he's been open about it, right?

He may say, hey, I wanna build because I have deep conviction on what AGI looks like or what have you. And so be it. So that's where I think a little bit of our tension is, even. - And to clarify something, I heard Mustafa say on a podcast that Microsoft is not going to engage in the biggest-model training competition that's going on.

Is that fair? - Well, what we won't do is do it twice, right? Because after all, we have the IP from them. Like, it'd be silly for Microsoft today, given the partnership with OpenAI, to do an unnecessary second training run. - Correct. - So we are very deliberate about that, and that's why we have concentrated.

And by the way, that's the strategic discipline we have had, right? That's why I always stress to Sam: we bet the farm on OpenAI and said, hey, we will concentrate our compute. And we did it because we had all the rights to the IP. And so that's sort of the give-get on it.

And we feel fantastic about it. So what Mustafa is basically saying is, hey, a lot of the focus on our end is post-training, and even on verification or what have you. So that's a big thing. We'll focus a lot of our compute resources on adding more model adaptations and capabilities that make sense, while also having a principled pre-training effort that gives us capability internally to do things.

We anyway have different model weights and model classes for different use cases that we will continue to develop as well. - Does your answer to Brad's question about balancing GPU ROI also explain why you've outsourced some of the infrastructure to CoreWeave in that partnership that you have?

- That we did because we all got caught out by the hit called ChatGPT and the OpenAI API, right? Yeah, we were completely caught out, I mean, it was impossible. There's no supply chain planning I could have done; none of us knew what was gonna happen in November of '22. That was just a bolt from the blue, right?

So therefore we had to catch up. So we said, hey, we're not going to worry too much about inefficiency. So that's why, whether it's CoreWeave or many others, we bought all over the place. - Fair enough. - And that was a one-time thing, and now it's all catching up.

So that was just more about trying to get caught up with demand. - Are you still supply constrained, Satya? - I am power constrained, yes. I am not chip supply constrained. We were definitely constrained in '24. What we have told the street is that's why we are optimistic about the first half of '25, which is the rest of our fiscal year.

And then after that, I think we'll be in better shape going into '26 and so on. We have good line of sight. - So I'm hearing that, with respect to this System 2 thinking, the O1 test-time compute, the post-training work being done there is leading to really positive outcomes.

And when you think about that, it's also pretty compute intensive, because you're generating a lot of tokens, you're recycling those tokens back into the context window, and you're doing that time and time again. So that compounds very quickly. Jensen said that, looking at O1, he thought inference was going to go up a million or a billion x; the demand for inference is going to go up dramatically.

In that regard, do you feel like you have the right long-term plan to scale inference to keep up with these new models? - Yeah, I mean, I think there are two things there, Brad. In some sense, it's very helpful to think about the full workload.

Like in the agentic world, you have to have the AI accelerator. But one of the fastest growing things at, in fact, OpenAI itself is the container service, because after all these agents need a scratch pad for doing some of that autograding, even to generate the samples, right? And so that is where they run a code interpreter.

And that, by the way, is a regular Azure Kubernetes cluster. So in an interesting way, there's a ratio of even what is regular Azure compute and its nexus to the GPU, and then some data service. So to your point, when we say inference, that's why I look at it and say: people think about AI as separate from the cloud.

AI is now a core part of the cloud. And I think in a world where every AI application is a stateful application, it's an agentic application, that agent performs actions, then the classic app server plus the AI app server plus the database are all required. And so I go back to my fundamental thing, which is, hey, we built these 60-plus Azure regions.
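
As a concrete illustration of that stateful, agentic shape, here is a minimal, hypothetical sketch of an agent loop that calls a model, executes any code it proposes in a sandboxed scratch-pad container, and persists state in a database. The functions call_model, run_in_sandbox, and save_state are placeholders for a model endpoint, a container-backed code interpreter, and a database client; none are real APIs.

```python
# Hypothetical sketch of a stateful agentic app: a model endpoint (AI app
# server), a sandboxed code-interpreter scratch pad (container service), and a
# database for state. The three helpers are placeholders, not real APIs.

def call_model(messages) -> dict:
    """Ask the model for the next action: a final answer or code to run (placeholder)."""
    raise NotImplementedError

def run_in_sandbox(code: str) -> str:
    """Execute code in an isolated container, the agent's scratch pad (placeholder)."""
    raise NotImplementedError

def save_state(session_id, messages):
    """Persist the conversation and tool results in a database (placeholder)."""
    raise NotImplementedError

def agent_loop(session_id, user_request, max_turns=10):
    messages = [{"role": "user", "content": user_request}]
    for _ in range(max_turns):
        action = call_model(messages)            # AI app server
        messages.append(action)
        if action.get("type") == "final_answer":
            break
        result = run_in_sandbox(action["code"])  # container scratch pad
        messages.append({"role": "tool", "content": result})
    save_state(session_id, messages)             # database keeps the agent stateful
    return messages
```
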

They all will be ready for full-on AI applications. And that's, I think, what will be needed. - That makes a lot of sense. So let's talk a little bit, we've talked around OpenAI a lot during this conversation, but you're managing this balance between a huge investment there and your own efforts.

At Ignite, you showed a slide highlighting the differences between Azure OpenAI and OpenAI Enterprise. And a lot of those were about the enterprise-grade things, you know, that you bring to the table. So when you look at that tension, the competition that you have with OpenAI, do you think about it as: ChatGPT is likely to be the winner on the consumer side?

You'll have your own consumer apps as well. And then you'll divide and conquer when it comes to enterprise. How do you think about competing with them? - The way I think about it at this point: OpenAI is a very at-scale company, right? It's a really very successful company with multiple lines of business, if you will, and segments and what have you.

And so I come at it in a very principled way, like I would with any other big partner, right? I think of them in a few ways. As an investor: what are their interests and our interests, and how do we align them? I think of them as an IP partner.

And because we give them systems IP, they give us model IP, right? So that's another side of it where we are very deeply interested in each other's success. The third is, I think of them as a big customer. And so therefore I want to serve them like I would serve any other big customer.

And then the last one is the competition, right? Which is, whether it's Copilot in the consumer space, whether it's Copilot with M365 or whatever else, we sort of say, hey, where is the competition, where is the overlap? And that's where I look at it and say, ultimately these things will have some overlap. But also in that context, the fact that they have the Apple deal is, in some sense, accretive for the MSFT shareholder, right?

Even the fact that they have their own APIs, to your point about the API differences: hey, you choose, right? Customers can choose which API front end; there are differences, right? Azure has a particular style, and if you're an Azure customer and you want to use other Azure services, then it's easiest to have an Azure-to-Azure match.

But if you're on AWS and you just want to use the API in a stateless way, great, just use OpenAI directly. So I think, in an interesting way, sometimes having these two types of distribution is also helpful to the MSFT cause. - Satya, I would say the curious part of the Silicon Valley community, and writ larger the entire business community, is I think infatuated with the relationship between Microsoft and OpenAI.

I was at DealBook last week and Andrew Ross Sorkin pushed Sam really hard on this. I imagine there's a lot you can't say, but is there anything you can say? There's supposedly a restructuring, a conversion to for-profit. I guess Elon's launched a missive in there as well. What can you tell us?

- Yeah, I mean, I think those, Bill, are obviously all for the OpenAI board and Sam and Sarah and Brad and that team to decide what they want to do. And we want to be supportive. I mean, this is where we're an investor. I'd say the one thing that we care deeply about is that OpenAI continues to succeed, right?

I mean, it's in our interest. And I also think it's an iconic company of this platform shift, and the world is better with OpenAI doing well. So that's the fundamental position. Then after that, the tension, to your point, comes from, like in all of these partnerships, some of it is that co-opetition tension.

Some of it is, you know, Sam's somebody who is an unbelievable entrepreneur with a great amount of vision and ambition, and the pace with which he wants to move. And so we have to balance that all out, right? Which is, what he wants to do, I have to accommodate so that he can do what he needs to do, and he needs to accommodate the discipline that we need on our end, given the overall constraints we may have.

And so I think we'll work it out. But I mean, the good news here, I think, is that in this construct we have come a long way. I mean, these five years have been great for them. It's been great for us. And at least for my part, I'm going to keep going back to that.

And I want to prolong it as long as I can. It will only behoove us to have a long-term stable partnership. - When you think about, you know, the separate funding and untangling the two businesses, are you guys motivated to do that relatively quickly?

I've talked about thinking that, as a next step for them, it'd be great to have them as a public company. It's such an iconic business, an early leader in AI. Is that the path you see for these guys going forward?

Or do you think it stays kind of in the relationship that it is today? - That's the place where, Brad, I want to be careful not to overstep, right? Because in some sense, we're not on the board, we're investors like you, and at the end of the day, it's their board and their management's decision.

And so at some level, I'm going to take whatever their cues are. Like, in other words, I'm very clear that I want to support them with whatever decision they make. And to me, perhaps even as an investor, it's that commercial and IP partnership that matters the most. We want to make sure we protect our interests in all of this.

And if anything, bolster them going forward. But I think, you know, at this point, people like Sarah and Brad and Sam are very, very smart folks on this, and what makes the most sense for them to achieve their objectives on the mission is what we would be supportive of.

- Well, maybe we should wrap and thank you for so much time today. But I want to wrap on this topic of open versus closed, you know, and how we should cooperate to usher in safe AI. And so maybe I'll just leave it open-ended to you. You know, talk to us a little bit about how you think about some of these differences and debates and the importance of doing this.

And one anecdote I would just throw out there is that Reuters recently reported that Chinese researchers developed an AI model for potential military use on the back of Meta's Llama, right? And, you know, there are a lot of supporters of open source, like Bill and me, but we've also heard critics, and you said yourself that anybody out there can distill a model.

So we are going to see some of these models put to uses that we're not going to be happy about. So how do you think about us coming together, really, as a nation and as a collection of companies to usher in safe AI? - Yeah, I think two things.

I've always thought of open source versus closed source as two different tactics to create network effects, right? I've never thought of them as religious battles; I've thought of them as two different approaches. I mean, that's why I think what Meta and Mark are doing is very smart, right?

Which is, in some sense, he's trying to commoditize his complement, right? It makes a ton of sense to me. If I were in his shoes, I would do that, right? Which is, get the entire world to converge. I mean, I think he talks openly and very eloquently about how he wants to be the Linux of LLMs.

And I think it's a beautiful model. In fact, going back to some of your economics questions, I think that, game-theoretically, a consortium could quite frankly be a superior model than any one player trying to do it.

This is not unlike the Linux Foundation, where the contributions were mostly apex contributions, right? I always say Linux wouldn't have happened but for that; in fact, Microsoft's one of the largest committers to Linux, and so was IBM, so was Oracle and what have you.

And I think there may be a real place for that, and open source is a beautiful mechanism for it, right? Which is when you have multiple entities coming together and so on. And it's a smart business strategy. Then closed source may make sense in other cases; after all, we have had lots of closed source products.

Then safety is an important but orthogonal issue, because after all, regulations will apply and safety will apply to both sides. And, you know, one could make the argument that, "Hey, if everybody's inspecting it, there will be more safety on one side or the other." So I think these are perhaps best dealt with in capitalism, at least.

It's better to have multiple models and let there be competition, and different companies will choose different paths. And then we should be pretty hardcore about safety, and the governments will demand that. I think in tech now there's no chance of saying, "Hey, we'll see what happens with the unintended consequences later." I mean, no government, no community, no society is going to tolerate that.

So therefore, these AI safety institutions all over will hold the same bar. And also national security, to your point: if there is national security leakage or there are challenges, people will worry about that too. So therefore, I think states and state policy will have a lot to say about these models and what the regulatory regime will look like.

- Well, it's hard to believe that we're only 22 months into the post-ChatGPT era. But, you know, it's interesting: when I reflect back on your framework around phase shifts, you have put Microsoft in a really good position as we emerge into the age of AI.

And so congrats on the run over the last 10 years. It's been really, you know, a sight to behold, and it's great. I think both Bill and I get excited when we see the leadership, you, Elon, Mark, Sundar, et cetera, really forging ahead for Team America around AI.

You know, I feel, and I think we both feel, incredibly optimistic about how we're going to be positioned vis-a-vis the rest of the world. So thanks for spending some time with us. - Yeah, I can't thank you enough for the time, Satya. Really appreciate it. - Thank you so much.

- Bye-bye. - Thank you, Brad and Bill. - Take care, Satya. - Cheers. As a reminder to everybody, just our opinions, not investment advice.