
NVIDIA: OpenAI, Future of Compute, and the American Dream | BG2 w/ Bill Gurley and Brad Gerstner


Chapters

0:00 Intro
0:37 The Year in AI Recap
3:24 OpenAI Stargate & Nvidia Investment
8:41 Nvidia Accelerated Compute TAM
18:55 NVDA ROI – Glut or Bubble?
27:45 Roundtripping Claims
31:10 Annual Release Cadence & Extreme Co-design
40:45 Future of ASICs & Economics
53:47 Nvidia's Competitive Moat
56:55 Elon, X.ai & Colossus 2
58:47 Sovereign AI & Global Buildout
62:21 The AI Administration
67:43 Chinese AI Chips & NVIDIA’s Role
77:24 H-1B, Talent, & the American Dream
89:33 Invest America & American Right to Rise
97:40 The Future Ahead

Transcript

- I think that OpenAI is likely going to be the next multi-trillion dollar hyperscale company. - Yeah. - Jensen, great to be back, of course, with my partner, Clark Tang. You know, I can't believe it's been-- - Welcome to NVIDIA. - Oh, and nice glasses. Those actually look really good on you.

The problem is now everybody's going to want you to wear them all the time. They're going to say, "Where are the red glasses?" I can vouch for that. So it's been over a year since we did the last pod. - Over 40% of your revenue today is inference, but inference is about to take off because of chain of reasoning.

- Yeah. - It's about to go up by a billion times. - Right, by a million X by a billion X. - That's right. That's the part that most people, you know, haven't completely internalized. This is that industry we were talking about, but this is the industrial revolution. - Right.

Honestly, it's felt like you and I have had a continuation of the pod every day since then. You know, in AI time, it's been about a hundred years. I was re-watching the pod recently, and the many things that we talked about that stood out, the most, the one that was probably most profound for me, was you pounding the table.

That, you know, remember at the time, there was kind of a slump in terms of pre-training, and people were like, oh my God. - The end of pre-training. - Right, the end of pre-training. We're overbuilding. This is about a year and a half ago. And you said inference isn't going to grow a hundred X or a thousand X, it's going to grow one billion X, which brings us to where we are today.

You know, you announced this huge deal. You know, we ought to start there. - I underestimated. Let me just go on record. I underestimated. We now have three scaling laws, right? We have pre-training scaling law. We have post-training scaling law. Post-training is basically like AI practicing. - Yes. - Practicing a skill until it gets it right.

And so it tries a whole bunch of different ways. And in order to do that, you've got to do inference. So now training and inference are now integrated in reinforcement learning. Really complicated. And so that's called post-training. And then the third is inference. The old way of doing inference was one shot.

- Right. - But the new way of doing inference, which we appreciate is thinking. So think before you answer. - Yeah. - And so now you have three scaling laws. The longer you think, the better the quality answer you get. While you're thinking, you do research, you go check on some ground truth, and you learn some things, you think some more, you go learn some more, and then you generate an answer.

Don't just generate right off the bat. And so thinking, post-training, pre-training, we now have three scaling laws, not one. - You knew that last year, but what is your level of confidence this year that inference is going to 1 billion X, and where that will take the levels of intelligence?

Is it higher, are you more confident this year than you were a year ago? - I'm more confident this year. And the reason for that is because, look at the agentic systems now. - Right. - AI is no longer a language model; AI is a system of language models.

And they're all running concurrently, maybe using tools. Some of us are using tools, some of us are doing research. You know, there's a whole bunch of stuff. And it's all multi-modality. And look at all the video that's been generated. I mean, it's just-- - It's incredible. - Crazy stuff, yeah.

- It really brings us to, you know, kind of the seminal moment this week that everybody's talking about the massive deal you announced a couple of days ago with OpenAI Stargate, where you're going to be a preferred partner, invest $100 billion in the company over a period of time.

They're going to build 10 gigs. And if they used NVIDIA for those 10 gigs, that could be upwards of $400 billion in revenue to NVIDIA. So help us understand, you know, just tell us a little bit about that partnership, what it means to you, right? And why that investment makes so much sense for NVIDIA.

- So first of all, I'll answer that last question first. And then I'll come back and present my way through it. I think that OpenAI is likely going to be the next multi-trillion dollar hyperscale company. - Okay. - I think you would not-- - Why do you call it a hyperscale company?

- Hyperscale, like Meta's a hyperscale. Google's a hyperscale. They're going to have consumer and enterprise services, and they are very likely going to be the world's next multi-trillion dollar hyperscale company. - Yes. - And I think you would agree with that. - I agree. - If that's the case, the opportunity to invest before they get there, this is some of the smartest investments we can possibly imagine.

And you got to invest in things you know. - Right. - And it turns out we happen to know this space. And so the opportunity to invest in that, the return on that money is going to be fantastic. So we love the opportunity to invest. We don't have to invest.

- Right. - And it's not required for us to invest, but they're giving us the opportunity to invest. Fantastic thing. Now let me start from the beginning. So we're partnering with OpenAI in several projects. The first project is the build out of Microsoft Azure. We're going to continue to do that.

And that partnership is going fantastically. We have several years of build-out to do, hundreds of billions of dollars of work just there. - Right. - The second is the OCI build out. - Mm-hmm. - And I think there's some five, six, seven gigawatts that are about to be built out.

And so working with OCI and OpenAI and SoftBank to build that out. - Right. - Those projects are contracted, we're working on it, lots of work to do. And then the third is CoreWeave. And so CoreWeave, for-- I'm talking about OpenAI still. - Yes. - Okay, everything in the context of OpenAI.

And so the question is, what is this new partnership? This new partnership is about helping OpenAI, working, partnering with OpenAI, to build their own self-build AI infrastructure for the first time. - Right. - And so this is us working directly with OpenAI at the chip level, at the software level, at the systems level, at the AI factory level, to help them become a fully operated hyperscale company.

I mean, this is going to go on for some time. It's going to supplement the amount of, you know, they're going through two exponentials, as you know. - Right. - The first exponential is the number of customers is growing exponentially. - Right. - And the reason for that is the AI is getting better, the use cases are getting better.

Just about every application is connected to OpenAI now. And so they're going through the usage exponential. The second exponential is the computational exponential of every use. - Yes. - Right? - Yes. - Instead of just a one-shot inference, it's now thinking before it answers. And so these two exponentials compounding their compute requirements.
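The two compounding exponentials described here can be illustrated with a toy calculation. The growth rates below are made-up placeholders, not figures from the conversation; the point is only that when the user count and the compute per use each grow exponentially, total compute grows at the product of the two rates.

```python
# Toy illustration of the "two exponentials" compounding:
# user count and compute-per-use each grow exponentially,
# so total compute grows at the product of the two rates.
# All growth rates here are illustrative assumptions.

users = 100.0          # arbitrary starting user base (index)
compute_per_use = 1.0  # arbitrary starting compute per query (index)
user_growth = 2.0      # users doubling per period (assumed)
compute_growth = 3.0   # compute per use tripling per period (assumed)

for period in range(1, 4):
    users *= user_growth
    compute_per_use *= compute_growth
    total = users * compute_per_use
    print(f"period {period}: total compute index = {total:.0f}")
# Total compute grows 6x per period: the two exponentials multiply.
```

With these placeholder rates, total compute rises 6x per period even though neither input grows that fast on its own, which is the shape of the demand curve being described.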

And so we've got to build out all these different projects. And so this last one is an additive on top of everything that they've already announced, all the things that we're already working on with them. - Right. - It's additive on top of that. And it's going to support the, you know, this incredible exponential.

- One of the things you said that's really interesting to me is kind of, you know, they're going to be high probability, multi-trillion dollar company in your mind. You think it's a great investment. At the same time, you know, they're self-building. You're helping them self-build their data centers. So heretofore they've been outsourcing to Microsoft to build the data center.

Now they want to build full-stack factories themselves. - They want to basically have a relationship with us the way that Elon and X have a relationship with us. - Correct. - I mean, Elon and X self-built. - Exactly. - But I think that's also...

- This is a very big deal. - When you think about, when you think about the advantage that Colossus had. - Yeah. - They're building full-stack. That is a hyperscaler. Because if they don't use the capacity, they could sell it to somebody else. - That's right. - And the same way Stargate, they're building monstrous capacity.

They think they'll need to use most of it, but it puts them in a position to sell it to somebody else as well. It sounds very much like AWS or GCP or Azure. That's what you're saying. - Yeah. I think they'll likely use it themselves. And just in the case of X, they'll likely use it themselves.

But they would like to have the same direct relationship with us. - Yes. - Direct working relationship and direct purchasing relationship. Just as what Zuck and Meta have with us, it's exactly direct. Our relationship between us and Sundar and Google, direct. Our partnership with Satya and Azure, direct.

Isn't that right? And so they've gotten to a large enough scale that they believe it's time for them to start building these direct relationships. I'm delighted to support that. And Satya knows it. And Larry knows it. And everybody's aware of what's going on. And everybody's very supportive of it.

So one of the things I find mysterious, right? You know, you just mentioned Oracle 300 billion. Colossus, what they're building. We know what the sovereigns are building. We know what the hyperscalers are building. You know, Sam's talking in terms of trillions. But of the 25 sell-side analysts on Wall Street who cover your stock, if I look at the consensus estimate, it basically has your growth flatlining starting in 2027.

8% growth 2027 through 2030. Okay? Those are 25 people whose only job, what they get paid to do, is forecast the growth rate for NVIDIA. So clearly... We're comfortable with that, by the way. Right, right. Look, we're comfortable with that. Okay? We have no trouble beating the numbers on a regular basis.

Right. No, I understand that. But there is this interesting disconnect. No. Right? I hear it every day on CNBC and Bloomberg. And I think it goes to, you know, some of these questions around, you know, shortages leading to a glut that they don't believe. They say, okay, we'll give you credit for 26, but 27, you know, maybe we'll have too much and you're not going to need that.

But it is interesting to me, and I think it's important to point out that your consensus forecast is that this won't happen. Right? And we also put together a forecast, you know, for the company, taking into account all of these numbers. Mm-hmm. And what it shows me is still, even though we're two and a half years into the age of AI, a massive divergence of belief between what we hear Sam Altman saying, you saying, Sundar saying, Satya saying, and what Wall Street still believes.

And, you know, again, you're comfortable with that. I also don't think it's inconsistent. Okay. So, explain that a little bit. Yeah. So, first of all, for the builders, we're supposed to be building for opportunity. Right. We're builders. Let me give you three points to think through. And these three points, it'll help you, hopefully, be more comfortable with NVIDIA in this future.

So, the first point, and this is the laws of physics point, this is the most important point, that general purpose computing is over. Yes. And the future is accelerated computing and AI computing. Yeah. That's the first point. And so, the way to think about that is, there's how many trillions of dollars of computing infrastructures in the world?

That has to be refreshed. Right. Right. And when it gets refreshed, it's going to be accelerated computing. That's right. And so, the first thing you have to realize is that general purpose computing is over, and nobody disputes that. Everybody goes, yeah, we completely agree with that. General purpose computing is over.

Moore's law is dead. People say these things. And so, what does that mean? So, general purpose computing is going to go to accelerated computing. Our partnership with Intel is recognizing that general purpose computing needs to be fused with accelerated computing to create opportunities for them. Is that right? And so, one, general purpose computing is shifting to accelerated computing and AI.

Two, the first use case of AI is actually already everywhere. Right. It's in search, recommender engines. Isn't that right? In shopping. The basic hyperscale computing infrastructure used to be CPUs doing recommenders. Right. Is now going to GPUs doing AI. Right. So, you just take classical computing, it's going to accelerated computing and AI.

You take hyperscale computing, it's going from CPUs to accelerated computing and AI. And that's the second point. Just feeding the Metas, the Googles, the ByteDances, the Amazons, taking their classical, traditional way of doing hyperscaling and moving it into AI, that's hundreds of billions of dollars. And that may be four billion people on the planet today, if you take TikTok and Meta into account.

That's right. Google into account, who are already demanding workloads that are driven by accelerated computing. That's exactly right. And so, without even thinking about AI creating new opportunities, it's about AI shifting how you used to do something to the new way of doing something. Right. Okay. And then now, let's talk about the future.

Yeah. I just, so far I've only spoken kind of largely about just mundane stuff. Just mundane stuff. The old way is now wrong. You're no longer going to use fuel-lit lanterns. You're going to go to electricity. That's all. Right. You're no longer, you know, prop planes, you're going to go to jets.

That's all. And so, you know, so far, you know, that's all I've talked about. And then, now that the incredible thing is, when you go to AI, when you go to accelerated computing, then what happens? What are the new applications that emerge as a result? And that's all the AI stuff that we're talking about.

Yeah. And that's the, that opportunity, what is it, how, what does that look like? Well, the simple way of thinking about that is where motors replaced labor and physical activity. We now have AI, these AI supercomputers, these AI factories that I talk about. They're going to generate tokens to augment human intelligence, right?

And human intelligence represents, what, 55, 65% of the world's GDP. Let's call it $50 trillion. And that $50 trillion is going to get augmented by something. And so let's use this. Let's come back to a single person. Suppose I were to hire a $100,000 employee, and I augmented that $100,000 employee with a $10,000 AI.

And that $10,000 AI, as a result, made the $100,000 employee twice as productive, three times as productive, would I do it? In a heartbeat. I'm doing it across every single person in our company right now, right? They all have co-agents. That's right. Co-workers. Every single software engineer, every single chip designer in our company already has AIs working with them.

A hundred percent coverage. As a result, the chips we're building are better, the number is growing, and the pace at which we're doing it is faster. And so we're growing faster as a company. As a result, we're hiring more people. Our productivity is greater. Our top line is greater.

Our profitability is greater. What's not to love about that? Now, apply the NVIDIA story to the world's GDP. Yeah. And so what's likely to happen is that that $50 trillion is augmented by, let's pick a number, $10 trillion. That $10 trillion needs to run on a machine. Now, the reason that AI is different than IT in the past is that, in the past, software was written a priori, and then it ran on a CPU.

It runs—a person would operate it. In the future, of course, AI is generating tokens. But a machine has to generate the tokens, and it's thinking. So that software is running all the time, whereas in the past the software was written once, now the software is in fact writing all the time.

It's thinking. In order for the AI to think, it needs a factory. And so let's say that $10 trillion of tokens gets generated at 50% gross margins, and $5 trillion of it needs a factory, needs an AI infrastructure. Right. So if you told me that on an annual basis, the CapEx of the world was about $5 trillion, I would say the math seems to make sense.

Yeah. And that's kind of the future, right? Yeah. Going from general-purpose computing to accelerated computing, replacing all the hyperscales with AI, and then now augmenting human intelligence for the world's GDP. And today that market is about—our estimate is about $400 billion annually. Yeah. So the TAM, you know, is a 4 to 5x increase over where it is today.
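As a sanity check, the round-number arithmetic Jensen walks through above can be written out. These are the figures quoted in the conversation, not forecasts:

```python
# Back-of-the-envelope version of the TAM arithmetic above.
# All inputs are the round numbers quoted in the conversation.

intelligence_gdp = 50e12  # "$50 trillion" of world GDP tied to human intelligence
augmentation     = 10e12  # assumed AI augmentation ("let's pick a number, $10 trillion")
gross_margin     = 0.50   # 50% gross margins on token generation

# At 50% gross margins, the other half of token revenue pays for
# the factory, i.e. the AI infrastructure.
factory_capex = augmentation * gross_margin
print(f"Implied annual AI-factory CapEx: ${factory_capex / 1e12:.0f} trillion")
```

This reproduces the $5 trillion annual CapEx figure in the conversation; the $50 trillion of intelligence-driven GDP is the base the $10 trillion augmentation is drawn from.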

Yeah. Last night, Eddie Wu at Alibaba said, between now and the end of the decade, they're going to increase their data center power by 10x, right? Right. Yeah. You just said how much? 4x. There you go. There you go.

Yeah. They're going to increase power by 10x, and we correlate to power. NVIDIA's revenue is almost correlated to power. Yes. Isn't that right? Yeah. That's going to be his Moore's Law. Another thing—what else did he say? Yeah. He said, "Token generation is doubling every few months." Yeah. What's that saying?

The perf per watt has to keep going up exponentially. That's why NVIDIA's, like, cranking it out on perf per watt. And revenue per watt, you know, watts are basically revenues in this future. Embedded in this assumption, I find it very fascinating in a historical context. Right. For 2,000 years, basically GDP did not grow.

Okay. And then we get the industrial revolution. GDP accelerates. We get the digital revolution. GDP accelerates. And basically what you're saying—and Scott Bessent has said it. He said, "I think we're going to have 4% GDP growth next year." Basically what you're saying is the world's GDP growth is going to accelerate because now we are giving the world billions of coworkers that will do work for us.

And if GDP is an amount of output for a fixed amount of labor and capital, it has to accelerate. It has to. It has to. Look at what's going on with AI. As a result of the technology of AI, and that technology of AI, let's just call it the large language models and all the AI agents, it's now creating a new industry of AI agents.

There's no question about that. Okay. So that's OpenAI, the fastest-growing revenue company in history. Right. And they're growing exponentially. Right? And so AI itself is a fast-growing industry. Because AI needs a factory behind it. Right. An infrastructure behind it. That industry is growing. My industry is growing.

Yes. And because my industry is growing, the industry underneath us is growing. Energy is growing. Right. Power. Shell. This is like renaissance for the energy industry. Isn't that right? Right. Nuclear energy. Gas turbines. I mean, look at all of those companies in the infrastructure ecosystem underneath us. They're doing incredibly well.

Everybody's growing. These numbers have everybody talking about a glut or a bubble. Right? Zuckerberg said last week on a podcast, you know, he said, listen, I think it's quite possible at some point that we will have an air pocket. And Meta may in fact overspend by $10 billion or whatever.

But he said, it doesn't matter. It's so existential to the future of his business that it's a risk that they have to take. But when you think about that, it sounds a little bit like prisoner's dilemma. Right? And walk us again through. These are very happy prisoners. Walk us again through.

Right? Today, our estimate is that we're going to have $100 billion of AI revenue in 2026. Excluding Meta, excluding, you know, the GPUs running recommender engines. Yeah. Okay? So there's... Or search or... Correct. So there's other stuff. Yeah. But let's call it $100 billion. What is that industry anyways?

What is the industry already in hyperscale? Right. What are the hyperscalers, you know, worth... Trillions. Yeah, exactly. By the way, that industry is going to AI. Before anybody starts at zero, you've got to start there. But I think the skeptics would say we need to go from $100 billion of AI revenue in '26 to at least $1 trillion of AI revenue in 2030.

Okay? You just were talking a minute ago about $5 trillion when you look at kind of global GDP. If you did a bottoms up, can you see your way to $1 trillion of AI-driven revenues from $100 billion over the course of the next five years? Are we growing that fast?

Yes. And I would also say we're already there. Okay. So explain that. Because the hyperscalers, they went from CPUs to AI. Okay. Their entire revenue base is all now AI-driven. Correct. You can't do TikTok without AI. Correct. You can't do YouTube Shorts without AI. You can't, you know, you can't do any of this stuff without AI.

The amazing things that Meta's doing for, you know, customized content, personalized content. You can't do that without AI. All of that stuff used to be humans, you know, creating content a priori, creating four choices that are then selected by a recommender engine. Correct. And now it's an infinite number of choices generated by an AI, right?

But those things are already, like we had the transition from CPUs to GPUs largely for those recommender engines. And now they're going… That's fairly new. I would say… In the last three or four years, maybe? Yeah, Zuck would tell you, I was at SIGGRAPH, and Zuck would tell you, you know, they were late getting to GPUs.

For sure. Yeah. GPUs for Meta is, what, a couple years? Okay. A year and a half? It's pretty new. Search with GPUs? For sure. Brand spanking new. So your argument would be the probability that we're going to have a trillion dollars of AI revenues by 2030 is near certain because we're almost… Already there.

Already there. Okay. Let's just talk about incremental from where we are today. Incremental. Now we can talk about incremental. Incremental from where we are today. Right. Exactly. Right? As you do your bottoms up or your tops down, I just heard your tops down about percentage of global GDP. Yeah.

What is the percentage probability that you think will have a glut, will run into a glut in the next three or four or five years? Right? It's a distribution of… We don't know the future. It's a distribution of probabilities. Until we fully convert all general purpose computing to accelerated computing in AI.

Until we do that. Yes. I think the chances are extremely low. Okay. Okay. And that will take a few years? That'll take a few years. Yeah. Let me ask one more. Until all recommender engines are AI-based. Until all content generation is AI-based. Because content generation, consumer-oriented content generation is very largely recommender systems and so on and so forth.

And all of that's going to be AI-generated. Until all of this stuff, what classically was hyperscale, now transitions to AI. You know, everything from shopping to e-commerce. You know, all that stuff. Until everything goes over. Because… But all this new build, right? When we're talking about trillions. We're investing ahead of where we are.

You know, is that like at will? Are you obliged to invest the money even if you see a slowdown or a kind of a glut coming? Or is this one of these things that you're just waving the flag to the ecosystem to say, get out and build. And at some point in time, if we see some of this slowdown, we can always pull back on the level of investment.

Actually, it's the other way because we're at the end of the supply chain, right? And so, we respond to demand. Okay. And right now, all the VCs will tell you. You guys know. The demand. There's a shortage of compute in the world. Not because there's a shortage of GPUs in the world.

Okay? If they give me an order, I'll build it. Right? Over the last couple of years, we've really plumbed the supply chain. So, all of the supply chain behind me, from wafer starts to CoWoS to HBM memories. You know, all of that technology. We've really geared up. Yeah. If we need to double, we'll double.

Yes. Okay? So, the supply chain is ready. Now, we're just waiting for demand signals. And when the CSPs and the hyperscalers and our customers do their annual plan and they give us, you know, their forecast, we respond to that. And we build to that. Now, what's going on, of course, is that every one of their forecasts that they provide us turns out to have been wrong.

Right. Because they under-forecasted. And so, now we're always in a scramble mode. And so, we've been in a scramble mode now for, you know, a couple of years. Yeah. And it's, whatever forecast we've been given has been always significant increase from last year, but not enough. Satya last year seemed to be pulling back a little bit.

You know, seemed to be, you know, some people called him the adult in the room tamping down kind of some of these expectations. A few weeks ago, he said, "Hey, I've also built two gigs this year and we're going to accelerate in the future." Do you see some of the traditional hyperscalers that may have been moving a little slower than, let's call it a CoreWeave or an Elon X or maybe a little slower than Stargate?

Do you see them all? It sounds like, to me, they're all leaning in more now and they're all also accelerating. Because of the second exponential. Okay. Yeah. We've already had one exponential we were experiencing, which was the adoption rate of AI, the engagement of AI was growing exponentially. Yes.

The second exponential that kicked in was reasoning. Yeah. That was the conversation we had one year ago. One year ago. Yeah. We said, "Hey, listen, the moment you take AI from one shot, memorizing and generalizing, that's basically pre-training." Yeah. So, memorizing an answer, you know, what's eight times eight, just memorize it, okay?

And so, memorizing an answer and generalizing, that was one shot AI. Now, a year ago, reasoning came about. For sure. Research came about. Tool use came about. And now you're a thinking AI. One billion X. Isn't that right? It's going to use a lot more compute. Certain hyperscale customers, to your point, had internal workloads that they had to migrate anyways from general purpose computing to accelerated computing.

So, they built through the cycle. I think maybe some hyperscalers had different workloads, so they weren't quite sure how quickly they could digest it. That's right. One has now concluded that they dramatically under-built. One of the applications, my favorite, is just good old-fashioned data processing. Structured data and unstructured data.

Just good old-fashioned data processing. Yes. And very soon, we're going to announce a very big initiative of accelerated data processing. Data processing represents the vast majority of the world's CPU workload today. It still runs almost completely on CPUs. You know, if you go to Databricks, it's mostly CPUs. You go to Snowflake, mostly CPUs.

SQL processing at Oracle, mostly CPUs. Everybody's using CPUs to do SQL, structured data. In the future, that's all going to move to AI data. That is one gigantic, massive market that we're going to move to. But everything that NVIDIA does requires acceleration layers and requires, you know, domain-specific data processing.

Recipes. Recipes. We've got to go build that. But that's coming. So, one of the pushbacks—you know, I turned on CNBC yesterday. They were like, "Oh, glut, bubble." When I turned on Bloomberg, it was about round-tripping and circular revenues. Okay? And so, for the benefit of people, you know, at home, you know, these arrangements are when companies enter into a misleading transaction that artificially inflates revenue without any underlying economic substance.

So, in other words, growth's propped up by financial engineering, not by customer demand. And the canonical case everybody's referencing, of course, is Cisco and Nortel from the last bubble 25 years ago. Mm-hmm. So, when you guys or Microsoft or Amazon are investing in companies that are also your big customers—in this case, you guys investing in OpenAI—while OpenAI is buying tens of billions of chips, just remind us and remind everybody else, like, what are the analysts on Bloomberg and otherwise getting wrong when they're hyperventilating about circular revenues or about round-tripping?

10 gigawatts is like $400 billion, right? Yeah. Something like that. And that $400 billion will have to be largely funded by their offtake, right, their revenues, which is growing exponentially. It has to be funded by their capital, the money they've raised through equity, and whatever debt they can raise.

Those are the three vehicles. And the equity that they could raise and the debt that they could raise has something to do with the confidence of the revenues that they could sustain. For sure. And so, smart investors and smart lenders will consider all of these factors. Fundamentally, that's what they're going to do.

Mm-hmm. That's their company. It's not my business. And of course, we have to stay very close to them to make sure that we build in support of their continued growth. Mm-hmm. Okay? And so, there's the revenue side of it, and it has nothing to do with the investment side of it.

The investment side of it is not tied to anything. It's an opportunity to invest in them. And as we were mentioning earlier, this is likely going to be the next multi-trillion dollar hyperscale company. And who doesn't want to be an investor in that? You know, my only regret is that they invited us to invest early on.

I remember those conversations. And we were so poor, you know, that we were so poor we didn't invest enough. And I should have given them all my money. And the reality is, if you guys don't do your jobs and keep up with, you know, if Vera Rubin doesn't turn into a good chip, they can go get other chips and put them in these data centers.

That's right. Yeah, that's right. There's no obligation that they have to use your chips. That's right. And like you said, you're looking at this as an opportunistic equity investment. The other thing I would say here— And we've made some great investments. I've got to put it out there. Right.

You know? We invested in XAI. We invested in CoreWeave. Incredible. Yeah. How smart was that? Yeah. As I go back to this, the other fundamental thing, it seems to me, is, you know, you're putting it out there. You're saying this is what we're doing. And the underlying economic substance here, right?

It's not that you're just somehow sending revenues back and forth between the two companies. We got people sending money every month for ChatGPT, a billion and a half monthly users using the product. You just said every enterprise in the world is either going to do this or they will die.

Every sovereign views this as existential to their national security and economic security as nuclear power was. What person, company, or nation says intelligence is basically optional for us? Right. I mean, it's fundamental to them. Well, I beat the— It's the automation of intelligence. I beat the demand question to death.

So, let's jump in a little bit to system design. And I'm going to turn to Clark here in a second on that. But in 2024, you switched to your annual release cycle, right, with Hopper. You then had a massive upgrade, which required, you know, significant data center overhaul with Grace Blackwell in 2025.

And in the back half of 26, we're going to get Vera Rubin. 27, we'll get Ultra. And 28, Feynman. How is the annual release cycle going? Okay. What were the main goals of going to an annual release cycle? And did AI inside NVIDIA allow you to execute the annual release cycle?

Yeah, the answer is yes on the back of the last question. Without it, NVIDIA's velocity, our pace, our scale would be limited. And so, without AI these days, it's just simply not possible to build what we build. Now, why do we do it? There's something that—remember, Eddie said it at his earnings call or his conference.

Satya has said it. Sam has said it. The token generation rate is going up exponentially. Yeah. And the customer use is going up exponentially. I think they're at 800 million weekly active users or something like that. Yes. I mean, that's— less than two years from ChatGPT, right? And each of those users is generating massively more tokens because they're using inference-time reasoning.

That's right. Exactly. And so, the first thing is, because the token generation rate is going up so incredibly—two exponentials on top of each other—we have to—unless we increase the performance at incredible rates, the cost of token generation will keep growing because Moore's Law is dead, right? Right. Because transistors basically cost the same every single year now.

And power is largely the same. And between those two fundamental laws, unless we come up with new technologies to drive the cost down, even if there's a slight difference in growth, you give somebody a discount of a few percent, how's that going to make up for two exponentials? Right.

And so, we have to increase our performance annually at a pace that keeps up with that exponential. Yeah. So, in the case of going from, I guess, Kepler all the way to Hopper, it was probably 100,000x. Mm-hmm. That was the beginning of the AI journey for NVIDIA.

Mm-hmm. 100,000x in 10 years. Okay? Between Hopper and Blackwell, we increased—because of NVLink 72—30x in one year. And then we'll get another x-factor again with Rubin. Mm-hmm. And then we'll get another x-factor with Feynman. And the way we do that is because the transistors aren't really helping us very much, right?
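
A quick derivation (the 100,000x and 30x figures are from the conversation; the annualized rate is my arithmetic):

```python
# Annualize the quoted generational gains.
kepler_to_hopper_total = 100_000        # ~10 years, per the conversation
years = 10
annual_rate = kepler_to_hopper_total ** (1 / years)

hopper_to_blackwell = 30                # delivered in roughly one year

print(f"Kepler->Hopper averaged ~{annual_rate:.2f}x per year")
print(f"Hopper->Blackwell was {hopper_to_blackwell}x in a single year")

# 100,000^(1/10) ≈ 3.16x/year, so a 30x single-year jump is close to
# three years of the old trend landing in one release cycle.
```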

Moore's Law is largely over—the density is still going up, but the performance is not. So, if that's the case, one of the things we have to do is break the entire problem down at the system level and change every chip at the same time, and all the software stack and all the systems, all at the same time.

The ultimate extreme co-design. Right. Nobody's ever co-designed at this level before. Right. We change the CPU, revolutionize the CPU, the GPU, the networking chip, the NVLink scale-up, the Spectrum-X scale-out. I heard somebody say, "Oh, yeah, it's just Ethernet." Yeah, right. Okay. So, Spectrum-X Ethernet is not just Ethernet, and people are starting to discover, "Oh, my God, the x-factor is pretty incredible." Right.

You know, NVIDIA's Ethernet business, the just Ethernet business, is the fastest-growing Ethernet business in the world. Yeah. And so, scale-out. And, of course, now we have to build even larger systems, so we scale across multiple AI factories connected together. And then we do this at an annual pace. And so, we now have an exponential of exponentials going ourselves from technology.

And that allows our customers to drive the cost of tokens down, keep making those tokens smarter and smarter with pre-training and post-training and thinking. And as a result, when the AI gets smarter, they get more used. When they get more used, they're going to grow exponentially. For people who may not be as familiar, what does extreme co-design mean?

Extreme co-design means that you have to optimize the model, algorithm, system, and chip at the same time. Right. You have to innovate outside the box. Right. Because Moore's Law said, you just have to keep making the CPU faster and faster. Everything got faster. You were innovating within a box.

Mm-hmm. Just make that chip faster. Yeah. Well, if that chip doesn't go any faster, then what are you going to do? Innovate outside the box. Mm-hmm. And so, NVIDIA really changed things because we did two things. We invented CUDA, invented GPUs, and we invented the idea of co-design at a very large scale.

Mm-hmm. There's all these industries we're in. We're creating all these libraries and co-design. Number one, full stack. Extreme is even beyond software and GPUs, it's now at the data center level, switches and networking and, you know, all of that software in the switches and the networking and the NICs, the scale-up, the scale-out, optimizing across all of that.

As a result of that, Blackwell to Hopper is 30x. Mm-hmm. No Moore's law could possibly achieve that. Right, right. And so, that's extreme. And that comes from the extreme co-design. Extreme co-design. That's because NVIDIA has—that's why we got into networking and switching and scale-up and scale-out and scale-across and building CPUs and building GPUs and building NICs.

You know, that's the reason why NVIDIA is so rich in software and people—we check in more open-source software in the world than just about anybody except one other company. I think it's AI2 or something like that. Mm-hmm. And so, we have such enormous richness of software, and that's just in AI.

Mm-hmm. Don't forget computer graphics and digital biology and autonomous vehicles. And, you know, the amount of software we produce as a company is incredible. Mm-hmm. That allows us to do deep and extreme co-design. I heard from one of your competitors, you know, yes, he's doing this because it helps drive down the cost of token generation.

But at the same time, your annual release cycle makes it almost impossible for your competitors to keep up. The supply chain gets locked up more because you're giving three-year visibility to your supply chain. So, now the supply chain has confidence as to what they can build to. So, do you— Well, Brad, think about this.

Wait, wait, before you ask the question. Yeah. Think about this. In order for us to do several hundred billion dollars a year of AI infrastructure build-out— Yes. Think about how much capacity we had to go start a year ago. Yes. We're talking about building hundreds of billions of dollars of wafer starts and DRAM buys and—are you guys talking?

Yeah. This is now at a scale that hardly any company can keep up with. So, would you say your competitive moat is greater today than it was three years ago? Yeah. You know, first of all, there's just more competition than ever before, but it's harder than ever before. And the reason why I say that is because wafer costs are getting higher, which means that unless you do co-design at an extreme scale, you're just not going to be able to deliver the X-factor growth, number one.

And so, unless you're working on six, seven, eight chips a year, that's an amazing thing. It's not about building an ASIC. Yeah. It's about building an AI factory. System. And this system has a lot of chips in it, and they're all co-designed. Yeah. And together, they deliver that, you know, that 10X factor that we get almost regularly.

Okay? So, number one, the co-design is extreme. The second thing is that the scale is extreme. When your customers deploy a gigawatt, that's 400,000, 500,000 GPUs. Right. Getting 500,000 GPUs to work together is a miracle. Right. I mean, it's just a miracle. And so, your customers are taking an enormous risk on you to go buy all of this.
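
For readers checking the gigawatt figure: a rough conversion, assuming (my assumption, not stated in the conversation) roughly 2-2.5 kW of all-in facility power per GPU:

```python
# Back-of-the-envelope: GPUs per gigawatt of facility power.
# The 2,000-2,500 W/GPU all-in figure (GPU plus CPU, networking,
# cooling overhead) is an assumption for illustration.

GIGAWATT = 1_000_000_000  # watts

for watts_per_gpu in (2_000, 2_500):
    gpus = GIGAWATT // watts_per_gpu
    print(f"at {watts_per_gpu} W per GPU: ~{gpus:,} GPUs per gigawatt")

# Lands in the 400,000-500,000 range quoted in the conversation.
```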

You've got to ask yourself, what customer would place a $50 billion PO on an architecture? Right. On an unproven architecture. That's right. A new one. Right. A new architecture. Yeah. You just taped out a whole new chip. You're as excited as you are about it, you know? Right. And everybody's excited for you.

And you just show the first silicon. Right. Who's going to give you $50 billion PO? Right. And why would you start $50 billion worth of wafers for a chip that just taped out? Oh. But for NVIDIA, we could do that. Because our architecture is so proven. So, the scale of our customers is so incredible.

Now, the scale of our supply chain is incredible. Right. Who's going to start all of that stuff, pre-build all of that stuff, for a company, unless they know that NVIDIA can deliver through? Right. Isn't that right? If we believe that we can deliver through to all of the customers around the world, they're willing to start several hundred billion dollars at a time.

Yeah. The scale's incredible. To that point, you know, one of the biggest key debates and controversies in the world is this question of GPUs versus ASICs. Hmm. Google's TPUs, Amazon's Trainium. And it seems like everyone from ARM to OpenAI to Anthropic is, you know, rumored to be building one.

Mm-hmm. Last year, you said, you know, we're building systems, not chips. And you're driving performance through every single part of that stack. You also said that many of these projects may never get to production scale. But given like the seeming… Most of them. Most of them. Yeah. Given the seeming success of Google's TPUs, you know, how are you thinking about this evolving landscape today?

Yeah. First of all, the advantage that Google had is foresight. Mm-hmm. Remember, they started TPU 1 before everything started. Right. You know, this is no different than a startup. You're supposed to build a startup, you're supposed to create a startup before the market grows. Mm-hmm. You're not supposed to come up as a startup when the market's a trillion dollars large.

Mm-hmm. You know, this fallacy, and all VCs know this, this fallacy that a large market, if you could just take a few percent market share, you could be a giant company. Right. That's actually fundamentally wrong. Yeah. You're supposed to take 100% of a tiny company. Yeah. A tiny industry, which is what NVIDIA did, right?

Mm-hmm. Which is what TPUs did. There were only the two of us. But you better hope that that industry gets really big. That's right. You're creating an industry. That's right. And I mean, the NVIDIA story, you know, which is— And so that's the challenge for people who are building ASICs now.

It looks like a juicy market. Mm-hmm. But remember, this juicy market has evolved from a chip called a GPU to, I just described, an AI factory. And you guys just saw, I just announced a chip called CPX for context processing and diffusion video generation. A very specialized workload, but an important workload inside the data center.

It's maybe a prelude to AI data-processing processors. Because guess what? You need long-term memory. You need short-term memory. The KV cache processing is really intense. Mm-hmm. AI memory is a big deal. Mm-hmm. You know, you kind of like your AI to have good memory. Mm-hmm. And just dealing with all the KV caching around the system, really complicated stuff.

Maybe it wants to have a specialized processor. Mm-hmm. Maybe there's other things, right? So, you see that NVIDIA's—our viewpoint is now not GPU. Our viewpoint is looking at the entire AI infrastructure and what does it take for these incredible companies to get all of their workload through it, which is diverse and changing.

Mm-hmm. Look at the transformer. The transformer architecture is changing incredibly. If not for the fact that CUDA is easy to operate on and iterate on, how do they try all of their vast number of experiments to decide which one of the transformer versions, what kind of attention algorithm to use?

Mm-hmm. How do you disaggregate? CUDA helps you do all that because it's so programmable. Mm-hmm. And so, the way to think about our business now is you look at when all of these ASIC companies or ASIC projects start three, four, five years ago, I got to tell you, that industry was super adorable and simple.

Mm-hmm. There was a GPU involved. Right. But now, it's giant and complex. Mm-hmm. And in another two years, it's going to be completely massive. The scale is going to be so large. Mm-hmm. And so, I think that the battle of getting into a very large market as a nascent player is just hard, you know, as you guys know.

Even for the customers who perhaps are successful with ASICs, isn't there an optimal balance in their compute fleet? Like, you know, I think investors are very much binary creatures. They just want a yes or no black and white answer. Yeah. But even if you get the ASIC to work, isn't there an optimal balance because you think, I'm buying the NVIDIA platform.

CPX is going to come out for pre-fill for, you know, for video generation. Maybe a decode, you know, a platform. A video transcoder. Exactly. Yeah. Yeah. So, there will be, like, many different chips or parts to add to the NVIDIA ecosystem. Accelerated compute fleet, right? Yeah. As new workloads are, you know, are born.

That's right. And, you know, people trying to tape out new chips today are not really anticipating what's happening a year from now. They're just trying to get a chip to work. That's right. Let me say it another way: Google is a big GPU customer.

If you look at, and Google is a very special case. Right. I mean, we just have to, you know, show respect where respect is really deserved. I mean, TPU is on TPU 7. Yes. Right? And so, and it's a challenge for them as well, right? And so, the work that they do is incredibly hard.

So, the first thing, let me do a, you know, remember, there are three categories of chips. There's the category of chips that are architectural: x86 CPUs, ARM CPUs, NVIDIA GPUs. And each has an ecosystem above it. The architecture has rich IP and a rich ecosystem, very complicated technology.

It's built by the owners like us. Okay? There's ASICs. I worked for the original company, LSI Logic, who invented the idea of ASICs. As you know, LSI Logic is not here anymore. Right. And the reason for that is because ASICs are really fantastic when the market size is not very large.

It's easy to have somebody be a contractor to help you put the packaging of all that stuff together and do the manufacturing on your behalf. And they charge you 50, 60 points of margin. But when the market gets large for an ASIC, there's a new way of doing things called COT, customer-owned tooling.

And who would do something like that? Apple's smartphone chip, the volume is so large, they would never go pay somebody else 50, 60% gross margin to be an ASIC. They do customer-owned tooling. And so, where will TPUs go when it becomes a large business? Customer-owned tooling. There's no question about it.

And so, but there's a place for ASICs. Video transcoders will never be too large. SmartNICs will never be too large. And so, when there's 10, 12, 15 ASIC projects going on at an ASIC company, I'm not surprised by that. Because there are probably five SmartNICs and four transcoders. And are they all AI chips?

Of course not. And if somebody were to build an embedding processor for a specific recommender system, and that was an ASIC, of course you could do that. But would you do that as the fundamental compute engine for AI that's changing all the time? You've got low latency workload. You've got high throughput workload.

You have token generation for chat. You have thinking workload. You have AI video generation workload. Is there a—you know, now you're talking about a very— That's the workhorse backbone of your accelerator. That's what NVIDIA is all about. Yeah. Again, dumb this down. It's like playing chess and checkers, right?

The fact of the matter is the folks who are starting ASICs today, whether it's Trainium or whether it's some of these other inference accelerators, etc., they're building a chip that's a component of a much larger machine. You've built a very sophisticated system, platform, factory, whatever you want to call it.

And now you're opening up a little bit, right? So you mentioned CPX GPU, right? That is—it seems to me that in some ways you're disaggregating the workloads to the best slice of the hardware for that particular demand. Look what we did. We announced this thing called Dynamo. Right. Disaggregated orchest—AI workload orchestration.

Right. And we open sourced it because the future AI factory is disaggregated. Right. And you launched NVLink Fusion. Yeah. That even said to your competitors— Yeah. —including Intel, which you just invested in— That's right. You know, the way in which you participate in this factory that we're building, because nobody else is crazy enough to try to build the entire factory, but you can plug into that if you have a product that's good enough, compelling enough, that the end user says, Hey, we want to use this instead of an ARM CPU, or we want to use this instead of your inference accelerator, et cetera.

Is that correct? Yeah. We're delighted to connect you in. Yeah. Tell us a little bit more— NVLink Fusion. Such a great idea. And we're so happy to partner with Intel on that. It takes the Intel ecosystem. You know, most of the world's enterprise still runs on Intel. It takes the Intel ecosystem, takes the NVIDIA AI ecosystem, accelerated computing, and we fused it together.

Right. And we did that with ARM. Right? And there are several others we're going to be doing it with. And that opens up opportunities for both of us. It's a win for both of us. Great, great win. I'll be a large customer of theirs, and they're going to expose us to a much, much larger market opportunity.

Yeah. Deeply related to this idea is the argument you've made that kind of shocks some people, where you say competitors building ASICs—whose chips are all already cheaper today—could literally price them at zero, and you would still buy an NVIDIA system, because given the total cost of operating that system—power, data center, land, et cetera—the intelligence out is still a better bet than buying a chip even if it's given to you for free.

Because the land, power, and shell is already $15 billion. Right. Yeah. So we've taken a crack at kind of the math on that, but walk us through your math, because I think for people who don't spend as much time here, it just doesn't compute. How could it possibly be that, with your competitors' chips priced at zero, and given the expense of your chips, it still is a better bet?

There's two ways to think about it. One way is—let's just think about it from a perspective of revenues. Yes. Okay? So everybody's power limited. And let's say you were able to secure two more gigawatts of power. Well, that two gigawatts of power you would like to have translate to revenues.

Yes. So your performance or tokens per watt was twice as high as somebody else's token per watt because I did deep and extreme co-design, and my performance was much higher per unit energy. And my customer can produce twice as much revenues from their data center. And who doesn't want twice as much revenues?

And if somebody gave them a 15% discount, the difference between our gross margins—we call it 75 points—and somebody else's gross margins, call it 50 to 65 points—it's not so much as to make up for the 30 times difference between Blackwell and Hopper. Right. Let's pretend Hopper—Hopper's an amazing chip, an amazing system.

Let's pretend somebody else's ASIC is Hopper. Yeah. Blackwell's 30 times. So you've got to give up 30x revenues in that one gigawatt. It's too much to give up. So even if they gave it to you for free, you only have two gigawatts to work with. Right. Your opportunity cost is so insanely high, you would always choose the best perf per watt.
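
The opportunity-cost argument can be sketched numerically. Every number below is a placeholder assumption (the 30x comes from the conversation; the dollar figures are invented for illustration):

```python
# Power-limited economics: why a free slower chip can still lose.
# Assumptions for illustration only: $10B/year of revenue per gigawatt
# on the slower (Hopper-class) platform, a 30x performance advantage,
# and $15B of spend on the faster system.

power_budget_gw = 2                    # fixed: all the power you secured
revenue_per_gw_slow = 10e9             # $/year per GW, slower platform
perf_advantage = 30                    # faster platform, per the 30x claim
system_cost_fast = 15e9                # what you pay for the faster system

revenue_free_chip = power_budget_gw * revenue_per_gw_slow            # chips cost $0
revenue_fast = power_budget_gw * revenue_per_gw_slow * perf_advantage

net_advantage = (revenue_fast - system_cost_fast) - revenue_free_chip
print(f"net advantage of the paid, faster system: ${net_advantage / 1e9:.0f}B/year")
```

Under these placeholder numbers the paid system comes out $565B/year ahead: the free chips save $15B of spend but forgo $580B of revenue, which is the sense in which "even free" doesn't compete.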

Right. So I heard this from one of the CFOs at one of the hyperscalers, that given the performance improvement, right, that's coming out of your chips, again, precisely to that point, tokens per watt, and power being the limiting factor, right, that they had to upgrade to the new cycle.

So when you look ahead at Rubin, at Rubin Ultra, at Feynman, does that trajectory continue? We're building, what, six, seven chips a year now? Yeah. And each one of the— As part of that system. That's right. And that system, software is everywhere. Right. And it takes the integration and the optimization across all of those six, seven chips to deliver on the 30x of Blackwell.

Right. Now imagine I'm doing this every single year. Bam, bam, bam, bam, bam, bam. And so if you build one ASIC in that soup of ASICs, in that soup of chips, and we're optimizing across that, you know, it's a hard problem to solve. This does bring me back to where we started about the competitive moat.

We've been covering this as investors for a while. We're investors throughout the ecosystem, including competitors of yours, you know, from Google to Broadcom. But when I really just first principles around this— Excellent comments. —and say, are you increasing or decreasing your competitive moat? You move to an annual cadence.

You're co-developing with a supply chain. The scale is massively bigger than anybody anticipated, which requires scale, both of balance sheet and of development. Right. The moves you made both through acquisition and organically with things like NVLink Fusion, CPX, which we just talked about. All of those things together cause me to believe that your competitive moat is increasing vis-a-vis, at least insofar as building out the factory or the system.

It's at least surprising, and I think interesting, that your multiple is much lower than most of those other companies. And I think part of that has to do with this law of large numbers: a $4.5 trillion company couldn't possibly get any bigger. But I asked you this a year and a half ago.

As you sit here today, if the market's going to—AI workloads are going to 10x or 5x, you know, we know what CapEx is doing, etc. Is there any conceivable world in your mind where your top line in five years isn't 2 or 3x bigger than it is in 2025?

Like, what's the probability that it's actually not much higher than it is today, given those advantages? I'll answer it this way. Our opportunity, as I described it, is much larger than the consensus. I'll say it here. I think NVIDIA will likely be the first $10 trillion company. And I've been here long enough—it wasn't that long ago, just a decade ago, as you well remember—that people said there could never be a trillion-dollar company.

Now we have 10. Right? And today people— But the world's bigger. Right. And today—this is back to the exponentials around GDP and the growth rate. The world is bigger. And people misunderstand what we do. They remember we're a chip company. Right. And we build chips. Boy, do we build chips.

We build the most amazing chips in the world. But NVIDIA is really an AI infrastructure company. You know, we are your AI infrastructure partner. And our partnership with OpenAI is a perfect demonstration of that. Yeah. That we are their AI infrastructure partner. And we work with people in a lot of different ways.

You know, you would—we don't require anybody to buy everything from us. We don't require that they buy the full rack. They could buy a chip. They could buy a component. They could buy our networking. They could buy our—we have customers buying only our CPU. You know, just buy our GPUs and buy somebody else's CPUs and somebody else's networking.

You know, we're kind of okay selling any way you like to buy. You know, my only request is just buy a little something from us, you know? You said, you know, this isn't just about better models. We also have to build. We have to—we have to have world-class builders.

And you said, you know, the most world-class builder maybe that we have in the country is Elon Musk. And we talked about Colossus I and what he—you know, what he was doing there, standing up a couple hundred thousand, you know, at the time, H100s, H200s in a coherent cluster.

Now he's working on Colossus II, you know, which may be 500,000 Grace Blackwell GPUs, millions of H100 equivalents, in a coherent cluster. I would not be surprised if he gets to a gigawatt before anybody else does. Right. In one— Yeah, so say a little bit about that. The advantage of being, you know, the builder who, you know, isn't just building the software and the models, but understands what it takes to build those clusters.

Well, you know, these AI supercomputers are complicated things. The technology is complicated. Procuring it is complicated because of financing issues. Securing the land, power, and shell, and powering it, is complicated. Building it all, bringing it all up. I mean, this is unquestionably the most complex systems problem humanity has ever undertaken.

And so, Elon has a great advantage in that, in his head, all of these systems are interoperating. And the interdependencies, you know, reside in one head. Yes. Including the financing. Yes. And so— He's a big GPT. Yeah. He's a big supercomputer himself. He's the ultimate—yeah, the ultimate GPU. Yeah.

Yeah. And so, he has a great advantage there. Yeah. And he has a great sense of urgency. Yeah. He has a real desire to build it. And so, when will comes together with skill, you know, unbelievable things can happen. Yeah. Yeah. Quite unique. Something you've been so involved in is—I want to talk about sovereign AI.

I want to talk about China and the global AI race that's going on, you know. And when I look back at you 30 years ago, you couldn't have imagined you were going to be hanging out in palaces with Emirs and the king this week. And you're at the White House all the time.

The president has said that you and NVIDIA are critical to US, you know, national security. So, when you look at that, first just contextualize for me, like, it's hard to believe that you would be in those places if sovereigns didn't view this at least as existential, as important, as maybe we did nuclear in the 1940s.

Right? We don't have a Manhattan Project today at least funded by the government, but it's funded by NVIDIA, it's funded by OpenAI, it's funded by Meta, it's funded by Google. We have companies today the size of nation states, and thank God for America, right, who are funding something that it appears to me presidents and kings think is existential to their future economic and national security.

Would you agree with that? Nobody needs atomic bombs. Everybody needs AI. Well said. Okay. Hear, hear. Yeah. Hear, hear. And so that's a very, very large difference. AI, as you know, it's modern software. I just, that's where I started, from general purpose computing to accelerated computing, from human-written code, one line at a time, to AI-written code.

That foundation can't be forgotten. We've reinvented computing. It's not a new species on earth. Yeah. We just reinvented computing. And everybody needs computing. It needs to be democratized. Which is the reason why everybody, all of these, all of the countries realize they have to get into the AI world.

Because everybody needs to stay in computing. There's nobody in the world that says, guess what? You know, I used to use computers yesterday. I'm pretty good with, you know, clubs and fire tomorrow. You know? And so everybody needs to move into computing. It's just, it's just being modernized, that's all.

Okay. Number one. It is the case that in order to participate in AI, you have to encode within AI your history, your culture, your values. And of course, AI is getting smarter and smarter so that even the core AI is able to learn these things fairly quickly. You don't have to start from the ground, you know, from ground zero.

And so I think that every country needs to have some sovereign capability. I recommend that they all use OpenAI. They all use Gemini. They all use, you know, these open models, use Grok. And I think I recommend they all do that. I recommend they all use Anthropic.

But they should also dedicate resources to learn how to build AI. And the reason for that is because they need to learn how to build it not just for language models, but they need to build it for industrial models, manufacturing models. National security models. National security models. There's a whole bunch of intelligence they had to go cultivate themselves.

Right. So they ought to have sovereign capability. Every country should develop it. And is that what you see? Is that what you're hearing around the world? They all realize it. They all realize it. They all realize it. And they all are going to be customers of OpenAI, Anthropic, and Grok, and Gemini.

But they all really need to also build their own infrastructure. And this is the big idea that what NVIDIA does is we're building infrastructure. Just as every country needs energy infrastructure, the communications and internet infrastructure, now every single country needs AI infrastructure. So let's start with the rest of the world.

You know, our good friend David Sachs, the AI czar, who's doing a heck of a job. We are so lucky to have David and Sriram in Washington, D.C. working on AI in the AI czar's office. What a smart move by President Trump to put them in the White House. Because during this pivotal time, the technology is complicated.

Sriram is the only person in Washington, D.C. that I think knows CUDA. Yeah. Which is strange anyways. But I just love the fact that during this pivotal time when technology is complicated, policy is complicated, the impact to the future of our nation is so great that we have somebody who is clear-minded, dedicated, taking the time to understand the technology, and thoughtful to help us through that.

And it would seem to me, again, going back to the Manhattan Project analogy, that you have a president who understands how existential this is. You have governors like Greg Abbott in Texas who want to remove regulations to accelerate because they understand how important it is. You have Secretary Wright at Energy, Doug Burgum at Interior, and Lutnick at Commerce, who also understand how important this is.

How pro-energy they are. Could you imagine? Could you imagine the alternative? If we had an administration right now that was not pro-energy and didn't want energy to grow in our nation so that we could have AI leadership? I find it— I find it— I just can't even think about it.

I find it ironic that just a couple years ago we were saying, "China's building a hundred nuclear reactors. They're so far ahead of us." Like, that's the primitive to AI. But now you have people—when we go to build it, everybody says, "Oh, it's a glut." Right? Like, it seems to me that this is something that the government—it is in their interest.

And we have industry and government working together in a way that I haven't seen in a long time. You've been around a long time. You're very close with President Trump at this stage. Help us understand, like, what is the nature of industry-government relationships? We saw that dinner last week with all the CEOs.

You know, you spent a lot of time. Is it unique? Have you seen anything like this in your career over the last 30 years? It was hard to go to D.C. in the past, as you know. Getting an appointment is almost impossible. Right. President Trump has an open door to leaders who want to come in and help them understand the future.

This is an administration that believes in growth. Right. Fundamentally, President Trump wants America to grow. Yeah. If we can grow economically, we will be strong militarily. If we could grow economically, we will be secure. I've never met somebody who is secure who is poor. Being rich as a nation is an essential part of national security.

And he knows that. He also wants America to win the AI race. This is going to be a very long-term race. And he understands that this is a pivotal time. He wants the technology industry to run. He wants everybody in the world to be built on American technology. Right.

These are sensible, logical things. You know, the opposite is strange to me. If I take everything and I just reversed it, we want our country not to grow. And because we don't want our country to grow, we don't need any energy because we know we need energy to grow.

And so let's not have any energy. And in fact, we don't want our technology industry to lead. He understands that our technology industry is our national treasure. Correct. And technology, like corn and steel and things in the past, is now a fundamental trade opportunity. It's an essential part of trade.

And why would you not want American technology to be coveted by everyone so that it could be used for trade? Right. So let's talk about, you know, the internet. Yeah. Google spread around the world. Yeah. We had democratic values spread around the world by way of search. And Google didn't have to go to Washington to get permission to do it.

It just happened. We diffused our technology around the world. David Sacks has been crystal clear about the need to accelerate export licenses so that the American AI stack wins around the world. Right? We're talking chips. We're talking models. We're talking data centers, et cetera. We know that a year and a half ago that wasn't happening.

There was a concept that was called small yard, tall fence, or something like that. A small yard, tall fence. Yes. And the irony of it was that it was described in such a way, and recommended in policy in such a way, that it was a small yard and tall fence around America.

That was the strange part. I think President Trump's got it right that we want to maximize exports. We want to maximize American influence around the world. We're supposed to maximize those things, not minimize them. And do you see those licenses coming? Are you seeing the acceleration in Washington? I know it's being said at the top, but are you seeing it flow down through government in a way that's accelerating us around the world?

Secretary Lutnick was all over it. Great. Yeah. So now let's talk about China. You know, what most people may not realize is I think you understand China as well as any leader in the United States. We've been there for 30 years. Been there for 30 years. What most people don't realize is up until a couple of years ago, you had dominant market share within China. 95% market share.

Right. 95% market share in the most important thing, arguably. 95% market share, just like in the United States. And you have said that the biggest own goal we as a country could commit, under the guise of somehow trying to slow them down, is that we've unilaterally disarmed: we forced Nvidia out of China, which has allowed Huawei to accelerate on the back of monopoly profits within China.

And I just saw this morning, you're seeing announcements out of Huawei and Baba and others that they're going to build data centers around the world now. Huawei has a three-year plan to pass Nvidia, funded by the monopoly profits in the biggest AI market in the world. So it's looking like your admonition that this is a huge mistake to hand China, you know, monopoly markets is coming true.

The president said, you know, after kind of the ban on H20s, now we have a situation where you can sell, you know, chips to China, but there's a 15% export tax. But now it appears that the Chinese, perhaps offended by statements out of the United States, are saying, no, Nvidia is not allowed to sell here now.

Where do we stand today between Nvidia and China? And can you reiterate kind of what you think we as a country should be doing to put ourselves in the best position to win the AI race around the world? We have a competitive relationship with China. We should acknowledge that China rightfully should want their companies to do well.

Which I don't, I don't for a second begrudge them for that. They should do well. They should give them as much support as they like. It's all their prerogative. Their prerogative. And don't forget that China has some of the best entrepreneurs in the world because they came from some of the best STEM schools in the world.

Yes. They're the most hungry in the world. Yes. Nine-nine-six, as you know. Mm-hmm. This is a very- Producing the most AI engineers in the world. Nine-nine-six, so the audience knows: nine in the morning to nine at night, six days a week. Yeah. That is their culture.

Yeah. Okay. We're up against a formidable, innovative, hungry, fast-moving, under-regulated competitor. Yeah. Okay. People don't realize this. They are very lightly regulated. Right. Less regulated, ironically, than we are in a capitalist system. That's right. That's right. People think that they're centrally governed. But remember, the genius of China was distributed economic systems.

Yeah. And so all of these 33 provinces and all of the mayors' economies have driven an enormous amount of internal competition, internal economic vibrancy, which, of course, has some of its side effects. But this is a vibrant, entrepreneurial, high-tech, modern industry. And some of the things I heard: one, that they could never build AI chips.

That just sounded insane. Two, that China can't manufacture. China can't manufacture? If there's one thing they could do is manufacture. Yeah. And three, they're years behind us. Is it two years, three years? Come on. They're nanoseconds behind us. Nanoseconds. Yeah, they're nanoseconds behind us. And so we've got to go compete.

Yeah. We've got to go compete. And so the question then becomes, what's in the best interest of China, of course, is that they have a vibrant industry. They also publicly say, and rightfully, I believe they believe this, is that they want China to be an open market. They want to attract foreign investment.

They want companies to come to China and compete in the marketplace. Right. And I hope, and I believe, that we'll return to that. In our context, answering your question about what I see in the future: I do hope, because their leaders say it, and I take it at face value, and I believe it because I think it makes sense for China, that what's in the best interest of China is for foreign companies to invest in China, compete in China, and for them to also have vibrant competition themselves.

Yeah. And they would also like to come out of China and participate around the world. That, I think, is a fairly sensible outcome. And what we need to do as a country is to enable our technology industry. I'm privileged to be working in an industry that is our national treasure.

We have to acknowledge it is our national treasure. It is our best industry. It is our single best industry. Yeah. Why would we not allow this industry to go compete for its survival? To go and proliferate the technology around the world, so that the world is built on top of American technology, so that we can maximize our economic success, maximize our geopolitical influence, and allow this industry to thrive at such a vibrant, such a pivotal time.

The skeptic says, Jensen just wants to sell more chips, and if he can sell them to China, great, he'll sell them to China. He doesn't care about, you know, what that means for America. That's a skeptic. Can I just address the skeptics? Yeah. Just because I want America's ecosystem and economy to grow, it doesn't make me wrong.

Right, right. Okay. So first of all, everything that has been said so far, that's been made up so far about China has proven to be wrong. The facts are just wrong. Right. The ground truth is wrong. And so, just because we want America to win, just because we want this industry to grow, doesn't make me wrong.

Correct. And I think anybody who knows you, and now the president, certainly myself, knows you deeply care about the country, you deeply want the United States of America to win the global AI race. You just happen to believe, and I think you have as much experience here as anyone, or more, that it inures to our advantage: the probability of us winning the global AI race actually goes up if you are competing in China.

That's right. Because it allows us to tap into half of the world's AI engineers, keeping them, you know, in this ecosystem. And let's be clear, with the companies we're talking about here, ByteDance, Alibaba, et cetera, these are companies that are largely owned by American investors. Yeah, right. Right? Like, these are global companies that are building recommender engines.

That's right. And by the way, extraordinary technologies. Incredible companies. And so I think, and I'm hopeful, that the argument that you're making vis-a-vis China, which is a harder argument than diffusion to the rest of the world, I understand that. And that's why I thought when the president said, you know, I don't know.

It's a flip of a coin. Maybe Jensen's right. Maybe the other guys are right. But if Jensen's willing to pay 15% into the U.S. Treasury as a hedge on that, then I'll go for it. But I was really disappointed on the heels of that. Mm-hmm.

I think if the Chinese feel like they're being taken advantage of, that we're going to send them chips that are, you know, 10 years old or something, then I get why they had that response. The H20 is really quite spectacular still. Yeah. Of course, it's not as good as Blackwell.

Right. And I get that. Yeah. Look, you know, I'm patient. And I believe that they're wise. They're thinking through their situation. They have larger agendas to deal with, you know, vis-a-vis the United States. There are a lot of discussions going on. But I'll come back to the ground truth, fundamental truth.

I believe that it is in the best interest of China that NVIDIA is able to serve that market and compete in that market. I fundamentally believe it's in the best interest of China. It is, of course, also in the fantastic interest of the United States. Yeah. But those two truths can coexist.

It is possible for both to be true, and I believe it is both true. And so, you know, even though I tell all of our investors that our guidance includes no China. Yeah. And I'd appreciate all of our investors assuming no China in any of our guidance.

We've got plenty of growth opportunities outside and, you know, we've got all of that is true. It doesn't make China not important to us. It's very important to us. Anybody who thinks that the Chinese market is not important has their head deep in the sand. Yeah. And so this is one of the most important markets in the world, smart markets, as you know, smart people doing smart things, and we want to be there.

Yeah. And I think it's in the best interest of both countries that we are there. And so I think when I take a step back, I am confident that ultimately the wisdom will prevail. Yes. Yes. I've always been confident that wisdom prevails. I've always been confident that truth prevails.

And it's taken me this far. And I believe that to be fundamentally true now. And so these things will get sorted out. And we will have the opportunity to go compete in that China market. I'm not very political, but very topical is the administration's decision to charge $100,000 per H-1B visa.

You've spent a lot of time with the president. You've called him our secret weapon in AI. I also know you want to recruit the best and brightest to our country. Yeah. So how do you think about the decision to charge $100,000 per H-1B visa? Does this make it easier or harder to recruit talent?

And, you know, does perhaps it's a little different for large companies or small companies? Like, how do you think about it? So I'm going to start with it's a great start. Hold on. You said it's a great start. It's a great start. I'm just going to start there. Wow.

And the reason for that is this: it implies, I hope, that it's not the end. I think it's a great start; I just hope it's not the end. Here's what I fundamentally believe. America has earned a singular brand reputation that no other country has. And no country in the world is in the position, or on the horizon, to be able to say: come to America and realize the American dream.

What country has the word dream behind it? Yes. It's part of its brand. We are utterly singular. And you're talking to somebody who represents the American dream. My parents didn't have any money when they sent us over here. We started from nothing. You guys know I, you know, bused tables, washed dishes, cleaned toilets.

And here I am. This is the American dream. President Trump knows that. We want legal immigrants. There's a difference between legal immigrants and illegal immigrants. But the idea that it's a country that's free for all doesn't make sense. And so now the question is, how do we go from the idea that we want to protect fundamentally the American dream to dealing with illegal immigrants at such a large scale?

How do we find a logical, pragmatic solution? Right. So the idea that we put a $100,000 price tag on H-1B probably sets the bar a little too high. But as a first bar, it at least eliminates illegal immigration. And that's a good start. How does it eliminate illegal immigration?

Well, it at least eliminates abuse of H-1B. Yeah. At least. And that's a good start. And at least we can have a conversation. So one of the things that we know about President Trump, he's a good listener. He actually listens. I mean, he listens to you. He listens to me.

And he doesn't have to. And he listens to a lot of people. And he's integrating a lot of information. And this is obviously a very complicated issue. And so I think that this is a fine start. It's a fine start. I don't believe anyone in the administration, anyone in the White House, is confused that legal immigration is the foundation of the American dream.

And it's the ultimate brand that we want to protect. And that's the future we want to protect. And I would also say, it seems to me that certainly Sachs and other people in the administration know that we have to recruit the world's best and brightest. We should not sacrifice the greatness of the brand.

Charging $100,000, or let's say it got lowered to 50 or whatever the case is, does seem like it tilts the playing field in favor of big companies who can effectively sponsor all these people. Yeah, right. And it's more challenging for the startup ecosystem, where people are already super expensive.

No doubt. And now I've got to pay this fee on top of it. It also has an unintended consequence. It might accelerate investment outside the United States. Right. And so there are unintended consequences. But like I said, start somewhere. Move towards the right answer. Right. You know, oftentimes people want to go directly from a wrong condition, the condition we don't want to be in, to the perfect answer.

Right. But the perfect answer is hard to find. Right. Just start somewhere. It's the entrepreneurial way. It's important to me. You know, the president talked about, back when he was running for office, wanting to staple a green card to the diplomas of these STEM students.

So smart. People coming to the United States from China, AI researchers studying at Stanford, like, we want to keep them here. And by the way, if their families can't get here, they're going to leave after a few years. So you might even want to make it easier for their families and others to come here.

Are you confident that we have a strategic plan in this administration? You know, this is a start, but your conversations, they give you confidence that we have a broader strategic plan to make sure we're recruiting the best and the brightest. I don't know that I have an answer for that.

Okay. But I understand that where we're at is not where we want to be. Yeah. And I don't think anybody's lost their focus on, you know, the American dream, the importance of immigration, the importance of attracting all of the world's best talent to the United States, and creating the conditions for them to stay here.

There are things that are done from time to time that work against what I just described. Right. Making foreign students uncomfortable being here in the United States. Right. Threatens the brand. Threatens the brand. Yeah. Let's not forget that it's okay to be competitive with China, but be careful not to be tough on the Chinese people.

And so we need to make sure that that slippery slope isn't crossed. Yeah. You know, and so there are all of these things that go along with finesse and nuance. Yes. But the fact of the matter is, we know where we want to be. We know we're in a difficult situation.

We don't want to be here. And President Trump doesn't have much time to move us in that direction. Right. To the extent that we move in that direction, I believe it's a good start. Agreed. Yeah. I heard from a Chinese researcher leading one of our leading labs in the US. Mm-hmm.

That three years ago, 90% of the top AI researchers graduating from universities in China wanted to come to the United States, and did come to the United States to work in our leading labs. Mm-hmm. And he guessed that today that's closer to 10 or 15%. Right. So we're seeing a precipitous drop.

That's precisely the concern that we have. Right. So have you seen this? You know, you're paying attention to both markets. Do you see this? And what are the things we need to do in order to reverse that? We definitely see greater concern among Chinese students about coming here and remaining here.

Yeah. Or many of them who come here for school and are thinking about going elsewhere. Right. Many of them thinking about Europe. Right. And so I think we need to be super, super concerned about this. This is a source of existential crisis. Mm-hmm. This is definitely the early indicators of a future problem.

Right. Right. You know, smart people's desire to come to America and smart students' desire to stay. Those are what I would call KPIs. Yes. Early indicators of future success. Yes. I think of it a bit like the Warriors. You know, if they have an advantage of recruiting all the best players in the NBA, right, then they can continue to win championships.

Yeah. But the second that recruiting pipeline gets diminished, because the brand of the Warriors is diminished or something else happens, then they're not going to be able to recruit the best future players, and you're not going to win championships. That's it. And when you talk about the American dream so eloquently, that being brand USA.

Mm-hmm. Right? The right to come here and to do what you've done. And, you know, so I hope that's the feedback to this administration. It's not just the administration; it's also just how we as a country talk about immigration. That's right. Right? This needs to be the place that welcomes the best and the brightest, that has a strategic plan for recruiting the best and the brightest, and that makes sure this is the place they want to work.

As you know, there's a phrase, and I didn't hear about this phrase until just a few years ago: China hawks. Yes. And apparently, if you're a China hawk, you get to wear that label with pride. It's almost like a badge of honor. Right. It's a badge of shame.

There's no question it's a badge of shame. There's no question that although they want what's in the best interest of our country, and we all want what's in the best interest of our country, destroying that pipeline. Right. Of the American dream. Yeah. It's not patriotic. Right. They think they're doing the right thing for our country, but it's not patriotic.

Not even a little bit. Right. And so we need to continue to be the great country we are, to have the confidence of a great country. Yes. Well said. And with the confidence of a great country, when somebody wants to compete with us, to have the attitude: bring it on.

Right. Right. Because I believe in our people. I believe in our people. I believe in the people that are here. I believe in our culture. I believe in our country. I believe in our system. Bring it on. And is it your take that that's where the president is? He's a pragmatist.

He's a believer in the growth and the ability of the United States to compete. It seems to me that's where he is. There's no question President Trump is the bring-it-on president. Right. Right. And that's the reason I'm confident. I've said on this pod that I think he'll get a big deal done with China.

I really, really do hope so. Yeah. And I think he speaks positively, with great respect and great eloquence, about his relationship with China and the importance of China. Not one time have I ever heard him say the word decouple, which we heard a lot in the last administration. Right. You can't decouple the single most important relationship, between the two most important countries, of the next century.

That doesn't make any sense at all. Yeah. Decoupling is exactly the wrong concept. Right. I mean, it seems to me he and Scott Bessent are saying, listen, we need to make America great. We need to re-industrialize America. We need to balance and make sure that we have fair trade, that we protect industries that we need to help build, that China helps us do that, recognizing that we have helped them do it over the course of the last 25 years.

But that ultimately he said, the best way to understand me is I'm a great deal maker. I make deals. Right. Whereas I think in other camps, there are people who are iconoclastic or dogmatic. You know, it's the Mearsheimer view of China, that there's a great power struggle. One must win and one must lose versus this idea.

The idea that every country has to look exactly like ours. Right. Right. We want diversity. Right. You want America to win, but that doesn't have to come at the expense of poking somebody in the eye and telling them they have to lose. Because we're that confident. Yeah. We're that confident.

Because we're that mighty. Because we're that incredible. I've got no trouble. As you know, I've got no trouble working with all my colleagues in the ecosystem. Right. And notice, we just did the ultimate deal. Right? Right. Partnering with Intel, a company that spent most of its life trying to put us out of business.

Right. And I had no trouble partnering with them. Right. You know, and the reason for that is because, number one, bring it on. Yes. And number two, the future is so much greater. Mm-hmm. It doesn't have to be all us or them. It could be us and them. Yeah.

Yeah. But nonetheless, bring it on. Yeah. Agreed. You know, you mentioned something that's profoundly important to both of us. You and I have talked a lot about this, the American dream. Yeah. You know, and it was, I think, Abraham Lincoln who said, "Fundamental to the American dream is the right to rise." Yeah.

The belief that your kids can do better than you did. That's right. Right? And you've experienced the right to rise. Right. We've all experienced the right to rise in America. But now— Yeah, you go to Wikipedia, you look up "American dream," and there's my picture. Right. Yeah. The ultimate American dream.

And yet, we live at this time where because of the nature of these technological systems, we have companies that are going to be worth $10 trillion. We'll probably have individuals that are worth a trillion. Those are the incentives that give people the encouragement to rise. But at the same time, when we head into this age of abundance, something that I was deeply worried about was that too many people get left behind.

Yeah. Right? And they feel left out and left behind, so it makes sense for them to attack this system of capitalism. Something that you and I worked on together and I'm deeply grateful for was the idea of Invest America. That we have to start every kid at birth on the capitalist right to rise journey.

Yeah. Give them a thousand bucks in great companies like NVIDIA. The ultimate Social Security. And OpenAI, etc. Yeah. And they benefit. Mm-hmm. Right? As the country wins, they win. And they own it individually. They can see it on their phone. Every kid is a shareholder in the future of America.

Of America. So, because of your support, and I wanted to take the chance on this podcast, and the support of… Well, I want to thank you for starting it, for driving it. Yeah. Yeah. Thank you. What a great idea. And, you know, so this… You're a genius.

Please. This passed in the Big Beautiful Bill. Yeah. Most people don't even realize that yet. Yeah. Starting in 2026, every child born forevermore in the history of this country will start off with an investment account at birth. Yeah. Seeded with a thousand bucks in the best American companies. And your company has agreed to add to the accounts of not only the kids of your employees, but maybe even the kids of others.

I'm going to adopt schools, you know, and lots of philanthropists and companies. We think every company across America… Yeah. A wonderful way for companies to give back. Right. Yeah. As part of the 401(k). This seems to me to be part of the change in the social contract that needs to occur, because if we're seeing this exponential progress, we know that the evolution of government and the social contract needs to keep up with it.
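The power of the seed-at-birth idea is long-horizon compounding. A minimal sketch of the arithmetic, where the 7% average annual return and the 65-year horizon are illustrative assumptions, not figures from the conversation:

```python
def future_value(seed: float, annual_return: float, years: int) -> float:
    """Compound a one-time seed investment at a fixed annual return."""
    return seed * (1 + annual_return) ** years

# $1,000 seeded at birth, at a hypothetical 7% average annual return
for years in (18, 40, 65):
    print(f"after {years} years: ${future_value(1000, 0.07, years):,.0f}")
```

Under those assumptions the account is worth a few thousand dollars at 18 and on the order of $80,000 at retirement age, which is the sense in which a small seed becomes meaningful ownership over a lifetime.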

Obviously, President Trump and bipartisan group in the House and Senate passed this into law. So, maybe just talk to us a little bit. When you think about the pace and magnitude of changes that are coming, right? So, I know you believe it would be a net good, but there are also going to be a bunch of people displaced along the way.

We probably need things like this and other things, right, in order to, you know, bring everybody along for the journey. There are several things that President Trump has done—and let me just start there—that are incredibly good for bringing everybody along. The first thing is re-industrializing America. Yeah.

President Trump, Secretary Lutnick—you know they're all in behind that, all the work that they're doing—encouraging companies to come build here in the United States, investing in factories and re-skilling and up-skilling that skilled labor workforce—incredibly valuable to our country. The idea that the only way you can build a great life and deserve a great living is to get a PhD or to go to, you know, one of the great schools—we've got to change all that.

That doesn't make any sense. We love craft. I love people who make things with their hands. And we're now going to go back and build things, build magnificent, incredible things. I love that. That's going to transform America. There's no question about that. There's a whole band of an economy, a whole band of society that has been largely left behind because we outsourced everything.

Right. Now, I'm not suggesting we insource everything. Right. You know, all the people arguing about, you know, manufacturing tennis shoes and toothpicks—I mean, that's dragging a perfectly good discussion down to some insane level. You know, we've got to recognize that re-industrializing America is just fundamentally going to be transformative.

Number one. Number two. And aspirational. Oh, it's fantastic. Elon taking us to Mars, watching spaceships caught with, you know, chopsticks out in the sky. This is not only great for the industrial base of America, it's aspirational for America. Yeah, it's fantastic. That's right. And then, of course, AI. Yeah.

It is the greatest equalizer. Just think, everybody can have an AI now. The ultimate equalizer. We've closed the technology divide. Remember, the last time around, if somebody wanted to use a computer for their economic or career benefit, they had to learn C++ or C, or at least Python.

Right. Now they just have to learn human. You know? And so, and if you don't know how to program an AI, you tell the AI, "Hi, I don't know how to program an AI. How do I program an AI?" And the AI explains it to you. Or does it for you.

It does it for you. It does it for you. And so, it's incredible. Right. Isn't that right? And we've now closed the technology divide with technology. Yeah. This is something that everybody's got to engage with. Yeah. You know, OpenAI has 800 million active users. Gosh, it really needs to be 6 billion.

Yeah. Right? It really needs to be 8 billion soon. And so, I think that's number one and number two. And then number three, you know, I think the AI will change tasks. Yeah. The thing that people confuse is there are many tasks that will be eliminated. There are many tasks that will actually be created.

Right. But it is very likely that for many people, their jobs are gainfully protected. Right. And so, for example, I'm using AI all the time. You're using AI all the time. My analysts are using AI all the time. My engineers, every one of them use AI continuously. Yeah. And we're hiring more engineers.

We're hiring more people. We're hiring across the board. The reason for that is because we have more ideas. Yes. We can now go pursue more ideas. The reason for that is because our company became more productive. And because we became more productive, we became more rich. We became more rich.

We can hire more people to go after those ideas. Right. The concept that AI comes along, and therefore there's going to be a mass destruction of jobs, starts with the premise that we have no more ideas. Right. It starts with the premise we have nothing left to do. Right.

Everything we're doing in our lives today, this is the end. Yeah. And if somebody else were to do that one task for me, I have one task less. Now I have to sit there and wait for something. Yes. You know, wait for retirement, sit on my rocking chair. That idea doesn't make sense to me.

Right. And so I think that intelligence is not a zero-sum game. The more intelligent people I'm surrounded by, the more geniuses I'm surrounded by. Surprisingly, the more ideas I have, the more problems I imagine that we can go solve, the more work we create, the more jobs we create.

And so, I don't know what the world looks like in a million years, what's going to be left for my children. But for the next several decades, my sense is that the economy is going to grow. Lots of new jobs are going to be created. Every job will be changed.

Some jobs will be lost. And we're not going to be, you know, riding horses on streets. And those things, it'll be fine. You know, humans are famously skeptical and terrible at understanding compounding systems. And they're even worse at understanding exponential systems that accelerate with size. We've talked about exponentials a lot today.

You know, the great futurist Ray Kurzweil said, in the 21st century, we're not going to have a hundred years of progress. We're likely to have 20,000 years of progress. Right. Right. You said earlier, we're so fortunate to be living at this moment and contributing to this moment. I'm not going to ask you to look out 10 or 20 or 30 years because I think it's so challenging.

But when we think about 2030, things like robots. 30 years is easier than 2030. Oh, really? Yeah, yeah. Okay. So I'll grant you license to go out 30. Actually, as you think this out, I like these shorter timeframes because they have to marry bits and atoms. That's more important.

Bits and atoms, the hard part of building this stuff, right? Yeah, yeah. Because everybody's saying it's going to happen. Science fiction is interesting, but not helpful. Exactly. But if we have 20,000 years of progress, reflect on that statement by Ray. Reflect on exponentials and how all of our listeners, whether you work in government, whether you're in a startup, whether you're running a big company, need to be thinking about the accelerating rate of change, the accelerating rate of growth, and how you will be co-intelligent in this new world.
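Kurzweil's "20,000 years of progress" figure falls out of simple arithmetic if you grant his premise that the rate of progress itself doubles roughly every decade. A back-of-the-envelope sketch of that reasoning, where the once-per-decade doubling is the assumption doing all the work:

```python
# Stylized reconstruction of Kurzweil's "law of accelerating returns" arithmetic:
# each decade of the 21st century delivers 10 calendar years of work at a rate
# that has doubled relative to the previous decade (measured in year-2000 rates).
decades = 10
total_equivalent_years = sum(10 * 2 ** k for k in range(1, decades + 1))
print(total_equivalent_years)  # 20460 -- on the order of Kurzweil's 20,000 years
```

The exact total depends entirely on the assumed doubling period; the point of the exercise is only that a steadily accelerating rate compounds a century into tens of millennia of equivalent progress.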

Well, there are a lot of things that many people have already said, and they're all very sensible. I think in the next five years, one of the things that is really cool that's going to evolve is the fusion of artificial intelligence and mechatronics, robotics. And so we're going to have AIs that are going to be wandering around us.

And that everybody knows. We all know that we're going to all grow up with our own R2D2. And that R2D2 will remember everything about us and coach us along the way and be our companion. We already know that. And so the idea that every human will have their own GPUs associated with them in the cloud, and that there are 8 billion people, 8 billion GPUs, that's a viable outcome.

Yeah. And each having their own model that's fine-tuned for them? Fine-tuned for them. And that AIs in the cloud is also embodied in a whole bunch of – it's embodied in your car, it's embodied in your own robot. It's everywhere with you. And so I think that future is a very sensible thing.

The idea that we're going to understand the infinite complexity of biology, understand the system of biology and how to predict it, and have digital twins of everybody. Our own digital twin for healthcare, like we have a digital twin for shopping at Amazon. Why wouldn't we have our digital twin for healthcare?

Of course we would. And so a system that predicts how we're going to age, what diseases we're likely going to have, and anything that's about to happen, maybe even next week or tomorrow afternoon, and predicts it early. Of course we would have all that. And so I think all of that is a given.

I think the part that I'm asked about a lot by the CEOs I work with is: now, given all of that, what happens? What do you do? And this is the common sense of things that move fast. Right. If you have a train that's about to get faster and faster and go exponential, the only thing that you really need to do is get on it.

Yeah. And once you get on it, you'll figure everything else out along the way. Right. And so to predict where that train's going to be and try to shoot a bullet at it, or to predict where that train's going to be, when it's going exponentially faster every second, and go figure out what intersection to wait for it at.

Right. That's impossible. Just get on it while it's going kind of slowly. Right. And go exponential along the way. A lot of people think this just happened overnight. You know, you've been at this for 35 years. I remember hearing Larry Page say, probably around 2005 or 2006, that the end state of Google will be when the machine can predict the question before you even ask it, and give you the answer without your having to look.

Right. I heard Bill Gates say in 2016— Because contextually, you must be asking about—you must be wondering about that. Right. I heard Bill Gates say in 2016, when somebody said, "Haven't all the things been done? We've had the internet, we've had cloud, we've had mobile, social, etc." He said, "We haven't even started." I said, "What do you think?

Why would you say that?" He said, "We won't even begin until machines go from being dumb calculators to beginning to think for themselves, to think with us." Right. That is kind of the moment, right, that we're in. I think to have leaders like you, leaders like Sam and Elon, Satya, etc.

It's such an extraordinary advantage for this country. Right. And to have the cooperation that we see, between a system of risk capital that, you know, I'm part of, which can provide the risk capital for people to do this. We're not relying on government having a Manhattan Project.

We can actually do this ourselves and together for the benefit of the country. It's an extraordinary time. And at a scale that's unimaginable. Right. Right. It's an extraordinary time. But I also think, you know, one of the things that I'm just grateful is that we have leaders who also understand their responsibility to the fact that we are creating change at an accelerating rate.

And we know while it will most likely be great for the vast majority, there'll be challenges along the way. And we'll deal with those as they come and raise the floor for everybody and make sure that this is a win, not just for some elite plutocrats at the top hanging out in Silicon Valley.

And don't scare them. Bring them along. It's a win. Don't scare them. Bring them along. And we will. Yeah. So thank you for that. Exactly. As a reminder to everybody, just our opinions, not investment advice.