
Market Predictions, Rates & Inflation, DOGE, CES, AI Compute | BG2 w/ Bill Gurley & Brad Gerstner


Chapters

0:00 Intro
2:58 Frontline Ideas for 2025
7:02 The Bogeyman (Interest Rates and Inflation)
9:21 Mega Cap Valuations & Expectations
14:02 Big Tech CapEx
16:51 Scaling Inference
20:11 Future of AI
39:50 Role of Power in AI Development
42:58 Interest Rates & Economic Growth
45:11 Federal Spending & DOGE
54:15 Regulatory Landscape for AI
61:27 Co-opetition in AI
77:06 Reasoning Models & Chain of Thought

Transcript

Elon said he thinks that every cognitive task that can be done by a human will be able to be done by an AI within three to four years. If you believe that to be true, the value of all that human labor that you're replacing is measured in trillions. (upbeat music) Good to see you, Brad, happy new year.

Happy new year, I have on a, you probably can't tell, Bill, I have on a blue shirt today, not a black shirt. I have on my green pants, a little Notre Dame spirit for the big Notre Dame game tonight. I know you have a big Texas game coming up.

Yeah, that's tomorrow in Dallas, yeah. So we'll have to see if both those teams can win and advance, that'd be pretty spectacular. You and I were talking about some of this insanity going on in LA, and I know we both have a friend who's lost a house and there's a lot of this debate going on online, whether or not any of the policies that we've had are proximate causes, at least of the severity of the thing that's going on.

And one of the things that I know you and I both really appreciate is as we're entering 2025, just the opportunity, I feel like the conversation going on online, the debate going on online is more robust than it's been. And I think it rubs a lot of people the wrong way, but I tend to be in the camp that the more open debate and accountability, it's just the better, right?

And no matter what side of the political aisle you end up being on, we should all be in favor of just doing better. And when you look at how horrific these scenes are in LA, you have to ask the question, what could we have done better? Yeah, and I totally agree with you.

I have this framework in my head that I think about in relation to these types of situations, which is a lot of people, I think, evaluate politicians and policies based on what they think the intent of the decision was, and they fail to follow up and then see if the output or the outcome is identical to what they thought the original intent was.

And I think all too often the original intent of something may have sounded good, or you might be voting for someone 'cause you agree with their point of view, but if the policy fails to achieve that, or in many, many cases, achieves the exact opposite of that, then you really have to ask yourself, what's the point?

And so, yeah, I hope that this type of accountability and transparency, shining a light in, could lead to better outcomes in the future. And there's always trade-offs, obviously, like you can't have everything. For sure, for sure. Well, I thought we could do something unique and different, if you're up for it, related to this time of year.

So, obviously, large investment funds like yours are typically operating on a calendar cycle. Sometimes the reporting is certainly looked at annually, and sometimes even some of the fees and whatnot are calculated on an annual basis. And so I'm sure that creates an annual cycle for you. And as you've been going through that process, I thought it'd be really cool to expose the listeners both to the analysis that you're doing now and to a window into that process.

And so, how does someone, a large professional investor like yourself, think about this time of year, and specifically, what are you looking at now, and give everyone a peek inside? Yeah, no doubt about it. And we've been, I think, having this conversation for 20 years. But half of our business, the venture capital part of our business has a much longer cycle time.

What informs how we think about the annual cadence of our public market positioning, which is the other half of our business, is a lot of times these big trends that we're having debates on. We ended last year in this great conversation with Dylan about this tension or debate in the world about how much compute the world needs, all the investment going on, et cetera, that we'll get into today.

But it's all those things, including a major political change going on in this country, that all impact how we think about this year. And so, at the end of every year, I not only look at kind of errors and omissions, like what could we have done better in our public trading and our public investing for 2024, and we had a great year and I'm proud of the team, or whatever, but there's a lot we always can do better.

And then I try to look ahead, not at January, but I try to put myself in the shoes of where I think the world will be in December of the following year, right? So you really have to get in the habit of being an analyst, a forecaster, a prognosticator, thinking about the big trends, but also thinking about all of the competing things going on, interest rates, inflation, et cetera, the backdrop.

And so we go through that exercise. I journal to myself and I make everybody on the team do this independently, Bill. And then we get together, and this started at the beginning of December, but certainly informs how we're positioned at the start of the year. What's top of mind, what's top of mind?

What are the big things you're thinking about, the big blocks? There are a lot of exciting things that can cause you to believe that this is the golden age, right? That 2025 could just be a really phenomenal year. Lower taxes, lower regulations, GDP growth is strong, employment looks good.

And at the same time, we have these mega trends that have been percolating for a decade that really seem to be bearing fruit, right? Obviously AI being the biggest mega trend, but ancillary things like robotics and self-driving cars and all these things that are yielding productivity improvements to the economy, which as we know is a huge driver of GDP.

So there's a lot to be positive about. On the other side, you gotta look around the world and you say, well, we got a situation in the Middle East, could that get better or worse? The situation with China, better or worse? Situation in Ukraine, better or worse? Those things can break both ways, but I could make an argument how all those things get better.

And so I think on the one hand, there's a lot of enthusiasm. At the same time, valuations, Bill, are quite high relative to where we started '23 and relative to where we started '24. So the world assumes that things are going to be better. And as we look at it, I kept asking the team, what is the boogeyman?

Like, what's the thing that we're not thinking about that could go wrong here? And if you look at the start of this year, I would say that the boogeyman that's out there, you and I have a lot of smart friends and some of them have been shorting the U.S.

10-year, that is, they expect rates to go up. And I really think that it's this inflation and higher rates combined with higher valuations that could put a damper on at least the market in the first part of this year. And again, this is just like one of the potential outcomes, but we've seen the 10-year go up almost 100 basis points and people didn't expect that.

They thought the Fed was gonna be cutting rates and the 10-year would come down and that would lead to more housing sales, more sales of a lot of things, be a tailwind for the economy. The exact opposite has happened. And so I think we have to explore why is that happening?

Where do we expect that to go? So that would, I guess, be the overall framework. - Let's dive in on the positive side and specifically the enthusiasm around AI. I would share with you, like just as an observer rather than as a participant, when I listen to you talk, when I just listened to the short interview that Elon did at CES, I'm hearing a level of enthusiasm that feels somewhat unprecedented to me, like over the past three decades in terms of just how much excitement there is about what's possible and how much investment's going into it.

Why wouldn't that just be, well, first of all, do you agree with that? And then second, how do you interpret it from an investor point of view? - Well, certainly there's enthusiasm, but I would say there's plenty, a wall of worry as well. For every conversation I have with people in Silicon Valley who say they're gonna invest more in compute, this is the year of agentic AI that we may see AGI in the next 18 months.

I have another call maybe with a big fund in New York who says, listen, this is gonna lead to the telecom bust of 2000. This is Cisco all over again. There's a bubble, et cetera. And so there are the words that everybody says, Bill, and the byproduct of those words is the valuations for companies.

So maybe the place to start is just like looking at where the valuations are for big tech as we start the year. And when I look at that, the S&P 500 recently peaked at about 22 times. So in Q4 '21, it was about 22 times earnings. It troughed in '22 at 16 times, and now we're at 23 times.

So the S&P 500 as a whole is actually, like it's near its kind of recent peak valuation. If you look at companies like Meta, they're trading at 23 times. If you look at Google, trading about 21 times. If you look at NVIDIA, it peaked at 66 times. It's now at 36 times consensus estimates.

And I would tell you closer to 28 or 30 times our estimates. And so I look at the valuations for these businesses, Bill. But growing at a rate that's unprecedented for a company of this size. Correct. Unprecedented. Correct. And so like, and we'll put all these charts in, but at the highest level, I look at that and I say, is that a bubble, right?

Does this feel like a bubble? I say, no, it's not as good a deal as you got at the start of '23. And it's not as good a deal as you got at the start of '24. But if we achieve the growth expectations we expect out of these businesses, then I don't think it's overly onerous.

But the question really is, and you brought up, like, what does the market expect in terms of earnings growth? Like first, what was earnings growth last year? And then what does the market expect for earnings growth this year? So here's a chart that shows the mega cap earnings growth broken out from the rest of the S&P 500.

And in 2024, the big five, so in this case, Microsoft, Meta, Nvidia, Amazon, and Google grew earnings at 44% in 2024. Stocks were up a lot, but Bill, earnings were up tremendously, 44% is extraordinary. If you look at the S&P 500 as a whole, because of the tailwind of the mag five, the S&P 500 as a whole was up 9%.

If you look at the other 495 companies in the S&P, their earnings only grew 2%, right? So it was pretty anemic earnings growth in the S&P 500 in 2024 relative to the mag five. If you look at it now, what's expected, what the consensus forecast is for 2025, we see the mag five's earnings growth comes from 44% down to 21%.
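To make that blend concrete, here is a rough back-of-the-envelope using the 2024 figures just quoted; the implied earnings weight of the Mag Five is derived here purely for illustration and is not a number from the slide:

```python
# Back-of-the-envelope on the 2024 earnings blend described above.
# The 44%, 2%, and 9% growth figures are the ones quoted on the pod; the Mag
# Five's implied share of index earnings is solved for as an illustration.

mag5_growth = 0.44     # Microsoft, Meta, Nvidia, Amazon, Google
rest_growth = 0.02     # the other 495 companies
index_growth = 0.09    # S&P 500 in aggregate

# Solve w * 44% + (1 - w) * 2% = 9% for w, the Mag Five's earnings weight.
w = (index_growth - rest_growth) / (mag5_growth - rest_growth)
mag5_contribution = w * mag5_growth

print(f"Implied Mag Five share of S&P 500 earnings: ~{w:.0%}")                    # ~17%
print(f"Index growth contributed by the Mag Five: ~{mag5_contribution:.1%}")       # ~7.3 of the 9 points
```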

How much of that has already started, Brad? Do you know what Q4 was? That deceleration, I'm just curious. There's certainly a deceleration in these businesses. Remember, they're coming off of easy comps, and by the time you get to 2025, it's very hard comps, and still growing 20% on the massive, massive earnings and revenue bases of these companies.

Pretty extraordinary. Five years ago, nobody thought they would be growing that fast. But the interesting thing here that kind of worries me a little bit about the market, Bill, is that we expect non-tech earnings to grow from 2% to 11%. So a big acceleration in non-tech earnings. And so I asked myself, well, where does everybody think this is gonna come from?

And the next slide is the answer to that. So the yellow shaded box shows you that people expect healthcare earnings to go from plus 4% to plus 20%. They expect industrials to go from negative 4% earnings growth to plus 16. Materials to go from negative 10 to plus 17.

Those are huge turnarounds, right? Huge accelerations in earnings. Now, a big component of that is associated with the tax cuts, but I think that only speaks for about 30% of that bump in earnings growth. So if you said to me again, Brad, where is there a potential boogeyman? Well, if those earnings aren't delivered, if you don't see this acceleration in non-tech earnings in the S&P 500, then it's hard for me to see how you can have a big year.

I don't think we're gonna see a year that looks anything like 2023 or 2024 anyway. But if you just said, can we get to 10 or 15% on the market, a couple of things have to happen. You have to see this acceleration in earnings, number one. And number two, you really have to see interest rates not go above 5%.

I think if interest rates go up a lot, that becomes a big albatross on market performance in 2025. Let's come back to the interest rate thing. I wanna stick with the large cap companies for a second. You and I were having a discussion about their CapEx trends. And this is something that's remarkably different than any window of past tech investing, right?

This level of CapEx wasn't a part of the equation other than maybe in a manufacturing company. So why don't you set this up? I know you put together a slide, like what's happening with CapEx in these large companies? CapEx was growing before the ChatGPT moment, but nothing like we've seen it grow over the last two years and we're forecasting it to grow for the next couple years.

So this first slide just shows the combined CapEx of Google, Meta, Amazon, Microsoft, Apple, and Oracle, both what they did in 2024, which you can see that huge step up in 2024. Yeah, from about 160 billion to 260-ish. Exactly, and Bill, right? It consumed the vast majority of the incremental free cashflow of those businesses, right?

So they clearly are all betting and believe that those are really NPV positive investments and we can dig into that. So you heard Satya say, in their most recent quarter, they did about 20 billion in CapEx, Bill. And, you know, he said a couple of days ago, which caused a bit of a kerfuffle, that they expect to spend 80 billion in CapEx this year.

Which is the run rate, basically. Yeah, it's just the run rate that he was on. But it is, when people hear the number, it's still pretty shocking. Yeah, on the CES interview, Elon goes, he said, yeah, that's a big number in anyone's universe. Coming from his point of view, he's like, yeah, that's big.

Right, and so if you go company by company, the one thing, you know, there's debate. A lot of people were debating, we talked about it with Dylan in December, will people actually spend this money, right? Like, will they actually buy more compute in 2025? And I think what we can put to bed, based upon the conversations we've had, what people are hearing out there, the conversations at CES, there are going to, these investments are gonna show up in 2025, right?

And the reason they're showing up, I believe, is that people are still seeing a lot of returns on training. We'll get into the scaling laws, these different scaling laws that they're building for. So whether that's pre-training, whether that's post-training, whether that's test time scaling, they're investing in making the models better.

But the other one, which is coming on really strong, remember Jensen on our pod with him: 40% of your revenue today is inference. But inference is about ready, because of chain of reasoning, right? It's about ready. It's about to go up by a billion times. Right, by a million X, by a billion X.

That's right. That's the part that most people haven't completely internalized. This is that industry we were talking about, but this is the industrial revolution. That's the production of intelligence. That's right. It's gonna go up a billion times. He expected inference to go up by 100,000 times, a million times, maybe even a billion times.

So inference is scaling very, very quickly, and that's all new compute that's gotta get built out to support that inference. Looking at this chart that you put together, you've got, it looks like Meta and Microsoft popping above 25% of revenue on CapEx. You've got Google and Amazon kind of in the middle in this 10 to 15% range.

And then Apple, ironically, is falling below 5%. Which makes sense, right? Because they're not, Apple is not investing in frontier models. And then the other thing is those same top two are bouncing up against 100% of free cash flow, right? Which is certainly a moment of pause, right? So how do you frame either of those?

How do you think about whether percent of revenue should matter? Is there a limit that's too high? And then we'll do free cash flow. I will tell you, this is a very, very robust debate. You know, I was out in Omaha in December, and I'll tell you, they're pretty skeptical about the amount of dollars.

Define who they are. Yeah, you know, they being the biggest investor in Omaha. Yeah, okay. I think that they're worried, like a lot of other investors who've kind of seen these moments, that this is like the telecom buildup, right? I will tell you, though, I have a lot of respect for Elon, for Satya, for Sundar, for the people who are making these investment decisions.

And the numbers are there. You know, as Satya told us on the pod, they have 10 billion in inference revenue, and I think he expects that to grow significantly. And that is the ROI, right, on the dollars that he's putting into the ground. But they are making a bet, Bill, right?

And you can make a bet for one of two reasons. You can make an offensive bet that this will drive future revenue growth or profit growth in my business, or it could be a defensive bet, right? The prisoner's dilemma. Like, I can't not do this because my competitors are doing it.

And I think that's where some of these concerns emanate from. But it's very clear to me that part of the reason the multiples on these businesses have not, you know, kind of gotten even higher is this wall of worry. They're spending a lot of money and they may not see the return.

So are there any rules of thumb? Like, does 25 or 30, like, does it matter to you as an investor what that percentage of revenue is? Of course, of course. I mean, like, listen, you and I both know when we're investing in a startup, we want, you know, here's the irony, right?

Over the last 10 years, people celebrated raising these bigger and bigger and bigger rounds. And you and I would look at each other quizzically and we'd say, you know, people lost the script. The goal is the least amount of money in for the most amount of money out. Not the most in for the most out.

And I would say today, the thing that's very clear to me is that over the last decade, we've had super high returns, super high returns on the incremental capital that got invested. And there's no doubt in the short run here that those returns are compressing. So you have to believe as an investor, you have to have an imagination to believe, right?

Just like when Google was investing gobs in early 2000s or when Amazon was investing gobs in '08, '09, 2010 in AWS, you have to believe that there is a pot of gold at the end of this rainbow. I happen to believe it, right? I see the revenue growth inside some of these businesses, the inference revenue growth inside these businesses.

And when you replace human labor, you know, Elon said on this recent interview, I think it was with Bill Miller at CES, it was a virtual interview. I mean, AI really within the next few years will be able to do any cognitive task. You're like, it obviously begs the question, what are we all going to do?

You know, I'd say max three or four years, maximum. And Elon said he thinks that every cognitive task that can be done by a human will be able to be done by an AI within three to four years. If you believe that to be true, the value of all that human labor that you're replacing is measured in trillions.

And I think these are things that, you know, most of these CEOs have determined, even though that makes them a little bit uncomfortable. You said it makes the CFOs talk in a little bit higher pitch, and all that's true, but I think they've all determined they can't not make this level of investment in something that has the very high potential to be this big.

When I look at the free cash flow, you know, certainly it causes you to step back, because the past decade has been unprecedented free cash flow from the Mag Seven, right? And the amount of cash that's been put on their balance sheets was so unprecedented.

And, you know, everyone would talk about it and ask, what are they going to do with all this money? And yeah, to move from that to a place where a hundred percent of incremental free cash flow is now spoken for is certainly a change, right? And my friend Michael Mauboussin would get mad at me for worrying about it, because he would say all that matters is not that they're using up their free cash flow, but what's the return on the incremental investment.

Of course, of course. That's super hard to understand. That's the great puzzle. And that, this is the thing where I think that in these early parts of a phase shift, in the first three to four years of a phase shift, right? It's really hard for the traditional Wall Street analysts to model this.

Remember, I give this quote, in 2023, you had 26 analysts on Wall Street covering NVIDIA. And the consensus forecast of all 26 of them missed by 80%, okay? I mean, it's just like, that's how wrong you can be if you're thinking linearly at a moment of a phase shift.

And so I think there, we're still in that moment. And that's why I think it's an advantage to be in Silicon Valley, because you're really spending all your time with the people who are actually doing these things. To get more exposed, yeah. Right, and seeing what they're actually building.

You know, once it's well-known, think about, by 2016, the cloud was well-known, and so everybody could model it reasonably well. But in 2012, 2013, right, when you're earlier on in that phase shift, there was a real opportunity early in those companies, Snowflake, Mongo, Okta, et cetera, to do things that people thought were not possible at that point in time.

Let's spend a second on this, what you call the prisoner's dilemma argument. And I guess you might include just the hyperscalers at this point. And there's, by the way, let me ask a quick aside. There has been talk that Meta has been hiring an enterprise group and is structuring new deals, you know, around higher-end versions of Llama or unlimited versions of Llama.

Do you have a perspective? Are they entering the enterprise hyperscaler business? You know, I have no evidence that they are. I, you know, I take Mark at his word from last year in one of his podcasts where he said they already charge license fees to the large hyperscalers to use Llama and to provide it to them in certain ways, et cetera.

I think it's de minimis revenue in the scheme of Meta, but, you know, he's smart enough in that- I've heard rumors in the billions, and people have found that there are searches out, you know, for executives to come in that have certain backgrounds. I don't know. No, no, no, that is in fact occurring.

But again, you could have revenues of a couple of billion dollars, but in the scheme of Meta, that's still not all that material. But they're smart enough to maintain the optionality, Bill. I mean, you saw NVIDIA release some models, which are packagings of Llama models to make them easier for some customers of NVIDIA to use.

So it would only make sense to me that, you know, that they're thinking about that. Let's include them for the sake of this question and argument. So let's say there's four hyperscalers and this is the prisoner's dilemma thing. Every one of them is kind of being forced to announce their CapEx.

And if any of them- Not being forced, they do it as a matter of their earnings call. So they're all gonna give us their CapEx. Sure. And so they, but my point is, it's become this point of focus. Everyone's paying attention to it. Oh, for sure. And so, you know, if Satya were to say 60 next year instead of 80, does he have a concern that that's a perception that they don't believe as much or that they're losing ground to other people that are taking share?

And therefore, you get into this kind of reflexive argument. Like if we want to be perceived as leaders in AI with confidence that we're gonna take our fair share, don't we have to be announcing that we're betting via CapEx? Yeah, I mean, it would be fairly conspiratorial to think that they're doing this just so that the optics are that they are our leaders.

What I will tell you is Satya has said that they have been compute constrained for all of last year and their inference revenues are going to accelerate in the first half of '25 because they will have less constraint. OpenAI has said publicly, Sam has said many times that the reason that they can't release models widely, they can't release some of the voice stuff that you wanted, the reason they couldn't widely release Sora is because they were compute constrained.

They didn't have the compute they needed to support these models. Part of the reason they charge higher prices for pro models is because they have to artificially reduce the demand because they don't have the compute. And so I think across the board, there's compute constraint. Now that occurs for two reasons, Bill.

Number one, you have over 300 million people a week who've decided that I want to use ChatGPT instead of search or something else to answer my question. So the usage is bigger than people expected. On the other hand, these new models, inference time compute is a compute hog, right?

These things require huge amounts of compute. And so those two things combined, we just don't have a compute infrastructure in 2024 that kept up with it. And so I think the investments we're seeing in 2025, frankly, I don't even think is going to get us to the point where we have compute surplus.

I think we're going to end this year compute constrained. And I think a lot of the frontier labs feel the same. By the way, one quick aside on that. You've seen Google release a high-end pay-for consumer, you know, by consumer, I mean just an individual user, 'cause it could be a consumer at a company as well, but one with a price point on it.

And when Elon was talking about their release of Grok 2 and potentially Grok 3, they said Grok 2 is always going to be free. I infer from that Grok 3 might be pay-for and this to me is a positive outcome for OpenAI 'cause I think when people frame the fight as a competition with search, they assume that the frontier is going to be free and not pay-for.

If multiple people fall in line behind them with subscription pricing, that could start to get sticky. Well, it's going to be very, very difficult for any frontier lab to invest at the level that's going to be required and not have a robust business model behind it. What I would tell you, you know, without saying anything other than what's been said publicly about OpenAI's revenues, right?

You know, they ended the year, I think it was rumored four to five billion run rate growing very robustly. You know, you would expect a company at this stage to be growing at least triple digits. And so you start thinking about 10 billion plus in revenue for this company.

Now, compare that to some of the other companies, Bill, right, Mistral, a lot of the other model companies have fully pivoted. Like they've already raised their hand, white flag, we don't have the revenues that can support the spend to keep up, right? It's going to be interesting to see what Anthropic does, right?

Their revenues are reported to be less than a billion dollars so can they afford- And mostly on the enterprise API side, not in this consumer- Correct. So can they afford to keep up? And I would tell you that as large as Google is and as large as X is, I don't think that you can go out and spend and not have a business model to support the spend.

So what you're hearing would make sense to me. One other thing I just want to say about this, Bill, one of the things we've spent a lot of time on internally is trying to really model out what we think of that compute demand relative to the compute supply. Like, are we really constrained?

And I want to keep coming back to this point that Jensen made and maybe we'll insert the clip here on the pod where he talks about his inference revenue is already 50% of his revenue. So already upwards of 50% of his revenue. And I ask him, is the mix going to go up?

And he said, well, Brad, of course, because I think inference is going to go up by a million X or a billion X. Now what drives that, Bill? I think you've heard a lot of people say that 2025 is going to be the year of agents, right? And so you have these O-series models and now you have deep reasoning out of Google that launch this whole different vector of scaling intelligence and reasoning.

But the thing about those, Bill, as you well know, is they are compute hogs, right? The number of tokens that you have to produce, that you have to repopulate back into the prompt, the number of branches of inference that you go down, it dwarfs single-shot kind of ChatGPT.

And now you add in personalization, long-term memory about each individual user, and actions, book my hotel, book my, do things for me. Again, those things are all compute consumptive. So that's why I think the frontier labs like ChatGPT are modeling out that expected compute demand and then looking at what they have and saying, we're not even close, right?
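To put rough numbers on why these workloads are such compute hogs, here is a toy sketch of how token counts compound when an agent feeds its own output back into the prompt across many steps and branches. Every number in it is an illustrative assumption, not a measurement from any lab:

```python
# Toy illustration of why chain-of-thought / agentic inference dwarfs a
# single-shot chat answer. All token counts are made-up round numbers;
# only the shape of the arithmetic matters.

def single_shot_tokens(prompt=500, answer=500):
    # One pass over the prompt plus one generated answer.
    return prompt + answer

def agent_tokens(prompt=500, steps=10, branches=4, tokens_per_step=2000):
    """Each step re-reads the growing context, generates more reasoning tokens,
    and appends them back into the prompt for the next step; the agent also
    explores several branches of inference."""
    total = 0
    for _ in range(branches):
        context = prompt
        for _ in range(steps):
            total += context + tokens_per_step  # read the context, then generate
            context += tokens_per_step          # output is fed back into the prompt
    return total

base, agent = single_shot_tokens(), agent_tokens()
print(f"single-shot: ~{base:,} tokens; agent: ~{agent:,} tokens; ~{agent // base}x more")
```

On these toy assumptions the agentic workload consumes hundreds of times the tokens of a single-shot answer, which is the mechanism being described here, well before adding personalization, long-term memory, or actions.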

We're not even close. I do want to come back to that, but I want to finish the CapEx thing real quick. Obviously, one way to look at, we're looking at a slide, Google, Meta, Amazon, Microsoft, Apple, and they're all spending this money. And you can say, what does it mean for those stocks?

But I'm sure as an investor, the easier thing to consider is these people are telling us, forecasting that they're going to spend this amount of money. And the processes for spending that amount of money are sticky and slow. They're not fast. Like you can't back out. Like you pre-commit and you go build.

Who are the recipients of this stuff? Like that's an easy win. It's a great question. So we started the conversation like, how do we think about the framework for the year? What I didn't say at the start is, we started 2023 with 95% net long. So think of that bill as all your chips on the table.

So that's our longs minus our shorts. We started last year, I think around 80% net long. Now that can go up and go down, but it's less chips on the table. And you might say, well, Brad, if you think it's the golden age, and if you think all this money's being spent, then why the hell do you have fewer chips on the table?

And I just told you, valuations are a lot higher. So let's start there. Warren Buffett and Charlie Munger have told us the most important thing in investing is the price of entry. So the price of entry to play is higher. The variant perception is lower. When I look at Q1, when it comes to the Mag 5 or Mag 6, what I say is there's gonna be a lot of FX headwind, okay?

So the strength of the dollar has gone up a lot. A lot of these companies have revenues denominated in non-US dollars. So their revenues will have headwinds from that FX exchange. I'm not sure that's totally appreciated. And then secondly, their CapEx is going up a lot. I think Dylan said on our podcast, and I agree with him, that the CapEx is higher than people think, okay?

So both of those things aren't particularly good for the biggest companies trading at high valuations. Now, I don't think it's gonna cause some cataclysmic event, but it's a headwind, okay? But who are the recipients? Who are the winners? Obviously, NVIDIA is the biggest one, but who else is a recipient here?

Yeah, well, first I wanna show a slide on NVIDIA, because I hear about it a lot and I get asked about it a lot. We've owned it since it was $120 a share. Today, split-adjusted, it's $1,500 a share, you know, or $1,450. It's gone up a lot. But people say, oh my God, I remember when it went up two X and Jim Cramer was telling everybody, sell it, it's up two X, and then three X, sell it, it's up.

Every time you have to recalibrate based upon the facts on the field, right? It went up a lot because its earnings went up a lot. It went up a lot because its revenues went up a lot. So here's a slide, Bill, that just shows the consensus data center revenues, right, for NVIDIA.

And the key here is to remember that NVIDIA's data center revenues are mostly, you know, these chips that are driving training and inference. And this is a business that's got very high margins on it. And in 2023, it was $61 billion, right? In 2025, it's estimated to be close to $200 billion.

And so if you notice the jump between '24 and '25, again, this is consensus, that is about a $60 billion increase, '24 to '25. Well, if you just add up the hyperscalers that we just went through, that's $60 billion, right? And where else are they spending the money? Yes, they're spending some on custom ASICs, but it's tiny on a relative basis.

Most of this stuff is NVIDIA wall-to-wall in 2025. You know, Jensen told you at CES, he's got 40 AI factories. So he doesn't have six people in the world building this stuff. He's got lots of people building this stuff. So I actually think if you look at the consensus numbers, they probably would imply that NVIDIA's market share is going down '24 to '25, and I don't think that to be true.

But so NVIDIA is one of-- Comparing expectations for NVIDIA with this pre-committed CapEx spend. Correct, correct. And your expectation of what percentage of that-- Correct, and remember, NVIDIA's stock has not gone up from June of last year until now. It's basically, you know, because this debate's been going on.

You know, we had a lot of debates about, have we hit a wall on pre-training scaling? Can the models continue to scale? We had the DeepSeek model come out. It was a smaller model, very capable. So people say, maybe you don't need all these chips. So like, there is real tension in the world about this name.

Now-- Well, I think there's also, I mean, another one that comes up is if we're moving to more spend on inference than pre-training, do you need this big, large, holistic cluster? Can you use a more distributed approach? Would you use products that aren't GPUs, you know, TPUs from Google, some of the startups as well?

Of course, they can't ramp to anywhere near this scale anytime soon, so-- Right, and Google, I think, has said, you know, I'm pretty sure it's said publicly that, you know, they're spending a lot on GPUs, not just TPUs. So, you know, but you asked a question, who are the recipients?

Well, NVIDIA's clearly, you know, high on that list. Now, remember, NVIDIA was up, I don't know, 140, 150% last year, but Intel was down over 20%, AMD, I think, was down on the year, and so it was not winner, you know, this was not a situation where everybody in the semiconductor complex won.

But another group, you know, I know I share this feeling as well with my friend Gavin Baker, you know, we're also investors in the memory space. So SK Hynix, which is providing high-bandwidth memory, this becomes very important, particularly in a world of inference time compute. And we think that we have a memory shortage, which is a key part of the NVIDIA supercomputer ecosystem, as far as we can see.

So Micron and SK Hynix would be in that memory space. So what, pause on SK, for example. So when you look at companies like NVIDIA, you know, or others that are doing well in this environment, they have multiples that, you know, would suggest everyone understands that and buys into it.

If I'm reading the numbers correctly, SK trades at like six times forward earnings, which is insanely low for most companies. What's going on here? Do people not believe the number? Or is this the commodity, you know, argument that happens with hard drives, where you buy it with the highest multiple and sell it with the lowest multiple?

Yeah, no, I think the reason it has that multiple is because people do think it's hard drives, it's commodity memory, that it's a boom and a bust, and that they don't add a lot of kind of sticky value that you can just easily put on a lot more supply.

That supply will ultimately drive down the cost and the margins, and that demand is cyclical. We believe those things are not true. We think this has become a way, way more sophisticated product, lots of software baked into it, that this is secular, not cyclical. And so there are two ways to win there.

Number one is just, you get the earnings growth of the business, but the second one is you could have a re-rating higher in terms of the multiple people are willing to pay, if people come to believe that, if SK gets treated more as a non-commodity company over time. You know, and then of course, you know, you and I talked a lot around the Diablo Canyon episode last year, just, you know, we have power issues in this country, right?

And so how are we going to power up, you know, this five, 10, 15 gigawatts of data center that we need to bring online? Because remember, if they're spending this money, Bill, then they're telling you that, like, you have to have power, you have to have a shell, and you have to fill it with chips.

And so, you know, one of the things- And I continue to have people tell me, like, I know this is hard to believe, but we are power limited. Like, we are power limited. We have CapEx that we would put online that we're not putting online 'cause we can't power it.

I will tell you this. I had a call this week with the COO of PG&E and with the head of Diablo Canyon. And I'm very focused in talking to our friend David Sacks and others in this administration. We have to give regulatory relief. Otherwise, we're going to have huge hurdles placed in front of our AI industry.

We know China's building 100X as much as we are in terms of power, and power is the single most important primitive to AI. So I'll just give you one example. Gavin Newsom still has not signed the extension for Diablo Canyon beyond 2029. It's insane. We have to make that happen this year.

It's 10% of California's green, clean power. We have to extend that so that they can begin doing the appropriate planning. Hell, you and I think they should probably be expanding it. But at a minimum, the idea that you would take 10% out of the grid at a time that we are already capacity constrained is totally insane.

Let me ask you this. So Altimeter's historically been tech-focused. Do you, when your information leads you this direction, do you start moving in and out of energy companies? Well, I'll tell you, a lot of my tech peers have, right? And if you look at some of the highest returning companies in what I would call the semiconductor-related complex last year, they were in fact companies in the energy space.

And we looked at all of those. Ultimately, I think one of the overarching Altimeter North Stars is essentialism, right? Like just don't make anything any more complex than it has to be. And so I would always ask my analysts when they would bring me a power company, I would say, why is that better than NVIDIA, right?

It's a derivative of NVIDIA, it's the exact same bet. So why not just buy more NVIDIA? So we tend to just take bigger bets on our best ideas rather than diversifying out into these other things that we know less about. Because the truth of the matter is, I do know that we need a lot more power, Bill, but I don't necessarily know exactly how the regulatory is going to play out, exactly how, whether these small nuclear reactors, all this other stuff.

This falls into the Buffett comment, there's a fool in every market, and if you don't know who it is, it might be you. (laughing) You know, one of the things we ought to come back to, though, Bill, is because you said, what could the boogeyman be? And I said, interest rates.

So maybe just spend a few minutes double-clicking on rates, what's going on? Why are rates going up? And how do I think that may or may not change? But if you look at this chart, the black line on here shows the 10-year, just over the course of the last couple of months, has been trending up and is now around 4.7%.

This, despite the fact that inflation has largely, you know, like, remember, just a couple years ago, we had a nine handle on headline inflation. And I remember when I said, a couple years from now, we'll be back to a two handle, people laughed. They said, no way. And in fact, that's where we were.

And if you look at the Morgan Stanley consensus forecast for this year, they expect it to finish the year at 2.2. Okay? But you have to ask the question, why then are people concerned, right? What's going on? Why are rates going up? And I think there are a few things at play here.

Number one, the Trump election got people really excited about more growth in the economy. And you're gonna have 400 billion of stimulus by way of, on an annual basis, if the Trump tax cuts are passed. That's a lot of stimulus into the economy. Regulatory relief will stimulate the economy.

And so what's that stimulus going to do? A lot of people are concerned it could reignite inflation. And they're just saying, there's a chance that inflation goes higher because we have all this stimulus. And if it does, rates are not gonna be able to go lower. So if you look at the next slide, Bill, we show the expectation in the world now has gone from a lot of rate cuts to only one rate cut in the first half of next year, right?

So basically the market's saying that there's too much, that there's a lot of stimulus and we're not gonna get rate cuts. So I think in the first part of this year, there's gonna be a lot of tension back to that core PCE. Is inflation continuing to roll over? We think that a lot of the components like rent equivalent and shelter, et cetera, will cause it to continue to go down.

But the other thing going on here, and this is something I know you and I are close to, you know, and care a lot about with DOGE, the market is saying we expect this stimulus to come in, but I think it's also saying we do not expect Congress to have the courage to cut an equivalent amount out of the deficit, right?

So if you think about this, if we have $400 billion of tailwind from this tax stimulus, then I think what the market would like to see is three or $400 billion of cutting come out of the budget, right? And when you look at what Congress has done over the last, I don't know, five, 10 years, there's no evidence, we've never done it.

So, you know, the market's just saying, I'll believe it when I see it. Trump has talked about interest rates being too high. He talked about it at the press conference the other day. He wants them to go lower. If he talks to his, you know, chief economists in the White House, I think they'll say, these are the issues that are at play, so we really have to make these cuts.

Well, we have a reconciliation package. So all of this is gonna get determined. They've told us, in a single reconciliation package, by April or May of this year. So this is what I'm telling you. This is the backdrop that is going to impact valuations that we have to grapple with, we have to try to understand.

So here's my forecast. My forecast is that DOGE, and this administration, are actually going to cut a lot out of the budget. And, you know, I think that you have the leadership in the House and the Senate. And so if they cut hundreds of billions of dollars out of this budget, then I think the market will say, great, fiscal discipline, we don't have as much stimulus, and I think you'll see the 10-year come in.

And if the 10-year comes in, then I think it could be off to the races. But if it doesn't, if we do the opposite, if we don't make the cuts, if the 10-year goes to 5.25 or 5.5, Bill, it's gonna be an anchor on the stock market, it's gonna be an anchor on the economy.

So I actually did a little analysis I shared with you on this that I wanna review with folks on the pod because given how important I just said that I think this is, I think it's important to try to get our arms around and understanding, is it possible? Is it even possible to cut three or $400 billion annually out of spending?

And so here's this federal spending chart. And what we did here, Bill, we went back to 2019, and you can see the total spending by the federal government in 2019 was $4.4 trillion. And you can see the subcomponents. Social Security, we spent a trillion dollars, Medicare, 644 billion, Medicaid, 409 billion.

You can see the different categories under that, defense, 676, et cetera. And you can see what our net interest expense was at the time because interest rates were relatively low, right, and our debt was a lot lower. The next column is what we actually spent in 2024. So this is our best estimate.

The CBO has given us an estimate. You can see in the footnotes here. So total spending was 6.7 trillion, okay? Well, the question is, is that what we would have expected? Not what we had expected. The next column, what we did, Bill, is what we call a fiscal year '24 baseline.

How did we determine that baseline? We went back to 2019 and we grew every category roughly at two and a half percent. We said the GDP's growing at two and a half percent, inflation's growing roughly that rate, the population's growing at that rate, so that's what government spending should roughly grow at.

And if you see there, the baseline is the 2019 budget adjusted at a two and a half percent CAGR. We should have spent five trillion, but instead we spent 6.7 trillion. That's a $1.7 trillion differential. So when you hear DOGE talk about the ability to find $2 trillion in savings, this is, I think, really what has people optimistic, that there's opportunity to make cuts.
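For anyone who wants to reproduce that baseline math, it is a single compound-growth calculation using the figures quoted here:

```python
# The "fiscal year '24 baseline" described above is just the 2019 budget grown
# at ~2.5% a year for five years. Figures are the ones quoted on the pod.

spending_2019 = 4.4   # $T, total federal spending in FY2019
actual_2024 = 6.7     # $T, estimated FY2024 spending, per the CBO figure quoted
cagr = 0.025          # rough proxy for GDP, inflation, and population growth

baseline_2024 = spending_2019 * (1 + cagr) ** 5
gap = actual_2024 - baseline_2024
print(f"2024 baseline: ~${baseline_2024:.1f}T")   # ~$5.0T
print(f"Actual vs baseline gap: ~${gap:.1f}T")    # ~$1.7T
```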

So you go through there, and I think there are some interesting ways in which we can go after those savings. There are some proposals, and I'll post this proposal, which goes through and says, here's $700 billion of easy deficit reduction that both Democrats and Republicans agree on over 10 years.

So that doesn't get us there, but it is certainly a start. But I think this is gonna be critically important that we get our arms around federal spending this year, because that, to me, is the biggest potential boogeyman out there. And there are ways that Trump can potentially do some of this unilaterally, through rescission, challenging the Impoundment Control Act, and just refusing to spend what Congress allocates.

But given that the Republicans control both houses of Congress, I think that, you know, I'm expecting, and I hope, that we see cuts that at least offset the tax cuts. Yeah. I mean, one of the challenges, obviously, is some of these spend, you know, if you take healthcare, for example, those categories on a per-population basis have been growing way above your 2.5% baseline assumption, and expanding as a percentage of the overall spend for the government.

And so it may require you to attack that specific problem within that industry, not just, you know, say we're gonna spend less. Listen, no doubt about it. There is a global demographic problem that you and I, you can look at any population chart of any country on the planet.

The number of people over the age of 60 is increasing at a very fast rate, and the number of new people coming into the workforce is slowing down as a total percentage. That means that your working population that is paying taxes to pay for all of these things is a much smaller percentage.

So demographically, that is going to be challenged, but, you know, so how do you deal with it, right? Well, you can either just give less things to the people than what you've promised in the past, or you can come up with more efficient delivery mechanisms. Obviously, we know all of the easy and crazy (beep) the government spends money on that DOGE has been talking about that we can cut.

My assumption is we're also going to have to harness technology, harness AI, defense spending's gotta get a lot smarter, and you really have to go through every category. And if you zero-base this, I tend to think there are huge opportunities here, but listen, the track record of Congress, both Republicans and Democrats, has not been good at cutting these costs.

And what I would say is if we don't do it, then there's a real risk that the bond vigilantes come into the bond market, you end up with higher rates, because they don't believe that we're on a stable fiscal path. Look, I mean, I think it's an assumption in American life that the government's inefficient.

Like, I don't think there's anyone that, anywhere that would stand up and say, we think our government is great at execution and spending dollars. There's no one that defends it. They might argue that it has to exist regardless, but they don't argue that it's efficient. So there's plenty of opportunity.

Elon said it's like shooting fish in a barrel if I'm being asked to look at inefficiency in the government. And so there will be opportunity. I think there's an even bigger opportunity, which he also hinted at, which is, if you find policy that's restrictive and unnecessary, you could unlock GDP growth, which helps in this equation as well.

And I totally agree with you that the inhibitor here won't be the identification or knowing what to do, it'll be whether the system can actually reform itself. And we'll have to find out. I mean, like, you know, we'll put in here the charts that we showed last year on how you get to a balanced budget in 2029.

But it is just not that hard to get to a balanced budget at $6 trillion, right? That's 1 trillion above what we should be spending on the baseline today, right? But it just gets you back to that 2019 baseline, right? And it assumes that we have some acceleration, but not a lot in terms of growth.
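A rough sketch of that balanced-budget arithmetic is below. The roughly $6 trillion spending level and the 2029 date come from the conversation; the revenue starting point (roughly $4.9 trillion of FY2024 federal receipts, per public CBO figures) and the ~5% nominal growth rate are assumptions added here for illustration only:

```python
# Illustration of the "balanced budget at ~$6 trillion by 2029" claim.
# Spending level per the discussion above; receipts and growth rate are
# assumptions for the sketch, not numbers from the pod.

spending_2029_target = 6.0   # $T, per the conversation
receipts_2024 = 4.9          # $T, approximate FY2024 federal receipts (assumption)
nominal_growth = 0.05        # assumed nominal revenue growth per year

receipts_2029 = receipts_2024 * (1 + nominal_growth) ** 5
print(f"Projected 2029 receipts: ~${receipts_2029:.1f}T vs spending of ~${spending_2029_target:.1f}T")
# ~ $6.3T of receipts against ~$6.0T of spending -> roughly balanced, on these assumptions
```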

And we are gonna get acceleration, right? Cutting taxes three to $400 billion a year is going to accelerate it. We are going to get efficiency and productivity gains from AI. Like these things are happening. So we've got a lot of goodness. And by the way, relative to Europe, which is a basket case and a disaster, like the US is in such an incredible position.

We just have to have the courage of our conviction and get rid of some of this wastefulness. That's the only boogeyman I see out there. But Bill, there's one other thing, you know, like we've talked a lot about spending. Let's talk for a second about regulation, because I know there's an AI bill brewing down in Texas, akin to California's 1047 that we got killed.

So maybe just, you know, as again, we got a lot of changing politics. I suspect that Trump is gonna rescind all, or at least part, the major part, of the Biden-EO executive order on AI. But what's going on in Texas around AI? Well, you and I have talked about this in the past, and there had been a huge movement underway.

And I would call it an unusual movement when there's a new market evolving, where people are literally begging for regulation. And that got a bunch of different parties up on both sides, that, you know, part of those, people believe part of those efforts led to the Biden-EO. And they also led to this big push in California, which we talked about.

What was it? 1092, that everyone got on one side or the other. Everyone in our community had a point of view. You had Scott Wiener as the person inside the administration in California pushing for that. And it ended up with this huge, you know, argument, and then Gavin Newsom vetoing it.

And I suspect that most people see the administration change, our friend David Sacks coming in as the AI czar, and they would expect that this is kind of, at least for now, put to bed. What's happening, and people are probably not aware of it, is that the people pushing for this regulation have moved underground to a certain extent, and they're pushing it in a bunch of different states.

I've heard there's as many as 25 now, state-by-state initiatives. And I guess this is how policy works in this country. But there's a bunch of reasons to be really worried about this. I can't stand that the one that's got kind of the most heat right now is in Texas.

I think Governor Abbott, and obviously Elon, and all his companies have had this massive impact on the Texas economy by making it the state that has the least red tape, that's kind of the most pro-innovation and pro-business. And for this thing to pop up there is just so ironic in my mind, and would be literally brain dead.

When 1092 was being proposed, 1047, sorry, people, even including myself, said, "Why don't you just write a sign that says move your AI company to Texas?" Well, now those words sound stupid, because Texas, and the first thing that people should realize is there's no reason to do this on a state-by-state basis.

At the end of Obama's term, he had this initiative that he didn't get around to, I wish he had, where he had identified like 100 different industries where there's state-by-state regulation, including like haircare and stuff. And it's just wasteful red tape. And for us, when so many people in Washington are worried about our competitive position versus China, to build a gauntlet that our companies would have to move through to adhere to state-by-state interpretations and rules is just mind-numbingly, you know, I wanna say the word stupid out loud, and the only reason I fear saying it is that like someone trying to write that legislation would use it against me.

But like, I can't imagine, like we have a changing administration, we have people looking at this. If we feel we just have to do something, God, please, you know, I prefer it at a national level to a state-by-state. Yeah, no, federal preemption. I mean, like, listen, this is the Interstate Commerce Clause, like there's nothing, our great advantage over Europe is, you know, they got this crazy patchwork of regulation across all these different countries.

And here, if you're operating an internet company, you have one rule of the road, right? And we need to have one rule of the road around AI. And I think that it should be promulgated and will be promulgated in the federal level. And just so people, you know, I've heard this boogeyman out there that people are like, oh, everybody in Silicon Valley, they don't want any regulation.

They're just a bunch of crazy libertarians. No, what I think you hear is pushback to uninformed regulation that would slow us down and cause us to lose an important race to China. If we care about the race with China, then the first thing we need to do is to reduce the impediments to us running the fastest race we can run, right?

While still caring about AI safety, while still caring about national security, while still caring about all of these things. But if you have this patchwork where these states are onerously regulating all of these companies out of the gates, all you're doing is handing the gold medal to China. And so I think the unintended consequences are really bad.

I'm quite confident. Let me tell you about some of the details and I'll include a link in here. Dean Ball wrote a really solid analysis of everything that's wrong with the Texas proposal. But one of the things that's in there is that you have to do a risk assessment.

So this is, you know, similar to like creating an audit and, you know, for those of you who've run companies who've gone through audits, you know how difficult it is. But not only would you now have to, you'd have to publish a risk assessment. So whatever product I'm working on, I've now stepped through these hurdles and wrote these reports.

And then there's liability associated with it looking backwards. And so if I do something bad, I'm liable for it. But I also then get a second look back and did I run the analysis where I should have known about the risk that's there. And it's like, and it applies, you know, very broadly.

I give the people that are looking for regulatory capture credit in that I think they saw that they were losing at the national level and realized that they could create messy anxiety at a state level, which, you know, may actually promote trying to get to a national outcome, but man, this would be horrific.

I think they're going to get a lot of pushback. I would be shocked if this happened in Texas. And I think you have a lot of coordination going on in Washington. I think it's quite smart, and I think it's on both sides of the aisle.

You had a great report published by Congressman Jay Obernolte, who was co-chair of the AI task force in the House. The Senate's done some work. And so hopefully Sacks will get there and provide a really clear coordinating function, and we'll have federal preemption and get great direction, you know, out of Congress.

You know, one other thing, Bill, is I'm just thinking about 2025 predictions. When we're talking about regulatory capture, I think there was a lot of fearfulness, a lot of it directed at OpenAI, that these companies with a closed model, who had the lead, were trying to lock everything down before people could catch up with them, trying to squash what was happening in open source.

I think you're gonna see in 2025, a lot more companies open source more of their models. Well, I think we just saw Microsoft put out some open source models last week, which is like really surprising. And so I think that my own sense on this is that there's a lot more commonality between all of the major players around AI than there is division.

And, you know, we'll see. But what I'm hearing out of a bunch of them is that we're gonna see a bunch more open source. Yeah, but there's two dimensions to this. So you could be right that the bigger companies could all agree on a regulatory framework. There's also the issue of small versus large.

And, you know, there's a lot to pay attention to there. Before we wrap up, I had like four or five things that I'd like to bounce off of you and get your opinion on as I look forward into the year. The one that I've been thinking about a lot is Google.

And we talked a lot about how search is at risk. And clearly, for those of us that are doing so many searches first on ChatGPT or another product like that, there is a real question as to, you know, what's the point of Google in the long run and does it still hold?

On the other side, you know, when we've looked at them in the past, we've said, man, they have an incredible number of assets. And when you look at, you know, whether it's their email, Gmail product, and then their docs product, and then their Slack-like competitive products, and their Zoom-like competitive products.

And then the fact that they own Android and this mobile operating system. I was, you know, surprised to hear Jason on All In get super excited about Google in this way, and then announced that he had switched to a Pixel phone. And I'm thinking to myself, you know, this could be kind of the perfect AI basket of assets.

And, you know, you look at a company like Glean, which I know you're an investor in and is doing well. Google could create a Glean for the rest of us if a company commits to being on the Google stack and has all these things. And it becomes a combination that no one else has.

Apple doesn't have it, Microsoft doesn't have it. It requires insanely great execution to get it all right. But I find myself, you know, that if a whole bunch of people said, man, I want this so bad that I'm gonna switch from my iPhone and get onto Android, that would be a real tell.

That'd be something special. Yeah, well, you know, I'll tell you, I don't know what it was, late November, maybe early December, before the recent run-up, I started seeing a bunch of breadcrumbs around Gemini, around Deep Research, around NotebookLM, just the cycle time, you know, improving there.

I talked to a lot of companies. A lot of companies were still seeing growth in their search volumes, you know, their lead generation. So online travel companies, et cetera, from Google. And I started to see a lot more evidence of them just, you know, getting fit and getting more efficient.

And frankly, I think a lot of credit goes to Sundar. I think he's starting to get feisty and see the things that, you know, need to get done here. I would say, you know, let's keep a few big pieces in perspective. They are facing the largest innovator's dilemma, certainly from my perspective, in the history of Silicon Valley.

And I think it's almost impossible to replace a 99% incremental margin business, i.e. a monopoly search business with whatever comes next. Because whatever comes next, they may be important in it, but they're not going to be 99% monopolies, right? Like OpenAI is going to be there, Meta is going to be there, et cetera.

And I don't think their margins are going to be the same. So that's the first thing. Number two is, how could I be wrong? Well, you could envision a world where the pie grows so damn much, Bill, that between Gemini and, if they were able to, displacing Apple on the phone, all these things could really replace that.
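To put rough numbers on that "pie has to grow" point (illustrative figures only, not anything cited in the conversation): if the incremental margin on whatever comes next is closer to 50% than 99%, the revenue base has to roughly double just to keep the profit dollars flat.

```latex
% Illustrative only: R is today's search revenue, R' is the revenue of
% "whatever comes next." Holding profit dollars constant with the
% hypothetical margins above:
\[
  0.99 \, R \;=\; 0.50 \, R' \quad\Longrightarrow\quad R' \approx 2R .
\]
```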

I think you have plenty of time to wait and see, you know, to pick up the breadcrumbs on whether those things are occurring. But I will tell you this: I suspect Apple, for the first time, is worried. They used to look at Google's assets and say, "Okay, this is orthogonal." Like, they were never worried about Pixel over the course of the last eight years. But I imagine, to your point, this is the first time in a long time that they've worried that Google could embed Gemini and deep reasoning in, you know, an assistant on the phone that can book my hotel, and just make it a 10X better product.

I'll tell you another area where they have an advantage, which is hopefully useful to everyone out there. Gemini is really good at local because they have all the local reviews. So I find myself now, if I'm going to a restaurant, asking, "What are the three dishes people have loved the most?

What are the two that you stay away from?" And get down to a level of detail that you wouldn't have had before. And, you know, very valuable. So that's another asset. What are they like at Tony Black's? Well, you have to go do the search. A second one that's super interesting to me, that we're going to see play out in the first half of this year: X.AI, and Elon's commitment to the largest cluster.

So the concerns about running out of data, the concerns that maybe there's a parameter limit, at least against the text data set, call into question: do you need the largest single cluster? Grok 3 has been promised, I guess, in the first half of the year. It would be super interesting to see if something comes out of that product that is competitive with the cutting edge at OpenAI, or maybe above it.

I don't know, but I'm very curious about how that plays out. I mean, I think there's been a lot of attention focused on that. I'm not sure that I would measure X.AI's success solely on the dimension of, is the Grok 3 model better than whatever OpenAI's latest is or Gemini?

Because remember, what they're really doing, Bill, is building a larger cluster for pre-training, so they can do more on pre-training. But remember, these other models are infused with post-training and test-time compute, et cetera. So I suspect that what we should- Oh, I agree. I think it's a test of that pre-training argument.

Correct. Let's say they do achieve something and come out with something that's better than everybody else. So that would be a new data point relative to where everybody's thinking. I think, here's the way I think about X and Elon. He's got three vectors on which he is leveraging AI.

Number one is robotics. And he recently said, and I've heard Google and others say this, that he expects there to be millions of humanoids live and in the wild by 2028. Think about that. That's three years away. It's really, it's incredible. And you just, we'll post this video of this Chinese humanoid that, you know, somebody said passed the visual Turing test because humans were confused whether or not it was real.

It was so damn good. And so he's got that vector. The second vector he has is autonomy, right? And I picked up a new Tesla S at the end of last year with Hardware 4, just so I could be on the latest FSD. It drove me home from San Francisco last night.

So those are two real-world vectors, like, not language models. Now we're talking in the world of bits and, you know, atoms. And then the third one is really around-- By that, you're implying, just because those are in separate companies, that X.AI could become the hyperscaler for-- 100%. I think there's no doubt about it.

It already is, and it will continue to be. Like, he has a keiretsu, as they say in Japan, of companies that can leverage each other's insights. And I fully expect that they will. And X.AI, so his need to build these large clusters and place these bets is because he's got to support huge potential businesses, not just building Grok to compete with ChatGPT, although I think they'll try to do that too as part of X, et cetera.

But he has lots of ways that he can leverage it. And remember, if you build a large coherent cluster for pre-training and you decide, okay, we're hitting up against some scaling limits of pre-training, you can use it for training reasoning models, you can use it for post-training, you can use it for inference.

So it's not like you're wasting this just by building these larger clusters. I think what I'm most convicted in is that these guys are going to spend a lot of money on GPUs because, as Jensen says, more of the world is moving to workloads that demand machine learning and accelerated compute.

Okay, that was number two, Bill. What's number three? One last question on number two. It seems to me that X.AI wants to make the argument that having their own hardware is a competitive advantage versus OpenAI. Do you buy into that argument? How do you think that plays out?

Well, I think it's not necessarily owning it or having it, but it's being more competent. I do think that the ability to rapidly stand up compute that works, that works really well, that doesn't have as many hiccups in production, et cetera, like that's a superpower. I mean, Jensen's gone on, you know, to talk about this crazy timeline that Elon hit.

I think he's unique in that regard. I think he has a balance sheet and an ability to raise global capital, Bill, that his cost of capital is really low for doing this. So listen, I'm happy. I think it's so great for the US ecosystem that Elon is charging full speed ahead in AI and keeping Anthropic and Google and OpenAI and everybody else honest.

It's part of the reason I'm so bullish on Team America when it comes to AI, because we've got the smartest people in the world investing more money than we've ever seen invested in anything other than perhaps the Apollo project on an inflation-adjusted basis to make sure that we're in the lead.

But at the end of the day, it's a combination of things, right? You gotta have the best research in the world. Like you can have the best computers and chips and biggest cluster in the world, but if you don't get the architecture right around your reasoning model and the other guy does, they're probably gonna win.

All right, number three. We'll do four and then we'll wrap it up. Number three, I don't recall in any of the previous waves, you know, the PC wave, the client-server wave, the mobile wave, the amount of chatter about co-opetition that I'm seeing here. And people took note of our video with Satya, where he said, "Oh, I already have my own models," you know, and then they released a few, implying that they're in some kind of co-opetition with OpenAI.

You've heard all of these hyperscalers talk about doing their own processors of some sort. You know, both Trainium at Amazon and the TPU at Google compete with NVIDIA, but they're buying NVIDIA's product. NVIDIA put out models this week, which compete with some of those customers. And then they announced this AI PC, which would, you know, presumably compete with Dell, even though Dell's on stage helping them with the Elon build.

But I don't recall that. At least when I think about the Wintel world, which I was covering on Wall Street, I just felt more like people stayed in their lane and were thankful to be a part of this broad ecosystem that was growing. And I just see so much tension.

Like, it's super interesting to me as just someone that's watching it, but I'm curious what your thoughts are. Yeah, I don't know that I have any strong, you know, perspectives there. Maybe a couple of things. Number one, the amount of capital that it takes to do these things is not the domain of venture capitalists.

So if you're a new entrant, if you're Anthropic or you're OpenAI, frankly, they were forced into this because there was no other choice. When they realized the amount of capital was going to be, you know, a nuclear level of capital, you just had to turn to these big companies.

So these big companies ended up owning, you know, pieces of them, but they couldn't be totally dependent upon them. So you had Google, who hedged their bets a little bit with Anthropic, and you had Amazon that made a bet on Anthropic, and Microsoft on OpenAI. And now these companies are becoming big and successful. Like, listen, if they don't become big and successful, Inflection just gets subsumed by Microsoft.

Character gets subsumed by Google. But if they get some scale on their own, like OpenAI, and now it probably couldn't be subsumed from an antitrust perspective by Microsoft, and it has standing on its own. And so I think you have that dimension. I think the competition is as aggressive as it's ever been.

I think sometimes we make more out of this competition than there is. Like, I don't think Microsoft or NVIDIA launching a few models means they intend to be in the frontier model game, competing heads up with these other folks. At the same time, there's a long tail of their customers who are more than satisfied to use a tightly coupled model rather than maybe trying to kludge it together themselves by going to Llama or using a lot of these other plumbing services.

And so, you know, we'll have to wait and see. Here's one thing that I would just point out in that regard. We just looked at that CapEx chart. Almost $300 billion of non-governmental R&D being spent by the biggest companies in the world to advance causes that are aligned with America, right?

Like, we hearken back to the age of Bell Labs and Silicon Valley, you know, where you had national government spending. And I just think this is so damn bullish for us that we have printing presses, we have companies generating enough cash flow to invest this much money to put us at the bleeding edge, and it creates incredible national strategic advantage.

And they were getting pushback for doing stock repurchases, with people saying they didn't have any use for this cash. Well, now they do. So I guess that's a positive. All right, the last thing that's on my mind, and that I'm really looking forward to better understanding in 2025: it does appear like, for the time being, the majority of the enthusiasm on the advancement in AI is around this chain of thought thing.

And as you've talked about, you know, I think it's somewhat ironic that it's less efficient, but now that's a good thing. I think if someone had built a model that was just hyper-efficient, to where the margins expanded, that would be better. But okay, so now the argument is it's less efficient, but it's gonna consume more compute.

So that's a good thing. So seeing how well this applies to different uses of AI will be super interesting. You know, what are the coding companies seeing when they apply chain of thought to exactly that? 'Cause when this model was first released, even OpenAI said it's not for every use case, or they hadn't seen it be successful in every use case.

Now, maybe they will in the future. And I've signed up and paid for all these pro versions in the past few weeks and been throwing problems at them. You know, the thing that's especially interesting is that it goes away; both the Google and the OpenAI products will go away for 10 minutes.

And, you know, how much better is that output? So it's super interesting. I think that's gonna be something that's really important to watch. Obviously, if the compute price keeps falling, I think you fall into this "why not" argument. Like, why not run 20 passes on something if the marginal cost gets somewhat irrelevant?

And with the fast followers, and the way you're using synthetic data to create even smaller models, maybe you can get the smaller model to run the second pass, you know, double-checking your work, if you will. But anyway, I think this will be the fun part to watch on the edge this year.
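As a concrete illustration of the "run 20 passes and let a cheaper model double-check the work" idea being described here, below is a minimal sketch in Python. The function names and toy answers are hypothetical placeholders, not any vendor's API; a real version would call a reasoning model and a small verifier model where the stubs are.

```python
# Minimal sketch: sample several independent answers, let a cheaper model
# double-check each one, and keep the answer that survives most often
# (self-consistency voting). generate_answer and verify_answer are
# hypothetical stand-ins, not a real SDK.

from collections import Counter
import random

def generate_answer(question: str, seed: int) -> str:
    """Placeholder for one pass of an expensive reasoning model."""
    random.seed(seed)
    # A real implementation would call a chain-of-thought model here.
    return random.choice(["42", "42", "41"])  # toy distribution of answers

def verify_answer(question: str, answer: str) -> bool:
    """Placeholder for a small, cheap model that double-checks the work."""
    # A real verifier might re-derive the result or check constraints.
    return answer == "42"  # toy check

def best_of_n(question: str, n: int = 20) -> str:
    votes = Counter()
    for i in range(n):
        answer = generate_answer(question, seed=i)
        if verify_answer(question, answer):   # drop answers the checker rejects
            votes[answer] += 1
    if not votes:                              # fall back if nothing verified
        votes[generate_answer(question, seed=0)] += 1
    return votes.most_common(1)[0][0]

if __name__ == "__main__":
    print(best_of_n("What is 6 * 7?"))
```

The economics Bill is describing show up directly in the loop: each extra pass buys a little more reliability, so the approach only makes sense if the per-pass inference price keeps falling.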

I can tell you a couple of things on that. Number one, I can tell you that the researchers inside these labs, they feel like they're looking into AGI, and they're surprised how skeptical the world is. Yeah, like there's a lot of dissonance that I think they have, because I think they're like, "Oh my God, we're getting close." The second thing I would tell you-- Part of the problem on that is, you know, they're seeing things ahead of time.

Correct, correct. And they're unable to release 'em 'cause they're not fully baked yet. 100%, and they don't have the compute. Yeah. They don't have the compute, Bill. But I'll tell you this: part of the reason they're so bullish on these reasoning models is the scaling perspective, right?

So remember, reasoning models can get better just because you throw more compute at 'em as well. So where are we on that scaling curve for reasoning? Most of 'em tell me that we're around a GPT-2 level on that logarithmic scaling curve. So they expect a lot of scaling advantages in 2025 and 2026 on the reasoning models.
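For readers who want the shape of that claim written down: the "logarithmic scaling" being referenced is usually summarized as roughly log-linear gains from extra test-time compute, something like the illustrative form below, where the constants are hypothetical and the range of validity is bounded.

```latex
% Illustrative log-linear form of test-time (reasoning) scaling.
% C is compute spent per query; \alpha and \beta are hypothetical
% fit constants; gains flatten outside some bounded range of C.
\[
  \mathrm{accuracy}(C) \;\approx\; \alpha + \beta \,\log_{10} C .
\]
```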

And then finally, when it comes to utility, so where are these things going to be put into practice? I think you're going to see some pretty dramatic breakthroughs this year on coding, right, so think about all the startups that we've seen in the coding space. A lot of them have used prompt injection and other techniques in order to, you know, kind of get these agents to do things that they want.

I think you're going to see real coding agents built by Google and by OpenAI, driven by their most sophisticated inference-time reasoning models, that are going to be meaningfully better than what's in the market today. That's one. And I would say secondly, if at the enterprise level you think of coding as kind of the tip of the spear as to what these agents can do, remember, like, you and I started last year talking a lot about the consumer, right?

Memory and actions. And I think that you're going to see ChatGPT imbued with these capabilities. And I imagine the same happens with Gemini, et cetera. And we've seen it; in fact, one of my partners did a demo using booking.com and Anthropic's computer use, where, you know, it was booking a hotel and doing other things.

It's still fairly embryonic, but, again, that's not built on the back of o3. Now build that on the back of o3, and it becomes very, very capable. So if you said to me, how wide is the use case going to be? I think right now the use case is very narrow; it's researchers.

Okay, but I think you're going to see the aperture on the reasoning models expand pretty dramatically. And I think you're going to see increasingly a blending of the pre-trained and the o-series models, because ultimately it's one model, you know, that's going to help you do all the things that you want to have done.
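For the "memory and actions" idea, here is a minimal sketch of an agent loop under stated assumptions: `call_model` is a hypothetical stand-in for whatever reasoning model decides the next step, and `search_hotels` is a toy action, not a real booking integration.

```python
# Minimal sketch of an agent loop with memory and actions, in the spirit
# of the consumer assistant being described (e.g., "book my hotel").
# call_model is a hypothetical placeholder for a reasoning-model API;
# the actions here are toy functions, not a real booking.com integration.

from dataclasses import dataclass, field

@dataclass
class AgentState:
    goal: str
    memory: list = field(default_factory=list)  # running log of observations

def call_model(goal: str, memory: list) -> dict:
    """Hypothetical stand-in for an inference-time reasoning model.
    Returns the next action as {'action': name, 'args': {...}} or {'action': 'done'}."""
    if not memory:
        return {"action": "search_hotels", "args": {"city": "Austin"}}
    return {"action": "done", "args": {}}

def search_hotels(city: str) -> str:
    return f"Found 3 hotels in {city} under $300/night"  # toy result

ACTIONS = {"search_hotels": search_hotels}

def run_agent(goal: str, max_steps: int = 5) -> list:
    state = AgentState(goal=goal)
    for _ in range(max_steps):
        decision = call_model(state.goal, state.memory)   # decide
        if decision["action"] == "done":
            break
        result = ACTIONS[decision["action"]](**decision["args"])  # act
        state.memory.append({"action": decision["action"], "result": result})  # remember
    return state.memory

if __name__ == "__main__":
    print(run_agent("Book me a hotel in Austin for Friday night"))
```

The point of the sketch is the shape of the loop, decide, act, remember, repeat; what changes when the deciding model is an o3-class reasoner rather than a plain chat model is how reliably each step is chosen.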

Well, here's something I didn't discuss with you ahead of time, so we can edit it out if we don't use it: if there are people out there doing particularly interesting things, or leveraging this in a particularly interesting way, I would invite them to reach out to us.

And I'd love to not only learn about it, but obviously be willing to share with the audience what these things are as they uncover them. I'll close with this, and you didn't ask for my input on it, but you know I'm not a fan of macro analysis; I think it falls in the too-hard bucket because of all the different variables at play.

Great seeing you. And I look forward to a fun year of doing these with some incredible guests. And, you know, I learned a lot kicking around with you. It's a lot of fun. All right, take care. Take care. (upbeat music) (upbeat music) - As a reminder to everybody, just our opinions, not investment advice.