And this was Friedman opining in a very quick rapid fire interview of the 14 government agencies that existed at that moment in time, which he would abolish and which he would keep. And, you know, we'll roll it and put a clip in here. Keep them or abolish them? Department of Agriculture?
Abolish. Gone. Department of Commerce? Abolish. Gone. Department of Defense? Keep. Keep it? Department of Education? Abolish. Gone. Energy? Abolish. Except that energy ties in with the military. Well, then we shove it under defense. The little bit that handles the nuclear plutonium and so forth goes under defense, but we abolish the rest of it.
But, you know, they asked him, Department of Agriculture? Abolish. Commerce? Abolish. Education? Abolish. Hey, Bill, we were in my office last night. My boys put on the cowboy hats, the cowboy hats we made down at your anniversary. Yes. It looks great on you. It's great. You should wear it for the whole episode.
It's good to be here. It's kind of a surprise. This morning I got on the plane to fly up to Seattle. I was going to do a pod with Satya, as you know, and then we were going to do our reaction pod. And a typhoon or something hit Seattle last night.
Literally, like halfway up there, I got word that Microsoft has no power. Nobody has any power. Trees are down. So I did a U-turn and wished them well. I'm going to go back up there on Monday and do a pod with Satya. But given that, I couldn't wait to get together with you and kick around all the things that we've been sharing back and forth over the last couple of weeks.
Lots of stuff happened. No doubt about it. One of the things certainly popular in our threads has been AI scaling laws. You know, like, are we beginning to see models top out, particularly on pre-training? In fact, there was this Bloomberg headline, "OpenAI, Google, and Anthropic struggle to build more advanced AI." It then goes on to say that Orion, or ChatGPT-5, Gemini, and 3.5 Opus are all falling short of internal pre-training targets.
And then they quoted Dario as saying, "Scaling laws are not actually laws of the universe, but they are simply empirical regularities." He said, "I'm going to be in favor of them continuing, but I'm not certain of that." Yeah. So what's been on your mind about whether or not we are, in fact, seeing continued gains from pre-training?
I would add a few other things. Ilya was also quoted as questioning whether the scaling laws would continue. And for whatever reason, Marc and Ben over at Andreessen Horowitz also made the same statement. So in a very short window, we got this point of view from quite a number of people.
I would add that in Dario's five-hour interview with Lex, which I only grabbed pieces of from the transcript, but he was very positive on scaling laws in that interview, which just recently dropped. So there may be a disagreement between some people, but the breadth of the feedback suggests something's up.
I think something's worth paying attention to. And I would add that there were people that raised this question early on. And I think it's specific to LLMs. I don't think you should say AI scaling laws. I think there was always a question about whether LLMs would run out of steam.
And that was tied to three different things, which you and I talked about on July 11th. And I had even raised this question a year ago. One is, will the parameter count run out? Historically, with mathematical algorithms that do some type of fit, and this is a very sophisticated form of that, when you take the number of variables up past a certain level, it stops adding value.
You just overfit. There's a question about how big the context window can get. And that came up on the NVIDIA call today. I think we can talk about that. And then data was brought up. And a lot of people, I know this was a big point Melanie Mitchell had raised, like, are we just going to run out of data?
And there was pushback that there's synthetic data, but there have been papers published that show synthetic data creates a lot of chaos. And so I do think there were people that were thinking this might happen. And there was also a belief, and I share this belief, you don't have to, so we can disagree on this, that there was an argument being made by, you know, OpenAI and Anthropic that, oh, you're just going to see the next number drop, and the next number is going to be way better than the last number, and that's going to happen routinely.
And they both said, you know, we spent whatever, a billion on this model, and we're going to spend 10, and then we're going to spend 100. And that implies, I would say, at least linear scaling or maybe above. And they all said that. And so if the comment, you know, that you read earlier is true and that they're not getting the benefit, there are implications.
You know, it doesn't mean AI is in trouble or AI is done. I mean, there's tons of positive AI news out there, but it may mean we're shifting directions. And it's worth, I think, talking about, well, if this is true, what are the implications? Right, no doubt about it.
You know, and Jensen did just say on the NVIDIA call, he basically said there is no slowdown, but he did say that foundation model pre-training is only one vector of scaling. You obviously have post-training scaling, and now we have inference time reasoning that will scale. We have data and data quality that will scale.
So I think that, you know, it is a question. As we looked at the early phases over the last 24 months, everybody was mesmerized by these evals on pre-training. And then I think a lot of people mistakenly, and we'll get to this later, a lot of people mistakenly said if the pre-training ever slows down, this is horrible for NVIDIA, because the only reason NVIDIA is going up is because people are buying these big clusters to do pre-training, which also isn't entirely true.
But I do think it's important to recognize, right, that as you scale up pre-training, a lot of the low-hanging fruit was plucked. And so it makes sense to me that you're seeing a deceleration in the rate of improvement of the pre-training. But when you look in the aggregate at all these various vectors on which we're scaling intelligence on a combinatorial basis, it may be just as good or even better.
But I think it's important to your point, like let's steel man the argument. Let's assume that we do, you know, kind of run out of improvements or see a significant deceleration in the pre-training improvements on these models. What do you think that means? Like who are the people who are hurt by that?
Who are the people who are helped by that? The first thing that comes to my mind, and once again I'm not asking you to agree with this, it's just like what's in my head, is that this is not the expectation that Sam and Dario had 12 months ago. Like because they were promoting a thesis that it was just going to keep going up and up, and they were willing to put 10x over and over again on the size of the cluster they were going to train on.
And if it's a shift, and it may very well be a shift that leads to insane innovation and continued growth, blah, blah, blah, blah, blah. I still think it's a surprise shift. Like I don't think it was what was expected. And so I think that's worth acknowledging if that's what happened.
Two other things that I think are, let's just call them questions that are raised. One, the NVIDIA differentiation, as we've talked about, is greatest at the largest cluster size. So on pre-training, once again I only equate this to LLMs, you know, I don't include the FSD problem in this, I think that's more of a core AI problem, and that thing may scale way beyond where it is today.
But for LLMs at least, if we've hit a cap, then, you know, does that impact NVIDIA demand at all for the bigger systems? And then there's a differentiation question, which is if the world shifts to inference sooner or faster, and if the path of differentiation for the language model companies becomes more inference-based than pre-training-based, what does that mean for the hardware systems that people are using?
And, yeah, and then the third thing, and I'll go quick and you can respond, is, you know, might a ceiling on that pre-training allow other people to catch up faster with OpenAI? And there was an article out today about a, like, new Chinese model that was performing at or near some of the latest-- DeepSeek.
DeepSeek, I think is what you're referring to. So anyway, those would be the three things that I would say that you would want to analyze if you thought that was true. We've spent a lot of time talking with, for example, the CEOs of most of the hyperscalers, if not all of them.
And then I listened to Jensen on the call today, right? They called Hopper demand exceptional, and this is the very tail end of Hopper. People thought Hopper demand would be over by now, frankly, and he said it's going to continue well into next year. And interestingly, they called Blackwell demand staggering and said that, you know, they're increasing production even against their prior predictions, right?
They beat the quarter in terms of revenue. So the action on the street seems to support Jensen's words on the call, which is there's no slowdown, right? Sam tweeted, I think there is no wall. And so there is a debate here, and some of it may be semantics, right?
Like, I definitely think the second derivative is negative, right? The rate of improvement is slowing, but you're still seeing gains. And I think the implication that we've hit a wall suggests no gains and no other vectors for scaling that intelligence. So I think at least as we look at 2025 for NVIDIA, for the supply chain, and for the model companies themselves, we think we're going to continue to see improvements in the models.
It won't all come from pre-training. We think there's an increasing amount coming from post-training, from data quality, and then obviously from this inference time reasoning. I would add to what you said. I mean, I listened to the call. Yeah, there was certainly no concern raised by Jensen about either his demand or his sense of what was going on at the architectural level.
And Microsoft has been increasingly boisterous as well. If this thing is true, it certainly goes against what Kevin Scott was saying on a podcast four or five months ago. And so I doubt that even if it were true that it would play out in the numbers as fast as we're talking about.
This stuff just came out like weeks ago. But I wouldn't ignore it. I think it's important to understand. I think it's important to know what the drivers are of incremental investment and incremental use and incremental performance and competitiveness and to see where that's going. And I think there are a lot of questions that we need to ask and understand about inference.
As I understand it from the startups that we work with, you can train a model on a big NVIDIA cluster and you can run it on a different system. And there are certainly performance tests that are in the public that show Groq, Cerebras, and others, including the TPUs at Google, outperforming on inference.
Now, on the call, Jensen said something that I hadn't heard him say before, which is he talked about context windows potentially growing really large. And that might then require a larger cluster, which brings up a point. On the previous call, Jensen had just said, well, the reason you'll use us for inference is because you've got old machines lying around.
This was a step in a different direction saying, well, if context windows are huge, we might be differentiated yet again here with larger systems. So I think those are the things that people need to ask and figure out. There's no doubt. I think one of the implications of what you're saying is as we looked at pre-training, we basically watched what appeared to be the increasing commoditization of the pre-training advantage.
Everybody was catching up pretty quickly, even people who we didn't think were necessarily in the lead pack. So if, in fact, we're seeing a deceleration in the rate of improvement, if that's becoming more commodity, then it shifts the balance of power to post-training, which really relies on quality of data and other vectors of scaling intelligence like inference time reasoning.
And those things may be less commodity either because you have a data flywheel that's better than your competitors or because you've just had some breakthroughs architecturally on other ways to scale intelligence. And so I think 2025 is going to cast more light on those things as well as improvements into the core product like we've talked about before around memory and actions.
Yeah. And there was a lot of talk on memory in the past week. So Google, with Gemini, released a version of their chatbot that has memory. And Mustafa at Microsoft dropped the phrase "near infinite memory." Now, putting "near" next to the word "infinite" I think is a bit of an oxymoron.
You could say I'm going to live near forever. And I'm not sure what that means. Clever use of words, a bit of semantics. But they seem super excited about what they've unlocked in memory. And I've talked for a long time about how I think the more memory one of these systems has, the more useful it becomes to each individual that uses it.
So I look forward to seeing what they have and seeing when they release that. You know, I would just make one more comment on this scaling wall or ceiling, if it is starting to arise. There was talk prior to now, if you assume that weren't true, that these companies were just going to train on bigger and bigger and bigger systems.
They said, oh, someone will train a model on a $100 billion system one day. If it's true that it's topped out, that's not going to happen anymore. And so that is a-- it doesn't mean NVIDIA won't still be sold out till 2030. But there was this notion that they were just going to be constantly building larger models for pre-training LLMs.
And that may be off the table if this is true. And let me suggest, as an investor in at least one of them, in OpenAI, it may not be a bad thing that that's off the table. They don't have to build that system. Right? I mean, I do think there was this question around-- That's a fair point.
--returns and where the money was going to come from. And I'm not worried about it from the perspective of NVIDIA because I think, again, while we're all hyper-focused on these evals, right, the real world people are focused on use cases. So if you break down consumer and enterprise here for a second, right, you heard Jensen referring to this a lot on the call.
Remember, Microsoft has said of its $10 billion in AI revenue, it's almost all inference. We know NVIDIA has said half of their revenue is inference. And he was asked on the call if that mix would become higher in the coming years. And he said he expects it, he said he hopes it does.
And it will be great for the world when it does, because it'll mean a lot more people are getting benefits from it. But let's talk about this vector around consumer AI, right? You mentioned Mustafa referring to near infinite memory. It's very clear now. He said 2025 is going to be the year of memory.
You and I started off this year hoping that 2024 might be the year of memory because we think these agents are going to become infinitely more useful to us when we don't have to prompt them about all the things we've done in the past. And we've also seen everybody begin to roll out actions.
Every company now-- Gemini has talked about it. We saw computer use out of Anthropic. I took note that Sam said recently that tools and agents that can do things like book airline tickets are at least as important as better models. We will have better and better models, but I think the next giant breakthrough will be with agents and their actions.
I think that these, again, they're not either/or, right? The core models will get more capable, multi-modality, inference time reasoning, post-training. But I also think we finally see out of these companies a real cadence around product improvements, memory, actions, voice, right? Those just dramatically increase the utility of these things, which then leads to a massive increase in token production or inference.
And that inference has got to run somewhere, right? So that doesn't-- and then when I look at ChatGPT, you can see the numbers that are out there publicly. It certainly does not seem to be slowing down. In fact, if anything, accelerating. So I think those are indicators, right, because I've heard that Claude and other models, and Perplexity, are doing well as well.
I think those are indicators on the consumer side, right, that consumers are finding more and more utility. And I think 25 with these product improvements, I think that could be another step function unlock in terms of consumer use. And enterprise use. I mean, all those things, memory, voice, they can be utilized in an enterprise fashion for sure and will be.
And so, yeah, I think you're right about that. I think those innovations will matter for the exact same reason you said, which is utility will go up. I think Snowflake said on their call tonight that 3,200 of their roughly 10,000 enterprise customers are now using some of their AI products in daily use.
So again, starting to see some of that indication, I noted that Microsoft, which is on a $66 billion run rate at Azure, has said they expect Q1 and Q2 Azure revenue to accelerate. Think about that, Bill. Azure is growing at 34% on a $66 billion base, and they say they expect it to accelerate.
So it's really significant growth on big software businesses. And I think what we've seen in the season is that software certainly has not decelerated. If anything, it's accelerated after a couple years of digestion, I think, around this. But, you know, if I had one takeaway having watched all of this, I would agree with you.
And you've been, you know, rattling my cage about this for a year. And I think you've been proven at least partially right that we're seeing a deceleration in the rate of improvement on pre-training. But my conclusion is I don't think it necessarily matters to the consumption on the NVIDIA side.
Certainly it could be even higher, I suppose, if they were able to produce that many more, which I don't think they can. But I think that they're, you know, still in a very strong position. And I think, you know, for example, OpenAI could double or triple the number of users of ChatGPT without, frankly, making any improvements in the core model.
Because consumers today barely scratch the surface on core model capabilities. Product improvements like voice, memory, and actions, I think, in and of themselves could drive tremendous traction and revenue increases. But I do think they'll still continue to see some of those model improvements. Maybe one area where I know we've actually seen a tremendous return, I think, on the investment made in AI, and we saw some news this week, is national full self-driving regulation, right?
So the Trump administration announced that they want the DOT to develop a national framework to regulate self-driving rather than the state-by-state framework, right? So most states, I think, require certain licensure to be able to do it. Texas and Florida have no restrictions. But at least the indications out of the administration were that they wanted a national policy, which makes sense to me.
I think that's very beneficial to Tesla and to Waymo and to anybody else who's in the full self-driving game. You get to deal with one regulatory authority rather than trying to, you know, deal with the complex nature of 50 of these folks. That happened at about the same time.
Our team's doing a tremendous amount of work around FSD 13, which we expect to roll out in the next few weeks, OK? So you and I did the pod earlier in the year, Bill, on FSD 12.3. That's a big breakthrough as the first imitation learning model really moving away from these deterministic models where we saw a step function in terms of capability.
Elon had said they were going from making 10 percent improvements a year to 10x improvements in a much shorter period of time. So we know about FSD 13. It's the first model they've trained on the Austin supercluster, right? Parameter size is about five times bigger. And what I hear is that Elon's priorities around it are safety, safety, safety, safety, right?
He wants the system to be able to understand different driving modalities in different situations, right? So that you can, you know... the metric which they use is MPCI, miles per critical disengagement, OK? And my partner, Frida, tweeted a couple of weeks ago that at the start of this year, we saw 12.1 come out.
By the middle of this year, when they released 12.5, it showed 100x improvement in miles per critical disengagement. And then with the launch of FSD 13 in the next couple of weeks, we expect another 10x improvement in terms of that MPCI. But here was the crazy stat for me.
So you're already at 1,000x better than we were at the start of this year in terms of miles per critical disengagement. Now, if you do some back of the envelope, which she did in her tweet, and Elon retweeted it, at launch in Q4 of this year, it looks like version 13 will be somewhere around 25,000 miles per critical disengagement.
And we estimate that Waymo is right around that same level, maybe 20,000 miles. So they're already caught up, we think, with Waymo. And at the current rate of improvement, in Q1 of next year, they could be at 100,000 miles per critical disengagement. By the middle of next year, 500,000 miles per critical disengagement.
Now, just to put this in perspective, human drivers self-report an accident roughly every 500,000 miles. So if 2024 was the ChatGPT moment for full self-driving, Bill, where we started seeing Waymos on every street corner in San Francisco, where you and I had our Tesla FSD 12.3 moment, and Michael Dell tweeted about it, then if this was the ChatGPT year, I think next year really is the year of achievement of a safety standard that allows RoboTaxi to go into action, probably in Q2.
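To make that back-of-envelope concrete, here is a minimal sketch that just chains the multipliers quoted in the conversation. The implied starting figure and the assumption that the recent rate of improvement simply continues are illustrative, not Tesla data.

```python
# Rough reproduction of the miles-per-critical-disengagement (MPCI) back-of-envelope
# from the conversation. All figures are illustrative assumptions, not Tesla data.

# If FSD 13 lands around ~25,000 MPCI and represents ~1,000x improvement over 12.1,
# the implied starting point at the beginning of the year was roughly 25 MPCI.
fsd13_mpci = 25_000              # claimed level at launch, Q4 2024
improvement_12_1_to_12_5 = 100   # ~100x from 12.1 to 12.5 (first half of the year)
improvement_12_5_to_13 = 10      # ~10x more with FSD 13
implied_start = fsd13_mpci / (improvement_12_1_to_12_5 * improvement_12_5_to_13)

# Projections quoted in the conversation, compared against the ~500,000-mile
# self-reported human accident benchmark mentioned above.
human_benchmark = 500_000
projections = {"FSD 13 launch (Q4 2024)": 25_000,
               "Q1 2025": 100_000,
               "Mid 2025": 500_000}

print(f"Implied MPCI at start of 2024: ~{implied_start:.0f} miles")
for label, mpci in projections.items():
    print(f"{label}: ~{mpci:,} MPCI ({mpci / human_benchmark:.0%} of human benchmark)")
```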
They'll pick a couple cities. And that doesn't mean that you can pull the steering wheel out of every Tesla, because, you know, Elon has said we're not there yet from a safety perspective. But the way they'll do it for a RoboTaxi is you can put the car on RoboTaxi mode.
So think about it this way. If I'm driving down Sand Hill Road out here, I have an expectation. I'm sitting behind the wheel. I have an expectation as to how fast I'm going to go. And if the Tesla is not going that fast, I'm going to disengage and drive the car, right?
So because I'm sitting behind the wheel. Now imagine I'm in a RoboTaxi. I'm sitting in the back. I'm leaning back. I'm on my phone. I'm watching something. I will tolerate, just like I tolerate in a Waymo, a different speed of driving, right, a different safety standard, if you will.
So I think that's how they get safe enough in a couple markets to be able to drive RoboTaxi, is to just put it in a safer driving mode. You know, and I've watched these cars do it. They modulate modes depending upon the type of streets that they're on already.
So for me, that was paying attention to that regulation combined with the improvements in FSD 13. I think that's going to be big news as we look forward at 2025. There's a couple things I would respond to that I think are going to have to play out. First of all, as of now, Tesla hasn't had cars driving around without drivers picking up people.
And Waymo's got years under their belt doing that, and there's going to be some learnings that come from that. There just are. And so they're going to -- I think I read that they're going to maybe start doing that in the next few months or something like that. But in the states that are allowing Waymo, you have to apply, you have to get approval.
So there's a step that they need to take. And I think there's some natural learning that comes from that. I mean, you can hear how they talk about SpaceX, and there's a similar thing here: until you get the cars out on the road, you don't learn those lessons. Most people believe Waymo still has oversight from a data center somewhere, where people are watching when engagements or disengagements happen so they can take over and pay attention.
And so is that same infrastructure going to be built at Tesla? I don't know. That might be an important thing as well. To the national standard, I unquestionably think it can be helpful. There are states in this fine country of ours that like to override regardless of a national standard, most notably California.
And so that would -- but of course it would make more sense for this to be done nationally. And so that would be great if they stepped in. I had always been a big proponent of open source in this area. There's a lot of stuff that could be beneficial from a safety standpoint if everyone were doing it the exact same way.
And I could even imagine a communication network. The status of a traffic light, for example, should-- there's no need for that to be inferred. I mean, how much compute are we going to use to look at the light and determine whether it's red or green? That's a state machine.
It's either red or green, and that can be known. And so you could, as you build these intelligent systems, start to have the state of different things, which could include, you know, road openings and road closures and, you know, construction work that's going on and traffic and all kind of other things that should be built into the system as well.
So I do think that -- I do think a national kind of standard could be great. It could also be really bad. Like, if you get overregulation and this massive, you know, stuff that -- and the incumbents might just use it to protect themselves. But my own belief is that it's -- you know, you've got Waymo, who's got the most cars running autonomously on the street right now, Tesla, who's got all the advantages you just talked about.
You've got Uber, who's got this huge network of users. I'm less certain who the next player is after those three that's got a strong argument right now. Oh, actually, that is in the U.S. Obviously, China has tons of autonomous driving, you know, products on the street as we speak.
Well, let's talk about that because I think that's an interesting question, you know, kind of a market structure question that I've been thinking a lot about lately, Bill, certainly on the public fund side of our business, right? When I have a disruption of this magnitude in an industry that's a multitrillion-dollar industry that has, you know, 20, 30 different public players in the supply chain, OEMs, et cetera, like, I just start thinking, who are the winners and who are the losers here, right?
Exactly how I was feeling in 2008. You see the iPhone, you know, you've got Nokia, you've got Motorola, you've got BlackBerry, you've got, you know, all this stuff. And you're like, somebody's winning and somebody's losing. I remember my mother didn't understand the concept of shorting, right? In 2009, I told her to buy Apple and to short Nokia, and that was my, you know, kind of canonical example for her, you know, and it worked out okay.
It actually went against her first, but then it worked out okay. Let's just talk about OEMs for a second. I think the consensus view in the world is that all these OEMs, so I'm talking the U.S. OEMs, the Stellantis, the GMs, the Fords, the Europeans, the Volkswagens, the BMWs, the Mercedes, the Japanese, the Toyotas, et cetera, the Chinese, that FSD becomes a commodity and that they all end up with some version of it.
Maybe there's one provider like NVIDIA or, you know, Mobileye or somebody else who's providing this capability into these cars. And, you know, think of that as maybe like an Android version that they're all going to use, this box they're going to plug into their cars, and they're all going to go compete, you know, as they did before, with Tesla.
That's one version of the world. The other version of the world is that this is a tectonic disruption, that Tesla currently sells 2 million of the world's 80 million cars. But in the future, this isn't going to be about buying a car. This is going to be about a mode of transportation and that Tesla could end up restructuring the entire industry, end up being tens of millions of cars.
You may only have two or three or four global players, which, you know, we have in a lot of other industries. And so the question I have for you is, do you, you know, what is your pattern recognition? Do you think that we're going to have all the same OEMs, they're all going to have some system, it's going to be basically commoditized?
Or is this an extinction moment for some of these OEMs? Such a hard question. I mean, there's so many different ways this could play out. One of the things that I question in my mind is who's going to own these vehicles. And so, you know, on one hand, you have the Waymo situation where they own all the cars, but they also know what that means from a CapEx standpoint.
And if you look at the subtleties of the kind of the statements they've made in conjunction with Uber, they're at least going to experiment with some models where maybe they don't have to pay for all the cars or licensing their technology to the other OEMs that we talked about.
And I've heard Dara at Uber talk about, you know, maybe, you know, large debt-financing companies getting behind and owning these things, kind of like the way containers work in a rail yard, right? Like who owns those things? They move around, but someone owns them, right? People lease them.
So that's a big question, you know, and the Tesla argument is that they'll just be owned by the people that own the Teslas, which is great. Tesla doesn't have the CapEx burden everybody else has, because they've talked their customers into buying these cars. But what percentage of people are going to lease out their car every day?
What would you guess? I have no idea what to put on that number. I mean, Airbnb does exist, so it's maybe not zero, but most of us still have our own house, right? So where does it fall? I think, you know, my own hunch on that is that it's going to be a higher percentage, particularly as the price points on the cars fall, right?
Now they're leasing cars for, you know, 200 bucks a month, and as those subscription prices fall, I think you're going to have a whole category of people who don't mind at all, right, earning incremental revenues on their car. I imagine they've done internal survey work on this, and I think if you look at the places where, Bill, they're likely to launch the RoboTaxi, number one, it's going to be in marketplaces where they have sufficient safety, right?
So where they have a density of data that tells them they can put it in RoboTaxi mode and they won't have any problem. I think that could be somewhere like maybe the Bay Area, somewhere like Austin. And then secondly, it's going to be a market where they have a lot of Teslas sitting on the sidelines.
So they come seed it with 25 or 50 Teslas, but in both of these places, they happen to have Tesla factories. There are a lot of people with Teslas, and there are a lot of people who will drop them into the pool. So those will be good test markets, but if I think of a market, Bill, like New York City, right?
New York City seems to me to be a problem because there's massive demand for cars in New York City, but there aren't a lot of Teslas sitting around latent, right, that can be tapped into for supply. So I don't think it will work equally well in all markets, but I think it is a massive advantage.
Remember, I think they may have 7 million cars on the road today, something like that. About 2.5 million of those I think are hardware 4, and it's going to require hardware 4 in order to run all these cars. So, you know, we're really talking about a multi-year undertaking.
But if you said to me over under on, you know, middle of next year where we have evidence of a single market or two where RoboTaxi is working, I think we will. I'm going to take the over on that, and that is a huge difference from where the world was 18 or 24 months ago.
I just really don't know what percentage of people let their car go out. You know, Turo exists. It's tiny, like tiny compared to what we're talking about. It's a nice business, but that's the business of letting somebody else use your car. Everybody who owns a Tesla has the Tesla app on their phone, and all they have to do is toggle it and say, yes, today's the day.
I just think if they make it easy enough, it could surprise us. It comes back, and it smells a little funny. You know, your wife says, honey, I really need you to go out to the drugstore right away. I've got this problem, and you're like, I let the car go out.
She's like, what? Bill, you know exactly what's going to happen here. The arbitrage is the same arbitrage you and I saw in Uber, right? You're going to have enterprising young people. They're going to go lease four of these a month, 200 bucks. I could see that. 200 bucks each, and they're going to be arbing this whole system.
I just think the marketplace will fill up. That makes more sense to me than some majority of humans letting their car go out. I have a harder time with that. And then just to this industry structure question, because I think this is one that's going to play out over the course of the next three, five years.
What are we going to be long, and what are we going to be short? What do we think is going to really happen? I'm skeptical. I'm skeptical. We've spent a lot of time looking at the Chinese manufacturers. Frida spent a lot of time in China. I'm skeptical on China.
They don't have the sensors on the car. They don't have the data. We think models and access to the best chips are going to matter. Certainly, they have advantages over non-Chinese manufacturers, but I'm skeptical on the European manufacturers. I'm skeptical on the Japanese manufacturers. I could be wrong, and I'm mentally flexible.
The industry may end up looking exactly like the industry does today, but I think there is a path where this is that event, that extinction event, tectonic event, that takes us from 20 OEMs down to three or four. And if we sat here in five or six or seven years, and we were on a path to three or four global automakers, it wouldn't surprise me.
Maybe we can have an expert on to talk about the China thing. I've certainly seen videos online that suggest the Waymo equivalents are already working in China. So, that would be a data point. Well, Frida went and rode in every single one of them. I think there were 12, and she took videos in every car and sent them back to the team.
And some of them were totally terrifying, obviously anecdotal. Yeah, everybody from Huawei to Baidu to Didi to BYD, et cetera, has their version of it. By the way, there's another question, I think, which is that both Waymo and Tesla have hinted at potentially licensing. And so, that would be an interesting thing that maybe happens down the road between the historical incumbents and these two new companies, and whether that type of model plays out.
It seems quite likely to me that that's possible. Speaking of Elon. Real quick, one last thing that I forgot to mention on the national framework. It may be important if they're going to put that together. Assuming that this is an issue that the country wants to support and wants to see leadership in, they may need to consider some type of insurance reform and litigation limitations built into the whole thing.
Because I do think that we have this huge problem in America where litigation is so rampant. And I can just imagine some lawyer in front of a jury talking about how the computer killed somebody and trying to extract tens or hundreds of millions of dollars. And that could be a real problem for the industry.
So, if they're going to consider all the other safety stuff, I think they should also think about liability limitations, especially on just arbitrary jury awards and things like that. Well, listen, I think that that's a perfect segue to talking about Doge and what's happening with this tectonic election we've just had.
And to talking about those really big, ambitious objectives. Frankly, we haven't seen a lot of ambitious efforts at reform, I think, over the course of the last few administrations, both Republican and Democrat. The one thing about the incoming administration, it strikes me, is that it has a level of ambition that I don't know that I've seen in my lifetime, Bill.
And one example is the Department of Government Efficiency. So, this is the unofficial government department, run by Vivek and Elon, which is overseeing efforts to reform government spending and government regulations, frankly to reduce the size of the federal government, with the overall objective of getting control of our national deficit and our national debt, which we all agree is pretty egregious.
And one of the videos I saw going around, I saw several people tweeting it, including Elon, was of somebody you and I, I think, have a lot of affection for, Milton Friedman. And this was Friedman opining in a very quick, rapid fire interview of the 14 government agencies that existed at that moment in time, which he would abolish and which he would keep.
And, you know, we'll roll it and put a clip in here. Keep them or abolish them? Department of Agriculture? Abolish. Gone. Department of Commerce? Abolish. Gone. Department of Defense? Keep. Keep it? Department of Education? Abolish. Gone. Energy? Abolish. Except that energy ties in with the military. Well, then we shove it under defense.
The little bit that handles the nuclear, right, that ought to go under defense. The plutonium and so forth goes under defense, but we abolish the rest of it. But, you know, they asked him, Department of Agriculture? Abolish. Commerce? Abolish. Education? Abolish, et cetera. And I think in tweeting that, it kind of sets some boundary conditions, right, of a level of ambition that I don't think many people took seriously. They heard the words, but many people said, no, they're not really going to do that.
They're not really serious about going after that. What do you expect out of the Department of Government Efficiency? And, you know, is the agenda going to be that ambitious or do you think it will hit the wall of some congressional reality and maybe they're just setting the boundary way out there that they can negotiate back from?
Well, first of all, I mean, I think if you extrapolate the growth of government, you know, two or three decades into the future, you know, we would get to a point where such a large percentage of the population works for the government that it would self-implode. And many people, very smart people, have also highlighted that our debt is too large, the U.S. debt is too large, and interest payments are now a huge part of the federal budget.
And how can you possibly solve that if you don't shrink the size of government? Now, prior to the election, the papers said that both candidates were going to spend the same amount. So this kind of new department is, I think, you know, orthogonal to what most of the media thought Trump was going to do.
I would add, I highly recommend everyone watch the two-hour interview between Lex Fridman and Milei from Argentina. I think a lot of this is being provoked by him. He was recently in town and hung out with the Doge team. If you take him at his word for what has been accomplished in Argentina in such a short window of time, six to nine months, it's pretty spectacular.
And maybe it's important for someone to fact check that. But really remarkable what can happen if you improve efficiency. And, you know, things like, he got rid of a ton of regulation around rent control, and housing improved for everyone. You know, and for me, that's like an of course: of course that's what would happen.
And so how do you tear this apart? We could simultaneously, if they're successful, see a reduction in government and an acceleration or proliferation of growth, because you find out government's getting in the way. As I've said on the past two or three podcasts, you know, Governor Shapiro in Pennsylvania keeps getting accolades for moving regulation out of the way.
It's so ridiculous. Like, if that's what you get celebrated for, we should just rewrite regulation and get it off the books. And we know there are states that are more productive and are growing faster right now and are actually increasing their population because people can build things.
They can build factories, they can build, you know, distribution centers and everything can happen. They can build solar farms, you know, a lot faster. And so I do think it's super important. I really encourage people to watch the Milei thing. The part that's most interesting, because I think the easy pushback is, oh, they'll never be able to get it to happen.
Washington's too, you know, ensconced, and you just can't move it, or these people will protect themselves. The methodology that I'm going to assume was Elon's idea is basically to shine a flashlight on the idiocracy that exists. And, you know, transparency can be a hell of a disinfectant.
And just while you and I have been doing this, I took a quick look at Twitter, and Vivek just announced that he's going to pause his own podcast to launch a new podcast with Elon called Dogecast, where they're going to give regular updates on the cost cutting and what they're going to do.
And by the way, they've asked people to send in their favorite example of government waste. They're going to talk about it. And I think when you highlight to the American population really idiotic spend, they're going to get behind this thing. And that's the point here, you know, it comes to this question of can they get it done, right?
Can they get it done? And listen, there have been plenty of presidents, Clinton, Reagan, et cetera, who've talked about getting government spending under control, who've talked about empowering the individual versus the state. And then they meet the Leviathan of Washington. Maybe just a little background here on the executive power as it relates to this.
So there's this constitutional doctrine that we've been talking about in our thread, the doctrine of impoundment. And it's the ability of the executive branch to withhold or delay the spending of money appropriated by Congress, right? And this was a question as to whether or not the executive branch actually had this authority.
Congress passed an act in 1974, the Impoundment Control Act, which restricted the president's ability to withhold funding, saying Congress is the one that allocates funding. If we allocate funding, then the executive branch has to implement the law, has to spend the money. Now, notwithstanding that fact, there was a big episode in Trump's first term.
The Trump administration withheld $391 million in military aid to Ukraine that Congress had appropriated. And some argued that this violated the ICA. Now, why do I bring this up? I bring this up on the one hand because there is going to be a constitutional authority question here. But over the weekend, I talked to several members of Congress, both in the Senate and in the House, on the budget committee.
And I will tell you there is strong agreement, and it wasn't just from Republicans, right? There were people who were trying to figure out how they can support and interact with the Department of Government Efficiency on both sides of the aisle, right? And so I think that notwithstanding the ICA, if they can get Congress to cooperate, then there is no constitutional issue.
And then the final one, Bill, which is the one that I think is the most powerful here, at the end of the day, what we've seen time and time again, the power of Twitter, the power of X, going directly to the people with your message, saying here's what government is standing in the way of.
Here's what we want to do. I think that's overwhelmingly powerful, shaming the government, right, really into making what I think a lot of people will agree are logical cuts. And so I'm bullish. And one of the things I keep hearing in the investment world is lots of people are trying to read between the lines, right?
They're saying, well, what do you think the president really says or what do you think is really going to happen? And I said, stop trying to guess. Just listen to the words. After the election, Trump is literally recording on X, reiterating exactly what he's going to do. People are tweeting about what the Department of Government Efficiency is going to do.
You're going to have these dogecasts. And I think that's exactly what they're going to attempt to do. And I think this is going to be a once in a generation reset in the balance of power between the individual and the state. And frankly, for one, I think it's well needed.
I hope it's successful. Certainly, there'll be some things they get wrong, just like anybody who takes on an ambitious project will get some things wrong. But I think you could just undo those. By the way, one of the things that's already come up is I think two of the largest departments in the country have failed audits for like three to five of the past years.
Yes. If a single company in our public markets fails an audit or has to change auditors, as we recently saw with Supermicro, you get taken to the goddamn woodshed. How about that? Down 90 percent. Yes. It's considered a faux pas of the worst kind. And I've already seen interviews where someone's asking these department heads about these failed audits, and they're saying it's OK.
And so that should not be OK. The dollars are massive. These people work for the citizens of the country. And if they have an audit requirement, they should be meeting it. And I'm shocked to find out that that's true. Well, one thing I wanted to do, I ran a quick exercise because I wanted to just see – I've heard a lot of people say, oh, it's unachievable.
You can't get the budget to balance in four years, which I totally disagree with. So if you look at the federal revenues, they're about $5.2 trillion or just over $5 trillion expected this year. Government spending $7 trillion. So we basically have a $2 trillion deficit. That spending is up dramatically, by the way, from the $4.5 trillion that we spent in 2019.
So in COVID, we just lost our mind and we've never regained our mind. So I just said what's possible here. Like, what would not even require draconian changes? If you just grow revenues from here at 3 to 4 percent over the course of the next four years, Bill, and you cut costs in the first year by 5 percent, second year by 5 percent, third year by 2 percent.
So all that does is get you back to trendline from 2019, as though we had increased spending 3 percent a year from 2019. The budget balances. It's tiny. These are tiny changes in costs. And I shared that with Vivek and Elon, and they both had the same separate and independent reaction, which is, that's not nearly ambitious enough, right?
And, hear, hear, like, there's no reason in my mind that we can't do more. But what's encouraging to me and exciting to me, and I think will be well received by the bond market, is showing a four-year plan just to get us to balance, right? Just like a company does.
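To make that four-year back-of-envelope concrete, here is a minimal sketch using the round numbers from the conversation. The 4 percent revenue growth rate and the year-four spending cut are assumptions, since the conversation only specifies cuts for the first three years.

```python
# Back-of-envelope federal budget path using the round numbers from the conversation.
# Assumptions (not official projections): revenue grows ~4% per year, spending is cut
# 5%, 5%, and 2% in years one through three; the further 2% cut in year four is an
# assumption added here so the four-year path closes to roughly zero.

revenue = 5.2   # trillions, roughly this year's federal revenue
spending = 7.0  # trillions, roughly this year's federal spending
revenue_growth = 0.04
spending_changes = [-0.05, -0.05, -0.02, -0.02]

print(f"Year 0: revenue {revenue:.2f}T, spending {spending:.2f}T, "
      f"deficit {spending - revenue:.2f}T")
for year, change in enumerate(spending_changes, start=1):
    revenue *= 1 + revenue_growth
    spending *= 1 + change
    print(f"Year {year}: revenue {revenue:.2f}T, spending {spending:.2f}T, "
          f"deficit {spending - revenue:.2f}T")
```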
Show me the long-term boundary conditions. Tell me where you're trying to go. And then let's use the power of the bully pulpit, the power of impoundment, to get people in line. I agree. It'll be interesting to watch. And once again, I would encourage people to watch. I'd say watch this because I think Milei is such a fascinating human.
And I'd watch it on YouTube just so you can see his facial expressions, although it's dubbed because they actually did the interview in Spanish. But he's a very unique individual. Like, he was in a rock band. He obviously has studied the history of economics like no other. A disciple of Milton Friedman.
Which comes out in this thing. Like I said, if what he said is true about what's been accomplished, it may be the fastest impact a president of a country has ever had on the trajectory of that country. And so, anyway, I'm going to agree with you. I'm in the optimistic camp.
It's about time. There is zero doubt in my mind that making the government more efficient in our country would be very positive for everyone that lives here. So, maybe just a quick market wrap and then we'll bust out of here. I get a lot of questions. In fact, I'm doing CNBC tomorrow, Bill.
And the question I get right now, like I always do in kind of November, is what about next year? And so, people are beginning to – everybody who's been in the market, we've had a really good year. Lots of our friends have had really good years. So, now we're starting to play a little bit more lockdown poker, tax loss harvest, all the things that one does as you head into December of this year.
And people start thinking about what is the portfolio that I want to start 2025 with? And so, maybe just at a high level, what I would say is I always start with kind of just market backdrop. So, let me give you a sense. At the end of 2022, remember how bad '22 was.
Mike Wilson was saying we're going to have a hard landing. Larry Summers was saying interest rates are going to 7% to 8%. We had seen a tremendous sell-off, almost 30% in the NASDAQ. So, it felt really terrible, but the setup for 2023 was actually pretty good. Just reverting to the mean would be a pretty good year.
Now, we've had two pretty bullish years for technology, but we just had this historic election. So, when I look at '25, here are the things I put on the pro side of the ledger, right? We're going to get lower taxes, right? Not only the extension of the tax cuts that we had 10 years ago, but it looks like we're going to get some additional tax cuts on top of that.
The corporate rate may go even lower, from 21 percent to 15 percent. You may get adjustments down in the capital gains rate. You may see the reintroduction of the SALT deduction. So, a lot of tax stimulus. We're going to get less regulation like you and I just discussed. Rates are headed lower.
They may be going lower, slower than we thought because the economy is doing well, but the Fed has tons of flexibility. Our growth is still high, right? GDP is still ripping. We see these corporate reports coming out. They're doing great. We have balanced employment, right? Employment is not so tight that you can't hire anybody, but our unemployment rate is still at historical lows.
And then we're early in this tech and infrastructure super cycle that you and I started the pod talking about. What's on the negative side? Well, it looks like we may get a lot of tariffs, as we've seen with Howard Lutnick, now proposed for Commerce. He's advocated, as has Trump, that we need to level the playing field and then negotiate from there.
So, leveling the playing field is putting tariffs on a lot of products in a lot of countries that would presumably raise prices. We also see we're going to get reduced government spending. And remember, government spending is a key driver of economic growth. So, you'll put stimulus in by way of tax cuts, but reduce spending.
And then you got all that against the fact that market multiples are near all-time highs, right? So, the S&P is trading at 24 times. I think the NASDAQ is closer to 29 times. So, in my mind, it's not a recipe for an all-in moment, right? Because there's not blood in the streets, right?
We're closer to trumpets in the air than blood in the streets, as Buffett would say. And he says sell when there are trumpets in the air. So, it all comes down to top-line growth and bottom-line growth for next year. And when I look at that, there were two reports today that kind of stick out to me, right?
On the one hand, we were short a little bit of Target because we're worried about tariffs. And we're worried about the retail consumer holding up, especially around these highly levered consumers when it comes to housewares and things that Target has exposure to. They missed their numbers. They took down guidance.
The stock was taken out to the woodshed, down 25%, right? And I saw some data that suggested auto delinquencies are on the rise. So, do you have any concern about the consumer? For sure. I think that, listen, we know that they've spent through all their stimulus money, right? That they're not getting the pay raises and all the overtime that they were a couple years ago.
And we know that when you look at their credit, they're really spending a lot on credit now. And we see that same data on auto delinquencies. And so, if you just go through the things that Trump says he's going to do, right? Imposing tariffs is bad for retailers like Target.
So, that's one side of the equation. Then on the other side, you have somebody like Snowflake or Palo Alto report tonight. And they beat earnings, but expectations are low. Snowflake's up 20% in the after hours. So, just think about that spread trade. Target's down 20%, Snowflake's up 20%. One has super high expectations, super high valuation.
One has super low expectations. And in the case of a company like Snowflake, there's massive room for efficiency, right? They're getting the efficiencies from AI. They don't need to hire at the rate that they were hiring before. And they still just grew 29% in the quarter. So, that's far from the massive deceleration that many people were projecting.
And so, I do think that the beneficiaries in technology, they're both beneficiaries of this top-line growth, but also bottom-line margin expansion. I was looking back in preparation for the pod with Satya on Monday. Since he became CEO of Microsoft, the net income margin has expanded by 1,000 basis points.
Yeah. Right? If you look at the story of Microsoft over the last 10 years, revenue has doubled, but earnings have almost quadrupled because of the expansion of the earnings margin in that business. And that's what I think we're still going to see in technology. And it's going to be accelerated with AI.
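Since earnings are just revenue times net margin, revenue growth and margin expansion multiply together. Here is a tiny sketch of that compounding with purely hypothetical numbers, not Microsoft's actual financials.

```python
# Illustrative only: how revenue growth and net-margin expansion compound into
# earnings growth. The figures below are hypothetical, not Microsoft's actuals.

def earnings(revenue, net_margin):
    return revenue * net_margin

rev_then, margin_then = 100.0, 0.20   # hypothetical starting revenue and margin
rev_now, margin_now = 250.0, 0.32     # revenue up 2.5x, margin up ~1,200 bps

growth = earnings(rev_now, margin_now) / earnings(rev_then, margin_then)
print(f"Revenue grew {rev_now / rev_then:.1f}x, margin expanded "
      f"{(margin_now - margin_then) * 10_000:.0f} bps, earnings grew {growth:.1f}x")
```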
So, I head into next year not all in, but I'm definitely constructive on the backdrop. I just think it's going to be a stock picker's market. It's going to be a bunch of stuff that you want to avoid. But you can find your 5, 10, 20 tech companies that are enjoying the tailwinds of AI and also enjoying the possibility of this margin expansion.
I think they can continue to do reasonably well. So, that's how we're positioned. What I'm hearing you say is you can kind of have your cake and eat it, too, if you pick the right courses here. Like, even if the consumer has some struggles, if you've got your portfolio aligned in the right way, you're going to be able to maneuver through that.
Yeah, another way of saying it, I don't expect a 25% up year for a broad-based index. Yeah. And if you look at it, I don't know where it is as of today. But there was one point in the year, Bill, where if you took the top 10 tech companies out of the S&P, right, the S&P was not up 20, it was actually down 5.
Yeah. Right? And I think that's the type of dispersion that I think we're going to see again next year, that there are going to be a lot of winners, either as the result of changing government regulations, changing government policies, lower taxes, or as the result of this tech super cycle.
But there are going to be a whole lot of losers, right? We just talked about the ones in FSD. It may very well be that this is terrible for a lot of OEMs, but really great for Tesla. That has yet to be seen. I'll tell you, as an investor, and we could leave it on this, the rate of change has never been higher in technology, and the rate of change from this election is massive, right?
The magnitude of the policy change is massive. You combine those two things, and let's just say we've been working overtime. Makes sense to me. One of the things we may want to talk about in the future is whether the Trump administration is going to lean into AI regulation as well, which could have implications for everyone around.
Anyway, thanks so much, man. Great to see you. Good to catch up. Sorry this thing didn't happen, but we'll get it next time. We'll get her done on Monday. All right, man. BG2, out. As a reminder to everybody, just our opinions, not investment advice.