There's a picture you can look up that's kind of disgusting, so people may not want to. But there's this thing called a gavage tube, which is what they use to make foie gras. It's how they force feed the geese to get them just super fat. And that's the image I have in my mind, like are we overfeeding these startups?
Hey, Bill. Great to see you. Good to see you, Brad. Man, that was an amazing pod at Diablo Canyon. The inbound regarding nuclear has been off the charts from literally senior policymakers, senators, and House members on both sides of the aisle. It really feels like the dam has broken.
We half-joked, Bill, that Microsoft-- when we were down there, we discovered that there were four unbuilt nuclear reactors that are all already plotted on the site. And we half-joked that NVIDIA and Microsoft and Oracle could come sponsor these reactors and that they could have a new type of public-private partnership with the government and build data centers right next to them.
And it turns out it wasn't so far out there. I mean, Oracle has announced that they may do some things with small nuclear reactors. Amazon is buying this nuclear-powered Talen data center facility. And now Microsoft this week announces with CEG that they're going to bring Three Mile Island out of retirement.
It's incredible to see the beginnings of what may be a US nuclear renaissance. Certainly, the momentum has been building. The FT article yesterday or today, highlighting that 14 different banks showed up at a climate conference confirming a willingness to invest, is just huge.
And I think there are two things that are big takeaways for me. One, we were talking about one of the limits on SMR and on any new innovation in the space was that utility companies are traditionally very conservative. And I like to think about it in the framework of crossing the chasm.
You basically are selling only to laggards. And that's very difficult, especially for a capital-intensive startup, to be selling only to laggards. And what may have transpired literally in the past month is the hyperscalers-- and this may have started before then, because Amazon did the deal with Talen a little while back.
But if the hyperscalers become part of the customer set for the nuclear startups, that may be 10x better than selling just to utilities alone. You may have brought innovators to the table on the purchasing side that may be more open-minded, that may be more understanding, may be more willing to share risk, which could be very positive for the SMR market.
So that'd be my one big takeaway. And the second one is just that a lot of times, I think people look at big, big problems and think they're insurmountable. And I remember actually in the past two years being at an off-site conference at a think tank where we were talking about climate change.
And about 80% of the way in, someone raised their hand and said, why aren't we talking about nuclear? And all the scientists in the room said, oh, no, we're not going to put that back on the table. That's too far gone. That's past. And it turned out that wasn't true.
It turned out there was an opportunity to get a renaissance in thinking about this. And it started, I think, with people like Steven Pinker, a widely regarded scientist, saying, no, this is our best path out. But then we talked about Patrick Collison, and others kind of jumped on the bandwagon.
And then there were plenty of pro-nuclear advocates that were sticking their neck out. And then Elon gets in the game. And then this data center thing may have been just the impetus you needed to get people over the top. And we were lucky enough to kind of time our thing as this transition was happening.
But it is possible to create kind of wholesale change in how people think about something. But it takes a lot of work by a lot of people, and everyone that kind of stuck their neck out early. Josh Wolfe was another one that was sticking his neck out on this topic.
So I congratulate all of them. And it feels like the momentum's now behind us. And I literally feel bad for the citizens of Germany. One thing that is very apparent is that the easiest thing to do is bring-- well, start with don't decommission any of these things. But second, if any have been decommissioned recently, try and bring them back.
And I hope there are some sane minds in Germany that are watching all this. Because I think the world would benefit from them reversing that decision and running back at this. One of the things I learned as well-- because you and I talked a lot just about how do they underwrite-- how would the hyperscalers underwrite building out those nuclear reactors?
And one of the things I learned after our pod was that these companies that are considering nuclear, they are spending billions of dollars a year on carbon offsets. And you know there's a lot of criticism about these carbon offset markets. But I dug up some data. In 2020, Morgan Stanley estimates that about $2 billion was spent on the carbon offset market.
And by 2030, they expect that to reach $100 billion, driven in part by these large hyperscalers that have to buy massive carbon offsets. Now, if instead you're investing in nuclear clean energy, if the source of the energy that is powering your data centers is clean, then you get to buy fewer of the carbon offsets.
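To make the "pencil out the math" intuition concrete, here is a rough sketch in Python. Every number in it (data center size, utilization, grid carbon intensity, offset price) is a hypothetical assumption for illustration, not a figure from this conversation:

```python
# Rough, illustrative sketch of why clean power shrinks a hyperscaler's offset bill.
# Every parameter below is a hypothetical assumption, not a figure from the conversation.

annual_energy_mwh = 300 * 8760 * 0.70        # a 300 MW data center at ~70% utilization, MWh per year
grid_tco2_per_mwh = 0.4                      # assumed grid carbon intensity, tonnes CO2 per MWh
offset_price_per_tonne = 50                  # assumed offset price, dollars per tonne

def annual_offset_cost(clean_share: float) -> float:
    """Offset cost for the share of energy NOT coming from clean (e.g. nuclear) sources."""
    dirty_mwh = annual_energy_mwh * (1 - clean_share)
    return dirty_mwh * grid_tco2_per_mwh * offset_price_per_tonne

for clean_share in (0.0, 0.5, 1.0):
    print(f"clean share {clean_share:.0%}: ~${annual_offset_cost(clean_share) / 1e6:.0f}M per year in offsets")
# 0% clean: ~$37M/yr, 50%: ~$18M/yr, 100%: $0 -- the avoided offset spend is one
# line item that can help a nuclear power deal pencil out.
```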
So that may make it easier, again, for them to pencil out the math. That may be why we're seeing some of the dam break. I think a huge part of this is just the public consensus. Nobody wants to invest in something that all your customers are against. And we know, we've shown the data here, that this is now popular again among consumers because they understand it's clean, it's carbon-free.
The other data point that broke since we did that was the Three Mile Island restart. I think there were rumors of it before we did the podcast with CEG, with Constellation Energy Group. And there were quotes in these articles about a survey of Pennsylvania residents, and they were supportive of the restart.
And what that says to me-- and look, this is knowing that the customer is really Microsoft and not the citizenry. And I was just shocked by that. And so we all know that one of the reasons this happened was that there was an irrational public response to the perceived risks of these solutions.
And it is super unfortunate that that takes so long to heal. But time is the best way to get past something like that. And it's been a long time. And I think people have a lot more data. We're not there yet. We need to keep the pressure on. We said we'd like to see Gavin Newsom extend Diablo by another 15 years.
That's on his desk right now. That facility has at least another 40 years left in it. So I think we all need to keep the pressure on. But the nice thing is there's good bipartisan support. People view nuclear as not only a matter of climate security, but now it's a matter of national security because it's the primitive to all of AI.
And so I'm excited for the momentum. But speaking of the exploding need for more baseload power to feed the AI beast, let's talk about our first topic and where things currently stand in AI. And I think since we last talked, people have continued to climb this wall of worry about whether AI is in a bubble or not.
Of course, last week, OpenAI was rumored or is now well-known to be raising capital at a $150 billion valuation, or at least that's the Bloomberg headline. I can confirm that Altimeter is talking with the company. So of course, there are some things I can share and things I can't share. But you had Kevin Scott, the CTO of Microsoft, say demand for AI infrastructure is materially outpacing our ability to supply it, even as we are building at a pace unseen.
Jensen said at the Goldman Sachs conference that they will be under-supplied not only this year, but for a while to come. So where do you come down on this, Bill? Let's just start with the demand for training and inference and data centers and power. Do you think we're headed for a glut?
Before I answer that question, which we've talked about many times before, I will state that since two podcasts ago, 'cause we focused on Diablo and didn't talk about this, so you go back, I guess, probably four or five weeks, if anything, the balance of enthusiasm and willingness of individuals to commit capital has gone up, not down.
If you go back maybe a year, I don't know that Oracle was really in this discussion from a spend standpoint, right, on scaling things out. And clearly they are now. They're at the table. They want to be considered like one of the hyperscalers. So in addition to Oracle kind of stepping up and being a big player, we get an announcement that BlackRock and Microsoft are teaming up to raise a fund that would be 30 to a hundred billion dollars just to finance data centers.
They also mentioned energy. But Microsoft was already building their own and already supporting CoreWeave. And so this is just more and more commitment to rolling out more infrastructure. I think the Middle East is involved in this announcement as well. And there's been a lot of talk of, and there are a lot of companies like G42, that are pushing for even more spend in the Middle East.
So everything screams even more demand. You and I have argued or talked about whether this is really supply or demand, but presumably these people aren't acting irrationally. So since we talked last, I would say the world's gotten more enthusiastic about AI than it was five or six weeks ago.
Right, right. And I agree with you. I think you said something really important, which is when you're planning out three to five years and we're talking about tens of billions of dollars, one has to assume that Satya, Jensen and Oracle and then Amazon, et cetera, that what they're seeing justifies the spend.
And one of the things driving the debate over that demand, Bill, is all these new models that have been popping out. We finally saw Strawberry, the O1 preview model, a whole new vector of scaling around inference-time reasoning out of OpenAI, which is exciting. There's a new Anthropic Opus 3.5 model that's expected to drop this week.
MetaConnect is in a couple of days, I'll be there. And Mark Zuckerberg is expected to announce several new models there, smaller models and larger models. And so despite the improvements in model training efficiency, the aggregate training and the velocity seems to continue to grind a lot higher. We were lucky to have Noam Brown, who's the inventor and the leader of the O1 preview.
So remember, we talked about Noam before. He was at Meta and did Pluribus and Libratus, and won the game of six-handed poker. He spent a lot of time thinking about this inference-time reasoning. And we're really early in this journey, but it's already kind of a big wow, right? This idea that in addition to pre-training, we're now going to allow models to reason and think as part of the response process to the prompt.
I don't know if you looked at any of these tweets out of Noam Brown or others about the O1 preview, but just curious if you had any thoughts or reaction. I know you were blown away by the voice model. It's been slow in dropping, but I don't know if you've played with the O1 preview.
Well, yeah, and on the voice thing, I think they may have been focusing on their enterprise customers. I've talked to some enterprise customers who have that product in-house and they're very excited about it. And let me just say quickly on voice, one thought I had when you think about it being delivered as an API and not just as a consumer product, there is this question is, does the input to the computer become voice for the first time?
Microsoft was talking about that maybe two decades ago and Gates was super excited about it. Yeah, Tellme, we were big in Tellme, but if it gets so good, this could become the way you talk to your computer. And another thing I would say is talking to websites.
Like, could you imagine you go to a website like Kayak and you just start talking? That'll be a radical new dimension for how we use our computers. So I'm excited to see how that plays out. On the Strawberry release, the one thing that I think is important for people to realize is the graphs that were shown, and we can certainly put some links in the show notes, but I think everyone's seen it.
The axis was logarithmic. The implication being, in order to get linear improvement, you have to do maybe 10X the amount of processing. And this is all inference. So what are the implications of that? Well, one, you have to figure out what problem cases are good scenarios for being willing to spend 10X or 100X as much on inference to get to a better solution.
I don't think it's all of them, but many people believe it's a lot of them. And then the second thing that comes out of that is if there are a lot of them that are willing to pay 10 to 100X on inference to get linear improvements, then the percentage of compute, and you need to run math models I don't have in front of me now, but maybe the dollars of compute are gonna move more towards inference than training as we move forward.
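A rough way to see that logarithmic point, with made-up constants; only the shape of the curve (score linear in the log of compute) is taken from the charts being described:

```python
import math

# Illustrative only: assume a benchmark score rises linearly with log10 of inference compute.
# The constants are made up; only the "log axis" shape comes from the charts discussed above.
base, slope = 40.0, 10.0

def score(compute_multiple: float) -> float:
    return base + slope * math.log10(compute_multiple)

for c in (1, 10, 100, 1000):
    print(f"{c:>5}x inference compute -> score {score(c):.0f}")
# 1x -> 40, 10x -> 50, 100x -> 60, 1000x -> 70: every linear step of improvement
# costs roughly 10x more inference, which is why compute dollars can shift
# from training toward inference over time.
```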
- I think there's no doubt. So there's a video of Jensen out there that we'll put in here. - One of the things that Sam introduced recently, the reasoning capability of these AIs are gonna be so much smarter, but it's gonna require so much more computation. And so whereas each one of the prompts today into ChatGPT is a one pass, in the future it's going to be hundreds of passes inside.
It's gonna be reasoning. It's gonna be doing reinforcement learning. It's gonna be trying to figure out how to create a better answer, reason a better answer for you. - When you use a model like Strawberry, you're likely to see 100X more inference, right? Because rather than single shot prompting, there's a lot of recycling that's going on as part of the reasoning process.
Just mathematically, we know that's going to lead to a massive explosion in inference. Now, if you look at these GB200s that NVIDIA is selling, the cost or the improvement in inference, I mean, NVIDIA says it's a 50X improvement in inference. Other people say it's a 3X improvement in inference, but there's clearly a lot of focus on inference.
I think the world is inference constrained. I think part of the reason for that is that you have new models emerging like this that are gonna have machines talking to machines, lots of inference going on in the background. Of course, you have companies like Groq and Cerebras that are bringing really fast inferencing to the table.
But again, I think we're talking about many orders of magnitude increase to the amount of inference, which is going to be needed in the future. And if you're building a data center, think about this, Bill. If I'm making a $10 billion investment in a 300-megawatt data center, I wanna be able to use that for both training and for inference when my training run is not happening.
So when you look at the total cost of operation in one of these data centers, you're gonna see a lot more activity. And my hunch, and they've all said it, is that part of the reason voice hasn't dropped, Bill, is that OpenAI and Microsoft and others are inference-constrained at the moment in terms of the demand on these systems.
I think the systems will also get more intelligent, where they'll route the request to the simplest model that can answer the particular question. So you don't need O1 for really basic questions. Those you might be able to route, in fact, to a GPT-3-like model. But having intelligent layering of these models, an ensemble of models put together so that you get the answers in the fastest amount of time, and it's gonna be different models for different sorts of questions.
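As a purely hypothetical illustration of that routing idea, a simple router might look like the sketch below. The model names, cost tiers, and difficulty heuristic are invented placeholders, not anyone's production system:

```python
from dataclasses import dataclass

@dataclass
class Model:
    name: str
    relative_cost: float  # hypothetical relative cost per query

# Placeholder tiers; the names, costs, and heuristic are invented for illustration.
CHEAP = Model("small-fast-model", 1)
MID = Model("general-model", 10)
REASONER = Model("reasoning-model", 100)  # pays for many internal passes

def estimate_difficulty(prompt: str) -> float:
    """Toy heuristic: longer prompts and math/code keywords look 'harder'."""
    score = min(len(prompt) / 500, 1.0)
    if any(k in prompt.lower() for k in ("prove", "step by step", "debug", "optimize")):
        score += 0.5
    return min(score, 1.0)

def route(prompt: str) -> Model:
    d = estimate_difficulty(prompt)
    if d < 0.3:
        return CHEAP      # basic questions: no need for heavy reasoning
    if d < 0.6:
        return MID
    return REASONER       # multi-step problems can justify 10-100x the inference cost

print(route("What's the capital of France?").name)  # small-fast-model
print(route("Prove this sort is O(n log n), step by step, and debug my implementation.").name)  # reasoning-model
```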
Yeah, and whether or not the engine can interpret which of those it is will be important. I would go back and highlight that even in the announcements from OpenAI on Strawberry, they admit or they disclose or qualify that there are many instances where the extra iteration led to worse results.
So you really do need to be able to figure out the type of problem and whether or not you're gonna get improvement from that effort. And I personally don't think we know enough yet to know which problems fit in there or not. But I do think people are super excited about what's possible on that front.
My biggest thing about it is like, listen, we've been talking about bigger models, more parameters, all of this. That's basically been the exclusive vector of conversation for scaling intelligence. This is a totally new vector. And now you have the compounding benefit, two different ways to scale intelligence that I think is super exciting.
And just real quick, I had a second thing before you move forward. To restate: when you use the phrase inference-constrained, that may be a financial problem too. Like it may be super expensive to run advanced voice relative to what you're charging for it. And so especially when you talk about a logarithmic increase in spend, I think these companies develop these breakthroughs and they're eager to share them with the world.
And so they put them out there maybe in a freemium or maybe in a kind of early test thing. But if some of them do have much higher underlying costs, we do need to figure out, what are the business models for these things? How much are people willing to pay?
I've heard people on other podcasts say, well, for a perfect assistant, I might pay 10 grand a year, but no one has that product on the market right now. And so I think there's a lot of experimentation with business models that's gonna have to happen as well. I couldn't agree more.
I think there's gonna have to be, I mean, think about this. There's gonna be massive price discrimination. You can't charge in the Philippines, what you're gonna be able to charge in the United States. You can't charge to the tail end, what you're gonna be able to charge to the head end.
But one thing that is true is it looks like OpenAI has something like 200 million weekly active users. That's a number widely reported. It's a huge, huge number with little to no advertising. And it seems to me, this is really this benefit of going first, Bill. Billions of free ad impressions.
It continues to grow. And so I asked the team to take a look at the time to reach a hundred million users, and you could see this chart: it took ChatGPT a fraction of the time that it took YouTube or Instagram or Facebook to get to the same place.
It took Facebook about five years. Now that obviously has continued to go up. Weekly active users are 200 million. It's pretty extraordinary. And even by country, if you look at the spread of ChatGPT, it's clearly spreading on a global basis. And then finally this tweet by Vivek Goyal on my team, it shows ChatGPT just beginning to run away with it; Gemini, Meta AI, Claude really are not even keeping up.
So just if you set aside valuation for a second here, Bill, why do you think the game on the field as to consumer has changed so much? Is ChatGPT now in a flywheel? Have they broken out of the pack? Do you think they're gonna be the winner in consumer AI?
A couple of things that we've talked about in the past. So I think everything you said is true. I do think voice and memory are areas where you could really run. And so people are super excited about advanced voice. People that have it love it. Especially when I'm driving in a car, I'll have long conversations with ChatGPT.
And if the advanced mode makes that even easier, I think that's very, very reinforcing. And anyone that wants to compete would need to catch up on that front fast. And then the second thing is memory. And I would just say, based on all the tools that are out there, they appear to be experimenting with it more than others.
And we've talked about this over and over, but you can go into ChatGPT and look at what it's remembered on your behalf. I think that's big, really big, which would be another vector for them to break through on. You know, another one I feel like is worth mentioning is Sam Altman just continues to do extraordinary things.
Just surviving the whole board thing was something most humans couldn't do. He seems to have remarkable touch in Washington and access, and regulation appears to be coming at us fast and furious. And I've often said that could be used to help reinforce lock-in. And him having that access and control is super valuable.
And we continue to just hear about new initiatives or new programs. You know, he's traveling around the globe. He's got everyone's ear. And he appears to be remarkably ambitious and successful at what he's trying to convey, and at talking people into doing things they wouldn't do for any other partner, you know?
- Yeah, it's pretty extraordinary, the pace and velocity. And frankly, we see that on the team side as well. Just extraordinary team. The best people continue to appear to go there. You know, everybody asked me a year ago, they said, "Oh, Gemini's coming. Gemini's gonna go first." But the fact of the matter is, right through the chaos of the moment, Bill, everybody's responding to them.
They launched Strawberry, the O1 preview, first. They launched Advanced Voice first. So you gotta give them some credit for that. But when it comes to the 200 million weekly actives that are reported out there, I would also make this argument, you know, and the team's made a chart on this, and I'm really curious of your thoughts.
So it appears to me that OpenAI is seeing more and more of a network effect as well as scale advantages, right? So, you know, you've talked a lot about network effects, but there's been debate as to whether or not they exist here. But here's the argument that I would make.
On the network effects side, it seems like more users is leading to better data, the data coming from the interactions with those users. And that's leading to better models and cheaper models, because you can do more of the work in post-training, which then leads to more users, right? And so, you know, here's the chart that we made on it.
Do you buy the network effects argument, that flywheel? Because if that's in place, then it, to me, explains why we're seeing them break away from the pack when it comes to consumer AI. - Yeah, I don't know. I mean, I hate for that to be my answer, but I don't know the material impact of the users translating into data.
You've likely seen, I've seen, you know, occasionally, maybe one in 20 prompts that I do into OpenAI, I'll get two results, and it'll ask me which one I like better. So that's the kind of thing you're talking about. And I just don't know if that makes the model 10% better, 20, or 50, or 100.
There's certainly data that suggests the other models are right on their heels if you only look at test scores and, you know, the type of benchmarks. So that would suggest this isn't true. You know, the memory side, you know, the switching costs go through the roof if you get that right.
- For sure, for sure. - Now, the one thing we've talked about that is also at play is that the memory you'd really like to have is on email and chat and all the data sources that already exist in your life. And how open AI would get inside of those systems is less clear to me.
- Yep. - It's not impossible, it's just less clear. And that's where Microsoft and Google have some advantages and maybe Apple as well. And so it'll be fun to watch that fight and how those sources of data, because I think that's where you get the real lock-in. If I have an AI partner where I can simply say, "Who did I send that email to?" Like, that's really, really powerful.
And I think the switching costs are insurmountable if someone gets to that place first. - The data on the field, and I'm just looking at the data on the field, I'm looking at the number of users and Meta AI, et cetera. It looks to me like among the new consumer entrants, and I like the guys a lot at Perplexity, as you know, and I know lots of people like that product, but just from a usage perspective, among the new entrants who have the capital, who have the surface area to compete, it looks to me like ChatGPT has now clearly broken away.
And there's gonna be a game between them. Meta, I think, is probably in the second best position. Google certainly can't be counted out. Obviously, Satya has consumer Copilot with Mustafa there, but it's really interesting to see that game. The other vector here that's interesting is, you know, there's a lot of rumors out there about OpenAI's revenue, Bill, you know, and we'll post some of these, you know, four or five billion in revenue- - That's what I've heard, yeah.
- Growing at over 100% a year. So I'm just intrigued by that. If that trajectory were to continue, right, that'd give you like roughly 10 billion next year. And the round's rumored to be at $150 billion. So that's about 15 times forward revenue. So I asked the team two things.
I asked them to compare that to other companies, namely Google and Meta, both in terms of the pace to get to five billion in revenue and the valuations once they got there. And so this first chart just shows that OpenAI was able to get there in roughly two years from the launch of ChatGPT in November of '22.
You know, it took Google about two or three more years than that to get there. It took Meta almost six or seven years to get there. And what was interesting, so they got there a lot faster to five billion. We can agree on that. But then I asked, what were the multiples at that point in time?
Because I remember when I bought the Google IPO, everybody said it's overpriced. When Microsoft invested in Meta, everybody said it's overpriced. But what's interesting is Google IPO-ed in 2004 at about 10 times forward revenue, right? Microsoft invested in Meta in 2007 at about 50 times revenue. And then Meta IPO-ed in 2012 at about 13 times revenue.
And now again, if all these rumors are correct, you're talking 15 times forward revenue. So it's basically in a valuation zip code that, if you accept the trajectory, is similar to those other companies. And I know you have some real thoughts about margin here and whether or not the quality of those revenues is the same.
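Spelling out that arithmetic with the rumored numbers just discussed (none of which are confirmed financials):

```python
# Back-of-envelope using only the rough, rumored numbers mentioned here; nothing is confirmed.
revenue_now = 5e9        # ~$4-5B rumored current revenue
growth = 1.00            # rumored ~100% year-over-year growth
valuation = 150e9        # rumored round valuation

forward_revenue = revenue_now * (1 + growth)      # ~$10B next year if the trajectory holds
forward_multiple = valuation / forward_revenue    # ~15x forward revenue

print(f"forward revenue ~${forward_revenue / 1e9:.0f}B, multiple ~{forward_multiple:.0f}x")

# Reference points cited in the conversation, for context:
#   Google IPO (2004)          ~10x forward revenue
#   Microsoft into Meta (2007) ~50x revenue
#   Meta IPO (2012)            ~13x revenue
```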
So I thought I'd just throw that out there and ask you. Yeah, well, look, I think your analysis is exactly correct. And the only area of risk is what you just said. And I wrote a blog post years ago called "All Revenue Is Not Created Equal," which we could put a link in for people who wanna look at it.
But I think the one question I would have in this case, which is a data point I don't have, is gross margin. And there, everything we talked about, you know, the high cost of maybe the GPU usage to get advanced voice, right? Like there's a chance that OpenAI's, or anyone else in this field's, gross margins are more in the 10% or 20% range versus the 57% and 81% that you have here in your charts.
And that would be the one thing that might trip it up. And how those scale over time is tied exactly to all the things that we just talked about in the pricing model and the business model. So yeah, I think you could come to the conclusion you just made, but still have exposure in this one area.
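One hedged way to see why that gross margin question matters: at the same revenue, the gross profit that ultimately has to support the valuation swings several-fold. The margin levels below just echo the ranges mentioned here, and the revenue figure is the rumored one:

```python
# Illustrative sensitivity: same revenue, very different gross profit depending on margin.
# The margin levels echo the 10-20% vs 57% and 81% ranges mentioned above; the revenue
# figure is the rumored ~$10B forward number, not a confirmed financial.
forward_revenue = 10e9

for margin in (0.10, 0.20, 0.57, 0.81):
    gross_profit = forward_revenue * margin
    print(f"{margin:.0%} gross margin -> ~${gross_profit / 1e9:.1f}B gross profit")
# 10% -> $1.0B versus 81% -> $8.1B: an 8x swing in the profit pool the same revenue
# supports, which is the "all revenue is not created equal" point.
```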
Yeah, and I think, you know, I think it's such an important point to make, right? Like when you're investing in a company, you gotta get the top line. Like who's gonna be the winner? And then you need to be able to forecast that top line, but that's not ultimately what drives valuation, right?
What drives valuation as we've often talked about here is the future cash flows that those revenues can produce. And there's a real question on the table here that you've articulated well, which is, you know, is there going to be a layer, a tax here, right? That Nvidia and the cost of inference and the cost of training imposes in perpetuity on these companies such that it's going to always be a less profitable business than a Meta or Google on the consumer side, or an AWS and, you know, and a Google Cloud or Microsoft Azure on the enterprise side.
I certainly think you're right that at the start of the super cycle, like if you go back to the start of AWS, remember the debates then, Bill, in 2009, 2010, can it ever make money? Can it ever make money, right? Because they had to get to massive scale and the cost of delivering that scale had to come down.
So you're betting on two things, I think, with OpenAI with respect to margins. The first thing you're betting on is that they can get to scale because this is clearly a scale business. The second thing you're betting on is you have to believe that the cost of inference is going to come down meaningfully over time, and that the cost of training will come down meaningfully over time.
Now, we already know the cost of inference has come down by over 90% over the course of the last 18 months. And our friend, Sonny, over the weekend, when we were in our group chat, you know, said he expects it to come down by another 90% over the course of the next several years.
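Putting those two claims together as a toy sketch, using only the round numbers from this conversation (90% cost declines, roughly 100x the inference per reasoning-style query); this is an illustration of the offsetting forces, not a forecast:

```python
# Rough sketch combining the two round numbers used above: ~90% cost declines per step,
# against reasoning-style queries that may use ~100x the inference. Illustrative only.
cost_per_unit = 1.0      # normalize today's cost per unit of inference to 1.0
units_per_query = 1.0    # normalize today's inference per query to 1.0

steps = [
    ("one ~90% cost decline", 0.10, 1.0),
    ("shift to reasoning-style queries (~100x passes)", 1.0, 100.0),
    ("another ~90% cost decline", 0.10, 1.0),
]

for label, cost_mult, usage_mult in steps:
    cost_per_unit *= cost_mult
    units_per_query *= usage_mult
    print(f"{label}: cost per query ~{cost_per_unit * units_per_query:.2f}x today's")
# 0.10x, then 10x, then back to ~1.0x: cheaper inference and hungrier models roughly
# offset each other in this toy example, which is why margins hinge on which curve wins.
```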
So, but those are the types of things you're going to need to be true in order to have a margin structure that is consistent with those legendary businesses like Google and Meta. - Yeah, and to be fair, your analysis had Google and Meta, but when you make AWS the comparison, that suggests Amazon might be a better proxy, which, you know, trades at three times.
- Certainly for the enterprise side of OpenAI's business, I think the comp would much more be AWS, but on the consumer side of their business, I think the comp is fairer to be somebody like Google or Meta, but in both instances, you have to assume that the cost of delivering, right?
Let's be clear, AI and AI inference is a much, much more compute-intensive activity than retrieval, which was the business of search, right? And so, like, we just have to see technology ultimately drive that cost down, or there's going to be a higher tax and it will be a lower margin business.
It doesn't mean that it won't be a great business or even a good return, but, you know, to achieve those margins, you got to see the cost of, you know, delivering the product go down. - You know, there've been statements along this journey from both Jensen and Sam Altman that the input to this thing is compute and you're going to need tokens.
And those sound like variable cost inputs. That's, you know, and I guess in the worst case scenario, it's like an airline where fuel costs are just, you know, a big part of what drives the incremental profitability. And so the thing I don't know, and so I'm not suggesting this is absolutely true, is, is an AI business inherently a 20% margin business?
You know, AWS is at 30% and Amazon commerce was at 5% until they added, you know, advertising. Or can it be like Google or Meta? And I think until one of these things gets public and we can look at the data in a little more detail, we don't know.
- Well, it's going to be interesting to watch it unfold, but, you know, there's certainly a related topic, one that I know you're amped up about, and are probably even more amped up about given this rumored OpenAI deal, which is how the venture model is changing and whether these structural changes are good or bad, whether they're good or bad for LPs, whether they're good or bad for GPs and founders.
So why don't you lead us in a discussion on that topic, you know, on the challenges to venture today? - Yeah, and two things that I would encourage people to check out: our friends at All In talked about this a little bit on their pod last Friday. And then at their conference, Thomas Laffont of Coatue had a long presentation that I think is worth everyone looking at.
That kind of sets this up. But despite our massive enthusiasm for AI, and I'd say the entire community's enthusiasm, you know, we are at a seemingly problematic place in the venture capital industry with regard to how much cash is coming out of the system versus how much is going in.
And everyone seems to be hyper aware that it's a historically low number of IPOs. It's in the single digits, where the average, even in subpar years, has been closer to 70 or 80. M&A has had quite a hiatus, partially driven by the restrictions on the Magnificent 7, although they're finding ways around that.
And so, you know, what's going wrong? You know, the capital markets seem to be doing just fine in terms of how well the S&P is performing. And so why isn't this happening? And so I would offer a couple of thoughts. One, I think everyone now believes in power laws, network effects, scaling laws, that kind of thing.
I think all the investors do. When I entered the venture industry, I think it was a competitive advantage to believe in them when people didn't, and you could, you know, find a way to take advantage of that and make money. But today I think everyone gets it. And so the other thing that's happened, I believe, is many investors have decided late stage investing is better than early stage.
And I'm primarily talking about the venture firms that have gone from being mostly early stage, traditional venture to having 10 billion or more, you know, AUM and willing to write checks in the hundreds of millions of dollars, which didn't happen a decade ago, right? And for those types of firms, you know, the management fee is on a much bigger base. You get the same percentage whether you're deploying it at $5 million a piece or $200 million a piece, and you get way more dollars deployed.
You don't take board seats, so the work's less. And the fees are massively bigger. And I think that for reasons that are just competitive, our whole world's kind of felt this gravity pulling them to that place. And despite the fact that we had this mini correction, I call it mini 'cause that's what it feels like now that AI kind of just brought the sunlight out again, these firms have not had problems raising those dollars.
And so despite the fact that a large number of the unicorns are still private from the previous investment cycle, this kind of behavior continues. And all of these firms wanna be in the hottest deals. You know this, you're on this field, you're a participant in this world. If there's an interesting company out there, it's very likely that they're gonna be approached preemptively and told to take more money.
And so until this changes, I think you're gonna have very few companies that are considered to be doing well that aren't asked by the industry to raise $500 million or more. And that in and of itself is very unusual compared to the traditional venture model from years ago. I'm almost done. It's just super unusual because this thing is so competitive and everyone is trying to get in the hottest deals.
The best way to achieve that is to be founder friendly. And I think we talked about the profile that Thrive had, I think on the cover of Fortune. I would encourage people to read that, 'cause it was almost, I would call it, PR perfection for Josh and his team, with people vouching for them being founder friendly.
And if you're gonna be founder friendly and write big checks, guess what? You're gonna be supportive of founder secondary and you're gonna be supportive of broad-based employee secondary. When you do those things, you are taking away probably the strongest motivating factor that pushed people to go public, that pushed founders and their teams to wanna be public, which was liquidity.
And so with that off the table, for me, there's no surprise that IPOs are not happening, because there's no incentive for them to go out. And it's a weird place for me when I look at the venture industry writ large, which is: what's gonna drive people to go public?
How do large institutions get liquidity? I don't think large institutions can realistically get liquidity in the secondary market, certainly not at a good price. But in addition to all those things, one thing I would raise provocatively is: does overfeeding these companies with cash lead to non-optimal execution?
And you and I were deeply involved in the Uber situation, but when you start losing a billion dollars a year, or even I would say $20 million a month, you're very far away from profitability. And we talk a lot about focus and constraints and how that leads to better decision-making.
That's hard to do when you're spending 20 million a month. And the other thing that's hard to do is raise $500 million and not spend it. And so I do propose the question that maybe one of the things that's a problem with that previous generation of unicorns is they were overfed.
There's a picture you can look up that's kind of disgusting, so people may not want to, but there's this thing called a gavage tube, which is what they use to make foie gras. It's how they force feed the geese to get 'em just super fat. And that's the image I have in my mind.
Like, are we overfeeding these startups? And then they get so far away from profitability, they're spending on projects that if they were trying to get to profitability, they wouldn't spend on that are lower return. And then maybe they get stuck. This is my last point. When I entered the venture business, one of the things I thought was an advantage I had coming from Wall Street is I knew what Wall Street wanted.
They were the customer for the venture capital company that would eventually IPO and trade in their markets. And there's an interesting dichotomy right now. If you look at the public markets, which have become much more sane, you would know this better than me, relative to where they were three years ago, there's a high expectation for profitability.
And so I think there's this incredible mismatch between what Wall Street wants to see and the state that a company is forced to be in as a result of this hyper competitive investment market. Yeah, no, I mean, listen, I think that it's a great analysis and framing really of the issue.
And you bring up a lot of great questions and I think the right concerns. So let's try to break them down, because I think they fall into roughly three buckets. Let's start first with the question of more dollars, bigger funds, more competition and higher valuations. There's no doubt that VC has grown from 100 billion to 300 billion over the last 10 years.
If you set aside the COVID period, where we all know because of ZIRP, public markets lost their minds, venture markets went to high levels, we're back to kind of this $300 billion level, which was pre-COVID. And so while we call all of this venture, Bill, one of the big differences I have here, and I've said many times, is that the venture market really hasn't grown that much.
Much of the investment that we're counting as venture, when you look at all these data sources that we pull, is going into companies that are higher than $10 billion in valuation, with huge revenues, that would have before been captured by public market investors. And so I think it's important as an industry that we start thinking about these things as different.
I call them internally and to our LPs, I call those quasi-public companies, companies like Databricks, Stripe, OpenAI. I think it's silly to call them venture at this stage when they have 5 billion in revenues growing 100% a year. So yes, I would say this, the late stage quasi-public market is much more competitive, just like the public market is more competitive, because it leads to better price discovery, but it also means that there's less arbitrage and returns are more dependent upon long-term compounding than some misinformation in the market, right?
And as we saw in '20 and '21, the public market's corrected. In fact, a lot of the IPOs that happened during that period are still down over 50%, right? So the public market's corrected, it just did it quicker. In the quasi-public markets, we've seen a lot of these companies shut down, sell, or still sit more than 50% below the high.
So I don't think there's a lot of difference there. And when I look at the early stage venture markets, Bill, I would agree with you, there's a lot of excitement around AI, but outside of AI, you look at Series A follow-on rounds or Series B follow-on rounds, I mean, they're down dramatically.
If you look at the number of first-time funds that are getting funded, or second-time funds, those are down dramatically. So I see a lot of reversion to the mean happening rather than structural change. I think the big structural change that this data leads me to conclude is this: because of the regulatory burdens of going public, because of the change in Silicon Valley sentiment, because of the ability to get liquid in secondary transactions, because of the liquidity of this late-stage quasi-public market, where institutions like Altimeter or the Coatues of the world or the Fidelitys or Thrives are here to provide that liquidity, I agree with you that there's a lot more money there, because those companies are choosing to stay private. But we should think about these and compare them to their public company competitors, not to what's happening in the Series A market.
Two things I would highlight. One, when you call it quasi-public, I think you're talking about it primarily from an input point of view. In other words, it looks public relative to how Altimeter would invest or other late-stage players would invest. But, and maybe this is where the word quasi comes in, if you think about it from an output perspective, there's no liquidity for anybody.
So you've taken a portion of the market that used to serve multiple purposes, and now one of those purposes is gone, if we're saying there's a permanent shift from one place to the other. Two, there are these regulatory things that come up, because many people believe that one of the SEC's goals is to make sure all investors can participate in that.
Now, that's off the table. I mean, listen, listen. I've said publicly many, many times, we should get these companies public faster. I wanna see Stripe public. I wanna see Databricks public. I think it's better for the companies. I think it's better for the investing public writ large. I'm just trying to explain the game on the field and- - I understand.
- Go ahead. - And that's the third point I was gonna make is quasi public isn't the same as public in terms of how the public markets might shape the motivation and execution of the team. - 100%. - We go back to the meta example where they went public, stock went down, Wall Street says, "You're not ready for mobile." You know, and Zuck later said, "Shit, that actually kicked me into gear." Those things don't exist when this isn't here.
And I agree with you. You know, this is playing the game on the field and I'm not blaming anyone. I'm just highlighting this is where we've matriculated to. You know, this is where we stand today. - I think it's right. So let's move to the second big point, which is this liquidity IPOs.
And I think it is true the number of IPOs has been anemic, and the exit amount in venture is now at about $100 billion a year. While down a ton from the $700 billion ZIRP peak in 2021, you know, which was really a one-time COVID high, we're basically back at the same exit level for venture that we were at pre-COVID.
Lots of people talk about the zombie corns, right? We have a thousand companies that were unicorns. A lot of those will never get back there. I've said 80% of those companies will never get back there. Those companies need to get, you know, merged into other companies, need to get sold, need to get shut down, whatever the case may be, or do down round IPOs like Instacart did, which is now off to the races under some great leadership.
But I look at just our pipeline build, just to give you a counter example here. I think we may have four IPOs in the pipeline in the next four to six months. The rumors out there around companies like Cerebras and CoreWeave and Databricks, those are all in our portfolios.
And on top of that, you know, we recently sold Tabular to Databricks, you know, at a price rumored to be $2 billion. So exits are increasing. Interest rates are coming down, which I think will lead to more of that. The world is healing. And so again, I'm not so sure. I do think there are some things that are structural.
The regulatory things are structural. The more dollars in quasi-public is structural, but there is still enough incentive. These companies want to go public and have boards that will get them there. I just think they may come public a lot later, when they're at 10 or $50 billion valuations rather than at a $2 billion valuation.
But, you know, as far as the companies I'm involved with, I'm pushing for them to come public sooner, or at least when they're ready. And I think a lot of that pressure is off. And I think the number of board members that are actually willing to push for that, and Chamath talks about this a lot, I think is actually few.
You might be one of a few, but I think most of them don't because they've been trained to applaud, and that's what they do. Well, I mean- And it's partially just driven by competition. Once again, I'm not, I'm trying to give you my best view of- Yeah, no, I think it's an important, it's a super important conversation.
And by the way, if 80% of the zombie corns are never gonna get out or are gonna get out at reduced prices, I'm telling you flat out, those are being held by the large endowments, as LPs, at unrealistically high prices across the board. I would say yes and no.
So let me just give you a couple of different examples. I mentioned we have a lot of good things in the portfolio, but we had a company that wasn't performing at the levels that it previously had been. It had been priced at many billions of dollars at the peak of ZIRP in 2021, a company called Lacework.
And we ultimately, we pushed to sell that company to Fortinet while we still had hundreds of millions of dollars of cash on the balance sheet. It's a great acquisition for that company, but it's out of the system. It's marked. We've distributed the cash to our shareholders. So we're distributing, okay?
So I'm just saying that this is happening. And we have other companies in those portfolios that were marked really high in '21, right? One is a company you and I are both invested in, ClickHouse, which is growing through those valuations, right? Or a company like Sigma Computing that we're in, which is growing through those high valuations.
So you just have to break it down and look at these one by one. I think that there are definitely things in there that are held at too high a valuation, and LPs should scrutinize that. But there are other things that are healing. Let's just, I really want to talk about this question that I think is the most important one, Bill.
Great. Which is, does excess capital lead to companies being overfed, which leads to poorer outcomes for innovation? Because I think the potential for that, like, at the end of the day, that would be the worst thing, right? And I think you and I have a lot of shared belief that too much capital does ruin corporate culture.
It does lead to higher burn rates. It does lead to lower financial returns. And it does lead to less innovation, right? I pounded the table on this topic, you know, in "Time to Get Fit." And I applaud Mark Zuckerberg. I mean, him stepping out in February of 2023 and writing a letter called the Year of Efficiency.
And he said, we started doing this, just thinking that it was about getting back to the office. But what we discovered was that smaller is better, right? He said, flatter is faster and leaner is better. And what he meant is the cycle time on innovation, de-layering the organization, getting rid of layers of VPs, right?
Like really getting the organization tight and fit was better for the future growth and future profitability and future innovation of the business. So I tweeted over the weekend that it's great news that Zoom and Salesforce and Workday are starting to get sober about stock-based compensation. That is another component of it: too much capital and too many people leads to excessive SBC, which you and I have talked a lot about.
So I think just because you raise a lot of money doesn't necessarily mean you're unfit, right? Remember, OpenAI is not really a venture-stage company at this point. Google went public on 2 billion of trailing revenues. These guys are rumored to have 5 billion already. So roughly 2.5X that amount. And we've talked about building AI is just a lot more expensive than building the things that came before it.
So I just think we need to have an apples for apples comparison. But if you want a champion to stand with you on this issue of companies raising too much, spending too much, they gotta be really careful. And so I gave some advice to one of our fellow founders the other day.
He's got money coming in over the bow at a multi-billion dollar valuation. And he's like, "We don't need it. We already have hundreds of millions on the balance sheet. Should we raise it?" And I said, "Here are all the downsides of raising it. You can't go raise another 500 million and not have pressure from all your employees for employee secondary, for spending more money on more projects, et cetera.
And the NPV on those other activities will be lower. And the incentive your employees have to stay with you once they sell 10 or $20 million worth of stock is gonna be less." And so finding that balance, I think, is a critical function of leadership of these companies. - Yeah, and I would just say to that, Brad, I do think this is a huge dichotomy.
I do believe that the hyper-competition in the late stage market leads to an incredibly large number of preemptive rounds where hundreds of millions of dollars are being force-fed to a company. And if you're spending 20 million a month, you're burning 240 million a year. If these companies in AI are at 50% gross margin or whatever, you gotta get revenue to twice that.
You're right at that 500 million run rate before you could think about being profitable. And once you've gone to that place, which is ironically the same number where, I think, Philippe Laffont said you gotta be to go public these days. And the thing I would say to you is, if that becomes true, if that kind of path dependency is cast upon every venture capital company that comes along, you're gonna end up with an excessive number of zombie corns, because previous to this evolution in the late stage markets, plenty of companies either were bought at 300 million or went public at 250 or 500 million and created positive returns for their early venture capital investors.
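Spelling out the burn math Bill is doing, with the same round numbers; the 50% gross margin is the hypothetical figure he cites, not any company's actual margin:

```python
# The burn math spelled out with the same round numbers (illustrative, not company data).
monthly_spend = 20e6                     # $20M per month of operating spend
annual_spend = monthly_spend * 12        # $240M per year

gross_margin = 0.50                      # the hypothetical ~50% AI gross margin cited
breakeven_run_rate = annual_spend / gross_margin   # revenue where gross profit covers the spend

print(f"annual spend ~${annual_spend / 1e6:.0f}M -> breakeven revenue ~${breakeven_run_rate / 1e6:.0f}M run rate")
# ~$480M, i.e. roughly the $500M run rate mentioned, before profitability is even in sight.
```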
And if it's really 500 million in revenue or bust, I think there's gonna be a lot more bust than we've traditionally had. Yeah, no, net-net, I agree with you. VC is harder, it's more competitive, but I think the structural changes that we're seeing are more reversion to the mean than VC is forever bad.
I think it's, you know, listen, this has always been a hard investment category. And I think if you're not backing a VC who has a right to win and a process that gets them into the top decile, right, then returns are not gonna be great. It's a power law business, always been a power law business.
I think these are super important issues, but why don't we, just in the spirit of time- Wait, wait, wait, wait. Oh, here you go, here you go. I have to have one response. I would encourage our listener base to look up, there's a piece of research that's called the observer effect.
And it started in physics, but it's used more broadly. And the observer effect is the idea that observing a phenomenon or situation changes it. And that's really the point I'm making here: I think prior to now, the investor and how they behave didn't actually impact the situation on the field in terms of, like, changing the game and how it's played.
And to me, the way competition has evolved in the venture industry is actually perturbing and affecting the situation. So anyway, I'll leave it at that. No, I think another way of saying that is negative reflexivity, right? That when dollars come in, it actually leads to poor behavior.
We'll take a deeper dive on that sometime because I also wanna get your thoughts on how that might change. But why don't we finish just in the spirit of time with a quick tech check like we always do? Awesome. So the big, big event, and this macro world is more you than me, but the big event obviously was the Fed decision to lower by 50 basis points.
What's the impact from your point of view? Yeah, I mean, you and I talked a bunch about this along with our friends at All In. I mean, Goolsbee had signaled this, right? That really we were in historically restrictive territory. And what that means is basically we had a little bit of the emergency brake on in the economy.
And so once they were convinced that inflation was gonna have a two-handle, which it now has, and that they were starting to see some slowdown in the jobs market, they wanted to get a jumpstart on reducing that restrictiveness. I thought that was a smart thing to do.
I think it's just getting back on side. I don't think they were seeing anything other than what we're seeing in the economy. But it's incredibly significant to the markets that we're now on our way down. Remember, we had two plus years of historically steep rise in interest rates coming out of COVID.
And so this gives predictability as companies enter their budget cycles this year, right? Every company right now is thinking, what can I invest into AI infrastructure next year? And so knowing that interest rates are not going up is a super, super important input into those pieces of analysis. But the debate remains.
I mean, I think it's interesting. You had Jamie Dimon and Gundlach come out and say, "Hey, this battle with inflation is not over, deglobalization, all these other things." So they're suggesting that you could actually see inflation kick back up. That would be a negative surprise. And others are saying that the Fed's already behind the curve and needs to go faster with respect to the slowing economy.
So- - Where do you come out on that? - That's the debate that's in the market. Well, I'll just tell you where we are today. We talked about taking a bunch of units of risk off, which we did in June and July. And as we were heading into these rate cuts, we're back at average levels of exposure today.
And that's because we saw a lot of really positive statements coming out of the earnings this summer. And the economic data continued to be constructive. We expected a 50 basis point rate cut, which I had been sharing with you and folks on this pod for some time. And so we think that's a good setup heading into the fall.
Now we have an election we got to work our way through and we got a lot of other question marks, but you got to take those data points as you get them. - Your team is listening to so many different earnings calls and whatnot. What's your take on the consumer and that side of the demand equation, like irrespective of inflation, is there a mini recession or have we literally landed the soft landing or do we not know yet?
- Listen, I think behind the scenes we've been going through many recessions in a bunch of different industries. Like, for example, housing went through a mini recession. I think things like home renovation went through a mini recession. The entire supply chain around that did as well. The S&P we talked about, the S&P has continued to outperform, but if you take out the 10 best performers from the S&P, up until a couple of weeks ago it was actually down on the year.
So this has been a period of haves and have nots. I think the economy writ large is pretty stable, but let's just look at multiples here for a second, because I think for tech investors it's really what it comes down to. So this first chart just shows you that multiples for these big tech companies have come up quite a bit, certainly off of the January '23 lows.
We were trading at about 21 times, right? This is forward PE for these companies. - For the mag seven, you're showing forward PE for the mag seven. - Forward PE for the mag seven. So if you look at the 10-year average of this, it's about 25 times. In January '23, when we were all talking about Mike Wilson's hard landing, the economy's gonna crash, like all this stuff, people had post-traumatic stress from 2022, we got as low as 21 times.
So now we've run up to 31 times on the forward PE. So on that dimension, you would say that looks pretty darn expensive. But if you go to the next chart, Bill, which I think is an important one, which is the PE ratio divided by growth, right?
So this is the expected growth rate of these companies. You can see one of the reasons people are excited is because they expect a lot of growth. So on that dimension, it's below the 10-year average. And so what's my conclusion based on that? You know, that it looks cheap if you believe in those growth rates, but if those growth rates don't show up for Microsoft, for Amazon, for Google, et cetera, next year, then you can expect that these companies' stocks are gonna go sideways to down, right?
Because the valuations are much more full. And so I think that that's really the debate, you know, now. And I think it's a stock picker's market from here. We have average levels of exposure. I think, for example, we think the entire NVIDIA and AI infrastructure supply chain is gonna continue to be undersupplied.
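For anyone who wants the ratio from that growth-adjusted chart spelled out: forward P/E divided by expected growth, with placeholder growth rates since the actual consensus estimates behind the chart aren't given here:

```python
# PEG-style mechanics behind the chart described above. The expected growth rates are
# placeholders to show the calculation; only the 31x forward P/E and the ~25x ten-year
# average come from the conversation.
forward_pe = 31.0
ten_year_avg_pe = 25.0

def peg(pe: float, expected_growth_pct: float) -> float:
    return pe / expected_growth_pct

print(f"forward P/E {forward_pe:.0f}x vs 10-year average {ten_year_avg_pe:.0f}x")
for growth in (15.0, 20.0, 25.0):   # hypothetical expected earnings growth rates, in %
    print(f"expected growth {growth:.0f}% -> PEG {peg(forward_pe, growth):.2f}")
# A 31x P/E looks rich against the 25x average on its own, but divided by a high enough
# expected growth rate the PEG can still sit below its own history, as long as that
# growth actually shows up.
```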
So NVIDIA has come off from 140 to 115. There's a lot of debate in the world. A lot of people think, you know, that they're not gonna hit the numbers. You know, we're kind of at, you know, 6 million GPUs for next year. The bearish people are at like 4 1/2 million.
Like the numbers will ultimately tell. If they do 6 million next year, the stock's going higher. If they do 4 1/2 million, the stock's going lower. That's the way this business works, right? And so we're just out there trying to collect all our data. In fact, Clark's over in Taipei right now, talking, you know, meeting with the supply chain, understanding what's really going on.
And I think, you know, in January '23, you had a huge margin of safety. All you had to believe is that the world wasn't ending and that we're in the start of a new super cycle and you push chips onto the table, right? If you understood that, you had pocket kings or pocket aces.
As we sit here today, the world is much more bulled up, right? So, you know, even if you have a differentiated point of view, it's more like sitting on pocket nines, you know, not like pocket kings. I think you gotta take a more measured view of the market and think about this distribution of probabilities.
There's certainly a chance we could see the economy slow. Certainly we could see Blackwell not get its production levels up. That would be, you know, a challenge for the entire ecosystem. So it's an exciting time. I don't think there are any no-brainers in the market, but I can also see how, you know, when I look at the tailwinds behind tech right now, both in the private and the public markets, I couldn't be more excited about the next five years.
It'll be volatile as they always are, but there's no doubt there are going to be some big winners produced, you know, in this cycle. Let's wrap it there. It's good to see you. Let's do it. It's good to see you too. Look forward to seeing you next time. No, I'm going to be down in Austin soon.
Let's get a poker game going. All right, man. Take care. (upbeat music) As a reminder to everybody, just our opinions, not investment advice.