
The AI opportunity: Sequoia Capital's AI Ascent 2024 opening remarks


Transcript

>> My name is Pat Grady. I'm one of the members of Team Sequoia. I'm here with my partners, Sonia and Konstantin, who will be your MCs for the day. Along with all of our partners at Sequoia, we would like to welcome you to AI Ascent. There's a lot going on in the world of AI.

We have an objective to learn a few things while we're here today. We have an objective to meet a few people who can be helpful in our journey while we're here today and hopefully, we'll have a little bit of fun. So just to frame the opportunity, what is it?

Well, a year ago, it felt like this magic box that could do wonderful, amazing things. I think over the last 12 months, we've been through a compressed version of the hype cycle. We had the peak of inflated expectations, we had the trough of disillusionment, and we're crawling back out onto the plateau of productivity.

I think we've realized that what LLMs, what AI really brings to us today are three distinct capabilities that can be woven into a wide variety of magical applications. The first is the ability to create, hence the name generative AI. You can create images, you can create text, you can create video, you can create audio, you can create all sorts of things.

Not something software has been able to do before. So that's pretty cool. The second is the ability to reason, could be one-shot, could be multi-step agentic type reasoning. But again, not something software has been able to do before. Because it can create, because it can reason, we've got the right brain and the left brain covered, which means that software can also for the first time, interact in a human-like capacity.

This is huge because this has profound business model implications that we're going to mention on the next slide. So what? A lot of times we try to reason by analogy when we see something new. In this case, the best analogy that we can come up with, which is imperfect for a million reasons, but still useful, is the Cloud transition.

Over the last 20 years or so, that was a major tectonic shift in the technology landscape that led to new business models, new applications, new ways for people to interact with technology. If we go back to the early days of that Cloud transition, circa 2010, the entire pie, the entire global TAM for software, is about $350 billion, of which this tiny slice, just $6 billion, is Cloud software.

Fast forward to last year: the TAM has grown from about $350 billion to $650 billion, but that slice has become $400 billion of revenue. That's roughly a 40 percent CAGR over 15 years. That's massive growth. Now, if we're going to reason by analogy, Cloud was replacing software with software. Because of what I mentioned about the ability to interact in a human-like capacity, one of the big opportunities for AI is to replace services with software.
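As a back-of-the-envelope sanity check on those figures (a sketch assuming the ~$6 billion starting point circa 2010 and ~$400 billion by 2023), the implied compound annual growth rate can be computed directly:

```python
# Sanity check of the cloud-software growth claim using the talk's figures:
# ~$6B of cloud revenue circa 2010 growing to ~$400B by 2023.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

growth = cagr(6, 400, 13)  # 2010 -> 2023 is 13 years
print(f"Implied CAGR: {growth:.1%}")  # roughly 38%, in line with the ~40% cited
```

The exact rate depends on which endpoint years you assume, but any reasonable choice lands in the high-30s-percent range.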

If that's the TAM that we're going after, the starting point is not hundreds of billions, the starting point is possibly tens of trillions. So you can really dream about what this has a chance to become. We would posit, and this is a hypothesis, as everything we say today will be, we would posit that we are standing at the precipice of the single greatest value creation opportunity mankind has ever known.

Why now? One of the benefits of being part of Sequoia is that we have this long history and we've gotten to study the different waves of technology and understand how they interact and understand how they lead us to the present moment. We're going to take a quick trip down memory lane.

So 1960s, our partner, Don Valentine, who founded Sequoia, was actually the guy who ran the go-to-market for Fairchild Semiconductor, which gave Silicon Valley its name with Silicon-based transistors. We got to see that happen. We got to see the 1970s when systems were built on top of those chips. We got to see the 1980s when they were connected up by networks with PCs as the endpoint and the advent of package software.

We got to see the 1990s when those networks went public-facing in the form of the Internet, changed the way we communicate, changed the way we consume. We got to see the 2000s when the Internet matured to the point where it could support sophisticated applications, which became known as the Cloud.

We got to see the 2010s where all those apps showed up in our pocket in the form of mobile devices and changed the way we work. So why do we bother going through this little build? Well, the point here is that each one of these waves is additive with what came before.

The idea of AI is nothing new. It dates back to the 1940s; I think neural nets were first proposed then. But the ingredients required to take AI from idea, from dream, into production, into reality, to actually solve real-world problems in a unique and compelling way that you can build a durable business around, did not exist until the past couple of years. We finally have compute that is cheap and plentiful. We have networks that are fast, efficient, and reliable. Seven of the eight billion people on the planet have a supercomputer in their pockets. Thanks in part to COVID, everything has been forced online, and the data required to fuel all of these delightful experiences is readily available.

So now is the moment for AI to become the theme of the next 10, probably 20 years. So we have as strong conviction as you could possibly have in a hypothesis that is not yet proven, that the next couple of decades are going to be the time of AI.

What shape would that opportunity take? Again, we're going to analogize to the Cloud transition and the mobile transition. These logos on the left side of the page, those are most of the companies born as a result of those transitions that got to a billion dollars plus of revenue. The list is not exhaustive, but this is probably 80 percent or so of the companies formed in those transitions that got to a billion plus of revenue, not valuation, revenue.

The most interesting thing about this slide is the right side. It's not what's there, it's what isn't there. The landscape is wide open. The opportunity set is massive. We think if we were standing here 10 or 15 years from today, that right side is going to have 40 or 50 logos in it.

Chances are, it's going to be a bunch of the logos of companies that are in this room. This is the opportunity, this is why we're excited. With that, I will hand it off to Sonia. >> Thanks Pat. Wow, what a year. ChatGPT came out a year and a half ago.

I think it's been a whirlwind for everybody here. It probably feels like just about all of us have been going non-stop with the ground shifting under our feet constantly. So let's take a pause, zoom out, and take stock on what's happened so far. Last year, we were talking about how AI was going to revolutionize all these different fields and provide amazing productivity gains.

A year later, it's starting to come into focus. Who here has seen this tweet from Sebastian at Klarna? Show of hands. It's pretty incredible. Klarna is now using OpenAI to handle two-thirds of customer service inquiries. They've automated the equivalent of 700 full-time agents' jobs. We think there are tens of millions of call center agents globally, and one of the most exciting areas where we've already seen AI find product market fit is the customer support market.

Legal services. A year ago, the law was considered one of the least tech forward industries, one of the least likely to take risks. Now, we have companies like Harvey that are automating away a lot of the work that lawyers do from day-to-day grunt work and drudgery all the way to more advanced analysis.

Or software engineering. I'm sure a bunch of people in this room have seen some of the demos floating around on Twitter recently. It's remarkable that in a year we've gone from AI theoretically writing our code to entirely self-contained AI software engineers. I think it's really exciting. The future's going to have a lot more software.

AI isn't all about revolutionizing work. It's already increasing our quality of life. Now, the other day, I was in a Zoom with Pat, and I noticed that he looked a little bit suspicious, didn't speak the entire time. Having reflected on it more, I'm pretty sure that he actually sent in his virtual AI avatar and was actually hitting the gym, which would explain a lot.

>> Hi, this is Pat Grady. This is definitely me. I'm definitely here and not at the gym right now. >> It even gets the facial scrunches. >> This is courtesy of HeyGen. It's pretty amazing. This is how far technology has come in a year. It's scary and exciting to think about how this all plays out in the coming decade.

All kidding aside, two years ago, when we thought that Generative AI might usher in the next great technology shift, we didn't know what to expect. Would real companies come out of it? Would real revenue materialize? I think the sheer scale of user pull and revenue momentum has surprised just about everybody.

Generative AI, we think, is now clocking in around $3 billion of revenues in aggregate, and that's before you count all the incremental revenue generated by the FANG companies and the Cloud providers in AI. To put three billion in context, it took the SaaS market nearly a decade to reach that level of revenue.

Generative AI got there in its first year out of the gate. The rate and the magnitude of the sea change make it very clear to us that Generative AI is here to stay. The customer pull in AI isn't restricted to one or two apps. It's everywhere. I'm sure everyone's aware of how many users ChatGPT has.

But when you look at the revenue and the usage numbers, for a lot of AI apps, both consumer companies and enterprise companies, startups, and incumbents, many AI products are actually striking a chord with customers and starting to find product market fit across industries. We find the diversity of use cases that are starting to hit really exciting.

The number one thing that has surprised me, at least, about the funding environment over the last year has been how uneven the share of funding has been. If you think of Generative AI as a layer cake where you have foundation models on the bottom, you have developer tools and infra above, and then you have applications on top.

A year ago, we'd expected that there would be a Cambrian explosion in the application layer due to the new enabling technology in the foundation layer. Instead, we've actually found that new company formation and capital has formed in an inverse pattern. More and more foundation models are popping up and raising very large funding rounds, while the application layer feels like it is just getting going.

Our partner, David, is right here, and posed a thought-provoking question last year with his article, AI's $200 billion question. If you look at the amount of money that companies are pouring into GPUs right now, we spent about $50 billion on NVIDIA GPUs just last year. Everybody's assuming if you build it, they will come.

AI is a field of dreams. But so far, remember on the previous slide, we've identified about $3 billion or so of AI revenue plus change from the Cloud providers. We've put 50 billion into the ground, plus energy, plus data center costs and more. We've gotten three out. To me, that means the math isn't mathing yet.

The amount of money it takes to build this stuff has vastly exceeded the amount of money coming out so far. So we've got some real problems to fix still. And even though the revenue and the user numbers in AI look incredible, the usage data says that we're still really early.

And so if you look at, for example, the ratio of daily to monthly active users, or if you look at one month retention, generative AI apps are still falling far short of their mobile peers. To me, that is both a problem and an opportunity. It's an opportunity because AI right now is a once a week, once a month kind of tinkery phenomenon for the most part for people.
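The daily-to-monthly active user ratio mentioned here is a simple "stickiness" metric. A minimal sketch, using hypothetical user counts rather than figures from the talk:

```python
# Illustrative sketch of the DAU/MAU "stickiness" ratio. The user counts
# below are hypothetical, not measured data from the talk.
def stickiness(daily_active: int, monthly_active: int) -> float:
    """DAU/MAU: what fraction of monthly users show up on a given day."""
    return daily_active / monthly_active

ai_app = stickiness(140_000, 1_000_000)      # a once-a-week-ish AI tool
mobile_app = stickiness(550_000, 1_000_000)  # a habitual daily mobile app

print(f"AI app stickiness:     {ai_app:.0%}")
print(f"Mobile app stickiness: {mobile_app:.0%}")
```

A product people touch daily pushes this ratio toward 1; a tool people tinker with once a month sits far below it, which is the gap the talk is pointing at.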

But we have the opportunity to use AI to create apps that people want to use every single day of their lives. When we interview users, one of the biggest reasons they don't stick on AI apps is the gap between expectations and reality. So that magical Twitter demo becomes a disappointment when you see that the model just isn't smart enough to reliably do the thing that you asked it to do.

The good thing is with that $50 billion plus of GPU spend last year, we now have smarter and smarter base models to build on. And just in the last month, we've seen Sora. We've seen Claude 3. We saw Grok over the weekend. And so as the level of intelligence of the baseline rises, we should expect AI's product market fit to accelerate.

So unlike in some markets where the future of the market is very unclear, the good thing about AI is you can draw a very clear line to how those apps will get predictably better and better. Let's remember that success takes time. We said this at last year's AI Ascent, and we'll say it again.

If you look at the iPhone, some of the first apps in the V1 of the App Store were the beer-drinking app, the lightsaber app, the flip cup app, the flashlight: fun, lightweight demonstrations of a new technology. Those eventually became either native features, like the flashlight, or remained utilities and gimmicks.

The iPhone came out in 2007. The App Store came out in 2008. But it wasn't until 2010 that you saw Instagram, and DoorDash in 2013. So it took time for companies to discover and harness the net-new capabilities of the iPhone in creative ways that we couldn't yet imagine. We think the same thing is playing out in AI.

We think we're already seeing a peek into what some of those next legendary companies might be. Here are a few of the ones that have captured our attention recently, but I think it's much broader than the set of use cases on this page. As I mentioned, we think customer support is one of the first handful of use cases that's really hitting product market fit in the enterprise.

As I mentioned with the Klarna story, I don't think that's an exception; I think that's the rule. AI friendship has been one of the most surprising applications for many of us. I think it took a few months of thinking for us to wrap our heads around it.

But I think the user and the usage metrics in this category imply very strong user love. And then Horizontal Enterprise Knowledge. We'll hear more from Glean and Dusk later today. We think that enterprise knowledge is finally starting to become unlocked. So here are some predictions for what we'll see over the coming year.

Prediction number one, 2024 is the year that we see real applications take us from co-pilots that are kind of helpers on the side and suggest things to you and help you, to agents that can actually take the human out of the loop entirely. AI that feels more like a co-worker than a tool.

We're seeing this start to work in domains like software engineering, customer service. And we'll hear more about this topic today. I think both Andrew Ng and Harrison Chase are planning to speak on it. Prediction number two, one of the biggest knocks against LLMs is that they seem to be parroting the statistical patterns in text and aren't actually taking the time to reason and plan through the tasks at hand.

That's starting to change with a lot of new research, like inference time compute and gameplay-style value iteration. What happens when you give the model the time to actually think through what to do? We think that this is a major research thrust for many of the foundation model companies. And we expect it to result in AI that's more capable of higher-level cognitive tasks like planning and reasoning over the next year.
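One simple instance of inference-time compute is best-of-n sampling: spend extra compute generating several candidate answers, then keep the best one under some scoring function. The sketch below uses toy stubs for the generator and the scorer; it is an illustration of the idea, not any specific lab's method:

```python
import random

# Best-of-n sampling: a minimal sketch of "giving the model time to think"
# by spending more inference compute. Both generate_candidate and score are
# toy stand-ins; in practice these would be an LLM and a verifier/reward model.
def generate_candidate(prompt: str, rng: random.Random) -> str:
    return f"answer-{rng.randint(0, 9)}"

def score(candidate: str) -> float:
    # Stand-in for a verifier; here, a higher trailing digit means "better".
    return float(candidate.rsplit("-", 1)[1])

def best_of_n(prompt: str, n: int, seed: int = 0) -> str:
    """Sample n candidates and keep the highest-scoring one."""
    rng = random.Random(seed)
    candidates = [generate_candidate(prompt, rng) for _ in range(n)]
    return max(candidates, key=score)

print(best_of_n("plan a trip", n=8))
```

The key property: with the same generator, raising n can only improve (or match) the best score found, at the cost of more inference compute.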

And we'll hear more about this later today from Noam Brown of OpenAI. Prediction number three, we are seeing an evolution from fun consumer apps or prosumer apps, where you don't really care if the AI says something wrong or crazy occasionally, to real enterprise applications, where the stakes are really high, like hospitals and defense.

The good thing is that there are different tools and techniques emerging to help bring these LLMs, sometimes, into the five-nines reliability range, from RLHF to prompt tuning to vector databases. And I'm sure that's something that you guys can compare notes on later today. I think a lot of folks in this room are doing really interesting things to make LLMs more reliable in production.

And finally, 2024 is the year that we expect to see a lot of AI prototypes and experiments go into production. And what happens when you do that? That means latency matters. That means cost matters. That means you care about model ownership. You care about data ownership. And it means we expect the balance of compute to begin shifting from pre-training over to inference.

So 2024 is a big year. There's a lot of pressure and expectations built into some of these applications as they transition into production. And it's really important that we get it right. With that, I'll transition to Konstantin, who will help us dream about AI over an even longer time horizon.

Thank you, Sonia. And thank you, everyone, for being here today. Pat just set up the "so what?" Why is this so important? Why are we all in the room? And Sonia just walked us through the "what now?" Where are we in the state of AI? This section is going to be about what's next.

We're going to take a step back and think through what this means in the broader context of technology and society at large. So there are many types of technology revolutions. There are communication revolutions, like telephony. There are transportation revolutions, like the locomotive. There are productivity revolutions, like the mechanization of the food harvest.

We believe that AI is primarily a productivity revolution. And these revolutions follow a pattern. It starts with a human with a tool. That transitions into a human with a machine assistant. And eventually, that moves into a human with a machine network. The two predictions that we're going to talk about both relate to this concept of humans working with machine networks.

Let's look at a historical example. The sickle has been around as a tool for the human for over 10,000 years. The mechanical reaper, which is a human and a machine assistant, was invented in 1831, a single machine system being used by a human. Today, we live in an era where we have a combine harvester.

The combine harvester is tens of thousands of machine systems working together as a complex network. We're starting to use AI language to describe this: an individual machine participant in the system might be called an agent, something we'll talk about quite a bit today. And the topology, the way information is transferred between these agents, we're starting to describe as reasoning, for example.

In essence, we're creating very complicated layers of abstraction above the primitives of AI. I'll talk about two examples today, two examples that we're experiencing right in front of us in knowledge work. The first is software. So software started off as a very manual process. Here's Ada Lovelace, who wrote logical programs with pen and paper and was able to do these computations, but without the assistance of a machine.

We've been living in an era where we have significant machine assistants for computation, not just the computer, but the integrated development environment, and increasingly more and more technologies to accelerate development of software. We're entering a new era in which these systems are working together in a complex machine network.

What you see is a series of processes working together to produce complex engineering systems: agents producing code not one at a time, but in unison and harmony. The same pattern is now commonly being applied in writing.

Writing was a human process, human and a tool. Over time, this has progressed to human and a machine assistant. And now we have a human that's actually leveraging not one, but a network of assistants. I'll tell you, in my own personal workflow, any time I call an AI assistant now, I'm not just calling GPT-4; I'm calling Mistral Large, I'm calling Claude 3.

I'm having them work together and also against each other to have better answers. This is the future that we're seeing right in front of us. So what? What does this type of revolution mean for everyone in this room, and frankly, everyone outside of this room? In cold, hard economic terms, what this means is significant cost reduction.
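That multi-model fan-out can be sketched in a few lines. Here `ask` is a stub standing in for real model API calls (GPT-4, Mistral Large, Claude 3, and so on), and the canned answers are invented purely for illustration:

```python
from collections import Counter

# Sketch of the fan-out workflow described above: send one prompt to several
# assistants and reconcile their answers by majority vote. `ask` is a stub
# standing in for real API calls; its canned answers are made up.
def ask(model: str, prompt: str) -> str:
    canned = {"gpt-4": "Paris", "mistral-large": "Paris", "claude-3": "Lyon"}
    return canned[model]

def consensus(prompt: str, models: list[str]) -> str:
    """Query every model, then keep the most common answer."""
    answers = [ask(m, prompt) for m in models]
    winner, _count = Counter(answers).most_common(1)[0]
    return winner

print(consensus("Capital of France?", ["gpt-4", "mistral-large", "claude-3"]))
```

Majority voting is the simplest way to have models "work against each other"; richer versions have one model critique or rank the others' drafts.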

So this chart is the number of workers needed at an S&P 500 company to generate $1 million of revenue. It's going down rapidly, and we're entering an era where it will continue to decline. What does that mean? Faster and fewer. The good news is it's not so that we can do less.

It's so that we can do more. And we'll get to that in the next set of predictions. Also fortunate is that all the areas where we've had this type of progress in the past have been deflationary. I'll call out computer software and accessories: the price of computer software has actually gone down over time, because we're constantly building on each other's work.

Televisions are also here. But some of the most important things to our society-- education, college tuition, medical care, housing-- they've gone up far faster than inflation. And it's perhaps a very happy coincidence that artificial intelligence is poised to help drive down costs in these and many other crucial areas.

So that's the first conclusion about the long-term future of artificial intelligence: a massive driver of cost reduction, a productivity revolution that's going to help us do more with less in some of the most critical areas of our society. The second is related to, what is it really doing?

One year ago on the stage, we had Jensen Huang make a powerful prediction. He said that in the future, pixels are not going to be rendered. They're going to be generated. Any given image, even information, will be generated. What did he mean by this? Well, as everyone in this room knows, historically, images have been stored as rote memory.

So let's think about the letter A, ASCII character number 65. That is stored as a matrix of pixels: in simple black and white, the presence or absence of each pixel. Well, we're entering a period in which we already represent concepts, like the letter A, not as rote storage, not as a presence or absence of pixels, but as a concept, a multidimensional point.
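The contrast between rote pixel storage and a conceptual, multidimensional representation can be sketched concretely. Both the bitmap and the embedding values below are invented for illustration; real embeddings have hundreds or thousands of dimensions:

```python
import math

# Rote storage: the glyph "A" as a grid of on/off pixels (made-up bitmap).
letter_a_pixels = [
    [0, 1, 0],
    [1, 0, 1],
    [1, 1, 1],
    [1, 0, 1],
]

# Concept storage: each symbol as a point in a multidimensional space.
# These embedding values are hypothetical, chosen only for illustration.
letter_a_concept = [0.8, 0.1, 0.3]   # "A" in one typeface
serif_a_concept  = [0.7, 0.2, 0.3]   # "A" in another typeface, a nearby point
letter_b_concept = [0.1, 0.9, 0.5]   # a different concept, farther away

def cosine(u, v):
    """Cosine similarity: how close two concept points are in direction."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

# Different renderings of "A" land close together; "B" does not.
print(cosine(letter_a_concept, serif_a_concept))   # high similarity
print(cosine(letter_a_concept, letter_b_concept))  # much lower
```

The bitmap can only ever reproduce one fixed rendering; the concept point generalizes across typefaces, which is the property the talk is describing.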

I mean, the image to think about here is the concept of an A which is generalizable to any given format for that letter A. So many different typefaces in this multidimensional space. We're sitting at the center. And where do we go from here? Well, the powerful thing is the computers are now starting to understand not just this multidimensional point, not just how to take it and render it and generate that image, like Jensen was talking about.

We are now at the point where we're going to be able to contextualize that understanding. The computer is going to understand the A, be able to render it, understand that it's part of an alphabet, that it's the English alphabet, and what that means in the broader context of the rendering. The computer is going to look at the word "multidimensional" and not even think about the A, but rather understand the full context of why it's being brought up.

And amazingly, this future is how we think, how humans think. No longer are we going to be storing the rote pixels in a computer memory. That's not how we think. I wasn't taught about the letter A as the presence or absence of a pixel on a page. Instead, we're going to be thinking about that as a concept.

Powerfully, this is how we've thought about it philosophically for thousands of years. Here's my fellow Greek, Plato, 2,500 years ago, who said this idea of a platonic form is what we all aspire to, are all striving for: that you have this concept, in this case of the letter A, or this concept of software engineering, that we're actually able to build a model around.

So what? Now, we've talked about the second pattern, this idea that we're going to have generalization inside computing itself. What does that mean for each of us? Well, it's going to mean a lot for company building. Today, we're already integrating this into specific processes and KPIs. Sonia just mentioned how Klarna is using this in order to accelerate their KPIs around customer support.

They know that they have certain KPIs they can drive towards, and they can have a system that's actually delivering information and generating great customer experiences. Tomorrow, and this is already happening alongside today's processes, come new user interfaces: that might be a different interface for how the support is actually communicated. And this is what I'm personally incredibly excited about: because of this future in which concepts are rendered, in which everything is generated, eventually the entire company might start working like a neural network.

Let me break that down with a specific example. This is a caricature; as with everything in this presentation, reality is continuous while these boxes are discrete. It's a caricature of the customer support process: you have customer service with certain KPIs, driven by text-to-voice, language generation, customer personalization, and the like.

This feeds into subpatterns, subtrees that you're optimizing. And eventually, you're actually going to have a fully connected graph here. You're actually going to have feedback from the language generation to the end KPI for the servicing of the customers. This is going to be, at some point, a layer of abstraction, where customer support is managed, optimized, and improved by the neural network.

Now, let's think about acquiring customers, another important part of the job of building a business. Again, you have the primitives of artificial intelligence, from language generation to a growth engine to ad customization and optimization, all feeding into each other once again. The powerful conclusion here is that, eventually, these layers of abstraction will become interoperable to the point where the entire company is able to function like a neural network.

Here comes the rise of the one-person company. The one-person company is going to enable us not to do less, but to do more. More problems can be tackled by more people to create a better society. So what's next? The reality is, the people in the room here are going to decide what's next.

You are the ones who are building this future. We personally are very excited about the future, because we think that AI is positioned to help drive down costs and increase productivity in some of the most crucial areas in our society-- better education, healthier populations, more productive populations. And that's the purpose of convening this group today.

You all are going to be able to talk about how we can take our technologies, abstract away complexity and mundane details, and actually build something that's much more powerful for the future. I'll hand it off to Sonia to introduce our first speaker.