They wondered if there was a better way to find information on the web. On September 15, 1997, they registered the domain google.com. One of the greatest entrepreneurs of our times. Someone who really wanted to think outside the box. If that sounds like it's impossible, let's try it. He took a back seat in recent years to other Google leaders.
Brin is now back helping Google's efforts in artificial intelligence. I feel lucky that I fell into doing something that I feel really matters: getting people information. No introduction needed? Welcome. I just agreed to this last minute, as you know. I don't know where you pulled up that clip so fast.
You guys are... The team is amazing. This is kind of amazing. Yeah. I thought... Sergey just asked to come check out the conference, and I was like, "Definitely. Come hang out." I didn't actually understand, to be perfectly honest. I thought you guys just kind of had a podcast and a little get-together or something, but this is kind of mind-blowing.
Yeah. Congratulations. Thank you. Well, I'm glad you came out. Thanks for doing it. Yeah. Wow. But thanks for agreeing to chat for a little bit. Absolutely. We're going to talk for a little bit. So this was not on the schedule, but I thought it'd be great to talk to you, given where you sit in the world as AI is actively changing it.
Obviously, you founded Google with Larry in 1998, and recently it's been reported that you've been spending a lot more time at Google working on AI. I thought maybe... And a lot of industry analysts and pundits have been arguing that LLMs and conversational AI tools are an existential threat to Google search.
That's one of the... And I think a lot of those people don't build businesses, or they have competitive investments, but we'll leave that to the side. But there's this big narrative about what's going to happen to Google and where Google sits with AI. And I know you're spending a lot of time on it, so thanks for coming to talk about it.
How much time are you spending at Google? What are you working on? Yeah. I mean, honestly, like pretty much every day. I mean, like I'm missing today, which is one of the reasons I was a little reluctant, but I'm glad I came. But I think as a computer scientist, I've never seen anything as exciting as all of the AI progress that's happened in the last few years.
Thanks. No, but it's kind of mind-blowing. When I went to grad school in the '90s, AI was almost kind of like a footnote in the curriculum. Like, you're like, "Oh, maybe you have to do this one little test on AI. We tried all these different things. They don't really work.
That's it. That's all you need to know." And then somehow miraculously, all these people who are working on neural nets, which was one of the big discarded approaches to AI in like the '60s, '70s, and so forth, just started to make progress. A little bit more compute, a little bit more data, a few clever algorithms.
And the thing that's happened in this last decade or so is just amazing as a computer scientist. Like, all of you, I'm sure, use all of the AI tools out there, but every month there's a new amazing capability. And I'm probably doubly as wowed as everybody else that computers can do this.
And so, yeah, for me, I really got back into the technical work because I just don't want to miss out on this as a computer scientist. Is it an extension of search or a rewriting of how people retrieve information? I mean, I just think that the AI touches so many different elements of day-to-day life and sure, search is one of them, but it kind of covers everything.
For example, programming itself, like the way that I think about it, is very different now. Like, you know, writing code from scratch feels really hard compared to just asking the AI to do it. So what do you do then? Actually, I've written a little bit of code myself just for kicks, just for fun.
And then sometimes I've had the AI write the code for me, which was fun. I mean, just one example, I wanted to see how good our AI models were at Sudoku. So I had the AI model itself write a bunch of code that would automatically generate Sudoku puzzles and then feed them to the AI itself and then score it and so forth.
But it could just write that code, and I was talking to the engineers about it, and, you know, whatever, we had some debate back and forth. Like, I came back half an hour later and it was done. And they were kind of impressed, because honestly they don't use the AI tools for their own coding as much as I think they ought to.
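For a sense of what that half-hour harness might look like, here is a minimal sketch in Python. The puzzle generator and scorer use standard techniques; `ask_model` is a hypothetical stub standing in for whatever real model API you would actually call.

```python
import random

def generate_solved_grid():
    """Build a valid solved 9x9 Sudoku: shifted base pattern, with rows and
    columns shuffled within their bands and the digits relabeled."""
    def pattern(r, c):
        return (3 * (r % 3) + r // 3 + c) % 9
    digits = random.sample(range(1, 10), 9)
    rows = [b * 3 + r for b in random.sample(range(3), 3)
            for r in random.sample(range(3), 3)]
    cols = [b * 3 + c for b in random.sample(range(3), 3)
            for c in random.sample(range(3), 3)]
    return [[digits[pattern(r, c)] for c in cols] for r in rows]

def make_puzzle(solution, holes=40):
    """Blank out `holes` cells (0 = empty) to turn a solution into a puzzle."""
    puzzle = [row[:] for row in solution]
    cells = [(r, c) for r in range(9) for c in range(9)]
    for r, c in random.sample(cells, holes):
        puzzle[r][c] = 0
    return puzzle

def ask_model(puzzle):
    """Hypothetical stand-in for a real LLM call. This fake 'model' fills
    blanks with random digits so the harness runs end to end; swap in a
    real API call and parse the grid out of the model's response."""
    return [[v if v else random.randint(1, 9) for v in row] for row in puzzle]

def score(puzzle, answer, solution):
    """Fraction of the blanked cells the model filled in correctly."""
    blanks = [(r, c) for r in range(9) for c in range(9) if puzzle[r][c] == 0]
    return sum(answer[r][c] == solution[r][c] for r, c in blanks) / len(blanks)

if __name__ == "__main__":
    scores = []
    for _ in range(20):  # 20 auto-generated puzzles
        solution = generate_solved_grid()
        puzzle = make_puzzle(solution)
        scores.append(score(puzzle, ask_model(puzzle), solution))
    print(f"mean accuracy on blanks: {sum(scores) / len(scores):.1%}")
```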
So that's an interesting example because maybe there's a model that does Sudoku really well. Maybe there's a model that answers information questions for me about facts in the world. Maybe there's an AI model that designs houses. A lot of people are working towards these ginormous general purpose LLMs. Is that where the world goes?
Some people, I don't know who wrote this recently, said there's a god model, like there's going to be a god model. And I think that's why everyone's investing so much is if you can build the god model, you're done. You've got your AGI or whatever terms you want to use.
There's this one thing to rule them all. Or is the reality of AI that there are lots of smaller models that do application-specific things, maybe work together like in an agent system? What is the evolution of model development and how models are ultimately used to do all these cool things?
Yeah, I mean, I think if you looked 10, 15 years ago, there were different AI techniques that were used for different problems altogether. Like, the chess-playing AI was very different than image generation, which was very different again. Like, recently, the graph neural net at Google that outperformed every physics-based weather forecasting model.
I don't know if you know this, but you guys published this. It's pretty awesome. I'm pretty well versed. But it was a totally different system, and it was trained differently, and it ended up in that particular... So historically there have been different systems. And even recently, like the International Math Olympiad that we participated in, we got a silver medal as an AI, actually one point away from gold, but we actually had three different AI models in there.
There was one very formal theorem-proving model that actually did basically the best. There was one specific to geometry problems, believe it or not, that was just a special kind of AI. And then there was a general purpose language model. But since then, we've tried to take the learnings from that, that was just a couple of months ago, and tried to infuse some of the knowledge and ability from the formal prover into our general language models. That's still work in progress.
But I do think the trend is to have a more unified model. I don't know if I'd call it a god model, but certainly to have sort of shared architectures and ultimately even shared models. Right. So if that's true, you need a lot of compute to train and develop that model, that big model?
Yeah. Yeah. I mean, you definitely need a lot of compute. I've read some articles out there that just extrapolate. They're like, you know, it's 100 megawatts and a gigawatt and 10 gigawatts and 100 gigawatts. And I don't know if I'm quite a believer in that level of extrapolation, partly because the algorithmic improvements that have come over the course of the last few years may actually even be outpacing the increased compute that's put into these models.
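To make that point concrete, a back-of-envelope sketch, with made-up numbers rather than anything Google has published: if algorithmic efficiency also improves every year, effective training compute grows much faster than the megawatts alone, which is why straight-line power extrapolation can overshoot.

```python
# Illustrative assumptions only -- not Google or industry figures.
power_growth_per_generation = 10   # e.g. 100 MW -> 1 GW -> 10 GW ...
algo_gain_per_year = 3.0           # assumed algorithmic efficiency gain/year
years_per_generation = 2           # assumed cadence between model generations

effective = power_growth_per_generation * algo_gain_per_year ** years_per_generation
print(f"effective compute gain per generation: ~{effective:.0f}x")
# ~90x effective compute from a 10x power build-out: under these assumptions,
# much of the next leap comes from algorithms rather than megawatts, so naive
# power extrapolations overstate what's actually needed.
```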
So is it irrational, the build-out that's happening, with everyone talking about the NVIDIA revenue, the NVIDIA profit, the NVIDIA market cap, supporting all of what people call the hyperscalers and the growth of the infrastructure needed to build these very large-scale models using the techniques of today? Is this irrational or is it rational?
Because if it works, it's so big that it doesn't matter how much you... Well, first of all, I'm not an economist or a market watcher the way that you guys very carefully watch companies, so I just want to disclaim my abilities in this space. But I know that for us, we're building out compute as quickly as we can.
And we just have a huge amount of demand. I mean, for example, our cloud customers just want a huge amount of TPUs, GPUs, you name it. We just can't... We have to turn down customers because we just don't have the compute available. And we use it internally to train our own models, to serve our own models and so forth.
So I guess I think there are very good reasons that companies are currently building out compute at a fast pace. I just don't know that I would look at the training trends and blindly extrapolate three orders of magnitude ahead from where we are today. But the enterprise demand is out there.
You know, I mean, they want to do lots of other things, for example, running inference on all these AI models, applying them to all these new applications. Yeah, there doesn't seem to be a limit right now. And where have you seen the greatest success, or surprising success, in the application of models, whether it's in robotics or biology?
What are you seeing that you're like, "Wow, this is really working"? And where are things going to be more challenging and take longer than I think some people might be expecting? Yeah, I mean, now that you mention those, well, I would say in biology, you know, we've had AlphaFold for quite a while.
And I'm not personally a biologist, but when I talk to biologists out there, like, everybody uses it and its more recent variants. And that is, I guess, a different kind of AI. But like I said, I do think all these things tend to converge. You know, robotics, for the most part, I see in this sort of "wow" stage: like, "Wow, you could make a robot do that with just this general purpose language model, or just a little bit of fine-tuning this way or that." And it's amazing, but maybe not, for the most part, yet at the level of robustness that would make it day-to-day useful.
But you see a line of sight to it? Yeah. Yeah, I mean, I don't see any particular obstacles. But Google had the robotics business and then spun it out or sold it? We've had like five or six robotics businesses. The timing just wasn't right.
Yeah. Yeah. Unfortunately, I guess I think that was just a little too early, to be perfectly honest. Like Boston Dynamics, and what was it called, Schaft? I don't even remember all the ones. Anyway, we've had like five or six, embarrassingly. But they're very cool and they're very impressive.
It just feels kind of silly having done all of that work, and seeing now how capable these general language models are, which include, for example, vision and image, and they're multimodal and can understand the scene and everything, and not having had that at the time. Yeah, it just feels like you were sort of on a treadmill that wasn't going to get anywhere without the modern AI technology.
You spend a lot of time on core technology; do you also spend a lot of time on product visioning? Where are things going? And what are the human-computer interaction modalities going to be in the future, in a world of AI everywhere? Like, what's our life going to be like?
I mean, I guess there's water-cooler chit-chat about things like that. Care to share any? I'm trying to think of things that aren't embarrassing... struggling. But I guess it's just really hard to forecast, to think five years out, because the base technical capability of the AI is what enables the applications.
And then sometimes somebody will just whip up a little demo that you just didn't think about and it'll be kind of mind-blowing. And of course, then from demo to actually making it real in production and so forth takes time. I don't know if you've played with the Astra model, but it's just sort of live video and audio and you can chat with the AI about what's going on in your environment.
You'll give me access, right? Yeah, well, once I have access. I mean, I'm sometimes the slowest to get some of these things. But yeah, there's like a moment of wow, and you're like, oh my God, this is amazing. And then you're like, okay, well, it does it correctly like 90% of the time, but is that worth it if 10% of the time it kind of makes a mistake or takes too long or whatever?
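As an aside on why that 90% figure bites harder than it sounds: per-interaction errors compound across a multi-step task. A quick sketch, using Sergey's rough 90% number; the step counts are illustrative assumptions.

```python
# Per-step success rate (Sergey's rough figure); step counts are illustrative.
p = 0.90
for steps in (1, 3, 5, 10):
    print(f"{steps:2d} steps -> whole task succeeds {p ** steps:.0%} of the time")
# At 10 steps the task succeeds only ~35% of the time, which is why a
# 90%-reliable model can still feel unreliable in longer interactions.
```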
And then you have to work, work, work to perfect all those things, make it responsive, make it available, whatever. And then you actually end up with something kind of amazing. I heard a story that you went in, you were on site. I should have mentioned this to you before you came on stage to see if you were cool with talking about it, but here we are.
And there were like a bunch of engineers who showed you that you could use AI to write code. And it was like, well, we haven't pushed it into Gemini yet, because we want to make sure it doesn't make mistakes. And there was this hesitation culturally at Google to do that.
And you were like, no, if it writes code, push it. And a lot of people have told me this story, or I've heard this, that it was really important to hear that from you, the founder, being really clear that Google's conservatism, you know, can't rule the day today, and that we need to see Google push the envelope.
Is that accurate? Is that kind of how you've spent some time? I don't remember the specifics, to be honest, but I'm not surprised. I mean, I guess the question for me is, as Google's gotten so big, there's more to lose. I think there's a little bit of fearfulness. I mean, language models to begin with, like, we invented them basically with the transformer paper that was whatever, six, eight years ago, something like that.
And oh, Noam, by the way, is back at Google now, which is awesome. And yeah, we were too timid to deploy them, and for a lot of good reasons: like, whatever, they make mistakes, they say embarrassing things, and sometimes it's just kind of embarrassing how dumb they are.
I mean, even today, the latest and greatest models make really stupid mistakes people would never make. And at the same time, they're incredibly powerful, and they can help you do things you never would have done. And, you know, I've programmed really complicated things with my kids. Like, they'll just program it because they just ask the AI, using all these really complicated APIs and all kinds of things that would take like a month to learn.
So I just think that capability is magic, and you need to be willing to have some embarrassments and take some risks. And I think we've gotten better at that. And, well, you guys have probably seen some of our embarrassments. But you're comfortable? I mean, you have super-voting stock, you're still... I mean, you're comfortable with the embarrassments at this stage, because it's so important to do this? I mean, not particularly on the basis of my stock, but, as you know... I mean, am I comfortable?
I mean, I guess I just think of it as this something magical we're giving the world. And I think as long as we communicate it properly, like saying, look, this thing is amazing and will periodically get stuff really wrong, then I think we should put it out there and let people experiment and see what new ways they find to use it.
I just don't think this is the technology you want to keep close to the chest and hidden until it's perfect. Do you think that there are so many places AI can affect the world, and so much value to be created, that it's not really a race between Google and Meta and Amazon? Like, people frame these things as kind of a race. Is there just so much value to be created that you're working on a lot of different opportunities, and it's not really about who built the LLM that scores the best, that there's so much more to it?
I mean, how do you kind of think about the world out there and Google's place in it? I mean, I think it's very helpful to have competition in the sense that all these guys are vying, and we were number one on LMSYS for a couple of weeks, by the way, just now.
And I think last time I checked, we still beat the top model. It's just some Elo stuff. Okay, so you do care, yeah. I'm not saying... not to brag, but we've come a long way since a couple of years ago, when ChatGPT launched and we were quite a ways behind.
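For reference, the "Elo stuff": arena-style leaderboards like LMSYS rate models from pairwise human votes with an Elo-style update. A minimal sketch of the textbook formulas follows; the K-factor and ratings here are conventional illustrative choices, not the leaderboard's exact pipeline.

```python
def expected_score(r_a, r_b):
    """Probability that model A beats model B under the Elo model."""
    return 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400))

def update(r_a, r_b, a_won, k=32):
    """Apply one pairwise vote; upsets move ratings more than expected wins."""
    delta = k * ((1.0 if a_won else 0.0) - expected_score(r_a, r_b))
    return r_a + delta, r_b - delta

# A 50-point rating edge translates to roughly a 57% expected win rate:
print(f"{expected_score(1250, 1200):.0%}")
```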
I'm really pleased with all the progress we've made. So we definitely pay attention. I mean, I think it's great that there are all these AI companies out there, be it us, OpenAI, Anthropic, you name it. There's Mistral. I mean, it's a big, fast-moving field. But I guess your question is, yeah, I mean, I think there's tremendous value to humanity.
And I think if you think back, you know, like when I was in college, let's say, and there wasn't really a proper internet or like web the way that we know it today, like the amount of effort it would take to get basic information, the amount of effort it would take to communicate with people, you know, before cell phones and things.
Like, we've gained so much capability across the world. And this new AI is another big capability. And pretty much everybody in the world can get access to it in one form or another these days. And I think that's super exciting. It's awesome. >> Sorry, we have such limited time.
Sergey, thank you so much for joining us. Please join me in thanking Sergey. >> Thank you. >> Thank you. >> Thank you. (audience applauds)