
In conversation with Sergey Brin | All-In Summit 2024


Chapters

0:00 David Friedberg intros Sergey Brin
1:41 What Sergey is working on at Google
5:45 Is Google chasing a "God Model"?
8:49 Thoughts on the massive AI chip buildout
12:54 Future of human-AI interaction
14:58 Changing Google's conservative product culture
18:21 Is the "Race to AGI" overblown?

Whisper Transcript

00:00:00.000 | They wondered if there was a better way to find information on the web.
00:00:03.200 | On September 15, 1997, they registered Google as a website.
00:00:08.000 | One of the greatest entrepreneurs of our times.
00:00:11.680 | Someone who really wanted to think outside the box.
00:00:14.640 | If that sounds like it's impossible, let's try it.
00:00:17.080 | He took a back seat in recent years to other Google leaders.
00:00:20.320 | Brin is now back helping Google's efforts in artificial intelligence.
00:00:24.480 | I feel lucky that I fell into doing something that I feel really matters, getting people
00:00:31.040 | information.
00:00:49.080 | No introduction needed?
00:00:50.560 | Welcome.
00:00:51.560 | I just agreed to this last minute, as you know.
00:00:53.880 | I don't know where you pulled up that clip so fast.
00:00:56.200 | You guys are...
00:00:57.200 | The team is amazing.
00:00:58.200 | Kind of amazing.
00:00:59.200 | This is kind of amazing.
00:01:00.200 | Yeah.
00:01:01.200 | I thought...
00:01:02.200 | Sergey just asked.
00:01:03.200 | He asked to come check out the conference, and I was like, "Definitely.
00:01:08.000 | Come hang out."
00:01:09.000 | I didn't actually understand, to be perfectly honest.
00:01:11.440 | I thought you guys just kind of had a podcast and a little get together or something, but
00:01:17.280 | this is kind of mind-blowing.
00:01:18.280 | Yeah.
00:01:19.280 | Congratulations.
00:01:20.280 | Thank you.
00:01:21.280 | Well, I'm glad you came out.
00:01:22.280 | Thanks for doing it.
00:01:23.280 | Yeah.
00:01:24.280 | Yeah.
00:01:26.280 | But thanks for agreeing to chat for a little bit.
00:01:27.280 | Absolutely.
00:01:28.280 | We're going to talk for a little bit.
00:01:29.280 | So this was not on the schedule, but I thought it'd be great to talk to you, given where
00:01:34.280 | you sit in the world as AI is on the brink of and is actively changing the world.
00:01:41.080 | Obviously, you founded Google with Larry in 1998, and recently it's been reported that
00:01:48.280 | you've kind of spent a lot more time at Google working on AI.
00:01:51.640 | I thought maybe ... And a lot of industry analysts and pundits have been kind of arguing
00:01:56.120 | that LLMs and conversational AI tools are kind of an existential threat to Google search.
00:02:01.640 | That's one of the ... And I think a lot of those people don't build businesses or they
00:02:05.160 | have competitive investments, but we'll leave that to the side.
00:02:09.040 | But there's this big kind of narrative on what's going to happen to Google and where's
00:02:12.080 | Google sitting with AI.
00:02:13.080 | And I know you're spending a lot of time on it, so thanks for coming to talk about it.
00:02:15.920 | How much time are you spending at Google?
00:02:17.440 | What are you working on?
00:02:18.440 | Yeah.
00:02:19.440 | I mean, honestly, like pretty much every day.
00:02:21.280 | I mean, like I'm missing today, which is one of the reasons I was a little reluctant, but
00:02:26.560 | I'm glad I came.
00:02:30.000 | But I think as a computer scientist, I've never seen anything as exciting as all of
00:02:37.680 | the AI progress that's happened in the last few years.
00:02:42.640 | Thanks.
00:02:43.640 | No, but it's kind of mind blowing.
00:02:46.320 | When I went to grad school in the '90s, AI was like kind of like a footnote in the curriculum
00:02:53.280 | almost.
00:02:54.280 | Like, you're like, "Oh, maybe you have to do this one little test on AI.
00:02:57.640 | We tried all these different things.
00:02:59.160 | They don't really work.
00:03:00.280 | That's it.
00:03:01.280 | That's all you need to know."
00:03:03.280 | And then somehow miraculously, all these people who are working on neural nets, which was
00:03:08.280 | one of the big discarded approaches to AI in like the '60s, '70s, and so forth, just
00:03:15.840 | started to make progress.
00:03:16.840 | A little bit more compute, a little bit more data, a few clever algorithms.
00:03:23.040 | And the thing that's happened in this last decade or so is just amazing as a computer
00:03:30.400 | scientist.
00:03:31.480 | Like every month, you know, well, all of you, I'm sure, use all of the AI tools out there,
00:03:37.680 | but like every month there's like a new amazing capability.
00:03:40.640 | And I'm like probably doubly as wowed as everybody else is that computers can do this.
00:03:47.560 | And so, yeah, for me, I really got back into the technical work because I just don't want
00:03:54.760 | to miss out on this as a computer scientist.
00:03:59.680 | Is it an extension of search or a rewriting of how people retrieve information?
00:04:05.160 | I mean, I just think that the AI touches so many different elements of day-to-day life
00:04:13.280 | and sure, search is one of them, but it kind of covers everything.
00:04:19.720 | For example, programming itself, like the way that I think about it, is very different.
00:04:27.200 | Like, you know, writing code from scratch feels really hard compared to just asking
00:04:32.560 | the AI to do it.
00:04:37.800 | So what do you do then?
00:04:38.800 | Actually, I've written a little bit of code myself just for kicks, just for fun.
00:04:45.800 | And then sometimes I've had the AI write the code for me, which was fun.
00:04:53.300 | I mean, just one example, I wanted to see how good our AI models were at Sudoku.
00:05:00.720 | So I had the AI model itself write a bunch of code that would automatically generate
00:05:04.360 | Sudoku puzzles and then feed them to the AI itself and then score it and so forth.
00:05:11.080 | But it could just write that code, and I was talking to the engineers about it, and, you
00:05:17.000 | know, whatever.
00:05:18.000 | We had some debate back and forth.
00:05:19.000 | Like, I came back half an hour later, it's done.
00:05:21.400 | And they were kind of impressed because they don't honestly use the AI tools for their
00:05:25.340 | own coding as much as I think they ought to.
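A minimal sketch of the evaluation loop described here: generate Sudoku puzzles programmatically, hand each one to a model, and score the answers against the known solutions. Everything below is an assumption for illustration; in particular, ask_model() is a hypothetical placeholder, not the actual tooling or API that was used.

```python
import random

BASE, SIDE = 3, 9


def solved_grid():
    """Build a valid, fully solved Sudoku grid by shuffling a base pattern."""
    def pattern(r, c):
        return (BASE * (r % BASE) + r // BASE + c) % SIDE

    def shuffled(seq):
        seq = list(seq)
        return random.sample(seq, len(seq))

    groups = range(BASE)
    rows = [g * BASE + r for g in shuffled(groups) for r in shuffled(groups)]
    cols = [g * BASE + c for g in shuffled(groups) for c in shuffled(groups)]
    nums = shuffled(range(1, SIDE + 1))
    return [[nums[pattern(r, c)] for c in cols] for r in rows]


def make_puzzle(solution, holes=45):
    """Blank out `holes` cells (0 = empty). No uniqueness check; this is only a sketch."""
    puzzle = [row[:] for row in solution]
    cells = [(r, c) for r in range(SIDE) for c in range(SIDE)]
    for r, c in random.sample(cells, holes):
        puzzle[r][c] = 0
    return puzzle


def ask_model(puzzle):
    """Hypothetical stand-in for the model call; swap in a real API.
    As written it just echoes the puzzle back, so only the given cells score."""
    return [row[:] for row in puzzle]


def score(answer, solution):
    """Fraction of cells in the answer grid that match the solution."""
    total = SIDE * SIDE
    correct = sum(answer[r][c] == solution[r][c]
                  for r in range(SIDE) for c in range(SIDE))
    return correct / total


if __name__ == "__main__":
    results = []
    for _ in range(10):  # ten random puzzles
        solution = solved_grid()
        puzzle = make_puzzle(solution)
        results.append(score(ask_model(puzzle), solution))
    print(f"mean accuracy over {len(results)} puzzles: {sum(results) / len(results):.1%}")
```

With the echo stub, the score settles around 44% (the 36 given cells out of 81), which provides a floor to compare a real model's answers against.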
00:05:29.520 | So that's an interesting example because maybe there's a model that does Sudoku really well.
00:05:34.920 | Maybe there's a model that answers information questions for me about facts in the world.
00:05:41.760 | Maybe there's an AI model that designs houses.
00:05:44.840 | A lot of people are working towards these ginormous general purpose LLMs.
00:05:51.280 | Is that where the world goes?
00:05:52.800 | Some people, I don't know who wrote this recently, said there's a god model, like there's going
00:05:56.040 | to be a god model.
00:05:57.040 | And I think that's why everyone's investing so much is if you can build the god model,
00:06:00.520 | you're done.
00:06:01.520 | You've got your AGI or whatever terms you want to use.
00:06:04.280 | There's this one thing to rule them all.
00:06:06.480 | Or is the reality of AI that there are lots of smaller models that do application-specific
00:06:11.800 | things, maybe work together like in an agent system?
00:06:16.520 | What is the evolution of model development and how models are ultimately used to do all
00:06:20.520 | these cool things?
00:06:22.160 | Yeah, I mean, I think if you looked 10, 15 years ago, there were different AI techniques
00:06:30.680 | that were used for different problems altogether, like the chess-playing AI was very different
00:06:36.520 | than image generation, which was very different.
00:06:42.040 | Like recently the graph neural net at Google that outperformed every physics forecasting
00:06:47.480 | model.
00:06:48.480 | I don't know if you know this, but you guys published this.
00:06:49.480 | It's pretty awesome.
00:06:50.480 | I'm pretty cursed.
00:06:51.480 | But it was like a totally different system, and it was trained differently, and it ended
00:06:56.080 | up in that particular...
00:06:57.940 | So historically there have been different systems, and even recently, like the International
00:07:04.960 | Math Olympiad that we participated in, we got a silver medal as an AI, actually one point
00:07:10.680 | away from gold, but we actually had three different AI models in there.
00:07:16.240 | There was one very formal theorem-proving model that actually did basically the best.
00:07:21.400 | There was one specific to geometry problems, believe it or not, that was just a special
00:07:26.040 | kind of AI.
00:07:27.640 | And then there was a general purpose language model.
00:07:31.800 | But since then, we've tried to take the learnings from that, that was just a couple months ago,
00:07:37.080 | and try to infuse some of the knowledge and ability from the formal prover into our general
00:07:43.320 | language models. That's still a work in progress.
00:07:46.800 | But I do think the trend is to have a more unified model.
00:07:52.680 | I don't know if I'd call it a god model, but to have certainly sort of shared architectures
00:07:59.120 | and ultimately even shared models.
00:08:02.440 | Right.
00:08:03.440 | So if that's true, you need a lot of compute to train and develop that model, that big
00:08:11.840 | model?
00:08:12.840 | Yeah.
00:08:13.840 | Yeah.
00:08:14.840 | I mean, you definitely need a lot of compute.
00:08:16.840 | I think like I've read some articles out there that just like extrapolate, they're like,
00:08:23.600 | you know, it's like 100 megawatts and a gigawatt and 10 gigawatts and 100 gigawatts.
00:08:27.920 | And I don't know if I'm quite a believer in, you know, that level of extrapolation, partly
00:08:34.640 | because also the algorithmic improvements that have come over the course of the last
00:08:40.320 | few years, maybe are actually even outpacing the increased compute that's put into these
00:08:47.880 | models.
00:08:49.280 | So is it irrational, the build out that's happening, everyone talking about the NVIDIA
00:08:55.920 | revenue, the NVIDIA profit, the NVIDIA market cap, supporting all of what people call the
00:09:01.480 | hyperscalers and the growth of the infrastructure needed to build these very large scale models
00:09:06.320 | using the techniques of today.
00:09:08.480 | Is this irrational or is it rational?
00:09:10.640 | Because if it works, it's so big that it doesn't matter how much you...
00:09:14.160 | Well, first of all, I'm not like an economist or like a market watcher the way that you
00:09:17.600 | guys very carefully watch companies.
00:09:20.720 | So I just want to disclaim my abilities in the space.
00:09:25.080 | I think that I know for us, we're kind of building out compute as quickly as we can.
00:09:30.840 | And we just have a huge amount of demand.
00:09:32.720 | I mean, for example, our cloud customers just want a huge amount of TPUs, GPUs, you name it.
00:09:41.280 | We just can't...
00:09:42.280 | We have to turn down customers because we just don't have the compute available.
00:09:47.360 | And we use it internally to train our own models, to serve our own models and so forth.
00:09:51.120 | So I guess I think there are very good reasons that companies are currently building out
00:09:57.120 | compute at a fast pace.
00:09:59.600 | I just don't know that I would look at the training trends and extrapolate three orders
00:10:06.800 | of magnitude ahead just blindly from where we are today.
00:10:09.960 | But the enterprise demand is there, out there.
00:10:12.800 | You know, I mean, they want to do lots of other things, for example, running inference
00:10:16.800 | on all these AI models, applying them to all these new applications.
00:10:22.400 | Yeah, there doesn't seem to be a limit right now.
00:10:29.640 | And where have you seen the greatest success, surprising success in the application of models,
00:10:37.440 | whether it's in robotics or biology?
00:10:40.000 | What are you seeing that you're like, "Wow, this is really working"?
00:10:42.760 | And where are things going to be more challenging and take longer than I think some people might
00:10:47.280 | be expecting?
00:10:48.840 | Yeah, I mean, now that you mention those, well, I would say in biology, you know, we've
00:10:56.000 | had AlphaFold for quite a while.
00:10:58.800 | And I'm not personally a biologist, but when I talk to biologists out there, like everybody
00:11:03.320 | uses it, and its more recent variants.
00:11:08.960 | And that is, I guess, a different kind of AI.
00:11:11.120 | But like I said, I do think all these things tend to converge.
00:11:16.400 | You know, robotics, for the most part, I see in this sort of "wow" stage, like, "Wow, you
00:11:24.440 | could make a robot do that with just this general purpose language model or just a little
00:11:29.960 | bit of fine-tuning this way or that."
00:11:31.600 | And it's like, amazing, but maybe not, for the most part, yet at the level of robustness
00:11:41.560 | that would make it like day-to-day useful.
00:11:43.440 | But you see a line of sight to it?
00:11:46.560 | Yeah.
00:11:47.560 | Yeah, I mean, it would be, I don't see any particular obstacles.
00:11:51.760 | But Google had the robotics business and then spun it out or sold it?
00:11:56.320 | We've had like five or six robotics businesses.
00:11:59.600 | They just weren't, the timing wasn't right.
00:12:02.480 | Yeah.
00:12:03.480 | Yeah.
00:12:04.480 | Unfortunately, I don't know, I guess I think that was just a little too early, to be perfectly
00:12:08.680 | honest.
00:12:09.680 | Like Boston Dynamics, what was it called, Stark, Stamp, I don't even remember all the
00:12:15.480 | ones.
00:12:16.480 | We had, anyway, we've had like five or six, embarrassingly.
00:12:20.160 | But they're very cool and they're very impressive.
00:12:28.000 | It just feels kind of silly having done all of that work and seeing now how capable these
00:12:35.960 | general language models are that include, for example, vision and image and they're
00:12:40.360 | multimodal and they can understand the scene and everything and not having had that at
00:12:45.760 | the time.
00:12:46.760 | Yeah, it just feels like you were sort of on a treadmill that wasn't going to get anywhere
00:12:51.520 | without the modern AI technology.
00:12:54.360 | You spend a lot of time on core technology, do you also spend a lot of time on product
00:12:58.280 | visioning?
00:12:59.280 | Where are things going?
00:13:00.440 | And what the human-computer interaction modality is going to be in the future
00:13:04.540 | in a world of AI everywhere, like what's our life going to be like?
00:13:08.680 | I mean, I guess there's water cooler chit-chat about things like that.
00:13:16.120 | Care to share any?
00:13:17.120 | I'm trying to think of things that aren't embarrassing, struggling, but I guess it's
00:13:29.560 | like just really hard to forecast, to think five years out, because the base
00:13:39.280 | technical capability of the AI is what enables the applications.
00:13:44.480 | And then sometimes somebody will just whip up a little demo that you just didn't think
00:13:49.080 | about and it'll be kind of mind-blowing.
00:13:58.480 | And of course, then from demo to actually making it real in production and so forth
00:14:02.480 | takes time.
00:14:03.480 | I don't know if you've played with the Astra model, but it's just sort of live video and
00:14:09.160 | audio and you can chat with the AI about what's going on in your environment.
00:14:12.960 | You'll give me access, right?
00:14:15.040 | Yeah, well, once I have access.
00:14:18.400 | I mean, I'm sort of sometimes the slowest to get some of these things.
00:14:25.040 | But it's, yeah, there's like a moment of wow.
00:14:31.720 | And you're like, oh my God, this is amazing.
00:14:34.200 | And then you're like, okay, well, it does it correctly like 90% of the time, but am
00:14:39.640 | I really like, is that then worth it if 10% of the time it kind of makes a mistake or
00:14:44.600 | takes too long or whatever.
00:14:47.360 | And then you have to work, work, work, work, work, work, work to get to perfect all those
00:14:51.160 | things, make it responsive, make it available, whatever.
00:14:54.080 | And then you actually end up with something kind of amazing.
00:14:57.800 | I heard a story that you went in, you were on site, I should have mentioned this to you
00:15:04.160 | before you came on stage, to see if you were cool with talking about it, but here we are.
00:15:08.200 | And there were like a bunch of engineers who showed you that you could like use AI to write code.
00:15:12.680 | And it was like, well, we haven't pushed it in Gemini yet, because we want to make sure
00:15:16.440 | it doesn't make mistakes.
00:15:17.760 | And there was this like hesitation culturally at Google to do that.
00:15:20.800 | And you were like, no, if it writes code, push it. And a lot of people
00:15:24.840 | have told me this story, or I've heard this, that it was really important
00:15:30.240 | to hear that from you, the founder, being really clear that Google's conservatism, you
00:15:36.720 | know, can't rule the day today, and we need to kind of see Google push the envelope.
00:15:42.200 | Is that accurate?
00:15:43.200 | Is that kind of how you've spent some time?
00:15:45.680 | I don't remember the specifics just to be honest, but I'm not surprised.
00:15:53.040 | I mean, I guess that's the question for me is like, as Google's gotten so big, there's
00:15:57.040 | more to lose.
00:15:59.360 | I think there's like this, yeah, I think there's a little bit of fearfulness, I mean, language
00:16:04.960 | models to begin with, like we invented them basically with the Transformer paper that was
00:16:09.400 | whatever, six, eight years ago, something like that.
00:16:14.080 | And oh, Noam, by the way, is back at Google now, which is awesome.
00:16:19.600 | And yeah, we were too timid to deploy them.
00:16:25.420 | And you know, for a lot of good reasons, like whatever, they make mistakes, they say embarrassing
00:16:30.360 | things, whatever, you know, they're, you know, sometimes they're just like, kind of embarrassing
00:16:36.760 | how dumb they are.
00:16:37.760 | I mean, even today, like, the latest and greatest things make really stupid mistakes people
00:16:43.160 | would never make.
00:16:45.760 | And at the same time, like they're incredibly powerful, and they can help you do things
00:16:51.800 | you never would have done.
00:16:53.680 | And you know, like, I've like programmed really complicated things with my kids, like they'll
00:16:59.480 | just program it because they just ask the AI, using all these really complicated APIs
00:17:04.360 | and all kinds of things that would take like a month to learn.
00:17:09.480 | So I just think that that capability is magic.
00:17:13.960 | And you need to be willing to have some embarrassments, and take some risks.
00:17:22.260 | And I think we've gotten better at that.
00:17:23.820 | And well, you guys have probably seen some more embarrassments.
00:17:28.320 | But you're comfortable.
00:17:29.320 | I mean, you have super voting stock, you're still like, I mean, you're comfortable with
00:17:32.640 | the embarrassments at this stage, because it's so important to do this, like,
00:17:35.720 | I mean, not particularly on the basis of my stock, but as you know, I mean, but am
00:17:41.960 | I comfortable?
00:17:42.960 | I mean, I guess I just think of it as this something magical we're giving the world.
00:17:53.140 | And I think as long as we communicate it properly, like saying, like, look, this thing is amazing.
00:17:59.720 | And we'll periodically get stuff really wrong, then I think we should put it out there and
00:18:07.840 | let people experiment and see what new ways they find to use it.
00:18:12.080 | I just don't think this is the technology you want to just kind of keep close to the
00:18:16.920 | chest and hidden until it's like, perfect.
00:18:20.880 | Do you think that there's so many places that AI can affect the world and so much value
00:18:25.480 | to be created, that it's not really a race between Google and Meta and Amazon, like people
00:18:32.000 | frame these things as kind of a race, is there just so much value to be created that you're
00:18:35.760 | working on a lot of different opportunities, and it's not really about who built the
00:18:41.400 | LLM that scores the best, that there's so much more to it?
00:18:45.360 | I mean, how do you kind of think about the world out there and Google's place in it?
00:18:51.680 | I mean, I think it's very helpful to have competition in the sense that all these guys
00:18:58.120 | are vying, and we were number one on LMSYS for a couple weeks, by the way, just now.
00:19:05.160 | And I think last time I checked, we still beat the top model.
00:19:08.360 | There's just some Elo stuff.
00:19:09.360 | Okay, so you do care, yeah.
00:19:12.160 | I'm not saying, not to brag, but we've come a long way since a couple whatever years
00:19:22.800 | ago when ChatGPT launched, and we were quite a ways behind.
00:19:28.420 | I'm really pleased with all the progress we've made.
00:19:30.560 | So we definitely pay attention.
00:19:32.880 | I mean, I think it's great that there are all these AI companies out there, be it us,
00:19:38.560 | OpenAI, Anthropic, you name it.
00:19:42.000 | There's Mistral.
00:19:43.000 | I mean, it's a big, fast-moving field.
00:19:48.600 | But I guess your question is, yeah, I mean, I think there's tremendous value to humanity.
00:19:55.040 | And I think if you think back, you know, like when I was in college, let's say, and there
00:20:02.600 | wasn't really a proper internet or like web the way that we know it today, like the amount
00:20:07.520 | of effort it would take to get basic information, the amount of effort it would take to communicate
00:20:13.200 | with people, you know, before cell phones and things.
00:20:18.240 | Like we've gained so much capability across the world.
00:20:24.280 | But the sort of, the new AI is another big capability.
00:20:29.600 | And pretty much everybody in the world can get access to it in one form or another these
00:20:33.520 | days.
00:20:34.520 | And I think that's super exciting.
00:20:35.520 | It's awesome.
00:20:36.520 | >> Sorry, we have such limited time.
00:20:38.880 | Sergey, thank you so much for joining us.
00:20:40.520 | Please join me in thanking Sergey.
00:20:41.520 | >> Thank you.
00:20:42.520 | [ Applause ]
00:20:42.520 | >> Thank you.
00:20:43.520 | >> Thank you.
00:20:44.520 | [ Applause ]