Sundar Pichai: CEO of Google and Alphabet | Lex Fridman Podcast #471

Chapters
0:00 Episode highlight
2:08 Introduction
2:18 Growing up in India
8:27 Advice for young people
10:09 Styles of leadership
14:29 Impact of AI in human history
26:39 Veo 3 and future of video
34:24 Scaling laws
38:09 AGI and ASI
44:33 P(doom)
51:24 Toughest leadership decisions
62:32 AI Mode vs Google Search
75:22 Google Chrome
90:52 Programming
97:37 Android
102:49 Questions for AGI
108:05 Future of humanity
111:26 Demo: Google Beam
119:09 Demo: Google XR Glasses
121:54 Biggest invention in human history
It was a five-year waiting list, and we got a rotary telephone, but it dramatically changed 00:00:08.100 |
You know, people would come to our house to make calls to their loved ones. 00:00:11.440 |
You know, I would have to go all the way to the hospital to get blood test records, and 00:00:16.100 |
it would take two hours to go, and they would say, sorry, it's not ready. 00:00:23.920 |
So as a kid, like, I mean, this light bulb went in my head, you know, this power of technology 00:00:35.240 |
So they would get water in these trucks, maybe eight buckets per household. 00:00:41.060 |
So me and my brother, sometimes my mom, we would wait in line, get that and bring it back 00:00:47.460 |
Many years later, like, we had running water, and we had a water heater, and you would get 00:00:57.720 |
I mean, like, so, you know, for me, everything was discrete like that. 00:01:01.220 |
And so I've always had this thing, you know, first-hand feeling of, like, how technology 00:01:08.140 |
can dramatically change, like, your life and, like, the opportunity it brings. 00:01:13.560 |
I think if p(doom) is actually high, at some point, all of humanity is, like, aligned in making 00:01:21.360 |
And so we'll actually make more progress against it, I think. 00:01:24.780 |
So the irony is, so there is a self-modulating aspect there. 00:01:30.740 |
Like, I think if humanity collectively puts their mind to solving a problem, whatever it is, 00:01:37.000 |
So because of that, I think I'm optimistic on the p(doom) scenarios. 00:01:42.740 |
But that doesn't mean, I think the underlying risk is actually pretty high. 00:01:49.480 |
But I'm, you know, I have a lot of faith in humanity kind of rising up to the, to meet 00:01:55.240 |
Take me through that experience when there's all these articles saying, you're the wrong 00:02:05.700 |
The following is a conversation with Sundar Pichai, the CEO of Google and Alphabet, on this, the Lex Fridman Podcast. 00:02:17.080 |
Your life story is inspiring to a lot of people. 00:02:22.200 |
You grew up in India, whole family living in a humble two-room apartment, very little, 00:02:31.420 |
And from those humble beginnings, you rose to lead a $2 trillion technology company. 00:02:40.760 |
So if you could travel back in time and tell that, let's say, 12-year-old Sundar, that you're 00:02:46.120 |
now leading one of the largest companies in human history, what do you think that young 00:02:50.860 |
I would have probably laughed it off, you know, probably too far-fetched to imagine or 00:02:59.920 |
You would have to explain the internet first. 00:03:03.140 |
I mean, computers to me at that time, you know, I was 12 in 1984. 00:03:08.380 |
So probably, you know, by then I had started reading about them. 00:03:28.040 |
Definitely like fond memories of playing cricket outside the home. 00:03:34.880 |
All the neighborhood kids would come out and we would play until it got dark and we couldn't 00:03:45.400 |
Everything would drive through and you would just continue playing, right? 00:03:51.140 |
You know, pre-computers was a lot of free time. 00:03:53.720 |
Now that I think about it, now you have to go and seek that quiet solitude or something. 00:04:00.640 |
Papers, books is how I gained access to the world's information at the time, if you will. 00:04:13.620 |
His English, you know, his handwriting till today is the most beautiful handwriting I've ever seen. 00:04:23.260 |
And so he kind of got me introduced into books. 00:04:31.580 |
And, you know, that was there in my family throughout. 00:04:34.680 |
So lots of books, trashy books, good books, everything from Ayn Rand to books on philosophy to stupid crime novels. 00:04:44.900 |
So books was a big part of my life, but that kind of, this whole, it's not surprising I ended up at Google because Google's mission kind of always resonated deeply with me. 00:04:54.360 |
This access to knowledge, I was hungry for it, but definitely have, you know, fond memories of my childhood. 00:05:05.520 |
You know, every aspect of technology I had to wait for a while. 00:05:09.640 |
I've obviously spoken before about how long it took for us to get a phone, about five years. 00:05:24.760 |
You know, people would come to our house to make calls to their loved ones. 00:05:28.100 |
You know, I would have to go all the way to the hospital to get blood test records. 00:05:40.940 |
So as a kid, like, I mean, this light bulb went in my head. 00:05:44.380 |
You know, this power of technology to kind of change people's lives. 00:05:52.400 |
So they would get water in these trucks, maybe eight buckets per household. 00:05:57.740 |
So me and my brother, sometimes my mom, we would wait in line, get that and bring it back home. 00:06:05.420 |
Many years later, like, we had running water and we had a water heater. 00:06:10.360 |
And you would get hot water to take a shower. 00:06:13.120 |
I mean, like, so, you know, for me, everything was discrete like that. 00:06:18.620 |
And so I've always had this thing, you know, first-hand feeling of, like, how technology can dramatically change, like, your life and, like, the opportunity it brings. 00:06:30.100 |
So, you know, that was kind of a subliminal takeaway for me throughout growing up. 00:06:36.280 |
And, you know, I kind of actually observed it and felt it, you know. 00:06:39.960 |
So, we had to convince my dad for a long time to get a VCR. 00:06:50.060 |
But, you know, because before that, you only had, like, kind of one TV channel. 00:06:58.200 |
And so, you know, you can watch movies or something like that. 00:07:03.200 |
But this was by the time I was in 12th grade, we got a VCR. 00:07:07.400 |
You know, it was a, like, a Panasonic, which we had to go to some, like, shop, which had kind of smuggled it in, I guess. 00:07:15.680 |
But then being able to record, like, a World Cup football game and then, or, like, get bootlegged videotapes and watch movies. 00:07:26.780 |
So, like, you know, I had these discrete memories growing up. 00:07:29.900 |
And so, you know, it always left me with the feeling of, like, how getting access to technology drives that step change in your life. 00:07:38.020 |
I don't think you'll ever be able to equal the first time you get hot water. 00:07:41.800 |
To have that convenience of going and opening a tap and have hot water come out? 00:07:51.860 |
If you look at human history, just those plots that look at GDP across 2,000 years, and you see that exponential growth to where most of the progress happened since the Industrial Revolution. 00:08:07.320 |
So, our ability to understand how great we have it and also how quickly technology can improve is quite poor. 00:08:18.500 |
You know, I go back to India now, the power of mobile. 00:08:22.100 |
You know, it's mind-blowing to see the progress through the arc of time. 00:08:26.540 |
What advice would you give to young folks listening to this all over the world who look up to you and find your story inspiring? 00:08:36.820 |
Who want to be maybe the next Sundar Pichai, who want to start, create companies, build something that has a lot of impact on the world? 00:08:43.960 |
Look, you have a lot of luck along the way, but you obviously have to make smart choices. 00:08:51.840 |
But when you do things, I think it's important to kind of get that, listen to your heart and see whether you actually enjoy doing it. 00:09:00.200 |
That feeling of, if you love what you do, it's so much easier. 00:09:06.320 |
And you're going to see the best version of yourself. 00:09:11.880 |
I think it's tough to find things you love doing. 00:09:16.200 |
But I think kind of listening to your heart a bit more than your mind in terms of figuring out what you want to do, I think is one of the best things I would tell people. 00:09:26.160 |
The second thing is, I mean, trying to work with people who stretch you. At various points in my life, 00:09:32.600 |
I've worked with people who I felt were better than me. 00:09:35.920 |
I kind of like, you know, you almost are sitting in a room talking to someone and they're like, wow. 00:09:40.820 |
Like, you know, you know, and you want that feeling a few times. 00:09:44.200 |
Trying to get yourself in a position where you're working with people who you feel are kind of like stretching your abilities is what helps you grow, I think. 00:09:55.840 |
So, putting yourself in uncomfortable situations. 00:10:01.620 |
So, I think being open-minded enough to kind of put yourself in those positions is maybe another thing I would say. 00:10:08.860 |
What lessons can we learn, maybe from an outsider perspective, for me, looking at your story and having gotten to know you a bit. 00:10:17.120 |
Usually when I think of somebody who has had a journey like yours and climbs to the very top of leadership, in a cutthroat world, they're usually going to be a bit of an asshole. 00:10:28.940 |
So, what wisdom are we supposed to draw from the fact that your general approach is of balance, of humility, of kindness, listening to everybody? 00:10:43.840 |
I have the same emotions all of us do, right, in the context of work and everything. 00:10:48.960 |
But a few things, right, I think, you know, I, over time, I figured out the best way to get the most out of people. 00:11:00.100 |
You know, you kind of find mission-oriented people who are on the shared journey, who have this inner drive to excellence, to do the best. 00:11:09.080 |
And, you know, you kind of motivate people and you can achieve a lot that way, right? 00:11:19.300 |
But have there been times, like, you know, I lose it? 00:11:23.060 |
But, you know, not, maybe less often than others. 00:11:30.640 |
Because, you know, I find it's not needed to achieve what you need to do. 00:11:35.240 |
So, losing your shit has not been productive. 00:11:43.280 |
Like, but you actually want them to do the right thing. 00:11:45.580 |
And so, you know, maybe there's a bit of, like, sports. 00:11:51.720 |
In football, in soccer that is, coaches, you know, people often talk about, like, man management, right? 00:12:01.960 |
I think there is an element of that in our lives. 00:12:04.660 |
How do you get the best out of the people you work with? 00:12:07.720 |
You know, at times you're working with people who are so committed to achieving, if they've done something wrong, they feel it more than you do, right? 00:12:16.720 |
So, you treat them differently than, you know, occasionally there are people who you need to clearly let them know, like, that wasn't okay or whatever it is. 00:12:24.920 |
But I've often found that not to be the case. 00:12:27.860 |
And sometimes the right words at the right time, spoken firmly, can reverberate through time. 00:12:37.360 |
You know, people can sometimes see that, like, you know, you're unhappy without you saying it. 00:12:43.420 |
And so, sometimes the silence can deliver that message even more. 00:12:49.260 |
Who's the greatest soccer player of all time? 00:12:55.260 |
I'm going to make news, you know, with this question. 00:12:59.440 |
I will tell the truthful answer because the truthful answer. 00:13:04.600 |
You know, it's been interesting because my son is a big Cristiano Ronaldo fan. 00:13:10.080 |
And so, we've had to watch El Clásicos together, you know, with that dynamic in there. 00:13:20.280 |
I mean, I've never seen an athlete more committed to that kind of excellence. 00:13:30.860 |
Yeah, when I see Lionel Messi, you just are in awe that humans are able to achieve that level of greatness and genius and artistry. 00:13:39.600 |
When we talk, we'll talk about AI, maybe robotics and this kind of stuff. 00:13:43.300 |
That level of genius, I'm not sure you can possibly match by AI in a long time. 00:13:50.660 |
And you have that kind of greatness in other disciplines. 00:13:52.860 |
But in sport, you get to visually see it unlike anything else. 00:13:58.680 |
And just the timing, the movement, it's just genius. 00:14:03.680 |
I had the chance to see him a couple of weeks ago. 00:14:05.680 |
He played in San Jose, so, against the Quakes. 00:14:12.200 |
I was a fan on the, had good seats, knew where he would play in the second half, hopefully. 00:14:16.940 |
And even at his age, just watching him when he gets the ball, that movement, you know, you're right, that special quality. 00:14:24.420 |
It's tough to describe, but you feel it when you see it, yeah. 00:14:28.940 |
If we rank all the technological innovations throughout human history, let's go back, maybe the history of human civilization is 12,000 years ago. 00:14:41.140 |
And you rank them by the, how much of a productivity multiplier they've been. 00:14:46.340 |
So, we can go to electricity or the labor mechanization of the industrial revolution, or we can go back to the first agricultural revolution, 10,000 years ago. 00:14:57.500 |
In that long list of inventions, do you think AI, when history is written 1,000 years from now, do you think it has a chance to be the number one productivity multiplier? 00:15:09.060 |
Look, many years ago, I think it might have been 2017 or 2018. 00:15:12.020 |
You know, I said at the time, like, you know, AI is the most profound technology humanity will ever work on. 00:15:18.540 |
It'll be more profound than fire or electricity. 00:15:26.020 |
You know, when you asked this question, I was thinking, well, do we have a recency bias, right? 00:15:31.660 |
You know, like in sports, it's very tempting to call the current person you're seeing the greatest player, right? 00:15:40.160 |
And, you know, I do think, from first principles, I would argue AI will be bigger than all of those. 00:15:51.240 |
You know, two years ago, I had to go through a surgery, and then I processed that there was a point in time people didn't have anesthesia when they went through these procedures. 00:15:58.960 |
At that moment, I was like, that has got to be the greatest invention humanity has ever done, right? 00:16:05.060 |
So, look, we don't know what it is to have lived through those times. 00:16:09.980 |
But, you know, many of what you're talking about were kind of these general things, which pretty much affected everything, you know, electricity or internet, etc. 00:16:20.980 |
But I don't think we've ever dealt with a technology, both which is progressing so fast, becoming so capable. 00:16:34.260 |
And the main unique, it's recursively self-improving, right? 00:16:39.960 |
And so, the fact is, it's the first technology that will kind of dramatically accelerate creation itself, like creating things, building new things, can improve and achieve things on its own. 00:16:55.280 |
Right, I think, like puts it in a different league, right? 00:17:00.580 |
And so, I think the impact it will end up having will far surpass everything we've seen before. 00:17:08.720 |
So, obviously, with that comes a lot of important things to think and wrestle with, but I definitely think that will end up being the case. 00:17:15.420 |
Especially if it gets to the point of where we can achieve superhuman performance on the AI research itself. 00:17:20.580 |
So, it's a technology that may, it's an open question, but it may be able to achieve a level to where the technology itself can create itself better than it could yesterday. 00:17:32.860 |
It's like the Move 37 of AI research or whatever it is, right? 00:17:37.220 |
Like, you know, and when, yeah, you're right, when it can do novel, self-directed research. 00:17:45.120 |
Obviously, for a long time, we'll have hopefully always humans in the loop and all that stuff, and these are complex questions to talk about. 00:17:53.300 |
But, yes, I think the underlying technology, you know, I've said this, like, if you watched AlphaGo start from scratch, be clueless, and, like, become better through the course of a day, you know, it kind of really hits you when you see that happen. 00:18:13.560 |
Even our, like, the Veo 3 models, if you sample the models when they were, like, 30% done and 60% done and looked at what they were generating, and you kind of see how it all comes together, it's kind of, like, I would say, it's kind of inspiring, a little bit unsettling, right, as a human. 00:18:35.160 |
Well, the interesting thing of the Industrial Revolution, electricity, like you mentioned, you can go back to the, again, the agricultural. 00:18:44.920 |
There's what's called the Neolithic package of the first agricultural revolution. 00:18:50.580 |
It wasn't just that the nomads settled down and started planting food, but all this other kind of technology was born from that, and it's included in this package. 00:19:04.860 |
It's, there's these ripple effects, second and third order effects that happen. 00:19:08.940 |
Everything from something seemingly silly, but actually profound, like pottery, which can store liquids and food, to something we kind of take for granted, like social hierarchies and political hierarchies. 00:19:23.680 |
So, like, early government was formed, because it turns out if humans stop moving and have some surplus food, they start coming up with, they get bored, and they start coming up with interesting systems, and then trade emerges, which turns out to be a really profound thing. 00:19:40.820 |
And, like I said, government, I mean, there's just second and third order effects from that, including that package is incredible, and probably extremely difficult, if you'll ask one of the people in the nomadic tribes to predict that, that would be impossible. 00:19:55.100 |
It's difficult to predict, but all that said, what do you think are some of the early things we might see in the quote-unquote AI package? 00:20:05.920 |
I mean, most of it, probably, we don't know today, but, like, you know, the one thing which we can tangibly start seeing now is, you know, obviously, with the coding progress, you got a sense of it. 00:20:17.760 |
It's going to be so easy to imagine, like, thoughts in your head, translating that into things that exist, that'll be part of the package, right? 00:20:26.280 |
Like, it's going to empower almost all of humanity to kind of express themselves. 00:20:33.880 |
Maybe in the past, you could have expressed with words, but, like, you could kind of build things into existence, right? 00:20:45.940 |
You know, I've been amazed at what people have put out online with Veo 3, but it takes a bit of work, right? 00:20:51.320 |
You have to stitch together a set of prompts, but all this is going to get better. 00:20:55.680 |
The thing I always think about, this is the worst it'll ever be, right? 00:21:01.760 |
Yeah, it's interesting, you went there as kind of a first thought, sort of an exponential increase of access to creativity. 00:21:11.360 |
Software, creation, are you creating a program, a piece of content to be shared with others, games down the line, all of that, like, just becomes infinitely more possible. 00:21:25.480 |
Well, I think the big thing is that it makes it accessible. 00:21:29.060 |
It unlocks the cognitive capabilities of the entire 8 billion. 00:21:33.600 |
Look, think about 40 years ago, maybe in the U.S. there were five people who could do what you were doing. 00:21:40.820 |
Like, go do an interview, you know, but today, think about with YouTube and other products, etc., like, how many more people are doing it? 00:21:51.740 |
So, I think this is what technology does, right? 00:21:54.360 |
Like, when the Internet created blogs, you know, you heard from so many more people. 00:22:01.340 |
But with AI, I think that number won't be in the few hundreds of thousands. 00:22:06.940 |
It'll be tens of millions of people, maybe even a billion people, like, putting out things into the world in a deeper way. 00:22:17.340 |
And I think it'll change the landscape of creativity, and it makes a lot of people nervous. 00:22:21.440 |
Like, for example, whatever, Fox, MSNBC, CNN are really nervous about this part. 00:22:27.800 |
Like, you mean this dude in a suit could just do this? 00:22:30.780 |
And YouTube and thousands of others, tens of thousands, millions of other creators can do the same kind of thing? 00:22:38.680 |
And now, you get a podcast from NotebookLM that's about five to ten times better than any podcast I've ever done. 00:22:51.140 |
Because I, on the podcasting front, I'm a fan of podcasts much more than I am a fan of being a host or whatever. 00:22:59.440 |
If there's great podcasts that are both AIs, I'll just stop doing this podcast. 00:23:05.180 |
But you have to evolve, and you have to change, and that makes people really nervous, I think. 00:23:11.040 |
The only thing I may say is, I do think, like, in a world in which there are two AI, I think people value and choose. 00:23:19.020 |
Just like in chess, you and I would never watch Stockfish 10 or whatever and AlphaGo play against each other. 00:23:30.080 |
But Magnus Carlsen and Gukesh, that game would be much more fascinating to watch. 00:23:36.180 |
Like, one way to say is, you'll have a lot more content, and so you will be listening to AI-generated content because sometimes it's efficient, etc. 00:23:45.120 |
But the premium experiences you value might be a version of, like, the human essence, wherever it comes through. 00:23:52.620 |
Going back to what we talked earlier about watching Messi dribble the ball. 00:23:56.800 |
One day, I'm sure a machine will dribble much better than Messi. 00:23:59.220 |
But I don't know whether it would evoke that same emotion in us. 00:24:03.380 |
So, I think that would be fascinating to see. 00:24:05.240 |
I think the element of podcasting or audiobooks that is about information gathering, that part might be removed. 00:24:15.640 |
Or that might be more efficiently and in a compelling way done by AI. 00:24:20.400 |
But then it would be just nice to hear humans struggle with the information, contend with the information, try to internalize it, combine it with the complexity of our own emotions and consciousness and all that kind of stuff. 00:24:33.020 |
But if you actually want to find out about a piece of history, you go to Gemini. 00:24:38.760 |
If you want to see Lex struggle with that history, then you look, or other humans, you look at that. 00:24:46.440 |
But the point is, it's going to change the nature, continue to change the nature of how we discover information, how we consume the information, how we create that information. 00:24:56.100 |
The same way that YouTube changed everything completely, changed news. 00:25:00.820 |
And that's something our society is struggling with. 00:25:03.340 |
YouTube, look, YouTube enabled, I mean, you know this better than anyone else. 00:25:09.160 |
There is no doubt in me that, like, we will enable more filmmakers than there have ever been, right? 00:25:19.560 |
So, I think there is an expansionary aspect of this, which is underestimated, I think. 00:25:24.700 |
I think it will unleash human creativity in a way that hasn't been seen before. 00:25:31.960 |
The analogy is, if you brought someone from the '50s or '40s and just put them in front of YouTube, you know, I think it would blow their mind away. 00:25:40.660 |
Similarly, I think we would get blown away by what's possible in a 10 to 20-year timeframe. 00:25:44.620 |
Do you think there's a future, how many years out is it, that, let's say, let's put a mark on it, 50% of content, good content. 00:25:53.900 |
50% of good content is generated by Veo 4, 5, 6. 00:25:58.700 |
You know, I think it depends on what it is for. 00:26:01.980 |
Like, you know, maybe if you look at movies today with CGI, there are great filmmakers. 00:26:09.340 |
Like, you still look at, like, who the directors are and who use it. 00:26:12.700 |
There are filmmakers who don't use it at all. 00:26:18.300 |
You know, think about somebody like a James Cameron, like, what he would do with these tools in his hands. 00:26:24.380 |
But I think there'll be a lot more content created. 00:26:26.560 |
Like, just like writers today use Google Docs and not think about the fact that they're using a tool like that. 00:26:32.980 |
Like, people will be using the future versions of these things. 00:26:39.000 |
I've gotten a chance to get to know Darren Aronofsky. 00:26:43.500 |
Well, he's been really leaning in and trying to figure out. 00:26:47.240 |
It's fun to watch a genius who came up before any of this was even remotely possible. 00:26:57.000 |
And from there, just continued to create a really interesting variety of movies. 00:27:00.540 |
And now he's trying to see how can AI be used to create compelling films. 00:27:07.280 |
You have people, I've gotten to just know, edgier folks that are AI first, like the Dor Brothers. 00:27:14.380 |
Both Aronofsky and the Dor Brothers create at the edge of the Overton window of society. 00:27:21.440 |
You know, they push, whether it's sexuality or violence, it's edgy, like artists are. 00:27:29.560 |
It doesn't cross that line, whatever that line is. 00:27:33.380 |
You know, Hunter S. Thompson has this line that the only way to find out where the line is, is by crossing it. 00:27:48.820 |
I wonder if you can comment on the weird place that puts Google. 00:27:52.920 |
Because Google's line is probably different than some of these artists. 00:27:58.340 |
What's your, how do you think about, specifically Veo and Flow, about like how to allow artists to do crazy shit? 00:28:07.980 |
But also like the responsibility of like, not, for it not to be too crazy. 00:28:16.020 |
Look, part of, you mentioned Darren, you know. 00:28:20.420 |
Part of the reason we started working with him early on Veo is he's one of those people who's able to kind of see that future, get inspired by it, and kind of show the way for how creative people can express themselves with it. 00:28:38.260 |
Look, I think when it comes to allowing artistic free expression, it's one of the most important values in a society, right? 00:28:45.740 |
I think, you know, artists have always been the ones to push boundaries, expand the frontiers of thought. 00:28:54.140 |
And so, look, I think, I think that's going to be an important value we have. 00:28:58.580 |
So, I think we will provide tools and put it in the hands of artists for them to use and put out their work. 00:29:08.340 |
Those APIs, I mean, I almost think of that as infrastructure. 00:29:13.920 |
Just like when you provide electricity to people or something, you want them to use it and, like, you're not thinking about the use cases on top of it. 00:29:21.880 |
And so, I think that's how, obviously, there have to be some things. 00:29:27.400 |
And, you know, society needs to decide at a fundamental level what's okay, what's not. 00:29:35.200 |
But I do think, you know, when it comes to artistic free expression, I think that's one of those values we should work hard to defend. 00:29:43.320 |
I wonder if you can comment on maybe earlier versions of Gemini were a little bit careful on the kind of things you would be willing to answer. 00:29:55.120 |
I just want to comment on I was really surprised and pleasantly surprised and enjoyed the fact that Gemini 2.5 Pro is a lot less careful in a good sense. 00:30:05.660 |
Don't ask me why, but I've been doing a lot of research on Genghis Khan and the Aztecs. 00:30:11.440 |
So, there's a lot of violence there in that history. 00:30:15.320 |
I've also been doing a lot of research on World War I and World War II. 00:30:18.780 |
And earlier versions of Gemini were very, basically, this kind of sense, are you sure you want to learn about this? 00:30:26.680 |
And now it's actually very factual, objective, talks about very difficult parts of human history, and does so with nuance and depth. 00:30:37.380 |
But there's a line there that I guess Google has to kind of walk. 00:30:40.900 |
I wonder if it's also an engineering challenge, how to do that at scale across all the weird queries that people ask. 00:30:51.020 |
How do you allow Gemini to say, again, forgive, pardon my French, crazy shit, but not too crazy? 00:30:59.520 |
I think one of the good insights here has been, as the models are getting more capable, the models are really good at this stuff, right? 00:31:08.660 |
And so, I think in some ways, maybe a year ago, the models weren't fully there. 00:31:13.580 |
So, they would also do stupid things more often. 00:31:15.860 |
And so, you know, you're trying to handle those edge cases, but then you make a mistake in how you handle those edge cases, and it compounds. 00:31:25.040 |
But I think with 2.5, what we particularly found is, once the models cross a certain level of intelligence and sophistication, 00:31:33.060 |
you know, they are able to reason through these nuanced issues pretty well. 00:31:39.100 |
Like, you know, you want as much access to the raw model as possible. 00:31:43.400 |
But I think it's a great area to think about. 00:31:46.440 |
Like, you know, over time, you know, we should allow more and more closer access to it. 00:31:52.780 |
Maybe, obviously, let people custom prompts if they wanted to and, like, you know, experiment with it, et cetera. 00:32:03.740 |
But, look, the first principles we want to think about it is, you know, from a scientific standpoint, 00:32:10.060 |
like, making sure the models, and I'm saying scientific in the sense of, like, how you would approach math or physics or something like that, 00:32:15.980 |
from first principles, having the models reason about the world, be nuanced, et cetera, you know, from the ground up is the right way to build these things, right? 00:32:27.680 |
Not, like, some subset of humans kind of hard-coding things on top of it. 00:32:34.260 |
So, I think it's the direction we've been taking, and I think you'll see us continue to push in that direction. 00:32:39.420 |
Yeah, I actually asked, I gave these notes, I took extensive notes, and I gave them to Gemini and said, 00:32:46.060 |
can you ask a novel question that's not in these notes? 00:32:50.780 |
And it wrote, Gemini continues to really surprise me, really surprise me. 00:32:58.840 |
The question it generated was, you, meaning Sundar, told the world Gemini's churning out 480 trillion tokens a month. 00:33:10.280 |
What's the most life-changing five-word sentence hiding in that haystack? 00:33:18.700 |
But it gave me, it woke me up to, like, all of these tokens are providing little aha moments for people across the globe. 00:33:29.020 |
Those tokens are, people are curious, they ask a question, and they find something out. 00:33:37.960 |
Look, you know, I had the same feeling about Search many, many years ago. 00:33:41.660 |
You know, you definitely, you know, these tokens per month have, like, grown 50 times in the last 12 months. 00:33:53.580 |
But, you know, that number was 9.7 trillion tokens per month 12 months ago. 00:34:10.840 |
Maybe, I don't think it is there today, but maybe one day there's a five-word phrase which says what the actual universe is or something like that and something very meaningful. 00:34:24.620 |
Do you think the scaling laws are holding strong on, there's a lot of ways to describe the scaling laws for AI, but on the pre-training, on the post-training fronts. 00:34:34.620 |
So, the flip side of that, do you anticipate AI progress will hit a wall? 00:34:41.380 |
You know, it's a cherished micro-kitchen conversation. 00:34:44.520 |
Once in a while, I have it, you know, like when Demis is visiting or, you know, Demis, Koray, Jeff, Noam, Sergey, a bunch of our people. 00:34:53.760 |
Like, you know, we sit and, you know, talk about this, right? 00:34:57.340 |
And look, we see a lot of headroom ahead, right? 00:35:02.600 |
I think we've been able to optimize and improve on all fronts, right? 00:35:07.440 |
Pre-training, post-training, test time compute, tool use, right? 00:35:17.720 |
So, getting these models to be more general world models in that direction, like Veo 3, you know, the physics understanding is dramatically better than what it was before, like, 00:35:29.760 |
So, you kind of see on all those dimensions, I feel, you know, progress is very obvious to see. 00:35:36.480 |
And I feel like there is significant headroom. 00:35:42.440 |
More importantly, you know, I'm fortunate to work with some of the best researchers on the planet, right? 00:35:49.300 |
They think there is more headroom to be had here. 00:35:52.900 |
And so, I think we have an exciting trajectory ahead. 00:35:56.640 |
It's tougher to say, you know, each year I sit and say, okay, we are going to throw 10x more compute over the course of next year at it. 00:36:06.100 |
Sitting here today, I feel like the year ahead will have a lot of progress. 00:36:11.000 |
And do you feel any limitations like that or the bottlenecks, compute limited, data limited, idea limited? 00:36:23.920 |
I think it's compute limited in this sense, right? 00:36:25.980 |
Like, you know, part of the reason you've seen us do the Flash, Nano, and Pro models. 00:36:34.300 |
It's like, for each generation, we feel like we've been able to get the Pro model to, like, I don't know, 80, 90% of Ultra's capability. 00:36:43.380 |
But Ultra would be a lot more slow and a lot more expensive to serve. 00:36:52.660 |
But what we've been able to do is to go to the next generation and make the next generation's Pro as good as the previous generation's Ultra, but be able to serve it in a way that it's fast and you can use it and so on. 00:37:05.040 |
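The tiering described here, where each generation's Pro roughly matches the previous generation's Ultra at a far lower serving cost, can be sketched as a simple capability-versus-cost selection. The names and numbers below are purely illustrative assumptions, not actual Gemini benchmarks:

```python
# Illustrative sketch of the Flash/Pro/Ultra tradeoff described above.
# Capability scores and serving costs are made-up numbers, not real benchmarks.
models = {
    # name: (capability score, relative serving cost)
    "gen1-pro":   (80, 1.0),
    "gen1-ultra": (100, 5.0),
    "gen2-pro":   (100, 1.2),  # matches gen1-ultra's capability, cheap enough to serve widely
}

def best_servable(max_cost: float) -> str:
    # Pick the most capable model that is still affordable to serve at scale.
    affordable = {name: cap for name, (cap, cost) in models.items() if cost <= max_cost}
    return max(affordable, key=affordable.get)

print(best_servable(2.0))  # the new generation's Pro wins once it matches Ultra cheaply
```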
So I do think scaling laws are working, but it's tough to get at any given time. 00:37:11.840 |
The models we all use the most is maybe like a few months behind the maximum capability we can deliver, right? 00:37:21.880 |
Because that won't be the fastest, easiest to use, et cetera. 00:37:28.280 |
It becomes harder and harder to measure performance, in quotes, because, you know, you could argue Gemini 00:37:35.020 |
Flash is much more impactful than Pro, just because of the latency. 00:37:42.640 |
I mean, sometimes, like, latency is maybe more important than intelligence, especially when the intelligence is just a little bit less in Flash. 00:37:54.340 |
And so you have to now start measuring impact. 00:37:57.720 |
And then it feels like benchmarks are less and less capable of capturing the intelligence of models, the effectiveness of models, the usefulness, the real-world usefulness of models. 00:38:09.040 |
So lots of folks are talking about timelines for AGI or ASI, artificial superintelligence. 00:38:16.120 |
So AGI, loosely defined, is basically human expert level at a lot of the main fields of pursuit for humans. 00:38:24.260 |
And ASI is what AGI becomes, presumably quickly, by being able to self-improve. 00:38:32.020 |
So becoming far superior in intelligence than humans across all these disciplines. 00:38:40.320 |
There's one other term we should throw in there: AJI, artificial jagged intelligence. 00:38:53.060 |
Both coexist: there is progress and you see what they can do. 00:38:55.880 |
And then, like, you can trivially find they make numerical errors or, like, you know, fail at counting R's in strawberry or something, 00:39:03.460 |
which seems to trip up most models or whatever it is, right? 00:39:09.820 |
I feel like we are in the AJI phase where, like, there's dramatic progress. 00:39:15.900 |
But overall, you know, you're seeing lots of progress. 00:39:18.920 |
But if your question is, will it happen by 2030? 00:39:22.600 |
Look, we constantly move the line of what it means to be AGI. 00:39:28.280 |
There are moments today, you know, like sitting in a Waymo on a San Francisco street with all the crowds and the people, and it kind of works. 00:39:39.580 |
The car is sometimes kind of impatient, trying to work its way through. Or using Astra, like in Gemini Live, or seeing, you know, asking questions about the world. 00:39:49.080 |
What's this skinny building doing in my neighborhood? 00:39:55.820 |
That's why I use the word AJI, because then you see stuff which obviously, you know, we are far from AGI too. 00:40:03.900 |
So you have both experiences simultaneously happening to you. 00:40:07.440 |
I'll answer your question, but I'll also throw out this. 00:40:11.600 |
What I know is by 2030, there'll be such dramatic progress. 00:40:15.040 |
We'll be dealing with the consequences of that progress, both the positive externalities and the negative externalities that come with it, in a big way by 2030. 00:40:30.980 |
Whatever, we may be arguing about the term or maybe Gemini can answer what that moment is in time in 2030. 00:40:38.860 |
But I think the progress will be dramatic, right? 00:40:42.300 |
Will the AI think it has reached AGI by 2030? 00:40:45.140 |
I would say we will just fall short of that timeline, right? 00:40:50.660 |
It's amazing, in the early days of DeepMind in 2010, they talked about a 20-year timeframe to achieve AGI, which is kind of fascinating to see. 00:41:02.480 |
You know, for me, the whole thing was seeing what Google Brain did in 2012 and when we acquired DeepMind in 2014. Right close to where we are sitting, in 2012, you know, Jeff Dean showed the image of when the neural networks could recognize a picture of a cat, right? 00:41:21.260 |
And identify it, you know. This was the early versions of Brain, right? 00:41:24.560 |
And so, you know, we all talked about a couple of decades. 00:41:34.460 |
But I would stress it doesn't matter, like, what that definition is, because you will have mind-blowing progress on many dimensions. 00:41:46.540 |
We have to figure out as a society, we need some system by which we all agree that this is AI-generated and we have to disclose it in a certain way, because how do you distinguish reality otherwise? 00:41:58.080 |
Yeah, there's so many interesting things you said. 00:41:59.940 |
So first of all, just looking back at this recent, now it feels like distant history with Google Brain. 00:42:05.600 |
I mean, that was before TensorFlow was made public and open-sourced. 00:42:09.880 |
So the tooling matters, too, combined with GitHub's ability to share code. 00:42:14.860 |
Then you have the ideas of attention, transformers, and now diffusion. 00:42:19.280 |
And then there might be a new idea that seems simple in retrospect, but will change everything. 00:42:24.960 |
And that could be the post-training, the inference-time innovations. 00:42:28.600 |
And I think Shad Cien tweeted that Google is just one great UI from completely winning the AI race. 00:42:42.460 |
I think Logan Kilpatrick likes to talk about this right now. 00:42:49.520 |
Where you're talking about shipping systems versus shipping a particular model. 00:42:55.580 |
How the system manifests itself and how it presents itself to the world. 00:43:03.900 |
There are simple UI innovations which have changed the world, right? 00:43:11.080 |
We will see a lot more progress in the next couple of years. 00:43:14.720 |
I think AI itself is on a self-improving track for UI itself. 00:43:21.020 |
Like, you know, today we are, like, constraining the models. 00:43:25.640 |
The models can't quite express themselves, in terms of the UI, to people. 00:43:31.700 |
But that is, like, you know, if you think about it, we've kind of boxed them in that way. 00:43:36.760 |
But given these models can code, you know, they should be able to write the best interfaces to express their ideas over time, right? 00:43:50.740 |
So, you create a really nice agentic system that continuously improves the way you can be talking to an AI. 00:44:02.760 |
And then, of course, the incredible multimodal aspect of the interface that Google has been pushing. 00:44:10.420 |
They can easily take content from any format, put it in any format. 00:44:16.820 |
They probably understand your preferences better over time. 00:44:21.200 |
Like, you know, and so, so all this is, like, the evolution ahead, right? 00:44:25.900 |
And so, that goes back to where we started the conversation. 00:44:30.060 |
Like, I think there'll be dramatic evolutions in the years ahead. 00:44:35.660 |
This even further ridiculous concept of P-Doom. 00:44:41.160 |
So, the philosophically minded folks in the AI community think about the probability that AGI and then ASI might destroy all of human civilization. 00:44:55.180 |
Do you ever think about this kind of long-term threat of ASI? 00:45:04.900 |
Look, I've both been very excited about AI, but I've always felt this is a technology, you know, we have to actively think about the risks and work very, very hard to harness it in a way that it all works out well. 00:45:22.320 |
On the P-Doom question, look, it's, you know, wouldn't surprise you to say that's probably another micro kitchen conversation that pops up once in a while, right? 00:45:29.920 |
And given how powerful the technology is, maybe stepping back, you know, when you're running a large organization, if you can kind of align the incentives of the organization, you can achieve pretty much anything, right? 00:45:41.700 |
Like, you know, if you can get kind of people all marching in towards like a goal in a very focused way, in a mission-driven way, you can pretty much achieve anything. 00:45:49.660 |
But it's very tough to organize all of humanity that way. 00:45:54.180 |
But I think if P-Doom is actually high, at some point, all of humanity is like aligned in making sure that's not the case, right? 00:46:02.960 |
And so we'll actually make more progress against it, I think. 00:46:06.380 |
So the irony is, so there is a self-modulating aspect there. 00:46:12.700 |
Like, I think if humanity collectively puts their mind to solving a problem, whatever it is, I think we can get there. 00:46:18.720 |
So because of that, you know, I think I'm optimistic on the P-Doom scenarios. 00:46:27.540 |
But that doesn't mean, I think the underlying risk is actually pretty high. 00:46:31.480 |
But I'm, you know, I have a lot of faith in humanity kind of rising up to the, to meet that moment. 00:46:40.560 |
I mean, as the threat becomes more concrete and real, humans do really come together and get their shit together. 00:46:47.660 |
Well, the other thing I think people don't often talk about is probability of doom without AI. 00:46:53.900 |
So there's all these other ways that humans can destroy themselves. 00:46:57.000 |
And it's very possible, at least I believe so, that AI will help us become smarter, kinder to each other, more efficient. 00:47:07.400 |
It'll help more parts of the world flourish where it would be less resource constrained, which is often the source of military conflict and tensions and so on. 00:47:17.580 |
So we also have to load into that, what's the P-Doom without AI? 00:47:26.360 |
Because it's very possible that AI will be the thing that saves us, saves human civilizations from all the other threats. 00:47:34.580 |
Look, I felt like to make progress on some of the toughest problems would be good to have AI, like pair, helping you. 00:47:43.660 |
And, and like, you know, so that resonates with me for sure. 00:47:51.760 |
Notebook LM was the same. Like, what I saw today with Beam, it was compelling in the same kind of way, it blew my mind. 00:48:05.140 |
One day, my hope was, like, can you imagine the US president and the Chinese president being able to do something like Beam with the live translation working well? 00:48:14.900 |
So they're both sitting and talking, make progress a bit more. 00:48:20.600 |
Just for people listening, we took a quick bathroom break. 00:48:25.020 |
We'll probably post it somewhere, somehow, maybe here. 00:48:28.520 |
I got a chance to experience Beam, and it's hard to describe in words how real it felt. 00:48:42.540 |
It's one of the toughest products to describe. You can't quite describe it to people; even when we show it in slides, et cetera, like, you don't know what it is. 00:48:54.000 |
On the world leaders front, on politics, geopolitics, that there's something really special. 00:48:59.480 |
Again, we're studying World War II and how much could have been saved if Chamberlain met Stalin in person. 00:49:06.860 |
And I sometimes also struggle explaining to people, articulating why I believe meeting in person for world leaders is powerful. 00:49:14.660 |
It just seems naive to say that, but there is something there in person. 00:49:19.900 |
And with Beam, I felt that same thing and then I'm unable to explain. 00:49:30.860 |
And I mean, I don't know if that makes meetings more productive or so on, but it certainly makes them more, 00:49:37.640 |
the same reason you want to show up to work versus remote sometimes, that human connection. 00:49:48.500 |
There's some, there's something beautiful about great teams collaborating on a thing that's, that's not captured by the productivity of that team or by whatever on paper. 00:50:03.420 |
Some of the most beautiful moments you experience in life is at work. 00:50:07.700 |
Pursuing a difficult thing together for many months, there's nothing like it. 00:50:13.000 |
You're in the trenches and you do form bonds that way, for sure. 00:50:16.960 |
And to be able to do that somewhat remotely with that same personal touch, I don't know, that's a deeply fulfilling thing. 00:50:22.900 |
I know a lot of people, I personally, hate meetings, because a significant percentage of meetings, when done poorly, don't serve a clear purpose. 00:50:36.620 |
If you can improve the communication for the meetings that are useful, it's just incredible. 00:50:41.660 |
So yeah, I was blown away by the great engineering behind it. 00:50:48.180 |
That's really interesting, but just incredible engineering. 00:50:51.160 |
It is, and obviously we'll work hard over the years to make it more and more accessible. 00:50:55.660 |
But yeah, even on a personal front, outside of work meetings, you know, a grandmother who's far away from her grandchild and being able to, you know, have that kind of an interaction, right? 00:51:08.420 |
All of that, I think, will end up being very meaningful. 00:51:11.040 |
Nothing substitutes being in person, but you know, it's not always possible. 00:51:15.440 |
You know, you could be a soldier deployed, trying to talk to your loved ones. 00:51:19.840 |
So I think, uh, you know, so that's what inspires us. 00:51:24.140 |
When you and I hung out last year and took a walk, I remember, I don't think we talked about this, but I remember, you know, outside of that, seeing dozens of articles written by analysts and experts and so on, 00:51:39.440 |
saying that Sundar Pichai should step down, because the perception was that Google was definitively losing the AI race, had lost its magic touch in the rapidly evolving technological landscape. 00:51:58.920 |
You showed this plot of all the things that were shipped over the past year. 00:52:03.660 |
It's incredible, and Gemini Pro is winning across many benchmarks and products as we sit here today. 00:52:09.820 |
So take me through that experience when there's all these articles saying you're the wrong guy to lead Google through this. 00:52:19.460 |
to today, where Google is winning again. 00:52:27.180 |
Look, I mean, lots to unpack. You know, obviously, like, the main bet I made as a CEO was to really make sure the company was approaching everything in an AI-first way, really setting ourselves up to develop AGI responsibly, 00:52:51.180 |
and make sure we're putting out products which embody that, things that are very, very useful for people. 00:52:59.680 |
So look, I, I knew even through moments like that last year, uh, you know, I had a good sense of what we were building internally. 00:53:10.840 |
So I'd already made, you know, many important decisions, you know, bringing together teams of the caliber of brain and deep mind and setting up Google deep mind. 00:53:22.920 |
There were things like we made the decision to invest in TPUs 10 years ago. 00:53:28.760 |
So we knew we were scaling up and building big models. 00:53:32.340 |
Anytime you're in a situation like that, a few aspects: I'm good at tuning out the noise. 00:53:47.820 |
I'm not good at it, but I've done it a few times: sometimes you jump in the ocean, 00:53:53.580 |
it's so choppy, but you go down one foot under, and it's the calmest thing in the entire universe. 00:54:10.000 |
You know, you may as well be coaching Barcelona or Real Madrid, right? 00:54:15.900 |
So there are aspects to that, but, you know, like, look, I, I'm good at tuning out the noise. 00:54:23.400 |
You know, it's important to separate the signal from the noise. 00:54:25.900 |
So there are good people sometimes making good points outside. 00:54:30.620 |
You want to take that feedback in, but, you know, internally, 00:54:39.220 |
as leaders, you're making a lot of decisions. 00:54:46.140 |
It feels like a lot, but over time you learn that most of the decisions you make on a given day, 00:54:53.120 |
you have to make them, and you're making them just to keep things moving, but you have 00:54:59.180 |
to make a few consequential decisions right. 00:55:01.300 |
And we had set up the right teams, right leaders. 00:55:14.280 |
Internally, there are factors which were, for example, outside people may not have appreciated. 00:55:19.140 |
I mean, TPUs are amazing, but we had to ramp up TPUs too. 00:55:26.300 |
And to scale, actually having enough TPUs to get the compute needed. 00:55:32.040 |
But I could see internally the trajectory we were on, and, you know, I was so excited. 00:55:42.640 |
To me, this moment felt like one of the biggest opportunities ahead for us as a company, that 00:55:49.160 |
the opportunity space ahead over the next decade, next 20 years is bigger than what has happened 00:55:55.500 |
And I thought we were set up better than most companies in the world to go realize it. 00:56:04.460 |
I mean, you had to make some consequential, bold decisions. 00:56:09.820 |
Like you mentioned the merger of DeepMind and Brain. 00:56:15.860 |
Maybe it's my perspective, just knowing humans. 00:56:22.480 |
And I'm sure there are some hard decisions to be made. 00:56:24.980 |
Can you take me through your process of how you think through that? 00:56:27.980 |
How do you go and pull the trigger and make that decision? 00:56:36.360 |
Look, we were fortunate to have two world-class teams, but you're right. 00:56:39.520 |
Like, it's like somebody coming and telling you, take Stanford and MIT. 00:56:44.260 |
And then put them together and create a great department. 00:56:53.060 |
Both had their strengths, you know, but they were run very differently. 00:56:58.000 |
Like, Brain was kind of a lot of diverse projects, bottom-up, and out of it came a lot of important things. 00:57:07.420 |
DeepMind at the time had a strong vision of how you want to build AGI. 00:57:16.460 |
But I think through those moments, luckily, tapping into, you know, Jeff, who had expressed a 00:57:22.280 |
desire to go back to more of his scientific, individual-contributor roots. 00:57:28.420 |
You know, he felt like management was taking up too much of his time. 00:57:31.640 |
And Demis naturally, I think, you know, was running DeepMind and was a natural choice there. 00:57:42.500 |
You know, it took us a while to bring the teams together. 00:57:45.180 |
Credit to Demis, Jeff, Korai, all the great people there. 00:57:48.600 |
They worked super hard to combine the best of both worlds when you set up that team. 00:57:55.280 |
A few sleepless nights here and there as we put that thing together. 00:57:59.280 |
We were patient in how we did it so that it works well for the long term. 00:58:06.660 |
And some of that in that moment, I think, yes, with things moving fast, I think you definitely 00:58:13.460 |
felt the pressure, but I think we pulled off that transition well. 00:58:18.000 |
And, you know, I think, I think, you know, they were obviously doing incredible work and 00:58:24.060 |
there's a lot more incredible things I had coming from them. 00:58:26.980 |
Like we talked about, you have a very calm, even tempered, respectful demeanor. 00:58:30.420 |
During that time, whether it's the merger or just dealing with the noise, were there times 00:58:40.900 |
Like, did you have to go a bit more intense on everybody than you usually would? 00:58:50.520 |
I think, I think in the sense that, you know, there was a moment where we were all driving 00:58:54.160 |
hard, but when you're in the trenches working with passion, you're going to have days, right? 00:59:01.220 |
You disagree, you argue, but like all that, I mean, just part of the course of working intensely, 00:59:08.840 |
right, and, you know, at the end of the day, all of us are doing what we are doing because 00:59:15.920 |
the impact it can have, we are motivated by it. 00:59:19.420 |
It's like, you know, for many of us, this has been a long-term journey. 00:59:27.120 |
The positive moments far outweigh the kind of stressful moments. 00:59:31.560 |
Just early this year, I had a chance to celebrate back-to-back over two days, like, you know, 00:59:38.640 |
Nobel Prize for Geoff Hinton, and the next day, a Nobel Prize for Demis and John Jumper. 00:59:44.120 |
You know, you worked with people like that, all that is super inspiring. 00:59:48.280 |
Is there something, like, with you where you had to, like, put your foot down, maybe with less, 00:59:55.460 |
versus more, like, I'm the CEO, and we're doing this? 01:00:00.640 |
To my earlier point about consequential decisions you make, there are decisions you make, people 01:00:05.800 |
can disagree pretty vehemently, and, but at some point, like, you know, you make a clear 01:00:12.400 |
decision, and you just ask people to commit, right? 01:00:17.200 |
Like, you know, you can disagree, but it's time to disagree and commit so that we can get 01:00:22.160 |
moving, and whether it's putting the foot down or, you know, like, you know, it's a natural 01:00:29.920 |
And, you know, I think you can do that calmly and be very firm in the direction you're making 01:00:36.280 |
And I think if you're clear, actually, people over time respect that, right? 01:00:40.860 |
Like, you know, if you can make decisions with clarity. 01:00:43.600 |
I find it very effective in meetings where you're making such decisions to hear everyone 01:00:50.240 |
I think it's important when you can to hear everyone out. 01:00:54.620 |
Sometimes what you're hearing actually influences how you think about, and you're wrestling with 01:01:00.300 |
Sometimes you have a clear conviction, and you state, so, look, I, you know, this is how I feel, 01:01:08.800 |
and, you know, this is my conviction, and you kind of place the bet, and you move on. 01:01:14.740 |
I kind of intuitively assume the merger was the big one. 01:01:19.120 |
I think that was a very important decision, you know, for the company to meet the moment. 01:01:25.140 |
I think we had to make sure we were, we were doing that and doing that well. 01:01:32.780 |
We set up an AI infrastructure team, like, to really go meet the moment to scale up the 01:01:38.460 |
compute we needed to, and really brought teams from disparate parts of the company, kind of 01:01:46.540 |
You know, bringing people, like, getting people to kind of work together physically, both in 01:01:54.580 |
London, with DeepMind, and what we call Gradient Canopy, which is where the Mountain View Google 01:02:01.580 |
DeepMind teams are. But one of my favorite moments is, I routinely walk, like, multiple times per 01:02:07.700 |
week to the Gradient Canopy building, where our top researchers are working on the models. 01:02:15.420 |
Sergey is often there amongst them, right, like, you know, just, you know, looking at, you 01:02:21.760 |
know, getting an update on the model, seeing the loss curve, so all that, I think that cultural 01:02:25.620 |
part of getting the teams together back with that energy, I think ended up playing a big 01:02:32.080 |
What about the decision to recently add AI mode? 01:02:36.520 |
So Google search is the, as they say, the front page of the internet. 01:02:42.420 |
It's like a legendary minimalist thing with 10 blue links, like that's, when people think 01:02:49.340 |
internet, they think that page, and now you're starting to mess with that. 01:02:53.780 |
So the AI mode, which is a separate tab, and then integrating AI in the results, I'm sure 01:02:59.000 |
there were some battles in meetings on that one. 01:03:01.680 |
Look, you know, in some ways, when mobile came, you know, people wanted answers to more questions, 01:03:09.680 |
and so we're kind of constantly evolving it, but you're right, this moment, you know, that 01:03:13.880 |
evolution, because the underlying technology is becoming much more capable, you know, you 01:03:21.080 |
can have AI give a lot of context, you know, but one of our important design goals, though, 01:03:25.680 |
is when you come to Google search, you're going to get a lot of context, but you're also going to get access to what's out there on the web. 01:03:33.760 |
So that will be true in AI mode, in AI overviews, and so on. 01:03:37.560 |
But I think to our earlier conversation, we are still giving you access to links, but think 01:03:44.040 |
of the AI as a layer which is giving you context, summary, maybe in AI mode, you can have a dialogue 01:03:51.380 |
with it back and forth on your journey, right? 01:03:56.440 |
And, but through it all, you're kind of learning what's out there in the world. 01:03:59.800 |
So those core principles don't change, but I think AI mode allows us to push the frontier. We have 01:04:08.120 |
Models which are using search as a deep tool, really, for every query you're asking, kind 01:04:14.920 |
of fanning out, doing multiple searches, like kind of assembling that knowledge in a way so 01:04:20.580 |
you can go and consume what you want to, right? 01:04:24.740 |
I got to just listen to Liz Reid, Elizabeth Reid, describe this. 01:04:30.100 |
Two things stood out to me that she mentioned. 01:04:31.900 |
One thing is what you were talking about is the query fanout, which I didn't even think 01:04:37.940 |
about before, is the powerful aspect of integrating a bunch of stuff on the web for you in one place. 01:04:45.360 |
So yes, it provides that context so that you can decide which page to then go on to. 01:04:51.940 |
The other really, really big thing speaks to the earlier point, in terms of the productivity multiplier 01:04:57.320 |
that we're talking about, that she mentioned, was language. 01:05:00.500 |
So one of the things you don't quite understand is, through AI mode, for non-English 01:05:08.840 |
speakers, you make, let's say, English-language websites accessible in the reasoning 01:05:17.220 |
process, as you try to figure out what you're looking for. 01:05:20.560 |
Of course, once you show up to a page, you can use a basic translate. 01:05:23.300 |
But that process of figuring it out, if you empathize with a large part of the world that 01:05:29.780 |
doesn't speak English, their web is much smaller in that original language. 01:05:37.080 |
And so it unlocks, again, unlocks that huge cognitive capacity there. 01:05:41.940 |
You know, you take for granted here, with all the bloggers and the journalists writing in English. 01:05:46.980 |
You forget what this now unlocks, because Gemini is really good at translation. 01:05:54.460 |
I mean, the multimodality, the translation, its ability to reason, we are dramatically improving 01:06:01.220 |
tool use, and we're putting that power in the flow of search. 01:06:09.620 |
With the AI overviews, we've seen the product has gotten much better. 01:06:15.680 |
You know, we measured using all kinds of user metrics. 01:06:19.100 |
It's obviously driven strong growth of the product. 01:06:26.320 |
You know, it's now in the hands of millions of people. 01:06:32.760 |
So, look, I'm excited about this next chapter of search. 01:06:36.840 |
For people who are not thinking through or aware of this: 01:06:38.760 |
there's the 10 blue links, with the AI overview on top that provides a nice summarization. 01:07:05.840 |
We should say that in the 90s, I remember the animated GIFs, banner GIFs that take you to some shady websites that have nothing to do with anything. 01:07:17.900 |
It's one of the greatest inventions in recent history because it allows us for free to have access to all these kinds of services. 01:07:28.360 |
So, ads fuel a lot of really powerful services. 01:07:32.280 |
And, at its best, it's showing you relevant ads, but also, very importantly, in a way that's not super annoying, right? 01:07:41.080 |
So, when do you think it's possible to add ads into AI mode? 01:07:47.720 |
And what does that look like from a classy, not annoying perspective? 01:07:52.960 |
Early part of AI mode will obviously focus more on the organic experience to make sure we are getting it right. 01:07:59.940 |
I think the fundamental value of ads are, it enables access to deploy the services to billions of people. 01:08:07.260 |
Second, the reason we've always taken ads seriously is we view ads as commercial information, but it's still information. 01:08:16.600 |
And so, we bring the same quality metrics to it. 01:08:19.160 |
I think with AI mode, to our earlier conversation about, I think AI itself will help us, over time, figure out, you know, the best way to do it. 01:08:30.400 |
I think, given we are giving context around everything, I think it'll give us more opportunities to also explain, okay, here's some commercial information. 01:08:38.820 |
Like, today, as a podcaster, you do it at certain spots, and you probably figure out what's best in your podcast. 01:08:47.100 |
I think, so there are aspects of that, but I think, you know, I think the underlying need of people value commercial information, businesses are trying to connect to users, all that doesn't change in an AI moment. 01:09:04.040 |
You've seen us in YouTube now do a mixture of subscription and ads. 01:09:09.660 |
Like, obviously, you know, we are now introducing subscription offerings across everything. 01:09:16.400 |
And so, as part of that, we can optimize, the optimization point will end up being a different place as well. 01:09:22.880 |
Do you see a trajectory in the possible future where AI mode completely replaces the 10 Blue Links plus AI overview? 01:09:31.600 |
Our current plan is AI mode is going to be there as a separate tab for people who really want to experience that. 01:09:38.500 |
But it's not yet at the level where our main search page is. 01:09:43.520 |
But as features work, we'll keep migrating it to the main page. 01:09:51.160 |
AI mode will offer you the bleeding-edge experience. 01:09:54.280 |
But I think that work will keep overflowing to AI overviews in the main experience. 01:10:01.640 |
And the idea that AI mode will still take you to the web, to the human-created web. 01:10:06.180 |
Yes, that's going to be a core design principle for us. 01:10:08.580 |
So, really, if users decide, right, they drive this. 01:10:13.000 |
Google has been dominating with a very specific look and idea of what it means to have the internet. 01:10:26.280 |
And as you move to AI mode, I mean, it's just a different experience. 01:10:34.040 |
I think Liz was talking about, you've mentioned that you ask more questions, you ask longer questions. 01:10:45.700 |
Like, I think for me, I've been asking just a much larger number of questions of this black box machine, let's say. 01:10:54.040 |
And with AI overview, it's interesting because I still value the human, I still ultimately want to end up on the human-created web. 01:11:06.700 |
But, like you said, the context really helps. 01:11:09.420 |
It helps us deliver higher quality referrals, right? 01:11:13.500 |
You know, where people are, like, they have much higher likelihood of finding what they're looking for. 01:11:24.700 |
It makes the humans that create the web nervous. 01:11:30.040 |
Like I mentioned, CNN is nervous because of podcasts. 01:11:36.560 |
Look, I think news and journalism will play an important role, you know, in the future. 01:11:46.580 |
And so, I think making sure that ecosystem thrives, in fact, I think we'll be able to differentiate ourselves as a company over time because of our commitment there. 01:11:56.580 |
So, it's something I think, you know, I definitely value a lot. 01:12:00.340 |
And as we are designing, we'll continue prioritizing approaches. 01:12:04.240 |
I'm sure, for the people who want it, they can have a fine-tuned AI model that generates clickbait hit pieces to replace current journalism. 01:12:16.060 |
But I find that if you're looking for really strong criticism of things, that Gemini is very good at providing that. 01:12:25.360 |
For now, I mean, people are concerned that there will be bias introduced, that as the AI systems become more and more powerful, there's incentive for sponsors to roll in and try to control the output of the AI models. 01:12:41.500 |
But for now, the objective criticism that's provided is way better than journalism. 01:12:45.520 |
Of course, the argument is the journalists are still valuable. 01:12:48.800 |
But then, I don't know, the crowdsourced journalism that we get on the open internet is also very, very powerful. 01:12:56.100 |
I feel like they're all super important things. 01:12:58.700 |
I think it's good that you get a lot of crowdsourced information coming in. 01:13:04.300 |
But I feel like there is real value for high-quality journalism, right? 01:13:10.600 |
And I think these are all complementary. I view it as, I find myself constantly seeking out, like, trying to find objective reporting on things too. 01:13:23.780 |
And sometimes you get more context from the crowdsourced sources you read online. 01:13:28.880 |
But I think both end up playing a super important role. 01:13:31.260 |
So there's, you've spoken a little bit about this. 01:13:36.020 |
It's sort of the slice of the web that will increasingly become about providing information for agents. 01:13:43.180 |
So we can think about it as, like, two layers of the web. 01:13:52.180 |
Do you see the one that's for AI agents growing over time? 01:13:56.300 |
Do you see there still being long-term, five, ten years, value for the human-created web, created for the purpose of human consumption? 01:14:08.180 |
Today, like, not everyone does, but, you know, you go to a big retail store, you love walking the aisle, you love shopping. 01:14:22.360 |
But you're also online shopping and they're delivering, right? 01:14:25.520 |
So both are complementary and, like, that's true for restaurants, etc. 01:14:30.140 |
So I do feel like over time, websites will also get better for humans. 01:14:37.560 |
AI might actually design them better for humans. 01:14:41.620 |
So I expect the web to get a lot richer and more interesting and better to use. 01:14:48.100 |
At the same time, I think there'll be an agentic web, which is also making a lot of progress. 01:14:57.360 |
And you have to solve the business value and the incentives to make that work well, right? 01:15:07.440 |
And obviously, the agents may not need the same... 01:15:13.500 |
They won't need the same design and UI paradigms which humans need to interact with. 01:15:24.680 |
I have to say, for me personally, Google Chrome was probably... 01:15:35.680 |
And this is not a recency bias, although it might be a little bit. 01:15:40.680 |
Maybe the number one piece of software for me of all time. 01:15:48.820 |
And Chrome really continued for many years, but even initially, to push innovation on that front when it was stale. 01:15:58.620 |
They continue to make it more performant, more efficient. 01:16:07.920 |
You were one of the pioneers of Chrome pushing for it when it was an insane idea. 01:16:15.260 |
Probably one of the ideas that was criticized and doubted and so on. 01:16:20.200 |
So can you tell me the story of what it took to push for Chrome? 01:16:28.860 |
Look, it was such a dynamic time, you know, around 2004, 2005, with Ajax, the web suddenly becoming dynamic. 01:16:40.340 |
In a matter of a few months, Flickr, Gmail, Google Maps all kind of came into existence, right? 01:16:49.980 |
Like the fact that you have an interactive dynamic web, the web was evolving from simple text pages, simple HTML to rich dynamic applications. 01:17:00.400 |
But at the same time, you could see the browser was never meant for that world, right? 01:17:07.900 |
Like JavaScript execution was super slow, you know, the browser was far away from being an operating system for that rich, modern web, which is coming into place. 01:17:25.860 |
I still remember the day we got a shell of WebKit running and how fast it was. 01:17:35.300 |
You know, we had a clear vision for building a browser, like we wanted to bring core OS principles into the browser, right? 01:17:49.800 |
These things are common now, but at the time, like it was pretty unique. 01:17:55.340 |
We found an amazing team in Aarhus, Denmark, with the leader who built V8, the JavaScript VM, which at the time was 25 times faster than any other JavaScript VM out there. 01:18:10.660 |
We open sourced it all and, you know, and put it in Chromium too. 01:18:14.100 |
But we really thought the web could work much better, you know, much faster, and you could be much safer browsing the web. 01:18:23.720 |
And the name Chrome came about because we literally felt the chrome of the browser was getting clunkier. 01:18:36.180 |
Definitely, obviously, highly biased person here talking about Chrome. 01:18:42.700 |
But, you know, it's the most fun I've had building a product from the ground up. 01:18:51.180 |
My co-founders on the project were terrific, so definitely fond memories. 01:18:55.960 |
So for people who don't know, Sundar, it's probably fair to say you're the reason we have Chrome. 01:19:01.380 |
Yes, I know there's a lot of incredible engineers, but pushing for it inside a company that probably was opposing it because it's a crazy idea. 01:19:08.880 |
Because as everybody probably knows, it's incredibly difficult to build a browser. 01:19:13.520 |
You know, look, Eric, who was the CEO at that time, I think, to say the least, he was opposed to it. 01:19:18.620 |
He kind of firsthand knew what a crazy thing it is to go build a browser. 01:19:23.320 |
And so he definitely was like, you know, there was a crazy aspect to actually wanting to go build a browser. 01:19:36.720 |
I think once we started, you know, building something and we could use it and see how much better it was, from then on, like, you know, you're really tinkering with the product and making it better. 01:19:48.960 |
What wisdom do you draw from that, from pushing through on a crazy idea in the early days that ends up being revolutionary? 01:19:59.900 |
I mean, this, this is something Larry and Sergey have articulated clearly. 01:20:05.060 |
I really internalized this early on, which is, you know, their whole feeling around working on moonshots, like, as a way, when you work on something very ambitious, first of all, it attracts the best people. 01:20:20.920 |
Number two, because it's so ambitious, you don't have many others working on something that crazy. 01:20:26.200 |
So you pretty much have the path to yourselves. 01:20:30.060 |
Number three, even if you end up not quite accomplishing what you set out to do and you end up doing 60, 80% of it, it'll end up being a terrific success. 01:20:40.240 |
So, so, you know, that's the advice I would give people. 01:20:45.020 |
I think like, you know, it's just aiming for big ideas. 01:20:47.760 |
It's risky, but it has all these advantages, which people, I don't think, fully internalize. 01:20:57.340 |
I mean, you mentioned one of the craziest, biggest moonshots, which is Waymo. 01:21:01.140 |
When I first saw, over a decade ago, a Waymo vehicle, a Google self-driving car vehicle. 01:21:10.360 |
It was, for me, an aha moment for robotics. 01:21:15.540 |
It made me fall in love with robotics even more than before. 01:21:21.400 |
I'm truly grateful for that project, for what it symbolizes. 01:21:26.660 |
For a long time, Waymo has been, just like you mentioned with scuba diving, not listening to anybody, just calmly making the system better and better, more testing, just expanding the operational domain more and more. 01:21:43.800 |
First of all, congrats on 10 million paid robo-taxi rides. 01:21:48.080 |
What lessons do you take from Waymo about, like, the perseverance, the persistence on that project? 01:21:56.020 |
Look, I'm really proud of the progress we have had with Waymo. 01:21:59.900 |
One of the things I think we were very committed to, you know, was the final 20%. I mean, we always say, right, the first 80% is easy. 01:22:11.400 |
I think we definitely were working through that phase with Waymo. 01:22:18.020 |
So, but, you know, we knew we were at that stage. 01:22:20.100 |
We knew, while there were many people, many other self-driving companies, that the technology gap was there. 01:22:31.160 |
In fact, right at the moment when others were doubting Waymo is when I made the decision to invest more in Waymo, right? 01:22:40.160 |
So, in some ways, it's counterintuitive. 01:22:45.000 |
But I think, look, we've always been a deep technology company and, like, you know, Waymo is a version of building an AI robot that works well. 01:22:55.720 |
And so we get attracted to problems like that, the caliber of the teams there, you know, phenomenal teams. 01:23:03.200 |
And so I know you follow the space super closely, you know, I'm talking to someone who knows the space well, but it was very obvious it's going to get there. 01:23:12.600 |
And, you know, there's still more work to do, but we, you know, it's a good example where we always prioritized being ambitious and safety at the same time, right? 01:23:23.520 |
And equally committed to both and pushed hard and, you know, couldn't be more thrilled with how it's working, how much people love the experience. 01:23:35.740 |
And this year has definitely, we've scaled up a lot and will continue scaling up in 26. 01:23:45.420 |
You've been friendly with Elon, even though he's technically a competitor, but you've been friendly with a lot of tech CEOs in that way, just showing respect towards them and so on. 01:23:54.980 |
What do you think about the robotaxi efforts that Tesla is doing? 01:24:01.200 |
We are one of the earliest and biggest backers of SpaceX as Google, right? 01:24:09.200 |
So, you know, thrilled with what SpaceX is doing and fortunate to be investors as a company there, right? 01:24:18.280 |
And, look, we don't compete with Tesla directly. 01:24:25.620 |
We're building the Waymo Driver, which is general purpose and can be used in many settings. 01:25:30.940 |
They're obviously working on making Tesla self-driving too. 01:24:35.840 |
I'm just assuming it's a de facto that Elon would succeed in whatever he does. 01:24:40.800 |
So, like, you know, that is not something I question. 01:24:45.340 |
But I think we are so far from that; these spaces are such vast spaces. 01:24:51.820 |
Like, think about transportation, the opportunity space. 01:24:56.660 |
The Waymo driver is a general purpose technology. 01:25:06.540 |
In all future scenarios, I see Tesla doing well and, you know, Waymo doing well. 01:25:13.300 |
Like we mentioned with the Neolithic package, I think it's very possible that in the quote-unquote AI package when the history is written, 01:25:21.120 |
autonomous vehicles, self-driving cars, is like the big thing that changes everything. 01:25:26.640 |
Imagine over a period of a decade or two, just the complete transition from manually driven to autonomous. 01:25:34.340 |
In ways we might not predict, it might change the way we move about the world completely. 01:25:40.960 |
So, that, you know, the possibility of that, and then the second and third order effects, as you're seeing now with Tesla, 01:25:48.240 |
very possibly you would see some internally with Alphabet, maybe Waymo, maybe some of the Gemini robotics stuff. 01:25:56.760 |
It might lead you into the other domains of robotics. 01:26:01.580 |
Because we should remember that Waymo is a robot. 01:26:13.120 |
The big aha moment might be in the space of robotics. 01:26:19.580 |
Demis and the Google DeepMind team are very focused on Gemini robotics, right? 01:26:23.620 |
So, we are definitely building the underlying models well. 01:26:30.740 |
And I think we are also pretty cutting edge in our research there. 01:26:34.080 |
So, we are definitely driving that direction. 01:26:37.380 |
We obviously are thinking about applications in robotics. 01:26:43.480 |
We are partnering with a few companies today. 01:26:47.940 |
We are, you know, we are yet to fully articulate our plans outside. 01:26:52.480 |
But it's an area we are definitely committed to driving a lot of progress. 01:26:56.240 |
But I think AI ends up driving that massive progress in robotics. 01:27:03.720 |
I mean, the hardware has made extraordinary progress. 01:27:10.440 |
But, you know, with AI now and the generalized models we are building, you know, we are building 01:27:17.600 |
these models, getting them to work in the real world in a safe way, in a generalized way, 01:27:22.960 |
is the frontier we're pushing pretty hard on. 01:27:25.320 |
Well, it's really nice to see that the models and the different teams integrated to where all 01:27:29.500 |
of them are pushing towards one world model that's being built. 01:27:32.420 |
So, from all these different angles, multimodal, you're ultimately trying to get 01:27:38.080 |
the same thing that would make AI mode really effective in answering your questions, which 01:27:46.200 |
requires a kind of world model, is the same kind of thing that would help a robot be useful 01:27:53.780 |
That is what makes this moment so unique because running a company, for the first time, you can 01:27:59.860 |
do one investment in a very deep, horizontal way. 01:28:04.600 |
On top of it, you can, like, drive multiple businesses forward, right? 01:28:09.340 |
And, you know, that's effectively what we are doing in Google and Alphabet, right? 01:28:14.240 |
Yeah, it's all coming together like it was planned ahead of time, but it's not, of course. 01:28:18.700 |
I mean, Gmail and Sheets and all these other incredible services, I can sing Gmail's praises 01:28:28.220 |
But the moment you start to integrate AI, Gemini, into Gmail, I mean, that's the other 01:28:33.980 |
Speaking of productivity multiplier, people complain about email, but that changed everything. 01:28:38.060 |
Email, like the invention of email changed everything. 01:28:42.160 |
There's been a few folks trying to revolutionize email, some of them on top of Gmail. 01:28:49.880 |
Not just spam filtering, but you demoed a really nice demo of personalized responses. 01:28:57.620 |
And at first, I was like, at first, I felt really bad about that. 01:29:04.020 |
But then I realized that there's nothing to feel bad about, because the example 01:29:09.620 |
you gave is when a friend asks, you know, you went to whatever hiking location, do you have 01:29:16.380 |
any advice, and it just searches through all your information to give them good advice. 01:29:20.240 |
And then you put the cherry on top, maybe some love or whatever, camaraderie. 01:29:24.120 |
But the informational aspect, the knowledge transfer it does for you. 01:29:30.020 |
Like, it should be, like, today, if you write a card in your own handwriting and send it 01:29:36.760 |
Similarly, there'll be a time, I mean, to your friends, maybe your friend wrote and said he's 01:29:43.200 |
You know, those are moments you want to save your time for, writing something, reaching out. 01:29:48.740 |
But, you know, like saying, give me all the details of the trip you took, you know, to me, 01:29:55.580 |
makes a lot of sense for an AI assistant to help you, right? 01:30:01.840 |
But I think I'm excited about that direction. 01:30:04.740 |
Yeah, I think ultimately it gives more time for us humans to do the things we humans find 01:30:08.980 |
And I think it scares a lot of people because we're going to have to ask ourselves the hard 01:30:14.100 |
question of, like, what do we find meaningful? 01:30:17.680 |
I mean, it's the old question of the meaning of existence: you have to try to figure that 01:30:23.160 |
out. That might ultimately be parenting, or being creative in some domains of art or writing. 01:30:29.440 |
And it challenges you to, you know, ask yourself the good question of, like, in 01:30:35.340 |
my life, what is the thing that brings me most joy and fulfillment? 01:30:40.000 |
And if I'm able to actually focus more time on that, that's really powerful. 01:30:44.480 |
I think that's the, you know, that's the holy grail. 01:30:47.940 |
If you get this right, I think it allows more people to find that. 01:30:52.260 |
I have to ask you, on the programming front, AI is getting really good at programming. 01:30:56.960 |
Gemini, both the agentic version and just the LLM, has been incredible. 01:31:00.540 |
So a lot of programmers are really worried that they will lose their jobs. 01:31:08.840 |
And how should they adjust so they can be thriving in this new world where more and more code is written by AI? 01:31:16.120 |
I think a few things, looking at Google, you know, we've given various stats around like, 01:31:25.720 |
you know, 30% of code now uses like AI-generated suggestions or whatever it is. 01:31:31.880 |
But the most important metric, and we carefully measure it, is like, 01:31:35.420 |
how much has our engineering velocity increased as a company due to AI, right? 01:31:43.960 |
And it's tough to measure, but we kind of rigorously try to measure it. 01:31:46.920 |
And our estimates are that number is now at 10%, right? 01:31:51.540 |
Like now, across the company, we've accomplished a 10% engineering velocity increase using AI. 01:32:00.660 |
But we plan to hire more engineers next year, right? 01:32:06.860 |
So because the opportunity space of what we can do is expanding too, right? 01:32:14.480 |
And so I think hopefully, you know, at least in the near to midterm, for many engineers, 01:32:23.580 |
it frees up more and more of the, you know, even in engineering and coding, there are aspects 01:32:32.980 |
You're designing, you're architecting, you're solving a problem. 01:32:35.940 |
There's a lot of grunt work, you know, which all goes hand in hand. 01:32:40.940 |
But it hopefully takes a lot of that away, makes it even more fun to code, frees you up more time to 01:32:47.220 |
create, problem solve, brainstorm with your fellow colleagues, and so on, right? 01:32:55.040 |
And second, I think, like, you know, it'll put the creative power in more people's hands, 01:33:05.960 |
That means there'll be more engineers doing more things. 01:33:10.540 |
But, you know, I think in general, in this moment, it feels like, you know, people adopt these tools 01:33:20.980 |
Like there are more people playing chess now than ever before, right? 01:33:25.180 |
So, you know, it feels positive that way to me, at least speaking from within a Google context. 01:33:32.140 |
It's how I would, you know, talk to them about it. 01:33:36.080 |
I just know anecdotally, a lot of great programmers are generating a lot of code. 01:33:41.880 |
So their productivity is up; they're not always using all the code, you know, there's still a lot of editing. 01:33:47.340 |
But, like, even for me, programming is a side thing. 01:33:55.300 |
I think that, even for a large code base that's touching a lot of users like Google's does, 01:34:02.960 |
I'm imagining, like, very soon that productivity should be going up even more. 01:34:08.860 |
The big unlock will be as we make the agent capabilities much more robust, right? 01:34:15.260 |
I think that's what unlocks that next big wave. 01:34:20.680 |
Like, you know, if tomorrow I showed up and said, like, you can improve a large organization's productivity by 10%. 01:34:27.020 |
When you have tens of thousands of engineers, that's a phenomenal number. 01:34:33.100 |
And, you know, that's different than what others cite as statistics saying, like, you know, like, this percentage of code is now written by AI. 01:34:40.700 |
I'm talking more about, like, overall actual productivity, right? 01:34:44.760 |
Engineering productivity, which is a different thing, and which is the more important metric. 01:34:54.740 |
And, like, you know, I think there's no engineer who, if you magically became 2x more productive tomorrow, 01:35:01.460 |
wouldn't just go create more things, create more value-added things. 01:35:04.620 |
And so I think you'll find more satisfaction in your job, right? 01:35:09.680 |
I mean, the actual Google code base might just improve because it'll become more standardized, 01:35:14.040 |
easier for people to move about the code base, because AI will help with that. 01:35:18.880 |
And therefore, that will also allow the AI to understand the entire code base better. 01:35:25.160 |
And so I've been using Cursor a lot as a way to program with Gemini and other models, and, like, 01:35:31.200 |
one of its powerful features is that it's aware of the entire code base. 01:35:38.160 |
It allows the agents to move about that code base in a really powerful way. 01:35:44.200 |
Think about, like, you know, migrations, refactoring old code bases. 01:35:50.700 |
I mean, think about, like, you know, once we can do all this in a much better, more robust way. 01:35:56.520 |
I think in the end, everything will be written in JavaScript and running Chrome. 01:36:03.700 |
I mean, just for fun, Google has legendary coding interviews, like, rigorous interviews 01:36:12.860 |
Can you comment on how that has changed in the era of AI? 01:36:16.760 |
It's just such a weird thing, you know; the whiteboard interview, I assume, is not allowed to have AI. 01:36:26.500 |
Look, I do think, you know, we're making sure, you know, we'll introduce at least one round 01:36:35.520 |
of in-person interviews for people just to make sure the fundamentals are there. 01:36:43.420 |
Look, if you can use these tools to generate better code, like, you know, I think that's an advantage. 01:36:49.120 |
And so, you know, I think, overall, it's a massive positive. 01:36:57.200 |
Do you recommend people, students interested in programming, still get an education in computer science? 01:37:07.220 |
If you have a passion for computer science, I would. 01:37:09.320 |
You know, computer science is obviously a lot more than programming alone. 01:37:13.300 |
I still don't think I would change what you pursue. 01:37:18.820 |
I think AI will horizontally impact every field. 01:37:29.120 |
So any education in which you're learning good first principles thinking, I think is good 01:37:39.140 |
You've revolutionized a lot of things over the years. 01:37:50.900 |
Is it possible Android becomes more and more AI-centric? 01:37:57.020 |
Especially now that you throw into the mix Android XR with being able to do augmented reality 01:38:04.880 |
and mixed reality and virtual reality in the physical world. 01:38:08.580 |
You know, the best innovations in computing have come when you go through a paradigm I/O change, 01:38:15.740 |
like, you know, with the graphical user interface and then with multi-touch. 01:38:26.100 |
Similarly, I feel like, you know, AR is that next paradigm. 01:38:30.440 |
I think it was held back by, one, the system integration challenges: making good AR is very, very hard. 01:38:37.820 |
The second thing is you need AI; otherwise, the I/O is too complicated. 01:38:44.540 |
For you to have a natural, seamless I/O in that paradigm, AI ends up being super important. 01:38:52.720 |
So this is why Project Astra ends up being super critical for that Android XR world. 01:39:01.920 |
I think when you use the glasses, you know, I've always been amazed at how useful these things are going to be. 01:39:09.260 |
So I, look, I think it's a real opportunity for Android. 01:39:12.800 |
I think XR is one way it'll kind of really come to life. 01:39:17.500 |
But I think there's an opportunity to rethink the mobile OS too, right? 01:39:21.440 |
I think we've been kind of living in this paradigm of, like, apps and shortcuts. 01:39:27.640 |
But again, like, if you're trying to get stuff done at an operating system level, you know, it needs to be more agentic so that you can kind of describe what you want to do. 01:39:39.820 |
Or, like, it proactively understands what you're trying to do, learns from how you're doing things over and over again, and kind of is adapting to you. 01:39:47.140 |
All that is kind of, like, the unlock we need to go and do. 01:39:50.240 |
With a basic, efficient, minimalist UI. I've gotten a chance to try the glasses, and they're incredible. 01:40:02.780 |
Even that little map demo where you look down, and you look up, and there's a very smooth transition between the two. 01:40:09.420 |
And useful, very small amount of useful information is shown to you. 01:40:15.860 |
Enough not to distract from the world outside, but enough to provide a bit of context when you need it. 01:40:21.840 |
And some of that, in order to bring that into reality, you have to solve a lot of the OS problems to make sure it works. 01:40:30.760 |
When you're integrating the AI into the whole thing. 01:40:33.300 |
So, everything you do launches an agent that answers some basic question. 01:40:40.820 |
But, you know, I think it's much closer to reality than other moonshots. 01:40:49.440 |
You know, we expect to have glasses in the hands of developers later this year, and, you know, in consumers' hands next year. 01:41:00.880 |
Beam, all this stuff, you know, because sometimes you don't know. 01:41:04.240 |
Like, somebody commented on a top comment on one of the demos of Beam. 01:41:10.360 |
They said this will either be killed off in five weeks or revolutionize all meetings in five years. 01:41:17.940 |
And there's very much Google tries so many things and sometimes, sadly, kills off very promising projects because there's so many other things to focus on. 01:41:33.880 |
Thank you, whoever is defending that because it's awesome. 01:41:39.320 |
I just want to list off a few things, just as a big thank you. 01:41:45.080 |
And all of these can be multi-hour conversations. 01:41:53.680 |
Incredible technological innovation on revolutionizing mapping. 01:42:02.060 |
For the academic mind, Google Scholar is incredible. 01:42:09.100 |
So, making all the world's knowledge accessible, even when that knowledge is a kind of niche thing, which Google Scholar is. 01:42:16.920 |
And then, obviously, with DeepMind, with AlphaZero, AlphaFold, AlphaEvolve. 01:42:26.940 |
And as part of that set of things you've released this year, even as those brilliant articles were written about how Google is done. 01:42:35.900 |
And, like we talked about, pioneering self-driving cars and quantum computing, which could be another thing that is, low-key, scuba diving. 01:42:49.780 |
So, another pothead slash micro-kitchen question. 01:42:54.260 |
If you build AGI, what kind of question would you ask it? 01:43:04.200 |
Suppose, definitively, Google has created AGI that can basically answer any question. 01:43:18.660 |
Maybe it's proactive by then and should tell me a few things I should know. 01:43:23.800 |
But I think if I were to ask it, I think it'll help us understand ourselves much better in a way that'll surprise us, I think. 01:43:36.900 |
You already see people do it with the products. 01:43:39.640 |
And so, but, you know, in an AGI context, I think that'll be pretty powerful. 01:43:43.700 |
At a personal level or a general human nature? 01:43:47.240 |
Like, you talking to AGI, I think, you know, there is some chance it'll kind of understand you in a very deep way. 01:43:57.920 |
I think, you know, in a profound way, that's a possibility. 01:44:01.240 |
I think there is also the obvious thing of, like, maybe it helps us understand the universe better, you know, in a way that expands the frontiers of our understanding of the world. 01:44:22.860 |
I think, you know, I haven't had access to something that powerful yet. 01:44:29.340 |
I think on the personal level, asking questions about yourself, or a sequence of questions like that about what makes me happy. 01:44:38.980 |
I think we would be very surprised to learn that, through those kinds of sequences of questions and answers, we might discover some profound truths, in the way that sometimes art reveals them to us, great books reveal them, great conversations with loved ones reveal them. 01:44:55.760 |
Things that are obvious in retrospect, but are nice when they're said. 01:44:59.620 |
But for me, number one question is about how many alien civilizations are there, 100%. 01:45:06.540 |
Number one, how many living and dead alien civilizations, maybe a bunch of follow-ups, like how close are they? 01:45:20.260 |
Or if there's no advanced alien civilizations, but bacteria like life everywhere, why? 01:45:25.500 |
What is the barrier preventing you from getting to that? 01:45:27.880 |
Is it because, when you get sufficiently intelligent, you end up destroying yourselves? 01:45:35.780 |
Because you need competition in order to develop an advanced civilization. 01:45:40.180 |
And when you have competition, it's going to lead to military conflict, and conflict eventually kills everybody. 01:45:46.080 |
I don't know, I'm going to have that kind of discussion. 01:45:49.500 |
Exactly, and like have a real discussion about it. 01:45:51.940 |
I'm not sure. I'm realizing now, with your answer, that yours is a more productive answer, because I'm not sure what I'm going to do with that information. 01:45:59.640 |
But maybe it speaks to the general human curiosity that Liz talked about, that we're all just really curious. 01:46:05.420 |
And making the world's information accessible allows our curiosity to be satiated some with AI even more. 01:46:14.400 |
We can be more and more curious and learn more about the world, about ourselves. 01:46:18.740 |
And in so doing, I always wonder, I don't know if you can comment on, like, is it possible to measure, not the GDP productivity increase like we talked about, but whatever measures the increase in the breadth and depth of human knowledge that Google has unlocked with Google Search. 01:46:41.320 |
And now with AI mode, with Gemini, it's a difficult thing to measure. 01:46:47.760 |
Many years ago, there was, I think, an MIT study. 01:46:50.120 |
They just estimated the impact of Google search. 01:46:55.260 |
And they basically said it's the equivalent, on a per-person basis, of a few thousand dollars per year per person, right? 01:47:03.060 |
Like it's the value that got created per year, right? 01:47:07.060 |
And, and, but it's, yeah, it's tough to capture these things, right? 01:47:11.280 |
You kind of take it, take it for granted as these things come. 01:47:15.580 |
And, and the frontier keeps moving, but, you know, how do you measure the value of something like alpha fold over time, right? 01:47:25.200 |
And also the increase in quality of life when you learn more. 01:47:27.880 |
I have to say, like, with some of the programming I do now being done by AI, for some reason, I'm more excited to program. 01:47:36.040 |
And so the same with knowledge, with discovering things about the world, it makes you more excited to be alive. 01:47:43.600 |
It makes you more curious, and the more curious you are, the more exciting it is to live and experience the world. 01:47:50.800 |
And it's very hard to... I don't know if that makes you more productive. 01:47:53.060 |
Probably not nearly as much as it makes you happy to be alive. 01:48:00.580 |
But some of these things do increase quality of life. 01:48:05.460 |
As AI continues to get better and better at everything that humans do, what do you think is the biggest thing that makes us humans special? 01:48:11.700 |
Look, I think it's tough to... I mean, the essence of humanity, there's something about, you know, the consciousness we have, what makes us uniquely human. 01:48:27.980 |
Maybe the lines will blur over time, and it's tough to articulate, but hopefully, 01:48:35.260 |
you know, we live in a world where you make resources more plentiful and make the world less of a zero-sum game over time. 01:48:48.060 |
And so my aspirational hope is that the values of what makes us uniquely human, empathy, kindness, all that, surface more. 01:49:10.820 |
Yeah, it multiplies the compassion, but also the curiosity, just the banter, the debates we'll 01:49:17.460 |
have about the meaning of it all. And I also think, in the scientific domains, with all the incredible 01:49:22.980 |
work that DeepMind is doing, we'll still continue to play, to explore scientific questions, 01:49:31.320 |
mathematical questions, physics questions, even as AI gets better and better at helping us solve 01:49:38.420 |
some of the questions. Sometimes the question itself is a really difficult thing, both the right new 01:49:44.300 |
questions to ask and the answers to them, and the self-discovery process it'll drive. I 01:49:51.380 |
think, you know, our early work with both Co-Scientist and AlphaEvolve is just super exciting to see. 01:49:58.320 |
What gives you hope about the future of human civilization? Look, I've always... I'm an optimist. And 01:50:08.460 |
now, if you were to, say, take the journey of human civilization, it's been, you know, we've 01:50:16.120 |
relentlessly made the world better, right? In many ways. At any given moment in time, there are big 01:50:22.760 |
issues to work through, and it may look that way, but, you know, I always ask myself the question: would you have been 01:50:28.580 |
born now or any other time in the past? I most often, not most often, almost always, would rather be born 01:50:37.980 |
now, right? You know, and so that's the extraordinary thing human civilization has accomplished, right? 01:50:44.700 |
And, like, you know, we've constantly made the world a better place. And so something tells me, 01:50:51.860 |
as humanity, we always rise collectively to drive that frontier forward. So I expect it to be no 01:50:59.280 |
different in the future. I agree with you, totally. I'm truly grateful to be alive in this moment, and 01:51:04.900 |
I'm also really excited for the future. And the work you and the incredible teams here are doing is 01:51:11.140 |
one of the big reasons I'm excited for the future. So thank you. Thank you for all the cool products 01:51:16.020 |
you've built. And please don't kill Google Voice. Thank you, we won't. Yeah, thank you for talking today. 01:51:23.720 |
This was incredible. Thank you. Real pleasure, I appreciate it. Thanks for listening to this 01:51:27.940 |
conversation with Sundar Pichai. To support this podcast, please check out our sponsors in the 01:51:33.080 |
description or at lexfridman.com/sponsors. Shortly before this conversation, I got a chance to 01:51:39.940 |
get a couple of demos that, frankly, blew my mind. The engineering was really impressive. 01:51:45.780 |
The first demo was Google Beam, and the second demo was the XR glasses. And some of it was caught on 01:51:55.260 |
video, so I thought I would include here some of those video clips. Hey, Lex, my name is Andrew. I lead 01:52:03.320 |
the Google Beam team, and we're excited to show you a demo. We're going to show you, I think, a 01:52:07.260 |
glimpse of something new. So that's the idea: a way to connect, a way to feel present from anywhere, with 01:52:12.000 |
anybody you care about. Here's Google Beam. This is a development platform that we've built. So there's a 01:52:18.740 |
prototype here of Google Beam. There's one right down the hallway. I'm going to go down and turn 01:52:23.540 |
that on in a second. We're going to experience it together. We'll be back in the same room. Wonderful. 01:52:27.240 |
Whoa. Whoa. Okay, here we are. All right, this is real already. Wow, this is real. Wow. This is Google Beam. 01:52:37.580 |
We're trying to make it feel like you and I could be anywhere in the world, but when these magic windows 01:52:42.220 |
open, we're back together. I see you exactly the same way you see me. It's almost like we're sitting at the 01:52:48.860 |
table, sharing a table together. I could learn from you, talk to you, share a meal with you, get to know you. 01:52:54.600 |
So you can feel the depth of this. Yeah. Wow. So, for people who probably can't even imagine what this 01:53:01.720 |
looks like, there's a 3D version. It looks real. You look real. It looks real to you? 01:53:07.340 |
It looks like you're coming out of the screen. We quickly believe, once we're in Beam, that we're 01:53:13.140 |
just together. You settle into it. You're naturally attuned to seeing the world like this, and you just 01:53:18.620 |
get used to seeing people this way, but literally from anywhere in the world, with these magic screens. 01:53:22.900 |
This is incredible. It's a neat technology. Wow. So I saw demos of this, but they don't come close to the 01:53:29.080 |
experience of this. I think one of the top YouTube comments on one of the demos I saw was like, why 01:53:33.320 |
would I want a high-definition... I'm trying to turn off the camera. But this actually, this feels like 01:53:38.660 |
the camera has been turned off and we're just in the same room together. This is really compelling. 01:53:43.280 |
That's right. I know it's kind of late in the day, too, so I brought you a snack, just in case you're a 01:53:48.620 |
little bit hungry. So can you push it farther, and it just becomes... Let's try to float it 01:53:53.900 |
between rooms. You know, it kind of fades it from my room, and then... And then you see my hand, the depth of 01:53:58.520 |
my hand. Of course. Yeah, of course. Yeah, it feels like you... Try this: try to give me a high five. And 01:54:02.700 |
there's almost a sensation of feeling touch. Yeah, you almost feel it, yes, because you're so attuned to, you 01:54:07.700 |
know, that should be a high five, it feeling like you could connect with somebody that way. So it's kind 01:54:11.800 |
of a magical experience. Oh, this is really nice. How much does it cost? We've got a lot of companies testing 01:54:18.240 |
it. We just announced that we're going to be bringing it to offices soon as a set of products. 01:54:21.940 |
We've got some companies helping us build these screens, but eventually I think this will be in 01:54:25.860 |
almost every screen. There's nothing... I'm not wearing anything. Well, I'm wearing a suit and tie, to clarify. 01:54:31.440 |
I am wearing clothes. This is not CGI. But outside of that... Cool. And the audio is really good. And you 01:54:38.360 |
see me in the same three-dimensional way? Yeah, the audio is spatialized. So if I'm talking from here, of 01:54:43.780 |
course it sounds like I'm talking from here, you know, and if I move to the other side of the room... Wow. 01:54:47.600 |
So these little subtle cues, these really matter to bring people together. All the non-verbals, all 01:54:53.400 |
the emotion, the things that are lost today, here it is, we put it back into the system. You pulled this 01:54:58.300 |
off. Holy shit, they pulled it off. And integrated into this, I saw the translation also, right? This is... 01:55:05.720 |
Yeah, we've got a bunch of things. Let me show you a couple of kind of cool things. Let's do a little bit of 01:55:09.320 |
work together. Maybe we could critique one of your latest... So, you know, it's you and I working 01:55:17.360 |
together, so of course we're in the same room. But with this superpower, I can bring other things in here 01:55:21.560 |
with me. And it's nice, you know, it's like we could sit together, we could watch something, we 01:55:27.160 |
could work. We've shared meals as a team together in this system. But once you do the presence aspect, 01:55:33.160 |
you want to bring some other superpowers to it. And so you could review code together. 01:55:37.760 |
Yeah, yeah, exactly. I've got some slides I'm working on, you know, maybe you could help me with 01:55:42.420 |
this. Keep your eyes on me for a second. I'll slide back into the center. I didn't really move, but the 01:55:47.340 |
system just kind of puts us in the right spot and knows where we need to be. Oh, so you just turn to 01:55:50.960 |
your laptop, the system moves you, and then it does the overlay automatically. It kind of morphs the room 01:55:56.420 |
to put things in the spot that they need to be in. Everything has a place in the room. Everything 01:56:00.840 |
has a sense of presence, or spatial consistency, and that kind of makes it feel like we're together 01:56:05.440 |
with us. Another thing I should also say: you're not just three-dimensional. It feels like you're 01:56:10.560 |
leaning, like, out of the screen. You're, like, coming out of the screen. You're not just in that world, 01:56:17.940 |
three-dimensional. Yeah, exactly. Holy crap. Move back to center. Okay, okay. Okay, let me find out how this works. 01:56:24.780 |
You probably already have the premise of it, but there are two really hard things that 01:56:29.660 |
we put together. One is an AI video model. So there's a set of cameras, and you asked kind of about those 01:56:35.500 |
earlier. There are six color cameras, just like webcams that we have today, taking video streams and feeding 01:56:41.600 |
them into our AI model and turning that into a 3D video of you and I. It's effectively a light field, 01:56:47.200 |
so it's kind of an interactive 3D video you can see from any perspective. That's transmitted over to the 01:56:52.620 |
second thing, and that's a light field display. And it's happening bidirectionally: I see you and you see me, 01:56:57.220 |
both in our light field displays. These are effectively flat televisions, or flat displays, but they have this 01:57:03.480 |
sense of dimensionality. Depth and size are correct, you can see shadows and lighting are correct, and everything's 01:57:11.000 |
correct from your vantage point. So if you move around ever so slightly, and I hold still, you see a 01:57:16.340 |
different perspective here. You see kind of things that were occluded become revealed, you see shadows 01:57:20.880 |
that, you know, move in the way they should move. All of that's computed and generated using our AI video 01:57:26.020 |
model. For you, it's based on your eye position: where does the right scene need to be placed in this light 01:57:31.720 |
field display for you just to feel present? It's real time, no latency. I'm not seeing, like... You weren't 01:57:36.340 |
freezing up at all? No, no, I hope not. I think it's you and I together, real time. That's what you need for real 01:57:41.780 |
communication, and at a quality level... This is awesome. Realistic. Is it possible to do three people? Like, is 01:57:48.740 |
it going to move that way also? Yeah, let me kind of show you. So if she enters the room with us, 01:57:53.860 |
you can see her, you can see me. And if we had more people, you eventually lose the sense of presence. You 01:58:00.100 |
kind of shrink people down, you lose a sense of scale. So think of it as: the window fits a certain number of 01:58:05.460 |
people. If you want to fit a big group of people, you want, you know, the boardroom or the big room, 01:58:09.860 |
you need, like, a much wider window. If you want to see, you know, just grandma and the kids, you can do 01:58:14.900 |
smaller windows. So everybody has a seat at the table, or everybody has a sense of where they belong, and 01:58:20.020 |
there's kind of this sense of presence that's obeyed. If you have too many people, you kind of go back to, 01:58:23.860 |
like, 2D metaphors that we're used to: people in tiles, placed anywhere. For the image I'm seeing, did you have 01:58:28.260 |
to get scanned? I mean, I see you without being scanned. So it's just so much easier if you don't 01:58:32.580 |
have to wear anything, you don't have to pre-scan. You just do it the way it's supposed to happen, 01:58:36.740 |
without anybody having to learn anything or put anything on. I thought you had to solve the 01:58:40.580 |
scanning problem, but here you don't. It's just cameras, it's just vision, it's video. Yeah, we're not 01:58:47.780 |
trying to kind of make an approximation of you, because everything you do every day matters. You know, 01:58:52.500 |
if I cut myself, anything I put on, a pin, all the little kind of, you know, aspects of you, 01:58:58.260 |
those just happen. We don't have the time to scan or kind of capture those, or dress avatars. 01:59:03.060 |
We kind of appear as we appear, and so all that's transmitted truthfully as it's happening. 01:59:08.100 |
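As an aside: the view-dependent rendering Andrew describes, where your eye position determines what the light field display shows, can be illustrated with a toy sketch. To be clear, this is not Google Beam's actual pipeline, which uses a learned AI video model over six camera streams; the camera layout, the simple two-view blending, and all numbers below are invented purely for illustration.

```python
# Toy sketch of eye-position-dependent view selection for a light field:
# given the viewer's eye position, blend the two nearest captured camera views.
# NOT Beam's real system (which uses a learned AI video model); the camera
# positions, frame contents, and linear blending here are invented.

import numpy as np

# Assume six cameras spaced horizontally across the display (meters).
camera_x = np.linspace(-0.5, 0.5, 6)

# Stand-in for six synchronized video frames (H x W x RGB), one per camera.
frames = [np.full((4, 4, 3), fill_value=i, dtype=float) for i in range(6)]

def render_view(eye_x: float) -> np.ndarray:
    """Linearly interpolate between the two cameras nearest the eye position."""
    eye_x = float(np.clip(eye_x, camera_x[0], camera_x[-1]))
    right = int(np.searchsorted(camera_x, eye_x))
    if right == 0:
        # Eye at (or clipped to) the leftmost camera: use its view directly.
        return frames[0]
    left = right - 1
    t = (eye_x - camera_x[left]) / (camera_x[right] - camera_x[left])
    return (1 - t) * frames[left] + t * frames[right]

# An eye exactly at camera 2's position reproduces camera 2's frame.
view = render_view(camera_x[2])
print(view[0, 0, 0])  # 2.0
```

A real light field display does this per-pixel and in both directions at once, with occlusion and lighting handled by the learned model rather than naive blending, but the core idea, picking what to show as a function of where the eyes are, is the same.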
Chris. Hello, how you doing? Good to meet you. So, as Max mentioned about the AI glasses, here we start 01:59:14.980 |
with a foundation of great glasses: something stylish, lightweight, wearable. I'm going to show how you can 01:59:19.780 |
build great technology and experiences on top of that. One of the core tenets of the Android XR 01:59:24.820 |
platform is this idea of a multimodal, conversational device that can see what you see and hear what you hear. So you've 01:59:29.780 |
got a camera, you've got speakers, multiple microphones for speaker isolation. I'll give you a chance to 01:59:35.380 |
try these yourself. Yeah. Sorry, I won't put it up there. Whoa. Yeah, so the first thing you see is a super 01:59:42.420 |
simple, straightforward home screen. So you probably see the time, the weather, calendar appointments there. This is 01:59:49.460 |
designed to be sort of your one-stop shop for quick, glanceable information throughout the day. 01:59:53.220 |
We want to do something that's, um, easy to, you know, get what you need and then go back to what you're 01:59:57.220 |
doing. So you can imagine, you know, turning on the display, getting that quick glance, and then 02:00:01.700 |
continuing on your way. You can be fully conversational at this point. You can ask 02:00:05.220 |
questions, for example, about the paintings. You can interrupt, you can ask follow-up questions. 02:00:08.900 |
And as I mentioned before, if you want to pause at any time, you just tap there on the right. 02:00:12.500 |
Gemini, how much does this painting cost? The painting is called "The Pont Louis-Philippe," 1875, 02:00:19.620 |
by Armand Guillaumin. It appears to be a print, and it would be impossible to know its cost without 02:00:24.500 |
finding the seller. Okay, so this is fake, I understand. Why do humans pay so much money for paintings? People 02:00:30.740 |
pay a lot for paintings because of their artistic merit, the artist's fame, the piece's rarity, 02:00:35.380 |
and their appreciation. So, a few more things that we want to show you, just for the sake of time. If you 02:00:40.980 |
go ahead and long-press on the side again to put Gemini to sleep... There, um... There you go. Did you catch 02:00:46.980 |
Google I/O last week, by any chance? So you might have seen on stage the Google Maps experience, very 02:00:52.020 |
briefly. I want to give you a chance to get a sense of what that feels like. Today, you can imagine you're 02:00:56.260 |
walking down the street. If you look up, like you're walking straight ahead, you get quick turn-by-turn 02:01:01.060 |
directions, so you have a sense of what the next turn is. Like, nice, and your phone's in your pocket. Oh, that's 02:01:06.740 |
so intuitive. Sometimes you need that quick sense of which way is the right way. Sometimes, yeah. So, 02:01:12.180 |
let's say you're, every time, getting out of a cab, you can just glance down at your feet. We have 02:01:16.180 |
it set up to translate from Russian to English. I think I get to wear the glasses, and you speak to me, if 02:01:20.980 |
you don't mind. I can speak Russian. I'm doing well, how are you doing? 02:01:30.500 |
Tempted to swear, tempted to say inappropriate things. 02:01:35.220 |
I see it transcribed in real time. And so obviously, you know, based on the different languages and the 02:01:47.380 |
sequence of subjects and verbs, there's a slight delay sometimes. But it's really 02:01:50.820 |
just like subtitles for the real world. Cool. Thank you for this. All right, back to me. Hopefully, watching 02:01:56.180 |
videos of me having my mind blown, like the apes in 2001: A Space Odyssey playing with the monolith, 02:02:03.540 |
was somewhat interesting. Like I said, I was very impressed. And now, I thought, if it's okay, I could 02:02:09.620 |
make a few additional comments about the episode and just, in general... In this conversation with Sundar 02:02:14.500 |
Pichai, I discussed the concept of the Neolithic package, which is the set of innovations that came 02:02:20.900 |
along with the first agricultural revolution about 12,000 years ago, which included the formation of 02:02:27.060 |
social hierarchies, early primitive forms of government, labor specialization, domestication of plants and 02:02:34.420 |
animals, early forms of trade, and large-scale cooperation of humans, like that required to build, yes, the pyramids, and 02:02:43.860 |
temples like Göbekli Tepe. I think this may be the right way to actually 02:02:49.380 |
talk about the inventions that changed human history: not just as a single invention, but as a 02:02:56.260 |
kind of network of innovations and transformations that came along with it. And the productivity 02:03:03.220 |
multiplier framework that I mentioned in the episode, I think, is a nice way to try to concretize and 02:03:09.300 |
analyze the impact of each of these inventions under consideration. And we have to remember that each 02:03:14.980 |
node in the network of the sort of fast follow-on inventions is in itself a productivity multiplier. 02:03:22.340 |
Some are additive, some are multiplicative. So, in some sense, the size of the network in the package 02:03:30.900 |
is the thing that matters when you're trying to rank the impact of inventions on human 02:03:39.460 |
history. The easy picks for the period of biggest transformation, at least in modern-day 02:03:47.140 |
discourse, are the industrial revolution, or, even in the 20th century, the computer or the internet. 02:03:55.300 |
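The additive-versus-multiplicative distinction above can be made concrete with a tiny sketch. All numbers here are invented purely for illustration, not measurements of any real invention.

```python
# Toy illustration: each follow-on innovation in a "package" acts as a
# productivity multiplier. Additive gains stack linearly; multiplicative
# gains compound. All numbers are invented for illustration only.

additive_gains = [0.2, 0.1, 0.15]       # e.g. small process tweaks: +20%, +10%, +15%
multiplicative_gains = [1.5, 2.0, 1.3]  # e.g. innovations that scale each other

base = 1.0
additive_total = base + sum(additive_gains)

multiplicative_total = base
for m in multiplicative_gains:
    multiplicative_total *= m

print(f"additive package:       {additive_total:.2f}x")        # 1.45x
print(f"multiplicative package: {multiplicative_total:.2f}x")  # 3.90x
```

The point is that when a package's innovations multiply each other, as plausibly happened with the Neolithic package, the combined effect grows much faster with the size of the network than a simple sum of the individual gains would suggest.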
I think it's because it's easiest to intuit, for modern-day humans, the impact, the exponential impact, 02:04:02.900 |
of those technologies. But recently, and I suppose this changes week to week, I have been doing a lot of 02:04:09.300 |
reading on ancient human history. So recently, my pick for the number one invention would have to be 02:04:15.700 |
the first agricultural revolution, the Neolithic package, that led to the formation of human civilizations. 02:04:23.140 |
That's what enabled the scaling of the collective intelligence machine of humanity, and for us to 02:04:29.700 |
become the early bootloader for the next 10,000 years of technological progress, which, yes, includes AI and the 02:04:37.300 |
tech that builds on top of AI. And of course, it could be argued that the word "invention" doesn't properly apply 02:04:44.580 |
to the agricultural revolution. I think, actually, Yuval Noah Harari argues that it wasn't the humans 02:04:52.180 |
who were the inventors, but a handful of plant species, namely wheat, rice, and potatoes. This is, strictly 02:05:01.860 |
speaking, a fair perspective, but I'm having fun, like I said, with this discussion here. I just think of the entire 02:05:08.260 |
Earth as a system that continuously transforms, and I'm using the term "invention" in that context, 02:05:14.420 |
asking the question of when was the biggest leap on the log-scale plot of human progress. 02:05:22.580 |
Will AI, AGI, ASI eventually take the number one spot on this ranking? I think it has a very good chance to 02:05:30.020 |
do so, due, again, to the size of the network of inventions that will come along with it. I think we discussed in 02:05:38.820 |
this podcast the kind of things that would be included in the so-called AI package, but I think 02:05:45.940 |
there are a lot more possibilities, including those discussed in previous podcasts, and many previous podcasts, 02:05:52.660 |
including with Dario Amodei, talking on the biological innovation side, the science progress side. 02:05:59.060 |
In this podcast, I think we talk about something that I'm particularly excited about in the near term, which is 02:06:04.580 |
unlocking the cognitive capacity of the entire landscape of brains that is the human species: 02:06:12.180 |
making it more accessible through education and through machine translation, making information, 02:06:19.540 |
knowledge, and the rapid learning and innovation process accessible to more humans, 02:06:27.700 |
to the entire eight billion, if you will. So I do think language, or machine translation, applied to 02:06:34.500 |
all the different methods that we use on the internet to discover knowledge, 02:06:39.060 |
is a big unlock. But there's a lot of other stuff in the so-called AI package, like, as discussed 02:06:45.700 |
with Dario, curing all major human diseases. He really focuses on that in the "Machines of Loving Grace" 02:06:52.980 |
essay. I think there will be huge leaps in productivity for human programmers and semi-autonomous human 02:06:59.860 |
programmers, so humans in the loop, but most of the programming done by AI agents, and then moving 02:07:05.380 |
that towards a superhuman AI researcher that's doing the research that develops and programs the AI system 02:07:14.420 |
itself. I think there will be huge transformative effects from autonomous vehicles. These are the things that we 02:07:20.420 |
maybe don't immediately understand, or we understand from an economics perspective, but there will be 02:07:26.660 |
a point when AI systems are able to interpret, understand, and interact with the human world to a 02:07:35.780 |
sufficient degree to where many of the manually controlled, human-in-the-loop systems we rely on 02:07:41.700 |
become fully autonomous. And I think mobility is such a big part of human civilization that there will be 02:07:49.220 |
effects of that that are not just economic, but social, cultural, and so on. And there are a lot 02:07:56.500 |
more things I could talk about for a long time. So, obviously, the integration and utilization of AI in 02:08:03.860 |
the creation of art, film, music. I think the digitalization and automating of basic functions of government, and 02:08:13.140 |
then integrating AI into that process, thereby decreasing corruption and costs, and increasing transparency and 02:08:20.740 |
efficiency. I think we as humans, individual humans, will continue to transition further and further into cyborgs, 02:08:30.740 |
sort of. There's already AI in the loop of the human condition, and that will 02:08:38.020 |
become increasingly so as the AI becomes more powerful. The thing I'm obviously really excited 02:08:45.060 |
about is major breakthroughs in science, and not just on the medical front, but in physics, fundamental 02:08:51.460 |
physics, which would then lead to energy breakthroughs, increasing the chance that we actually become a 02:08:58.500 |
Kardashev Type I civilization, and then enabling us, in so doing, to do interstellar exploration 02:09:05.300 |
and colonization of space. I think there's also, in the near term, much like with the industrial revolution 02:09:15.300 |
that led to rapid specialization of skills, of expertise, there might be a great sort of despecialization. 02:09:24.660 |
So as AI systems become superhuman experts in particular fields, there might be greater and greater 02:09:32.500 |
value in being the integrator of AIs, in humans being sort of generalists. And so the great value of the human 02:09:43.780 |
mind will come from the generalists, not the specialists. That's a real possibility, that it changes the way we are in the world, that we want to know a little bit of a lot of things 02:09:54.500 |
and move about the world in that way. That could be, when passing a certain 02:09:58.500 |
threshold, a complete shift in who we are as a collective intelligence, as a human species. 02:10:05.540 |
Also, as an aside, when thinking about the invention that was the greatest in human history, again, for a bit of 02:10:11.380 |
fun, we have to remember that all of them build on top of each other, and so we need to look at the delta, the 02:10:17.940 |
step change, on the, I would say, impossible-to-perfectly-measure plot of exponential human progress. Really, we 02:10:25.460 |
can go back to the entire history of life on Earth, and a previous podcast guest, Nick Lane, does a great 02:10:32.100 |
job of this in his book "Life Ascending," listing these ten major inventions throughout the evolution of life on Earth, 02:10:39.940 |
like DNA, photosynthesis, complex cells, sex, movement, sight, all those kinds of things. I forget the full 02:10:49.540 |
list that's in there, but I think that's so far from the human experience that my intuition about, 02:10:55.620 |
let's say, productivity multipliers of those particular inventions completely breaks down, and a different 02:11:03.220 |
framework is needed to understand the impact of these inventions of evolution. The origin of life on Earth, 02:11:10.660 |
or even the Big Bang itself, of course, is the OG invention that set the stage for all the rest of 02:11:18.100 |
it, and there are probably many more turtles under that which are yet to be discovered. So, anyway, 02:11:26.980 |
we live in interesting times, fellow humans. I do believe the set of positive trajectories for humanity 02:11:34.420 |
outnumbers the set of negative trajectories, but not by much. So let's not mess this up. And now, let me leave 02:11:42.980 |
you with some words from the French philosopher Jean de La Bruyère: "Out of difficulties grow miracles." Thank you for 02:11:52.500 |
listening, and hope to see you next time. 02:11:54.820 |