Google DeepMind C.E.O. Demis Hassabis on Living in an A.I. Future | EP 137

Chapters
0:00 Coming up
0:47 Kevin and Casey go to the circus
1:08 This week on the show
1:40 What we saw at Google I/O 2025
23:16 Demis Hassabis on the state of Google A.I.
52:03 More with Demis Hassabis on the future of society and A.I.
This year, Google talked about A.I. very differently. This time they want you to sit up, they want you to lean in, they want you to pay them $250, and they want you to get to work.

I've been working every hour there is for the last 20 years because I felt how important and momentous this technology would be. Whether it's five years or ten years or two years, they're all actually quite short timelines when you're discussing the enormity of the transformation this technology is going to bring.

When I see a Van Gogh, you know, the hairs go up on the back of my spine, because I remember what they went through, the struggle to produce that, right, in every one of Van Gogh's brush strokes. Even if the A.I. mimicked that and you were told that it was, it's like: so what?
Now, there's a very large what looks like a circus tent over there. What do you think's going on in there?

That is the amphitheater.

Oh, that's the amphitheater, under that tent?

Yes.

I thought that was just some carnival that they were setting up for employees. Okay, my mistake. I thought Ringling Brothers had entered into a partnership with a revival tent. They're bringing Christianity back.
I'm Kevin Roose, a tech columnist at The New York Times.

I'm Casey Newton from Platformer. And this is Hard Fork. This week: our field trip to Google. We'll tell you all about everything the company announced at its biggest show of the year. Then, Google DeepMind CEO Demis Hassabis returns to the show to discuss the road to A.G.I., the future of education, and what life could look like in 2030. Kevin being very old, for starters.

Somebody did text me to ask why I freaking yell the name of the show every episode.

Did you say it's because I started yelling my name?

I said it's because of the cold brew.
Well, Casey, our decor is a little different this week.

It's, I'll say it: it looks better.

Yes. We are not in our normal studio in San Francisco. We are down in Mountain View, California, where we are inside Google's headquarters.

I'm just thrilled to be sitting here surrounded by so much training data. That's what they call books here at Google.

So we are here because this week is Google's annual developer conference, Google I/O. There were many, many announcements from a parade of Google executives about all the A.I. stuff that they have coming, and we are going to talk in a little bit with Demis Hassabis, who is the CEO of Google DeepMind, essentially their A.I. division, and who's been driving a lot of these A.I. projects forward. But first, let's just sort of set the scene for people, because I don't think we've ever been together at an I/O before. So what is it like?

So Google I/O has a bit of a festival atmosphere. It takes place at the Shoreline Amphitheatre, which is a concert venue, but once a year it gets transformed into a sort of nerd concert where, instead of seeing musicians perform, you see Google employees vibe coding on stage.

Yes, there's a vibe coding demo. There were many other things. I did actually see, as I was leaving, the Google a cappella group, Googapella, sort of doing their warm-ups in anticipation of doing some concert. So you've got some old-school Google vibes here, but also a lot of excitement around all the A.I. stuff.

Now, I didn't see Googapella perform. Where was this performance?

I didn't see them perform either. I just saw them warming up. They were sort of doing their scales. They sounded great.

You know what? I bet it was a classic a cappella situation where they warmed up and someone came up to them and said: please don't perform.
All right, Kevin. Well, before we get into it, shall we say our disclosures?

Yes. I work for The New York Times, which is suing OpenAI and Microsoft over copyright violations related to the training of A.I. systems.

And my boyfriend works at Anthropic, which Google has invested in.

Oh, that's right. Yeah. So let's talk about some of what was announced this week. There was so, so much that we can't get to all of it, but what were the highlights from your perspective?

Well, so, look, I wrote a column about this, Kevin. I felt a little bit like I was in a fever dream at this conference. You know, I think often it is the case at a developer conference that they'll sort of try to break it out into one, two, three big bullet points. This one felt a little bit like a fire hose of stuff, and so by the end I'm looking at my notes saying: okay, so email's gonna start writing in my voice, and I can turn my PDFs into video TED talks. Sure, why not? So I had a little bit of a fever-dream mentality. What was your feeling?

Yeah. I told someone yesterday that I thought the name of the event should have been Everything Everywhere All at Once, because that actually did feel like what they were saying: every Google product that you use is going to have more A.I., that A.I. is going to be better, and it is all going to make your life better in various ways. But it was a lot to keep track of.

Yeah. I mean, look, if we were going to try to pull out one very obvious theme from everything that we saw, it was: A.I. is coming to all of the things. And it's probably worth drilling down a little bit into what some of those things are.

Yeah. So the thing that got my attention, and I was sitting right next to you, the one time when I really noticed you perking up was when they started talking about this new AI Mode in Google Search, their core search product. So talk about AI Mode and what they announced yesterday.
So, Kevin, this gets a little confusing, because there are now three different kinds of major Google searches, I would say. There is the normal Google Search, which is now augmented in many cases by what they call AI Overviews, which is sort of an A.I. answer at the top.

Yeah, that's the little thing that will tell you what the meaning of phrases like "you can't lick a badger twice" is.

That's right. And if you don't know the meaning of that, Google it. So that's sort of thing one. Thing two is the Gemini app, which is kind of like a one-for-one ChatGPT competitor that's in its own, you know, standalone app, standalone website. And then the big thing that they announced this week was AI Mode, which has been in testing for a little while, and I think this sort of lands in between the first two things, right? It is a tab now within Search, and this is rolling out to everybody in the United States and a few other countries. And you sort of tap over there, and now you can have the sort of longer, you know, multi-step questions that you might have with a Gemini or a ChatGPT, but you can do it right from the Google Search interface.

Yeah. And I've been playing with this feature for a few weeks now. It was in their Labs section, so you could try it out if you were enrolled in that. And it's really nice. It's a very clean thing. There's no ads yet; they will probably appear soon. It does this thing called the fan-out, which is very funny to me. You ask it a question, and it kind of dispatches a bunch of different Google searches to crawl a bunch of different web pages and bring you back the answer. And it actually tells you how many searches it is doing and how many different websites it's consulting. So I asked it, for example: how much does a Costco membership cost? It searched 72 websites for the answer to that question. So AI Mode is very, very eager to answer your question, even if it does verge on overkill sometimes.
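[To make that fan-out idea concrete, here is a minimal sketch of what dispatching parallel sub-searches and pooling the results might look like. Everything here, including the function names and the toy search backend, is invented for illustration; it is not Google's actual implementation.]

```python
# Hypothetical sketch of a "query fan-out": one user question is expanded
# into several sub-queries, each searched in parallel, and the snippets are
# merged into a single context a model could answer from.
import asyncio

async def search(query: str) -> list[str]:
    # Stand-in for a real search backend; returns snippet strings.
    await asyncio.sleep(0.1)  # simulate network latency
    return [f"snippet for: {query}"]

def expand(question: str) -> list[str]:
    # In a real system a model would generate the sub-queries; we hardcode some.
    return [
        f"{question} official site",
        f"{question} price 2025",
        f"{question} reviews",
    ]

async def fan_out(question: str) -> str:
    sub_queries = expand(question)
    # Dispatch all sub-searches concurrently, like AI Mode's parallel searches.
    results = await asyncio.gather(*(search(q) for q in sub_queries))
    snippets = [s for group in results for s in group]
    # A real system would hand these snippets to an LLM to synthesize an answer.
    return f"Consulted {len(snippets)} sources: " + "; ".join(snippets)

if __name__ == "__main__":
    print(asyncio.run(fan_out("how much does a Costco membership cost")))
```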
Yeah. Well, so, you know, you and I had a chance to meet with Robbie Stein, who is one of the people who is leading AI Mode, and I was surprised by how enthusiastic about it you were. Like, you said that you've really actually found this quite useful, in a way that I think I have not so far. So what are you noticing about this?

I mean, the main thing is it's just such a clean experience. On a regular Google search results page, and you and I have talked about this, it has just gotten very cluttered. There's a lot of stuff there. There's ads, there's carousels of images, there's sometimes a shopping module, there's sometimes a maps module. It's hard to actually find the blue links sometimes. And I imagine that AI Mode will become more cluttered as they try to make more money off of it, but right now, if you go to it, it's a much simpler experience. It's much easier to find what you're looking for.

Yeah. And at the same time, they're also trying to do some really interestingly complex stuff. One of the things that they showed off during the keynote was somebody asked a question about baseball statistics that required finding, you know, three or four different kind of tricky-to-locate stats and then combining them all together in an interactive chart. That was just a demo, we don't have access to that yet, but that is one of those things where it's like: well, if that works, that could be a meaningful improvement to search.

Yeah, it could be a meaningful improvement to search. And we should also say it's a big unknown how all of this will affect the main Google Search product, right? For now it's a tab. They have not merged it into the main, core Google Search, in part because it's not monetized yet; it costs a lot more to serve those results than a traditional Google search. But I imagine over time these things will kind of merge, which will have lots of implications for publishers, for people who make things on the internet, for the whole economic model of the internet. But before we get dragged down that rabbit hole, let's just talk about a few other things that they said on stage at Google I/O.
So I was really struck by the usage numbers that they trotted out for their products. Gemini, according to them, the app now has 400 million monthly users. That is a lot. That is not quite as many as ChatGPT, but it is a lot more than products like Claude and other A.I. chatbots. They said that the tokens being output by Gemini have increased 50 times since last year. So people are using this stuff, in other words. This is not just some feature that Google is shoving into these products that people are trying to navigate around. People are really using Gemini.

I think that that's right, and I think the Gemini number in particular is the one that struck me. 400 million is a lot of people, and I don't see that many obvious ways that Google could be, like, faking that stat. In contrast to, for example: they said one and a half billion people see AI Overviews every month. It's like, well, yeah, you just put them in Google search results. That's an entirely passive phenomenon. But Gemini, you've got to go to the website, you've got to download the app. So that tells me that people actually are finding real utility there.

So that's Gemini, but they also released a bunch of other stuff, like new image and video models. Do you want to talk about those?

Yeah. So, you know, like the other companies, they're working on text-to-image and text-to-video, and while OpenAI's models have gotten most of the attention in this regard, Google's really are quite good. I think the marquee feature for this year's I/O is that the video-generating model, Veo 3, can also generate sound. So it showed us a demo, for example, of an owl flapping its wings, and you hear the wings flap. It comes down to the ground, there's this sort of nervous badger character, and they exchanged some dialogue, which was basically incomprehensible, just pure slop. But they were able to generate that from scratch, and I guess that's something.

"They left behind a ball today! It bounced higher than I can jump. What manner of magic is that?"
Yep. They also announced a new Ultra subscription to Google's A.I. products. Now, if you want to be on the bleeding edge of Google's A.I. offerings, you can pay $250 a month for Gemini Ultra. And, Casey, I thought to myself: no one is gonna do this. Who is gonna pay $250 a month, that's a fortune, for access to Google's leading A.I. products? And then I look over to my right, and there's Casey Newton, in the middle of the keynote, pulling his credit card out of his wallet and entering it in to buy a subscription to this extremely expensive A.I. product. So you might have been the first customer of this product. Why?

Well, and I hope that they don't forget that when it comes time to feed me into the large language model. Look, I want to be able to have the latest models, and, you know, one clever thing, I think, that these A.I. companies are doing is they're saying: we will give you the latest and greatest before everyone else, but you have to pay us a ridiculous amount of money. And, you know, if you're a reporter and you're reporting about this stuff every day, I do think you sort of want to be in that camp. Now, is it true that I now spend more on monthly A.I. subscriptions than I paid for my apartment in Phoenix in the year 2010? Yes. And I don't feel great about it, but I'm trying to be a good journalist, Kevin.

Please, your family is dying.
Another thing that made me perk up was that they talked a lot about personalization, right? This is something we've been talking about for years. Basically, Google has, you know, billions of people's email, their search histories, their calendars, all their personal information, and we've been sort of waiting for them to start weaving that stuff in, so that you can use Gemini to do things in those products. That has been slow, but they are taking baby steps, and they did show off a few things, including this new personalized smart replies feature that is going to be available for subscribers later this year in Gmail. So that instead of just getting the kind of formulaic suggested replies at the bottom of an email, it'll actually kind of learn from how you write, and maybe it can access some things in your calendar or your documents, and really suggest a better reply. You'll still have to hit send, but it'll sort of pre-populate a message for you.

Yeah. You know, I have to say, I'm somewhat bearish on this one, Kevin, only because I think that if this were easy, it would just sort of be here already, right? When you think about how formulaic so much email is, it doesn't seem to me like it should be that hard to figure out what kind of emailer you are. Like, I'm basically a two-sentence emailer, and that doesn't seem like it's hard to mimic. So that's just kind of an area where I've been a little bit surprised and disappointed. We also know large language models do not have large memories. So one thing that I would love for Gmail to do, but it cannot, is just sort of understand all of my email and use that to inform the tone of my voice. But it can't do that. It can only take a much more limited subset. Is that going to make it sort of difficult to accurately mimic my tone? I don't know. So what I'm trying to say here is: I think there's a lot of problems here, and my expectations are pretty low on this one.
Yeah. That was the part where I was like: I will believe that this exists and is good when I can use it. But as with other companies, like Apple, which demoed a bunch of A.I. features at its developer conference last year and then never launched half of them, I have become a little bit skeptical until I can actually use the thing myself.

Yeah. It really is amazing how, looking back, last year's WWDC was just like a movie about what a competent A.I. company might have done in an alternate future. It had very little bearing on our reality, but it was admittedly an interesting set of proposals.
Okay. So that is the software A.I. portion of I/O. There was also a demo of a new hardware product that Google is working on, which are these Android XR glasses. Basically their version of what Meta has been showing off with its Orion glasses, where you have a pair of glasses, they have sort of chunky black frames, they've got sort of a hologram lens in them, and you can actually see a little thing overlaid on your vision telling you, you know, what the weather is, or what time it is, or that you have a new message. Or they have this integration with Google Maps that they showed off, where it'll show you a little miniature Google Map right there inside your glasses, and it'll sort of turn as you turn and tell you where to go. They did say this is a prototype, but what did you make of this?

Well, I think a lot of it looked really cool. Probably my favorite part of the demo was when the person who was demonstrating looked down at her feet, because she was getting ready to walk to a coffee shop, and the Google Map was actually projected at her feet, and so she knows: okay, go to the left, go to the right. If you've ever been walking around a sort of foreign city and desperately wanted this feature, I think you would see that and be pretty excited. Yeah, what did you think?
Um, yeah. I thought to myself: Google Glass is back. It was away for so long, in the wilderness, and now it's back, and it might actually work this time.

Absolutely.

I did get to try the glasses. There was a very long line for the demo, but...

Let me guess. You said: I'm Kevin Roose, let me to the front of the line.

No, they made me wait for two hours. I mean, I didn't literally wait for two hours. I went and did some stuff and then came back. But I got my demo. It was like five minutes long, and it was pretty basic, but it is cool. You can look around and you can say: hey, what's this plant? And Gemini will kind of look at what you're seeing and tell you what the plant is.

Totally. I did a demo a few months back and also really enjoyed it. So I think there's something here. And I think, more importantly, Kevin, consumers, when they look at Google and Meta, they finally have a choice: whose advertising monopoly do I want to feed with my personal data? You have consumer choice now, and I think that's beautiful, and that's what capitalism is all about.
So, okay, those are some of the announcements. But what did you make of the overall tenor of the event? What stuck out to you as far as the vibe?

So the thing that stuck out to me the most was just contrasting it with last year's event, because last year they had this phrase that they kept repeating: let Google do the Googling for you. Which put me in the mind of somebody sort of leaning back into your floating chair from the WALL-E movie and just letting the A.I. run roughshod over your life. This year, Google talked about A.I. very differently. This time they want you to sit up, they want you to lean in, they want you to pay them $250, and they want you to get to work. You know, A.I. is your superpower. It's your bionic arm, and you're going to use it to get further and farther than ever before. But even while presenting that vision, Kevin, they were also very much like: but it's going to be normal. It's going to be chill. It's going to be kind of like your life is now. You're still going to be in the backyard with your kids doing science experiments. You're still going to be planning a girls' weekend in Nashville. Right? There was not really a lot of science fiction here. There was just a little bit of: oh, we put a little bit of A.I. in this. So that was interesting to me.
Yeah. So I had a slightly different take, which is that I think Google is being A.G.I.-pilled. You know, for years now, Google has sort of distanced itself from the conversation about A.G.I. It had DeepMind, which was sort of its A.G.I. division, but they were over in London, and they were sort of a separate thing. And people at Google would sort of, not laugh exactly, but kind of chuckle when you asked them about A.G.I. It just didn't seem real to them, or it was so remote that it wasn't worth considering.

They would say: what does this have to do with search advertising?

Exactly. So now, you know, it's still the case that this is a company that wants you to think about it as a product company, a search company. They're not going all in on A.G.I. But once you start looking for it, you do see that the culture of A.I., and how the people at Google talk about A.I., has really been shifting. It is starting to seep into conversation here in a way that I think is unusual, and maybe indicative that the technology is just getting better faster than even a lot of people at Google were thinking it would.

So I don't totally agree with you, Kevin, because while I'm sure that they're having more conversations about A.G.I. here than they were a year ago, when you look at what they're building, it doesn't seem like there's been a lot of rip-it-up-and-start-again. It seems a lot like: how do we plug A.I. systems into Google-shaped holes? And maybe that will eventually ladder up to something like A.G.I., but I don't think we've seen it quite yet.
The other observation I would make is that I think the Google of 2025 has a lot more swagger and confidence when it comes to A.I. than the Google of 2024 or 2023. I mean, two years ago, Google was still trying to make Bard a thing, and I think they were feeling very insecure that OpenAI had beaten them to a consumer chatbot that had found some mass adoption, and so they were just playing catch-up. And I don't think anyone would have said that Google was in the lead when it came to generative A.I. just a few years ago. But now they feel like there is a race, and that they are in a good position to win it. They were talking about how Gemini stacks up well against all these other models, how it's at the top of this leaderboard, LM Arena, for all these different categories. I don't love the way that A.I. is sometimes covered as if it were sports, you know: who's up, who's down, who's winning, who's losing. But I do feel like Google has the confidence now, when it comes to A.I., of a team that knows it's going to be in the playoffs, at least. And that was evident.

Oh, yeah. I mean, well, when you look at the competition, just what's happened over the past year: you have Apple doing a bunch of essentially fictional demos at WWDC, and you have Meta cheating to win at LM Arena, making 27 different versions of a model just to come up with one that would be good at one thing, right? So I think if you're Google, you're looking at that and you're thinking: I could be those guys.
So that is what it felt like inside Google I/O. What was the reaction from outside? I noticed that, for example, the company's stock actually fell. Not by a lot, but to a degree that suggested that Wall Street was kind of meh on a lot of what was announced. But what was the reaction like outside of Google?

I think the external reaction that I saw was just struggling a little bit to connect the dots, right? That is the issue with announcing so many things during a two-hour period: sometimes people don't have that one thing that they're taking away, saying, I can't wait to try that. And when you're just looking at a bunch of Google products that you're already using, I think if you're an investor, it's probably hard to understand. Well, I don't understand why this is unlocking so much more value at Google. Now, maybe millions of people are going to spend $250 a month on Gemini Ultra, but unless that happens, I can understand why some people feel like: hmm, this feels a little like the status quo.

Yeah, I see that. I also think there are many unanswered questions about how all of this will be monetized. And, you know, Google has built one of the most profitable products in the history of capitalism in the Google search engine and the advertising business that supports it. It is not clear to me, whatever AI Mode becomes, or whatever A.I. features it can jam into Search, if search as a category is just declining across the board, if people are not going to google.com to look things up in the way they were a few years ago, I think it's an open question what the next thing is, and whether Google can seize on it as effectively as they did with search.
Well, I think that they gave us one vision of what that might be, and that is shopping. A significant portion of the keynote was devoted to one executive talking about a new shopping experience inside of Google, where you can take a picture of yourself, upload it, and then sort of virtually try things on, and it will use A.I. to understand your proportions and accurately map a garment onto you. And there was a lot of stuff in there that would just sort of let Google take a cut, right? Obviously you can advertise the individual thing to buy. Maybe you're taking some sort of cut of the payment. There's an affiliate fee that is in there somewhere. So one of the things I'm trying to do as I cover Google going forward is understand that, yes, Search is the core, but Gemini could be a springboard to build a lot of other really valuable businesses.
An important question I know that I always ask you when I go to these things: how was the food?

Let's see. I think the food was really nice. So here's the thing. Last year, it was a purely savory experience at breakfast, and I am, shamefully, an American who likes a little sweet treat when I wake up. This year they had both bagels and an apple cinnamon coffee cake, and so when I was heading into that keynote, I was in a pretty good mood. I had some of that. They have little bottles of cold brew, and I'm a huge caffeine addict, so I took two of them. And boy, I was on rocket fuel all day. I was just humming around. I was bouncing off the walls. I was doing parkour. I was feeling great.

I thought I saw you warming up with the a cappella team. Now it all makes sense.

When we come back, we'll talk with Demis Hassabis, CEO of Google DeepMind, about his vision of the A.I. future.
Demis Hassabis, welcome back to Hard Fork.

Thanks for having me again.

A lot has happened since the last time you were on the show. Most notably, you won a Nobel Prize. Congrats on that.

Thank you.

Ours must be still in the mail. Can you put in a good word for next year with the committee?

I will do, I will do.

I imagine it's very exciting to win a Nobel Prize. I know that had been a goal of yours for a long time. I imagine it also leads to a lot of people giving you crap during everyday activities. Like, if you're struggling to work the printer, and people are just like: oh, Mr. Nobel Laureate. Does that happen?

A little bit. I mean, look, I try to say, look, I can't, you know... Maybe it's a good excuse to not have to fix those kinds of things, right? So it's more of a shield.
So you just had Google I/O, and it was really the Gemini show. I mean, I think Gemini's name was mentioned something like 95 times in the keynote. Of all the stuff that was announced, what do you think will be the biggest deal for the average user?

Wow. Well, I mean, we did announce a lot of things. I think, for the average user, I think it's the new powerful models, and, I hope, this Astra-type technology coming into Gemini Live. I think it's really magical, actually, when people use it for the first time and they realize that A.I. is capable, already today, of doing much more than what they thought. And then I guess Veo 3 was the biggest announcement of the show, probably, and seems to be going viral now, and that's pretty exciting as well, I think.
Yeah. One thing that struck me about I/O this year, compared to previous years, is that it seems like Google is sort of getting A.G.I.-pilled, as they say. I remember interviewing people, researchers at Google, even a couple years ago, and there was a little taboo about talking about A.G.I. They would sort of be like: oh, that's Demis and his DeepMind people in London, that's sort of their crazy thing that they're excited about, but here we're doing, like, you know, real research. But now you've got senior Google executives talking openly about it. What explains that shift?

I think it's the A.I. part of the equation becoming more and more central. I sometimes describe Google DeepMind now as the engine room of Google, and I think you saw that probably in the keynote yesterday, really, if you take a step back. And then it's very clear, I think, you could sort of say A.G.I.-pilled is maybe the right word, that we're quite close to this human-level general intelligence, maybe closer than people thought even a couple of years ago, and it's going to have broad, cross-cutting impact. And I think there's another thing that you saw at the keynote: it's sort of literally popping up everywhere, because it's this horizontal layer that's going to underpin everything. And I think everyone is starting to understand that, and maybe a bit of the DeepMind ethos is bleeding into the general Google, which is great.
which is which is great you mentioned um that project astra is powering some things that maybe 00:26:11.600 |
people don't even realize that ai can yet do i think this speaks to a real challenge in the ai business 00:26:17.600 |
right now which is that the models have these pretty amazing capabilities but either the products 00:26:22.480 |
aren't selling them or the users just sort of haven't figured them out yet so how are you thinking about 00:26:28.160 |
that challenge and how much do you bring yourself to the product question as opposed to the research 00:26:33.440 |
question yeah it's great great question i mean i think um one of the challenges i think of this space 00:26:38.960 |
is obviously the underlying tech is moving unbelievably fast and i think that's quite different even from the 00:26:45.200 |
other big revolutionary techs internet and mobile at some point you get some sort of stabilization of the 00:26:50.720 |
tech stack so that then the you know the focus can be on product right or exploiting that tech stack 00:26:56.800 |
and what we've got here which i think is very unusual but also quite exciting from a researcher perspective 00:27:01.520 |
is that the tech stack itself is evolving incredibly fast as you guys know so i think that makes it uniquely 00:27:08.800 |
challenging actually on the product side um not just for us at google and deep mind but for startups for for 00:27:14.640 |
anyone really any any any uh company small and large is where do you what do you bet on right now when 00:27:21.680 |
that could be a hundred percent better uh in a year as we've seen and and so you've got you've got this 00:27:26.720 |
interesting thing where you need kind of fairly um deeply technical sort of product people product 00:27:31.680 |
designers and managers i think to in order to sort of intercept where the technology may be in a year so 00:27:38.640 |
so there's things it can't do today and you want to design a product that's going to come out in a year 00:27:42.960 |
so you've got to kind of put you've got a pretty deep understanding of the tech and where it might go to 00:27:46.640 |
to sort of work out what features you can rely on and so it's it's it's an interesting one i think that's 00:27:51.680 |
what you're seeing so many different things being tried out and then if something works we've got to 00:27:57.040 |
really double down quickly on that yeah during your keynote you talked about gemini as powering both 00:28:04.000 |
uh sort of productivity assistant style stuff and also fundamental uh science and and research 00:28:10.480 |
challenges and i wonder in your mind is that the same problem that sort of like one great model can 00:28:17.760 |
solve or are those sort of very different problems that just require different approaches i think you 00:28:24.400 |
know when you look at it looks like an incredible breadth of things which is true and how are these 00:28:29.360 |
things related uh other than the fact i'm interested in all of them but is that uh that was always the 00:28:34.480 |
idea with building general intelligence you know truly generally and and and and this in this way 00:28:39.920 |
that we're doing it should be applicable to almost anything right that being productivity which is very 00:28:44.800 |
exciting help billions of people in their everyday lives to cracking some of the biggest problems in 00:28:50.080 |
science um 90 i would say of it is is the underlying core general models uh you know in our case gemini 00:28:58.160 |
especially 2.5 and they're in most of these areas you still need additional applied research or some a 00:29:04.720 |
little bit of um special casing from the domain maybe it's special data or whatever um to tackle that 00:29:10.480 |
problem and you know maybe we work with domain experts in in the scientific uh areas uh but underlying it 00:29:16.000 |
you all the when you crack one of those areas you can also put those learnings back into the general 00:29:20.880 |
model and then the general model gets better and better so it's a kind of very interesting flywheel 00:29:25.680 |
and um it's great fun for someone like me who's very interested in many things you get to use this 00:29:30.960 |
technology and sort of um uh go into almost any field that you find interesting i think that a lot of 00:29:37.120 |
Something that I think a lot of A.I. companies are wrestling with right now is how many resources to devote to the core A.I. push on the foundation models, making the models better at the basic level, versus how much time and energy and money you spend trying to spin out parts of that and commercialize it and turn it into products. And I imagine this is both a resources challenge but also a personnel challenge. Because, say you join DeepMind as an engineer and you want to build A.G.I., and then someone from Google comes to you and says: we actually want your help building the shopping thing that's gonna let people try on clothes. Is that a challenging conversation to have with people who joined for one reason and may be asked to work on something else?

Yeah, well, we don't, you know, it's sort of self-selecting internally. We don't have to. That's one advantage of being quite large: there are enough engineers on the product teams and the product areas that can deal with the product development, prod eng. And the researchers, if they want to stay in core research, that's absolutely fine, and we need that. But actually, you'll find a lot of researchers are quite motivated by real-world impact, be that in medicine, obviously, and things like Isomorphic, but also to have billions of people use their research. It's actually really motivating. And so there's plenty of people that like to do both. So, yeah, there's no need for us to have to pivot people to certain things.
You did a panel yesterday with Sergey Brin, Google's co-founder, who has been working on this stuff, back in the office. And, interestingly, he has shorter A.G.I. timelines than you. He thought A.G.I. would arrive before 2030, and you said just after. He actually accused you of sandbagging, basically: artificially pushing out your estimates so that you could under-promise and over-deliver. But I'm curious about that, because you will often hear people at different A.I. companies arguing about the timelines. But presumably you and Sergey have access to all the same information and the same roadmaps, and you understand what's possible and what's not. So what is he seeing that you're not, or vice versa, that leads you to different conclusions about when A.G.I. is going to arrive?
Okay, well, first of all, there isn't that much difference in our timelines, if he's just before 2030 and I'm just after. Also, my timeline has been pretty consistent since the start of DeepMind in 2010. We thought it was roughly a 20-year mission, and, amazingly, we're on track. So it's somewhere around then, I would think. And I actually have, obviously, a probability distribution over that, where the most mass is between five and ten years from now. And partly it's to do with the fact that predicting anything precisely five to ten years out is very difficult, so there's uncertainty bars around that. And then there's also uncertainty about how many more breakthroughs are required, right? And also about the definition of A.G.I. I have quite a high bar, which I've always had, which is: it should be able to do all of the things that the human brain can do, right, even theoretically. And so that's a higher bar than, say, what the typical individual human could do, which is obviously very economically important. That would be a big milestone, but not, in my view, enough to call it A.G.I. And we talked on stage a little bit about what's missing from today's systems: sort of true out-of-the-box invention and thinking. Sort of inventing a conjecture, rather than just solving a math conjecture. Solving one's pretty good, but actually inventing something like the Riemann hypothesis, something as significant as that, that the mathematicians agree is really important, is much harder. And also consistency. Consistency is a requirement of generality, really, and it should be very, very difficult for even top experts to find flaws, especially trivial flaws, in the systems, which we can easily find today; you know, the average person can do that. So there's a sort of capabilities gap, and there's a consistency gap, before we get to what I would consider A.G.I.
And when you think about closing that gap, do you think it arrives via incremental two, five percent improvements in each successive model, just kind of stacked up over a long period of time? Or do you think it's more likely that we'll hit some sort of technological breakthrough, and then all of a sudden there's liftoff and we hit some sort of intelligence explosion?

I think it could be both, and I think for sure both is going to be useful, which is why we push unbelievably hard on the scaling and the, you know, what you would call incremental, although actually there's a lot of innovation even in that, to keep moving that forward: in pre-training, post-training, inference-time compute, all of that stack. So there's actually lots of exciting research, and we showed some of that: the diffusion model, the Deep Think model. So we're innovating in all parts of that traditional stack, we call it. And then, on top of that, we're doing more greenfield things, more blue-sky things, like AlphaEvolve, maybe, you could include in that.

Is there a difference between a greenfield thing and a blue-sky thing?

I'm not sure. Maybe they're pretty similar. So: some new area, let's call it. And then that could come back into the main branch, right? And, I mean, as you both know, I've been a fundamental believer in foundational research. We've always had the broadest, deepest research bench, I think, of any lab out there, and that's what allowed us to do past big breakthroughs: obviously transformers, but AlphaGo, AlphaZero, all of these things, distillation. And to the extent any of those things are needed again, another big breakthrough of that level, I would back us to do that. And we're pursuing lots of very exciting avenues that could bring that sort of step change, as well as the incremental. And then they, of course, also interact, because the better your base models, the more things you can try on top of them. Again, like AlphaEvolve: adding in evolutionary programming, in that case, on top of the LLMs.
We recently talked to Karen Hao, who's a journalist who just wrote a book about A.I., and she was making an argument essentially against scale: that you don't need these big general models that are incredibly energy-intensive and compute-intensive and require billions of dollars and new data centers and all kinds of resources to make happen. That instead of doing that kind of thing, you could build smaller models, you could build narrower models. You could have a model like AlphaFold that is just designed to predict the 3D structures of proteins. You don't need a huge behemoth of a model to accomplish that. What's your response to that?
Well, I think you need those big models. We, you know, we love big and small models. So you need the big models, often, to train the smaller models. We're very proud of our Flash models, which are, we call them, our workhorse models: really efficient, some of the most popular models. We use a ton of those sizes of models internally. But you can't build those kinds of models without distilling from the larger teacher models. And even things like AlphaFold, which obviously I'm a huge advocate of, more of those types of models: we don't have to wait for A.G.I., we can tackle really important problems in science and medicine today. And that will require taking the general techniques but then potentially specializing them, in that case, around protein structure prediction. And I think there's huge potential for doing more of those things, and we are, largely, in our science work, our A.I.-for-science work. I think we're producing something pretty cool on that pretty much every month these days, and I think there should be a lot more exploration on that. Probably a lot of startups could be built combining some kind of general model that exists today with some domain specificity. But if you're interested in A.G.I., you've got to push, again, both sides of that. It's not an either-or, in my mind. I'm an "and," right? Let's scale, let's look at specialized techniques, combining hybrid systems, sometimes they're called, and let's look at new blue-sky research that could deliver the next transformers. We're betting on all of those things.
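[A quick illustration of the teacher-student distillation Hassabis describes, where a big model's softened predictions help train a small one. This is a generic, minimal sketch of a standard distillation loss in PyTorch; the temperature and mixing weight are arbitrary illustrative defaults, not Google's actual training recipe.]

```python
# Knowledge distillation: a small "student" learns to match a large
# "teacher" model's output distribution in addition to the true labels.
import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=2.0, alpha=0.5):
    # Soft targets: match the teacher's temperature-softened distribution.
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # scale so gradients stay comparable across temperatures
    # Hard targets: still learn from the ground-truth labels.
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard

# Toy usage: batch of 4 examples, 10-class problem.
student = torch.randn(4, 10)
teacher = torch.randn(4, 10)
labels = torch.randint(0, 10, (4,))
print(distillation_loss(student, teacher, labels))
```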
You mentioned AlphaEvolve, something that Kevin and I were both really fascinated by. Tell us what that is.

Well, at a high level, it's basically taking our latest Gemini models, actually two different ones, to generate ideas, hypotheses, about programs and other mathematical functions. And then they go into a sort of evolutionary programming process to decide which ones of those are most promising, and then that gets ported into the next step.

And tell us a little bit about evolutionary programming. It sounds very exciting.

Yeah. So it's basically a way for systems to explore new space, right? So, like, what things should we, you know, in genetics, mutate to give you a kind of new organism? You can think about it the same way in programming or mathematics. You change the program in some way, and then you compare it to some answer you're trying to get, and then the ones that fit best, according to a sort of evaluation function, you put back into the next set of generating new ideas. We have our most efficient model, the Flash model, generating possibilities, and then we have the Pro model critiquing that, right, and deciding which one of those is most promising, to be selected for the next round of evolution.

It's sort of like an autonomous A.I. research organization, almost, where you have some A.I.s coming up with hypotheses, other A.I.s testing them and supervising them. And the goal, as I understand it, is to have an A.I. that can kind of improve itself over time, or suggest improvements to existing problems.

Yes. So it's the beginning of, I think that's why people are so excited about it, and we are excited about it, it's the beginning of a kind of automated process. It's still not fully automated, and also it's still relatively narrow. We've applied it to many things, like chip design, scheduling A.I. tasks on our data centers more efficiently, even improving matrix multiplication, one of the most fundamental units of training algorithms. So it's actually amazingly useful already, but it's still constrained to domains that are kind of provably correct, right? Which, obviously, maths and coding are. But we need to fully generalize that.
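[Here is a toy sketch of the generate-evaluate-select loop described above: a "generator" proposes mutated candidates, an evaluation function scores them against a checkable objective, and the best survivors seed the next generation. All names are illustrative; AlphaEvolve's real loop uses Gemini models to propose and critique programs, which this stand-in code does not.]

```python
# Toy evolutionary loop: mutate candidates, score them, keep the fittest.
import random

def propose(parent: list[float]) -> list[float]:
    # Stand-in for the "Flash" generator step: mutate a parent candidate.
    return [x + random.gauss(0, 0.1) for x in parent]

def evaluate(candidate: list[float]) -> float:
    # Stand-in for the critique/evaluation step: a provable objective
    # (minimize distance to a target), like a checkable math or code task.
    target = [1.0, 2.0, 3.0]
    return -sum((x - t) ** 2 for x, t in zip(candidate, target))

def evolve(generations: int = 50, population: int = 20) -> list[float]:
    pool = [[0.0, 0.0, 0.0] for _ in range(population)]
    for _ in range(generations):
        children = [propose(random.choice(pool)) for _ in range(population)]
        # Select the most promising candidates for the next round.
        pool = sorted(pool + children, key=evaluate, reverse=True)[:population]
    return pool[0]

print(evolve())  # converges toward the target [1.0, 2.0, 3.0]
```

The key property, as Hassabis notes, is that the objective must be automatically checkable, which is why math and code are the natural first domains.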
But it's interesting, because I think, for a lot of people, the knock they have on LLMs in general is: well, all you can really give me is the statistical median of your training data. But what you're saying is, we now have a way of going beyond that, to potentially generate novel ideas that are actually useful in advancing the state of the art.

That's right. And AlphaEvolve, using evolutionary methods, is another approach, but we already had evidence of that even way back in the AlphaGo days. AlphaGo came up with new Go strategies, most famously move 37 in game two of our big Lee Sedol world championship match. And, okay, it was limited to a game, but it was a genuinely new strategy that had never been seen before, even though we've played Go for hundreds of years. So that's when I kicked off our AlphaFold projects and science projects, because I was waiting to see evidence of that kind of spark of creativity, you could call it, right, or originality, at least within the domain of what we know. But there's still a lot further to go. So we know that these kinds of models, paired with things like Monte Carlo tree search or reinforcement-learning planning techniques, can get you to new regions of space to explore. And evolutionary methods are another way of going beyond what the current model knows: force it into a new regime where it's not seen it before.

I've been looking for a good Monte Carlo tree for so long now, so if you could help me find one, it would honestly be huge.

One of these things could help.

Yeah. Okay, great.
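[Since Monte Carlo tree search keeps coming up: a bare-bones sketch of its select-expand-simulate-backpropagate loop, on a toy game of picking three numbers from {1, 2, 3} to hit a target sum. Purely illustrative: AlphaGo-style MCTS guides this same loop with learned policy and value networks, which are absent here.]

```python
# Minimal Monte Carlo tree search on a toy "reach the target sum" game.
import math, random

class Node:
    def __init__(self, state, parent=None):
        self.state = state      # partial sum so far
        self.parent = parent
        self.children = {}      # action -> Node
        self.visits = 0
        self.value = 0.0

ACTIONS = [1, 2, 3]
TARGET, STEPS = 7, 3

def ucb(child, parent):
    # Exploration bonus pulls search toward rarely tried moves.
    return child.value / child.visits + math.sqrt(
        2 * math.log(parent.visits) / child.visits)

def rollout(state, depth):
    # Simulate: play random moves to the end, score 1 if we hit the target.
    while depth < STEPS:
        state += random.choice(ACTIONS)
        depth += 1
    return 1.0 if state == TARGET else 0.0

def mcts(iterations=2000):
    root = Node(0)
    for _ in range(iterations):
        node, depth = root, 0
        # Select: descend through fully expanded nodes by UCB score.
        while len(node.children) == len(ACTIONS) and depth < STEPS:
            node = max(node.children.values(), key=lambda c: ucb(c, node))
            depth += 1
        # Expand: try one untried action from this node.
        if depth < STEPS:
            a = random.choice([a for a in ACTIONS if a not in node.children])
            node.children[a] = Node(node.state + a, node)
            node, depth = node.children[a], depth + 1
        # Simulate, then backpropagate the result up the tree.
        reward = rollout(node.state, depth)
        while node:
            node.visits += 1
            node.value += reward
            node = node.parent
    return max(root.children.items(), key=lambda kv: kv[1].visits)[0]

print("best first move:", mcts())
```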
So I read the AlphaEvolve paper. Or, to be more precise, I fed it into NotebookLM and had it make a podcast that I could then listen to that would explain it to me at a slightly more elementary level. And one fascinating thing that stuck out to me is a detail about how you were able to make AlphaEvolve more creative. And one of the ways that you did it was by essentially forcing the model to hallucinate. I mean, so many people right now are obsessed with eliminating hallucinations, but it seemed to me like one way to read that paper is that there is actually a scenario in which you want models to hallucinate, or be creative, whatever you want to call it.

Yes. Well, I think that's right. Hallucination when you want factual things is obviously something you don't want. But in creative situations, where, you know, you can think of it as a little bit like lateral thinking in an MBA course or something, right, it's: just create some crazy ideas. Most of them don't make sense, but the odd one or two may get you to a region of the search space that is actually quite valuable, it turns out, once you evaluate it afterwards. And so you can substitute the word hallucination maybe for imagination at that point, right? They're obviously two sides of the same coin.
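[One concrete knob behind "hallucination as imagination" is sampling temperature: dividing the logits by a higher temperature flattens the output distribution, so low-probability "wild" ideas get drawn more often. The toy four-token distribution below is made up, and this is a general LLM sampling mechanism, not necessarily how the AlphaEvolve paper induced creativity.]

```python
# Temperature sampling: higher T flattens the distribution, boosting
# the chance of drawing rare, "imaginative" tokens.
import math, random

logits = {"safe": 4.0, "plausible": 3.0, "odd": 1.0, "wild": 0.0}

def sample(T: float) -> str:
    weights = {tok: math.exp(l / T) for tok, l in logits.items()}
    r = random.uniform(0, sum(weights.values()))
    for tok, w in weights.items():
        r -= w
        if r <= 0:
            return tok
    return tok  # numerical edge case: return the last token

for T in (0.5, 1.0, 2.0):
    draws = [sample(T) for _ in range(1000)]
    print(T, {tok: draws.count(tok) for tok in logits})
```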
Yeah. I did talk to one A.I. safety person who was a little bit worried about AlphaEvolve. Not because of the actual technology and the experiments, which this person said are fascinating, but because of the way it was rolled out. So Google DeepMind created AlphaEvolve, and then used it to optimize some systems inside Google, and kept it sort of hidden for a number of months, and only then released it to the public. And this person was saying: well, if we really are getting to the point where these A.I. systems are starting to become recursively self-improving, and they can sort of build a better A.I., doesn't that imply that if Google DeepMind does build A.G.I., or even superintelligence, it's going to keep it to itself for a while, rather than doing the responsible thing and informing the public?

Well, I think it's a bit of both, actually.
You need to, first of all, AlphaEvolve is a very nascent self-improvement thing, right? It's still got a human in the loop, and it's only shaving off, albeit important, percentage points off of already existing tasks. That's valuable, but it's not creating any kind of step changes. And there's a trade-off between carefully evaluating things internally before you release them to the public, out into the world, and then also getting the extra critique back, which is also very useful, from the academic community and so on. And also, we have a lot of trusted-tester type of programs that we talk about, where people get early access to these things and then give us feedback and stress-test them, including, sometimes, the safety institutes as well.

But my understanding was, you weren't just red-teaming this internally within Google. You were actually using it to make the data centers more efficient, using it to make the kernels that train the A.I. models more efficient. So I guess what this person is saying is: we want to start getting good habits around these things now, before they become something like A.G.I. And they were just a little worried that maybe this is going to be something that stays hidden for longer than it needs to. So I would love to hear your response to that.
look i i mean i think that that system is not uh uh anything really that i would say you know has any 00:44:48.720 |
risk on the agi type of front i think as we get and i think today's system still are not although 00:44:54.320 |
very impressive are not that powerful um from a you know any kind of agi risk standpoint that maybe this 00:45:00.240 |
person was talking about um and i think you need to have both you need to have incredibly rigorous 00:45:05.760 |
internal tests of these things and then we need to also get collaborative input from external parties so i 00:45:12.160 |
think it's a bit of both i actually don't know the details of uh the alpha evolve process 00:45:17.680 |
for the first few months it was just function search and then it became 00:45:21.920 |
more general so it's sort of evolved itself over the last year in terms of becoming 00:45:27.520 |
this general purpose tool um and it still has a long way to go before we can actually use it 00:45:32.560 |
in our main branch at which point i think it then becomes more serious like with gemini 00:45:36.720 |
it's sort of separate from that currently let's talk about ai safety a little bit more broadly 00:45:41.920 |
uh it's been my observation that the further back in time you go and the less 00:45:46.640 |
powerful the ai systems the more everyone seemed to talk about the safety risks and it seems like 00:45:51.760 |
now as the models improve we we hear about it less and less including you know at the keynote yesterday 00:45:56.320 |
so i'm curious what you make of this moment in ai safety uh if you feel like you're paying enough 00:46:03.440 |
attention to the risks that could be created by the systems that you have and if you are as committed to 00:46:08.960 |
it as you were say three or four years ago when a lot of these outcomes seemed less likely yeah well 00:46:14.160 |
we're just as committed as we've ever been i mean from the beginning of deep mind we 00:46:18.800 |
planned for success so success meant something looking like this which is what we kind of imagined i mean it's 00:46:23.760 |
sort of unbelievable still that it's actually happened but it is sort of in the 00:46:28.320 |
overton window of what we thought was going to happen if these technologies really did develop the 00:46:33.280 |
way we thought they were going to um and attending to mitigating those risks was part of 00:46:39.840 |
that and so we do a huge amount of work on our systems i think we have very robust red teaming 00:46:45.280 |
processes both pre and post launch um and we've learned a lot uh and i think that's what's 00:46:52.400 |
different now between having these systems albeit early systems in contact with the real 00:46:58.000 |
world i'm sort of persuaded now that that has been a useful thing 00:47:03.040 |
overall and i wasn't sure um you know five years ago ten years ago i may have thought 00:47:08.240 |
maybe it's better staying in a research lab and you know kind of collaborating with academia 00:47:13.200 |
but actually there's a lot of things you don't get to see or understand unless millions of 00:47:18.560 |
people try it so it's this weird trade-off again um you can only do it when 00:47:24.400 |
there's millions of smart people trying your technology and then you find all these edge cases 00:47:30.640 |
so you know however big your testing team is it's only going to be you know 100 people or 00:47:34.800 |
a thousand people or something so it's not comparable to tens of millions of people using your systems 00:47:39.600 |
but on the other hand you want to know as much as possible uh ahead of time so you can mitigate the 00:47:44.880 |
risks before they happen so this is interesting and it's good learning i think what's 00:47:49.600 |
happened in the industry in the last two or three years has been great because we've been learning when the 00:47:53.760 |
systems are not that powerful or risky as you were saying earlier right i think things are going to get 00:47:58.640 |
very serious in two or three years time when these agent systems start becoming really capable we're 00:48:04.400 |
only seeing the beginnings of the agent era let's call it but you can imagine and i think hopefully you 00:48:09.520 |
understood from the keynote what the ingredients are and how it's going to come together and then i think 00:48:14.560 |
we really need a step change in research on analysis and understanding controllability but the other key 00:48:20.880 |
thing is it's got to be international you know that's pretty difficult and i've been very consistent on that 00:48:25.360 |
because it's a technology that's gonna affect everyone in the world it's being built 00:48:29.040 |
by different countries and different companies in different countries so you've got to get some 00:48:32.960 |
international kind of norm i think around uh what we want to use these systems for and and what 00:48:40.080 |
are the kinds of benchmarks that we want to test safety and reliability on um but there's plenty of 00:48:46.320 |
work to get on with now like we don't have those benchmarks we and the industry and 00:48:50.720 |
academia should be agreeing on a consensus of what those are what role do you want to see export 00:48:55.840 |
controls play in doing what you just said well export controls are a very complicated issue 00:49:01.440 |
and obviously geopolitics today is extremely complicated um and you know i can see 00:49:06.880 |
both sides of the arguments on that you know there's uncontrolled proliferation of these 00:49:10.880 |
technologies uh do you want different places to have frontier model training capability uh i'm not 00:49:17.520 |
sure that's a good idea but on the other hand um you want western technology to be to be uh the thing 00:49:24.000 |
that's adopted uh around the world so it's a complicated trade-off like if there was an easy 00:49:29.440 |
answer i would be you know shouting from the rooftops but i think 00:49:34.240 |
it's nuanced like most real world problems are do you think we're heading into a bipolar conflict with 00:49:39.440 |
china over ai if we aren't in one already i mean just recently we saw the trump administration making 00:49:44.800 |
a big push to uh make the middle east uh countries in the gulf like saudi arabia and the uae into ai 00:49:50.800 |
powerhouses have them you know use american chips to to train models that will not be sort of accessible to 00:49:57.200 |
to china and its ai powers do you see that becoming sort of the foundations of a new global conflict well i hope 00:50:04.000 |
not but i i think uh short term you know i feel like ai is getting caught up in the in the bigger 00:50:11.440 |
geopolitical shifts that are going on so i think it's just part of that and it happens to be one of 00:50:16.400 |
the most uh topical new things that's appearing but on the other hand what i'm hoping is as people 00:50:22.640 |
as these technologies get more and more powerful the world will realize we're all in this together 00:50:27.600 |
because we are and so uh you know for the last few steps towards agi um hopefully we're on 00:50:35.280 |
the longer timelines actually right um more like the timelines i'm thinking about so then we get time to 00:50:41.200 |
sort of get the collaboration we need at least on a scientific level um before then 00:50:47.600 |
which would be good do you feel like you're in sort of the final home stretch to agi i mean sergey brin 00:50:52.720 |
google's co-founder had a memo that was reported on by my colleague at the new york times earlier 00:50:58.080 |
this year that went out to google employees and said you know we're in the sort of the home stretch 00:51:02.080 |
and everyone needs to get back to the office and be working all the time uh because this this is when 00:51:06.800 |
it really matters do you have that sense of finality or sort of entering a new phase or an 00:51:06.800 |
end game i think we are past the middle game that's for sure but i've been working every hour there 00:51:18.080 |
is for the last 20 years because i've felt how important and momentous this technology would 00:51:23.520 |
be and we thought it was possible for 20 years and i think it's coming into view now i agree with that 00:51:29.200 |
and um whether it's five years or 10 years or two years they're all actually quite short timelines 00:51:36.160 |
when you're discussing the enormity of the transformation this technology 00:51:41.040 |
is going to bring uh none of those timelines are very long 00:51:45.120 |
when we come back more from demis hassabis about the strange futures that lie ahead 00:52:00.720 |
we're going to switch to some more general questions about the ai future sure a lot of people now are 00:52:08.320 |
starting to at least in conversations that i'm involved in think about what the world might look like 00:52:13.600 |
after agi um the context in which i actually hear the most about this is from parents who want to know 00:52:20.320 |
um what their kids should be doing and studying and will they go to college yeah um you have kids they're older 00:52:26.480 |
than my kid um how are you thinking about that so when it comes to the kids and i get 00:52:33.360 |
asked this quite a lot by university students too um i think first of all i wouldn't dramatically uh change 00:52:41.760 |
some of the basic advice on stem uh getting good at even things like coding i would still recommend 00:52:47.360 |
because i think whatever happens with these ai tools you'll be better off understanding how they work 00:52:53.200 |
and how they function and you know what you can do with them um i would also say immerse yourself now 00:52:59.600 |
that's what i would be doing as a teenager today in trying to become a sort of ninja at using 00:53:05.440 |
the latest tools i think you can almost be sort of superhuman in some ways if you get really good at 00:53:10.880 |
using uh all the latest uh coolest ai tools um but don't neglect the basics too because you need the 00:53:17.760 |
fundamentals and then i think uh teach sort of meta skills really of um like learning to learn and the 00:53:24.320 |
only thing we know for sure is there's going to be a lot of change over the next 10 years right so how 00:53:28.880 |
does one get ready for that what kind of skills are useful for that creativity skills um adaptability 00:53:34.720 |
resilience i think all of these sort of you know meta skills are what will be important uh for the 00:53:41.120 |
next generation um and i think it'll be very interesting to see what they do because they're 00:53:45.520 |
going to grow up ai native just like the last generation grew up mobile and ipad you know sort 00:53:52.000 |
of that kind of tablet native and then previously internet and computers which was my 00:53:57.120 |
era and um you know i think the kids of each era always seem to adapt to and make use of the 00:54:04.880 |
latest coolest tools and i think there's more we can do on the ai side to make the tools actually um if 00:54:11.200 |
people are going to use them for school and education let's make them really good for that and sort of 00:54:16.400 |
provably good and i'm very excited about bringing it to education in a big way and also to you know 00:54:22.240 |
if you had an ai tutor uh to bring it to poorer parts of the world that don't have good educational 00:54:27.600 |
systems um so i think there's a lot of upside there too another thing that kids are doing with ai is 00:54:33.040 |
chatting a lot with digital companions um google deep mind doesn't make any of these companions yet 00:54:39.280 |
um some of what i've seen so far seems pretty worrying it seems pretty easy to create a chat bot that 00:54:44.240 |
just does nothing but tell you how wonderful you are and that can sort of like lead into some dark 00:54:48.480 |
and weird places so i'm curious what observations you've had as you like look at this uh market for 00:54:53.760 |
ai companions and whether you think i i might want to build this someday or i'm gonna leave that to 00:55:00.080 |
other people yeah i think we've got to be very careful as we start entering that domain and 00:55:04.880 |
that's why we haven't yet and we've been very thoughtful about that my view on this is um more through 00:55:10.640 |
the lens of uh the universal assistant that we talked about yesterday which is something that's 00:55:16.080 |
incredibly useful for your everyday productivity you know gets rid of the boring mundane tasks that we 00:55:22.080 |
all hate doing to give you more time to do the things that you love doing i also really um hope that 00:55:27.760 |
they're going to enrich your lives by giving you incredible recommendations for example on all sorts of 00:55:33.280 |
amazing things that um you didn't realize you would enjoy you know sort of delight you with surprising 00:55:39.040 |
things um so i think these are the the ways i'm hoping that uh these systems will go and actually 00:55:44.800 |
on the positive side i feel like um if this assistant becomes really useful and knows you well you 00:55:52.720 |
could sort of program it obviously with natural language to protect your attention so you could 00:55:58.720 |
almost think of it as a system that works for you you know as an individual it's yours and um it 00:56:05.360 |
protects your attention from being assaulted by other algorithms that want your attention which 00:56:10.640 |
is actually nothing to do with ai most social media sites that's what they're doing effectively 00:56:16.080 |
their algorithms are trying to gain your attention and i think that's actually the worst thing 00:56:20.400 |
and it'd be great to protect that so we can be more in you know creative flow whatever it is that you 00:56:26.080 |
want to do that's how i would want these systems to be useful to people if you could build 00:56:31.040 |
a system like that i think people would be so incredibly happy i think right now people feel 00:56:35.440 |
assailed by the algorithms in their life and they don't know what to do about it well the reason 00:56:39.280 |
is because you've got one brain and whatever it is let's say a social 00:56:44.320 |
media stream you have to dip into that torrent to then get the piece of information you want but 00:56:49.680 |
you're doing it with the same brain so you've already affected your mind and your mood 00:56:54.240 |
and other things by dipping into that torrent to find the valuable 00:56:58.720 |
piece of information that you wanted but if a digital assistant did that for you 00:57:03.600 |
you know you'd only get the useful nugget and you wouldn't need to um break your 00:57:09.120 |
mood or whatever it is you're doing that day or your concentration with your family whatever it is 00:57:13.440 |
um i think that would be wonderful yeah casey loves that idea you love that idea i love this idea of an 00:57:18.480 |
ai agent that protects your attention from all the forces trying to assault it i'm not sure how the the 00:57:24.800 |
ads team at google is gonna feel about this um but we can ask them when the time comes um some people are 00:57:31.920 |
starting to look at the job market especially for recent college graduates and uh worry that we're 00:57:37.360 |
already starting to see signs of ai-powered job loss um anecdotally i talked to young people who uh you 00:57:45.040 |
know a couple years ago might have been interested in going into fields like tech or consulting or 00:57:49.760 |
finance or law who are just saying like i don't know that these jobs are going to be around much 00:57:54.720 |
longer um a recent article in the atlantic wondered if we're starting to see ai competing with college 00:58:00.160 |
graduates for these entry-level positions do you have a view on that i haven't looked at that you know i 00:58:05.280 |
i don't know i haven't seen the studies on that but um you know maybe it's starting to appear now i i 00:58:11.120 |
don't think there's any hard numbers on that yet at least i haven't seen it um i think for now i mostly 00:58:16.240 |
see these as tools that augment what you can do and what you can achieve um i 00:58:22.400 |
think the next era i mean maybe after agi things will be different again but over the next five to ten years 00:58:28.160 |
i think we're going to find uh what normally happens with with big sort of new technology shifts which is 00:58:33.360 |
that some jobs get disrupted but then new um you know more valuable usually more interesting jobs 00:58:39.440 |
get created so i do think that's what's going to happen in the in the nearer term um so you know 00:58:44.560 |
today's graduates and the next you know next five years let's say i think it's very difficult to 00:58:48.880 |
predict after that um that's part of this sort of more societal change that we need to get ready for 00:58:53.920 |
i mean i think the the tension there is that you're right these tools do give people so much more 00:58:59.520 |
leverage um but they also like reduce the need for big teams of people doing certain things i was 00:59:04.640 |
talking to someone recently who said you know they had been at a data science uh company in their 00:59:09.760 |
previous job that had 75 people working on some kind of data science tasks and now they're at a startup 00:59:16.000 |
that has one person doing the work that used to require 75 people and so i guess the question 00:59:20.960 |
i'd be curious to get your view on is what are the other 74 people supposed to do well look i think um 00:59:27.440 |
uh these tools are going to unlock uh the ability to create things much more quickly so you know that 00:59:35.520 |
i think there'll be more people that will do startup things i mean there's a lot more surface area one could 00:59:40.800 |
attack and try with these tools um than was possible before so let's take programming for example um you 00:59:47.440 |
know so obviously these these systems are getting better at coding but the best coders i think are 00:59:52.640 |
getting differential value out of it because they still understand how to pose the question and 00:59:57.040 |
architect the whole code base and check what the code does but simultaneously at the hobbyist end 01:00:03.280 |
it's allowing designers and maybe non-technical people to vibe code some things you know whether 01:00:08.880 |
that's prototyping games or websites or uh movie ideas so in theory those 01:00:15.760 |
other 70 people whatever could be creating new startup ideas maybe it's going to be less of these 01:00:21.360 |
bigger teams and more smaller teams very empowered by ai tools um but that goes back 01:00:27.280 |
to the education thing then which skills are now important it might be different skills like 01:00:32.480 |
creativity sort of vision and uh design sensibility um you know could become increasingly important 01:00:39.520 |
do you think you'll hire as many engineers next year as you hire this year i think so yeah 01:00:44.800 |
i mean there's no plan to hire less but you know again we have to see how 01:00:50.080 |
fast the coding uh agents improve um today they're not you know they can't do 01:00:57.040 |
things on their own they're just helpful for the best you know for the 01:01:01.680 |
best human coders last time we talked to you we asked you about some of the more pessimistic views 01:01:06.960 |
about ai in the public and one of the things you said to us was that the field needed to demonstrate 01:01:12.080 |
concrete use cases that were just clearly beneficial to people to kind of shift yes my 01:01:16.640 |
observation is that i think there are even more people now who are like actively antagonistic toward 01:01:21.920 |
ai and i think maybe one reason is they hear folks at the big labs saying pretty loudly eventually this 01:01:29.040 |
is going to replace your job and most people just think well i don't want that you know so i'm 01:01:33.520 |
curious like looking on from that past conversation if you feel like we have seen some use cases enough 01:01:40.160 |
use cases to start to shift public opinion or if not what some of those things might be that actually 01:01:46.880 |
changed views here well i think we're working on those things they take time to develop um i think 01:01:53.440 |
a kind of universal assistant would be one of those things if it was uh kind of really yours and 01:01:58.960 |
working for you effectively so technology that works for you um i think this is what economists and 01:02:05.440 |
other experts should be working on is does everyone have uh a suite of you 01:02:13.120 |
know a fleet of agents that are doing things for you including potentially earning you money or 01:02:17.760 |
building you things um you know does that become part of the normal job process i could 01:02:22.240 |
imagine that in the next four or five years i also think that as we get closer to agi and we make 01:02:27.760 |
breakthroughs which we probably talked about last time material sciences energy fusion these sorts of things 01:02:33.200 |
helped by ai um we should start getting to a position in society where we're getting 01:02:39.360 |
towards what i would call radical abundance where there's a lot of resources uh to go around and then 01:02:44.480 |
again it's more of a political question of how would you distribute that in a fair way right so 01:02:49.840 |
i've heard this term like universal high income well something like that uh i think it's gonna probably 01:02:55.040 |
be you know good and necessary but obviously there's a lot of uh complications that need to be 01:02:59.600 |
thought through um and then in between there's this transition period you know between now and 01:03:04.720 |
whenever we have that sort of situation where what do we do about the change in 01:03:10.800 |
the interim and it depends on how long that is too what part of the economy do you think agi will 01:03:16.160 |
transform last well i mean i think the parts of the economy that you know involve human to human 01:03:22.160 |
interaction and emotion um those things i think uh you know will probably be the hardest things 01:03:30.160 |
for ai to do so um you know are people already doing ai therapy and talking with chat bots for 01:03:37.680 |
things that they might have paid someone you know a hundred dollars an hour for well therapy is a very 01:03:42.080 |
narrow domain and there's a lot of you know hype about those things i'm not actually 01:03:46.880 |
sure how many uh of those things are really going on in terms of actually affecting the real economy 01:03:52.240 |
rather than just sort of being more toy things um and i don't think the ai systems are like capable of doing that 01:03:57.520 |
properly yet um but just the kind of emotional connection uh that we get from talking 01:04:04.320 |
to each other and um doing things in nature in the real world uh i don't think that ai can really 01:04:11.200 |
replicate all of those things so if you lead hikes that would be a good job yeah yeah i'm gonna climb everest 01:04:17.200 |
yeah my intuition on this is that it's going to be some heavily regulated industry where there will just 01:04:20.960 |
be like a massive pushback on the use of ai to displace labor or take people's jobs like health 01:04:27.040 |
care or education or something like that um but you think it's going to be an easier lift in those 01:04:33.440 |
heavily regulated industries well i don't know i mean it might be but then we have to weigh that up as 01:04:38.160 |
society whether we want all the positives of that for example you know curing all 01:04:42.880 |
diseases or um you know finding new energy sources so i think these 01:04:49.360 |
things would be clearly very beneficial for society and i think we need them for our other big challenges 01:04:55.200 |
it's not like there's no challenges in society other than uh ai but i think ai can be a solution to a lot of 01:05:00.640 |
those uh other challenges be that energy resource constraints uh aging disease um you know you name 01:05:08.000 |
it water access climate etc a ton of problems facing us today um i think ai can potentially help 01:05:15.280 |
with all of those and i agree with you society will need to decide what um it wants to use these 01:05:21.600 |
technologies for but then you know what's also changing as we discussed earlier with products 01:05:27.520 |
is the technology is going to continue advancing um and that will open up new possibilities like uh 01:05:33.680 |
the kind of radical abundance space travel these things um which are a little bit out of scope today 01:05:39.040 |
unless you read a lot of sci-fi but which i think are rapidly becoming real during the industrial revolution 01:05:45.120 |
there were lots of people who embraced new technologies moved from farms to cities to work in the new 01:05:50.160 |
factories uh were sort of early adopters on that curve um but that was also when the transcendentalists 01:05:56.240 |
started retreating into nature and rejecting technology that's when thoreau went to walden 01:06:01.200 |
pond and there was a big movement of americans who just saw the new technology and said i don't think 01:06:06.080 |
so not for me do you think there will be a similar movement around rejection of ai and if so how 01:06:11.360 |
big do you think it'll be um i don't know i mean there could be a get back to nature movement and 01:06:17.760 |
i think a lot of people will want to do that and i think this potentially will give 01:06:21.760 |
them the room and space to do it right if you're in a world of radical abundance i fully expect that's 01:06:26.240 |
what a lot of us will want to do is use it you know i think again i'm thinking about it sort 01:06:31.680 |
of space faring and more you know kind of um maximum human flourishing but uh i think that 01:06:38.560 |
will be exactly some of the things that a lot of us will choose to do and we'll have the time 01:06:42.800 |
and the space and the resources to do it are there parts of your life where you say i'm not 01:06:48.240 |
going to use ai for that even though it might be pretty good at it for some sort of reason wanting 01:06:53.920 |
to protect your creativity or your thought process or something else um i don't think ai is good enough 01:07:00.320 |
yet to impinge on any of those sorts of areas you know mostly i'm using it for 01:07:05.680 |
you know things like you did with notebook lm which i find great for breaking uh the ice on a new 01:07:10.080 |
topic a scientific topic and then deciding if i want to go deeper into it that's one of my 01:07:14.320 |
main use cases summarization those things i think those are all just helpful um but you know we'll 01:07:19.680 |
see i haven't got any examples of what you suggested yet but maybe as ai gets more powerful 01:07:24.560 |
there will be when we talked to dario amodei of anthropic recently he talked about this feeling of 01:07:30.960 |
excitement mixed with a kind of melancholy about the progress that ai was making in domains where he had 01:07:37.920 |
spent a lot of time trying to be very good yes coding yes where it was like you see a new coding 01:07:43.520 |
system that comes out it's better than you you think that's amazing and then your second thought 01:07:46.960 |
is like oh that stings a little bit yeah have you had any experiences like that of course so maybe one 01:07:52.320 |
reason it doesn't sting me so much is i had that experience when i was very young with chess so you 01:07:57.200 |
know um chess was going to be my first career and you know i was playing pretty professionally when i was a 01:08:01.520 |
kid for the england junior teams and then deep blue came along right and clearly uh the computers were 01:08:07.120 |
going to be much more powerful than the world champion forever after that and so but yeah i 01:08:12.400 |
still enjoy playing chess um people still do it's different you know but it's a bit like you 01:08:17.840 |
know usain bolt we celebrate him for running the hundred meters incredibly fast but we've got cars 01:08:23.040 |
but we don't care about that right we're interested in other humans doing it and um i think 01:08:28.480 |
that'll be the same with robotic football and all of these other things so um and that maybe goes back 01:08:34.320 |
to what we discussed earlier about how i think in the end we're interested in other human beings 01:08:39.040 |
that's why even like a novel maybe ai could one day write a novel that's sort of technically 01:08:44.720 |
good but i don't think it would have the same soul or connection to the reader if you knew it 01:08:50.640 |
was written by an ai at least as far as i can see for now you mentioned robotic football is that a 01:08:55.520 |
real thing we're not sports fans so i just want to make sure i haven't missed something i was meaning 01:08:59.200 |
soccer yeah no uh i don't know i think there are um robocup sort of uh 01:09:05.120 |
soccer type little robots trying to kick balls and things uh i'm not sure how serious it is but there 01:09:10.080 |
is a field of robotic football you mentioned you know sometimes a novel written by a robot 01:09:16.720 |
might not feel like it has a soul i have to say as incredible as the technology is in 01:09:21.360 |
veo or imagen i sort of feel that way with it where it's like it's beautiful to look at but i don't 01:09:26.880 |
know what to do with it right you know what i mean exactly and that's what i was saying that's 01:09:31.120 |
why we work with great artists like darren aronofsky and shanker on the music um i totally agree 01:09:37.680 |
i think these are tools and they can come up with technically good things and i mean veo 3 is unbelievable 01:09:43.680 |
like when i look at the you know i don't know if you've seen some of the things that are going viral 01:09:46.960 |
being posted at the moment with the voices actually i didn't realize how big a difference audio is going 01:09:50.880 |
to make to the video i think it just really brings it to life but still as darren was saying 01:09:56.400 |
yesterday when we were discussing it in an interview he brings the storytelling it hasn't got 01:10:02.640 |
deep storytelling like a master filmmaker or a master novelist at the top of their game will do 01:10:09.360 |
and um it might never do right it's just always going to feel like something's missing the sort of 01:10:16.480 |
soul for want of a better word of the piece you know the real humanity the magic if you like of the great pieces 01:10:23.360 |
of art you know when i see a van gogh or a rothko you know why does that touch you 01:10:29.840 |
i still get you know the hairs going up on the back of my spine because i remember 01:10:36.000 |
you know what they went through and um the struggle to produce that right 01:10:40.800 |
in every one of van gogh's brushstrokes his sort of uh torture and i'm not sure what that 01:10:46.880 |
would mean even if the ai mimicked that and you were told that it did it would be like so what right and 01:10:51.840 |
so i think that is the piece that at least as far as i can see out to five ten years um the 01:10:59.840 |
top human creators will always be bringing and that's why we've done all of our tools veo lyria 01:11:06.320 |
in collaboration um with top creative artists the new pope pope leo yes is reportedly interested in agi 01:11:14.880 |
i don't know if he's agi pilled or not but uh that's something that he's spoken about before um 01:11:19.920 |
do you think we will have a religious revival or a renaissance of interest in faith and spirituality 01:11:25.600 |
in a world where agi is forcing us to think about what gives our lives meaning i think that potentially 01:11:31.520 |
could be the case and um i actually did speak to the last pope about that and the vatican's been 01:11:36.160 |
interested even prior to this pope who i haven't spoken to yet but on these matters how does ai 01:11:42.000 |
and religion and uh technology in general and religion interact and what's interesting 01:11:48.480 |
about the catholic church and i'm a member of the pontifical academy of sciences is they've always 01:11:53.120 |
had uh which is strange for a religious body a scientific arm you know which they always like to 01:11:58.000 |
say galileo was the founder of and uh it's interesting because it's actually 01:12:05.760 |
really separate and i always thought that was quite interesting and people like stephen hawking 01:12:09.440 |
and you know avowed atheists were part of the academy and that's partly why i agreed to 01:12:13.840 |
join it because it's a fully scientific body and uh it's very interesting and i was fascinated 01:12:19.040 |
they've been interested in this for 10 plus years so they were on this early in terms of like 01:12:24.000 |
how interesting from a philosophical point of view i think um this technology will be 01:12:30.960 |
and i actually think we need more of that type of thinking and work from philosophers 01:12:36.320 |
and theologians uh that would be really really good so i hope the new pope is genuinely interested 01:12:42.000 |
um we'll close on a question that uh i recently heard tyler cowen ask jack clark from anthropic that 01:12:47.920 |
i thought was so good and decided to just steal it whole cloth in the ongoing ai revolution what is the worst age to be 01:13:01.520 |
gosh i haven't thought about that but i mean i think any age uh where you can live to see it 01:13:11.520 |
is a good age because i think we are going to make some great strides uh with things like you know 01:13:17.680 |
medicine and so um i think it's going to be an incredible journey none of us know you know 01:13:22.800 |
exactly how it's going to transpire it's very difficult to say but it's going to be very 01:13:26.960 |
interesting to find out try to be young if you can yes young is always better yeah i mean in general 01:13:32.880 |
young is always better all right demis hassabis thanks so much thank you very much