Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast #383
Chapters
0:00 Introduction
0:28 Jiu-jitsu competition
17:51 AI and open source movement
30:22 Next AI model release
42:37 Future of AI at Meta
63:15 Bots
78:42 Censorship
93:23 Meta's new social network
100:10 Elon Musk
104:15 Layoffs and firing
111:45 Hiring
117:37 Meta Quest 3
124:34 Apple Vision Pro
130:50 AI existential risk
137:13 Power
140:44 AGI timeline
148:07 Murph challenge
153:22 Embodied AGI
156:29 Faith
00:00:00.000 |
The following is a conversation with Mark Zuckerberg, 00:00:14.840 |
We talk about his vision for the future of Meta 00:00:23.520 |
and now, dear friends, here's Mark Zuckerberg. 00:00:28.280 |
So you competed in your first Jiu-Jitsu tournament, 00:00:30.800 |
and me, as a fellow Jiu-Jitsu practitioner and competitor, 00:00:37.560 |
So I gotta ask, what was that experience like? 00:00:47.160 |
- Doing sports that basically require your full attention, 00:00:50.320 |
I think is really important to my mental health 00:01:14.040 |
"I'm not gonna shy away from a challenge like that." 00:01:30.240 |
where you're trying to break your opponent's limbs 00:01:36.320 |
and do so with grace and elegance and efficiency 00:01:46.240 |
And it's basically a game, a sport of human chess 00:01:54.280 |
of using leverage and all that kind of stuff. 00:02:03.160 |
and you get to understand the way the mechanics 00:02:18.520 |
So basically, I rolled up with a hat and sunglasses 00:02:24.920 |
and I registered under my first and middle name, 00:02:28.480 |
And it wasn't until I actually pulled all that stuff off 00:02:41.720 |
is not just the losing but being almost embarrassed. 00:02:44.620 |
It's so raw, the sport, in that it's just you 00:02:52.080 |
especially the first time you're doing the competing 00:03:08.880 |
I mean, all the stuff that you were talking about before 00:03:10.400 |
about getting choked or getting a joint lock, 00:03:18.400 |
if you're not willing to tap once you've already lost. 00:03:22.660 |
But obviously, when you're getting started with something, 00:03:24.580 |
you're not gonna be an expert at it immediately. 00:03:26.500 |
So you just need to be willing to go with that. 00:03:36.420 |
as people grow up, maybe they don't wanna be embarrassed 00:03:39.020 |
or anything, they've built their adult identity 00:03:40.900 |
and they kind of have a sense of who they are 00:03:46.820 |
And I don't know, I think maybe to some degree, 00:03:51.620 |
your ability to keep doing interesting things 00:03:58.220 |
and go back to step one and start as a beginner 00:04:01.740 |
and get your ass kicked and look stupid doing things. 00:04:06.740 |
Yeah, I think so many of the things that we're doing, 00:04:21.900 |
I think of as like 10 plus year missions that we're on 00:04:33.620 |
until something is perfect to put it out there. 00:04:35.340 |
We wanna get it out quickly and get feedback on it. 00:04:43.900 |
that you decide that you're gonna be too embarrassed 00:04:46.580 |
then you're not gonna learn anything anymore. 00:04:53.940 |
Do you feel that in especially stressful moments 00:04:57.660 |
sort of outside of the tournament, just at work, 00:05:05.220 |
big decision moments, how do you deal with that fear? 00:05:19.020 |
I tend to have enough conviction around the values 00:05:22.860 |
of what we're trying to do and what I think matters 00:05:28.260 |
that those don't really keep me up at night that much. 00:05:45.900 |
The thing in running a company for almost 20 years now, 00:05:59.180 |
And, you know, you can run through super hard challenges, 00:06:03.280 |
you can make hard decisions and push really hard 00:06:08.820 |
you know, and kind of optimize something super well. 00:06:18.940 |
who run other companies and things like that, 00:06:20.580 |
I think one of the things that I actually spend 00:06:43.540 |
but like a pretty tight group who you can go work on 00:06:47.700 |
But to me, that's also the most stressful thing 00:06:50.740 |
is when there's tension, you know, that weighs on me. 00:06:55.740 |
I think the, you know, just it's maybe not surprising. 00:06:59.380 |
I mean, we're like a very people-focused company 00:07:07.720 |
But yeah, that I'd say across everything that we do 00:07:12.980 |
- So when there's tension in that inner circle 00:07:15.740 |
of close folks, so when you trust those folks 00:07:29.900 |
the future of the company and the metaverse with AI, 00:07:33.180 |
how do you build that close-knit group of folks 00:07:39.180 |
Is there people that you have to have critical voices, 00:07:42.620 |
very different perspectives on focusing on the past 00:08:01.460 |
And, you know, so I mean, a lot of how I run the company 00:08:04.780 |
is, you know, it's like every Monday morning, 00:08:06.340 |
we get our, it's about the top 30 people together. 00:08:10.220 |
And we, and this is a group that just worked together 00:08:16.340 |
I mean, new people join, people leave the company, 00:08:27.380 |
it's, you know, it can be somewhat unstructured. 00:08:38.500 |
whether it's, okay, there's an issue happening 00:08:44.260 |
There's like a new technology that's developing here. 00:08:49.060 |
You know, there's a design trade-off in WhatsApp 00:08:56.940 |
And we need to kind of decide where we wanna be on that. 00:09:02.540 |
by working through a lot of issues with people 00:09:04.460 |
and doing it openly, people develop an intuition 00:09:09.840 |
And to me, developing that is like a lot of the fun part 00:09:15.580 |
of running a company or doing anything, right? 00:09:17.420 |
I think it's like having people who are kind of along 00:09:20.220 |
on the journey that you feel like you're doing it with. 00:09:24.700 |
- Are there people that disagree often within that group? 00:09:31.900 |
So this is making decisions on design, engineering, 00:09:41.180 |
- I have to ask, just back to jujitsu for a little bit, 00:09:47.060 |
how do you like to submit your opponent, Mark Zuckerberg? 00:09:52.340 |
- Well, first of all, do you prefer no gi or gi jujitsu? 00:09:58.900 |
So gi is this outfit you wear that maybe mimics clothing 00:10:06.700 |
It's like the traditional martial arts or kimono. 00:10:21.060 |
And so I think no gi more closely approximates MMA. 00:10:25.900 |
And I think my style is maybe a little closer 00:10:39.340 |
but in MMA, you don't wanna be on your back, right? 00:10:42.920 |
you're just taking punches while you're on your back. 00:10:51.820 |
And yeah, and I'd probably rather be the top player, 00:11:05.120 |
especially because if I'm doing a competition, 00:11:11.700 |
So back takes probably pretty important, right? 00:11:15.140 |
Because that's where you have the most leverage advantage, 00:11:26.540 |
But I don't know, I feel like the right strategy 00:11:28.340 |
is to not be too committed to any single submission. 00:11:36.060 |
are a somewhat more humane way to go than joint locks. 00:11:44.140 |
So you're basically like a Khabib Nurmagomedov type of fighter. 00:11:47.660 |
- So let's go, yeah, back take to a rear naked choke. 00:11:59.380 |
Given how busy you are, given where you are in life, 00:12:02.980 |
that you're able to do this, you're able to train, 00:12:05.100 |
you're able to compete and get to learn something 00:12:18.820 |
I think that there's a flow to all these things. 00:12:31.100 |
running a company and the different activities 00:12:34.300 |
that I like doing are, I really believe that, 00:12:36.860 |
like, if you're gonna accomplish whatever, anything, 00:12:40.100 |
a lot of it is just being willing to push through, right? 00:12:50.020 |
that ends up being sort of a difference maker 00:12:54.300 |
between the people who kind of get the most done and not. 00:12:58.340 |
I mean, there's all these questions about, like, 00:13:05.060 |
who, like, start successful companies or things like that 00:13:09.500 |
But I think one of the things that you learn, 00:13:13.260 |
or, you know, very acutely with things like jujitsu 00:13:15.940 |
or surfing is you can't push through everything. 00:13:30.340 |
Because running a company, the cycle times are so long, 00:13:45.020 |
for the next version of the thing that you're doing. 00:13:46.540 |
Whereas, one of the things that I just think is mentally 00:13:57.220 |
It's like, okay, like, I don't counter someone correctly, 00:14:00.940 |
So not in jujitsu, you don't get punched in jujitsu, 00:14:04.820 |
There are all these analogies between all these things 00:14:18.460 |
you can't go in the other direction on it, right? 00:14:20.900 |
It's like, there are limits to kind of what, you know, 00:14:26.380 |
and push pretty hard in a bunch of directions, 00:14:29.180 |
but like, yeah, you, you know, it's at some level, 00:14:31.260 |
like the momentum against you is strong enough, 00:14:35.420 |
And I do think that that's sort of a humbling, 00:14:45.220 |
or building things, it's like, yeah, you, you, you know, 00:14:48.260 |
a lot of the game is just being able to kind of push 00:14:53.580 |
but you also need to kind of have enough of an understanding 00:14:56.860 |
of like which things you just can't push through 00:15:07.940 |
you made it sound so simple and were so eloquent 00:15:13.100 |
that it's easy to miss, but basically being okay 00:15:26.980 |
I think that's a big gift of the being humbled. 00:15:30.860 |
Somehow being humbled, especially physically, 00:15:34.060 |
opens your mind to the full process of learning, 00:15:41.740 |
And I think jujitsu is just very repetitively, 00:15:45.540 |
efficiently humbles you over and over and over and over 00:15:49.740 |
to where you can carry that lessons to places 00:16:01.620 |
in this period of an hour over and over and over and over, 00:16:20.100 |
because if you don't tap, you're going to die. 00:16:23.980 |
you could have killed me just now, but we're friends, 00:16:27.200 |
so we're gonna agree that you're not going to. 00:16:33.300 |
to your ego that puts it in its proper context 00:16:46.740 |
or rigorously day after day after day after day, 00:17:01.580 |
Like, "I'm okay, it's okay, it's part of the process." 00:17:16.060 |
being able to, and you said it in the very beginning, 00:17:22.660 |
especially as you develop expertise in certain areas, 00:17:25.580 |
the not being willing to be a beginner in a new area. 00:17:37.420 |
saying something stupid, doing something stupid. 00:17:42.420 |
you wanna show that off, and it sucks being a beginner, 00:17:50.260 |
Well, speaking of which, let me ask you about AI. 00:17:59.980 |
for the development of artificial intelligence. 00:18:12.240 |
There's a lot of interesting questions I can ask here, 00:18:24.020 |
and making the complicated decision of how to release it? 00:18:32.020 |
that in the last year, there have been a bunch of advances 00:18:36.420 |
on scaling up these large transformer models. 00:18:43.100 |
There's sort of the image generation equivalent 00:18:48.260 |
There's a lot of fundamental research that's gone into this. 00:19:04.260 |
Part of this is we wanna have the best people 00:19:11.500 |
that they're gonna be able to share their work. 00:19:17.860 |
if you're one of the top AI researchers in the world, 00:19:32.100 |
We do that with a lot of the different AI tools 00:19:39.820 |
And we did a limited open source release for it, 00:19:44.820 |
which was intended for researchers to be able to use it. 00:19:50.620 |
But responsibility and getting safety right on these 00:20:01.980 |
whether we should be releasing this commercially. 00:20:04.180 |
So, we kind of punted on that for V1 of LLAMA 00:20:09.860 |
Now, obviously, by releasing it for research, 00:20:18.780 |
And we're working on the follow-up models for this 00:20:22.700 |
and thinking through how exactly this should work 00:20:45.940 |
who had the ability to build state-of-the-art technology here 00:20:50.940 |
and not just a small number of big companies. 00:21:05.700 |
So, there are not that many organizations in the world 00:21:20.940 |
But I just think there's all this innovation out there 00:21:29.980 |
by seeing what the whole community of students 00:21:38.620 |
And that's kind of been how we've approached this. 00:21:40.380 |
And it's also how we've done a lot of our infrastructure. 00:21:51.500 |
"All right, if we make it so that more people 00:21:57.980 |
"It'll also make the server design more efficient. 00:22:00.100 |
"And that'll make our business more efficient too." 00:22:03.340 |
And we've just done this with a lot of our infrastructure. 00:22:15.500 |
meaning like it escaped the limited release aspect. 00:22:21.500 |
But it was something you probably anticipated 00:22:28.220 |
given that it's just released to researchers. 00:22:36.780 |
- But from there, I just would love to get your comment 00:22:48.580 |
that makes it more efficient to run on smaller computers. 00:22:51.740 |
There's combining with reinforcement learning 00:22:55.860 |
So, some of the different interesting fine tuning mechanisms. 00:22:59.460 |
There's then also, like, fine-tuning on GPT-3 generations. 00:23:02.860 |
There's a lot of GPT4All, Alpaca, Colossal AI, 00:23:07.500 |
all these kinds of models just kind of spring up, 00:23:15.020 |
I mean, there's been folks who are getting it to run 00:23:19.340 |
So, if you're an individual who just wants to experiment 00:23:22.420 |
with this at home, you probably don't have a large budget 00:23:26.380 |
to get access to like a large amount of cloud compute. 00:23:37.420 |
And then there were things like, yeah, llama.cpp 00:23:42.500 |
So, now even when we run our own versions of it, 00:23:47.260 |
and it's just way more efficient, saves a lot of money 00:24:03.500 |
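As a rough illustration of the quantization and llama.cpp point above, here is a minimal sketch of loading a 4-bit quantized LLaMA-family model on a plain CPU via the llama-cpp-python bindings; the file path, context size, and prompt are placeholder assumptions, not details from the conversation.

```python
# Minimal sketch: local CPU inference with a quantized LLaMA-family model
# through llama-cpp-python. The model path and settings are hypothetical.
from llama_cpp import Llama

# 4-bit quantization is what shrinks the model enough to run on small,
# cheap hardware (even a Raspberry Pi-class device, slowly).
llm = Llama(model_path="./models/llama-7b-q4_0.gguf", n_ctx=2048)

# One completion, computed entirely on the local machine.
out = llm("Q: Why does quantization reduce memory use? A:", max_tokens=64)
print(out["choices"][0]["text"])
```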
LLAMA isn't quite as on the frontier as, for example, 00:24:08.500 |
the biggest OpenAI models or the biggest Google models. 00:24:13.220 |
So, I mean, you mentioned that the largest LLAMA model 00:24:19.300 |
And no one knows, I guess, outside of OpenAI, 00:24:27.860 |
But I think the, my understanding is it's like 00:24:32.140 |
And I think Google's PaLM model is also, I think, 00:24:47.060 |
is it good for everyone in the world to have access 00:24:55.020 |
And I think as the AI models start approaching something 00:25:06.300 |
But right now, I mean, these are still very basic tools. 00:25:14.260 |
a lot of open source software like databases or web servers 00:25:20.540 |
But I don't think anyone looks at the current generation 00:25:27.020 |
of LLAMA and thinks it's anywhere near a super intelligence. 00:25:30.220 |
So, I think that a bunch of those questions around, 00:25:40.140 |
for all the reasons that open source software 00:25:49.780 |
because you have more people looking at it openly 00:26:01.980 |
that open source software is generally more secure 00:26:05.140 |
and safer than things that are kind of developed in a silo 00:26:08.780 |
where people try to get through security through obscurity. 00:26:16.420 |
I think we're more likely to get to good alignment 00:26:19.500 |
and good understanding of kind of what needs to be done 00:26:23.460 |
to make this work well by having it be open source. 00:26:30.900 |
- Meta released a lot of models as open source. 00:26:39.660 |
- I mean, I'll ask you questions about those, 00:26:42.020 |
but the point is you've open sourced quite a lot. 00:26:45.860 |
You've been spearheading the open source movement. 00:26:52.980 |
Of course, there's folks who are really terrified 00:26:55.300 |
about the existential threat of artificial intelligence 00:27:00.180 |
you have to be careful about the open sourcing step. 00:27:06.420 |
But where do you see the future of open source here 00:27:19.940 |
do you wanna put a powerful tool in the hands 00:27:32.860 |
I don't think anyone looks at the current state of things 00:27:48.020 |
So, I think that at least for the stage that we're at now, 00:28:09.500 |
as to what we would do if we were in that position, 00:28:13.580 |
that we're pretty far off from that position. 00:28:24.060 |
applies to every single thing that we would ever do. 00:28:42.900 |
It's not like a general enough piece of software 00:28:53.340 |
whether it's like an open source server design 00:29:00.140 |
like a good, it was probably our earliest project 00:29:05.220 |
It was probably one of the last things that I coded 00:29:09.360 |
But basically this caching tool for quick data retrieval. 00:29:15.820 |
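The caching tool described here follows the kind of cache-aside pattern sketched below: consult a fast in-memory store first and only fall back to the slower backing store on a miss. The toy "database" and keys are invented for illustration.

```python
# Minimal cache-aside sketch: quick data retrieval by checking an in-memory
# cache before a slow backing store. The data here is a toy stand-in.
import time

slow_database = {"user:42": {"name": "example user"}}
cache = {}

def get(key):
    if key in cache:                      # fast path: already cached
        return cache[key]
    time.sleep(0.1)                       # simulate a slow database lookup
    value = slow_database.get(key)
    cache[key] = value                    # remember it for next time
    return value

print(get("user:42"))  # slow: goes to the backing store
print(get("user:42"))  # fast: served from the cache
```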
These are things that are just broadly useful 00:29:22.020 |
And I think that some of the language models now 00:29:25.660 |
have that feel as well as some of the other things 00:29:28.100 |
that we're building, like the translation tool 00:29:44.020 |
which is 40 times more than any known previous technology. 00:29:47.500 |
To me, that's really, really, really exciting 00:29:51.740 |
breaking down barriers that language creates. 00:29:55.900 |
between all of these different pieces in real time, 00:30:04.740 |
that we'd all have, whether it's an earbud or glasses 00:30:10.580 |
or something that can help translate in real time 00:30:19.780 |
So I think, yeah, I think that's pretty exciting. 00:30:24.920 |
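For a concrete sense of the open translation tooling being described, here is a minimal sketch using a publicly released Meta translation model (NLLB-200) through the Hugging Face pipeline API; the specific model name, language codes, and sentence are assumptions for illustration, not something cited in the conversation.

```python
# Minimal sketch: text translation with an openly released model
# (facebook/nllb-200-distilled-600M) via the transformers pipeline.
from transformers import pipeline

translator = pipeline(
    "translation",
    model="facebook/nllb-200-distilled-600M",
    src_lang="eng_Latn",   # source language code (FLORES-200 style)
    tgt_lang="spa_Latn",   # target language code
)

result = translator("Real-time translation breaks down language barriers.")
print(result[0]["translation_text"])
```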
What can you say about the next version of LLAMA? 00:30:27.880 |
What can you say about what were you working on 00:30:31.520 |
in terms of release, in terms of the vision for that? 00:30:36.840 |
is taking the first version, which was primarily 00:30:40.720 |
this research version, and trying to now build a version 00:30:58.640 |
But a lot of the work that we're doing internally 00:31:04.360 |
that this is as aligned and responsible as possible. 00:31:12.920 |
we're talking about kind of the open source infrastructure, 00:31:15.900 |
but the main thing that we focus on building here, 00:31:20.120 |
a lot of product experiences to help people connect 00:31:22.860 |
So we're gonna, I've talked about a bunch of this stuff, 00:31:26.700 |
but you'll have an assistant that you can talk to 00:31:31.780 |
I think in the future, every creator will have 00:31:36.300 |
kind of an AI agent that can kind of act on their behalf, 00:31:41.180 |
I wanna get to the point where every small business 00:31:43.700 |
basically has an AI agent that people can talk to 00:31:50.360 |
So there are gonna be all these different things 00:31:53.260 |
and LLAMA or the language model underlying this 00:31:57.580 |
is basically gonna be the engine that powers that. 00:32:06.660 |
is that it basically it unlocks a lot of innovation 00:32:11.660 |
in the ecosystem, will make our products better as well. 00:32:35.140 |
but hopefully for a lot of other things out there too. 00:32:39.140 |
- Do you think the LLAMA or the language model 00:32:41.840 |
underlying that version two will be open sourced? 00:32:51.540 |
- This is, I mean, we were talking about the debates 00:32:58.840 |
I mean, I think we did the research license for V1 00:33:03.700 |
and I think the big thing that we're thinking about 00:33:13.100 |
I don't know if you can comment on it for V1. 00:33:23.180 |
So we were very clear that anyone who uses the code 00:33:28.060 |
and the weights doesn't have a commercial license 00:33:31.020 |
And we've generally seen people respect that, right? 00:33:33.860 |
It's like you don't have any reputable companies 00:33:40.480 |
But yeah, but by sharing it with so many researchers, 00:33:46.740 |
- But what have you learned from that process 00:33:57.620 |
- Yeah, well, I mean, I think a lot of the feedback, 00:33:59.440 |
like I said, is just around different things around, 00:34:03.380 |
how do you fine tune models to make them more aligned 00:34:07.440 |
And you see all the different data recipes that, 00:34:18.420 |
And people have tried a lot of different things 00:34:29.200 |
but also we're able to learn from some of the best ideas 00:34:32.460 |
And I think we wanna just continue pushing that forward. 00:34:36.420 |
But I don't have any news to announce on this, 00:34:41.780 |
I mean, this is a thing that we're still kind of, 00:34:57.540 |
Can you comment on, what do you think of the thing 00:35:01.860 |
which is the reinforcement learning with human feedback? 00:35:10.100 |
'cause I talked to Yann LeCun before talking to you today. 00:35:10.100 |
will need to be crowdsourced Wikipedia style? 00:35:24.780 |
So this kind of idea of how to integrate the human 00:35:29.180 |
in the fine tuning of these foundation models. 00:35:32.980 |
- Yeah, I think that's a really interesting idea 00:35:45.620 |
to be as safe and aligned and responsible as possible? 00:35:50.180 |
And different groups out there who are doing development 00:36:04.180 |
instead of having kind of one group fine tune some stuff 00:36:27.620 |
for a kind of a broader community to fine tune it as well. 00:36:36.020 |
both from an infrastructure and community management 00:36:41.020 |
and product perspective about how you do that. 00:36:45.140 |
But as an idea, I think it's quite compelling. 00:36:50.940 |
of open sourcing the technology is also finding a way 00:36:54.060 |
to have a kind of community driven training of it. 00:36:59.060 |
But I think that there are a lot of questions on this. 00:37:05.420 |
what's the best way to produce aligned AI models, 00:37:12.060 |
And it's one that I think we will need to make 00:37:14.340 |
as much progress on as the kind of core intelligence 00:37:21.100 |
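Since reinforcement learning from human feedback comes up here, a minimal sketch of the pairwise reward-model objective that usually sits at its core may help; the reward values below are toy numbers, and a real pipeline would score responses with a learned model before any policy optimization step.

```python
# Minimal sketch of the pairwise (Bradley-Terry style) loss used to train a
# reward model from human preference data in RLHF-style fine-tuning.
import torch
import torch.nn.functional as F

# Scalar rewards the model assigned to the human-preferred ("chosen") and
# human-rejected responses for a small toy batch.
chosen_rewards = torch.tensor([1.2, 0.3, 0.8])
rejected_rewards = torch.tensor([0.4, 0.5, -0.1])

# Push chosen rewards above rejected ones; this trained reward model then
# guides policy optimization (e.g. PPO) of the base language model.
loss = -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
print(loss.item())
```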
- Well, I just did a conversation with Jimmy Wales, 00:37:24.620 |
And to me, Wikipedia is one of the greatest websites 00:37:31.900 |
And I think it has to do with something that you mentioned, 00:37:45.500 |
and they handle it with balance and with grace, 00:37:48.820 |
despite sort of the attacks that will often happen. 00:37:52.660 |
I mean, it has issues just like any other human system. 00:37:58.580 |
I mean, it's amazing what they've been able to achieve, 00:38:03.020 |
And I think that there's still a lot of challenges. 00:38:21.300 |
we need to be able to generate those same things, 00:38:28.480 |
that they generate something that is closest to truth? 00:38:38.060 |
all this kind of stuff that nobody can define. 00:38:46.840 |
Like what is truth and how to help AI systems generate that. 00:38:50.640 |
'Cause one of the things language models do really well 00:39:16.640 |
- Yeah, and I think that there's also a lot of questions 00:39:20.200 |
about whether the current architecture for LLMs, 00:39:28.180 |
I mean, a lot of what's been exciting in the last year 00:39:33.640 |
is that there's clearly a qualitative breakthrough 00:39:35.960 |
where with some of the GPT models that OpenAI put out 00:39:40.560 |
and that others have been able to do as well. 00:39:43.560 |
I think it reached a kind of level of quality 00:39:45.840 |
where people are like, wow, this feels different 00:39:48.920 |
and like it's gonna be able to be the foundation 00:39:57.000 |
But I think that the other realization that people have 00:40:06.840 |
that maybe we're closer to general intelligence. 00:40:11.120 |
But I think that that idea is predicated on the idea 00:40:13.320 |
that I think people believe that there's still generally 00:40:32.160 |
and getting to higher parameter count models by itself 00:40:50.000 |
But still, the leaps taken with this extra alignment step 00:40:55.000 |
is quite incredible, quite surprising to a lot of folks. 00:41:07.200 |
you can start to see civilization transforming effects 00:41:10.800 |
before you achieve super, quote unquote, super intelligence. 00:41:20.200 |
- Oh yeah, I mean, I think that there are gonna be 00:41:22.480 |
a lot of amazing products and value that can be created 00:41:41.840 |
like if there keep on being stacked breakthroughs, 00:41:49.200 |
into the products and experiences that we all use 00:42:08.740 |
you know, now we're gonna be able to get help coding 00:42:28.520 |
I just think that this is all gonna be super exciting. 00:42:32.000 |
It's gonna create better experiences for people 00:42:34.840 |
and just unlock a ton of innovation and value. 00:42:37.760 |
So I don't know if you know, but you know, what is it? 00:42:41.360 |
Over 3 billion people use WhatsApp, Facebook and Instagram. 00:42:47.240 |
So any kind of AI-fueled products that go into that, 00:42:57.000 |
Do you have ideas and thoughts about possible products 00:43:08.960 |
- Yeah, I think there's three main categories 00:43:13.360 |
The first that I think is probably the most interesting 00:43:33.600 |
that's different about my view of how this plays out 00:43:36.280 |
from what I see with OpenAI and Google and others 00:43:55.440 |
a lot of different AIs that people are gonna wanna engage 00:44:02.120 |
a number of different apps for different things 00:44:03.920 |
and you have relationships with different people 00:44:06.380 |
in your life who fill different emotional roles for you. 00:44:16.600 |
people have a reason that I think you don't just want 00:44:20.260 |
And that I think is probably the biggest distinction 00:44:25.460 |
And a bunch of these things, I think you'll want an assistant. 00:44:28.700 |
I mean, I mentioned a couple of these before. 00:44:30.980 |
I think like every creator who you interact with 00:44:40.360 |
or that allows them to interact with their fans. 00:45:08.460 |
about when you're performing a concert or something like that. 00:45:12.540 |
that's super valuable, but it's not just that, 00:45:16.500 |
I think people are gonna want just one singular AI. 00:45:22.440 |
And then I think there's the business version of this too, 00:45:24.340 |
which we've touched on a couple of times, which is, 00:45:26.840 |
I think every business in the world is gonna want 00:45:34.700 |
it's like you have your page on Instagram or Facebook 00:45:37.660 |
or WhatsApp or whatever, and you wanna point people 00:45:46.220 |
You don't want it recommending your competitors' stuff. 00:45:49.220 |
So it's not like there can be like just, you know, 00:45:51.940 |
one singular AI that can answer all the questions 00:45:56.820 |
that AI might not actually be aligned with you 00:46:05.540 |
So I think that there's gonna be a clear need 00:46:17.260 |
the technology that enables the personalization 00:46:28.340 |
that's fine-tuned to particular needs, particular styles, 00:46:37.460 |
to create them really easily for the, you know, 00:46:40.140 |
for your own business, or if you're a creator 00:46:42.660 |
to be able to help you engage with your fans. 00:46:48.420 |
I think that there's a clear kind of interesting 00:46:51.420 |
product direction here that I think is fairly unique 00:46:55.060 |
from what any of the other big companies are taking. 00:46:59.500 |
It also aligns well with this sort of open source approach, 00:47:04.020 |
in this more community-oriented, more democratic approach 00:47:08.220 |
to building out the products and technology around this. 00:47:11.140 |
We don't think that there's gonna be the one true thing. 00:47:12.900 |
We think that there should be kind of a lot of development. 00:47:19.180 |
And we could go, probably spend a lot of time 00:47:21.500 |
talking about that and the kind of implications 00:47:34.540 |
how this finds its way into, like, what do we build? 00:47:37.140 |
There are gonna be a lot of simpler things around, 00:47:40.780 |
okay, you post photos on Instagram and Facebook 00:47:49.580 |
and like, you want the photos to look as good as possible. 00:47:51.820 |
So like having an AI that you can just like take a photo 00:47:56.140 |
okay, I wanna edit this thing or describe this. 00:48:03.900 |
And that's more in the image and media generation side 00:48:08.700 |
but it all kind of plays off of advances in the same space. 00:48:19.580 |
is gonna basically get evolved in this direction, right? 00:48:25.500 |
like, do you need to make your own kind of ad creative? 00:48:29.820 |
It's no, you'll just, you know, you just tell us, 00:48:33.260 |
okay, I'm a dog walker and I am willing to walk people's dogs 00:48:41.900 |
and like create the ad unit that will perform the best 00:48:48.740 |
And it just kind of like connects you with the right people. 00:49:02.100 |
that works to find the best customer for your thing. 00:49:07.460 |
I mean, to me, advertisement, when done well, 00:49:10.860 |
just finds a good match between a human being 00:49:14.820 |
and a thing that will make that human being happy. 00:49:21.180 |
- When it's done well, people actually like it. 00:49:23.660 |
You know, I think that there's a lot of examples 00:49:27.460 |
and I think that that's what kind of gives it a bad rap. 00:49:29.940 |
But yeah, and a lot of this stuff is possible today. 00:49:39.380 |
that can generate the ideas for you about what to A/B test. 00:49:46.860 |
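One simple way an automated system could run the kind of A/B testing described here is a bandit loop over generated ad variants; the variant names and click counts below are invented for illustration.

```python
# Minimal sketch: Thompson sampling over two hypothetical ad creatives,
# serving the variant whose sampled click-through rate is highest.
import random

# (clicks, impressions) observed so far for each made-up variant.
stats = {"variant_a": (12, 400), "variant_b": (21, 410)}

def pick_variant():
    samples = {
        name: random.betavariate(1 + clicks, 1 + impressions - clicks)
        for name, (clicks, impressions) in stats.items()
    }
    return max(samples, key=samples.get)   # serve the best-looking variant

print(pick_variant())
```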
all the metaverse stuff that we're doing, right? 00:49:48.420 |
It's like you wanna create worlds in the future, 00:49:54.420 |
- So natural language becomes the interface we use 00:49:57.500 |
for all the ways we interact with the computer, 00:50:08.860 |
And with translation, you can do it in any kind of language. 00:50:26.180 |
- But this has, since the last time, this becomes-- 00:50:37.740 |
where I'll have a large or a Lex language model, 00:50:44.780 |
and I'll have to have a conversation with him 00:50:51.540 |
and having a conversation where you fine tune that thing 00:50:53.580 |
to be a little bit more respectful or something like this. 00:51:06.500 |
just not just the assistant that's facing the individual, 00:51:11.340 |
but the assistant that represents the individual 00:51:18.060 |
There's basically a layer that is the AI system 00:51:22.220 |
through which you interact with the outside world, 00:51:25.780 |
with the outside world that has humans in it. 00:51:49.260 |
Yeah, and a lot of the interface for this stuff today 00:52:07.820 |
where you have some of the people you're friends with 00:52:19.500 |
having conversations where some heterogeneous, 00:52:31.240 |
then Microsoft released this thing called XiaoIce 00:52:31.240 |
that was a lot simpler than what's possible today. 00:52:46.180 |
And I think it was like tens of millions of people 00:52:51.340 |
it became quite attached and built relationships with it. 00:52:54.580 |
And I think that there's services today like Replika 00:53:07.020 |
for companionship that people have, older people. 00:53:11.020 |
And I think most people probably don't have as many friends 00:53:25.700 |
the number of close friends that they have is fewer today 00:53:34.580 |
this is like the core thing that I think about 00:53:38.180 |
in terms of building services that help connect people. 00:53:40.740 |
So I think you'll get tools that help people connect 00:53:43.940 |
with each other are gonna be the primary thing 00:53:56.660 |
Right, it's like right now we have like the little box 00:54:02.340 |
But it's, but at some level you don't just wanna like 00:54:10.660 |
Right, so having something that's more of an, 00:54:16.100 |
and like that can update you on what's going on 00:54:22.180 |
to them effectively, help you be a better friend. 00:54:24.900 |
I think that that's something that's super powerful too. 00:54:33.900 |
of kind of personal AI is that I think could exist. 00:54:38.180 |
So I think an assistant is sort of the kind of simplest one 00:54:51.820 |
who can help pick you up through all the challenges 00:54:53.740 |
that inevitably we all go through on a daily basis. 00:55:03.220 |
you can probably just go through a lot of the different type 00:55:10.020 |
And I would bet that there will be companies out there 00:55:17.140 |
I think it's part of the interesting innovation 00:55:18.820 |
that's gonna exist is that there are certainly a lot, 00:55:25.020 |
It's like, I mean, I just look at my kids learning to code 00:55:33.300 |
and they have to wait till I can help answer it, right? 00:55:39.580 |
They'll just, there'll be like a coding assistant 00:55:41.900 |
that they have that is designed to be perfect 00:55:45.780 |
for teaching a five and a seven year old how to code. 00:55:47.980 |
And they'll just be able to ask questions all the time 00:55:58.740 |
kind of relationships or functional relationships 00:56:00.860 |
that we have in our lives that are really interesting. 00:56:05.640 |
And I think one of the big questions is like, 00:56:07.500 |
okay, is this all gonna just get bucketed into 00:56:18.060 |
what the long-term effects of human communication 00:56:21.300 |
when people can "talk with" others through a chatbot 00:56:32.020 |
Will people just communicate by grunts in a generation? 00:56:37.260 |
Do you think about long-term effects at scale 00:56:40.460 |
the integration of AI in our social interaction? 00:56:46.620 |
I mean, that question was sort of framed in a negative way. 00:56:52.700 |
language models helping you communicate with, 00:57:03.260 |
is helping people express themselves better to people 00:57:19.100 |
you know, I don't think people are gonna look at that 00:57:20.780 |
and say, it's sad that we have the capacity to do that 00:57:24.300 |
because I should have just learned your language, right? 00:57:27.980 |
But overall, I'd say there are all these impediments 00:57:46.540 |
- But language is also a mapping of the way you think, 00:58:05.860 |
that they can improve their discipline in class and so on, 00:58:15.020 |
So he uses GPT to translate his original email 00:58:28.220 |
is basically negotiating deals with brands and stuff, 00:58:33.300 |
Because they're like, I mean, they do their thing, right? 00:58:35.860 |
And the creators, they're excellent at what they do 00:58:38.860 |
and they just wanna connect with their community, 00:58:42.020 |
They go into their DMs and they see some brand 00:59:10.100 |
I mean, Priscilla and I just had our third daughter. 00:59:17.160 |
And it's like one of the saddest things in the world 00:59:25.820 |
It's like, well, 'cause babies don't generally 00:59:38.300 |
because she sometimes has a hard time expressing 00:59:44.180 |
And I was thinking about that and I was like, 00:59:46.060 |
well, actually a lot of adults get very frustrated too 00:59:48.380 |
because they have a hard time expressing things 00:59:51.260 |
in a way that, going back to some of the early themes, 01:00:02.740 |
I think that all of these different technologies 01:00:05.100 |
that can help us navigate the social complexity 01:00:26.580 |
I think it's just like, wow, we have so much more capacity 01:00:30.860 |
And I think that that'll be the case here too. 01:00:32.980 |
- You can allocate those cognitive capabilities 01:00:47.340 |
the addition of language models, large language models, 01:00:51.860 |
you basically don't have to remember nearly as much. 01:00:54.740 |
Just like with Stack Overflow for programming, 01:01:01.220 |
I mean, I find that I write like maybe 80%, 90% 01:01:04.980 |
of the code I write is now generated first and then edited. 01:01:04.980 |
how to write specifics of different functions. 01:01:14.900 |
And it's also, it's not just the specific coding. 01:01:19.580 |
I mean, in the context of a large company like this, 01:01:23.340 |
I think before an engineer can sit down to code, 01:01:26.100 |
they first need to figure out all of the libraries 01:01:29.500 |
and dependencies that tens of thousands of people 01:01:43.660 |
it's tools that can help summarize the whole knowledge base 01:01:51.060 |
in the experiments that I've done with this stuff, 01:02:09.740 |
That was always the most annoying part of coding 01:02:14.620 |
actually figuring out what the resources were 01:02:17.420 |
before you could actually start building the thing. 01:02:21.460 |
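The "figure out the internal libraries before you can code" problem maps naturally onto embedding-based retrieval over an internal knowledge base; below is a minimal sketch of that idea, where the library descriptions, model name, and query are all illustrative assumptions.

```python
# Minimal sketch: retrieve the most relevant internal library description
# for an engineer's question using sentence embeddings.
import numpy as np
from sentence_transformers import SentenceTransformer

docs = [
    "authlib: helpers for service-to-service authentication",
    "imgpipe: batch image resizing and transcoding utilities",
    "featstore: read/write access to shared feature storage",
]

model = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = model.encode(docs, normalize_embeddings=True)
query_vec = model.encode(
    ["how do I authenticate one service to another?"],
    normalize_embeddings=True,
)

# With normalized vectors, cosine similarity is just a dot product.
best = int(np.argmax(doc_vecs @ query_vec[0]))
print(docs[best])
```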
the flip side of that, I think for the most part is positive, 01:02:45.380 |
but it's actually wrong in a very subtle way. 01:02:57.820 |
When we stand on the shoulders of taller and taller giants, 01:03:10.180 |
I mean, I think it is very valuable in your life 01:03:24.960 |
that are not necessarily trying to do a good thing, 01:03:29.540 |
or they might be explicitly trying to do a bad thing, 01:03:32.220 |
like phishing scams, like social engineering, 01:03:36.360 |
which has always been a very difficult problem 01:03:43.120 |
- Well, there's a few different parts of this. 01:03:59.960 |
is basically ramping up very sophisticated AI systems, 01:04:07.240 |
to be able to categorize and classify and identify, 01:04:12.240 |
okay, this post looks like it's promoting terrorism. 01:04:21.600 |
This one looks like it might be trying to incite violence. 01:04:25.200 |
This one's an intellectual property violation. 01:04:42.400 |
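To make the classification idea concrete, here is a minimal sketch that routes a post into candidate harm categories with an off-the-shelf zero-shot classifier; the labels, model, and example text are illustrative assumptions, and production systems are far more involved.

```python
# Minimal sketch: zero-shot categorization of a post into harm categories.
from transformers import pipeline

classifier = pipeline("zero-shot-classification",
                      model="facebook/bart-large-mnli")

labels = ["incitement to violence", "intellectual property violation",
          "spam", "benign"]

result = classifier("Free movie downloads, full HD, no license needed!",
                    candidate_labels=labels)
print(result["labels"][0])  # highest-scoring category
```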
that advances in generative AI will test those. 01:04:52.280 |
and I'm optimistic that it will continue to be the case, 01:04:55.120 |
that we will be able to bring more computing power to bear 01:05:02.080 |
So we've had to deal with some adversarial issues before. 01:05:14.840 |
let's say someone's saying some kind of racist thing, 01:05:36.120 |
that are basically standing up these networks of bots 01:05:39.960 |
or inauthentic accounts is what we call them, 01:05:47.160 |
who are kind of masquerading as other people, 01:05:53.120 |
And some of that behavior has gotten very sophisticated 01:06:04.240 |
They don't just pack up their bags and go home and say, 01:06:15.080 |
But we have a fair amount of experience dealing with 01:06:22.720 |
where they just keep on getting better and better. 01:06:24.640 |
And I do think that as long as we can keep on 01:06:33.040 |
I'm quite optimistic that we're gonna be able to keep on 01:06:35.960 |
pushing against the kind of normal categories of harm 01:06:47.800 |
- What about like creating narratives and controversy? 01:06:50.760 |
To me, it's kind of amazing how a small collection of, 01:06:59.400 |
- Yeah, I mean, we have sort of this funny name for it, 01:07:01.400 |
but we call it coordinated inauthentic behavior. 01:07:03.720 |
- Yeah, it's kind of incredible how a small collection 01:07:07.160 |
of folks can create narratives, create stories. 01:07:16.560 |
that can catalyze the virality of the narrative. 01:07:21.120 |
- Yeah, and I think there the question is you have to be, 01:07:24.360 |
I'm very specific about what is bad about it, right? 01:07:27.080 |
Because I think a set of people coming together 01:07:43.600 |
and how culture gets created and how art gets created 01:07:46.480 |
So that's why we've kind of focused on this sense 01:07:57.320 |
but you have kind of someone pulling the strings behind it 01:08:01.880 |
and trying to kind of act as if this is a more organic set 01:08:14.920 |
to have coordinated networks and not disclose it as such. 01:08:22.840 |
pretty sophisticated AI and counter-terrorism groups 01:08:29.360 |
a fair number of these coordinated inauthentic networks 01:08:38.360 |
I think it's one thing that if you'd told me 20 years ago, 01:08:41.960 |
it's like, all right, you're starting this website 01:08:48.000 |
part of your organization is gonna be a counter-terrorism 01:08:50.240 |
organization with AI to find coordinated inauthentic. 01:08:56.080 |
but I think that that's part of where we are. 01:09:04.120 |
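One small signal behind finding coordinated inauthentic networks is clustering accounts that behave identically; the sketch below links accounts that share multiple identical posts and reports the resulting clusters. Account names and posts are invented, and real detection relies on many more signals than this.

```python
# Minimal sketch: flag clusters of accounts that repeatedly post the same
# content, a toy stand-in for coordinated-behavior detection.
import itertools
import networkx as nx

posts = {
    "acct_1": {"msg aaa", "msg bbb"},
    "acct_2": {"msg aaa", "msg bbb"},
    "acct_3": {"msg ccc"},
}

g = nx.Graph()
g.add_nodes_from(posts)
for a, b in itertools.combinations(posts, 2):
    if len(posts[a] & posts[b]) > 1:   # edge when accounts share >1 post
        g.add_edge(a, b)

# Connected components with more than one account are candidate clusters.
clusters = [c for c in nx.connected_components(g) if len(c) > 1]
print(clusters)
```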
this is actually where I'd guess most of the challenge 01:09:09.040 |
around AI will be for the foreseeable future. 01:09:12.800 |
I think that there's a lot of debate around things like, 01:09:15.880 |
is this going to create existential risk to humanity? 01:09:29.520 |
it's just really unclear to me that the current technology 01:09:39.280 |
But that doesn't mean that there's no danger. 01:09:48.400 |
And we just need to make sure that we really focus 01:09:57.640 |
- Well, you can basically use large language models 01:10:00.200 |
as an assistant of how to cause harm on social networks. 01:10:05.360 |
Meta has very impressive coordinated inauthentic account 01:10:16.080 |
How do I do the coordinated inauthentic account creation 01:10:25.400 |
And basically there's this kind of part of it. 01:10:33.000 |
Perhaps you can comment on your approach to it, 01:10:41.120 |
to help you coordinate harm in all the full definition 01:10:51.320 |
is basically when we ship AIs across our products, 01:10:56.320 |
a lot of what we're trying to make sure is that 01:11:02.160 |
you can't ask it to help you commit a crime, right? 01:11:07.280 |
So I think training it to kind of understand that 01:11:18.800 |
but just making it so that this isn't an easier way 01:11:37.960 |
And for these, what we see is like for nation states 01:11:46.720 |
these large coordinated inauthentic networks 01:11:53.160 |
at some point when we would just make it very difficult, 01:11:55.040 |
they do just try to use other services instead, right? 01:11:58.040 |
It's just like, if you can make it more expensive 01:12:08.400 |
It's not like, okay, are you ever gonna be perfect 01:12:11.240 |
at finding every adversary who tries to attack you? 01:12:14.440 |
It's, I mean, you try to get as close to that as possible, 01:12:21.720 |
it's just inefficient for them to go after that. 01:12:26.480 |
of what is and isn't harm, what is and isn't misinformation. 01:12:34.760 |
I remember asking GPT about whether the virus 01:12:39.560 |
leaked from a lab or not, and the answer provided 01:12:42.840 |
was a very nuanced one, and a well-cited one, 01:12:47.400 |
almost, dare I say, well-thought-out one, balanced. 01:12:56.700 |
Wikipedia does a good job on that particular thing too, 01:13:00.560 |
but from pressures from governments and institutions, 01:13:21.000 |
through the process of moderation of AI systems. 01:13:26.520 |
- I really agree with what you're pushing on. 01:13:51.480 |
and try to push off as much as possible today. 01:14:00.120 |
We went through a bunch of these types of harms before. 01:14:05.040 |
But then I do think that you get to a set of harms 01:14:18.240 |
because there are things that are kind of obviously false, 01:14:28.960 |
So it's like, all right, are you gonna censor someone 01:14:37.360 |
I think that there's a bunch of real kind of issues 01:14:57.120 |
And unfortunately, I think a lot of the kind of establishment 01:15:03.960 |
and asked for a bunch of things to be censored 01:15:06.600 |
that in retrospect ended up being more debatable or true. 01:15:15.680 |
And so I do think that the questions around how to manage 01:15:28.720 |
I think it's best to generally boil things down 01:15:41.280 |
is this going to potentially lead to physical harm 01:15:46.280 |
for someone and kind of think about it in that sense. 01:15:51.400 |
I think people just have different preferences 01:15:53.000 |
on how they want things to be flagged for them. 01:15:57.720 |
to kind of have a flag on something that says, 01:16:00.320 |
hey, a fact checker thinks that this might be false. 01:16:02.400 |
Or I think Twitter's community notes implementation 01:16:09.680 |
It's like just kind of discretionarily adding a flag 01:16:14.600 |
But it's not trying to take down the information or not. 01:16:17.440 |
I think that you want to reserve the kind of censorship 01:16:21.000 |
of content to things that are of known categories 01:16:31.640 |
but there's other topics where there's just deep disagreement 01:16:36.640 |
fueled by politics about what is and isn't harmful. 01:16:41.320 |
There's even just the degree to which the virus is harmful 01:17:03.720 |
has very different views from another part of the world? 01:17:07.000 |
Is there a way for Meta to stay out of the moderation of this? 01:17:19.120 |
But I think we should be clear about which of these things 01:17:28.000 |
in terms of how people want information flagged. 01:17:30.320 |
Right, so we did recently introduce something 01:17:43.440 |
All right, well, you can turn that off if you want, 01:17:49.840 |
like it's inciting violence or something like that, 01:17:53.040 |
So I think that you wanna honor people's preferences 01:17:58.440 |
But look, I mean, this is really difficult stuff. 01:18:01.960 |
I think it's really hard to know where to draw the line 01:18:20.520 |
and scrutinizing frameworks that have been long held. 01:18:24.920 |
And every once in a while, you throw out something 01:18:28.000 |
that was working for a very long period of time, 01:18:35.880 |
doesn't mean that you should not try to give people 01:18:54.080 |
I wanna ask a question how to withstand that pressure 01:18:58.320 |
for a world where AI moderation starts becoming a thing too. 01:19:03.320 |
So what's Meta's approach to resist the pressure 01:19:19.520 |
I mean, I think we basically have the principles around, 01:19:27.600 |
but we have developed clear categories of things 01:19:32.600 |
that we think are wrong that we don't want on our services, 01:19:40.800 |
So then the question is, okay, what do you do 01:19:43.360 |
when a government says that they don't want something 01:19:55.320 |
Because on the one hand, if there's a, you know, 01:20:00.320 |
and people around the world just have different values 01:20:03.160 |
in different places, then should we as a, you know, 01:20:07.560 |
California based company tell them that something 01:20:16.520 |
Actually, like that we need to be able to express that? 01:20:20.920 |
I mean, I think that there's a certain amount 01:20:28.840 |
where, you know, it's like a little more autocratic 01:20:42.560 |
And it's not necessarily against their culture, 01:20:47.240 |
is just trying to push in a certain direction. 01:21:23.400 |
is on requests for access to information, right? 01:21:28.280 |
if you get told that you can't say something, 01:21:44.000 |
in a way that seems like it would be unlawful 01:21:48.480 |
in our country exposes people to real physical harm. 01:21:59.360 |
And then, so that flows through like all of our policies 01:22:11.840 |
I'd say a bunch of the stuff starts a lot higher up 01:22:15.680 |
in the decision of where do we put data centers. 01:22:18.400 |
Then there are a lot of countries where, you know, 01:22:21.920 |
we may have a lot of people using the service in a place. 01:22:24.560 |
It might be, you know, good for the service in some ways, 01:22:28.240 |
good for those people if we could reduce the latency 01:22:34.440 |
But, you know, for whatever reason, we just feel like, 01:22:36.680 |
hey, this government does not have a good track record 01:22:40.000 |
on basically not trying to get access to people's data. 01:22:49.680 |
and the government wants to get access to people's data, 01:22:52.560 |
then, you know, they do at the end of the day 01:22:54.840 |
have the option of having people show up with guns 01:22:59.040 |
So I think that there's like a lot of decisions 01:23:01.400 |
that go into like how you architect the systems 01:23:03.800 |
years in advance of these actual confrontations 01:23:21.720 |
And I think that that ends up being a more critical thing 01:23:24.760 |
to defend against governments than, you know, 01:23:30.240 |
has a different view of what should be acceptable speech 01:23:34.200 |
especially if it's a democratically elected government, 01:23:36.360 |
and, you know, it's, then I think that there's 01:23:38.720 |
a certain amount of deference that you should have to that. 01:23:41.240 |
- So that's speaking more to the direct harm that's possible 01:23:49.460 |
to the more nuanced kind of pressure to censor, 01:23:54.800 |
but pressure to censor from political entities, 01:23:57.840 |
which has kind of received quite a bit of attention 01:24:08.580 |
what have you learned from the kind of pressure 01:24:14.280 |
from US government agencies that was seen in Twitter files? 01:24:18.540 |
And what do you do with that kind of pressure? 01:24:27.840 |
to know exactly what happened in each of these cases. 01:24:38.320 |
where agencies or different folks will just say, 01:24:48.600 |
It's not really pressure as much as it is just, 01:24:51.320 |
you know, flagging something that our security systems 01:24:57.680 |
I get how some people could think of it as that, 01:25:15.360 |
- Well, so you don't feel like there'll be consequences 01:25:18.880 |
if, you know, anybody, the CIA, the FBI, a political party, 01:25:26.480 |
of high powerful political figures write emails. 01:25:34.320 |
- I guess what I say is there's so much pressure 01:25:36.080 |
from all sides that I'm not sure that any specific thing 01:25:44.240 |
And there are obviously a lot of people who think 01:25:55.440 |
There are, as you say, all kinds of different groups 01:26:02.320 |
and politicians themselves, there's the agencies, 01:26:18.120 |
So it's just a very, it's a very active debate, 01:26:23.920 |
I mean, these kind of questions get to really 01:26:39.200 |
they haven't yet been hardened into a single truth, 01:26:48.400 |
Maybe in a few hundred years, everyone will look back 01:27:11.360 |
and just say, "Hey, this is something that I wanna flag 01:27:26.560 |
If you don't, then I'm gonna try to make your life difficult 01:27:37.520 |
So I just think that there are so many people 01:27:40.240 |
in different forces that are trying to apply pressure 01:27:53.680 |
are gonna be upset no matter where you come out on that. 01:28:03.660 |
And now with AI, where the freedom might apply to them, 01:28:12.920 |
not just to the humans, but to the personalized agents 01:28:31.520 |
that gives people tools to express themselves 01:28:33.400 |
unless you think that people expressing themselves 01:28:36.620 |
So I didn't get into this to try to prevent people 01:29:04.340 |
It's like, you're still not allowed to do things 01:29:11.380 |
So there are those, but then there's also a very active core 01:29:18.720 |
where some people may think that something is true or false. 01:29:33.160 |
But one of the lessons that I feel like I've learned 01:29:44.960 |
the best way to handle this stuff more practically 01:30:00.720 |
Is the person basically just having a repeat behavior 01:30:16.840 |
health of the community, health of the state. 01:30:19.320 |
It's tricky though because like how do you know 01:30:21.640 |
there could be people that have a very controversial 01:30:27.720 |
long-term effect on the health of the community 01:30:30.040 |
because it challenges the community to think. 01:30:33.360 |
Yeah, no, I think you wanna be careful about that. 01:30:36.280 |
I'm not sure I'm expressing this very clearly 01:30:38.680 |
because I certainly agree with your point there. 01:30:42.520 |
And my point isn't that we should not have people 01:30:46.960 |
on our services that are being controversial. 01:30:51.600 |
It's that often I think it's not just looking 01:30:59.600 |
that it's most effective to handle this stuff. 01:31:05.160 |
specific binary decisions of kind of this is allowed 01:31:09.240 |
I mean, we talked about, you know, it's fact-checking 01:31:15.280 |
It's like, it's not a question of is this allowed or not? 01:31:18.520 |
It's just a question of adding more context to the thing. 01:31:22.640 |
So in the context of AI, which is what you were asking about, 01:31:26.200 |
I think there are lots of ways that an AI can be helpful. 01:31:29.680 |
With an AI, it's less about censorship, right? 01:31:33.920 |
Because it's more about what is the most productive answer 01:31:41.520 |
that I was reviewing with the team is someone asked, 01:31:59.800 |
Right, it's like basically just like shut it down 01:32:01.280 |
immediately, which I think is some of what you see. 01:32:21.280 |
And it's like, okay, that's actually a respectful 01:32:24.840 |
And I may have not known that specific thing. 01:32:27.720 |
And so there are different ways to handle this 01:32:31.160 |
that I think kind of, you can either assume good intent, 01:32:40.280 |
Or you could like kind of come at it as like, 01:32:42.200 |
no, I need to shut this thing down immediately. 01:32:44.040 |
Right, it's like, I just am not gonna talk about this. 01:32:46.720 |
And there may be times where you need to do that. 01:32:55.240 |
informative approach where you generally assume 01:32:57.880 |
good intent from people is probably a better balance 01:33:04.680 |
You're not gonna be able to do that for everything. 01:33:06.160 |
But you were kind of asking about how I approach this 01:33:09.640 |
and I'm thinking about this as it relates to AI. 01:33:22.040 |
- I have to ask, there's rumors you might be working 01:33:29.000 |
that might be a competitor to Twitter, code named P92. 01:33:32.880 |
Is there something you can say about those rumors? 01:33:40.120 |
You know, I've always thought that sort of a text-based 01:33:54.160 |
has not lived up to what I would have thought 01:34:00.840 |
And that's probably one of the reasons why he bought it. 01:34:11.400 |
And one that I think is potentially interesting 01:34:18.720 |
and you're seeing that a little bit with Bluesky. 01:34:21.160 |
And I think that it's possible that something 01:34:25.880 |
that melds some of those ideas with the graph 01:34:30.240 |
and identity system that people have already cultivated 01:34:32.520 |
on Instagram could be a kind of very welcome contribution 01:34:46.680 |
and this is certainly one that I think could be interesting. 01:35:00.200 |
- But I, and look, I mean, I don't know exactly 01:35:15.440 |
to just ask this question and throw Twitter into the mix. 01:35:22.080 |
that is Facebook, that is Instagram, that is WhatsApp, 01:35:26.640 |
and then think of a text-based social network, 01:35:37.920 |
And what are the needs, what are the use cases, 01:35:40.640 |
what are the products, what is the aspect of them 01:35:45.760 |
and a connection between humans that is somehow distinct? 01:35:56.720 |
So it, I think, ends up being a good format for discussion, 01:36:09.000 |
or other networks that are focused on one type of content, 01:36:21.360 |
as the primary thing, and there's comments after that. 01:36:35.800 |
is that the comments can also be first class. 01:36:39.280 |
And that makes it so that conversations can just filter 01:36:48.840 |
that are really awesome about the experience. 01:36:50.840 |
It just always struck me, I always thought that, 01:36:54.120 |
you know, Twitter should have a billion people using it, 01:37:07.600 |
and it's very hard to diagnose this stuff from the outside. 01:37:20.520 |
- Well, I just think it's hard to build these companies. 01:37:23.000 |
So it's not that every idea automatically goes 01:37:30.880 |
coupled with good execution, should get there. 01:37:34.160 |
But I mean, look, we hit certain thresholds over time 01:37:37.800 |
where, you know, we kind of plateaued early on 01:37:41.800 |
and it wasn't clear that we were ever gonna reach 01:37:48.360 |
internationalization and helping the service grow 01:37:56.480 |
And helping people basically spread the service 01:38:01.600 |
That was one of the things, once we got very good at that, 01:38:03.800 |
that was one of the things that made me feel like, 01:38:08.760 |
then I felt like we could help grow that quickly. 01:38:11.400 |
And I think that that's sort of been a core competence 01:38:13.520 |
that we've developed and been able to execute on. 01:38:17.040 |
I mean, ByteDance obviously have done a very good job 01:38:19.720 |
with TikTok and have reached more than a billion people 01:38:23.040 |
there, but it's certainly not automatic, right? 01:38:26.640 |
I think you need a certain level of execution 01:38:39.560 |
But they just haven't kind of cracked that piece yet. 01:38:45.320 |
so that you're seeing all these other things, 01:39:07.120 |
And if it's like the stack is more open throughout. 01:39:14.760 |
To some degree, end-to-end encryption on WhatsApp, 01:39:26.840 |
I mean, the server part of this that we run for WhatsApp 01:39:31.840 |
is relatively very thin compared to what we do 01:39:38.400 |
in how the apps kind of negotiate with each other 01:39:40.880 |
to pass information in a fully end-to-end encrypted way. 01:39:44.400 |
But I don't know, I think that that is a good model. 01:39:48.040 |
I think it puts more power in individuals' hands 01:39:55.440 |
I mean, I think that it's hard from the outside 01:40:08.160 |
but I don't really know where this one's gonna go. 01:40:21.040 |
So let me ask, in the hope and the name of camaraderie, 01:40:26.040 |
what do you think Elon is doing well with Twitter? 01:40:29.040 |
And what, as a person who has run for a long time 01:40:33.040 |
you know, social networks, Facebook, Instagram, WhatsApp, 01:40:33.040 |
What can he improve on that text-based social network? 01:40:53.900 |
is that everyone has opinions on what you should do 01:41:04.600 |
And I often think that some of the discourse around us 01:41:18.480 |
that there's certain things that we're seeing internally 01:41:26.360 |
I think that, you know, I do think that Elon led a push 01:41:48.800 |
with exactly all the tactics and how we did that. 01:41:53.240 |
every leader has their own style for if they, 01:41:56.440 |
you know, if you need to make dramatic changes for that, 01:41:59.880 |
But a lot of the specific principles that he pushed on 01:42:04.520 |
around basically trying to make the organization 01:42:09.360 |
more technical, around decreasing the distance 01:42:19.960 |
I think that those were generally good changes. 01:42:23.280 |
And I also think that it was probably good that he made them, 01:42:28.640 |
because my sense is that there were a lot of other people who thought similar things 01:42:33.240 |
but who may have been a little shy about doing them. 01:42:45.280 |
And in terms of how people have reacted to the things that we've done, 01:42:47.200 |
you know, what I've heard from a lot of folks 01:42:48.800 |
is just, hey, you know, when someone like you makes 01:42:54.360 |
the organizational changes that I wanted to make, 01:43:09.200 |
that, you know, hopefully can be good for the industry 01:43:13.600 |
and make all these companies more productive over time. 01:43:16.080 |
So that was one where I think he was 01:43:19.480 |
quite ahead of a bunch of the other companies on. 01:43:26.720 |
you know, again, from the outside, very hard to know. 01:43:31.480 |
I don't think it's like my place to opine on that. 01:43:35.000 |
And you asked for a positive framing of the question 01:43:46.120 |
led me and I think a lot of other folks in the industry 01:43:49.800 |
to think about, hey, are we kind of doing this 01:43:54.680 |
Like, can we, like, could we make our companies better 01:43:59.400 |
Well, the two of you are at the top of the world, 01:44:04.000 |
and I wish there was more camaraderie and kindness both ways, 01:44:09.000 |
more love in the world, because love is the answer. 01:44:19.200 |
You recently announced multiple stages of layoffs at Meta. 01:44:22.580 |
What are the most painful aspects of this process, 01:44:27.600 |
given the painful effects it has on individuals? 01:44:35.920 |
- Well, you basically have a significant number of people 01:44:41.280 |
for whom this is just not the end of their time at Meta that they would have chosen. 01:44:51.240 |
And I mean, running a company, people are constantly joining 01:44:57.640 |
and leaving the company to go in different directions, 01:45:03.880 |
but layoffs are uniquely challenging and tough 01:45:13.040 |
because people are being let go for reasons that aren't connected to their own performance 01:45:17.600 |
or the culture not being a fit at that point. 01:45:22.080 |
It's really just, it's a kind of strategy decision 01:45:33.760 |
I mean, especially on the changes that we made this year, 01:45:40.600 |
where I wanted us to become a stronger technology company 01:45:44.480 |
with more of a focus on building more technical teams. 01:45:56.080 |
And I wanted to make sure that we had a stable position 01:46:01.720 |
to keep investing in these long-term ambitious projects that we have 01:46:07.680 |
and continuing to push forward all the metaverse work. 01:46:10.320 |
And in order to do that, in light of the pretty big thrash we'd been through, 01:46:28.880 |
I don't know, I just, I decided that we needed to get leaner. 01:46:36.720 |
But it's one thing to decide that; the harder question is, 01:46:40.360 |
how do you execute that as compassionately as possible? 01:46:53.440 |
We've certainly spent a lot of time just thinking that through. 01:47:05.200 |
- And you mentioned there's an increased focus 01:47:08.480 |
on engineering, on tech, so technology teams, 01:47:25.680 |
the people who are building things, the technical teams. 01:47:44.880 |
you have individual contributor engineers reporting 01:47:49.200 |
which I think is important because you're running a company. 01:47:56.400 |
We talked about this a bit earlier in terms of 01:47:59.480 |
kind of the joy of, and the feedback that you get doing 01:48:07.360 |
But I actually think part of the art of running a company 01:48:18.480 |
I kind of think that every layer that you have 01:48:24.360 |
might not need to get reviewed before it goes to you. 01:48:27.280 |
And I think, you know, making it so that the people 01:48:29.160 |
doing the work are as close to you as possible is important. 01:48:38.320 |
You also need support functions that are not doing 01:48:45.400 |
the direct technical work, but I think having them in the right proportion matters. 01:48:48.400 |
And if you try to do good work, but you don't have the right marketing 01:48:56.240 |
or the right legal advice, like you're gonna, you know, run into problems. 01:49:15.120 |
Maybe you're too conservative or you move a lot slower 01:49:27.640 |
- Yeah, no, but that's, it's a constant equilibrium 01:49:36.200 |
- Well, I mean, I believe a lot in management. 01:49:39.400 |
Some people think that it doesn't matter as much, but look, I mean, 01:49:41.480 |
we have a lot of younger people at the company 01:49:43.680 |
for whom this is their first job and, you know, 01:49:46.240 |
people need to grow and learn in their career. 01:49:48.320 |
And I think that all that stuff is important, 01:49:50.240 |
but here's one mathematical way to look at it. 01:49:58.480 |
I asked our people team, what was the average number 01:50:03.840 |
of direct reports our managers have? And I think it was around three, maybe three to four, 01:50:10.080 |
and I was like, wow, like a manager can, you know, typically handle more like seven or eight people. 01:50:18.240 |
But there was a reason why it was closer to three. 01:50:20.120 |
It was because we were growing so quickly, right? 01:50:22.720 |
And when you're hiring so many people so quickly, 01:50:32.560 |
you want managers to have capacity to onboard them, so you may not wanna have them have seven direct reports to start with. 01:50:36.880 |
But the thing is going forward, I don't want us 01:50:39.800 |
to actually hire that many people that quickly, right? 01:50:42.640 |
So I actually think we'll just do better work 01:50:44.960 |
if we have more constraints and we're, you know, leaner. 01:50:53.840 |
And if we're not adding as many people, it doesn't make sense to have a lot of managers who have extra capacity. 01:50:57.760 |
So now we can sort of defragment the organization 01:51:01.640 |
and get to a place where the average is closer to seven or eight, 01:51:10.880 |
which, you know, decreases the latency on information 01:51:19.480 |
it doesn't kind of undervalue the importance of management 01:51:23.000 |
and the kind of the personal growth or coaching 01:51:28.000 |
that people need in order to do their jobs well. 01:51:32.040 |
we're just not gonna hire as many people going forward. 01:51:34.200 |
So I think that you need a different structure. 01:51:36.360 |
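As a rough illustration of the span-of-control math above, here is a small sketch in Python. In a simple tree model, raising the average number of direct reports from around three to around seven removes several layers of management for the same headcount, which is the "decreases the latency on information" point. The headcount and spans below are illustrative assumptions, not Meta's actual figures.

```python
# Sketch: how many management layers a simple tree needs for a given headcount,
# as a function of the average span of control. Numbers are illustrative only.
import math

def layers_needed(individual_contributors: int, span: int) -> int:
    """Layers of management required until everyone rolls up to a single person."""
    layers, groups = 0, individual_contributors
    while groups > 1:
        groups = math.ceil(groups / span)  # each manager covers `span` reports
        layers += 1
    return layers

for span in (3, 7):
    print(f"average span {span}: {layers_needed(10_000, span)} layers for 10,000 ICs")
# average span 3: 9 layers for 10,000 ICs
# average span 7: 5 layers for 10,000 ICs
```

Each removed layer is one fewer hop that information has to travel between the people doing the work and the top of the company.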
- This whole incredible hierarchy and network of humans 01:51:48.400 |
How do you hire great, now with the focus on engineering 01:51:51.760 |
and technical teams, how do you hire great engineers 01:52:05.680 |
I think attract is work on cool stuff and have a vision. 01:52:11.040 |
- I think that's right, and have a track record 01:52:12.600 |
that people think you're actually gonna be able to do it. 01:52:14.080 |
- Yeah, to me, the select seems like more of the art form, 01:52:23.880 |
and can get integrated the most effectively and so on. 01:52:34.400 |
through the paperwork and all this kind of stuff, 01:52:42.880 |
- So there are lots of different cuts on this question. 01:52:46.480 |
I mean, I think when an organization is growing quickly, 01:52:53.440 |
is do I hire this person who's in front of me now 01:52:57.520 |
or do I hold out to get someone who's even better? 01:53:01.400 |
And the heuristic that I always focused on for myself 01:53:06.400 |
and my own kind of direct hiring that I think works 01:53:13.800 |
is that you should only hire someone to be on your team if you would be happy working for them, 01:53:22.080 |
and that's basically how I've tried to build my team. 01:53:24.680 |
I'm not in a rush to not be running the company, 01:53:30.360 |
where one of these other folks was running the company, I'd be happy to work for them. 01:53:41.840 |
And I think that that gives you some rubric: if you hold that bar, 01:53:51.760 |
then you'll have a pretty strong organization. 01:53:54.040 |
Okay, in an organization that's not growing as quickly, 01:53:59.040 |
the questions might be a little different though. 01:54:01.400 |
And there, you asked about young people specifically, 01:54:12.560 |
but we have a much better sense of who the best people are 01:54:16.640 |
who have interned at the company for a couple of months 01:54:25.160 |
I mean, obviously the in-person feel that you get 01:54:27.320 |
from someone probably tells you more than the resume, 01:54:33.600 |
but a lot of the stuff really just is cultural. 01:54:38.480 |
and on different teams, even within a specific company. 01:54:47.480 |
So working with someone for even a short period of time over a summer 01:54:47.480 |
is basically a very useful sorting function, 01:54:59.120 |
- Yeah, I mean, I think it's a thing that's here to stay, 01:55:20.040 |
but I think that there's value in both, right? 01:55:27.320 |
I wouldn't wanna run a fully remote company yet, at least. 01:55:31.400 |
I think there's an asterisk on that, which is that- 01:55:34.200 |
- Some of the other stuff you're working on, yeah. 01:55:39.880 |
and the ability to be, to feel like you're truly present. 01:55:53.880 |
But, I don't know, today it still does, right? 01:56:04.520 |
there are all these people who have special skills 01:56:07.000 |
and wanna live in a place where we don't have an office. 01:56:09.880 |
Are we better off having them at the company? 01:56:12.760 |
And are a lot of people who work at the company 01:56:28.000 |
And we've done all these studies that show it's like, okay, 01:56:32.400 |
But, you know, for the new folks who are joining 01:56:36.480 |
and for people who are earlier in their career 01:56:40.480 |
and need to learn how to solve certain problems 01:56:57.440 |
I don't know, it's like we just haven't replaced that in-person experience yet. 01:57:15.640 |
- It seems like a lot will be possible in the next few years in virtual reality and mixed reality 01:57:18.960 |
with what's possible with high-resolution scans. 01:57:21.840 |
I mean, I, as a person who loves in-person interaction, 01:57:29.560 |
it would be incredible to achieve that level of realism. 01:58:07.240 |
Let me ask you about the Quest 3. What features are you most excited about there? 01:58:22.720 |
you can think about virtual reality as you have the headset 01:58:29.000 |
and you're basically immersed in a different world. 01:58:32.160 |
Mixed reality is where you see the physical world around you with virtual elements blended in, 01:58:32.160 |
whether that's a game where zombies are coming out through the wall and you need to shoot them, 01:58:44.840 |
Or we're playing Dungeons and Dragons or some board game 01:58:50.760 |
and we just have a virtual version of the board on the table in front of us. 01:58:59.800 |
I think mixed reality is the next big capability on top of virtual reality. 01:59:05.520 |
I have to say as a person who experienced it today 01:59:08.440 |
with zombies, having a full awareness of the environment 01:59:13.440 |
and integrating that environment in the way they run at you 01:59:20.560 |
the pass through is really, really, really well done. 01:59:23.400 |
And the fact that it's only $500 is really, it's well done. 01:59:33.920 |
- Yeah, we put a lot of work into making the device both as good as possible and as affordable as possible, 01:59:33.920 |
because a big part of our mission and ethos here 01:59:43.240 |
is we want people to be able to connect with each other. 01:59:46.320 |
We want to reach and we want to serve a lot of people. 01:59:49.120 |
We want to bring this technology to everyone. 02:00:08.280 |
So we're not trying to put an unlimited amount of hardware in this. 02:00:12.960 |
We're trying to build something that works really well, but in an affordable package. 02:00:23.000 |
Quest Pro started at $1,500 and now we've lowered the price to a thousand, 02:00:25.240 |
but in a lot of ways, the mixed reality in Quest 3 is better 02:00:31.680 |
than what we were able to deliver in Quest Pro. 02:00:33.480 |
So I'm really proud of where we are with Quest 3 on that. 02:00:38.240 |
It's going to work with all of the virtual reality titles 02:00:42.840 |
So people who want to play fully immersive games, 02:00:45.240 |
social experiences, fitness, all that stuff will work, 02:00:54.240 |
because sometimes you want to be super immersed in a game, 02:01:00.440 |
especially when you're moving around, if you're active, 02:01:11.120 |
So that way you know that I'm not going to punch a lamp 02:01:14.480 |
And I don't know if you got to play with this experience, 02:01:23.800 |
we're like in a conference room or your living room 02:01:26.560 |
and you have the guy there and you're boxing him 02:01:42.480 |
you're seeing them also, they can cheer you on, 02:01:58.800 |
This becomes integrated into your life, into your world. 02:02:03.560 |
- Yeah, and it, so I think it's a completely new capability 02:02:09.280 |
And I think it'll also just make the experience 02:02:13.120 |
who didn't want to have only fully immersive experiences. 02:02:16.840 |
I think if you want experiences where you're grounded in, 02:02:19.280 |
you know, your living room and the physical world around you, 02:02:35.520 |
- Just the zombie one, but it's still outside. 02:02:39.360 |
to your living room where zombies come out of, 02:02:41.000 |
but yes, in the context of that game, it's yeah, yeah. 02:02:43.920 |
- I enjoyed it 'cause you could see the nature outside. 02:02:47.560 |
And me as a person that doesn't have windows, 02:02:56.440 |
I know it's a zombie game, but there's a zen nature, 02:03:09.040 |
There will probably be better, more zen ways to do that 02:03:16.160 |
of sort of having your physical environment on pass-through, 02:03:20.480 |
but then being able to bring in different elements, 02:03:46.260 |
of basically you just have a kind of a hologram 02:03:50.060 |
will hopefully apply once we get the AR glasses too. 02:03:53.560 |
Now that's got its own whole set of challenges and it's- 02:04:01.720 |
And the other thing that I think is good about it, 02:04:03.360 |
yeah, so mixed reality was the first big thing, and the second is that it's just a more powerful headset. 02:04:10.160 |
It's, I mean, it's got 2X the graphics processing power, 02:04:13.220 |
40% sharper screens, 40% thinner, more comfortable, 02:04:18.880 |
better strap architecture, all this stuff that, 02:04:24.040 |
it's like all the content that you might've played 02:04:25.880 |
in Quest 2 is just gonna get sharper automatically on Quest 3. 02:04:29.000 |
So it's, I think people are really gonna like it. 02:04:40.020 |
- Apple announced a new headset called Vision Pro for $3,500, available in early 2024. What do you think about it? 02:04:48.640 |
- Well, I saw the materials when they launched. 02:04:51.800 |
I haven't gotten a chance to play with it yet. 02:04:53.800 |
So kind of take everything with a grain of salt, 02:04:58.980 |
I mean, first, you know, I do think that this is 02:05:03.980 |
a certain level of validation for the category, right? 02:05:09.940 |
Where, you know, we were the primary folks out there 02:05:13.520 |
before saying, hey, I think that this, you know, 02:05:16.800 |
virtual reality, augmented reality, mixed reality, 02:05:19.000 |
this is gonna be a big part of the next computing platform. 02:05:21.960 |
I think having Apple come in and share that vision 02:05:28.240 |
will make a lot of people who are fans of their products more open to the category too. 02:05:36.400 |
And then, you know, of course the $3,500 price, 02:05:45.000 |
on the one hand maybe that makes sense for all the stuff that they're trying to pack in there. 02:05:47.200 |
On the other hand, a lot of people aren't gonna find that 02:05:51.240 |
So I think that there's a chance that them coming in 02:05:53.840 |
actually increases demand for the overall space 02:05:57.600 |
and that Quest 3 is actually the primary beneficiary of that, 02:06:03.880 |
because a bunch of people will say, hey, you know, I'm gonna give this another look, 02:06:09.280 |
now that I understand maybe what mixed reality is more. 02:06:18.280 |
I think that that's, and, you know, in our own way, 02:06:20.920 |
there are a lot of features that we have that they don't. 02:06:24.280 |
So I think that that's gonna be a very interesting dynamic. 02:06:36.960 |
Apple has always, you know, I think focused on building premium devices at the higher end of the market. 02:06:55.440 |
You know, we've sold tens of millions of Quest devices. 02:07:10.600 |
So just in terms of like what you kind of expect on volume, 02:07:10.600 |
Quest is gonna be the primary thing that people in the market use for the foreseeable future, and then over the long term 02:07:19.040 |
it's up to the companies to see how well we each executed 02:07:31.320 |
But we kind of come at it from different places. 02:07:33.000 |
We're very focused on social interaction, communication, 02:07:40.800 |
So there's fitness, there's gaming, there are those things. 02:07:43.800 |
You know, whereas I think a lot of the use cases that they showed were more around, 02:07:53.640 |
you know, people looking at screens, which are great. 02:07:56.800 |
I think that you will replace your laptop over time with a headset. But in terms of 02:08:03.440 |
the different use cases that the companies are going after, 02:08:07.280 |
they're a bit different for where we are right now. 02:08:10.360 |
- Yeah, so gaming wasn't a big part of the presentation, 02:08:20.960 |
It was interesting to see it missing in the presentation. 02:08:24.400 |
- Well, I mean, look, there are certain design trade-offs 02:08:30.800 |
I think they made this point about not wanting to have controllers, and 02:08:35.360 |
there's a certain elegance about just being able 02:08:37.040 |
to navigate the system with eye gaze and hand tracking. 02:08:41.120 |
And by the way, you'll be able to just navigate Quest 02:08:43.920 |
with your hands too, if that's what you want. 02:08:51.760 |
We use computer vision to detect certain aspects of the hand, 02:09:05.600 |
And one of the demos that we have is basically 02:09:09.720 |
an MR experience teaching you how to play piano 02:09:14.760 |
it's just all hands, no controllers. 02:09:14.760 |
But for some experiences, having a controller allows you to have a more tactile feel 02:09:20.480 |
and allows you to capture fine motor movement 02:09:30.280 |
much more precisely than what you can do with hands alone. 02:09:36.000 |
So again, I think there are certain questions 02:09:38.520 |
which are just around what use cases are you optimizing for? 02:09:45.760 |
If those are your use cases, then I think that you wanna design the system the way we have, with a focus 02:09:52.160 |
on kind of social experiences, entertainment experiences. 02:09:56.960 |
Whereas if what you want is to make sure that the text 02:10:01.560 |
that you read on a screen is as crisp as possible, 02:10:04.600 |
then you need to make the design and cost trade-offs 02:10:08.080 |
that they made that lead you to making a $3,500 device. 02:10:12.320 |
So I think that there is a use case for that for sure, 02:10:12.320 |
but the way I'd put it is we've basically made different design trade-offs 02:10:17.360 |
to get to the use cases that we're trying to serve. 02:10:24.560 |
- There's a lot of other stuff I'd love to talk to you 02:10:27.520 |
about the Metaverse, especially the Codec Avatars, 02:10:41.420 |
well, I think there's a lot more to show off in that regard. 02:10:55.640 |
But let me ask about the concerns that folks like Eliezer Yudkowsky and others worry about, 02:11:00.640 |
the existential, the serious threats of AI. 02:11:09.880 |
Do you worry about the existential risks of AI 02:11:14.680 |
as Eliezer does, about the alignment problem, 02:11:20.640 |
- Anytime where there's a number of serious people 02:11:23.120 |
who are raising a concern that is that existential 02:11:31.000 |
So I've spent quite a bit of time thinking about it 02:11:35.160 |
The thing that I, where I basically have come out 02:11:42.200 |
on this for now is I do think that there are, 02:11:44.640 |
over time, going to be issues we need to think hard about. 02:12:01.760 |
But, and we spent a fair amount of time on this before, 02:12:11.720 |
there are concerns that already exist, right? 02:12:14.960 |
About people using AI tools to do harmful things 02:12:25.320 |
And that's going to be a pretty big set of challenges 02:12:34.320 |
regardless of whether there is an existential concern 02:12:51.600 |
and then not do as good of a job as we need to 02:12:59.440 |
as real risks that kind of manifest themselves 02:13:04.680 |
So for me, I've spent most of my time on that set of concerns. 02:13:04.680 |
And my view is that the size of models that we're talking about now 02:13:12.080 |
are just quite far from the super intelligence type concerns 02:13:22.200 |
But I think once we get a couple of steps closer to that, 02:13:27.920 |
there are going to be some novel risks and issues 02:13:32.520 |
about how we make sure that the systems are safe for sure. 02:13:40.160 |
I think in some of these debates around safety, 02:13:45.720 |
I think the concepts of intelligence and autonomy 02:13:55.120 |
get kind of conflated together. As an analogy, my view is 02:14:03.440 |
that you can scale something in intelligence quite far, 02:14:07.240 |
but that may not manifest the safety concerns people fear. 02:14:23.000 |
Take the neocortex: it's where our intelligence is, but it's not really calling the shots. 02:14:26.640 |
We have a much more primitive old brain structure 02:14:31.080 |
for which our neocortex, which is this powerful machinery 02:14:34.160 |
is basically just a kind of prediction and reasoning engine 02:14:38.080 |
to help it, our very simple old brain, 02:14:38.080 |
decide how to plan and do what it needs to do 02:14:44.640 |
in order to achieve these like very kind of basic impulses. 02:14:52.440 |
And I think that you can think about some of the development of AI along the same lines, 02:14:52.440 |
where just like our neocortex doesn't have free will 02:15:03.240 |
or autonomy, we might develop these wildly intelligent systems that don't either. 02:15:03.240 |
It's, I think that it's not out of the question 02:15:25.400 |
that very intelligent systems that have the capacity 02:15:28.160 |
to think will kind of act as sort of an extension of ourselves. 02:15:33.640 |
So I think my own view is that where we really need 02:15:37.520 |
to be careful is on the development of autonomy 02:15:44.280 |
because it's actually the case that relatively simple 02:15:49.280 |
and unintelligent things that have runaway autonomy can cause a lot of damage. 02:15:56.960 |
It's, I mean, it can be simple computer code 02:16:01.000 |
that just spreads itself and does a lot of harm, 02:16:06.880 |
And I just think that these are somewhat separable things that get conflated 02:16:15.680 |
when people talk about safety and responsibility. 02:16:23.720 |
And to me, if I were a policymaker or thinking about this, 02:16:28.720 |
I would really wanna think about that distinction 02:16:31.440 |
between these, because I think building intelligent systems is different from giving them runaway autonomy. 02:17:06.920 |
I love the distinction between intelligence and autonomy. 02:17:28.220 |
But let me ask about the people and companies that will develop the superintelligent systems. 02:17:31.360 |
That's a lot of power, and you are a man who's at the head of such a company. 02:17:37.960 |
Do you worry that that power will corrupt you? 02:17:51.800 |
- Part of how I think about it is that I don't think that the world will be best served 02:18:11.200 |
by any one company controlling this; it's when there are these sort of like unipolar advances 02:18:19.240 |
that you tend to get into kind of weird situations. 02:18:23.960 |
So this is one of the reasons why I think open source is valuable here. 02:18:31.520 |
And I think it's a categorically different question today 02:18:37.760 |
than it will be once we get closer to superintelligence, 02:18:51.680 |
where you really need to make sure that the system is as secure and safe as possible. 02:19:02.360 |
But I think that this is a pretty widely accepted thing 02:19:04.440 |
about open source: you have the code out there, 02:19:11.240 |
anyone can kind of mess with it in different ways, and the software gets hardened because of that. 02:19:29.520 |
And my hope is that that ends up being the way that this goes too, that it 02:19:39.120 |
both leads to a healthier development of the technology 02:19:43.120 |
and also leads to a more balanced distribution of the technology and the power that comes with it. 02:20:18.680 |
So my current view is that open sourcing models now is net positive. 02:20:28.720 |
Whether that stays true as we get closer to superintelligence, I've certainly not decided on that yet, 02:20:39.520 |
and we'll have a lot of time between now and then to make those decisions. 02:20:45.120 |
- What year do you think we'll have a superintelligence? 02:20:48.960 |
- I don't know, I mean, that's pure speculation. 02:20:52.080 |
I think it's very clear just taking a step back 02:20:55.320 |
that we had a big breakthrough in the last year, right? 02:21:05.280 |
And then I think the question is what happens from here? 02:21:17.440 |
If we just have like another breakthrough like that, 02:21:23.240 |
then maybe we get something that is just, like, so much more advanced. 02:21:28.240 |
And on that side of the argument, it's like, okay, maybe this is all coming sooner than we think. 02:21:45.800 |
And the other side, which is what we've historically seen 02:21:50.960 |
in that Gartner hype cycle, there's like the hype, 02:21:58.320 |
and then there's the trough of disillusionment after, 02:22:00.960 |
when like people think that there's a chance that, hey, 02:22:04.640 |
maybe we're about to get another big breakthrough, and often it doesn't come for a while, 02:22:19.800 |
until you figure out the, kind of the next big thing 02:22:25.600 |
And, but I think that the fact that we just had 02:22:28.680 |
this breakthrough sort of makes it so that we're at a point 02:22:32.720 |
of having very wide error bars on what happens next. 02:22:42.240 |
would suggest that we're not just going to stack breakthrough 02:22:48.120 |
on top of breakthrough every six months or something. 02:22:54.600 |
I would guess that it will take somewhat longer in between. 02:23:01.400 |
I tend to be pretty optimistic about breakthroughs too, so if you correct 02:23:05.800 |
for my normal optimism, then maybe it would be even longer than that. 02:23:10.320 |
But even within that, like I'm not even opining 02:23:13.480 |
on the question of how many breakthroughs are required 02:23:15.400 |
to get to general intelligence, because no one knows. 02:23:21.520 |
- It's interesting that such a small step resulted in such a big leap 02:23:25.920 |
in performance as experienced by human beings. 02:23:33.280 |
So as we stumble across this very open world of research, 02:23:43.960 |
we also don't know exactly at which stage things will take off, 02:23:59.720 |
or what year we're going to have super intelligence. 02:24:04.520 |
But is there something you could say about the timeline of AGI? 02:24:14.920 |
- Sure, so I still don't think I have any particular insight 02:24:21.800 |
on when a singular AI system that is a general intelligence will get created. 02:24:28.840 |
But one thing that people haven't really grappled with is that we do seem to have structures, like a company, 02:24:37.560 |
that exhibit greater than human intelligence already. 02:24:44.440 |
It acts as an entity, it has a singular brand. 02:24:57.760 |
But I think that that would be pretty bad if it didn't. 02:25:01.640 |
Another example that I think is even more removed 02:25:11.000 |
which is often implied in some of these questions, 02:25:13.480 |
is think about something like the stock market. 02:25:21.680 |
that probably millions of people around the world 02:25:30.800 |
But it's basically this organism or structure 02:25:44.480 |
And I do think that this notion that there are already 02:25:49.480 |
these cybernetic systems that are either melding 02:26:00.400 |
or melding the intelligence of multiple people 02:26:22.200 |
as long as we basically build these structures 02:26:27.200 |
So I don't know, I mean, that at least gives me hope 02:26:33.360 |
and I don't know how long exactly it's gonna be, 02:26:44.400 |
that is generally productive in advancing humanity. 02:26:53.240 |
to make that collective intelligence machinery 02:26:57.800 |
So it's not like AI is becoming super intelligent, 02:27:20.440 |
just making the whole collective intelligence 02:27:26.940 |
that are trained on human data anyway are becoming. 02:27:30.500 |
Maybe the collective intelligence of the human species 02:27:44.700 |
is basically coming from feedback from people, 02:27:51.860 |
It's not like a machine will just be able to go learn 02:27:55.860 |
all the stuff about how people think about stuff, 02:28:04.660 |
and that you're at the forefront of developing. 02:28:38.820 |
- Let me ask about the Murph Challenge. Were you happy with your time? - Yeah, I was pretty happy. - How crazy are you? 02:28:44.980 |
- Anything under 40 minutes I'm happy with. 02:28:49.580 |
- No, I think I've done it a little faster before, 02:28:53.260 |
Of my friends, I did not win on Memorial Day. 02:29:01.940 |
But just to clear up one thing that I think was, 02:29:04.860 |
I saw a bunch of questions about this on the internet. 02:29:07.060 |
There are multiple ways to do the Murph Challenge. 02:29:07.060 |
There's a partitioned version, where you do sets of pull-ups, push-ups, and squats together, 02:29:12.060 |
and an unpartitioned version, where you finish all the reps of one movement before the next. And obviously if you're doing them unpartitioned, 02:29:25.740 |
then it takes longer to get through the 100 pull-ups, 02:29:29.700 |
'cause anytime you're resting in between the pull-ups, you're not making progress on anything else. 02:29:33.020 |
I think it's a good way to honor Memorial Day. 02:30:00.900 |
and I just try to do it on Memorial Day each year. 02:30:05.660 |
I got my older daughters to do it with me this time. 02:30:12.940 |
because she sees me doing it with a weight vest. 02:30:16.340 |
should be using a weight vest to do pull-ups. 02:30:19.100 |
- Difficult question a parent must ask themselves, yes. 02:30:23.460 |
- I was like, maybe I can make you a very lightweight vest, 02:30:28.940 |
So she ran a quarter mile and then did 25 pull-ups, 02:31:12.700 |
given how much stuff you have going on in your life, 02:31:14.860 |
what's like the perfect exercise regimen for you 02:31:25.300 |
to keep yourself productive in your main line of work? 02:31:49.740 |
- The jiu-jitsu and MMA training, I do that, you know, three to four times a week, 02:31:49.740 |
and then the rest is like just cardio conditioning, strength building, mobility. 02:31:57.940 |
- So you try to do something physical every day? 02:32:08.460 |
But then I'll still try to like go for a walk or something. 02:32:48.540 |
and like physically and a lot of the sensations 02:32:55.620 |
And I think that that's a lot of what makes you a human 02:33:01.740 |
having that set of sensations and experiences around that 02:33:13.160 |
I think it's important for balance to kind of get out, 02:33:23.440 |
- Do you think AI, in order to become super intelligent, should also be embodied, have a body? 02:33:35.320 |
- I think that there's this assumption in that question 02:33:39.080 |
that intelligence should be kind of person-like. 02:34:06.640 |
My guess is there will be limits to what a system 02:34:10.840 |
that is purely an intelligence can understand 02:34:13.120 |
about the human condition without having the same, 02:34:17.040 |
not just senses, but our bodies change as we get older. 02:34:24.080 |
And I think that those very subtle physical changes 02:34:29.080 |
just drive a lot of social patterns and behavior 02:34:37.280 |
Like just like all these, that's not even subtle, 02:34:40.120 |
But like how you design things around the house. 02:34:43.960 |
So, yeah, I mean, I think if the goal is to understand 02:34:48.420 |
people as much as possible, I think that that's, 02:34:58.960 |
even that is separate from that, it's a separate thing. 02:35:12.960 |
Do you think there'll be AI replicas of you and me that people can interact with? 02:35:12.960 |
- That's one of the interesting questions that we've had to struggle with in the context of building AIs. 02:35:29.420 |
Right, I don't think anyone should be able to choose 02:35:46.360 |
to make a Lex bot that people can choose to talk to without your permission. 02:35:53.640 |
And we have this precedent of making some of these calls already: you can create a page 02:36:00.020 |
for a Lex fan club, but you can't create a page that claims to be Lex Fridman. 02:36:11.260 |
I mean, maybe, you know, someone 02:36:14.420 |
should be able to make an AI that's a Lex admirer, 02:36:18.420 |
but I think it should ultimately be your call whether an AI version of you exists. - Let me ask you about faith. What role has faith played in your life? 02:36:44.500 |
- Yeah, I think that there's a few different parts 02:37:01.060 |
It's like God creates the earth and creates people in his image. 02:37:06.920 |
And there's the question of, you know, what does that mean? 02:37:09.940 |
And all the context that you have about God at that point is that God creates. 02:37:15.620 |
So I always thought that like one of the interesting lessons 02:37:18.380 |
from that is that there's a virtue in creating things 02:37:29.980 |
that are functionally useful for other people. 02:37:39.380 |
And that kind of drives a lot of how I think about morality and what a good life is. 02:37:51.620 |
I think it's one where you're helping the people around you 02:37:56.620 |
and you're being a kind of positive creative force 02:38:01.620 |
in the world that is helping to bring new things 02:38:05.200 |
into the world, whether they're amazing other people, kids, 02:38:09.520 |
or just leading to the creation of different things 02:38:17.140 |
And so that's a value for me that matters deeply. 02:38:20.560 |
And I just, I mean, I just love spending time with the kids 02:38:27.740 |
And it's like, I mean, nothing makes me happier 02:38:33.980 |
than when I see my daughters building Legos on the table. 02:38:38.980 |
It's like, all right, I did that when I was a kid, right? 02:38:46.860 |
And if you keep that spark and you wanna just continue building different things, 02:38:49.780 |
no matter what it is, to me, that's a lot of what matters. 02:38:56.120 |
I think the cultural piece is just about community 02:39:04.000 |
You know, it's almost autopilot when you're a kid, 02:39:08.020 |
you're in the kind of getting imparted to phase of your life. 02:39:10.980 |
But, and I didn't really think about religion that much for a while, 02:39:16.380 |
you know, when I was in college, before I had kids. 02:39:23.500 |
But having kids has a way of really making you think about what traditions you want to pass on to them. 02:39:34.460 |
And I mean, a bunch of the questions that you've asked 02:39:37.300 |
and a bunch of the things that we're talking about. 02:39:52.540 |
and we are definitely living in a simulation, 02:40:00.580 |
- A lot of the topics that we've talked about today 02:40:18.900 |
And I always also just think that it's very grounding 02:40:25.900 |
that is much bigger than you that is guiding things. 02:40:28.500 |
- That amongst other things gives you a bit of humility. 02:40:41.500 |
that you spoke to, creating beauty in the world, 02:40:43.980 |
and as Dostoevsky said, beauty will save the world. 02:40:53.220 |
and I am looking forward to both kicking your ass 02:40:57.980 |
and you kicking my ass on the mat tomorrow in jiu-jitsu, 02:41:02.020 |
this incredible sport and art that we both participate in. 02:41:02.020 |
Thank you for everything you're doing in so many exciting realms of technology and human life. 02:41:09.460 |
I can't wait to talk to you again in the metaverse. 02:41:22.100 |
To support this podcast, please check out our sponsors in the description. 02:41:25.100 |
And now let me leave you with some words from Isaac Asimov. 02:41:29.100 |
It is change, continuing change, inevitable change 02:41:33.620 |
that is the dominant factor in society today. No sensible decision can be made any longer 02:41:40.040 |
without taking into account not only the world as it is, but the world as it will be. 02:41:45.720 |
Thank you for listening and hope to see you next time.