
Mark Zuckerberg: Future of AI at Meta, Facebook, Instagram, and WhatsApp | Lex Fridman Podcast #383


Chapters

0:00 Introduction
0:28 Jiu-jitsu competition
17:51 AI and open source movement
30:22 Next AI model release
42:37 Future of AI at Meta
63:15 Bots
78:42 Censorship
93:23 Meta's new social network
100:10 Elon Musk
104:15 Layoffs and firing
111:45 Hiring
117:37 Meta Quest 3
124:34 Apple Vision Pro
130:50 AI existential risk
137:13 Power
140:44 AGI timeline
148:07 Murph challenge
153:22 Embodied AGI
156:29 Faith


00:00:00.000 | The following is a conversation with Mark Zuckerberg,
00:00:03.440 | his second time on this podcast.
00:00:05.400 | He's the CEO of Meta that owns Facebook,
00:00:08.440 | Instagram, and WhatsApp,
00:00:10.000 | all services used by billions of people
00:00:13.040 | to connect with each other.
00:00:14.840 | We talk about his vision for the future of Meta
00:00:17.400 | and the future of AI in our human world.
00:00:21.340 | This is the Lex Fridman Podcast,
00:00:23.520 | and now, dear friends, here's Mark Zuckerberg.
00:00:28.280 | So you competed in your first Jiu-Jitsu tournament,
00:00:30.800 | and me, as a fellow Jiu-Jitsu practitioner and competitor,
00:00:34.320 | I think that's really inspiring,
00:00:35.840 | given all the things you have going on.
00:00:37.560 | So I gotta ask, what was that experience like?
00:00:40.840 | - Oh, it was fun.
00:00:42.360 | I don't know, yeah, I mean, well, look,
00:00:43.880 | I'm a pretty competitive person.
00:00:45.760 | - Yeah?
00:00:47.160 | - Doing sports that basically require your full attention,
00:00:50.320 | I think is really important to my mental health
00:00:53.680 | and the way I just stay focused
00:00:55.560 | at doing everything I'm doing.
00:00:56.960 | So I decided to get into martial arts,
00:00:58.920 | and it's awesome.
00:01:00.680 | I got a ton of my friends into it.
00:01:02.200 | We all train together.
00:01:04.000 | We have a mini academy in my garage.
00:01:06.720 | And I guess one of my friends was like,
00:01:10.440 | "Hey, we should go do a tournament."
00:01:12.680 | I was like, "Okay, yeah, let's do it.
00:01:14.040 | "I'm not gonna shy away from a challenge like that."
00:01:16.560 | So yeah, but it was awesome.
00:01:18.440 | It was just a lot of fun.
00:01:19.720 | - You weren't scared?
00:01:20.560 | There was no fear?
00:01:21.600 | - I don't know.
00:01:22.440 | I was pretty sure that I'd do okay.
00:01:25.160 | - I like the confidence.
00:01:26.760 | Well, so for people who don't know,
00:01:28.520 | jiu-jitsu is a martial art
00:01:30.240 | where you're trying to break your opponent's limbs
00:01:33.200 | or choke them to sleep
00:01:36.320 | and do so with grace and elegance and efficiency
00:01:41.160 | and all that kind of stuff.
00:01:43.320 | It's a kind of art form, I think,
00:01:45.000 | that you can do for your whole life.
00:01:46.240 | And it's basically a game, a sport of human chess
00:01:49.440 | you can think of.
00:01:50.360 | There's a lot of strategy.
00:01:51.440 | There's a lot of interesting human dynamics
00:01:54.280 | of using leverage and all that kind of stuff.
00:01:57.280 | It's kind of incredible what you could do.
00:01:59.320 | You can do things like a small opponent
00:02:01.280 | could defeat a much larger opponent
00:02:03.160 | and you get to understand the way the mechanics
00:02:05.280 | of the human body works because of that.
00:02:07.200 | But you certainly can't be distracted.
00:02:09.760 | - No.
00:02:10.600 | It's 100% focus.
00:02:13.120 | To compete, I needed to get around the fact
00:02:15.920 | that I didn't want it to be this big thing.
00:02:18.520 | So basically, I rolled up with a hat and sunglasses
00:02:23.160 | and I was wearing a COVID mask
00:02:24.920 | and I registered under my first and middle name,
00:02:27.080 | so Mark Elliott.
00:02:28.480 | And it wasn't until I actually pulled all that stuff off
00:02:31.180 | right before I got on the mat
00:02:32.160 | that I think people knew it was me.
00:02:33.600 | So it was pretty low key.
00:02:35.320 | - But you're still a public figure.
00:02:37.740 | - Yeah, I mean, I didn't wanna lose.
00:02:39.120 | - Right.
00:02:39.960 | The thing you're partially afraid of
00:02:41.720 | is not just the losing but being almost embarrassed.
00:02:44.620 | It's so raw, the sport, in that it's just you
00:02:47.280 | and another human being.
00:02:48.280 | There's a primal aspect there.
00:02:49.720 | - Oh yeah, it's great.
00:02:50.720 | - For a lot of people, it can be terrifying,
00:02:52.080 | especially the first time you're doing the competing
00:02:54.560 | and it wasn't for you.
00:02:56.040 | I see the look of excitement on your face.
00:02:57.640 | - Yeah, I don't know.
00:02:58.480 | - It wasn't, no fear.
00:02:59.300 | - I just think part of learning is failing.
00:03:01.720 | - Okay.
00:03:02.560 | - Right, so, I mean, the main thing,
00:03:04.320 | people who train jujitsu, it's like,
00:03:06.760 | you need to not have pride because,
00:03:08.880 | I mean, all the stuff that you were talking about before
00:03:10.400 | about getting choked or getting a joint lock,
00:03:14.840 | you only get into a bad situation
00:03:18.400 | if you're not willing to tap once you've already lost.
00:03:22.660 | But obviously, when you're getting started with something,
00:03:24.580 | you're not gonna be an expert at it immediately.
00:03:26.500 | So you just need to be willing to go with that.
00:03:29.020 | But I think this is like, I don't know,
00:03:30.980 | I mean, maybe I've just been embarrassed
00:03:32.340 | enough times in my life.
00:03:33.620 | - Yeah.
00:03:34.460 | - I do think that there's a thing where,
00:03:36.420 | as people grow up, maybe they don't wanna be embarrassed
00:03:39.020 | or anything, they've built their adult identity
00:03:40.900 | and they kind of have a sense of who they are
00:03:45.540 | and what they wanna project.
00:03:46.820 | And I don't know, I think maybe to some degree,
00:03:51.620 | your ability to keep doing interesting things
00:03:54.620 | is your willingness to be embarrassed again
00:03:58.220 | and go back to step one and start as a beginner
00:04:01.740 | and get your ass kicked and look stupid doing things.
00:04:06.740 | Yeah, I think so many of the things that we're doing,
00:04:08.700 | whether it's this, I mean, this is just like
00:04:11.460 | kind of a physical part of my life,
00:04:13.460 | but at running the company,
00:04:15.620 | it's like we just take on new adventures
00:04:17.820 | and all the big things that we're doing,
00:04:21.900 | I think of as like 10 plus year missions that we're on
00:04:25.300 | where often early on, people doubt
00:04:28.460 | that we're gonna be able to do it
00:04:29.420 | and the initial work seems kind of silly
00:04:31.700 | and our whole ethos is we don't wanna wait
00:04:33.620 | until something is perfect to put it out there.
00:04:35.340 | We wanna get it out quickly and get feedback on it.
00:04:37.940 | And so I don't know, I mean,
00:04:38.980 | there's probably just something about
00:04:40.460 | how I approach things in there.
00:04:41.820 | But I just kind of think that the moment
00:04:43.900 | that you decide that you're gonna be too embarrassed
00:04:45.700 | to try something new,
00:04:46.580 | then you're not gonna learn anything anymore.
00:04:48.260 | - But like I mentioned, that fear,
00:04:51.220 | that anxiety could be there,
00:04:52.460 | could creep up every once in a while.
00:04:53.940 | Do you feel that in especially stressful moments
00:04:57.660 | sort of outside of the judgement, just at work,
00:05:01.040 | stressful moments, big decision days,
00:05:05.220 | big decision moments, how do you deal with that fear?
00:05:07.820 | How do you deal with that anxiety?
00:05:09.300 | - The thing that stresses me out the most
00:05:10.740 | is always the people challenges.
00:05:13.340 | You know, I kind of think that, you know,
00:05:16.300 | strategy questions, you know,
00:05:19.020 | I tend to have enough conviction around the values
00:05:22.860 | of what we're trying to do and what I think matters
00:05:26.540 | and what I want our company to stand for
00:05:28.260 | that those don't really keep me up at night that much.
00:05:31.900 | I mean, I kind of, you know,
00:05:33.220 | it's not that I get everything right.
00:05:35.100 | Of course I don't, right?
00:05:36.160 | I mean, we make a lot of mistakes,
00:05:38.300 | but I at least have a pretty strong sense
00:05:43.220 | of where I want us to go on that.
00:05:45.900 | The thing in running a company for almost 20 years now,
00:05:49.860 | one of the things that's been pretty clear
00:05:51.860 | is when you have a team that's cohesive,
00:05:55.740 | you can get almost anything done.
00:05:59.180 | And, you know, you can run through super hard challenges,
00:06:03.280 | you can make hard decisions and push really hard
00:06:07.020 | to do the best work even,
00:06:08.820 | you know, and kind of optimize something super well.
00:06:12.160 | But when there's that tension,
00:06:13.860 | I mean, that's when things get really tough.
00:06:16.300 | And, you know, when I talk to other friends
00:06:18.940 | who run other companies and things like that,
00:06:20.580 | I think one of the things that I actually spend
00:06:22.660 | a disproportionate amount of time on
00:06:24.020 | in running this company is just fostering
00:06:26.740 | a pretty tight core group of people
00:06:30.740 | who are running the company with me.
00:06:33.380 | And that to me is kind of the thing that
00:06:37.900 | both makes it fun, right?
00:06:39.160 | Having, you know, friends and people
00:06:41.140 | you've worked with for a while
00:06:42.100 | and new people and new perspectives,
00:06:43.540 | but like a pretty tight group who you can go work on
00:06:46.100 | some of these crazy things with.
00:06:47.700 | But to me, that's also the most stressful thing
00:06:50.740 | is when there's tension, you know, that weighs on me.
00:06:55.740 | I think the, you know, just it's maybe not surprising.
00:06:59.380 | I mean, we're like a very people-focused company
00:07:01.660 | and it's the people is the part of it
00:07:03.980 | that, you know, weighs on me the most
00:07:06.680 | to make sure that we get right.
00:07:07.720 | But yeah, that I'd say across everything that we do
00:07:10.420 | is probably the big thing.
00:07:12.980 | - So when there's tension in that inner circle
00:07:15.740 | of close folks, so when you trust those folks
00:07:20.740 | to help you make difficult decisions
00:07:24.900 | about Facebook, WhatsApp, Instagram,
00:07:29.900 | the future of the company and the metaverse with AI,
00:07:33.180 | how do you build that close-knit group of folks
00:07:36.580 | to make those difficult decisions?
00:07:39.180 | Is there people that you have to have critical voices,
00:07:42.620 | very different perspectives on focusing on the past
00:07:46.300 | versus the future, all that kind of stuff?
00:07:48.380 | - Yeah, I mean, I think for one thing,
00:07:49.660 | it's just spending a lot of time
00:07:52.380 | with whatever the group is that you wanna be
00:07:54.220 | that core group, grappling with all
00:07:57.380 | of the biggest challenges.
00:07:58.580 | And that requires a fair amount of openness.
00:08:01.460 | And, you know, so I mean, a lot of how I run the company
00:08:04.780 | is, you know, it's like every Monday morning,
00:08:06.340 | we get our, it's about the top 30 people together.
00:08:10.220 | And we, and this is a group that just worked together
00:08:13.620 | for a long period of time.
00:08:14.620 | And I mean, people rotate in.
00:08:16.340 | I mean, new people join, people leave the company,
00:08:18.860 | people go do other roles in the company.
00:08:20.340 | So it's not the same group over time.
00:08:22.660 | But then we spend, you know, a lot of times,
00:08:25.780 | a couple of hours, a lot of the time,
00:08:27.380 | it's, you know, it can be somewhat unstructured.
00:08:29.540 | We, like, I'll come with maybe a few topics
00:08:31.780 | that are top of mind for me,
00:08:33.860 | but I'll ask other people to bring things
00:08:36.460 | and people, you know, raise questions,
00:08:38.500 | whether it's, okay, there's an issue happening
00:08:40.300 | in some country with some policy issue.
00:08:44.260 | There's like a new technology that's developing here.
00:08:46.380 | We're having an issue with this partner.
00:08:49.060 | You know, there's a design trade-off in WhatsApp
00:08:51.620 | between two things that end up being values
00:08:55.780 | that we care about deeply.
00:08:56.940 | And we need to kind of decide where we wanna be on that.
00:08:59.380 | And I just think over time, when, you know,
00:09:02.540 | by working through a lot of issues with people
00:09:04.460 | and doing it openly, people develop an intuition
00:09:07.540 | for each other and a bond and camaraderie.
00:09:09.840 | And to me, developing that is like a lot of the fun part
00:09:15.580 | of running a company or doing anything, right?
00:09:17.420 | I think it's like having people who are kind of along
00:09:20.220 | on the journey that you feel like you're doing it with.
00:09:22.460 | Nothing is ever just one person doing it.
00:09:24.700 | - Are there people that disagree often within that group?
00:09:27.940 | - It's a fairly combative group.
00:09:29.740 | - Okay, so combat is part of it.
00:09:31.900 | So this is making decisions on design, engineering,
00:09:35.440 | policy, everything.
00:09:38.580 | - Everything, everything, yeah.
00:09:41.180 | - I have to ask, just back to jujitsu for a little bit,
00:09:43.620 | what's your favorite submission?
00:09:45.060 | Now that you've been doing it,
00:09:47.060 | how do you like to submit your opponent, Mark Zuckerberg?
00:09:51.500 | - I mean.
00:09:52.340 | - Well, first of all, do you prefer no gi or gi jujitsu?
00:09:58.900 | So gi is this outfit you wear that maybe mimics clothing
00:10:03.900 | so you can choke.
00:10:05.820 | - Well, it's like a kimono.
00:10:06.700 | It's like the traditional martial arts or kimono.
00:10:08.900 | - Pajamas.
00:10:10.340 | - Pajamas.
00:10:11.180 | - That you could choke people with, yes.
00:10:13.980 | - Well, it's got the lapels.
00:10:15.380 | Yeah, so I like jujitsu.
00:10:19.180 | I also really like MMA.
00:10:21.060 | And so I think no gi more closely approximates MMA.
00:10:25.900 | And I think my style is maybe a little closer
00:10:30.820 | to an MMA style.
00:10:31.780 | So like a lot of jujitsu players
00:10:33.980 | are fine being on their back, right?
00:10:35.620 | And obviously having a good guard
00:10:36.820 | is a critical part of jujitsu,
00:10:39.340 | but in MMA, you don't wanna be on your back, right?
00:10:41.860 | 'Cause even if you have control,
00:10:42.920 | you're just taking punches while you're on your back.
00:10:45.740 | So that's no good.
00:10:47.740 | - So you like being on top.
00:10:48.740 | - My style is I'm probably more pressure.
00:10:51.820 | And yeah, and I'd probably rather be the top player,
00:10:56.820 | but I'm also smaller, right?
00:10:59.820 | I'm not like a heavyweight guy, right?
00:11:02.260 | So from that perspective, I think like,
00:11:05.120 | especially because if I'm doing a competition,
00:11:08.340 | I'll compete with people who are my size,
00:11:09.760 | but a lot of my friends are bigger than me.
00:11:11.700 | So back takes probably pretty important, right?
00:11:15.140 | Because that's where you have the most leverage advantage,
00:11:17.380 | right, where people, their arms,
00:11:20.620 | your arms are very weak behind you, right?
00:11:22.500 | So being able to get to the back
00:11:24.740 | and take that pretty important.
00:11:26.540 | But I don't know, I feel like the right strategy
00:11:28.340 | is to not be too committed to any single submission.
00:11:31.100 | But that said, I don't like hurting people.
00:11:32.940 | So I always think that chokes
00:11:36.060 | are a somewhat more humane way to go than joint locks.
00:11:40.900 | - Yeah, and it's more about control.
00:11:42.780 | It's less dynamic.
00:11:44.140 | So you're basically like a Khabib Nurmagomedov type of fighter.
00:11:47.660 | - So let's go, yeah, back take to a rear naked choke.
00:11:50.100 | I think it's like the clean way to go.
00:11:52.300 | - Straightforward answer right there.
00:11:53.980 | What advice would you give to people
00:11:56.820 | looking to start learning jiu-jitsu?
00:11:59.380 | Given how busy you are, given where you are in life,
00:12:02.980 | that you're able to do this, you're able to train,
00:12:05.100 | you're able to compete and get to learn something
00:12:08.460 | from this interesting art.
00:12:10.740 | - Why do you think you have to be willing
00:12:11.980 | to just get beaten up a lot?
00:12:16.980 | I mean, it's- - But I mean, over time,
00:12:18.820 | I think that there's a flow to all these things.
00:12:21.220 | And there's, you know, one of the,
00:12:24.540 | one of, I don't know, my experiences
00:12:28.780 | that I think kind of transcends, you know,
00:12:31.100 | running a company and the different activities
00:12:34.300 | that I like doing are, I really believe that,
00:12:36.860 | like, if you're gonna accomplish whatever, anything,
00:12:40.100 | a lot of it is just being willing to push through, right?
00:12:43.260 | And having the grit and determination
00:12:45.540 | to push through difficult situations.
00:12:48.820 | And I think that for a lot of people,
00:12:50.020 | that ends up being sort of a difference maker
00:12:54.300 | between the people who kind of get the most done and not.
00:12:58.340 | I mean, there's all these questions about, like,
00:13:01.220 | you know, how many days people wanna work
00:13:03.380 | and things like that.
00:13:04.220 | I think almost all the people
00:13:05.060 | who, like, start successful companies or things like that
00:13:07.420 | are just, are working extremely hard.
00:13:09.500 | But I think one of the things that you learn,
00:13:11.060 | both by doing this over time,
00:13:13.260 | or, you know, very acutely with things like jujitsu
00:13:15.940 | or surfing is you can't push through everything.
00:13:20.940 | And I think that that's,
00:13:24.060 | you learn this stuff very acutely,
00:13:28.420 | doing sports compared to running a company.
00:13:30.340 | Because running a company, the cycle times are so long,
00:13:33.340 | right?
00:13:34.180 | It's like you start a project and then,
00:13:37.300 | you know, it's like months later,
00:13:38.700 | or, you know, if you're building hardware,
00:13:40.420 | it could be years later
00:13:41.420 | before you're actually getting feedback
00:13:42.860 | and able to make the next set of decisions
00:13:45.020 | for the next version of the thing that you're doing.
00:13:46.540 | Whereas, one of the things that I just think is mentally
00:13:49.380 | so nice about these very high turnaround,
00:13:53.420 | conditioning sports, things like that,
00:13:55.420 | is you get feedback very quickly, right?
00:13:57.220 | It's like, okay, like, you don't counter someone correctly,
00:13:59.700 | you get punched in the face, right?
00:14:00.940 | So not in jujitsu, you don't get punched in jujitsu,
00:14:03.340 | but in MMA.
00:14:04.820 | There are all these analogies between all these things
00:14:07.220 | that I think actually hold
00:14:08.260 | that are like important life lessons, right?
00:14:12.340 | It's like, okay, you're surfing a wave,
00:14:14.300 | it's like, you know, sometimes you're like,
00:14:18.460 | you can't go in the other direction on it, right?
00:14:20.900 | It's like, there are limits to kind of what, you know,
00:14:23.740 | it's like a foil, you can pump the foil
00:14:26.380 | and push pretty hard in a bunch of directions,
00:14:29.180 | but like, yeah, you, you know, it's at some level,
00:14:31.260 | like the momentum against you is strong enough,
00:14:33.900 | you're, that's not gonna work.
00:14:35.420 | And I do think that that's sort of a humbling,
00:14:40.660 | but also an important lesson for,
00:14:43.940 | and I think people who are running things
00:14:45.220 | or building things, it's like, yeah, you, you, you know,
00:14:48.260 | a lot of the game is just being able to kind of push
00:14:50.780 | and work through complicated things,
00:14:53.580 | but you also need to kind of have enough of an understanding
00:14:56.860 | of like which things you just can't push through
00:14:58.580 | and where the finesse is more important.
00:15:02.300 | - Yeah.
00:15:03.140 | - What are your jujitsu life lessons?
00:15:06.380 | - Well, I think you did it,
00:15:07.940 | you made it sound so simple and were so eloquent
00:15:13.100 | that it's easy to miss, but basically being okay
00:15:18.100 | and accepting the wisdom and the joy
00:15:21.820 | in the getting your ass kicked,
00:15:24.620 | in the full range of what that means,
00:15:26.980 | I think that's a big gift of the being humbled.
00:15:30.860 | Somehow being humbled, especially physically,
00:15:34.060 | opens your mind to the full process of learning,
00:15:37.200 | what it means to learn,
00:15:38.340 | which is being willing to suck at something.
00:15:41.740 | And I think jujitsu is just very repetitively,
00:15:45.540 | efficiently humbles you over and over and over and over
00:15:49.740 | to where you can carry that lessons to places
00:15:53.140 | where you don't get humbled as much,
00:15:55.100 | whether it's research or running a company
00:15:57.020 | or building stuff, the cycle is longer.
00:15:59.780 | In jujitsu, you can just get humbled
00:16:01.620 | in this period of an hour over and over and over and over,
00:16:04.820 | especially when you're a beginner,
00:16:05.900 | you'll have a little person,
00:16:07.340 | somebody much smarter than you,
00:16:09.980 | just kick your ass repeatedly, definitively,
00:16:14.980 | where there's no argument.
00:16:17.660 | - Oh, yeah.
00:16:18.500 | - And then you literally tap,
00:16:20.100 | because if you don't tap, you're going to die.
00:16:22.580 | So this is an agreement,
00:16:23.980 | you could have killed me just now, but we're friends,
00:16:27.200 | so we're gonna agree that you're not going to.
00:16:29.260 | And that kind of humbling process,
00:16:31.300 | it just does something to your psyche,
00:16:33.300 | to your ego that puts it in its proper context
00:16:35.660 | to realize that everything in this life
00:16:39.860 | is like a journey from sucking
00:16:43.260 | through a hard process of improving
00:16:46.740 | or rigorously day after day after day after day,
00:16:51.020 | like any kind of success requires hard work.
00:16:54.220 | Yeah, jujitsu, more than a lot of sports,
00:16:56.620 | I would say, 'cause I've done a lot of them,
00:16:58.420 | it really teaches you that.
00:16:59.900 | And you made it sound so simple.
00:17:01.580 | Like, "I'm okay, it's okay, it's part of the process."
00:17:04.900 | You just get humble, get your ass kicked.
00:17:06.340 | - I've just failed and been embarrassed
00:17:07.780 | so many times in my life that,
00:17:09.460 | it's a core competence to this.
00:17:12.380 | - It's a core competence.
00:17:14.220 | Well, yes, and there's a deep truth to that,
00:17:16.060 | being able to, and you said it in the very beginning,
00:17:18.520 | which is, that's the thing that stops us,
00:17:21.660 | especially as you get older,
00:17:22.660 | especially as you develop expertise in certain areas,
00:17:25.580 | the not being willing to be a beginner in a new area.
00:17:29.980 | Because that's where the growth happens,
00:17:34.060 | is being willing to be a beginner,
00:17:35.900 | being willing to be embarrassed,
00:17:37.420 | saying something stupid, doing something stupid.
00:17:39.780 | A lot of us that get good at one thing,
00:17:42.420 | you wanna show that off, and it sucks being a beginner,
00:17:47.420 | but it's where growth happens.
00:17:50.260 | Well, speaking of which, let me ask you about AI.
00:17:54.660 | It seems like this year,
00:17:56.380 | for the entirety of the human civilization,
00:17:58.340 | is an interesting year
00:17:59.980 | for the development of artificial intelligence.
00:18:02.580 | A lot of interesting stuff is happening.
00:18:04.580 | So, meta is a big part of that.
00:18:07.300 | Meta has developed LLAMA,
00:18:09.980 | which is a 65 billion parameter model.
00:18:12.240 | There's a lot of interesting questions I can ask here,
00:18:16.660 | one of which has to do with open source.
00:18:19.020 | But first, can you tell the story
00:18:21.040 | of developing of this model,
00:18:24.020 | and making the complicated decision of how to release it?
00:18:29.020 | - Yeah, sure.
00:18:30.820 | I think you're right, first of all,
00:18:32.020 | that in the last year, there have been a bunch of advances
00:18:36.420 | on scaling up these large transformer models.
00:18:39.660 | So, there's the language equivalent of it
00:18:41.160 | with large language models.
00:18:43.100 | There's sort of the image generation equivalent
00:18:45.900 | with these large diffusion models.
00:18:48.260 | There's a lot of fundamental research that's gone into this.
00:18:51.420 | And meta has taken the approach
00:18:55.540 | of being quite open and academic
00:19:00.180 | in our development of AI.
00:19:04.260 | Part of this is we wanna have the best people
00:19:06.900 | in the world researching this.
00:19:08.420 | And a lot of the best people wanna know
00:19:11.500 | that they're gonna be able to share their work.
00:19:12.900 | So, that's part of the deal that we have,
00:19:15.820 | is that we can get,
00:19:17.860 | if you're one of the top AI researchers in the world,
00:19:20.420 | you can come here, you can get access
00:19:21.700 | to kind of industry scale infrastructure.
00:19:25.500 | And part of our ethos is that we wanna share
00:19:29.060 | what's invented broadly.
00:19:32.100 | We do that with a lot of the different AI tools
00:19:34.300 | that we create.
00:19:35.660 | And LLAMA is the language model
00:19:37.700 | that our research team made.
00:19:39.820 | And we did a limited open source release for it,
00:19:44.820 | which was intended for researchers to be able to use it.
00:19:50.620 | But responsibility and getting safety right on these
00:19:55.620 | is very important.
00:19:57.100 | So, we didn't think that, for the first one,
00:19:59.780 | there were a bunch of questions around
00:20:01.980 | whether we should be releasing this commercially.
00:20:04.180 | So, we kind of punted on that for V1 of LLAMA
00:20:08.180 | and just released it for research.
00:20:09.860 | Now, obviously, by releasing it for research,
00:20:13.300 | it's out there, but companies know
00:20:15.180 | that they're not supposed to kind of put it
00:20:17.260 | into commercial releases.
00:20:18.780 | And we're working on the follow-up models for this
00:20:22.700 | and thinking through how exactly this should work
00:20:27.700 | for follow-on now that we've had time
00:20:29.460 | to work on a lot more of the safety
00:20:31.820 | and the pieces around that.
00:20:33.940 | But overall, I mean, this is,
00:20:35.740 | I just kind of think that it would be good
00:20:42.020 | if there were a lot of different folks
00:20:45.940 | who had the ability to build state-of-the-art technology here
00:20:50.940 | and not just a small number of big companies.
00:20:56.300 | Where to train one of these AI models,
00:20:58.620 | the state-of-the-art models,
00:21:00.060 | just takes hundreds of millions of dollars
00:21:04.500 | of infrastructure, right?
00:21:05.700 | So, there are not that many organizations in the world
00:21:09.540 | that can do that at the biggest scale today.
00:21:13.020 | And now, it gets more efficient every day.
00:21:16.180 | So, I do think that that will be available
00:21:19.860 | to more folks over time.
00:21:20.940 | But I just think there's all this innovation out there
00:21:23.860 | that people can create.
00:21:25.460 | And I just think that we'll also learn a lot
00:21:29.980 | by seeing what the whole community of students
00:21:32.980 | and hackers and startups and different folks
00:21:37.780 | build with this.
00:21:38.620 | And that's kind of been how we've approached this.
00:21:40.380 | And it's also how we've done a lot of our infrastructure.
00:21:43.020 | And we took our whole data center design
00:21:44.940 | and our server design,
00:21:46.020 | and we built this Open Compute Project
00:21:48.020 | where we just made that public.
00:21:49.140 | And part of the theory was like,
00:21:51.500 | "All right, if we make it so that more people
00:21:52.780 | "can use this server design,
00:21:54.340 | "then that'll enable more innovation.
00:21:57.980 | "It'll also make the server design more efficient.
00:22:00.100 | "And that'll make our business more efficient too."
00:22:02.500 | So, that's worked.
00:22:03.340 | And we've just done this with a lot of our infrastructure.
00:22:06.460 | - So, for people who don't know,
00:22:07.700 | you did the limited release, I think,
00:22:09.180 | in February of this year, of LLAMA.
00:22:12.540 | And it got quote, unquote, leaked,
00:22:15.500 | meaning like it escaped the limited release aspect.
00:22:21.500 | But it was something you probably anticipated
00:22:28.220 | given that it's just released to researchers.
00:22:29.780 | - We shared it with researchers.
00:22:30.740 | - You're right.
00:22:31.580 | So, it's just trying to make sure
00:22:33.340 | that there's like a slow release.
00:22:35.460 | - Yeah.
00:22:36.780 | - But from there, I just would love to get your comment
00:22:39.060 | on what happened next, which is like,
00:22:40.660 | there's a very vibrant open source community
00:22:42.740 | that just builds stuff on top of it.
00:22:44.500 | There's llama.cpp, basically stuff
00:22:48.580 | that makes it more efficient to run on smaller computers.
00:22:51.740 | There's combining with reinforcement learning
00:22:54.900 | with human feedback.
00:22:55.860 | So, some of the different interesting fine-tuning mechanisms.
00:22:59.460 | There's then also fine-tuning on GPT-3 generations.
00:23:02.860 | There's GPT4All, Alpaca, Colossal-AI,
00:23:07.500 | all these kinds of models just kind of spring up,
00:23:09.300 | like run on top of it.
00:23:11.540 | - Yeah.
00:23:12.380 | - What do you think about that?
00:23:13.220 | - No, I think it's been really neat to see.
00:23:15.020 | I mean, there's been folks who are getting it to run
00:23:17.820 | on local devices, right?
00:23:19.340 | So, if you're an individual who just wants to experiment
00:23:22.420 | with this at home, you probably don't have a large budget
00:23:26.380 | to get access to like a large amount of cloud compute.
00:23:29.460 | So, getting it to run on your local laptop
00:23:31.700 | is pretty good, right?
00:23:35.140 | And pretty relevant.
00:23:37.420 | And then there were things like, yeah, llama.cpp
00:23:39.860 | re-implemented it more efficiently.
00:23:42.500 | So, now even when we run our own versions of it,
00:23:45.900 | we can do it on way less compute
00:23:47.260 | and it's just way more efficient, saves a lot of money
00:23:50.380 | for everyone who uses this.
00:23:51.740 | So, that is good.
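For reference, "getting it to run on your local laptop" with llama.cpp typically looks something like the sketch below, here via the llama-cpp-python bindings; the model path and generation settings are hypothetical placeholders for whatever quantized weights you have on disk.

```python
# Minimal local-inference sketch using llama-cpp-python (bindings for llama.cpp).
# The model path is a placeholder for a locally stored, quantized LLaMA-family model.
from llama_cpp import Llama

llm = Llama(model_path="./models/llama-7b-q4_0.gguf", n_ctx=512)  # hypothetical path

# Run a short completion entirely on the local machine; no cloud compute involved.
output = llm("Q: Name three open source licenses. A:", max_tokens=48, stop=["\n"])
print(output["choices"][0]["text"])
```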
00:23:53.780 | I do think it's worth calling out that
00:23:59.300 | because this was a relatively early release,
00:24:03.500 | LLAMA isn't quite as on the frontier as, for example,
00:24:08.500 | the biggest OpenAI models or the biggest Google models.
00:24:13.220 | So, I mean, you mentioned that the largest LLAMA model
00:24:16.620 | that we released had 65 billion parameters.
00:24:19.300 | And no one knows, I guess, outside of OpenAI,
00:24:24.300 | exactly what the specs are for GPT-4.
00:24:27.860 | But I think the, my understanding is it's like
00:24:30.900 | 10 times bigger.
00:24:32.140 | And I think Google's PaLM model is also, I think,
00:24:35.060 | has about 10 times as many parameters.
00:24:36.780 | Now, the LLAMA models are very efficient.
00:24:38.580 | So, they perform well for something
00:24:40.420 | that's around 65 billion parameters.
00:24:42.620 | So, for me, that was also part of this
00:24:44.340 | because there was this whole debate around,
00:24:47.060 | is it good for everyone in the world to have access
00:24:50.780 | to the most frontier AI models?
00:24:55.020 | And I think as the AI models start approaching something
00:25:00.460 | that's like a super human intelligence,
00:25:03.980 | I think that that's a bigger question
00:25:05.300 | that we'll have to grapple with.
00:25:06.300 | But right now, I mean, these are still very basic tools.
00:25:10.860 | They're powerful in the sense that,
00:25:14.260 | a lot of open source software like databases or web servers
00:25:17.620 | can enable a lot of pretty important things.
00:25:20.540 | But I don't think anyone looks at the current generation
00:25:27.020 | of LLAMA and thinks it's anywhere near a super intelligence.
00:25:30.220 | So, I think that a bunch of those questions around,
00:25:32.300 | like, is it good to kind of get out there?
00:25:35.900 | I think at this stage, surely,
00:25:37.780 | you want more researchers working on it
00:25:40.140 | for all the reasons that open source software
00:25:43.340 | has a lot of advantages.
00:25:44.580 | And we talked about efficiency before,
00:25:45.900 | but another one is just open source software
00:25:48.020 | tends to be more secure
00:25:49.780 | because you have more people looking at it openly
00:25:52.100 | and scrutinizing it and finding holes in it.
00:25:55.860 | And that makes it more safe.
00:25:57.100 | So, I think at this point, it's more,
00:25:59.300 | I think it's generally agreed upon
00:26:01.980 | that open source software is generally more secure
00:26:05.140 | and safer than things that are kind of developed in a silo
00:26:08.780 | where people try to get through security through obscurity.
00:26:11.340 | So, I think that for the scale
00:26:13.420 | of what we're seeing now with AI,
00:26:16.420 | I think we're more likely to get to good alignment
00:26:19.500 | and good understanding of kind of what needs to do
00:26:23.460 | to make this work well by having it be open source.
00:26:26.100 | And that's something that I think
00:26:27.180 | is quite good to have out there.
00:26:28.940 | And happening publicly at this point.
00:26:30.900 | - Meta released a lot of models as open source.
00:26:34.580 | So, the massively multilingual speech model.
00:26:38.380 | - Yeah, that was neat.
00:26:39.660 | - I mean, I'll ask you questions about those,
00:26:42.020 | but the point is you've open sourced quite a lot.
00:26:45.860 | You've been spearheading the open source movement.
00:26:47.580 | Whereas that's really positive,
00:26:50.340 | inspiring to see from one angle,
00:26:51.940 | from the research angle.
00:26:52.980 | Of course, there's folks who are really terrified
00:26:55.300 | about the existential threat of artificial intelligence
00:26:58.460 | and those folks will say that,
00:27:00.180 | you have to be careful about the open sourcing step.
00:27:06.420 | But where do you see the future of open source here
00:27:09.540 | as part of meta?
00:27:11.300 | The tension here is,
00:27:13.820 | do you wanna release the magic sauce?
00:27:16.060 | That's one tension.
00:27:18.020 | And the other one is,
00:27:19.940 | do you wanna put a powerful tool in the hands
00:27:22.900 | of bad actors,
00:27:24.820 | even though it probably has a huge amount
00:27:26.900 | of positive impact also.
00:27:29.140 | - Yeah, I mean, again,
00:27:29.980 | I think for the stage that we're at
00:27:31.500 | in the development of AI,
00:27:32.860 | I don't think anyone looks at the current state of things
00:27:35.260 | and thinks that this is super intelligence.
00:27:37.500 | And the models that we're talking about,
00:27:40.900 | the LLAMA models here are,
00:27:43.460 | generally an order of magnitude smaller
00:27:46.220 | than what OpenAI or Google are doing.
00:27:48.020 | So, I think that at least for the stage that we're at now,
00:27:52.060 | the equity is balanced strongly in my view
00:27:55.820 | towards doing this more openly.
00:27:58.300 | I think if you got something
00:27:59.580 | that was closer to super intelligence,
00:28:02.780 | then I think you'd have to discuss that more
00:28:05.220 | and think through that a lot more.
00:28:07.860 | And we haven't made a decision yet
00:28:09.500 | as to what we would do if we were in that position,
00:28:11.700 | but I think there's a good chance
00:28:13.580 | that we're pretty far off from that position.
00:28:15.140 | So, I'm not,
00:28:19.020 | I'm certainly not saying that the position
00:28:22.580 | that we're taking on this now
00:28:24.060 | applies to every single thing that we would ever do.
00:28:27.140 | And certainly inside the company,
00:28:29.340 | we probably do more open source work
00:28:30.980 | than most of the other big tech companies,
00:28:34.340 | but we also don't open source everything.
00:28:36.060 | We're in a lot of our,
00:28:37.220 | the core kind of app code for WhatsApp
00:28:40.500 | or Instagram or something.
00:28:41.580 | I mean, we're not open sourcing that.
00:28:42.900 | It's not like a general enough piece of software
00:28:45.820 | that would be useful for a lot of people
00:28:47.380 | to do different things.
00:28:49.240 | Whereas the software that we do,
00:28:53.340 | whether it's like an open source server design
00:28:55.740 | or basically things like Memcache,
00:29:00.140 | like a good, it was probably our earliest project
00:29:03.340 | that I worked on.
00:29:05.220 | It was probably one of the last things that I coded
00:29:07.460 | and led directly for the company.
00:29:09.360 | But basically this caching tool for quick data retrieval.
00:29:15.820 | These are things that are just broadly useful
00:29:19.900 | across anything that you wanna build.
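As a concrete illustration of that kind of broadly useful infrastructure, a cache-aside pattern against a memcached server might look like the sketch below, using the pymemcache client; the key scheme and the load_user_from_db helper are hypothetical.

```python
# Cache-aside sketch against a memcached server, using the pymemcache client.
# The key scheme and load_user_from_db() are hypothetical illustrations.
import json
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def load_user_from_db(user_id: int) -> dict:
    # Placeholder for a slower database query.
    return {"id": user_id, "name": "example"}

def get_user(user_id: int) -> dict:
    key = f"user:{user_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)                # fast path: served from memory
    user = load_user_from_db(user_id)            # slow path: hit the database
    cache.set(key, json.dumps(user), expire=60)  # keep it cached for 60 seconds
    return user

print(get_user(42))
```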
00:29:22.020 | And I think that some of the language models now
00:29:25.660 | have that feel as well as some of the other things
00:29:28.100 | that we're building, like the translation tool
00:29:29.660 | that you just referenced.
00:29:31.300 | - So text to speech and speech to text,
00:29:34.340 | you've expanded it from around 100 languages
00:29:36.340 | to more than 1100 languages.
00:29:38.700 | - Yeah. - And you can identify
00:29:39.740 | more than, the model can identify
00:29:41.300 | more than 4,000 spoken languages,
00:29:44.020 | which is 40 times more than any known previous technology.
00:29:47.500 | To me, that's really, really, really exciting
00:29:49.740 | in terms of connecting the world,
00:29:51.740 | breaking down barriers that language creates.
00:29:54.680 | - Yeah, I think being able to translate
00:29:55.900 | between all of these different pieces in real time,
00:29:59.740 | this has been a kind of common sci-fi idea
00:30:04.740 | that we'd all have, whether it's an earbud or glasses
00:30:10.580 | or something that can help translate in real time
00:30:12.980 | between all these different languages.
00:30:15.140 | And that's one that I think technology
00:30:16.460 | is basically delivering now.
00:30:19.780 | So I think, yeah, I think that's pretty exciting.
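The multilingual speech work mentioned here has been released publicly; transcribing an audio clip with it might look like the sketch below, using the Hugging Face transformers ASR pipeline, where the MMS checkpoint id and the local file name are assumptions.

```python
# Speech-to-text sketch with the Hugging Face transformers ASR pipeline.
# "facebook/mms-1b-all" is assumed to be the released MMS checkpoint id,
# and "clip.wav" is a placeholder for a local audio file.
from transformers import pipeline

asr = pipeline("automatic-speech-recognition", model="facebook/mms-1b-all")

result = asr("clip.wav")  # the pipeline loads and resamples the audio itself
print(result["text"])
```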
00:30:22.900 | - You mentioned the next version of LLAMA.
00:30:24.920 | What can you say about the next version of LLAMA?
00:30:27.880 | What can you say about what were you working on
00:30:31.520 | in terms of release, in terms of the vision for that?
00:30:35.080 | - Well, a lot of what we're doing
00:30:36.840 | is taking the first version, which was primarily
00:30:40.720 | this research version, and trying to now build a version
00:30:44.680 | that has all of the latest state-of-the-art
00:30:49.640 | safety precautions built in.
00:30:52.180 | And we're using some more data to train it
00:30:56.920 | from across our services.
00:30:58.640 | But a lot of the work that we're doing internally
00:31:02.560 | is really just focused on making sure
00:31:04.360 | that this is as aligned and responsible as possible.
00:31:09.360 | And we're building a lot of our own,
00:31:12.920 | we're talking about kind of the open source infrastructure,
00:31:15.900 | but the main thing that we focus on building here,
00:31:20.120 | a lot of product experiences to help people connect
00:31:22.000 | and express themselves.
00:31:22.860 | So we're gonna, I've talked about a bunch of this stuff,
00:31:26.700 | but you'll have an assistant that you can talk to
00:31:30.940 | in WhatsApp.
00:31:31.780 | I think in the future, every creator will have
00:31:36.300 | kind of an AI agent that can kind of act on their behalf,
00:31:39.380 | that their fans can talk to.
00:31:41.180 | I wanna get to the point where every small business
00:31:43.700 | basically has an AI agent that people can talk to
00:31:46.940 | for, to do commerce and customer support
00:31:49.540 | and things like that.
00:31:50.360 | So there are gonna be all these different things
00:31:53.260 | and LLAMA or the language model underlying this
00:31:57.580 | is basically gonna be the engine that powers that.
00:32:00.500 | The reason to open source it is that,
00:32:02.700 | as we did with the first version,
00:32:06.660 | is that it basically it unlocks a lot of innovation
00:32:11.660 | in the ecosystem, will make our products better as well.
00:32:15.760 | And also gives us a lot of valuable feedback
00:32:17.680 | on security and safety,
00:32:19.140 | which is important for making this good.
00:32:21.100 | But yeah, I mean, the work that we're doing
00:32:23.460 | to advance the infrastructure,
00:32:25.340 | it's basically at this point,
00:32:27.740 | taking it beyond a research project
00:32:29.900 | into something which is ready to be
00:32:32.020 | kind of core infrastructure,
00:32:33.700 | not only for our own products,
00:32:35.140 | but hopefully for a lot of other things out there too.
00:32:39.140 | - Do you think the LLAMA or the language model
00:32:41.840 | underlying that version two will be open sourced?
00:32:47.980 | Do you have internal debate around that,
00:32:49.900 | the pros and cons and so on?
00:32:51.540 | - This is, I mean, we were talking about the debates
00:32:53.260 | that we have internally and I think,
00:32:55.060 | I think the question is how to do it, right?
00:32:58.840 | I mean, I think we did the research license for V1
00:33:03.700 | and I think the big thing that we're thinking about
00:33:07.500 | is basically like what's the right way.
00:33:11.180 | - So there was a leak that happened.
00:33:13.100 | I don't know if you can comment on it for V1.
00:33:15.740 | We released it as a research project
00:33:18.020 | for researchers to be able to use,
00:33:21.700 | but in doing so we put it out there.
00:33:23.180 | So we were very clear that anyone who uses the code
00:33:28.060 | and the weights doesn't have a commercial license
00:33:30.100 | to put into products.
00:33:31.020 | And we've generally seen people respect that, right?
00:33:33.860 | It's like you don't have any reputable companies
00:33:36.460 | that are basically trying to put this
00:33:37.660 | into their commercial products.
00:33:40.480 | But yeah, but by sharing it with so many researchers,
00:33:45.500 | it did leave the building.
00:33:46.740 | - But what have you learned from that process
00:33:49.060 | that you might be able to apply to V2
00:33:52.260 | about how to release it safely, effectively,
00:33:55.100 | if you release it?
00:33:57.620 | - Yeah, well, I mean, I think a lot of the feedback,
00:33:59.440 | like I said, is just around different things around,
00:34:03.380 | how do you fine tune models to make them more aligned
00:34:06.620 | and safer?
00:34:07.440 | And you see all the different data recipes that,
00:34:10.540 | you mentioned a lot of different projects
00:34:14.140 | that are based on this.
00:34:14.980 | And there's one at Berkeley,
00:34:16.420 | there's just like all over.
00:34:18.420 | And people have tried a lot of different things
00:34:23.420 | and we've tried a bunch of stuff internally.
00:34:25.380 | So kind of we're making progress here,
00:34:29.200 | but also we're able to learn from some of the best ideas
00:34:31.620 | in the community.
00:34:32.460 | And I think we wanna just continue pushing that forward.
00:34:36.420 | But I don't have any news to announce on this,
00:34:40.460 | if that's what you're asking.
00:34:41.780 | I mean, this is a thing that we're still kind of,
00:34:46.780 | actively working through the right way
00:34:51.420 | to move forward here.
00:34:52.260 | - The details of the secret sauce
00:34:54.460 | are still being developed, I see.
00:34:57.540 | Can you comment on, what do you think of the thing
00:35:00.620 | that worked for GPT,
00:35:01.860 | which is the reinforcement learning with human feedback?
00:35:04.200 | So doing this alignment process,
00:35:06.820 | do you find it interesting?
00:35:08.740 | And as part of that, let me ask,
00:35:10.100 | 'cause I talked to Yann LeCun before talking to you today.
00:35:13.020 | He asked me to ask, or suggested that I ask,
00:35:16.540 | do you think LLM fine tuning
00:35:19.380 | will need to be crowdsourced Wikipedia style?
00:35:22.240 | So crowdsourcing.
00:35:24.780 | So this kind of idea of how to integrate the human
00:35:29.180 | in the fine tuning of these foundation models.
00:35:32.980 | - Yeah, I think that's a really interesting idea
00:35:35.540 | that I've talked to Yann about a bunch.
00:35:39.300 | And we were talking about,
00:35:43.060 | how do you basically train these models
00:35:45.620 | to be as safe and aligned and responsible as possible?
00:35:50.180 | And different groups out there who are doing development
00:35:53.260 | test different data recipes in fine tuning.
00:35:57.120 | But this idea that you just mentioned is,
00:36:00.520 | that at the end of the day,
00:36:04.180 | instead of having kind of one group fine tune some stuff
00:36:07.100 | and then another group,
00:36:09.020 | produce a different fine tuning recipe
00:36:10.660 | and then us trying to figure out
00:36:12.760 | which one we think works best
00:36:13.980 | to produce the most aligned model.
00:36:16.120 | I do think that it would be nice
00:36:21.660 | if you could get to a point where you had
00:36:23.920 | a Wikipedia style collaborative way
00:36:27.620 | for a kind of a broader community to fine tune it as well.
00:36:33.780 | Now, there's a lot of challenges in that,
00:36:36.020 | both from an infrastructure and community management
00:36:41.020 | and product perspective about how you do that.
00:36:42.740 | So I haven't worked that out yet.
00:36:45.140 | But as an idea, I think it's quite compelling.
00:36:48.740 | And I think it goes well with the ethos
00:36:50.940 | of open sourcing the technology is also finding a way
00:36:54.060 | to have a kind of community driven training of it.
00:36:59.060 | But I think that there are a lot of questions on this.
00:37:03.380 | In general, these questions around
00:37:05.420 | what's the best way to produce aligned AI models,
00:37:09.700 | it's very much a research area.
00:37:12.060 | And it's one that I think we will need to make
00:37:14.340 | as much progress on as the kind of core intelligence
00:37:17.220 | capability of the models themselves.
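For reference on the reinforcement learning from human feedback idea raised above: a common first step is training a reward model on human preference pairs, then using it to steer the language model. A minimal sketch of that pairwise preference loss in PyTorch, with random scores standing in for a real reward model's outputs:

```python
# Pairwise preference loss used when training an RLHF reward model (sketch only).
# r_chosen / r_rejected stand in for reward-model scores on the human-preferred
# and human-rejected responses to the same prompts; here they are random tensors.
import torch
import torch.nn.functional as F

r_chosen = torch.randn(8, requires_grad=True)    # scores for preferred responses
r_rejected = torch.randn(8, requires_grad=True)  # scores for rejected responses

# Bradley-Terry style objective: push each chosen score above its rejected pair.
loss = -F.logsigmoid(r_chosen - r_rejected).mean()
loss.backward()
print(float(loss))
```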
00:37:21.100 | - Well, I just did a conversation with Jimmy Wales,
00:37:23.420 | the founder of Wikipedia.
00:37:24.620 | And to me, Wikipedia is one of the greatest websites
00:37:28.300 | ever created.
00:37:29.860 | And it's a kind of a miracle that it works.
00:37:31.900 | And I think it has to do with something that you mentioned,
00:37:33.980 | which is community.
00:37:35.360 | You have a small community of editors
00:37:38.180 | that somehow work together well.
00:37:40.500 | And they handle very controversial topics
00:37:45.500 | and they handle it with balance and with grace,
00:37:48.820 | despite sort of the attacks that will often happen.
00:37:51.820 | - A lot of the time.
00:37:52.660 | I mean, it has issues just like any other human system.
00:37:56.320 | But yes, I mean, the balance is,
00:37:58.580 | I mean, it's amazing what they've been able to achieve,
00:38:01.460 | but it's also not perfect.
00:38:03.020 | And I think that there's still a lot of challenges.
00:38:06.880 | - Right, the more controversial the topic,
00:38:09.880 | the more difficult the journey towards,
00:38:14.880 | quote unquote, truth or knowledge or wisdom
00:38:18.060 | that Wikipedia tries to capture.
00:38:20.020 | In the same way AI models,
00:38:21.300 | we need to be able to generate those same things,
00:38:25.000 | truth, knowledge, and wisdom.
00:38:26.800 | And how do you align those models
00:38:28.480 | that they generate something that is closest to truth?
00:38:33.480 | There's these concerns about misinformation,
00:38:38.060 | all this kind of stuff that nobody can define.
00:38:41.480 | And it's something that we together
00:38:44.880 | as a human species have to define.
00:38:46.840 | Like what is truth and how to help AI systems generate that.
00:38:50.640 | 'Cause one of the things language models do really well
00:38:53.160 | is generate convincing sounding things
00:38:55.940 | that can be completely wrong.
00:38:58.600 | And so how do you align it to be less wrong?
00:39:03.600 | And part of that is the training
00:39:06.760 | and part of that is the alignment
00:39:08.720 | and however you do the alignment stage.
00:39:10.680 | And just like you said, it's a very new
00:39:14.000 | and a very open research problem.
00:39:16.640 | - Yeah, and I think that there's also a lot of questions
00:39:20.200 | about whether the current architecture for LLMs,
00:39:25.320 | as you continue scaling it, what happens.
00:39:28.180 | I mean, a lot of what's been exciting in the last year
00:39:33.640 | is that there's clearly a qualitative breakthrough
00:39:35.960 | where with some of the GPT models that OpenAI put out
00:39:40.560 | and that others have been able to do as well.
00:39:43.560 | I think it reached a kind of level of quality
00:39:45.840 | where people are like, wow, this feels different
00:39:48.920 | and like it's gonna be able to be the foundation
00:39:52.840 | for building a lot of awesome products
00:39:55.000 | and experiences and value.
00:39:57.000 | But I think that the other realization that people have
00:39:58.920 | is wow, we just made a breakthrough.
00:40:00.880 | If there are other breakthroughs quickly,
00:40:05.840 | then I think that there's the sense
00:40:06.840 | that maybe we're closer to general intelligence.
00:40:11.120 | But I think that that idea is predicated on the idea
00:40:13.320 | that I think people believe that there's still generally
00:40:16.440 | a bunch of additional breakthroughs to make
00:40:18.280 | and that we just don't know
00:40:21.440 | how long it's gonna take to get there.
00:40:23.120 | And one view that some people have,
00:40:25.040 | this doesn't tend to be my view as much,
00:40:27.920 | is that simply scaling the current LLMs
00:40:32.160 | and getting to higher parameter count models by itself
00:40:35.720 | will get to something that is closer
00:40:37.440 | to general intelligence.
00:40:40.280 | But I don't know, I tend to think
00:40:43.160 | that there's probably more fundamental steps
00:40:48.160 | that need to be taken along the way there.
00:40:50.000 | But still, the leaps taken with this extra alignment step
00:40:55.000 | is quite incredible, quite surprising to a lot of folks.
00:40:58.560 | And on top of that, when you start to have
00:41:02.160 | hundreds of millions of people potentially
00:41:04.200 | using a product that integrates that,
00:41:07.200 | you can start to see civilization transforming effects
00:41:10.800 | before you achieve super, quote unquote, super intelligence.
00:41:15.020 | It could be super transformative
00:41:18.040 | without being a super intelligence.
00:41:20.200 | - Oh yeah, I mean, I think that there are gonna be
00:41:22.480 | a lot of amazing products and value that can be created
00:41:26.040 | with the current level of technology.
00:41:27.880 | To some degree, I'm excited to work on a lot
00:41:33.960 | of those products over the next few years.
00:41:35.720 | And I think it would just create
00:41:37.760 | a tremendous amount of whiplash
00:41:39.440 | if the number of breakthroughs keeps,
00:41:41.840 | like if there keep on being stacked breakthroughs,
00:41:44.000 | because I think to some degree,
00:41:45.320 | industry and the world needs some time
00:41:47.040 | to kind of build these breakthroughs
00:41:49.200 | into the products and experiences that we all use
00:41:51.920 | so that we can actually benefit from them.
00:41:53.980 | But I don't know, I think that there's just
00:41:58.880 | like an awesome amount of stuff to do.
00:42:02.480 | I mean, I think about like all of the,
00:42:05.240 | I don't know, small businesses
00:42:06.520 | or individual entrepreneurs out there who,
00:42:08.740 | you know, now we're gonna be able to get help coding
00:42:13.520 | the things that they need to go build things
00:42:15.280 | or designing the things that they need
00:42:16.880 | or we'll be able to use these models
00:42:20.060 | to be able to do customer support
00:42:21.480 | for the people that they're serving,
00:42:24.080 | you know, over WhatsApp without having to,
00:42:26.040 | you know, I think that that's just gonna be,
00:42:28.520 | I just think that this is all gonna be super exciting.
00:42:32.000 | It's gonna create better experiences for people
00:42:34.840 | and just unlock a ton of innovation and value.
00:42:37.760 | So I don't know if you know, but you know, what is it?
00:42:41.360 | Over 3 billion people use WhatsApp, Facebook and Instagram.
00:42:47.240 | So any kind of AI-fueled products that go into that,
00:42:51.880 | like we're talking about anything with LLMs
00:42:54.760 | will have a tremendous amount of impact.
00:42:57.000 | Do you have ideas and thoughts about possible products
00:43:01.800 | that might start being integrated
00:43:04.960 | into these platforms used by so many people?
00:43:08.960 | - Yeah, I think there's three main categories
00:43:11.760 | of things that we're working on.
00:43:13.360 | The first that I think is probably the most interesting
00:43:18.360 | is, you know, there's this notion of like,
00:43:27.720 | you're gonna have an assistant or an agent
00:43:30.600 | who you can talk to.
00:43:31.420 | And I think probably the biggest thing
00:43:33.600 | that's different about my view of how this plays out
00:43:36.280 | from what I see with OpenAI and Google and others
00:43:40.440 | is, you know, everyone else is building like
00:43:43.160 | the one singular AI, right?
00:43:45.360 | It's like, okay, you talk to chat GPT
00:43:47.400 | or you talk to Bard or you talk to Bing.
00:43:50.440 | And my view is that there are going to be
00:43:55.440 | a lot of different AIs that people are gonna wanna engage
00:43:59.520 | with just like you wanna use, you know,
00:44:02.120 | a number of different apps for different things
00:44:03.920 | and you have relationships with different people
00:44:06.380 | in your life who fill different emotional roles for you.
00:44:10.100 | And so I think that they're gonna be,
00:44:16.600 | people have a reason that I think you don't just want
00:44:18.820 | like a singular AI.
00:44:20.260 | And that I think is probably the biggest distinction
00:44:23.380 | in terms of how I think about this.
00:44:25.460 | And a bunch of these things, I think you'll want an assistant.
00:44:28.700 | I mean, I mentioned a couple of these before.
00:44:30.980 | I think like every creator who you interact with
00:44:33.420 | will ultimately want some kind of AI
00:44:35.900 | that can proxy them and be something
00:44:38.580 | that their fans can interact with
00:44:40.360 | or that allows them to interact with their fans.
00:44:43.520 | This is like the common creator promise.
00:44:46.380 | Everyone's trying to build a community
00:44:48.620 | and engage with people and they want tools
00:44:50.260 | to be able to amplify themselves more
00:44:51.780 | and be able to do that.
00:44:52.900 | But you only have 24 hours in a day.
00:44:57.700 | So I think having the ability to basically
00:45:02.000 | like bottle up your personality
00:45:04.220 | and or like give your fans information
00:45:08.460 | about when you're performing a concert or something like that.
00:45:10.900 | I mean, that I think is gonna be something
00:45:12.540 | that's super valuable, but it's not just that,
00:45:14.980 | you know, again, it's not this idea that
00:45:16.500 | I think people are gonna want just one singular AI.
00:45:18.700 | I think you're gonna wanna interact
00:45:20.900 | with a lot of different entities.
00:45:22.440 | And then I think there's the business version of this too,
00:45:24.340 | which we've touched on a couple of times, which is,
00:45:26.840 | I think every business in the world is gonna want
00:45:31.580 | basically an AI that, you know,
00:45:34.700 | it's like you have your page on Instagram or Facebook
00:45:37.660 | or WhatsApp or whatever, and you wanna point people
00:45:40.260 | to an AI that people can interact with,
00:45:43.540 | but you wanna know that that AI
00:45:44.940 | is only gonna sell your products.
00:45:46.220 | You don't want it recommending your competitors' stuff.
00:45:49.220 | So it's not like there can be like just, you know,
00:45:51.940 | one singular AI that can answer all the questions
00:45:55.340 | for a person because, you know,
00:45:56.820 | that AI might not actually be aligned with you
00:45:59.700 | as a business to really just do the best job
00:46:03.180 | providing support for your product.
00:46:05.540 | So I think that there's gonna be a clear need
00:46:08.020 | in the market and in people's lives
00:46:11.860 | for there to be a bunch of these.
00:46:14.180 | - Part of that is figuring out the research,
00:46:17.260 | the technology that enables the personalization
00:46:20.220 | that you're talking about.
00:46:21.460 | So not one centralized God-like LLM,
00:46:24.820 | but one, just a huge diversity of them
00:46:28.340 | that's fine-tuned to particular needs, particular styles,
00:46:31.340 | particular businesses, particular brands,
00:46:34.420 | all that kind of stuff.
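To make the "huge diversity of fine-tuned models" idea concrete, here is a minimal sketch of how one small business could get its own specialized assistant by training a lightweight adapter on top of a shared open base model. This assumes the Hugging Face transformers/peft/datasets libraries; the model name, the dog-walker data, and all paths are placeholders, and this is not a description of Meta's actual training pipeline.

```python
# A minimal sketch (not Meta's pipeline): give one small business its own AI by
# training a tiny LoRA adapter on top of a shared open base model, so thousands
# of specialized assistants can reuse the same backbone weights.
from transformers import (AutoModelForCausalLM, AutoTokenizer, Trainer,
                          TrainingArguments, DataCollatorForLanguageModeling)
from peft import LoraConfig, get_peft_model
from datasets import Dataset

base = "meta-llama/Llama-2-7b-hf"   # placeholder; any open causal LM works here
tok = AutoTokenizer.from_pretrained(base)
tok.pad_token = tok.eos_token
model = AutoModelForCausalLM.from_pretrained(base)

# One small adapter per business or creator persona instead of one giant model.
model = get_peft_model(model, LoraConfig(
    r=8, lora_alpha=16, target_modules=["q_proj", "v_proj"], task_type="CAUSAL_LM"))

# Toy persona data: Q&A in the voice of a single dog-walking business.
examples = [{"text": "Q: Do you walk large dogs? A: Yes, up to 80 lbs, weekday mornings only."}]
ds = Dataset.from_list(examples).map(
    lambda e: tok(e["text"], truncation=True, max_length=256), remove_columns=["text"])

Trainer(
    model=model,
    args=TrainingArguments(output_dir="dogwalker-adapter",
                           per_device_train_batch_size=1, num_train_epochs=1),
    train_dataset=ds,
    data_collator=DataCollatorForLanguageModeling(tok, mlm=False),
).train()
model.save_pretrained("dogwalker-adapter")   # a few MB of weights per specialized AI
```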
00:46:35.540 | - And also enabling, just enabling people
00:46:37.460 | to create them really easily for the, you know,
00:46:40.140 | for your own business, or if you're a creator
00:46:42.660 | to be able to help you engage with your fans.
00:46:45.220 | And I mean, that's, so yeah,
00:46:48.420 | I think that there's a clear kind of interesting
00:46:51.420 | product direction here that I think is fairly unique
00:46:55.060 | from what any of the other big companies are taking.
00:46:59.500 | It also aligns well with this sort of open source approach,
00:47:02.100 | because again, we sort of believe
00:47:04.020 | in this more community-oriented, more democratic approach
00:47:08.220 | to building out the products and technology around this.
00:47:11.140 | We don't think that there's gonna be the one true thing.
00:47:12.900 | We think that there should be kind of a lot of development.
00:47:16.060 | So that part of things I think
00:47:18.300 | is gonna be really interesting.
00:47:19.180 | And we could go, probably spend a lot of time
00:47:21.500 | talking about that and the kind of implications
00:47:23.940 | of that approach being different
00:47:26.980 | from what others are taking.
00:47:28.420 | But there's a bunch of other simpler things
00:47:31.180 | that I think we're also gonna do,
00:47:32.140 | just going back to your question around
00:47:34.540 | how this finds its way into, like, what do we build?
00:47:37.140 | There are gonna be a lot of simpler things around,
00:47:40.780 | okay, you post photos on Instagram and Facebook
00:47:47.060 | and, you know, and WhatsApp and Messenger,
00:47:49.580 | and like, you want the photos to look as good as possible.
00:47:51.820 | So like having an AI that you can just like take a photo
00:47:54.580 | and then just tell it like,
00:47:56.140 | okay, I wanna edit this thing or describe this.
00:47:58.260 | It's like, I think we're gonna have tools
00:47:59.700 | that are just way better
00:48:01.340 | than what we've historically had on this.
00:48:03.900 | And that's more in the image and media generation side
00:48:07.140 | than the large language model side,
00:48:08.700 | but it all kind of plays off of advances in the same space.
00:48:13.540 | So there are a lot of tools
00:48:15.460 | that I think are just gonna get built
00:48:16.420 | into every one of our products.
00:48:17.620 | I think every single thing that we do
00:48:19.580 | is gonna basically get evolved in this direction, right?
00:48:22.500 | It's like in the future,
00:48:23.900 | if you're advertising on our services,
00:48:25.500 | like, do you need to make your own kind of ad creative?
00:48:29.820 | It's no, you'll just, you know, you just tell us,
00:48:33.260 | okay, I'm a dog walker and I am willing to walk people's dogs
00:48:38.260 | and help me find the right people
00:48:41.900 | and like create the ad unit that will perform the best
00:48:45.900 | and like give an objective to the system.
00:48:48.740 | And it just kind of like connects you with the right people.
00:48:52.380 | - Well, that's a super powerful idea
00:48:54.820 | of generating the language,
00:48:57.100 | almost like a rigorous A/B testing for you
00:49:02.100 | that works to find the best customer for your thing.
00:49:07.460 | I mean, to me, advertisement, when done well,
00:49:10.860 | just finds a good match between a human being
00:49:14.820 | and a thing that will make that human being happy.
00:49:18.060 | - Yeah, totally.
00:49:19.020 | - And do that as efficiently as possible.
00:49:21.180 | - When it's done well, people actually like it.
00:49:23.660 | You know, I think that there's a lot of examples
00:49:25.700 | where it's not done well and it's annoying
00:49:27.460 | and I think that that's what kind of gives it a bad rap.
00:49:29.940 | But yeah, and a lot of this stuff is possible today.
00:49:33.380 | I mean, obviously A/B testing stuff
00:49:34.900 | is built into a lot of these frameworks.
00:49:36.860 | The thing that's new is having technology
00:49:39.380 | that can generate the ideas for you about what to A/B test.
00:49:42.340 | So I think that that's exciting.
00:49:43.260 | So this will just be across like everything
00:49:46.020 | that we're doing,
00:49:46.860 | all the metaverse stuff that we're doing, right?
00:49:48.420 | It's like you wanna create worlds in the future,
00:49:50.340 | you'll just describe them
00:49:51.540 | and it'll create the code for you.
00:49:54.420 | - So natural language becomes the interface we use
00:49:57.500 | for all the ways we interact with the computer,
00:50:01.260 | with the digital--
00:50:03.820 | - More of them, yeah, yeah, totally.
00:50:05.820 | - Yeah, which is what everyone can do
00:50:07.740 | using natural language.
00:50:08.860 | And with translation, you can do it in any kind of language.
00:50:11.820 | I mean, for the personalization,
00:50:14.900 | it's really, really, really interesting.
00:50:17.620 | - Yeah.
00:50:18.460 | - It unlocks so many possible things.
00:50:20.660 | I mean, I, for one, look forward
00:50:22.140 | to creating a copy of myself.
00:50:24.580 | - I know, we talked about this last time.
00:50:26.180 | - But this has, since the last time, this becomes--
00:50:29.700 | - Now we're closer.
00:50:31.580 | - Much closer.
00:50:32.700 | Like I can literally just having interacted
00:50:34.540 | with some of these language models,
00:50:36.100 | I can see the absurd situation
00:50:37.740 | where I'll have a large or a Lex language model,
00:50:44.780 | and I'll have to have a conversation with him
00:50:47.980 | about like, hey, listen,
00:50:50.060 | like you're just getting out of line
00:50:51.540 | and having a conversation where you fine tune that thing
00:50:53.580 | to be a little bit more respectful or something like this.
00:50:56.020 | I mean, that's going to be the,
00:51:00.020 | that seems like an amazing product
00:51:03.140 | for businesses, for humans,
00:51:06.500 | just not just the assistant that's facing the individual,
00:51:11.340 | but the assistant that represents the individual
00:51:14.300 | to the public, both directions.
00:51:18.060 | There's basically a layer that is the AI system
00:51:22.220 | through which you interact with the outside world,
00:51:25.780 | with the outside world that has humans in it.
00:51:28.620 | That's really interesting.
00:51:30.100 | And you that have social networks
00:51:33.540 | that connect billions of people,
00:51:35.740 | it seems like a heck of a large scale place
00:51:39.860 | to test some of this stuff out.
00:51:42.260 | - Yeah, I mean, I think part of the reason
00:51:43.620 | why creators will want to do this
00:51:45.740 | is because they already have the communities
00:51:47.420 | on our services.
00:51:49.260 | Yeah, and a lot of the interface for this stuff today
00:51:53.660 | are chat type interfaces.
00:51:55.380 | And between WhatsApp and Messenger,
00:51:58.660 | I think that those are just great,
00:52:00.740 | great ways to interact with people.
00:52:03.660 | - So some of this is philosophy,
00:52:04.980 | but do you see a near term future
00:52:07.820 | where you have some of the people you're friends with
00:52:11.460 | are AI systems on these social networks,
00:52:15.780 | on Facebook, on Instagram, even on WhatsApp,
00:52:19.500 | having conversations where some heterogeneous,
00:52:23.500 | some is human, some is AI?
00:52:25.340 | - I think we'll get to that.
00:52:26.940 | And if only just empirically looking at,
00:52:31.240 | then Microsoft released this thing called Chowice
00:52:35.660 | several years ago in China,
00:52:38.020 | and it was a pre-LLM chatbot technology
00:52:41.900 | that was a lot simpler than what's possible today.
00:52:46.180 | And I think it was like tens of millions of people
00:52:48.300 | were using this and just really,
00:52:51.340 | it became quite attached and built relationships with it.
00:52:54.580 | And I think that there's services today like Replika
00:52:58.820 | where people are doing things like that.
00:53:01.340 | So I think that there's certainly needs
00:53:07.020 | for companionship that people have, older people.
00:53:11.020 | And I think most people probably don't have as many friends
00:53:16.740 | as they would like to have.
00:53:17.980 | If you look at,
00:53:18.820 | there's some interesting demographic studies
00:53:21.500 | around the average person has,
00:53:25.700 | the number of close friends that they have is fewer today
00:53:30.220 | than it was 15 years ago.
00:53:32.060 | And I mean, that gets to like,
00:53:34.580 | this is like the core thing that I think about
00:53:38.180 | in terms of building services that help connect people.
00:53:40.740 | So I think you'll get tools that help people connect
00:53:43.940 | with each other are gonna be the primary thing
00:53:47.260 | that we wanna do.
00:53:48.740 | So you can imagine AI assistants that,
00:53:52.740 | just do a better job of reminding you
00:53:54.140 | when it's your friend's birthday
00:53:55.180 | and how you could celebrate them.
00:53:56.660 | Right, it's like right now we have like the little box
00:53:58.700 | in the corner of the website that tells you
00:54:00.780 | whose birthday it is and stuff like that.
00:54:02.340 | But it's, but at some level you don't just wanna like
00:54:06.660 | send everyone a note that says the same note
00:54:08.660 | saying happy birthday with an emoji.
00:54:10.660 | Right, so having something that's more of an,
00:54:13.940 | a social assistant in that sense,
00:54:16.100 | and like that can update you on what's going on
00:54:19.180 | in their life and like how you can reach out
00:54:22.180 | to them effectively, help you be a better friend.
00:54:24.900 | I think that that's something that's super powerful too.
00:54:28.140 | But yeah, beyond that,
00:54:31.420 | and there are all these different flavors
00:54:33.900 | of kind of personal AIs that I think could exist.
00:54:38.180 | So I think an assistant is sort of the kind of simplest one
00:54:41.380 | to wrap your head around,
00:54:42.580 | but I think a mentor or a life coach,
00:54:47.580 | if someone who can give you advice,
00:54:50.300 | who's maybe like a bit of a cheerleader
00:54:51.820 | who can help pick you up through all the challenges
00:54:53.740 | that inevitably we all go through on a daily basis.
00:54:58.140 | And that there's probably,
00:54:59.940 | some role for something like that.
00:55:01.540 | And then all the way,
00:55:03.220 | you can probably just go through a lot of the different type
00:55:05.580 | of kind of functional relationships
00:55:08.340 | that people have in their life.
00:55:10.020 | And I would bet that there will be companies out there
00:55:12.700 | that take a crack at a lot of these things.
00:55:15.620 | So I don't know,
00:55:17.140 | I think it's part of the interesting innovation
00:55:18.820 | that's gonna exist is that there are certainly a lot,
00:55:22.060 | like education tutors, right?
00:55:25.020 | It's like, I mean, I just look at my kids learning to code
00:55:28.860 | and they love it,
00:55:31.060 | but it's like they get stuck on a question
00:55:33.300 | and they have to wait till I can help answer it, right?
00:55:36.460 | Or someone else who they know
00:55:37.820 | can help answer the question in the future.
00:55:39.580 | They'll just, there'll be like a coding assistant
00:55:41.900 | that they have that is designed to be perfect
00:55:45.780 | for teaching a five and a seven year old how to code.
00:55:47.980 | And they'll just be able to ask questions all the time
00:55:50.900 | and be extremely patient.
00:55:53.340 | It's never gonna get annoyed at them, right?
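A sketch of the "infinitely patient coding tutor" idea: the persona lives entirely in a system prompt, and a loop just keeps answering. The `llm_chat` function is a placeholder for any chat-completion API (an open Llama model, a hosted service, etc.), and the prompt wording is purely illustrative.

```python
# Sketch of a kid-friendly coding tutor: the persona is a persistent system
# prompt plus an answer loop. llm_chat() is a placeholder for any chat API.
TUTOR_SYSTEM_PROMPT = (
    "You are a coding tutor for a 5-to-7-year-old. Use short sentences, one idea "
    "at a time, and lots of encouragement. Never express impatience, and always "
    "end by inviting another question."
)

def llm_chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-completion API here")

def tutor_session() -> None:
    history = [{"role": "system", "content": TUTOR_SYSTEM_PROMPT}]
    while True:
        question = input("kid> ")
        if not question:
            break
        history.append({"role": "user", "content": question})
        answer = llm_chat(history)            # model sees the whole conversation so far
        history.append({"role": "assistant", "content": answer})
        print("tutor>", answer)
```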
00:55:56.580 | I think that there are all these different
00:55:58.740 | kind of relationships or functional relationships
00:56:00.860 | that we have in our lives that are really interesting.
00:56:05.640 | And I think one of the big questions is like,
00:56:07.500 | okay, is this all gonna just get bucketed into
00:56:09.860 | one singular AI?
00:56:12.300 | I just don't think so.
00:56:14.600 | - Do you think about,
00:56:15.580 | this is actually a question from Reddit,
00:56:18.060 | what the long-term effects of human communication
00:56:21.300 | when people can "talk with" others through a chatbot
00:56:26.140 | that augments their language automatically
00:56:28.580 | rather than developing social skills
00:56:30.360 | by making mistakes and learning?
00:56:32.020 | Will people just communicate by grunts in a generation?
00:56:37.260 | Do you think about long-term effects at scale
00:56:40.460 | the integration of AI in our social interaction?
00:56:43.020 | - Yeah, I mean, I think it's mostly good.
00:56:46.620 | I mean, that question was sort of framed in a negative way.
00:56:50.300 | But I mean, we were talking before about
00:56:52.700 | language models helping you communicate with,
00:56:54.900 | it was like language translation,
00:56:56.140 | helping you communicate with people
00:56:56.980 | who don't speak your language.
00:56:58.580 | I mean, at some level,
00:57:00.860 | what all this social technology is doing
00:57:03.260 | is helping people express themselves better to people
00:57:08.260 | in situations where they would otherwise
00:57:13.260 | have a hard time doing that.
00:57:14.180 | So part of it might be okay 'cause you speak
00:57:16.460 | a language that I don't know.
00:57:17.340 | That's a pretty basic one that,
00:57:19.100 | you know, I don't think people are gonna look at that
00:57:20.780 | and say, it's sad that we have the capacity to do that
00:57:24.300 | because I should have just learned your language, right?
00:57:26.180 | I mean, that's pretty high bar.
00:57:27.980 | But overall, I'd say there are all these impediments
00:57:32.980 | and language is an imperfect way
00:57:38.380 | for people to express thoughts and ideas.
00:57:41.580 | It's one of the best that we have.
00:57:43.440 | We have that, we have art, we have code.
00:57:46.540 | - But language is also a mapping of the way you think,
00:57:48.980 | the way you see the world, who you are.
00:57:51.380 | And one of the applications,
00:57:52.940 | I've recently talked to a person who's,
00:57:56.380 | actually a jiu-jitsu instructor.
00:57:58.020 | He said that when he emails parents
00:58:03.100 | about their son and daughter,
00:58:05.860 | that they can improve their discipline in class and so on,
00:58:10.500 | he often finds that he comes off a bit
00:58:13.060 | of more of an asshole than he would like.
00:58:15.020 | So he uses GPT to translate his original email
00:58:18.820 | into a nicer email.
00:58:20.740 | - Yeah, we hear this all the time.
00:58:21.580 | - A more polite one.
00:58:22.900 | - We hear this all the time.
00:58:23.760 | A lot of creators on our services tell us
00:58:25.920 | that one of the most stressful things
00:58:28.220 | is basically negotiating deals with brands and stuff,
00:58:32.260 | like the business side of it.
00:58:33.300 | Because they're like, I mean, they do their thing, right?
00:58:35.860 | And the creators, they're excellent at what they do
00:58:38.860 | and they just wanna connect with their community,
00:58:40.220 | but then they get really stressed.
00:58:42.020 | They go into their DMs and they see some brand
00:58:45.740 | wants to do something with them
00:58:46.680 | and they don't quite know how to negotiate
00:58:49.420 | or how to push back respectfully.
00:58:51.420 | And so I think building a tool
00:58:53.620 | that can actually allow them to do that well
00:58:56.060 | is one simple thing that I think
00:58:58.780 | is just an interesting thing
00:58:59.980 | that we've heard from a bunch of people
00:59:01.740 | that they'd be interested in.
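The email and brand-negotiation examples are essentially one-shot rewrite prompts. A minimal sketch, again with `llm_chat` standing in for whatever chat-completion API is available and the instruction text purely illustrative:

```python
# Sketch: rewrite a blunt draft in a warmer tone while keeping the facts intact.
# llm_chat() is a placeholder for whatever chat-completion API is available.
def llm_chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-completion API here")

def soften(draft: str, audience: str = "a student's parents") -> str:
    return llm_chat([
        {"role": "system",
         "content": "Rewrite messages to be polite and encouraging without changing "
                    "any factual content or dropping any requests."},
        {"role": "user",
         "content": f"Audience: {audience}\nDraft:\n{draft}"},
    ])

# e.g. soften("Your son keeps talking during drills. Fix it.")
```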
00:59:03.340 | But going back to the broader idea,
00:59:06.240 | I don't know.
00:59:10.100 | I mean, Priscilla and I just had our third daughter.
00:59:14.660 | - Congratulations, by the way.
00:59:15.500 | - Thank you.
00:59:16.320 | Thanks.
00:59:17.160 | And it's like one of the saddest things in the world
00:59:20.180 | is seeing your baby cry, right?
00:59:22.180 | But it's like, why is that?
00:59:24.980 | Right?
00:59:25.820 | It's like, well, 'cause babies don't generally
00:59:28.100 | have much capacity to tell you
00:59:30.060 | what they care about otherwise.
00:59:32.860 | Right?
00:59:33.700 | And it's not actually just babies, right?
00:59:35.340 | It's my five-year-old daughter cries too
00:59:38.300 | because she sometimes has a hard time expressing
00:59:41.180 | what matters to her.
00:59:44.180 | And I was thinking about that and I was like,
00:59:46.060 | well, actually a lot of adults get very frustrated too
00:59:48.380 | because they have a hard time expressing things
00:59:51.260 | in a way that, going back to some of the early themes,
00:59:54.540 | that maybe is something that was a mistake
00:59:59.020 | or maybe they have pride or something like,
01:00:00.460 | all these things get in the way.
01:00:01.420 | So I don't know.
01:00:02.740 | I think that all of these different technologies
01:00:05.100 | that can help us navigate the social complexity
01:00:08.900 | and actually be able to better express
01:00:11.020 | what we're feeling and thinking,
01:00:13.060 | I think that's generally all good.
01:00:14.980 | And there are all these concerns like,
01:00:18.100 | okay, are people gonna have worse memories
01:00:19.940 | because you have Google to look things up?
01:00:21.860 | And I think in general, a generation later,
01:00:24.780 | you don't look back and lament that.
01:00:26.580 | I think it's just like, wow, we have so much more capacity
01:00:29.500 | to do so much more now.
01:00:30.860 | And I think that that'll be the case here too.
01:00:32.980 | - You can allocate those cognitive capabilities
01:00:35.180 | to deeper nuance thought.
01:00:38.940 | - Yeah.
01:00:39.780 | - But it's change.
01:00:42.860 | So just like with Google search,
01:00:47.340 | the addition of language models, large language models,
01:00:51.860 | you basically don't have to remember nearly as much.
01:00:54.740 | Just like with Stack Overflow for programming,
01:00:57.860 | now that these language models
01:00:59.540 | can generate code right there.
01:01:01.220 | I mean, I find that I write like maybe 80%, 90%
01:01:04.980 | of the code I write is now generated first and then edited.
01:01:09.980 | I mean, so you don't have to remember
01:01:11.660 | how to write specifics of different functions.
01:01:13.580 | - Oh, but that's great.
01:01:14.900 | And it's also, it's not just the specific coding.
01:01:19.580 | I mean, in the context of a large company like this,
01:01:23.340 | I think before an engineer can sit down to code,
01:01:26.100 | they first need to figure out all of the libraries
01:01:29.500 | and dependencies that tens of thousands of people
01:01:32.820 | have written before them.
01:01:34.740 | And one of the things that I'm excited about
01:01:39.420 | that we're working on is it's not just tools
01:01:42.420 | that help engineers code,
01:01:43.660 | it's tools that can help summarize the whole knowledge base
01:01:46.140 | and help people be able to navigate
01:01:48.380 | all the internal information.
01:01:49.660 | I mean, I think that that's,
01:01:51.060 | in the experiments that I've done with this stuff,
01:01:54.100 | I mean, that's on the public stuff,
01:01:56.580 | you just ask one of these models
01:02:00.580 | to build you a script that does anything
01:02:03.420 | and it basically already understands
01:02:05.100 | what the best libraries are to do that thing
01:02:07.020 | and pulls them in automatically.
01:02:08.220 | It's, I mean, I think that's super powerful.
01:02:09.740 | That was always the most annoying part of coding
01:02:12.860 | was that you had to spend all this time
01:02:14.620 | actually figuring out what the resources were
01:02:16.380 | that you were supposed to import
01:02:17.420 | before you could actually start building the thing.
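One common way to build "tools that can help summarize the whole knowledge base" is retrieval-augmented generation: embed the internal docs once, pull the few most relevant ones for each question, and hand them to the model as context. The sketch below assumes the sentence-transformers library and a placeholder `llm_chat`; the doc snippets are invented, and this is a generic pattern rather than whatever Meta has actually built.

```python
# Sketch of retrieval over an internal knowledge base: embed docs once, then for
# each question retrieve the closest docs and let the model answer from them.
from sentence_transformers import SentenceTransformer, util

docs = [
    "lib_auth: internal wrapper for login flows; import from corp.auth.",
    "lib_imaging: preferred image-resizing utilities; see corp.media.imaging.",
    "Deploy checklist: run lint, unit tests, then push via the release tool.",
]
embedder = SentenceTransformer("all-MiniLM-L6-v2")
doc_vecs = embedder.encode(docs, convert_to_tensor=True)

def llm_chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-completion API here")

def answer(question: str, k: int = 2) -> str:
    q_vec = embedder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_vec, doc_vecs, top_k=k)[0]   # ranked by cosine similarity
    context = "\n".join(docs[h["corpus_id"]] for h in hits)
    return llm_chat([
        {"role": "system", "content": "Answer using only the provided internal docs."},
        {"role": "user", "content": f"Docs:\n{context}\n\nQuestion: {question}"},
    ])
```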
01:02:19.420 | - Yeah, I mean, there's, of course,
01:02:21.460 | the flip side of that, I think for the most part is positive,
01:02:24.620 | but the flip side is if you outsource
01:02:27.860 | that thinking to an AI model,
01:02:31.900 | you might miss nuanced mistakes and bugs.
01:02:36.900 | You lose the skill to find those bugs.
01:02:40.060 | And those bugs might be,
01:02:41.580 | the code looks very convincingly right,
01:02:45.380 | but it's actually wrong in a very subtle way.
01:02:47.920 | But that's the trade-off that we face
01:02:53.060 | as human civilization when we build
01:02:56.300 | more and more powerful tools.
01:02:57.820 | When we stand on the shoulders of taller and taller giants,
01:03:02.060 | we could do more, but then we forget
01:03:03.860 | how to do all the stuff that they did.
01:03:05.760 | It's a weird trade-off.
01:03:08.900 | - Yeah, I agree.
01:03:10.180 | I mean, I think it is very valuable in your life
01:03:13.100 | to be able to do basic things too.
01:03:15.580 | - Do you worry about some of the concerns
01:03:19.580 | of bots being present on social networks?
01:03:22.820 | More and more human-like bots
01:03:24.960 | that are not necessarily trying to do a good thing,
01:03:29.540 | or they might be explicitly trying to do a bad thing,
01:03:32.220 | like phishing scams, like social engineering,
01:03:35.220 | all that kind of stuff,
01:03:36.360 | which has always been a very difficult problem
01:03:38.880 | for social networks,
01:03:39.840 | but now it's becoming almost a more and more
01:03:41.760 | difficult problem.
01:03:43.120 | - Well, there's a few different parts of this.
01:03:46.040 | So one is,
01:03:47.980 | there are all these harms that we need
01:03:51.600 | to basically fight against and prevent.
01:03:53.360 | And that's been a lot of our focus
01:03:57.120 | over the last five or seven years,
01:03:59.960 | is basically ramping up very sophisticated AI systems,
01:04:03.840 | not generative AI systems,
01:04:05.120 | more kind of classical AI systems,
01:04:07.240 | to be able to categorize and classify and identify,
01:04:12.240 | okay, this post looks like it's promoting terrorism.
01:04:17.720 | This one is exploiting children.
01:04:21.600 | This one looks like it might be trying to incite violence.
01:04:25.200 | This one's an intellectual property violation.
01:04:28.040 | So there's like 18 different categories
01:04:31.960 | of violating kind of harmful content
01:04:35.400 | that we've had to build specific systems
01:04:37.800 | to be able to track.
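To make the "classify and identify" idea concrete, here is a toy multi-label classifier built on an off-the-shelf zero-shot model. The category list is a small invented subset, the threshold is arbitrary, and production integrity systems are purpose-trained per category; this only illustrates the shape of the approach.

```python
# Toy sketch of multi-label harm classification with an off-the-shelf zero-shot
# model. It illustrates the "classify, then route to the right policy" shape,
# not a production integrity system.
from transformers import pipeline

classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

HARM_CATEGORIES = [
    "promotes terrorism",
    "incites violence",
    "child exploitation",
    "intellectual property violation",
    "spam or scam",
]

def flag_post(text: str, threshold: float = 0.8) -> list[str]:
    """Return every harm category whose score clears the (arbitrary) threshold."""
    result = classifier(text, candidate_labels=HARM_CATEGORIES, multi_label=True)
    return [label for label, score in zip(result["labels"], result["scores"])
            if score >= threshold]

# flag_post("Limited offer!!! Send $50 in gift cards to claim your prize")
# would likely return ["spam or scam"].
```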
01:04:38.680 | And I think it's certainly the case
01:04:42.400 | that advances in generative AI will test those.
01:04:47.320 | But at least so far, it's been the case,
01:04:52.280 | and I'm optimistic that it will continue to be the case,
01:04:55.120 | that we will be able to bring more computing power to bear
01:04:58.960 | to have even stronger AIs
01:05:00.400 | that can help defend against those things.
01:05:02.080 | So we've had to deal with some adversarial issues before.
01:05:07.080 | For some things like hate speech,
01:05:09.680 | it's like people aren't generally
01:05:10.880 | getting a lot more sophisticated.
01:05:12.920 | Like the average person,
01:05:14.840 | let's say someone's saying some kind of racist thing,
01:05:18.520 | it's like they're not necessarily
01:05:19.600 | getting more sophisticated at being racist.
01:05:22.040 | Okay, so the system can just find that.
01:05:24.840 | But then there's other adversaries
01:05:26.840 | who actually are very sophisticated,
01:05:29.160 | like nation states doing things.
01:05:30.840 | And we find, whether it's Russia
01:05:34.040 | or just different countries
01:05:36.120 | that are basically standing up these networks of bots
01:05:39.960 | or inauthentic accounts is what we call them,
01:05:44.040 | 'cause they're not necessarily bots.
01:05:45.400 | Some of them could actually be real people
01:05:47.160 | who are kind of masquerading as other people,
01:05:50.400 | but they're acting in a coordinated way.
01:05:53.120 | And some of that behavior has gotten very sophisticated
01:05:57.280 | and it's very adversarial.
01:05:58.400 | So they, each iteration,
01:05:59.760 | every time we find something and stop them,
01:06:03.040 | they kind of evolve their behavior.
01:06:04.240 | They don't just pack up their bags and go home and say,
01:06:06.400 | "Okay, we're not gonna try."
01:06:08.440 | At some point they might decide
01:06:09.360 | doing it on Meta's services is not worth it.
01:06:12.040 | They'll go do it on someone else
01:06:13.080 | if it's easier to do it in another place.
01:06:15.080 | But we have a fair amount of experience dealing with
01:06:19.720 | even those kind of adversarial attacks
01:06:22.720 | where they just keep on getting better and better.
01:06:24.640 | And I do think that as long as we can keep on
01:06:26.800 | putting more compute power against it,
01:06:28.720 | and if we're kind of one of the leaders
01:06:31.120 | in developing some of these AI models,
01:06:33.040 | I'm quite optimistic that we're gonna be able to keep on
01:06:35.960 | pushing against the kind of normal categories of harm
01:06:40.800 | that you talk about, fraud, scams, spam,
01:06:44.280 | IP violations, things like that.
01:06:47.800 | - What about like creating narratives and controversy?
01:06:50.760 | To me, it's kind of amazing how a small collection of,
01:06:55.320 | - Yeah.
01:06:56.160 | - what did you say, inauthentic accounts.
01:06:57.840 | So it could be bots, but it could be huge.
01:06:59.400 | - Yeah, I mean, we have sort of this funny name for it,
01:07:01.400 | but we call it coordinated inauthentic behavior.
01:07:03.720 | - Yeah, it's kind of incredible how a small collection
01:07:07.160 | of folks can create narratives, create stories.
01:07:12.040 | - Yeah.
01:07:12.960 | - Especially if they're viral.
01:07:14.920 | Especially if they have an element
01:07:16.560 | that can catalyze the virality of the narrative.
01:07:21.120 | - Yeah, and I think there the question is you have to be,
01:07:24.360 | I'm very specific about what is bad about it, right?
01:07:27.080 | Because I think a set of people coming together
01:07:31.280 | or organically bouncing ideas off each other
01:07:34.800 | and a narrative comes out of that
01:07:36.440 | is not necessarily a bad thing by itself
01:07:39.640 | if it's kind of authentic and organic.
01:07:42.480 | That's like a lot of what happens
01:07:43.600 | and how culture gets created and how art gets created
01:07:45.640 | and a lot of good stuff.
01:07:46.480 | So that's why we've kind of focused on this sense
01:07:49.200 | of coordinated inauthentic behavior.
01:07:51.560 | So it's like if you have a network of,
01:07:53.560 | whether it's bots, some people masquerading
01:07:56.080 | as different accounts,
01:07:57.320 | but you have kind of someone pulling the strings behind it
01:08:01.880 | and trying to kind of act as if this is a more organic set
01:08:07.960 | of behavior, but really it's not,
01:08:09.360 | it's just like one coordinated thing.
01:08:11.760 | That seems problematic to me, right?
01:08:13.520 | I mean, I don't think people should be able
01:08:14.920 | to have coordinated networks and not disclose it as such.
01:08:20.360 | But that again, we've been able to deploy
01:08:22.840 | pretty sophisticated AI and counter-terrorism groups
01:08:27.400 | and things like that to be able to identify
01:08:29.360 | a fair number of these coordinated inauthentic networks
01:08:33.880 | of accounts and take them down.
01:08:36.040 | We continue to do that.
01:08:38.360 | I think it's one thing that if you'd told me 20 years ago,
01:08:41.960 | it's like, all right, you're starting this website
01:08:44.000 | to help people connect at a college
01:08:45.720 | and in the future, you're gonna be,
01:08:48.000 | part of your organization is gonna be a counter-terrorism
01:08:50.240 | organization with AI to find coordinated inauthentic behavior.
01:08:53.480 | I would have thought that was pretty wild,
01:08:56.080 | but I think that that's part of where we are.
01:09:00.920 | But look, I think that these questions
01:09:02.760 | that you're pushing on now,
01:09:04.120 | this is actually where I'd guess most of the challenge
01:09:09.040 | around AI will be for the foreseeable future.
01:09:12.800 | I think that there's a lot of debate around things like,
01:09:15.880 | is this going to create existential risk to humanity?
01:09:19.000 | And I think that those are very hard things
01:09:20.920 | to disprove one way or another.
01:09:22.880 | My own intuition is that the point at which
01:09:25.520 | we become close to superintelligence is,
01:09:29.520 | it's just really unclear to me that the current technology
01:09:34.280 | is gonna get there without another set
01:09:37.040 | of significant advances.
01:09:39.280 | But that doesn't mean that there's no danger.
01:09:40.800 | I think the danger is basically amplifying
01:09:42.880 | the kind of known set of harms that people
01:09:46.720 | or sets of accounts can do.
01:09:48.400 | And we just need to make sure that we really focus
01:09:50.480 | on basically doing that as well as possible.
01:09:54.880 | So that's definitely a big focus for me.
01:09:57.640 | - Well, you can basically use large language models
01:10:00.200 | as an assistant of how to cause harm on social networks.
01:10:03.920 | So you can ask it a question,
01:10:05.360 | Meta has very impressive coordinated inauthentic account
01:10:13.000 | fighting capabilities.
01:10:16.080 | How do I do the coordinated inauthentic account creation
01:10:20.360 | where Meta doesn't detect it?
01:10:23.440 | Like literally ask that question.
01:10:25.400 | And basically there's this kind of part of it.
01:10:29.240 | I mean, that's what OpenAI showed
01:10:30.960 | that they're concerned with those questions.
01:10:33.000 | Perhaps you can comment on your approach to it,
01:10:34.920 | how to do a kind of moderation on the output
01:10:38.840 | of those models that it can't be used
01:10:41.120 | to help you coordinate harm in all the full definition
01:10:45.360 | of what the harm means.
01:10:46.960 | - Yeah, and that's a lot of the fine tuning
01:10:48.840 | and the alignment training that we do
01:10:51.320 | is basically when we ship AIs across our products,
01:10:56.320 | a lot of what we're trying to make sure is that
01:11:02.160 | you can't ask it to help you commit a crime, right?
01:11:07.280 | So I think training it to kind of understand that
01:11:14.680 | and it's not like any of these systems
01:11:17.120 | are ever gonna be a hundred percent perfect,
01:11:18.800 | but just making it so that this isn't an easier way
01:11:23.800 | to go about doing something bad
01:11:30.040 | than the next best alternative, right?
01:11:31.960 | I mean, people still have Google, right?
01:11:33.720 | You still have search engines.
01:11:35.000 | So the information is out there.
01:11:37.960 | And for these, what we see is like for nation states
01:11:44.200 | or these actors that are trying to pull off
01:11:46.720 | these large coordinated and authentic networks
01:11:50.160 | to kind of influence different things,
01:11:53.160 | at some point when we would just make it very difficult,
01:11:55.040 | they do just try to use other services instead, right?
01:11:58.040 | It's just like, if you can make it more expensive
01:12:00.840 | for them to do it on your service,
01:12:03.400 | then kind of people go elsewhere.
01:12:05.880 | And I think that that's the bar, right?
01:12:08.400 | It's not like, okay, are you ever gonna be perfect
01:12:11.240 | at finding every adversary who tries to attack you?
01:12:14.440 | It's, I mean, you try to get as close to that as possible,
01:12:16.920 | but I think really kind of economically,
01:12:20.040 | what you're just trying to do is make it so
01:12:21.720 | it's just inefficient for them to go after that.
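The alignment training described here happens at fine-tuning time, but a related and simpler pattern to sketch is a runtime guardrail: screen the request first, then either answer or return an informative refusal. This is a generic illustration; the keyword check is a naive stand-in for a trained safety classifier, not how Meta's products are wired.

```python
# Sketch of a runtime guardrail around a chat model: screen the request first,
# then either answer normally or refuse with a brief explanation. The screening
# here is naive keyword matching standing in for a trained safety classifier.
DISALLOWED_TOPICS = {
    "coordinated fake accounts": "helping evade platform integrity systems",
    "phishing": "helping commit fraud",
}

def llm_chat(messages: list[dict]) -> str:
    raise NotImplementedError("plug in your chat-completion API here")

def screened_reply(user_message: str) -> str:
    lowered = user_message.lower()
    for topic, reason in DISALLOWED_TOPICS.items():
        if topic in lowered:
            return (f"I can't help with that, because it would mean {reason}. "
                    "I'm happy to help with legitimate marketing or security questions instead.")
    return llm_chat([{"role": "user", "content": user_message}])
```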
01:12:24.880 | - But there's also complicated questions
01:12:26.480 | of what is and isn't harm, what is and isn't misinformation.
01:12:30.600 | So this is one of the things
01:12:32.040 | that Wikipedia has also tried to face.
01:12:34.760 | I remember asking GPT about whether the virus
01:12:39.560 | leaked from a lab or not, and the answer provided
01:12:42.840 | was a very nuanced one, and a well-cited one,
01:12:47.400 | almost, dare I say, well-thought-out one, balanced.
01:12:52.400 | I would hate for that nuance to be lost
01:12:55.000 | through the process of moderation.
01:12:56.700 | Wikipedia does a good job on that particular thing too,
01:13:00.560 | but from pressures from governments and institutions,
01:13:03.320 | you could see some of that nuance and depth
01:13:07.560 | of information, facts, and wisdom be lost.
01:13:12.560 | - Absolutely.
01:13:14.560 | - And that's a scary thing.
01:13:16.560 | Some of the magic, some of the edges,
01:13:19.560 | the rough edges might be lost
01:13:21.000 | through the process of moderation of AI systems.
01:13:23.960 | So how do you get that right?
01:13:26.520 | - I really agree with what you're pushing on.
01:13:28.960 | I mean, the core shape of the problem
01:13:33.960 | is that there are some harms
01:13:36.360 | that I think everyone agrees are bad.
01:13:38.240 | Sexual exploitation of children.
01:13:43.120 | You're not gonna get many people
01:13:45.960 | who think that that type of thing
01:13:47.720 | should be allowed on any service,
01:13:49.520 | and that's something that we face
01:13:51.480 | and try to push off as much as possible today.
01:13:55.480 | Terrorism, inciting violence.
01:14:00.120 | We went through a bunch of these types of harms before.
01:14:05.040 | But then I do think that you get to a set of harms
01:14:07.840 | where there is more social debate around it.
01:14:10.040 | So misinformation, I think,
01:14:13.280 | has been a really tricky one
01:14:18.240 | because there are things that are kind of obviously false,
01:14:23.240 | right, that are maybe factual,
01:14:25.000 | but may not be harmful.
01:14:28.960 | So it's like, all right, are you gonna censor someone
01:14:33.000 | for just being wrong?
01:14:34.800 | If there's no kind of harm implication
01:14:36.520 | of what they're doing,
01:14:37.360 | I think that there's a bunch of real kind of issues
01:14:39.520 | and challenges there.
01:14:41.160 | But then I think that there are other places
01:14:43.400 | where it is,
01:14:44.240 | you just take some of the stuff around COVID
01:14:47.200 | earlier on in the pandemic,
01:14:48.520 | where there were real health implications,
01:14:53.520 | but there hadn't been time to fully vet
01:14:55.440 | a bunch of the scientific assumptions.
01:14:57.120 | And unfortunately, I think a lot of the kind of establishment
01:15:00.600 | on that kind of waffled on a bunch of facts
01:15:03.960 | and asked for a bunch of things to be censored
01:15:06.600 | that in retrospect ended up being more debatable or true.
01:15:11.600 | And that stuff is really tough, right?
01:15:13.520 | And really undermines trust in that.
01:15:15.680 | And so I do think that the questions around how to manage
01:15:20.680 | that are very nuanced.
01:15:24.080 | The way that I try to think about it
01:15:25.920 | is that it goes,
01:15:28.720 | I think it's best to generally boil things down
01:15:31.800 | to the harms that people agree on.
01:15:34.560 | So when you think about,
01:15:36.040 | is something misinformation or not,
01:15:38.280 | I think often the more salient bit is,
01:15:41.280 | is this going to potentially lead to physical harm
01:15:46.280 | for someone and kind of think about it in that sense.
01:15:50.280 | And then beyond that,
01:15:51.400 | I think people just have different preferences
01:15:53.000 | on how they want things to be flagged for them.
01:15:55.240 | I think a bunch of people would prefer
01:15:57.720 | to kind of have a flag on something that says,
01:16:00.320 | hey, a fact checker thinks that this might be false.
01:16:02.400 | Or I think Twitter's community notes implementation
01:16:05.400 | is quite good on this.
01:16:07.320 | But again, it's the same type of thing.
01:16:09.680 | It's like just kind of discretionarily adding a flag
01:16:12.800 | because it makes the user experience better.
01:16:14.600 | But it's not trying to take down the information or not.
01:16:17.440 | I think that you want to reserve the kind of censorship
01:16:21.000 | of content to things that are of known categories
01:16:24.560 | that people generally agree are bad.
01:16:27.680 | - Yeah, but there's so many things,
01:16:30.520 | especially with the pandemic,
01:16:31.640 | but there's other topics where there's just deep disagreement
01:16:36.640 | fueled by politics about what is and isn't harmful.
01:16:41.320 | There's even just the degree to which the virus is harmful
01:16:45.800 | and the degree to which the vaccines
01:16:48.720 | that respond to the virus are harmful.
01:16:50.200 | There's just, there's almost like
01:16:52.400 | a political divider on that.
01:16:54.480 | And so how do you make decisions about that
01:16:57.840 | where half the country in the United States
01:17:01.160 | or some large fraction of the world
01:17:03.720 | has very different views from another part of the world?
01:17:07.000 | Is there a way for Meta to stay out of the moderation of this?
01:17:13.720 | - It's very difficult to just abstain.
01:17:19.120 | But I think we should be clear about which of these things
01:17:22.680 | are actual safety concerns
01:17:25.320 | and which ones are a matter of preference
01:17:28.000 | in terms of how people want information flagged.
01:17:30.320 | Right, so we did recently introduce something
01:17:32.800 | that allows people to have fact-checking
01:17:36.600 | not affect the distribution
01:17:38.400 | of what shows up in their products.
01:17:40.480 | So, okay, a bunch of people don't trust
01:17:41.960 | who the fact-checkers are.
01:17:43.440 | All right, well, you can turn that off if you want,
01:17:45.960 | but if the content violates some policy,
01:17:49.840 | like it's inciting violence or something like that,
01:17:51.600 | it's still not gonna be allowed.
01:17:53.040 | So I think that you wanna honor people's preferences
01:17:56.480 | on that as much as possible.
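The distinction being drawn here is mechanical enough to sketch: policy-violating content is removed no matter what, while fact-check labels only affect ranking, and that demotion is something a user can switch off. The field names and the demotion factor below are made up for illustration; they are not Meta's actual ranking system.

```python
# Sketch: removals are absolute, fact-check labels only demote, and the demotion
# respects a per-user preference. All names and numbers are illustrative.
from dataclasses import dataclass
from typing import Optional

@dataclass
class Post:
    text: str
    violates_policy: bool = False              # e.g. incitement to violence
    fact_check_rating: Optional[str] = None    # e.g. "false", "partly false"

def rank_score(post: Post, base_score: float,
               user_demotes_fact_checked: bool) -> Optional[float]:
    if post.violates_policy:
        return None                            # removed regardless of preferences
    if post.fact_check_rating and user_demotes_fact_checked:
        return base_score * 0.2                # demoted but still visible, with a label
    return base_score                          # user opted out: label shown, no demotion
```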
01:17:58.440 | But look, I mean, this is really difficult stuff.
01:18:01.960 | I think it's really hard to know where to draw the line
01:18:06.680 | on what is fact and what is opinion,
01:18:10.720 | because the nature of science is that
01:18:13.320 | nothing is ever 100% known for certain.
01:18:15.720 | You can disprove certain things,
01:18:17.600 | but you're constantly testing new hypotheses
01:18:20.520 | and scrutinizing frameworks that have been long held.
01:18:24.920 | And every once in a while, you throw out something
01:18:28.000 | that was working for a very long period of time,
01:18:29.960 | and it's very difficult.
01:18:31.680 | But I think that just because it's very hard
01:18:34.760 | and just because they're edge cases
01:18:35.880 | doesn't mean that you should not try to give people
01:18:40.040 | what they're looking for as well.
01:18:41.680 | Let me ask about something you've faced
01:18:46.280 | in terms of moderation.
01:18:48.960 | Is pressure from different sources,
01:18:52.480 | pressure from governments,
01:18:54.080 | I wanna ask a question how to withstand that pressure
01:18:58.320 | for a world where AI moderation starts becoming a thing too.
01:19:03.320 | So what's Meta's approach to resist the pressure
01:19:08.720 | from governments and other interest groups
01:19:13.040 | in terms of what to moderate and not?
01:19:15.440 | - I don't know that there's like
01:19:17.600 | a one size fits all answer to that.
01:19:19.520 | I mean, I think we basically have the principles around,
01:19:24.200 | you know, we wanna allow people to express
01:19:26.520 | as much as possible,
01:19:27.600 | but we have developed clear categories of things
01:19:32.600 | that we think are wrong that we don't want on our services,
01:19:37.920 | and we build tools to try to moderate those.
01:19:40.800 | So then the question is, okay, what do you do
01:19:43.360 | when a government says that they don't want something
01:19:47.400 | on the service?
01:19:49.600 | And we have a bunch of principles
01:19:54.040 | around how we deal with that.
01:19:55.320 | Because on the one hand, if there's a, you know,
01:19:57.960 | democratically elected government
01:20:00.320 | and people around the world just have different values
01:20:03.160 | in different places, then should we as a, you know,
01:20:07.560 | California based company tell them that something
01:20:12.720 | that they have decided is unacceptable?
01:20:16.520 | Actually, like that we need to be able to express that?
01:20:20.920 | I mean, I think that there's a certain amount
01:20:23.280 | of hubris in that.
01:20:26.440 | But then I think that there are other cases
01:20:28.840 | where, you know, it's like a little more autocratic
01:20:32.080 | and, you know, you have the dictator leader
01:20:35.560 | who's just trying to crack down on dissent
01:20:37.520 | and, you know, the people in a country
01:20:39.680 | are really not aligned with that.
01:20:42.560 | And it's not necessarily against their culture,
01:20:44.480 | but the person who's leading it
01:20:47.240 | is just trying to push in a certain direction.
01:20:49.520 | These are very complex questions,
01:20:52.920 | but I think, so it's difficult to have
01:20:56.840 | a one size fits all approach to it.
01:21:01.680 | But in general, we're pretty active
01:21:03.440 | in kind of advocating and pushing back
01:21:06.000 | on requests to take things down.
01:21:10.640 | But honestly, the thing that I think
01:21:15.360 | a request to censor things is one thing,
01:21:18.480 | and that's obviously bad,
01:21:20.040 | but where we draw a much harder line
01:21:23.400 | is on requests for access to information, right?
01:21:25.960 | Because, you know, if you can,
01:21:28.280 | if you get told that you can't say something,
01:21:29.840 | I mean, that's bad, right?
01:21:32.040 | I mean, that, you know, is, you know,
01:21:34.240 | obviously it violates your sense
01:21:38.680 | and freedom of expression at some level,
01:21:40.560 | but a government getting access to data
01:21:44.000 | in a way that seems like it would be unlawful
01:21:48.480 | in our country exposes people to real physical harm.
01:21:53.480 | And that's something that in general
01:21:57.400 | we take very seriously.
01:21:59.360 | And then, so that flows through like all of our policies
01:22:03.320 | and in a lot of ways, right?
01:22:04.960 | By the time you're actually like litigating
01:22:07.440 | with a government or pushing back on them,
01:22:09.800 | that's pretty late in the funnel.
01:22:11.840 | I'd say a bunch of the stuff starts a lot higher up
01:22:15.680 | in the decision of where do we put data centers.
01:22:18.400 | Then there are a lot of countries where, you know,
01:22:21.920 | we may have a lot of people using the service in a place.
01:22:24.560 | It might be, you know, good for the service in some ways,
01:22:28.240 | good for those people if we could reduce the latency
01:22:31.360 | by having a data center nearby them.
01:22:34.440 | But, you know, for whatever reason, we just feel like,
01:22:36.680 | hey, this government does not have a good track record
01:22:40.000 | on basically not trying to get access to people's data.
01:22:45.000 | And at the end of the day, I mean,
01:22:47.640 | if you put a data center in a country
01:22:49.680 | and the government wants to get access to people's data,
01:22:52.560 | then, you know, they do at the end of the day
01:22:54.840 | have the option of having people show up with guns
01:22:57.280 | and taking it by force.
01:22:59.040 | So I think that there's like a lot of decisions
01:23:01.400 | that go into like how you architect the systems
01:23:03.800 | years in advance of these actual confrontations
01:23:09.800 | that end up being really important.
01:23:11.720 | - So you put the protection of people's data
01:23:15.280 | as a very, very high priority.
01:23:17.880 | But in- - That I think is a,
01:23:19.080 | there are more harms that I think
01:23:20.360 | can be associated with that.
01:23:21.720 | And I think that that ends up being a more critical thing
01:23:24.760 | to defend against governments than, you know,
01:23:28.840 | whereas, you know, if another government
01:23:30.240 | has a different view of what should be acceptable speech
01:23:32.560 | in their country,
01:23:34.200 | especially if it's a democratically elected government,
01:23:36.360 | and, you know, it's, then I think that there's
01:23:38.720 | a certain amount of deference that you should have to that.
01:23:41.240 | - So that's speaking more to the direct harm that's possible
01:23:45.000 | when you give governments access to data.
01:23:47.480 | But if we look at the United States,
01:23:49.460 | to the more nuanced kind of pressure to censor,
01:23:53.400 | not even order to censor,
01:23:54.800 | but pressure to censor from political entities,
01:23:57.840 | which has kind of received quite a bit of attention
01:24:00.960 | in the United States,
01:24:02.200 | maybe one way to ask that question is,
01:24:06.680 | if you've seen the Twitter files,
01:24:08.580 | what have you learned from the kind of pressure
01:24:14.280 | from US government agencies that was seen in the Twitter files?
01:24:18.540 | And what do you do with that kind of pressure?
01:24:20.840 | - You know, I've seen it.
01:24:26.320 | It's really hard from the outside
01:24:27.840 | to know exactly what happened in each of these cases.
01:24:30.680 | You know, we've obviously been
01:24:32.360 | in a bunch of our own cases where, you know,
01:24:38.320 | where agencies or different folks will just say,
01:24:42.480 | "Hey, here's a threat that we're aware of.
01:24:46.640 | You should be aware of this too."
01:24:48.600 | It's not really pressure as much as it is just,
01:24:51.320 | you know, flagging something that our security systems
01:24:56.120 | should be on alert about.
01:24:57.680 | I get how some people could think of it as that,
01:25:00.960 | but at the end of the day,
01:25:04.480 | it's our call on how to handle that.
01:25:07.760 | But I mean, I just, you know,
01:25:09.040 | in terms of running these services,
01:25:10.280 | want to have access to as much information
01:25:11.920 | about what people think that adversaries
01:25:13.360 | might be trying to do as possible.
01:25:15.360 | - Well, so you don't feel like there'll be consequences
01:25:18.880 | if, you know, anybody, the CIA, the FBI, a political party,
01:25:24.240 | the Democrats or the Republicans
01:25:26.480 | of high powerful political figures write emails.
01:25:31.480 | You don't feel pressure from a suggestion?
01:25:34.320 | - I guess what I say is there's so much pressure
01:25:36.080 | from all sides that I'm not sure that any specific thing
01:25:40.600 | that someone says is really adding
01:25:42.880 | that much more to the mix.
01:25:44.240 | And there are obviously a lot of people who think
01:25:48.120 | that we should be censoring more content,
01:25:52.480 | or there are a lot of people who think
01:25:53.480 | we should be censoring less content.
01:25:55.440 | There are, as you say, all kinds of different groups
01:25:58.120 | that are involved in these debates, right?
01:25:59.800 | So there's the kind of elected officials
01:26:02.320 | and politicians themselves, there's the agencies,
01:26:04.920 | but I mean, but there's the media,
01:26:07.880 | there's activist groups,
01:26:09.160 | this is not a US specific thing,
01:26:11.840 | there are groups all over the world
01:26:13.320 | and kind of all in every country
01:26:16.280 | that bring different values.
01:26:18.120 | So it's just a very, it's a very active debate,
01:26:22.800 | and I understand it, right?
01:26:23.920 | I mean, these kind of questions get to really
01:26:28.920 | some of the most important social debates
01:26:31.440 | that are being had.
01:26:33.000 | So it gets back to the question of truth,
01:26:36.040 | because for a lot of these things,
01:26:39.200 | they haven't yet been hardened into a single truth,
01:26:41.680 | and society's sort of trying to hash out
01:26:44.320 | what we think, right, on certain issues.
01:26:48.400 | Maybe in a few hundred years, everyone will look back
01:26:50.720 | and say, "Hey, no, it wasn't obvious
01:26:52.120 | that it should have been this, but no,
01:26:53.960 | we're kind of in that meat grinder now
01:26:56.200 | and working through that."
01:26:59.760 | So no, these are all very complicated.
01:27:04.760 | And some people raise concerns in good faith
01:27:11.360 | and just say, "Hey, this is something that I wanna flag
01:27:14.160 | for you to think about."
01:27:15.800 | Certain people, I certainly think,
01:27:17.840 | like come at things with somewhat of a more
01:27:20.640 | kind of punitive or vengeful view of like,
01:27:25.160 | "I want you to do this thing.
01:27:26.560 | If you don't, then I'm gonna try to make your life difficult
01:27:28.680 | in a lot of other ways."
01:27:30.560 | But I don't know, there's just,
01:27:33.880 | this is one of the most pressurized debates,
01:27:36.560 | I think, in society.
01:27:37.520 | So I just think that there are so many people
01:27:40.240 | in different forces that are trying to apply pressure
01:27:42.160 | from different sides that it's,
01:27:44.360 | I don't think you can make decisions
01:27:45.960 | based on trying to make people happy.
01:27:47.320 | I think you just have to do what you think
01:27:50.240 | is the right balance and accept that people
01:27:53.680 | are gonna be upset no matter where you come out on that.
01:27:57.360 | - Yeah, I like that pressurized debate.
01:28:00.120 | So how's your view of the freedom of speech
01:28:02.320 | evolved over the years?
01:28:03.660 | And now with AI, where the freedom might apply to them,
01:28:12.920 | not just to the humans, but to the personalized agents
01:28:17.680 | as you've spoken about them.
01:28:20.160 | - So yeah, I mean, I've probably gotten
01:28:21.800 | a somewhat more nuanced view
01:28:23.040 | just because I think that there are,
01:28:25.040 | I come at this, I'm obviously very pro
01:28:27.960 | freedom of expression, right?
01:28:29.280 | I don't think you build a service like this
01:28:31.520 | that gives people tools to express themselves
01:28:33.400 | unless you think that people expressing themselves
01:28:35.220 | at scale is a good thing, right?
01:28:36.620 | So I didn't get into this to try to prevent people
01:28:40.360 | from expressing anything.
01:28:42.400 | I wanna give people tools
01:28:44.440 | so they can express as much as possible.
01:28:46.560 | And then I think it's become clear
01:28:49.120 | that there are certain categories of things
01:28:51.400 | that we've talked about
01:28:52.960 | that I think almost everyone accepts are bad
01:28:54.920 | and that no one wants and that are illegal
01:28:57.120 | even in countries like the US where
01:28:59.320 | you have the first amendment
01:29:01.620 | that's very protective of enabling speech.
01:29:04.340 | It's like, you're still not allowed to do things
01:29:06.600 | that are gonna immediately incite violence
01:29:08.480 | or violate people's intellectual property
01:29:10.540 | or things like that.
01:29:11.380 | So there are those, but then there's also a very active core
01:29:14.880 | of just active disagreements in society
01:29:18.720 | where some people may think that something is true or false.
01:29:21.720 | The other side might think it's the opposite
01:29:24.800 | or just unsettled, right?
01:29:26.620 | And those are some of the most difficult
01:29:29.640 | to kind of handle like we've talked about.
01:29:33.160 | But one of the lessons that I feel like I've learned
01:29:38.160 | is that a lot of times when you can,
01:29:44.960 | the best way to handle this stuff more practically
01:29:48.880 | is not in terms of answering the question
01:29:51.440 | of should this be allowed,
01:29:53.060 | but just like what is the best way to deal
01:29:58.060 | with someone being a jerk?
01:30:00.720 | Is the person basically just having a repeat behavior
01:30:05.720 | of causing a lot of issues?
01:30:11.200 | So looking at it more at that level.
01:30:14.720 | - And its effect on the broader communities,
01:30:16.840 | health of the community, health of the state.
01:30:19.320 | It's tricky though because like how do you know
01:30:21.640 | there could be people that have a very controversial
01:30:24.880 | viewpoint that turns out to have a positive
01:30:27.720 | long-term effect on the health of the community
01:30:30.040 | because it challenges the community to think.
01:30:31.720 | - That's true, absolutely.
01:30:33.360 | Yeah, no, I think you wanna be careful about that.
01:30:36.280 | I'm not sure I'm expressing this very clearly
01:30:38.680 | because I certainly agree with your point there.
01:30:42.520 | And my point isn't that we should not have people
01:30:46.960 | on our services that are being controversial.
01:30:49.240 | That's certainly not what I mean to say.
01:30:51.600 | It's that often I think it's not just looking
01:30:56.560 | at a specific example of speech
01:30:59.600 | that it's most effective to handle this stuff.
01:31:02.880 | And I think often you don't wanna make
01:31:05.160 | specific binary decisions of kind of this is allowed
01:31:08.320 | or this isn't.
01:31:09.240 | I mean, we talked about, you know, it's fact-checking
01:31:11.880 | or Twitter's community voices thing.
01:31:14.080 | I think that's another good example.
01:31:15.280 | It's like, it's not a question of is this allowed or not?
01:31:18.520 | It's just a question of adding more context to the thing.
01:31:20.800 | I think that that's helpful.
01:31:22.640 | So in the context of AI, which is what you were asking about,
01:31:26.200 | I think there are lots of ways that an AI can be helpful.
01:31:29.680 | With an AI, it's less about censorship, right?
01:31:33.920 | Because it's more about what is the most productive answer
01:31:37.760 | to a question.
01:31:38.720 | You know, there was one case study
01:31:41.520 | that I was reviewing with the team is someone asked,
01:31:45.280 | can you explain to me how to 3D print a gun?
01:31:53.040 | And one proposed response is like,
01:31:58.040 | no, I can't talk about that.
01:31:59.800 | Right, it's like basically just like shut it down
01:32:01.280 | immediately, which I think is some of what you see.
01:32:03.800 | It's like as a large language model,
01:32:05.480 | I'm not allowed to talk about whatever.
01:32:07.600 | But there's another response, which is like,
01:32:10.600 | hey, I don't think that's a good idea.
01:32:13.200 | In a lot of countries, including the US,
01:32:16.200 | 3D printing guns is illegal
01:32:18.840 | or kind of whatever the factual thing is.
01:32:21.280 | And it's like, okay, that's actually a respectful
01:32:23.760 | and informative answer.
01:32:24.840 | And I may have not known that specific thing.
01:32:27.720 | And so there are different ways to handle this
01:32:31.160 | that I think kind of, you can either assume good intent,
01:32:36.160 | like maybe the person didn't know
01:32:38.680 | and I'm just gonna help educate them.
01:32:40.280 | Or you could like kind of come at it as like,
01:32:42.200 | no, I need to shut this thing down immediately.
01:32:44.040 | Right, it's like, I just am not gonna talk about this.
01:32:46.720 | And there may be times where you need to do that.
01:32:51.000 | But I actually think having a somewhat more
01:32:55.240 | informative approach where you generally assume
01:32:57.880 | good intent from people is probably a better balance
01:33:01.480 | to be on as many things as you can be.
01:33:04.680 | You're not gonna be able to do that for everything.
01:33:06.160 | But you were kind of asking about how I approach this
01:33:09.640 | and I'm thinking about this as it relates to AI.
01:33:13.880 | And I think that that's a big difference
01:33:16.000 | in kind of how to handle sensitive content
01:33:20.600 | across these different modes.
01:33:22.040 | - I have to ask, there's rumors you might be working
01:33:26.400 | on a social network that's text-based
01:33:29.000 | that might be a competitor to Twitter, code named P92.
01:33:32.880 | Is there something you can say about those rumors?
01:33:38.280 | - There is a project.
01:33:40.120 | You know, I've always thought that sort of a text-based
01:33:43.800 | kind of information utility
01:33:45.680 | is just a really important thing to society.
01:33:50.800 | And for whatever reason, I feel like Twitter
01:33:54.160 | has not lived up to what I would have thought
01:33:56.560 | its full potential should be.
01:33:58.320 | And I think that the current,
01:33:59.520 | you know, I think Elon thinks that, right?
01:34:00.840 | And that's probably one of the reasons why he bought it.
01:34:03.240 | And I do know that there are ways
01:34:07.760 | to consider alternative approaches to this.
01:34:11.400 | And one that I think is potentially interesting
01:34:14.360 | is this open and federated approach
01:34:17.320 | that you're seeing with Mastodon,
01:34:18.720 | and you're seeing that a little bit with Bluesky.
01:34:21.160 | And I think that it's possible that something
01:34:25.880 | that melds some of those ideas with the graph
01:34:30.240 | and identity system that people have already cultivated
01:34:32.520 | on Instagram could be a kind of very welcome contribution
01:34:37.520 | to that space.
01:34:38.840 | But I know we work on a lot of things
01:34:40.280 | all the time though, too.
01:34:41.120 | So I don't wanna get ahead of myself.
01:34:43.440 | I mean, we have projects that explore
01:34:45.720 | a lot of different things,
01:34:46.680 | and this is certainly one that I think could be interesting.
01:34:49.840 | But- - So what's the release,
01:34:52.440 | the launch date of that again?
01:34:54.640 | Or what's the official website?
01:34:57.200 | - Well, we don't have that yet.
01:34:59.360 | - Oh, okay.
01:35:00.200 | - But I, and look, I mean, I don't know exactly
01:35:03.800 | how this is gonna turn out.
01:35:05.200 | I mean, what I can say is, yeah,
01:35:06.520 | there's some people working on this, right?
01:35:08.880 | I think that there's something there
01:35:09.960 | that's interesting to explore.
01:35:13.680 | - So if you look at, it'd be interesting
01:35:15.440 | to just ask this question and throw Twitter into the mix.
01:35:19.240 | At the landscape of social networks,
01:35:22.080 | that is Facebook, that is Instagram, that is WhatsApp,
01:35:26.640 | and then think of a text-based social network,
01:35:31.160 | when you look at that landscape,
01:35:32.360 | what are the interesting differences to you?
01:35:35.080 | Why do we have these different flavors?
01:35:37.920 | And what are the needs, what are the use cases,
01:35:40.640 | what are the products, what is the aspect of them
01:35:43.320 | that create a fulfilling human experience
01:35:45.760 | and a connection between humans that is somehow distinct?
01:35:49.280 | - Well, I think text is very accessible
01:35:51.880 | for people to transmit ideas
01:35:54.400 | and to have back and forth exchanges.
01:35:56.720 | So it, I think, ends up being a good format for discussion,
01:36:02.920 | in a lot of ways, uniquely good, right?
01:36:06.000 | If you look at some of the other formats
01:36:09.000 | or other networks that are focused on one type of content,
01:36:11.120 | like TikTok is obviously huge, right?
01:36:13.320 | And there are comments on TikTok,
01:36:15.520 | but I think the architecture of the service
01:36:19.520 | is very clearly that you have the video
01:36:21.360 | as the primary thing, and there's comments after that.
01:36:25.040 | But I think one of the unique pieces
01:36:32.080 | of having text-based comments, like content,
01:36:35.800 | is that the comments can also be first class.
01:36:39.280 | And that makes it so that conversations can just filter
01:36:43.080 | and fork into all these different directions
01:36:45.240 | and in a way that can be super useful.
01:36:47.640 | So I think there's a lot of things
01:36:48.840 | that are really awesome about the experience.
01:36:50.840 | It just always struck me, I always thought that,
01:36:54.120 | you know, Twitter should have a billion people using it,
01:36:56.320 | or whatever the thing is that basically
01:37:00.480 | ends up being in that space.
01:37:01.640 | And for whatever combination of reasons,
01:37:03.920 | again, these companies are complex organisms
01:37:07.600 | and it's very hard to diagnose this stuff from the outside.
01:37:10.840 | - Why doesn't Twitter,
01:37:11.840 | why doesn't a text-based social network
01:37:15.840 | with comments as first-class citizens
01:37:18.840 | have a billion users?
01:37:20.520 | - Well, I just think it's hard to build these companies.
01:37:23.000 | So it's not that every idea automatically goes
01:37:27.720 | and gets a billion people,
01:37:29.040 | it's just that I think that that idea,
01:37:30.880 | coupled with good execution, should get there.
01:37:34.160 | But I mean, look, we hit certain thresholds over time
01:37:37.800 | where, you know, we kind of plateaued early on
01:37:41.800 | and it wasn't clear that we were ever gonna reach
01:37:43.160 | a hundred million people on Facebook.
01:37:44.840 | And then we got really good at dialing in
01:37:48.360 | internationalization and helping the service grow
01:37:51.160 | in different countries.
01:37:52.240 | And that was like a whole competence
01:37:55.480 | that we needed to develop.
01:37:56.480 | And helping people basically spread the service
01:38:00.640 | to their friends.
01:38:01.600 | That was one of the things, once we got very good at that,
01:38:03.800 | that was one of the things that made me feel like,
01:38:05.880 | hey, if Instagram joined us early on,
01:38:08.760 | then I felt like we could help grow that quickly.
01:38:10.280 | And same with WhatsApp.
01:38:11.400 | And I think that that's sort of been a core competence
01:38:13.520 | that we've developed and been able to execute on.
01:38:16.200 | And others have too, right?
01:38:17.040 | I mean, ByteDance obviously have done a very good job
01:38:19.720 | with TikTok and have reached more than a billion people
01:38:23.040 | there, but it's certainly not automatic, right?
01:38:26.640 | I think you need a certain level of execution
01:38:31.040 | to basically get there.
01:38:31.960 | And I think for whatever reason,
01:38:34.440 | I think Twitter has this great idea
01:38:36.200 | and sort of magic in the service.
01:38:39.560 | But they just haven't kind of cracked that piece yet.
01:38:43.960 | And I think that that's made it
01:38:45.320 | so that you're seeing all these other things,
01:38:46.840 | whether it's Mastodon or Bluesky,
01:38:51.440 | that I think are maybe just different cuts
01:38:54.360 | at the same thing.
01:38:55.200 | But I think through the last generation
01:38:57.240 | of social media overall,
01:39:00.120 | one of the interesting experiments
01:39:01.480 | that I think should get run at larger scale
01:39:04.280 | is what happens if there's somewhat
01:39:05.600 | more decentralized control.
01:39:07.120 | And if it's like the stack is more open throughout.
01:39:10.200 | And I've just been pretty fascinated by that
01:39:13.400 | and seeing how that works.
01:39:14.760 | To some degree, end-to-end encryption on WhatsApp,
01:39:20.400 | and as we bring it to other services,
01:39:22.440 | provides an element of it because it pushes
01:39:25.000 | the service really out to the edges.
01:39:26.840 | I mean, the server part of this that we run for WhatsApp
01:39:31.840 | is relatively thin compared to what we do
01:39:34.120 | on Facebook or Instagram,
01:39:36.280 | and much more of the complexity is
01:39:38.400 | in how the apps kind of negotiate with each other
01:39:40.880 | to pass information in a fully end-to-end encrypted way.
01:39:44.400 | But I don't know, I think that that is a good model.
01:39:48.040 | I think it puts more power in individuals' hands
01:39:50.160 | and there are a lot of benefits of it
01:39:51.760 | if you can make it happen.
01:39:53.360 | Again, this is all pretty speculative.
01:39:55.440 | I mean, I think that it's hard from the outside
01:39:58.720 | to know why anything does or doesn't work
01:40:01.160 | until you kind of take a run at it.
01:40:05.000 | So I think it's kind of an interesting thing
01:40:07.320 | to experiment with,
01:40:08.160 | but I don't really know where this one's gonna go.
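To make the end-to-end encryption point concrete: below is a minimal sketch of the general idea, where each client holds its own keys and the relay in the middle only ever sees ciphertext. This is a toy example using the PyNaCl library, not WhatsApp's actual Signal-protocol implementation; the names and the message are purely illustrative.

# A minimal end-to-end encryption sketch (illustrative only), using PyNaCl.
from nacl.public import PrivateKey, Box

# Each client generates its own keypair; private keys never leave the device.
alice_key = PrivateKey.generate()
bob_key = PrivateKey.generate()

# Alice encrypts for Bob with her private key and Bob's public key.
ciphertext = Box(alice_key, bob_key.public_key).encrypt(b"see you at 8?")

# The relay only passes opaque bytes along; it cannot read them.
server_relayed_blob = bytes(ciphertext)

# Bob decrypts with his private key and Alice's public key.
plaintext = Box(bob_key, alice_key.public_key).decrypt(server_relayed_blob)
assert plaintext == b"see you at 8?"

The architectural consequence is the one described above: the relay's job shrinks to routing opaque blobs, so most of the complexity lives in how the clients negotiate with each other.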
01:40:10.800 | - So since we were talking about Twitter,
01:40:12.960 | Elon Musk had what I think a few harsh words
01:40:19.320 | that I wish he didn't say.
01:40:21.040 | So let me ask, in the hope and the name of camaraderie,
01:40:26.040 | what do you think Elon is doing well with Twitter?
01:40:29.040 | And what, as a person who has run social networks
01:40:33.040 | for a long time, Facebook, Instagram, WhatsApp,
01:40:38.040 | what can he do better?
01:40:41.600 | What can he improve on that text-based social network?
01:40:45.220 | - Gosh, it's always very difficult
01:40:46.720 | to offer specific critiques from the outside
01:40:50.160 | before you get into this,
01:40:51.560 | because I think one thing that I've learned
01:40:53.900 | is that everyone has opinions on what you should do
01:40:57.560 | and like running the company,
01:40:59.980 | you see a lot of specific nuances on things
01:41:02.960 | that are not apparent externally.
01:41:04.600 | And I often think that some of the discourse around us
01:41:09.600 | could be better if there was more
01:41:15.680 | kind of space for acknowledging
01:41:18.480 | that there's certain things that we're seeing internally
01:41:20.480 | that guide what we're doing.
01:41:21.520 | But I don't know, I mean,
01:41:23.880 | 'cause since you asked what is going well,
01:41:26.360 | I think that, you know, I do think that Elon led a push
01:41:31.360 | early on to make Twitter a lot leaner.
01:41:41.400 | And I think that that, you know,
01:41:46.400 | it's like you can agree or disagree
01:41:48.800 | with exactly all the tactics and how he did that.
01:41:51.400 | You know, obviously, you know,
01:41:53.240 | every leader has their own style for,
01:41:56.440 | you know, if you need to make dramatic changes,
01:41:58.480 | how you're gonna execute them.
01:41:59.880 | But a lot of the specific principles that he pushed on
01:42:04.520 | around basically trying to make the organization
01:42:09.360 | more technical, around decreasing the distance
01:42:12.760 | between engineers at the company and him,
01:42:16.280 | like fewer layers of management.
01:42:19.960 | I think that those were generally good changes.
01:42:23.280 | And I'm also, I also think that it was probably good
01:42:26.240 | for the industry that he made those changes,
01:42:28.640 | because my sense is that there were a lot of other people
01:42:30.640 | who thought that those were good changes,
01:42:33.240 | but who may have been a little shy about doing them.
01:42:38.240 | And I think he, you know,
01:42:42.200 | just in my conversations with other founders
01:42:45.280 | and how people have reacted to the things that we've done,
01:42:47.200 | you know, what I've heard from a lot of folks
01:42:48.800 | is just, hey, you know, when someone like you,
01:42:52.040 | when I wrote the letter outlining
01:42:54.360 | the organizational changes that I wanted to make
01:42:57.200 | back in March, and you know,
01:42:58.640 | when people see what Elon is doing,
01:43:01.040 | I think that that gives, you know, people
01:43:03.640 | the ability to think through
01:43:06.360 | how to shape their organizations in a way
01:43:09.200 | that, you know, hopefully can be good for the industry
01:43:13.600 | and make all these companies more productive over time.
01:43:16.080 | So that was one thing where I think he was
01:43:19.480 | quite ahead of a bunch of the other companies.
01:43:23.600 | And, you know, what he was doing there,
01:43:26.720 | you know, again, from the outside, very hard to know.
01:43:28.240 | It's like, okay, did he cut too much?
01:43:29.800 | Did he not cut enough?
01:43:30.640 | Whatever.
01:43:31.480 | I don't think it's like my place to opine on that.
01:43:35.000 | And you asked for a positive framing of the question
01:43:38.240 | of what do I admire?
01:43:40.880 | What do I think went well?
01:43:41.960 | But I think that like certainly his actions
01:43:46.120 | led me and I think a lot of other folks in the industry
01:43:49.800 | to think about, hey, are we kind of doing this
01:43:53.640 | as much as we should?
01:43:54.680 | Like, can we, like, could we make our companies better
01:43:57.160 | by pushing on some of these same principles?
01:43:59.400 | - Well, the two of you are at the top of the world
01:44:01.960 | in terms of leading the development of tech.
01:44:04.000 | And I wish there was more camaraderie and kindness both ways,
01:44:09.000 | more love in the world, because love is the answer.
01:44:14.440 | But let me ask on a point of efficiency.
01:44:19.200 | You recently announced multiple stages of layoffs at Meta.
01:44:22.580 | What are the most painful aspects of this process
01:44:27.600 | given for the individuals, the painful effects
01:44:31.560 | it has on those people's lives?
01:44:32.840 | - Yeah, I mean, that's it.
01:44:34.160 | And that's it.
01:44:35.920 | And you basically have a significant number of people
01:44:41.280 | who, this is just not the end of their time at Meta
01:44:45.680 | that they or I would have hoped for
01:44:49.840 | when they joined the company.
01:44:51.240 | And I mean, running a company, people are constantly joining
01:44:57.640 | and leaving the company, going in different directions
01:45:01.120 | for different reasons.
01:45:03.880 | But layoffs are uniquely challenging and tough
01:45:10.440 | in that you have a lot of people leaving
01:45:13.040 | for reasons that aren't connected to their own performance
01:45:17.600 | or the culture not being a fit at that point.
01:45:22.080 | It's really just, it's a kind of strategy decision
01:45:27.080 | and sometimes financially required,
01:45:29.260 | but not fully in our case.
01:45:33.760 | I mean, especially on the changes that we made this year,
01:45:35.840 | a lot of it was more kind of culturally
01:45:38.200 | and strategically driven by this push
01:45:40.600 | where I wanted us to become a stronger technology company
01:45:44.480 | with more of a focus on being more technical
01:45:47.640 | and more of a focus on building
01:45:50.040 | higher quality products faster.
01:45:52.240 | And I just view the external world
01:45:53.720 | as quite volatile right now.
01:45:56.080 | And I wanted to make sure that we had a stable position
01:46:00.400 | to be able to continue investing
01:46:01.720 | in these long-term ambitious projects that we have
01:46:05.400 | around continuing to push AI forward
01:46:07.680 | and continuing to push forward all the metaverse work.
01:46:10.320 | And in order to do that in light of the pretty big thrash
01:46:15.120 | that we had seen over the last 18 months,
01:46:18.120 | some of it macroeconomic induced,
01:46:21.480 | some of it competitively induced,
01:46:24.160 | some of it just because of bad decisions
01:46:27.360 | or things that we got wrong.
01:46:28.880 | I don't know, I just, I decided that we needed to get
01:46:32.320 | to a point where we were a lot leaner.
01:46:35.280 | But look, I mean, but then, okay,
01:46:36.720 | it's one thing to do that, to like decide that
01:46:38.720 | at a high level, then the question is,
01:46:40.360 | how do you execute that as compassionately as possible?
01:46:42.840 | And there's no good way.
01:46:44.400 | There's no perfect way for sure.
01:46:47.400 | And it's gonna be tough no matter what,
01:46:49.360 | but as a leadership team here,
01:46:53.440 | we've certainly spent a lot of time just thinking,
01:46:55.320 | okay, given that this is a thing that sucks,
01:46:58.400 | like what is the most compassionate way
01:47:01.000 | that we can do this?
01:47:01.960 | And that's what we've tried to do.
01:47:05.200 | - And you mentioned there's an increased focus
01:47:08.480 | on engineering, on tech, so technology teams,
01:47:13.160 | tech-focused teams, on building products that...
01:47:17.000 | - Yeah, I mean, I wanted to,
01:47:19.320 | I want to empower engineers more,
01:47:25.680 | the people who are building things, the technical teams.
01:47:28.440 | Part of that is making sure that the people
01:47:33.960 | who are building things aren't just at like
01:47:35.840 | the leaf nodes of the organization.
01:47:37.480 | I don't want like eight levels of management
01:47:40.640 | and then the people actually doing the work.
01:47:42.880 | So we made changes to make it so that
01:47:44.880 | you have individual contributor engineers reporting
01:47:47.120 | at almost every level up the stack,
01:47:49.200 | which I think is important because you're running a company.
01:47:51.160 | One of the big questions is latency
01:47:54.200 | of information that you get.
01:47:56.400 | We talked about this a bit earlier in terms of
01:47:59.480 | kind of the joy of, and the feedback that you get doing
01:48:03.720 | something like jujitsu compared to
01:48:05.640 | running a long-term project.
01:48:07.360 | But I actually think part of the art of running a company
01:48:09.960 | is trying to constantly re-engineer it
01:48:13.400 | so that your feedback loops get shorter
01:48:15.040 | so you can learn faster.
01:48:16.120 | And part of the way that you do that is by,
01:48:18.480 | I kind of think that every layer that you have
01:48:20.280 | in the organization means that information
01:48:24.360 | might need to get reviewed before it goes to you.
01:48:27.280 | And I think, you know, making it so that the people
01:48:29.160 | doing the work are as close to you as possible
01:48:31.800 | is pretty important.
01:48:34.440 | So there's that.
01:48:35.600 | I think over time, companies just build up
01:48:38.320 | very large support functions that are not doing
01:48:41.600 | the kind of core technical work.
01:48:43.280 | And those functions are very important,
01:48:45.400 | but I think having them in the right proportion
01:48:47.400 | is important.
01:48:48.400 | And if you try to do good work, but you don't have,
01:48:53.400 | you know, the right marketing team
01:48:56.240 | or the right legal advice, like you're gonna, you know,
01:49:00.240 | make some pretty big blunders.
01:49:01.920 | But at the same time, if you have, you know,
01:49:05.280 | if you just, like, have too much
01:49:09.200 | in some of these support roles,
01:49:11.040 | then that might make it so that things
01:49:14.120 | just don't move as fast.
01:49:15.120 | Maybe you're too conservative or you move a lot slower
01:49:19.560 | than you should otherwise.
01:49:22.080 | I just use, those are just examples.
01:49:23.680 | But it's, but-
01:49:25.760 | - How do you find that balance?
01:49:26.800 | That's really tough.
01:49:27.640 | - Yeah, no, but that's, it's a constant equilibrium
01:49:29.840 | that you're searching for.
01:49:31.680 | - Yeah, how many managers to have?
01:49:33.520 | What are the pros and cons of managers?
01:49:36.200 | - Well, I mean, I believe a lot in management.
01:49:38.480 | I mean, there are some people who think
01:49:39.400 | that it doesn't matter as much, but look, I mean,
01:49:41.480 | we have a lot of younger people at the company
01:49:43.680 | for whom this is their first job and, you know,
01:49:46.240 | people need to grow and learn in their career.
01:49:48.320 | And I think that all that stuff is important,
01:49:50.240 | but here's one mathematical way to look at it.
01:49:52.560 | You know, at the beginning of this, we,
01:49:58.480 | I asked our people team, what was the average number
01:50:01.760 | of reports that a manager had?
01:50:03.840 | And I think it was around three, maybe three to four,
01:50:08.520 | but closer to three.
01:50:10.080 | I was like, wow, like a manager can, you know,
01:50:13.360 | best practice is that a person can manage,
01:50:15.920 | you know, seven or eight people.
01:50:18.240 | But there was a reason why it was closer to three.
01:50:20.120 | It was because we were growing so quickly, right?
01:50:22.720 | And when you're hiring so many people so quickly,
01:50:25.960 | then that means that you need managers
01:50:28.320 | who have capacity to onboard new people.
01:50:31.240 | And also, if you have a new manager,
01:50:32.560 | you may not wanna have them have seven direct reports
01:50:34.880 | immediately 'cause you want them to ramp up.
01:50:36.880 | But the thing is going forward, I don't want us
01:50:39.800 | to actually hire that many people that quickly, right?
01:50:42.640 | So I actually think we'll just do better work
01:50:44.960 | if we have more constraints and we're, you know,
01:50:47.520 | leaner as an organization.
01:50:48.840 | So in a world where we're not adding
01:50:50.600 | so many people as quickly, is it as valuable
01:50:53.840 | to have a lot of managers who have extra capacity
01:50:56.040 | waiting for new people?
01:50:56.920 | No, right?
01:50:57.760 | So now we can sort of defragment the organization
01:51:01.640 | and get to a place where the average is closer
01:51:03.920 | to that seven or eight.
01:51:05.920 | And it just ends up being a somewhat more
01:51:08.720 | kind of compact management structure,
01:51:10.880 | which, you know, decreases the latency on information
01:51:14.160 | going up and down the chain,
01:51:15.320 | and I think empowers people more.
01:51:17.680 | But I mean, that's an example that I think
01:51:19.480 | doesn't kind of undervalue the importance of management
01:51:23.000 | and the kind of the personal growth or coaching
01:51:28.000 | that people need in order to do their jobs well.
01:51:30.520 | It's just, I think, realistically,
01:51:32.040 | we're just not gonna hire as many people going forward.
01:51:34.200 | So I think that you need a different structure.
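As a rough back-of-the-envelope illustration of the span-of-control math discussed here, the sketch below models a purely hypothetical tree-shaped org (the 10,000-person size and the spans are assumptions, not Meta's numbers) and shows how moving from roughly three reports per manager toward seven shrinks both the manager count and the number of layers information has to cross.

# Toy model: a strict tree where every manager has exactly `span` reports.
# Purely illustrative; real org charts are messier than this.
import math

def org_shape(individual_contributors: int, span: int) -> tuple[int, int]:
    """Return (total managers, management layers) needed above the ICs."""
    managers, layers, level = 0, 0, individual_contributors
    while level > 1:
        level = math.ceil(level / span)  # managers required for the level below
        managers += level
        layers += 1
    return managers, layers

for span in (3, 7):
    managers, layers = org_shape(10_000, span)  # hypothetical 10,000 ICs
    print(f"span {span}: about {managers} managers, {layers} layers to the top")
# span 3: about 5005 managers, 9 layers to the top
# span 7: about 1670 managers, 5 layers to the top

Under this toy model, widening the span from three to seven removes roughly two-thirds of the managers and about four layers between an individual contributor and the top, which is the latency point being made.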
01:51:36.360 | - This whole incredible hierarchy and network of humans
01:51:41.120 | that make up a company is fascinating.
01:51:43.080 | - Oh yeah.
01:51:43.920 | - Yeah.
01:51:45.200 | How do you hire great teams?
01:51:48.400 | How do you hire great, now with the focus on engineering
01:51:51.760 | and technical teams, how do you hire great engineers
01:51:55.840 | and great members of technical teams?
01:51:58.560 | - Well, you're asking how you select
01:52:01.320 | or how you attract them?
01:52:03.000 | - Both, but select, I think.
01:52:05.680 | I think attract is work on cool stuff and have a vision.
01:52:09.360 | (laughs)
01:52:10.200 | I think that's what we're talking about.
01:52:11.040 | - I think that's right, and have a track record
01:52:12.600 | that people think you're actually gonna be able to do it.
01:52:14.080 | - Yeah, to me, the select seems like more of the art form,
01:52:18.600 | more of the tricky thing.
01:52:20.080 | - Yeah.
01:52:20.920 | - To select the people that fit the culture
01:52:23.880 | and can get integrated the most effectively and so on.
01:52:26.840 | And maybe, especially when they're young,
01:52:29.400 | to see the magic through the resumes,
01:52:34.400 | through the paperwork and all this kind of stuff,
01:52:37.200 | to see that there's a special human there
01:52:39.480 | that would do incredible work.
01:52:42.880 | - So there are lots of different cuts on this question.
01:52:46.480 | I mean, I think when an organization is growing quickly,
01:52:49.920 | one of the big questions that teams face
01:52:53.440 | is do I hire this person who's in front of me now
01:52:55.800 | because they seem good,
01:52:57.520 | or do I hold out to get someone who's even better?
01:53:01.400 | And the heuristic that I always focused on for myself
01:53:06.400 | and my own kind of direct hiring that I think works
01:53:11.200 | when you recurse it through the organization
01:53:13.800 | is that you should only hire someone to be on your team
01:53:16.480 | if you would be happy working for them
01:53:18.280 | in an alternate universe.
01:53:19.880 | And I think that that kind of works,
01:53:22.080 | and that's basically how I've tried to build my team.
01:53:24.680 | I'm not in a rush to not be running the company,
01:53:28.840 | but I think in an alternate universe
01:53:30.360 | where one of these other folks was running the company,
01:53:32.400 | I'd be happy to work for them.
01:53:33.520 | I feel like I'd learn from them.
01:53:35.400 | I respect their kind of general judgment.
01:53:38.520 | They're all very insightful.
01:53:40.600 | They have good values.
01:53:41.840 | And I think that that gives you some rubric for...
01:53:47.560 | You can apply that at every layer.
01:53:48.920 | And I think if you apply that at every layer
01:53:50.400 | in the organization,
01:53:51.760 | then you'll have a pretty strong organization.
01:53:54.040 | Okay, in an organization that's not growing as quickly,
01:53:59.040 | the questions might be a little different though.
01:54:01.400 | And there, you asked about young people specifically,
01:54:05.920 | like people out of college.
01:54:07.640 | And one of the things that we see
01:54:09.760 | is it's a pretty basic lesson,
01:54:12.560 | but we have a much better sense of who the best people are
01:54:16.640 | who have interned at the company for a couple of months
01:54:19.440 | than by looking at kind of a resume
01:54:22.320 | or a short interview loop.
01:54:25.160 | I mean, obviously the in-person feel that you get
01:54:27.320 | from someone probably tells you more than the resume,
01:54:29.800 | and you can do some basic skills assessment,
01:54:33.600 | but a lot of the stuff really just is cultural.
01:54:36.400 | People thrive in different environments
01:54:38.480 | and on different teams, even within a specific company.
01:54:44.880 | And it's like the people who come
01:54:47.480 | for even a short period of time over a summer
01:54:50.520 | who do a great job here,
01:54:52.160 | you know that they're gonna be great
01:54:53.480 | if they came and joined full-time.
01:54:55.360 | And that's one of the reasons
01:54:56.720 | why we've invested so much in internship
01:54:59.120 | is basically it's a very useful sorting function,
01:55:03.840 | both for us and for the people
01:55:05.240 | who wanna try out the company.
01:55:06.880 | - You mentioned in-person,
01:55:08.000 | what do you think about remote work,
01:55:10.160 | a topic that's been discussed extensively
01:55:12.200 | over the past few years,
01:55:14.440 | because of the pandemic?
01:55:15.880 | - Yeah, I mean, I think it's a thing that's here to stay,
01:55:20.040 | but I think that there's value in both, right?
01:55:24.920 | It's not, you know,
01:55:27.320 | I wouldn't wanna run a fully remote company yet, at least.
01:55:31.400 | I think there's an asterisk on that, which is that-
01:55:34.200 | - Some of the other stuff you're working on, yeah.
01:55:36.240 | - Yeah, exactly.
01:55:37.080 | It's like all the, you know, metaverse work
01:55:39.880 | and the ability to be, to feel like you're truly present.
01:55:44.560 | No matter where you are.
01:55:45.800 | I think once you have that all dialed in,
01:55:48.480 | then we may, you know, one day reach a point
01:55:50.680 | where it really just doesn't matter as much
01:55:52.600 | where you are physically.
01:55:53.880 | But, I don't know, today it still does, right?
01:56:00.680 | So yeah, for people who,
01:56:04.520 | there are all these people who have special skills
01:56:07.000 | and wanna live in a place where we don't have an office.
01:56:09.880 | Are we better off having them at the company?
01:56:11.600 | Absolutely, right?
01:56:12.760 | And there are a lot of people who work at the company
01:56:15.400 | for several years and then, you know,
01:56:17.840 | build up the relationships internally
01:56:19.920 | and kind of have the trust
01:56:23.160 | and have a sense of how the company works.
01:56:24.960 | Can they go work remotely now if they want
01:56:26.720 | and still do it as effectively?
01:56:28.000 | And we've done all these studies that show it's like, okay,
01:56:30.280 | does that affect their performance?
01:56:31.560 | It does not.
01:56:32.400 | But, you know, for the new folks who are joining
01:56:36.480 | and for people who are earlier in their career
01:56:40.480 | and need to learn how to solve certain problems
01:56:43.080 | and need to get ramped up on the culture,
01:56:45.080 | you know, when you're working through
01:56:48.360 | really complicated problems
01:56:49.760 | where you don't just wanna sit in the,
01:56:51.480 | you don't just want the formal meeting,
01:56:52.880 | but you wanna be able to like brainstorm
01:56:55.000 | when you're walking in the hallway together
01:56:56.400 | after the meeting.
01:56:57.440 | I don't know, it's like we just haven't replaced
01:57:00.600 | the kind of in-person dynamics there
01:57:05.120 | with anything remote yet.
01:57:08.880 | So. - Yeah, there's a magic
01:57:10.400 | to the in-person that,
01:57:12.120 | we'll talk about this a little bit more,
01:57:13.560 | but I'm really excited by the possibilities
01:57:15.640 | in the next few years in virtual reality and mixed reality
01:57:18.960 | that are possible with high resolution scans.
01:57:21.840 | I mean, I, as a person who loves in-person interaction,
01:57:26.840 | like these podcasts in person,
01:57:29.560 | it would be incredible to achieve the level of realism
01:57:33.440 | I've gotten the chance to witness.
01:57:35.200 | But let me ask about that.
01:57:38.280 | - Yeah. - I got a chance
01:57:39.520 | to look at the Quest 3 headset,
01:57:44.200 | and it is amazing.
01:57:45.640 | You've announced it.
01:57:48.960 | You'll give some more details in the fall,
01:57:52.840 | maybe release in the fall.
01:57:53.760 | When is it getting released again?
01:57:55.000 | I forgot, you mentioned it.
01:57:56.320 | - We'll give more details at Connect,
01:57:57.800 | but it's coming this fall.
01:57:59.120 | - Okay.
01:57:59.960 | So it's priced at $499.
01:58:07.240 | What features are you most excited about there?
01:58:09.640 | - There are basically two big new things
01:58:11.320 | that we've added to Quest 3 over Quest 2.
01:58:14.400 | The first is high resolution mixed reality.
01:58:17.280 | And the basic idea here is that,
01:58:22.720 | you can think about virtual reality as you have the headset
01:58:26.280 | and all the pixels are virtual,
01:58:29.000 | and you're basically immersed in a different world.
01:58:32.160 | Mixed reality is where you see the physical world around you
01:58:35.720 | and you can place virtual objects in it,
01:58:37.280 | whether that's a screen to watch a movie
01:58:40.200 | or a projection of your virtual desktop,
01:58:42.440 | or you're playing a game where like zombies
01:58:44.840 | are coming out through the wall and you need to shoot them.
01:58:47.600 | Or we're playing Dungeons and Dragons or some board game
01:58:50.760 | and we just have a virtual version of the board
01:58:52.800 | in front of us while we're sitting here.
01:58:54.760 | All that's possible in mixed reality.
01:58:57.960 | And I think that that is going to be
01:58:59.800 | the next big capability on top of virtual reality.
01:59:02.520 | - It is done so well.
01:59:05.520 | I have to say as a person who experienced it today
01:59:08.440 | with zombies, having a full awareness of the environment
01:59:13.440 | and integrating that environment in the way they run at you
01:59:16.840 | while they try to kill you.
01:59:18.160 | So it's just the mixed reality,
01:59:20.560 | the pass through is really, really, really well done.
01:59:23.400 | And the fact that it's only $500 is really, it's well done.
01:59:28.240 | - Thank you.
01:59:29.080 | I mean, I'm super excited about it.
01:59:30.600 | I mean, our, and we put a lot of work
01:59:33.920 | into making the device both as good as possible
01:59:38.920 | and as affordable as possible
01:59:40.640 | because a big part of our mission and ethos here
01:59:43.240 | is we want people to be able to connect with each other.
01:59:46.320 | We want to reach and we want to serve a lot of people.
01:59:49.120 | We want to bring this technology to everyone.
01:59:51.800 | So we're not just trying to serve
01:59:53.520 | an elite, a wealthy crowd.
01:59:57.720 | We really want this to be accessible.
02:00:01.280 | So that is in a lot of ways
02:00:03.840 | an extremely hard technical problem
02:00:05.440 | because we don't just have the ability
02:00:08.280 | to put an unlimited amount of hardware in this.
02:00:10.880 | We needed to basically deliver something
02:00:12.960 | that works really well, but in an affordable package.
02:00:16.120 | And we started with Quest Pro last year.
02:00:18.200 | It was $1,500.
02:00:23.000 | And now we've lowered the price to a thousand,
02:00:25.240 | but in a lot of ways, the mixed reality in Quest 3
02:00:28.640 | is an even better and more advanced level
02:00:31.680 | than what we were able to deliver in Quest Pro.
02:00:33.480 | So I'm really proud of where we are with Quest 3 on that.
02:00:38.240 | It's going to work with all of the virtual reality titles
02:00:40.760 | and everything that existed there.
02:00:42.840 | So people who want to play fully immersive games,
02:00:45.240 | social experiences, fitness, all that stuff will work,
02:00:49.320 | but now you'll also get mixed reality too,
02:00:51.480 | which I think people really like
02:00:54.240 | because sometimes you want to be super immersed in a game,
02:00:58.360 | but a lot of the time,
02:01:00.440 | especially when you're moving around, if you're active,
02:01:02.520 | like you're doing some fitness experience,
02:01:04.920 | let's say you're doing boxing or something,
02:01:08.600 | it's like you kind of want to be able
02:01:10.000 | to see the room around you.
02:01:11.120 | So that way you know that I'm not going to punch a lamp
02:01:13.320 | or something like that.
02:01:14.480 | And I don't know if you got to play with this experience,
02:01:17.000 | but we basically have the,
02:01:18.440 | and it's just sort of like a fun little demo
02:01:20.360 | that we put together.
02:01:21.200 | But it's like you just,
02:01:23.800 | we're like in a conference room or your living room
02:01:26.560 | and you have the guy there and you're boxing him
02:01:30.120 | and you're fighting him and it's like.
02:01:31.760 | - All the other people are there too.
02:01:32.880 | I got a chance to do that.
02:01:34.120 | And all the people are there.
02:01:35.560 | It's like that guy's right there.
02:01:39.200 | - Yeah, it's like it's right in the room.
02:01:40.720 | - And the other humans, through the pass through,
02:01:42.480 | you're seeing them also, they can cheer you on,
02:01:44.360 | they can make fun of you
02:01:45.240 | if they're anything like friends of mine.
02:01:47.200 | And then just, yeah, it's really,
02:01:52.200 | it's a really compelling experience.
02:01:55.320 | And VR is really interesting too,
02:01:56.960 | but this is something else almost.
02:01:58.800 | This becomes integrated into your life, into your world.
02:02:03.560 | - Yeah, and it, so I think it's a completely new capability
02:02:06.880 | that will unlock a lot of different content.
02:02:09.280 | And I think it'll also just make the experience
02:02:11.480 | more comfortable for a set of people
02:02:13.120 | who didn't want to have only fully immersive experiences.
02:02:16.840 | I think if you want experiences where you're grounded in,
02:02:19.280 | you know, your living room and the physical world around you,
02:02:21.680 | now you'll be able to have that too.
02:02:23.800 | And I think that that's pretty exciting.
02:02:24.920 | - I really liked how it added windows
02:02:28.640 | to a room with no windows.
02:02:30.480 | - Yeah.
02:02:31.320 | - Me as a person.
02:02:32.160 | - Did you see the aquarium one
02:02:32.980 | where you could see the sharks swim up?
02:02:34.160 | Or was that just the zombie one?
02:02:35.520 | - Just the zombie one, but it's still outside.
02:02:37.640 | - You don't necessarily want windows added
02:02:39.360 | to your living room where zombies come out of,
02:02:41.000 | but yes, in the context of that game, it's yeah, yeah.
02:02:43.920 | - I enjoyed it 'cause you could see the nature outside.
02:02:47.560 | And me as a person that doesn't have windows,
02:02:50.000 | it's just nice to have nature.
02:02:52.120 | - Yeah, well.
02:02:53.720 | - Even if it's a mixed reality setting.
02:02:56.440 | I know it's a zombie game, but there's a zen nature,
02:03:01.400 | zen aspect to being able to look outside
02:03:03.720 | and alter your environment as you know it.
02:03:06.660 | - Yeah.
02:03:09.040 | There will probably be better, more zen ways to do that
02:03:12.400 | than the zombie game you're describing,
02:03:13.760 | but you're right that the basic idea
02:03:16.160 | of sort of having your physical environment on pass-through,
02:03:20.480 | but then being able to bring in different elements,
02:03:24.880 | I think it's gonna be super powerful.
02:03:27.640 | And in some ways, I think that these are,
02:03:30.720 | mixed reality is also a predecessor to,
02:03:33.160 | eventually we will get AR glasses
02:03:35.060 | that are not kind of the goggles form factor
02:03:37.420 | of the current generation of headsets
02:03:40.200 | that people are making.
02:03:41.680 | But I think a lot of the experiences
02:03:44.060 | that developers are making for mixed reality
02:03:46.260 | of basically you just have a kind of a hologram
02:03:48.680 | that you're putting in the world,
02:03:50.060 | will hopefully apply once we get the AR glasses too.
02:03:53.560 | Now that's got its own whole set of challenges and it's-
02:03:56.760 | - Well, the headset's already smaller
02:03:58.280 | than the previous version.
02:04:00.040 | - Oh yeah, it's 40% thinner.
02:04:01.720 | And the other thing that I think is good about it,
02:04:03.360 | yeah, so mixed reality was the first big thing.
02:04:05.960 | The second is it's just a great VR headset.
02:04:10.160 | It's, I mean, it's got 2X the graphics processing power,
02:04:13.220 | 40% sharper screens, 40% thinner, more comfortable,
02:04:18.880 | better strap architecture, all this stuff that,
02:04:21.280 | you know, if you liked Quest 2,
02:04:22.440 | I think that this is just gonna be,
02:04:24.040 | it's like all the content that you might've played
02:04:25.880 | in Quest 2 is just gonna get sharper automatically
02:04:27.980 | and look better in this.
02:04:29.000 | So it's, I think people are really gonna like it.
02:04:31.640 | Yeah, so this fall.
02:04:33.680 | - This fall, I have to ask,
02:04:36.280 | Apple just announced a mixed reality headset
02:04:40.020 | called Vision Pro for $3,500, available in early 2024.
02:04:45.020 | What do you think about this headset?
02:04:48.640 | - Well, I saw the materials when they launched.
02:04:51.800 | I haven't gotten a chance to play with it yet.
02:04:53.800 | So kind of take everything with a grain of salt,
02:04:56.160 | but a few high level thoughts.
02:04:58.980 | I mean, first, you know, I do think that this is
02:05:03.980 | a certain level of validation for the category, right?
02:05:09.940 | Where, you know, we were the primary folks out there
02:05:13.520 | before saying, hey, I think that this, you know,
02:05:16.800 | virtual reality, augmented reality, mixed reality,
02:05:19.000 | this is gonna be a big part of the next computing platform.
02:05:21.960 | I think having Apple come in and share that vision
02:05:28.240 | will make a lot of people who are fans of their products
02:05:34.360 | really consider that.
02:05:36.400 | And then, you know, of course the $3,500 price,
02:05:41.700 | you know, on the one hand, I get it
02:05:45.000 | for with all the stuff that they're trying to pack in there.
02:05:47.200 | On the other hand, a lot of people aren't gonna find that
02:05:49.680 | to be affordable.
02:05:51.240 | So I think that there's a chance that them coming in
02:05:53.840 | actually increases demand for the overall space
02:05:57.600 | and that Quest 3 is actually the primary beneficiary of that
02:06:01.400 | because a lot of the people who might say,
02:06:03.880 | hey, you know, this, like I'm gonna give another
02:06:07.160 | consideration to this, or, you know,
02:06:09.280 | now I understand maybe what mixed reality is more
02:06:11.960 | and Quest 3 is the best one on the market
02:06:14.480 | that I can afford.
02:06:16.320 | And it's great also, right?
02:06:18.280 | And, you know, in our own way,
02:06:20.920 | there are a lot of features that we have
02:06:23.120 | where we're leading.
02:06:24.280 | So I think that that
02:06:29.040 | could be quite good.
02:06:30.280 | And then obviously over time,
02:06:33.120 | the companies are just focused on
02:06:35.560 | somewhat different things, right?
02:06:36.960 | Apple has always, you know, I think focused on building
02:06:42.280 | really kind of high-end things,
02:06:45.280 | whereas our focus has been on,
02:06:49.240 | it's just, we have a more democratic ethos.
02:06:51.480 | We wanna build things that are accessible
02:06:53.800 | to a wider number of people.
02:06:55.440 | You know, we've sold tens of millions of Quest devices.
02:07:00.080 | My understanding, just based on rumors,
02:07:05.040 | I don't have any special knowledge on this,
02:07:06.320 | is that Apple is building about one million
02:07:08.480 | of their device, right?
02:07:10.600 | So just in terms of like what you kind of expect
02:07:13.440 | in terms of sales numbers,
02:07:15.000 | I just think that this is, I mean,
02:07:19.040 | Quest is gonna be the primary thing that people
02:07:23.000 | in the market will continue using
02:07:25.000 | for the foreseeable future.
02:07:25.960 | And then obviously over the long term,
02:07:27.000 | it's up to the companies to see how well we each executed
02:07:29.840 | the different things that we're doing.
02:07:31.320 | But we kind of come at it from different places.
02:07:33.000 | We're very focused on social interaction, communication,
02:07:39.360 | being more active, right?
02:07:40.800 | So there's fitness, there's gaming, there are those things.
02:07:43.800 | You know, whereas I think a lot of the use cases
02:07:46.600 | that you saw in Apple's launch material
02:07:50.680 | were more around people sitting,
02:07:53.640 | you know, people looking at screens, which are great.
02:07:56.800 | I think that you will replace your laptop over time
02:07:59.280 | with a headset.
02:08:00.640 | But I think in terms of kind of how
02:08:03.440 | the different use cases that the companies are going after,
02:08:07.280 | they're a bit different for where we are right now.
02:08:10.360 | - Yeah, so gaming wasn't a big part of the presentation,
02:08:13.680 | which is interesting.
02:08:15.320 | It feels like mixed reality gaming's
02:08:19.720 | such a big part of that.
02:08:20.960 | It was interesting to see it missing in the presentation.
02:08:24.400 | - Well, I mean, look, there are certain design trade-offs
02:08:27.080 | in this where, you know, they,
02:08:30.800 | I think they made this point about not wanting
02:08:32.560 | to have controllers, which on the one hand,
02:08:35.360 | there's a certain elegance about just being able
02:08:37.040 | to navigate the system with eye gaze and hand tracking.
02:08:41.120 | And by the way, you'll be able to just navigate Quest
02:08:43.920 | with your hands too, if that's what you want.
02:08:46.640 | - Yeah, one of the things I should mention
02:08:48.360 | is that the capability from the cameras
02:08:51.760 | with computer vision to detect certain aspects of the hand,
02:08:56.160 | allowing you to have a controller
02:08:57.360 | that doesn't have that ring thing.
02:08:59.000 | - Yeah, the hand tracking in Quest 3
02:09:01.120 | and the controller tracking is a big step up
02:09:03.640 | from the last generation.
02:09:05.600 | And one of the demos that we have is basically
02:09:09.720 | an MR experience teaching you how to play piano
02:09:12.120 | where it basically highlights the notes
02:09:13.560 | that you need to play and it's like,
02:09:14.760 | we're just all, it's hands, it's no controllers.
02:09:16.880 | But I think if you care about gaming,
02:09:20.480 | having a controller allows you to have a more tactile feel
02:09:25.480 | and allows you to capture fine motor movement
02:09:30.280 | much more precisely than what you can do with hands
02:09:34.520 | without something that you're touching.
02:09:36.000 | So again, I think there are certain questions
02:09:38.520 | which are just around what use cases are you optimizing for?
02:09:42.040 | I think if you wanna play games,
02:09:45.760 | then I think that you wanna design the system
02:09:49.200 | in a different way and we're more focused
02:09:52.160 | on kind of social experiences, entertainment experiences.
02:09:56.960 | Whereas if what you want is to make sure that the text
02:10:01.560 | that you read on a screen is as crisp as possible,
02:10:04.600 | then you need to make the design and cost trade-offs
02:10:08.080 | that they made that lead you to making a $3,500 device.
02:10:12.320 | So I think that there is a use case for that for sure,
02:10:14.680 | but I just think that the company
02:10:17.360 | is we've basically made different design trade-offs
02:10:20.360 | to get to the use cases that we're trying to serve.
02:10:24.560 | - There's a lot of other stuff I'd love to talk to you
02:10:27.520 | about the Metaverse, especially the Kodak Avatar,
02:10:31.880 | which I've gotten to experience a lot
02:10:33.520 | of different variations of recently
02:10:35.160 | that I'm really, really excited about.
02:10:37.000 | - Yeah, I'm excited to talk about that too.
02:10:39.200 | - I'll have to wait a little bit because,
02:10:41.420 | well, I think there's a lot more to show off in that regard.
02:10:47.920 | But let me step back to AI.
02:10:50.240 | I think we've mentioned it a little bit,
02:10:51.840 | but I'd like to linger on this question
02:10:55.640 | that folks like Eliezer Yudkowsky and others worry about,
02:11:00.640 | of the existential, the serious threats of AI
02:11:05.320 | that have been reinvigorated now
02:11:07.080 | with the rapid developments of AI systems.
02:11:09.880 | Do you worry about the existential risks of AI
02:11:14.680 | as Eliezer does, about the alignment problem,
02:11:17.960 | about this getting out of hand?
02:11:20.640 | - Anytime where there's a number of serious people
02:11:23.120 | who are raising a concern that is that existential
02:11:27.480 | about something that you're involved with,
02:11:28.960 | I think you have to think about it, right?
02:11:31.000 | So I've spent quite a bit of time thinking about it
02:11:33.960 | from that perspective.
02:11:35.160 | Where I basically have come out
02:11:42.200 | on this for now is that,
02:11:44.640 | over time, I think that we need to think about this
02:11:47.520 | even more as we approach something
02:11:50.120 | that could be closer to super intelligence.
02:11:53.040 | I just think it's pretty clear
02:11:54.360 | to anyone working on these projects today
02:11:56.440 | that we're not there.
02:11:58.080 | And one of my concerns is that,
02:12:01.760 | we spent a fair amount of time on this before,
02:12:04.600 | but there are more,
02:12:06.840 | I don't know if mundane is the right word,
02:12:11.720 | but there's concerns that already exist, right?
02:12:14.960 | About people using AI tools to do harmful things
02:12:19.720 | of the type that we're already aware,
02:12:21.120 | whether we talked about fraud or scams
02:12:23.800 | or different things like that.
02:12:25.320 | And that's going to be a pretty big set of challenges
02:12:31.560 | that the companies working on this
02:12:32.760 | are gonna need to grapple with,
02:12:34.320 | regardless of whether there is an existential concern
02:12:38.120 | as well at some point down the road.
02:12:40.040 | So I do worry that to some degree,
02:12:42.960 | people can get a little too focused on
02:12:49.440 | some of the tail risk
02:12:51.600 | and then not do as good of a job as we need to
02:12:54.480 | on the things that you can be almost certain
02:12:57.680 | are going to come down the pipe
02:12:59.440 | as real risks that kind of manifest themselves
02:13:03.840 | in the near term.
02:13:04.680 | So for me, I've spent most of my time on that
02:13:08.160 | once I kind of made the realization
02:13:12.080 | that the size of models that we're talking about now
02:13:15.120 | in terms of what we're building
02:13:16.400 | are just quite far from the super intelligence type concerns
02:13:20.040 | that people raise.
02:13:22.200 | But I think once we get a couple of steps closer to that,
02:13:25.800 | I know as we do get closer,
02:13:26.960 | I think that those,
02:13:27.920 | there are going to be some novel risks and issues
02:13:32.520 | about how we make sure that the systems are safe for sure.
02:13:36.720 | I guess here just to take the conversation
02:13:38.480 | in a somewhat different direction,
02:13:40.160 | I think in some of these debates around safety,
02:13:45.720 | I think the concepts of intelligence and autonomy
02:13:50.040 | or like the being of the thing,
02:13:55.120 | as an analogy, they get kind of conflated together.
02:13:59.680 | And I think it very well could be the case
02:14:03.440 | that you can make something and scale intelligence quite far,
02:14:07.240 | but that may not manifest the safety concerns
02:14:14.920 | that people are saying in the sense that,
02:14:16.440 | I mean, just if you look at human biology,
02:14:18.560 | it's like, all right, our neocortex
02:14:20.200 | is where all the thinking happens, right?
02:14:23.000 | But it's not really calling the shots
02:14:25.800 | at the end of the day.
02:14:26.640 | We have a much more primitive, old brain structure
02:14:31.080 | for which our neocortex, which is this powerful machinery,
02:14:34.160 | is basically just a kind of prediction and reasoning engine
02:14:38.080 | to help our kind of very simple brain
02:14:44.640 | decide how to plan and do what it needs to do
02:14:48.400 | in order to achieve these like very kind of basic impulses.
02:14:52.440 | And I think that you can think about some of the development
02:14:57.400 | of intelligence along the same lines
02:15:00.200 | where just like our neocortex doesn't have free will
02:15:03.240 | or autonomy, we might develop these wildly intelligent
02:15:07.920 | systems that are much more intelligent
02:15:11.080 | than our neocortex have much more capacity,
02:15:13.600 | but are in the same way that our neocortex
02:15:15.880 | is sort of subservient and is used as a tool
02:15:18.760 | by our kind of simple impulse brain.
02:15:22.160 | It's, I think that it's not out of the question
02:15:25.400 | that very intelligent systems that have the capacity
02:15:28.160 | to think will kind of act as sort of an extension
02:15:31.560 | of the neocortex in that way.
02:15:33.640 | So I think my own view is that where we really need
02:15:37.520 | to be careful is on the development of autonomy
02:15:42.680 | and how we think about that,
02:15:44.280 | because it's actually the case that relatively simple
02:15:49.280 | and unintelligent things that have runaway autonomy
02:15:52.360 | and just spread themselves or,
02:15:54.400 | you know, it's like we have a word for that,
02:15:55.800 | it's a virus, right?
02:15:56.960 | It's, I mean, like it's can be simple computer code
02:15:59.520 | that is not particularly intelligent,
02:16:01.000 | but just spreads itself and does a lot of harm,
02:16:03.360 | biologically or computer.
02:16:06.880 | And I just think that these are somewhat separable things.
02:16:12.880 | And a lot of what I think we need to develop
02:16:15.680 | when people talk about safety and responsibility
02:16:18.160 | is really the governance on the autonomy
02:16:21.280 | that can be given to systems.
02:16:23.720 | And to me, if I were a policymaker or thinking about this,
02:16:28.720 | I would really wanna think about that distinction
02:16:31.440 | between these, where I think building intelligent systems
02:16:34.120 | will be, can create a huge advance
02:16:36.600 | in terms of people's quality of life
02:16:38.760 | and productivity growth in the economy.
02:16:42.400 | But it's the autonomy part of this
02:16:45.120 | that I think we really need to make progress
02:16:47.680 | on how to govern these things responsibly
02:16:50.120 | before we build the capacity for them
02:16:54.280 | to make a lot of decisions on their own
02:16:56.600 | or give them goals or things like that.
02:17:00.360 | And I know that's a research problem,
02:17:01.960 | but I do think that to some degree,
02:17:03.160 | these are somewhat separable things.
02:17:06.920 | - I love the distinction between intelligence and autonomy
02:17:09.760 | and the metaphor with the neocortex.
02:17:12.180 | Let me ask about power.
02:17:15.880 | So building superintelligent systems,
02:17:19.960 | even if it's not in the near term,
02:17:22.040 | I think Meta is one of the few companies,
02:17:25.520 | if not the main company,
02:17:28.220 | that will develop the superintelligent system.
02:17:31.360 | And you are a man who's at the head of this company.
02:17:34.560 | Building AGI might make you
02:17:36.440 | the most powerful man in the world.
02:17:37.960 | Do you worry that that power will corrupt you?
02:17:40.260 | - What a question.
02:17:43.580 | I mean, look, I think realistically,
02:17:48.560 | this gets back to the open source things
02:17:50.300 | that we talked about before,
02:17:51.800 | which is I don't think that the world will be best served
02:17:56.800 | by any small number of organizations
02:18:03.520 | having this without it being something
02:18:06.440 | that is more broadly available.
02:18:09.200 | And I think if you look through history,
02:18:11.200 | it's when there are these sort of like unipolar advances
02:18:16.480 | and, like, power imbalances
02:18:19.240 | that lead to kind of weird situations.
02:18:23.960 | So this is one of the reasons why I think open source
02:18:26.880 | is generally the right approach.
02:18:31.520 | And I think it's a categorically different question today
02:18:34.720 | when we're not close to superintelligence.
02:18:36.840 | I think that there's a good chance
02:18:37.760 | that even once we get closer to superintelligence,
02:18:40.120 | open sourcing remains the right approach,
02:18:42.100 | even though I think at that point
02:18:43.000 | it's a somewhat different debate.
02:18:44.640 | But I think part of that is that that is,
02:18:49.200 | I think one of the best ways to ensure
02:18:51.680 | that the system is as secure and safe as possible,
02:18:54.200 | because it's not just about
02:18:55.340 | a lot of people having access to it.
02:18:57.160 | It's the scrutiny that kind of comes
02:18:59.360 | with building an open source system.
02:19:02.360 | But I think that this is a pretty widely accepted thing
02:19:04.440 | about open source, which is that you have the code out there,
02:19:08.760 | so anyone can see the vulnerabilities.
02:19:11.240 | Anyone can kind of mess with it in different ways.
02:19:13.880 | People can spin off their own projects
02:19:15.520 | and experiment in a ton of different ways.
02:19:17.720 | And the net result of all of that
02:19:20.000 | is that the systems just get hardened
02:19:22.040 | and get to be a lot safer and more secure.
02:19:25.920 | So I think that there's a chance
02:19:29.520 | that that ends up being the way that this goes,
02:19:34.040 | a pretty good chance,
02:19:35.440 | and that having this be open
02:19:39.120 | both leads to a healthier development of the technology
02:19:43.120 | and also leads to a more balanced distribution
02:19:47.440 | of the technology in a way that strikes me
02:19:50.800 | as good values to aspire to.
02:19:53.120 | - So to you, there's risks to open sourcing,
02:19:55.960 | but the benefits outweigh the risks.
02:19:57.760 | And it's interesting,
02:20:00.080 | I think the way you put it,
02:20:01.920 | you put it well,
02:20:04.280 | that there's a different discussion now
02:20:06.760 | than when we get closer
02:20:08.240 | to development of superintelligence,
02:20:11.440 | of the benefits and risks of open sourcing.
02:20:15.460 | - Yeah, and to be clear,
02:20:16.360 | I feel quite confident in the assessment
02:20:18.680 | that open sourcing models now is net positive.
02:20:23.360 | I think there's a good argument
02:20:25.120 | that in the future it will be too,
02:20:26.960 | even as you get closer to superintelligence,
02:20:28.720 | but I've not, I certainly have not decided on that yet.
02:20:32.240 | And I think that it becomes
02:20:33.220 | a somewhat more complex set of questions
02:20:35.680 | that I think people will have time to debate
02:20:37.300 | and will also be informed by what happens
02:20:39.520 | between now and then to make those decisions.
02:20:41.600 | We don't have to necessarily
02:20:43.160 | just debate that in theory right now.
02:20:45.120 | - What year do you think we'll have a superintelligence?
02:20:48.960 | - I don't know, I mean, that's pure speculation.
02:20:52.080 | I think it's very clear just taking a step back
02:20:55.320 | that we had a big breakthrough in the last year, right?
02:20:57.480 | Where the LLMs and diffusion models
02:21:00.020 | basically reached a scale where they're able
02:21:02.600 | to do some pretty interesting things.
02:21:05.280 | And then I think the question is what happens from here?
02:21:07.280 | And just to paint the two extremes,
02:21:09.680 | on one side, it's like, okay,
02:21:15.600 | well, we just had one breakthrough.
02:21:17.440 | If we just have like another breakthrough like that,
02:21:19.360 | or maybe two, then we can have something
02:21:21.200 | that's truly crazy, right?
02:21:23.240 | And is like, is just like so much more advanced.
02:21:28.240 | And on that side of the argument, it's like, okay,
02:21:31.880 | well, maybe we're,
02:21:32.820 | maybe we're only a couple of big steps away
02:21:38.120 | from reaching something that looks more
02:21:41.600 | like general intelligence.
02:21:43.320 | Okay, that's one side of the argument.
02:21:45.800 | And the other side, which is what we've historically seen
02:21:47.800 | a lot more, is that a breakthrough leads to,
02:21:50.960 | in that Gartner hype cycle, there's like the hype,
02:21:58.320 | and then there's the trough of disillusionment after,
02:22:00.960 | when like people think that there's a chance that, hey,
02:22:03.600 | okay, there's a big breakthrough.
02:22:04.640 | Maybe we're about to get another big breakthrough.
02:22:06.040 | And it's like, actually, you're not about
02:22:07.480 | to get another breakthrough.
02:22:08.960 | Maybe you're actually just gonna have to sit
02:22:10.760 | with this one for a while.
02:22:12.320 | And, you know, it could be five years,
02:22:17.320 | it could be 10 years, it could be 15 years
02:22:19.800 | until you figure out the, kind of the next big thing
02:22:24.040 | that needs to get figured out.
02:22:25.600 | And, but I think that the fact that we just had
02:22:28.680 | this breakthrough sort of makes it so that we're at a point
02:22:32.720 | of almost very wide error bars on what happens next.
02:22:36.520 | I think the traditional technical view,
02:22:40.280 | or like looking at the industry,
02:22:42.240 | would suggest that we're not just going to stack
02:22:46.160 | breakthrough on top of breakthrough
02:22:48.120 | on top of breakthrough every six months or something.
02:22:51.840 | Right now, I think it will, I'm guessing,
02:22:54.600 | I would guess that it will take somewhat longer
02:22:56.520 | in between these, but I don't know.
02:23:01.400 | I tend to be pretty optimistic about breakthroughs too.
02:23:03.400 | So I mean, I think if you normalize
02:23:05.800 | for my normal optimism, then maybe it would be
02:23:08.600 | even slower than what I'm saying.
02:23:10.320 | But even within that, like I'm not even opining
02:23:13.480 | on the question of how many breakthroughs are required
02:23:15.400 | to get to general intelligence, because no one knows.
02:23:18.040 | - But this particular breakthrough was so,
02:23:21.520 | such a small step that resulted in such a big leap
02:23:25.920 | in performance as experienced by human beings
02:23:30.080 | that it makes you think, wow, are we,
02:23:33.280 | as we stumble across this very open world of research,
02:23:37.760 | will we stumble across another thing
02:23:41.600 | that will have a giant leap in performance?
02:23:43.960 | And also we don't know exactly at which stage
02:23:49.800 | is it really going to be impressive,
02:23:52.160 | 'cause it feels like it's really encroaching
02:23:54.320 | on impressive levels of intelligence.
02:23:57.960 | You still didn't answer the question
02:23:59.720 | of what year we're going to have super intelligence.
02:24:02.040 | I'd like to hold you to that.
02:24:03.680 | No, I'm just kidding.
02:24:04.520 | But is there something you could say about the timeline
02:24:07.840 | as you think about the development
02:24:10.920 | of AGI super intelligence systems?
02:24:14.920 | - Sure, so I still don't think I have any particular insight
02:24:19.360 | on when like a singular AI system
02:24:21.800 | that is a general intelligence will get created.
02:24:24.120 | But I think the one thing that most people
02:24:26.240 | in the discourse that I've seen about this
02:24:28.840 | haven't really grappled with is that we do seem to have
02:24:33.960 | organizations and structures in the world
02:24:37.560 | that exhibit greater than human intelligence already.
02:24:40.200 | So one example is a company.
02:24:44.440 | It acts as an entity, it has a singular brand.
02:24:48.480 | Obviously it's a collection of people,
02:24:50.920 | but I certainly hope that Meta
02:24:53.440 | with tens of thousands of people
02:24:55.480 | makes smarter decisions than one person.
02:24:57.760 | I think that would be pretty bad if it didn't.
02:25:01.640 | Another example that I think is even more removed
02:25:05.080 | from kind of the way we think about
02:25:07.520 | like the personification of intelligence,
02:25:11.000 | which is often implied in some of these questions,
02:25:13.480 | is think about something like the stock market.
02:25:15.680 | Where the stock market is, it takes inputs,
02:25:18.440 | it's a distributed system,
02:25:19.520 | it's like this cybernetic organism
02:25:21.680 | where probably millions of people around the world
02:25:26.400 | are basically voting every day
02:25:28.480 | by choosing what to invest in.
02:25:30.800 | But it's basically this organism or structure
02:25:35.800 | that is smarter than any individual
02:25:39.280 | that we use to allocate capital
02:25:42.280 | as efficiently as possible around the world.
02:25:44.480 | And I do think that this notion that there are already
02:25:49.480 | these cybernetic systems that are either melding
02:25:56.400 | the intelligence of multiple people together
02:26:00.400 | or melding the intelligence of multiple people
02:26:02.880 | and technology together to form something
02:26:06.440 | which is dramatically more intelligent
02:26:09.400 | than any individual in the world
02:26:12.120 | is something that seems to exist
02:26:17.360 | and that we seem to be able to harness
02:26:20.160 | in a productive way for our society
02:26:22.200 | as long as we basically build these structures
02:26:25.120 | in balance with each other.
02:26:27.200 | So I don't know, I mean, that at least gives me hope
02:26:31.640 | that as we advance the technology,
02:26:33.360 | and I don't know how long exactly it's gonna be,
02:26:34.960 | but you asked, when is this gonna exist?
02:26:36.960 | I think to some degree we already have
02:26:39.480 | many organizations in the world
02:26:41.000 | that are smarter than a single human.
02:26:42.840 | And that seems to be something
02:26:44.400 | that is generally productive in advancing humanity.
02:26:46.880 | - And somehow the individual AI systems
02:26:49.520 | empower the individual humans
02:26:51.240 | and the interaction between those humans
02:26:53.240 | to make that collective intelligence machinery
02:26:56.040 | that you're referring to smarter.
02:26:57.800 | So it's not like AI is becoming super intelligent,
02:27:00.520 | it's just becoming the engine
02:27:03.120 | that's making the collective intelligence,
02:27:04.920 | which is primarily human, more intelligent.
02:27:07.120 | - Yeah.
02:27:09.000 | - It's educating the humans better,
02:27:10.680 | it's making them better informed,
02:27:12.820 | it's making it more efficient for them
02:27:15.480 | to communicate effectively and debate ideas,
02:27:19.000 | and through that process,
02:27:20.440 | just making the whole collective intelligence
02:27:22.160 | more and more and more intelligent.
02:27:24.340 | Maybe faster than the individual AI systems
02:27:26.940 | that are trained on human data anyway are becoming.
02:27:30.500 | Maybe the collective intelligence of the human species
02:27:32.780 | might outpace the development of AI.
02:27:34.740 | Just like--
02:27:36.860 | - I think there's a balance in here,
02:27:37.820 | because I mean, if a lot of the input
02:27:40.920 | that the systems are being trained on
02:27:44.700 | is basically coming from feedback from people,
02:27:47.660 | then a lot of the development
02:27:49.380 | does need to happen in human time, right?
02:27:51.860 | It's not like a machine will just be able to go learn
02:27:55.860 | all the stuff about how people think about stuff,
02:27:58.220 | there's a cycle to how this needs to work.
02:28:00.980 | - This is an exciting world we're living in,
02:28:04.660 | and that you're at the forefront of developing.
02:28:07.180 | One of the ways you keep yourself humble,
02:28:09.540 | like we mentioned with jiu-jitsu,
02:28:11.520 | is doing some really difficult challenges,
02:28:14.440 | mental and physical.
02:28:16.160 | One of those you've done very recently
02:28:19.220 | is the Murph Challenge,
02:28:21.540 | and you got a really good time.
02:28:22.940 | It's 100 pull-ups, 200 push-ups, 300 squats,
02:28:25.660 | and a mile run before and a mile run after.
02:28:28.880 | You got under 40 minutes on that.
02:28:32.500 | What was the hardest part?
02:28:35.340 | I think a lot of people were very impressed.
02:28:37.420 | It's a very impressive time.
02:28:38.820 | - Yeah, I was pretty happy. - How crazy are you?
02:28:41.700 | I guess is the question I'm asking.
02:28:44.140 | - It wasn't my best time,
02:28:44.980 | but anything under 40 minutes I'm happy with.
02:28:48.180 | - It wasn't your best time?
02:28:49.580 | - No, I think I've done it a little faster before,
02:28:52.420 | but not much.
02:28:53.260 | Of my friends, I did not win on Memorial Day.
02:28:58.100 | One of my friends did it actually
02:28:59.780 | several minutes faster than me.
02:29:01.940 | But just to clear up one thing that I think was,
02:29:04.860 | I saw a bunch of questions about this on the internet.
02:29:07.060 | There are multiple ways to do the Murph Challenge.
02:29:09.500 | There's a kind of partitioned mode
02:29:12.060 | where you do sets of pull-ups, push-ups, and squats together,
02:29:17.060 | and then there's unpartitioned
02:29:18.660 | where you do the 100 pull-ups,
02:29:20.260 | and then the 200 push-ups,
02:29:22.500 | and then the 300 squats in serial.
02:29:25.740 | And obviously if you're doing them unpartitioned,
02:29:29.700 | then it takes longer to get through the 100 pull-ups
02:29:33.020 | 'cause anytime you're resting in between the pull-ups,
02:29:35.620 | you're not also doing push-ups and squats.
02:29:38.020 | So yeah, I'm sure my unpartitioned time
02:29:40.540 | would be quite a bit slower.
02:29:42.460 | But no, I think at the end of this,
02:29:48.220 | I don't know, first of all,
02:29:49.060 | I think it's a good way to honor Memorial Day.
02:29:51.260 | This Lieutenant Murphy, basically,
02:29:57.020 | this was one of his favorite exercises,
02:30:00.900 | and I just try to do it on Memorial Day each year.
02:30:03.820 | And it's a good workout.
02:30:05.660 | I got my older daughters to do it with me this time.
02:30:09.500 | My oldest daughter wants a weight vest
02:30:12.940 | because she sees me doing it with a weight vest.
02:30:15.100 | I don't know if a seven-year-old
02:30:16.340 | should be using a weight vest to do pull-ups.
02:30:19.100 | - Difficult question a parent must ask themselves, yes.
02:30:23.460 | - I was like, maybe I can make you a very lightweight vest,
02:30:25.620 | but I didn't think it was good for this.
02:30:27.100 | So she basically did a quarter Murph.
02:30:28.940 | So she ran a quarter mile and then did 25 pull-ups,
02:30:33.380 | 50 push-ups, and 75 air squats,
02:30:37.100 | then ran another quarter mile in 15 minutes,
02:30:40.980 | which I was pretty impressed by.
02:30:43.300 | And my five-year-old too.
02:30:44.940 | So I was excited about that.
02:30:48.340 | And I'm glad that I'm teaching them
02:30:50.740 | kind of the value of physicality.
02:30:54.700 | I think a good day for Max, my daughter,
02:30:57.620 | is when she gets to go to the gym with me
02:30:59.380 | and cranks out a bunch of pull-ups.
02:31:00.900 | And I love that about her.
02:31:03.540 | I mean, I think it's good.
02:31:04.780 | She's, you know, hopefully I'm teaching her
02:31:07.460 | some good lessons.
02:31:08.980 | - I mean, the broader question here is,
02:31:11.660 | given how busy you are,
02:31:12.700 | given how much stuff you have going on in your life,
02:31:14.860 | what's like the perfect exercise regimen for you
02:31:19.860 | to keep yourself happy,
02:31:25.300 | to keep yourself productive in your main line of work?
02:31:28.640 | - Yeah, so I mean, right now
02:31:31.260 | I've focused most of my workouts on fighting.
02:31:35.580 | So jujitsu and MMA.
02:31:38.980 | But I don't know.
02:31:42.220 | I mean, maybe if you're a professional,
02:31:43.700 | you can do that every day.
02:31:44.580 | I can't.
02:31:45.420 | I just get, you know, it's too many bruises
02:31:48.340 | and things that you need to recover from.
02:31:49.740 | So I do that, you know, three to four times a week.
02:31:52.060 | And then the other days,
02:31:55.660 | I just try to do a mix of things,
02:31:57.940 | like just cardio conditioning, strength building, mobility.
02:32:02.060 | - So you try to do something physical every day?
02:32:04.140 | - Yeah, I try to, unless I'm just so tired
02:32:06.020 | that I just need to relax.
02:32:08.460 | But then I'll still try to like go for a walk or something.
02:32:10.500 | I mean, even here, I don't know.
02:32:13.460 | Have you been on the roof here yet?
02:32:14.900 | - No.
02:32:15.740 | - We'll go on the roof after this.
02:32:17.420 | - I've heard things.
02:32:17.420 | - But it's like, we designed this building
02:32:18.500 | and I put a park on the roof.
02:32:20.420 | So that way, that's like my meetings
02:32:22.260 | when I'm just doing kind of a one-on-one
02:32:24.140 | or talking to a couple of people.
02:32:25.820 | I have a very hard time just sitting.
02:32:27.940 | I feel like it gets super stiff.
02:32:29.540 | It like feels really bad.
02:32:31.040 | But I don't know.
02:32:34.740 | Being physical is very important to me.
02:32:36.420 | I think it's, I do not believe,
02:32:39.060 | this gets to the question about AI.
02:32:41.460 | I don't think that a being is just a mind.
02:32:44.300 | I think we're kind of meant to do things
02:32:48.540 | physically, and a lot of the sensations
02:32:52.180 | that we feel are connected to that.
02:32:55.620 | And I think that that's a lot of what makes you a human
02:32:58.140 | is basically having those,
02:33:01.740 | having that set of sensations and experiences around that
02:33:06.740 | coupled with a mind to reason about them.
02:33:11.400 | But I don't know.
02:33:13.160 | I think it's important for balance to kind of get out,
02:33:18.160 | challenge yourself in different ways,
02:33:20.440 | learn different skills, clear your mind.
02:33:23.440 | - Do you think AI, in order to become super intelligent,
02:33:27.520 | an AGI should have a body?
02:33:28.920 | - It depends on what the goal is.
02:33:35.320 | I think that there's this assumption in that question
02:33:39.080 | that intelligence should be kind of person-like.
02:33:44.080 | Whereas, as we were just talking about,
02:33:46.980 | you can have these greater than single human
02:33:51.720 | intelligent organisms like the stock market,
02:33:54.760 | which obviously do not have bodies
02:33:56.280 | and do not speak a language, right?
02:33:57.920 | And just kind of have their own system.
02:34:02.440 | But, so I don't know.
02:34:06.640 | My guess is there will be limits to what a system
02:34:10.840 | that is purely an intelligence can understand
02:34:13.120 | about the human condition without having the same,
02:34:17.040 | not just senses, but our bodies change as we get older.
02:34:22.040 | Right, and we kind of evolve.
02:34:24.080 | And I think that those very subtle physical changes
02:34:29.080 | just drive a lot of social patterns and behavior
02:34:34.780 | around when you choose to have kids, right?
02:34:37.280 | Like just like all these, that's not even subtle,
02:34:39.160 | that's a major one, right?
02:34:40.120 | But like how you design things around the house.
02:34:43.960 | So, yeah, I mean, I think if the goal is to understand
02:34:48.420 | people as much as possible, then I think
02:34:50.920 | trying to model those sensations
02:34:54.320 | is probably somewhat important.
02:34:55.540 | But I think that there's a lot of value
02:34:57.260 | that can be created by having intelligence,
02:34:58.960 | even that is separate from that, it's a separate thing.
02:35:02.480 | - So one of the features of being human
02:35:04.600 | is that we're mortal, we die.
02:35:08.280 | We've talked about AI a lot,
02:35:10.440 | about potentially replicas of ourselves.
02:35:12.960 | Do you think there'll be AI replicas of you and me
02:35:16.280 | that persist long after we're gone,
02:35:18.840 | that family and loved ones can talk to?
02:35:21.720 | - I think we'll have the capacity
02:35:26.060 | to do something like that.
02:35:27.620 | And I think one of the big questions
02:35:29.420 | that we've had to struggle with in the context
02:35:33.400 | of social networks is who gets to make that?
02:35:36.920 | And my answer to that,
02:35:40.840 | in the context of the work that we're doing
02:35:42.320 | is that that should be your choice.
02:35:43.920 | Right, I don't think anyone should be able to choose
02:35:46.360 | to make a Lex bot that people can choose to talk to
02:35:51.360 | and get to train that.
02:35:53.640 | And we have this precedent of making some of these calls
02:35:56.600 | where, I mean, someone can create a page
02:36:00.020 | for a Lex fan club, but you can't create a page
02:36:04.880 | and say that you're Lex, right?
02:36:07.080 | So similarly, I think,
02:36:11.260 | I mean, maybe, you know, someone
02:36:14.420 | should be able to make an AI that's a Lex admirer
02:36:17.500 | that someone can talk to,
02:36:18.420 | but I think it should ultimately be your call
02:36:21.180 | whether there is a Lex AI.
02:36:24.780 | - Well, I'm open sourcing the Lex.
02:36:29.140 | So you're a man of faith.
02:36:31.820 | What role has faith played in your life
02:36:34.260 | and your understanding of the world
02:36:35.620 | and your understanding of your own life
02:36:37.900 | and your understanding of your work
02:36:41.140 | and how your work impacts the world?
02:36:44.500 | - Yeah, I think that there's a few different parts
02:36:48.500 | of this that are relevant.
02:36:49.800 | There's sort of a philosophical part
02:36:53.540 | and there's a cultural part.
02:36:55.540 | And one of the most basic lessons
02:36:58.660 | is right at the beginning of Genesis, right?
02:37:01.060 | It's like God creates the earth and creates people
02:37:04.460 | and creates people in God's image.
02:37:06.920 | And there's the question of, you know, what does that mean?
02:37:09.940 | And the only context that you have about God
02:37:12.180 | at that point in the Old Testament
02:37:13.300 | is that God has created things.
02:37:15.620 | So I always thought that like one of the interesting lessons
02:37:18.380 | from that is that there's a virtue in creating things,
02:37:24.340 | whether it's artistic
02:37:27.040 | or whether you're building things
02:37:29.980 | that are functionally useful for other people.
02:37:32.280 | I think that that by itself is a good.
02:37:39.380 | And that kind of drives a lot of how I think about morality
02:37:44.500 | and my personal philosophy around like,
02:37:49.300 | what is a good life, right?
02:37:51.620 | I think it's one where you're helping the people around you
02:37:56.620 | and you're being a kind of positive creative force
02:38:01.620 | in the world that is helping to bring new things
02:38:05.200 | into the world, whether they're amazing other people, kids,
02:38:09.520 | or just leading to the creation of different things
02:38:14.520 | that wouldn't have been possible otherwise.
02:38:17.140 | And so that's a value for me that matters deeply.
02:38:20.560 | And I just, I mean, I just love spending time with the kids
02:38:24.020 | and seeing that they sort of,
02:38:25.700 | trying to impart this value to them.
02:38:27.740 | And it's like, I mean, nothing makes me happier
02:38:31.540 | than like when I come home from work
02:38:33.980 | and I see like my daughter's like building Legos
02:38:38.060 | on the table or something.
02:38:38.980 | It's like, all right, I did that when I was a kid, right?
02:38:41.520 | So many other people are doing this.
02:38:43.220 | And like, I hope you don't lose that spirit
02:38:45.300 | where when you kind of grow up
02:38:46.860 | and you wanna just continue building different things
02:38:49.780 | no matter what it is, to me, that's a lot of what matters.
02:38:54.280 | That's the philosophical piece.
02:38:56.120 | I think the cultural piece is just about community
02:38:58.460 | and values and that part of things I think
02:39:01.300 | has just become a lot more important to me
02:39:02.740 | since I've had kids.
02:39:04.000 | You know, it's almost autopilot when you're a kid,
02:39:08.020 | you're in the kind of getting imparted to phase of your life.
02:39:10.980 | But, and I didn't really think about religion
02:39:14.220 | that much for a while.
02:39:16.380 | You know, I was in college, you know, before I had kids.
02:39:20.980 | And then I think having kids has this way
02:39:23.500 | of really making you think about what traditions
02:39:26.400 | you wanna impart and how you wanna celebrate
02:39:29.980 | and like what balance you want in your life.
02:39:34.460 | And I mean, a bunch of the questions that you've asked
02:39:37.300 | and a bunch of the things that we're talking about.
02:39:40.540 | - Just the irony of the curtains coming down
02:39:44.700 | as we're talking about mortality.
02:39:46.780 | Once again, same as last time.
02:39:49.380 | This is just how the universe works
02:39:52.540 | and we are definitely living in a simulation,
02:39:55.060 | but go ahead.
02:39:56.260 | Community, tradition, and the values,
02:39:58.940 | the faith and religion is still--
02:40:00.580 | - A lot of the topics that we've talked about today
02:40:03.100 | are around how do you balance,
02:40:08.100 | you know, whether it's running a company
02:40:11.060 | or different responsibilities with this,
02:40:14.060 | how do you kind of balance that?
02:40:18.900 | And I always also just think that it's very grounding
02:40:21.860 | to just believe that there is something
02:40:25.900 | that is much bigger than you that is guiding things.
02:40:28.500 | - That amongst other things gives you a bit of humility.
02:40:34.400 | As you pursue that spirit of creating
02:40:41.500 | that you spoke to, creating beauty in the world,
02:40:43.980 | and as Dostoevsky said, beauty will save the world.
02:40:47.340 | Mark, I'm a huge fan of yours.
02:40:50.500 | Honored to be able to call you a friend
02:40:53.220 | and I am looking forward to both kicking your ass
02:40:57.980 | and you kicking my ass on the mat tomorrow in jiu-jitsu,
02:41:02.020 | this incredible sport and art that we both participate in.
02:41:07.020 | Thank you so much for talking today.
02:41:08.140 | Thank you for everything you're doing
02:41:09.460 | in so many exciting realms of technology and human life.
02:41:14.020 | I can't wait to talk to you again in the metaverse.
02:41:16.060 | - Thank you.
02:41:16.900 | - Thanks for listening to this conversation
02:41:19.620 | with Mark Zuckerberg.
02:41:20.960 | To support this podcast,
02:41:22.100 | please check out our sponsors in the description.
02:41:25.100 | And now let me leave you with some words from Isaac Asimov.
02:41:29.100 | It is change, continuing change, inevitable change
02:41:33.620 | that is the dominant factor in society today.
02:41:36.120 | No sensible decision can be made any longer
02:41:40.040 | without taking into account not only the world as it is,
02:41:43.700 | but the world as it will be.
02:41:45.720 | Thank you for listening and hope to see you next time.
02:41:49.780 | (upbeat music)