
Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419


Chapters

0:00 Introduction
1:05 OpenAI board saga
18:31 Ilya Sutskever
24:40 Elon Musk lawsuit
34:32 Sora
44:23 GPT-4
55:32 Memory & privacy
62:36 Q*
66:12 GPT-5
69:27 $7 trillion of compute
77:35 Google and Gemini
88:40 Leap to GPT-5
92:24 AGI
110:57 Aliens


00:00:00.000 | I think compute is going to be the currency of the future. I think it will be maybe the
00:00:03.880 | most precious commodity in the world. I expect that by the end of this decade, and possibly
00:00:14.260 | somewhat sooner than that, we will have quite capable systems that we look at and say, "Wow,
00:00:19.760 | that's really remarkable." The road to AGI should be a giant power struggle. I expect
00:00:25.160 | that to be the case.
00:00:26.160 | Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power?
00:00:36.660 | The following is a conversation with Sam Altman, his second time in the podcast. He is the
00:00:42.280 | CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very
00:00:50.760 | company that will build AGI. This is the Lex Fridman Podcast. To support it, please check
00:00:58.880 | out our sponsors in the description. And now, dear friends, here's Sam Altman.
00:01:05.680 | Take me through the OpenAI board saga that started on Thursday, November 16th, maybe
00:01:11.240 | Friday, November 17th for you.
00:01:13.720 | That was definitely the most painful professional experience of my life. Chaotic and shameful
00:01:24.440 | and upsetting and a bunch of other negative things. There were great things about it too,
00:01:31.760 | and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate
00:01:39.360 | them at the time. But I came across this old tweet of mine, or this tweet of mine from
00:01:48.040 | that time period, which was like, it was like, you know, kind of going to your own eulogy,
00:01:51.880 | watching people say all these great things about you and just like unbelievable support
00:01:57.280 | from people I love and care about. That was really nice.
00:02:04.560 | That whole weekend, with one big exception, I felt like a great deal of love and very
00:02:13.640 | little hate, even though it felt like I have no idea what's happening and what's going
00:02:21.720 | to happen here, and this feels really bad. There were definitely times I thought it was
00:02:25.840 | going to be one of the worst things to ever happen for AI safety. Well, I also think I'm
00:02:31.400 | happy that it happened relatively early. I thought at some point between when OpenAI
00:02:37.760 | started and when we created AGI, there was going to be something crazy and explosive
00:02:43.080 | that happened, but there may be more crazy and explosive things still to happen. It still,
00:02:51.240 | I think, helped us build up some resilience and be ready for more challenges in the future.
00:03:02.280 | But the thing you had a sense that you would experience is some kind of power struggle.
00:03:08.720 | The road to AGI should be a giant power struggle. The world should. Well, not should, I expect
00:03:15.920 | that to be the case. And so, you have to go through that, like you said, iterate as often
00:03:22.240 | as possible in figuring out how to have a board structure, how to have organization,
00:03:28.000 | how to have the kind of people that you're working with, how to communicate all that
00:03:32.760 | in order to deescalate the power struggle as much as possible, pacify it. But at this
00:03:39.640 | point it feels like something that was in the past that was really unpleasant and really
00:03:49.720 | difficult and painful, but we're back to work and things are so busy and so intense that
00:03:57.120 | I don't spend a lot of time thinking about it.
00:04:00.680 | There was a time after, there was this fugue state for the month after, maybe 45 days after,
00:04:09.880 | that I was just sort of drifting through the days. I was so out of it. I was feeling so
00:04:16.920 | down.
00:04:17.920 | Just at a personal psychological level.
00:04:20.240 | Yeah. Really painful. And hard to have to keep running OpenAI in the middle of that.
00:04:28.160 | I just wanted to crawl into a cave and kind of recover for a while. But now it's like
00:04:34.040 | we're just back to working on the mission.
00:04:37.880 | Well, it's still useful to go back there and reflect on board structures, on power dynamics,
00:04:48.860 | on how companies are run, the tension between research and product development and money
00:04:55.480 | and all this kind of stuff, so that you, who have a very high potential of building AGI,
00:05:02.800 | would do so in a slightly more organized, less dramatic way in the future. So there's
00:05:08.200 | value there to go, both the personal psychological aspects of you as a leader and also just the
00:05:15.360 | board structure and all this kind of messy stuff.
00:05:18.480 | We learned a lot about structure and incentives and what we need out of a board. I think it
00:05:30.320 | is valuable that this happened now in some sense. I think this is probably not the last
00:05:37.560 | high-stress moment of OpenAI, but it was quite a high-stress moment. My company very
00:05:41.760 | nearly got destroyed. We think a lot about many of the other things we've got to get
00:05:49.100 | right for AGI, but thinking about how to build a resilient org and how to build a structure
00:05:54.720 | that will stand up to a lot of pressure in the world, which I expect more and more as
00:05:58.920 | we get closer, I think that's super important.
00:06:01.120 | Do you have a sense of how deep and rigorous the deliberation process by the board was?
00:06:07.520 | Can you shine some light on just human dynamics involved in situations like this? Was it just
00:06:13.320 | a few conversations and all of a sudden it escalates and why don't we fire Sam kind of
00:06:18.400 | thing?
00:06:19.400 | I think the board members are well-meaning people on the whole. I believe that in stressful
00:06:37.520 | situations where people feel time pressure or whatever, people understandably make sub-optimal
00:06:49.080 | decisions. I think one of the challenges for OpenAI will be we're going to have to have
00:06:55.320 | a board and a team that are good at operating under pressure.
00:06:59.920 | Do you think the board had too much power?
00:07:02.960 | I think boards are supposed to have a lot of power, but one of the things that we did
00:07:08.440 | see is in most corporate structures, boards are usually answerable to shareholders. Sometimes
00:07:15.160 | people have super voting shares or whatever. In this case, and I think one of the things
00:07:20.040 | with our structure that we maybe should have thought about more than we did, is that the
00:07:26.000 | board of a nonprofit has, unless you put other rules in place, quite a lot of power. They
00:07:32.720 | don't really answer to anyone but themselves. There's ways in which that's good, but what
00:07:37.440 | we'd really like is for the board of OpenAI to answer to the world as a whole as much
00:07:42.160 | as that's a practical thing.
00:07:44.080 | So, there's a new board announced?
00:07:46.680 | Yeah.
00:07:47.680 | There's, I guess, a new smaller board at first and now there's a new final board.
00:07:53.840 | Not a final board yet. We've added some. We'll add more.
00:07:56.600 | Okay. What is fixed in the new one that was perhaps broken in the previous one?
00:08:05.960 | The old board sort of got smaller over the course of about a year. It was nine and then
00:08:11.080 | it went down to six. Then we couldn't agree on who to add. The board also, I think, didn't
00:08:20.040 | have a lot of experienced board members. A lot of the new board members at OpenAI
00:08:25.280 | just have more experience as board members. I think that'll help.
00:08:31.480 | It's been criticized, some of the people that are added to the board. I heard a lot of people
00:08:36.000 | criticizing the addition of Larry Summers, for example. What's the process of selecting
00:08:40.840 | the board like? What's involved in that?
00:08:43.160 | So, Bret and Larry were kind of decided in the heat of the moment over this very tense
00:08:48.680 | weekend. That weekend was like a real roller coaster. It was like a lot of ups and downs.
00:08:56.760 | We were trying to agree on new board members that both sort of the executive team here
00:09:05.360 | and the old board members felt would be reasonable. Larry was actually one of their suggestions,
00:09:11.680 | the old board members. Bret, I think, I had even previous to that weekend suggested, but
00:09:17.640 | he was busy and didn't want to do it. Then we really needed help and he would. We talked about
00:09:22.600 | a lot of other people too, but I felt like if I was going to come back, I needed new
00:09:32.440 | board members. I didn't think I could work with the old board again in the same configuration.
00:09:39.640 | We then decided, and I'm grateful that Adam would stay, but we considered various configurations
00:09:48.120 | and decided we wanted to get to a board of three and had to find two new board members
00:09:54.000 | over the course of sort of a short period of time. Those were decided honestly without
00:09:59.240 | – that's like you kind of do that on the battlefield. You don't have time to design
00:10:03.640 | a rigorous process then. For new board members since, and new board members we'll add going forward,
00:10:11.260 | we have some criteria that we think are important for the board to have, different expertise
00:10:17.640 | that we want the board to have. Unlike hiring an executive where you need them to do one
00:10:22.560 | role well, the board needs to do a whole role of kind of governance and thoughtfulness well.
00:10:30.880 | One thing that Bret says, which I really like, is that we want to hire board members
00:10:34.160 | in slates, not as individuals one at a time. Thinking about a group of people that will
00:10:40.080 | bring nonprofit expertise, expertise in running companies, sort of good legal and governance
00:10:46.240 | expertise, that's kind of what we've tried to optimize for.
00:10:48.920 | So is technical savvy important for the individual board members?
00:10:52.200 | Not for every board member, but for certainly some you need that. That's part of what
00:10:55.520 | the board needs to do.
00:10:56.560 | So, I mean, the interesting thing that people probably don't understand about OpenAI, I
00:11:00.680 | certainly don't, is like all the details of running the business. When they think about
00:11:04.640 | the board, given the drama, they think about you, they think about like, if you reach AGI
00:11:11.200 | or you reach some of these incredibly impactful products and you build them and deploy them,
00:11:16.200 | what's the conversation with the board like? And they kind of think, all right, what's
00:11:21.000 | the right squad to have in that kind of situation to deliberate?
00:11:25.080 | Look, I think you definitely need some technical experts there. And then you need some people
00:11:29.840 | who are like, how can we deploy this in a way that will help people in the world the
00:11:36.420 | most and people who have a very different perspective? I think a mistake that you or
00:11:41.840 | I might make is to think that only the technical understanding matters. And that's definitely
00:11:46.320 | part of the conversation you want that board to have, but there's a lot more about how
00:11:49.960 | that's going to just like impact society and people's lives that you really want represented
00:11:54.240 | in there too.
00:11:55.240 | And you're just kind of, are you looking at the track record of people or you're just
00:11:59.760 | having conversations?
00:12:01.000 | Track record is a big deal. You of course have a lot of conversations, but I, you know,
00:12:08.760 | there's some roles where I kind of totally ignore track record and just look at slope,
00:12:16.600 | kind of ignore the Y-intercept.
00:12:17.600 | Thank you. Thank you for making it mathematical for the audience.
00:12:21.560 | For a board member, like I do care much more about the Y-intercept. Like I think there
00:12:25.440 | is something deep to say about track record there. And experience is sometimes very hard
00:12:31.280 | to replace.
00:12:32.280 | Do you try to fit a polynomial function or exponential one to the track record?
00:12:36.720 | That's not... the analogy doesn't carry that far.
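As a rough numerical illustration of the slope-versus-Y-intercept framing above, here is a minimal sketch that fits a line to two entirely made-up candidate track records; the "impact scores" and the candidates are hypothetical, not anything from the conversation.

```python
# Illustrative only: two hypothetical candidates' track records as made-up yearly
# impact scores. np.polyfit(x, y, 1) returns (slope, intercept) of a least-squares
# line, which is the slope / Y-intercept framing used in the conversation.
import numpy as np

years = np.arange(5)                             # years 0..4 of a career
veteran = np.array([8.0, 8.2, 8.3, 8.5, 8.6])    # high intercept, shallow slope
newcomer = np.array([1.0, 2.5, 4.2, 6.0, 8.1])   # low intercept, steep slope

for name, record in [("veteran", veteran), ("newcomer", newcomer)]:
    slope, intercept = np.polyfit(years, record, 1)
    print(f"{name}: slope={slope:.2f}, intercept={intercept:.2f}")

# "Ignore the Y-intercept, look at slope" favors the newcomer for some roles;
# for a board seat, the point above is that the intercept (accumulated
# experience) matters more.
```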
00:12:39.280 | All right. You mentioned some of the low points that weekend. What were some of the low points
00:12:45.600 | psychologically for you? Did you consider going to the Amazon jungle and just taking
00:12:51.420 | an ayahuasca and disappearing forever or?
00:12:53.840 | I mean, there's so many low, like it was a very bad period of time. There were great
00:12:58.920 | high points too. Like my phone was just like sort of nonstop blowing up with nice messages
00:13:05.960 | from people I work with every day, people I hadn't talked to in a decade. I didn't get
00:13:09.720 | to like appreciate that as much as I should have because I was just like in the middle
00:13:12.680 | of this firefight, but that was really nice. But on the whole, it was like a very painful
00:13:17.240 | weekend and also just like a very, it was like a battle fought in public to a surprising
00:13:25.840 | degree. And that's, that was extremely exhausting to me much more than I expected. I think fights
00:13:31.320 | are generally exhausting, but this one really was, you know, the board did this Friday afternoon.
00:13:39.100 | I really couldn't get much in the way of answers, but I also was just like, well, the board
00:13:44.320 | gets to do this. And so I'm going to think for a little bit about what I want to do,
00:13:49.240 | but I'll try to find the blessing in disguise here. And I was like, well, I, you know, my
00:13:56.560 | current job at OpenAI is or it was like to like run a decently sized company at this
00:14:01.960 | point. And the thing I had always liked the most was just getting to like work on, work
00:14:05.760 | with the researchers. And I was like, yeah, I can just go do like a very focused AGI research
00:14:10.000 | effort. And I got excited about that. It didn't even occur to me at the time to like, possibly
00:14:15.760 | that this was all going to get undone. This was like Friday afternoon.
00:14:18.560 | So you've accepted your, the death of this previous-
00:14:22.280 | Very quickly, very quickly. Like within, you know, I mean, I went through like a little
00:14:26.360 | period of confusion and rage, but very quickly. And by Friday night, I was like talking to
00:14:30.360 | people about what was going to be next. And I was excited about that. I think it was Friday
00:14:38.920 | night evening for the first time that I heard from the exec team here, which was like, hey,
00:14:42.840 | we're going to like fight this. And, you know, we think, well, whatever. And then I went
00:14:48.600 | to bed just still being like, okay, excited. Like onward, were you able to sleep? Not a
00:14:54.160 | lot. It was one of the weird things was, it was this like period of four and a half days
00:14:59.160 | where sort of didn't sleep much, didn't eat much and still kind of had like a surprising
00:15:04.640 | amount of energy. It was, you learn like a weird thing about adrenaline in wartime.
00:15:09.520 | So you kind of accepted the death of, you know, this baby, OpenAI.
00:15:13.080 | And I was excited for the new thing. I was just like, okay, this was crazy, but whatever.
00:15:16.760 | It's a very good coping mechanism.
00:15:18.560 | And then Saturday morning, two of the board members called and said, hey, we, you know,
00:15:22.600 | destabilize, we didn't mean to destabilize things. We don't want to destroy a lot of
00:15:25.600 | value here. You know, can we talk about you coming back? And I immediately didn't want
00:15:31.520 | to do that, but I thought a little more and I was like, well, I do really care about
00:15:36.080 | the people here, the partners, shareholders, like all of the... I love this company.
00:15:41.400 | And so I thought about it and I was like, well, okay, but like, here's, here's the stuff
00:15:44.080 | I would need. And, and then the most painful time of all was over the course of that weekend.
00:15:52.600 | I kept thinking and being told, and we all kept, not just me, like the whole team here
00:15:57.280 | kept thinking, well, we were trying to like keep OpenAI stabilized while the whole world
00:16:01.980 | was trying to break it apart, people trying to recruit, whatever.
00:16:04.460 | We kept being told like, all right, we're almost done. We're almost done. We just need
00:16:06.880 | like a little bit more time. And it was this like very confusing state. And then Sunday
00:16:12.440 | evening, when again, like every few hours I expected that we were going to be done and
00:16:17.180 | we're going to like figure out a way for me to return and things to go back to how they
00:16:21.880 | were, the board then appointed a new interim CEO. And then I was like, I mean, that is,
00:16:29.520 | that is, that feels really bad. That was the low point of the whole thing.
00:16:36.320 | You know, I'll tell you something. I, it felt very painful, but I felt a lot of love that
00:16:42.840 | whole weekend. It was not other than that one moment, Sunday night, I would not characterize
00:16:48.520 | my emotions as anger or hate. But I really just like, I felt a lot of love from people
00:16:56.720 | towards people. It was like painful, but it would like the dominant emotion of the weekend
00:17:02.640 | was love, not hate.
00:17:04.240 | You've spoken highly of Mira Murati, that she helped, especially as you put in a tweet, in
00:17:09.600 | the quiet moments when it counts, perhaps we could take a bit of a tangent. What do
00:17:14.400 | you admire about Mira?
00:17:15.800 | Well, she did a great job during that weekend in a lot of chaos, but, but people often see
00:17:22.240 | leaders in the moment in like the crisis moments, good or bad. But a thing I really value in
00:17:29.000 | leaders is how people act on a boring Tuesday at 9:46 in the morning. And in, in just sort
00:17:36.760 | of the, the, the normal drudgery of the day to day, how someone shows up in a meeting,
00:17:43.260 | the quality of the decisions they make. That was what I meant about the quiet moments.
00:17:48.160 | Meaning like most of the work is done on a day by day in a meeting by meeting just, just
00:17:55.840 | be present and, and make great decisions.
00:17:58.400 | Yeah. I mean, look, what you have wanted to spend the last 20 minutes about,
00:18:02.160 | and I understand, is like this one very dramatic weekend. Yeah. But that's not really what
00:18:07.240 | OpenAI is about. OpenAI is really about the other seven years.
00:18:10.600 | Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany,
00:18:16.160 | but still that's something people focus on very, very understandable. It gives us an
00:18:21.020 | insight into human nature, the extremes of human nature, and perhaps some of the damage
00:18:25.540 | and some of the triumphs of human civilization can happen in those moments. So it's like
00:18:29.620 | illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear
00:18:36.460 | facility?
00:18:38.460 | What about a regular secret facility?
00:18:40.460 | A regular nuclear non-secret facility.
00:18:41.460 | Neither of them. Not that either.
00:18:42.460 | I mean, it's becoming a meme at some point. You've known Ilya for a long time. He was
00:18:48.460 | obviously in part of this drama with the board and all that kind of stuff. What's your relationship
00:18:54.860 | with him now?
00:18:57.060 | I love Ilya. I have tremendous respect for Ilya. I don't have anything I can like say
00:19:02.720 | about his plans right now. That's a question for him. But I really hope we work together
00:19:08.220 | for certainly the rest of my career. He's a little bit younger than me. Maybe he works
00:19:13.540 | a little bit longer.
00:19:14.540 | There's a meme that he saw something. He maybe saw AGI and that gave him a lot of worry internally.
00:19:25.220 | What did Ilya see?
00:19:28.860 | Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of
00:19:37.740 | the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly
00:19:47.340 | speaking, including things like the impact this is going to have on society, very seriously.
00:19:55.780 | As we continue to make significant progress, Ilya is one of the people that I spent the
00:20:01.140 | most time over the last couple of years talking about what this is going to mean, what we
00:20:06.940 | need to do to ensure we get it right, to ensure that we succeed at the mission. Ilya did not
00:20:14.620 | see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries
00:20:27.380 | about making sure we get this right.
00:20:29.940 | I had a bunch of conversations with him in the past. I think when he talks about technology,
00:20:34.460 | he's always doing this long-term thinking type of thing. He's not thinking about what
00:20:39.460 | this is going to be in a year. He's thinking about it in 10 years. Just thinking from first
00:20:43.900 | principles like, "Okay, if this scales, what are the fundamentals here? Where is this going?"
00:20:51.140 | That's a foundation for then thinking about all the other safety concerns and all that
00:20:55.980 | kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why
00:21:03.740 | he's been quiet? Is it he's just doing some soul searching?
00:21:07.700 | Again, I don't want to speak for Ilya. I think that you should ask him that. He's definitely
00:21:18.420 | a thoughtful guy. I think Ilya is always on a soul search in a really good way.
00:21:26.660 | Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy,
00:21:33.980 | which I've never seen that side of him.
00:21:36.180 | It's very sweet when that happens.
00:21:37.300 | I've never witnessed a silly Ilya, but I look forward to that as well.
00:21:43.860 | I was at a dinner party with him recently, and he was playing with a puppy. And he was
00:21:47.740 | in a very silly mood, very endearing. And I was thinking like, "Oh, man, this is not the
00:21:52.220 | side of Ilya that the world sees the most."
00:21:54.300 | So just to wrap up this whole saga, are you feeling good about the board structure,
00:22:00.860 | about all of this, and where it's moving?
00:22:03.540 | I feel great about the new board. In terms of the structure of OpenAI, one of the board's
00:22:08.780 | tasks is to look at that and see where we can make it more robust. We wanted to get
00:22:13.980 | new board members in place first, but we clearly learned a lesson about structure
00:22:19.900 | throughout this process. I don't have, I think, super deep things to say. It was a crazy,
00:22:25.780 | very painful experience. I think it was like a perfect storm of weirdness. It was like
00:22:30.220 | a preview for me of what's going to happen as the stakes get higher and higher, and the
00:22:34.060 | need that we have robust governance structures and processes and people. I am kind of happy
00:22:40.780 | it happened when it did, but it was a shockingly painful thing to go through.
00:22:46.620 | Did it make you be more hesitant in trusting people?
00:22:51.020 | Just on a personal level?
00:22:52.380 | Yes. I think I'm an extremely trusting person. I've always had a life philosophy of don't
00:22:57.500 | worry about all of the paranoia, don't worry about the edge cases. You get a little bit
00:23:02.260 | screwed in exchange for getting to live with your guard down. This was so shocking to me,
00:23:09.380 | I was so caught off guard, that it has definitely changed, and I really don't like this. It's
00:23:15.500 | definitely changed how I think about just default trust of people and planning for the
00:23:20.420 | bad scenarios.
00:23:21.180 | You got to be careful with that. Are you worried about becoming a little too cynical?
00:23:26.580 | I'm not worried about becoming too cynical. I think I'm the extreme opposite of a cynical
00:23:30.180 | person, but I'm worried about just becoming less of a default trusting person.
00:23:35.380 | I'm actually not sure which mode is best to operate in for a person who's developing AGI.
00:23:43.020 | Trusting or untrusting. It's an interesting journey you're on. But in terms of structure,
00:23:49.660 | see, I'm more interested on the human level. How do you surround yourself with humans that
00:23:54.740 | are building cool shit, but also are making wise decisions? Because the more money you
00:24:01.340 | start making, the more power the thing has, the weirder people get.
00:24:04.540 | I think you could make all kinds of comments about the board members and the level of trust
00:24:12.860 | I should have had there, or how I should have done things differently. But in terms of the
00:24:17.140 | team here, I think you'd have to give me a very good grade on that one. I have just enormous
00:24:25.740 | gratitude and trust and respect for the people that I work with every day. I think being
00:24:30.620 | surrounded with people like that is really important.
00:24:40.020 | Our mutual friend Elon sued OpenAI. What is the essence of what he's criticizing? To what
00:24:48.780 | degree does he have a point? To what degree is he wrong?
00:24:52.380 | I don't know what it's really about. We started off just thinking we were going to be a research
00:24:57.660 | lab and having no idea about how this technology was going to go. Because it was only seven
00:25:04.540 | or eight years ago, it's hard to go back and really remember what it was like then. But
00:25:08.500 | before language models were a big deal, this was before we had any idea about an API or
00:25:12.900 | selling access to a chatbot. This was before we had any idea we were going to productize
00:25:16.980 | at all.
00:25:17.980 | So we're just going to try to do research, and we don't really know what we're going
00:25:22.060 | to do with that. I think with many fundamentally new things, you start fumbling through the
00:25:27.260 | dark and you make some assumptions, most of which turn out to be wrong. Then it became
00:25:33.580 | clear that we were going to need to do different things and also have huge amounts
00:25:42.460 | more capital. So we said, "Okay, well, the structure doesn't quite work for that. How
00:25:46.700 | do we patch the structure?" And then patch it again and patch it again, and you end up
00:25:51.260 | with something that does look kind of eyebrow-raising, to say the least. But we got here gradually
00:25:57.900 | with, I think, reasonable decisions at each point along the way. And it doesn't mean
00:26:03.460 | I wouldn't do it totally differently if we could go back now with an oracle, but you
00:26:06.540 | don't get the oracle at the time. But anyway, in terms of what Elon's real motivations
00:26:10.340 | here are, I don't know.
00:26:12.100 | To the degree you remember, what was the response that OpenAI gave in the blog post?
00:26:19.140 | Can you summarize it?
00:26:21.220 | Oh, Elon said this set of things. Here's the characterization of how this went down.
00:26:35.420 | We tried to not make it emotional and just say, "Here's the history."
00:26:43.620 | I do think there's a degree of mischaracterization from Elon here about one of the points you
00:26:54.540 | just made, which is the degree of uncertainty you had at the time. You guys are a bunch
00:27:00.780 | of like a small group of researchers crazily talking about AGI when everybody's laughing
00:27:07.420 | at that thought.
00:27:08.420 | Wasn't that long ago Elon was crazily talking about launching rockets when people were laughing
00:27:14.060 | at that thought? So I think he'd have more empathy for this.
00:27:20.260 | I mean, I do think that there's personal stuff here, that there was a split, that OpenAI
00:27:28.380 | and a lot of amazing people here chose to part ways with Elon. So there's a personal--
00:27:33.940 | Elon chose to part ways.
00:27:36.180 | Can you describe that exactly, the choosing to part ways?
00:27:41.500 | He thought OpenAI was going to fail. He wanted total control to sort of turn it around.
00:27:46.580 | We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla
00:27:50.940 | to be able to build an AGI effort. At various times he wanted to make OpenAI into a for-profit
00:27:57.580 | company that he could have control of or have it merge with Tesla. We didn't want to do
00:28:02.340 | that and he decided to leave, which that's fine.
00:28:06.540 | So you're saying, and that's one of the things that the blog post says, is that he wanted
00:28:11.380 | OpenAI to be basically acquired by Tesla in the same way that-- Or maybe something similar
00:28:19.620 | or maybe something more dramatic than the partnership with Microsoft.
00:28:23.100 | My memory is the proposal was just like, "Yeah, get acquired by Tesla and have Tesla have
00:28:27.340 | full control over it." I'm pretty sure that's what it was.
00:28:29.620 | So what does the word open in OpenAI mean? To Elon at the time, Ilya has talked about this
00:28:37.980 | in the email exchanges and all this kind of stuff. What does it mean to you at the time?
00:28:41.900 | What does it mean to you now?
00:28:42.900 | I would definitely pick a different-- Speaking of going back with an Oracle, I'd pick a different
00:28:46.540 | name. One of the things that I think OpenAI is doing that is the most important of everything
00:28:53.260 | that we're doing is putting powerful technology in the hands of people for free as a public
00:29:01.860 | good. We don't run ads on our free version. We don't monetize it in other ways. We just
00:29:08.380 | say as part of our mission, we want to put increasingly powerful tools in the hands of
00:29:12.780 | people for free and get them to use them.
00:29:15.940 | I think that kind of open is really important to our mission. I think if you give people
00:29:23.060 | great tools and teach them to use them or don't even teach them, they'll figure it out
00:29:26.420 | and let them go build an incredible future for each other with that, that's a big deal.
00:29:31.940 | So if we can keep putting free or low cost or free and low cost powerful AI tools out
00:29:37.660 | in the world, I think that's a huge deal for how we fulfill the mission.
00:29:43.860 | Open source or not, yeah, I think we should open source some stuff and not other stuff.
00:29:49.380 | It does become this religious battle line where nuance is hard to have, but I think
00:29:53.340 | nuance is the right answer.
00:29:55.540 | So he said, change your name to closed AI and I'll drop the lawsuit. I mean, is it going
00:30:00.340 | to become this battleground in the land of memes about the name?
00:30:06.180 | I think that speaks to the seriousness with which Elon means the lawsuit. I mean, that's
00:30:18.380 | an astonishing thing to say, I think.
00:30:21.580 | I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit
00:30:26.620 | is legally serious. It's more to make a point about the future of AGI and the company that's
00:30:32.300 | currently leading the way.
00:30:36.180 | So look, I mean, Grok had not open sourced anything until people pointed out it was a
00:30:41.700 | little bit hypocritical and then he announced that Grok will open source things this week.
00:30:45.740 | I don't think open source versus not is what this is really about for him.
00:30:48.860 | Well, we'll talk about open source and not. I do think maybe criticizing the competition
00:30:53.060 | is great, just talking a little shit, that's great, but friendly competition versus like,
00:30:58.500 | I personally hate lawsuits.
00:31:00.700 | Look, I think this whole thing is like unbecoming of the builder and I respect Elon as one of
00:31:05.700 | the great builders of our time. I know he knows what it's like to have haters attack
00:31:15.220 | him and it makes me extra sad he's doing it to us.
00:31:17.700 | Yeah, he's one of the greatest builders of all time, potentially the greatest builder
00:31:21.660 | of all time.
00:31:22.660 | It makes me sad. I think it makes a lot of people sad. There's a lot of people who've
00:31:26.220 | really looked up to him for a long time and I said in some interview or something that
00:31:31.140 | I missed the old Elon and the number of messages I got being like, that exactly encapsulates
00:31:35.780 | how I feel.
00:31:36.780 | I think he should just win. He should just make Grok beat GPT and then GPT beats Grok
00:31:45.100 | and it's just a competition and it's beautiful for everybody. But on the question of open
00:31:50.620 | source, do you think there's a lot of companies playing with this idea? It's quite interesting.
00:31:55.540 | I would say meta, surprisingly, has led the way on this or at least took the first step
00:32:03.860 | in the game of chess of really open sourcing the model. Of course, it's not the state of
00:32:09.740 | the art model, but open sourcing Llama, and Google is flirting with the idea of open sourcing
00:32:16.900 | a smaller version. What are the pros and cons of open sourcing? Have you played around with
00:32:22.180 | this idea?
00:32:23.180 | I think there is definitely a place for open source models, particularly smaller models
00:32:27.300 | that people can run locally. I think there's huge demand for. I think there will be some
00:32:32.900 | open source models, there will be some closed source models. It won't be unlike other ecosystems
00:32:37.780 | in that way.
00:32:38.780 | I listened to an all-in podcast talking about this lawsuit and all that kind of stuff and
00:32:44.180 | they were more concerned about the precedent of going from non-profit to this cap for profit.
00:32:52.140 | What precedent does that set for other startups?
00:32:56.620 | I would heavily discourage any startup that was thinking about starting as a non-profit
00:33:01.060 | and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think
00:33:04.820 | we'll set a precedent here.
00:33:06.460 | Okay. Most startups should go just-
00:33:08.580 | For sure. Again, if we knew what was going to happen, we would have done that too.
00:33:12.740 | In theory, if you dance beautifully here, there's some tax incentives or whatever.
00:33:19.500 | I don't think that's how most people think about these things.
00:33:21.860 | So it's not possible to save a lot of money for a startup if you do it this way?
00:33:26.540 | No, I think there's laws that would make that pretty difficult.
00:33:30.700 | Where do you hope this goes with Elon? This tension, this dance, where do you hope this?
00:33:37.540 | If we go one, two, three years from now, your relationship with him on a personal level
00:33:44.140 | too, like friendship, friendly competition, just all this kind of stuff.
00:33:50.860 | Yeah, I'd really respect Elon. And I hope that years in the future, we have an amicable
00:34:03.500 | relationship.
00:34:04.500 | Yeah, I hope you guys have an amicable relationship this month. And just compete and win. And
00:34:13.340 | explore these ideas together. I do suppose there's competition for talent or whatever,
00:34:20.300 | but it should be friendly competition. Just build cool shit. And Elon is pretty good at
00:34:29.580 | building cool shit, but so are you.
00:34:32.980 | So speaking of cool shit, Sora, there's like a million questions I could ask. First of
00:34:40.300 | all, it's amazing. It truly is amazing. On a product level, but also just on a philosophical
00:34:45.700 | level. So let me just, technical/philosophical, ask, what do you think it understands about
00:34:52.780 | the world more or less than GPT-4, for example? The world model, when you train on these patches
00:35:01.580 | versus language tokens.
00:35:04.100 | I think all of these models understand something more about the world model than most of us
00:35:10.820 | give them credit for. And because there are also very clear things they just don't understand
00:35:16.580 | or don't get right, it's easy to look at the weaknesses, see through the veil and say,
00:35:21.460 | "Oh, this is all fake." But it's not all fake, it's just some of it works and some of it
00:35:26.340 | doesn't work. I remember when I started first watching Sora videos, and I would see a person
00:35:31.860 | walk in front of something for a few seconds and occlude it, and then walk away and the
00:35:35.540 | same thing was still there. I was like, "Oh, this is pretty good." Or there's examples
00:35:39.820 | where the underlying physics looks so well-represented over a lot of steps in a sequence. It's like,
00:35:47.100 | "Oh, this is quite impressive."
00:35:49.300 | But fundamentally, these models are just getting better, and that will keep happening. If you
00:35:54.740 | look at the trajectory from DALL-E 1 to 2 to 3 to Sora, there were a lot of people that
00:36:00.300 | dunked on each version, saying, "It can't do this, it can't do that," and look at it now.
00:36:05.540 | Well, the thing you just mentioned with occlusions, is it basically modeling the
00:36:12.340 | three-dimensional physics of the world sufficiently well to capture those kinds of
00:36:16.500 | things?
00:36:17.500 | Well...
00:36:18.500 | Or maybe you can tell me, in order to deal with occlusions, what does the world model
00:36:23.820 | need to...
00:36:24.820 | Yeah, so what I would say is it's doing something to deal with occlusions really well. Whether
00:36:28.060 | I'd represent that it has a great underlying 3D model of the world, it's a little bit more
00:36:32.700 | of a stretch.
00:36:33.700 | But can you get there through just these kinds of two-dimensional training data approaches?
00:36:39.060 | It looks like this approach is going to go surprisingly far. I don't want to speculate
00:36:42.660 | too much about what limits it will surmount and which it won't, but...
00:36:46.260 | What are some interesting limitations of the system that you've seen? I mean, there's been
00:36:50.560 | some fun ones you've posted.
00:36:52.060 | There's all kinds of fun. I mean, like, you know, cats sprouting an extra limb at random
00:36:56.460 | points in a video. Pick what you want, but there's still a lot of problems, a lot of
00:37:01.780 | weaknesses.
00:37:02.780 | Do you think that's a fundamental flaw of the approach? Or is it just, you know, bigger
00:37:09.220 | model or better, like, technical details or better data, more data, is going to solve
00:37:17.340 | the cats sprouting?
00:37:18.340 | I would say yes to both. Like, I think there is something about the approach which just
00:37:22.460 | seems to feel different from how we think and learn and whatever. And then also, I think
00:37:29.060 | it'll get better with scale.
00:37:30.700 | Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches. So it converts
00:37:35.860 | all visual data, diverse kinds of visual data, videos, and images into patches. Is the training
00:37:41.780 | to the degree you can say fully self-supervised? Or is there some manual labeling going on?
00:37:46.220 | Like, what's the involvement of humans in all this?
00:37:49.780 | I mean, without saying anything specific about the Sora approach, we use lots of human data
00:37:58.100 | in our work.
00:38:00.900 | But not internet-scale data. So lots of humans. Lots is a complicated word, Sam.
00:38:07.780 | I think lots is a fair word in this case.
00:38:11.740 | It doesn't, because to me, lots, like, listen, I'm an introvert, and when I hang out with
00:38:15.860 | like three people, that's a lot of people. And four people, that's a lot. But I suppose
00:38:20.460 | you mean more than...
00:38:21.900 | More than three people work on labeling the data for these models, yeah.
00:38:26.060 | But fundamentally, there's a lot of self-supervised learning. Because what you mentioned in the
00:38:32.300 | technical report is internet-scale data. That's another beautiful, it's like poetry. So it's
00:38:39.100 | a lot of data that's not human-labeled, it's like, it's self-supervised in that way. And
00:38:45.140 | then the question is, how much data is there on the internet that could be used in this,
00:38:50.780 | that is conducive to this kind of self-supervised way? If only we knew the details of the self-supervised.
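The exchange above describes Sora operating on visual patches the way language models operate on text tokens. As a rough, hypothetical sketch of what turning a video into flattened spacetime patches could look like: the patch sizes, shapes, and the whole pipeline below are illustrative assumptions, not OpenAI's published configuration.

```python
# Hypothetical sketch of "visual patches": split a video tensor into
# non-overlapping spacetime patches, the visual analogue of text tokens.
# Patch sizes and shapes are illustrative, not Sora's real settings.
import numpy as np

def patchify(video, pt=4, ph=16, pw=16):
    """video: (T, H, W, C) array -> (num_patches, pt*ph*pw*C) array."""
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dims must divide evenly"
    # Carve the video into a grid of (pt x ph x pw) blocks, then flatten each block.
    patches = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    patches = patches.transpose(0, 2, 4, 1, 3, 5, 6)
    return patches.reshape(-1, pt * ph * pw * C)

video = np.random.rand(16, 128, 128, 3)   # 16 frames of 128x128 RGB (dummy data)
tokens = patchify(video)
print(tokens.shape)                       # (4 * 8 * 8, 4*16*16*3) = (256, 3072)
```

In a real system these flattened patches would then be embedded and fed to some model backbone; none of those details are public for Sora, so this stops at the tokenization analogy.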
00:38:57.860 | Do you, have you considered opening it up a little more? Details?
00:39:02.660 | We have. You mean for Sora specifically?
00:39:04.620 | Sora specifically, because it's so interesting that, like, can this, can the same magic of
00:39:12.180 | LLMs now start moving towards visual data? And what does that take to do that?
00:39:17.700 | I mean, it looks to me like yes. But we have more work to do.
00:39:22.220 | Sure. What are the dangers? Why are you concerned about releasing the system? What are some
00:39:28.060 | possible dangers of this?
00:39:29.460 | I mean, frankly speaking, one thing we have to do before releasing the system is just
00:39:33.540 | like, get it to work at a level of efficiency that will deliver the scale people are gonna
00:39:40.300 | want from this. So I don't want to like downplay that. And there's still a ton, ton of work
00:39:45.980 | to do there. But, you know, you can imagine like issues with deep fakes, misinformation.
00:39:55.980 | Like we try to be a thoughtful company about what we put out into the world. And it doesn't
00:40:01.180 | take much thought to think about the ways this can go badly.
00:40:05.300 | There's a lot of tough questions here. You're dealing in a very tough space. Do you think
00:40:10.060 | training AI should be or is fair use under copyright law?
00:40:14.820 | I think the question behind that question is, do people who create valuable data deserve
00:40:19.220 | to have some way that they get compensated for use of it? And that I think the answer
00:40:23.740 | is yes. I don't know yet what the answer is. People have proposed a lot of different things.
00:40:29.700 | We've tried some different models. But, you know, if I'm like an artist, for example,
00:40:35.780 | A, I would like to be able to opt out of people generating art in my style. And B, if they
00:40:41.940 | do generate art in my style, I'd like to have some economic model associated with that.
00:40:46.180 | Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind
00:40:52.500 | of model.
00:40:53.500 | The model changes, but people have got to get paid.
00:40:55.060 | Well, there should be some kind of incentive, if we zoom out even more, for humans to keep
00:41:00.940 | doing cool shit.
00:41:02.580 | Something I worry about, humans are going to do cool shit, and society is going to find
00:41:05.820 | some way to reward it. That seems pretty hardwired. We want to create. We want to be useful. We
00:41:12.420 | want to achieve status in whatever way. That's not going anywhere, I don't think.
00:41:17.220 | But the reward might not be monetary, financial. It might be like fame and celebration of other
00:41:24.660 | cool people.
00:41:25.660 | Maybe financial in some other way. Again, I don't think we've seen the last evolution
00:41:29.100 | of how the economic system is going to work.
00:41:31.540 | Yeah, but artists and creators are worried. When they see Sora, they're like, "Holy shit."
00:41:36.700 | Sure. Artists were also super worried when photography came out. And then photography
00:41:41.500 | became a new art form, and people made a lot of money taking pictures. I think things like
00:41:46.500 | that will keep happening. People will use the new tools in new ways.
00:41:50.320 | If we just look on YouTube or something like this, how much of that will be using Sora-like
00:41:56.040 | AI-generated content, do you think, in the next five years?
00:42:01.780 | People talk about how many jobs they're going to do in five years. And the framework that
00:42:06.260 | people have is what percentage of current jobs are just going to be totally replaced
00:42:10.300 | by some AI doing the job. The way I think about it is not what percent of jobs AI will
00:42:16.100 | do, but what percent of tasks will AI do, and over what time horizon.
00:42:19.760 | So if you think of all of the five-second tasks in the economy, the five-minute tasks,
00:42:24.340 | the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think
00:42:30.820 | that's a way more interesting, impactful, important question than how many jobs AI can
00:42:37.180 | do. Because it is a tool that will work at increasing levels of sophistication and over
00:42:43.020 | longer and longer time horizons for more and more tasks, and let people operate at a higher
00:42:48.280 | level of abstraction. So maybe people are way more efficient at the job they do, and
00:42:53.100 | at some point, that's not just a quantitative change, but it's a qualitative one, too, about
00:42:57.740 | the kinds of problems you can keep in your head.
00:43:00.380 | I think that for videos on YouTube, it'll be the same. Many videos, maybe most of them,
00:43:06.300 | will use AI tools in the production, but they'll still be fundamentally driven by a person
00:43:12.100 | thinking about it, putting it together, doing parts of it, sort of directing and running it.
00:43:18.540 | That's interesting. I mean, it's scary, but it's interesting to think about. I tend to
00:43:22.540 | believe that humans like to watch other humans, or other human-like things.
00:43:26.940 | Humans really care about other humans a lot.
00:43:29.500 | Yeah. If there's a cooler thing that's better than a human, humans care about that for,
00:43:36.740 | like, two days, and then they go back to humans.
00:43:39.420 | That seems very deeply wired.
00:43:41.900 | It's the whole chess thing. Yeah, but no, let's, everybody keep playing chess. And let's
00:43:47.300 | ignore the elephant in the room that humans are really bad at chess, relative to AI systems.
00:43:52.140 | We still run races, and cars are much faster. I mean, there's like a lot of examples.
00:43:56.140 | Yeah. And maybe it'll just be tooling, like, in the Adobe suite type of way, where it can
00:44:02.660 | just make videos much easier, all that kind of stuff. Listen, I hate being in front of
00:44:08.940 | the camera. If I can figure out a way to not be in front of the camera, I would love it.
00:44:12.900 | Unfortunately, it'll take a while. Like that, generating faces, it's getting there. But
00:44:18.580 | generating faces in video format is tricky, when it's specific people versus generic people.
00:44:23.620 | Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking
00:44:34.020 | back, it'll probably be this kind of historic pivotal moment with 3.5 and 4, which had-
00:44:40.300 | Maybe 3.5 will be the pivotal moment, I don't know. Hard to say that looking forward.
00:44:44.620 | We never know. That's the annoying thing about the future. It's hard to predict. But for
00:44:48.460 | me, looking back, GPT-4, ChatGPT is pretty damn impressive. Like, historically impressive.
00:44:54.940 | So allow me to ask, what's been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo?
00:45:06.020 | I think it kind of sucks.
00:45:08.020 | Typical human also. Gotten used to an awesome thing.
00:45:11.340 | No, I think it is an amazing thing. But relative to where we need to get to, and where I believe
00:45:19.500 | we will get to, at the time of GPT-3, people were like, "Oh, this is amazing. This is this
00:45:27.860 | marvel of technology." And it is. It was. But now we have GPT-4, and look at GPT-3, that's
00:45:36.580 | unimaginably horrible. I expect that the delta between 5 and 4 will be the same as between
00:45:43.260 | 4 and 3. And I think it is our job to live a few years in the future and remember that
00:45:49.660 | the tools we have now are going to kind of suck looking backwards at them. And we make
00:45:57.780 | sure the future is better.
00:45:59.700 | What are the most glorious ways that GPT-4 sucks?
00:46:03.740 | What are the best things it can do?
00:46:06.380 | What are the best things it can do, and the limits of those best things that allow you
00:46:11.460 | to say it sucks, therefore gives you inspiration and hope for the future?
00:46:16.340 | You know, one thing I've been using it for more recently is sort of like a brainstorming
00:46:22.580 | partner.
00:46:23.580 | Yeah, I use it for that.
00:46:25.940 | There's a glimmer of something amazing in there. I don't think it gets – when people
00:46:32.060 | talk about what it does, they're like, "Oh, it helps me code more productively. It helps
00:46:36.420 | me write faster and better. It helps me translate from this language to another." All these
00:46:42.420 | amazing things. But there's something about the kind of creative brainstorming partner
00:46:51.620 | – I need to come up with a name for this thing. I need to think about this problem
00:46:55.120 | in a different way. I'm not sure what to do here – that I think gives a glimpse of something
00:47:00.500 | I hope to see more of.
00:47:03.420 | One of the other things that you can see a very small glimpse of is when it can help
00:47:10.460 | on longer-horizon tasks. You know, break down something into multiple steps, maybe execute
00:47:15.660 | some of those steps, search the internet, write code, whatever, put that together. When
00:47:20.660 | that works, which is not very often, it's very magical.
00:47:24.500 | The iterative back-and-forth with a human, it works a lot for me. What do you mean?
00:47:29.940 | Iterative back-and-forth with a human, it can get it right more often. When it can go do a 10-step
00:47:32.580 | problem on its own, it doesn't work for that too often. Sometimes.
00:47:37.140 | But multiple layers of abstraction, or do you mean just sequential?
00:47:40.580 | Both. To break it down and then do things at different layers of abstraction and put
00:47:45.500 | them together. Look, I don't want to downplay the accomplishment of GPT-4, but I don't
00:47:53.740 | want to overstate it either. I think this point that we are on an exponential curve,
00:47:57.860 | we will look back relatively soon at GPT-4, like we look back at GPT-3 now.
00:48:03.980 | That said, I mean, ChatGPT was the transition to where people started to believe it. There
00:48:13.060 | is an uptick of believing. Not internally at OpenAI, perhaps. There's believers here.
00:48:19.380 | In that sense, I do think it'll be a moment where a lot of the world went from not believing
00:48:23.140 | to believing. That was more about the ChatGPT interface, and by the interface and product,
00:48:30.620 | I also mean the post-training of the model, and how we tune it to be helpful to you, and
00:48:35.420 | how to use it, than the underlying model itself.
00:48:38.380 | How much of those two, each of those things are important? The underlying model and the
00:48:45.760 | RLHF, or something of that nature that tunes it to be more compelling to the human, more
00:48:53.180 | effective and productive for the human.
00:48:55.380 | I mean, they're both super important, but the RLHF, the post-training step, the little
00:49:01.180 | wrapper of things that, from a compute perspective, little wrapper of things that we do on top
00:49:06.380 | of the base model, even though it's a huge amount of work, that's really important, to
00:49:09.700 | say nothing of the product that we build around it.
00:49:16.100 | In some sense, we did have to do two things. We had to invent the underlying technology,
00:49:22.580 | and then we had to figure out how to make it into a product people would love, which
00:49:30.860 | is not just about the actual product work itself, but this whole other step of how you
00:49:35.460 | align it and make it useful.
00:49:37.180 | How you make the scale work, where a lot of people can use it at the same time, all that
00:49:42.380 | kind of stuff.
00:49:43.380 | And that. That was a known difficult thing. We knew we were going to have to scale it
00:49:48.940 | up. We had to go do two things that had never been done before, that were both, I would
00:49:53.940 | say, quite significant achievements, and then a lot of things like scaling it up that other
00:49:58.780 | companies have had to do before.
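Sam describes post-training (RLHF and its relatives) as a comparatively small wrapper of compute on top of the base model. As a toy sketch of one ingredient of that kind of step, here is a pairwise, Bradley-Terry-style preference loss fit on dummy feature vectors; it only illustrates the general reward-modeling idea and is not OpenAI's actual pipeline, data, or scale.

```python
# Toy illustration of one piece of post-training: fitting a linear reward model
# to pairwise human preferences with a Bradley-Terry style loss. Dummy data,
# not OpenAI's actual method.
import numpy as np

rng = np.random.default_rng(0)
dim = 8
w = np.zeros(dim)                                  # reward-model weights

# Each pair: features of a "chosen" response and a "rejected" response.
chosen = rng.normal(0.5, 1.0, size=(256, dim))     # pretend humans preferred these
rejected = rng.normal(0.0, 1.0, size=(256, dim))

lr = 0.1
for _ in range(200):
    margin = (chosen - rejected) @ w               # r(chosen) - r(rejected)
    p = 1.0 / (1.0 + np.exp(-margin))              # P(chosen preferred) under the model
    grad = ((p - 1.0)[:, None] * (chosen - rejected)).mean(axis=0)
    w -= lr * grad                                 # gradient step on -log p

print("mean preference probability:", p.mean())    # rises toward 1 as the fit improves
```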
00:50:01.540 | How does the context window of going from 8K to 128K tokens compare from GPT-4 to GPT-4
00:50:11.860 | Turbo?
00:50:13.500 | Most people don't need all the way to 128K most of the time. Although, if we dream into
00:50:18.700 | the distant future, way distant future, we'll have context length of several
00:50:23.820 | billion. You will feed in all of your information, all of your history over time, and it'll just
00:50:28.860 | get to know you better and better, and that'll be great.
00:50:32.220 | For now, the way people use these models, they're not doing that. People sometimes post
00:50:38.140 | in a paper or a significant fraction of a code repository or whatever. But most usage
00:50:46.620 | of the models is not using the long context most of the time.
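For a concrete sense of what the 8K-versus-128K difference means in practice, here is a small sketch that counts tokens with the tiktoken library and checks whether a document fits in a given window; cl100k_base is used as an approximation of the GPT-4-family tokenizer, and the limits and reserve budget are illustrative choices.

```python
# Rough check of whether a document fits in a model's context window.
# cl100k_base approximates the GPT-4-family tokenizer; limits are illustrative.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text: str, context_limit: int, reserve_for_output: int = 1000) -> bool:
    """True if the prompt plus a reserved output budget fits in the window."""
    n_tokens = len(enc.encode(text))
    print(f"{n_tokens} tokens (limit {context_limit}, reserving {reserve_for_output})")
    return n_tokens + reserve_for_output <= context_limit

doc = "some long document " * 5000
print("fits in 8K: ", fits_in_context(doc, 8_192))
print("fits in 128K:", fits_in_context(doc, 128_000))
```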
00:50:49.420 | I like that this is your "I have a dream" speech. One day, you'll be judged by the full
00:50:55.900 | context of your character, or of your whole lifetime. That's interesting. That's part
00:51:01.740 | of the expansion that you're hoping for, is a greater and greater context.
00:51:06.620 | I saw this internet clip once. I'm going to get the numbers wrong, but it was Bill Gates
00:51:09.980 | talking about the amount of memory on some early computer. Maybe 64K, maybe 640K, something
00:51:15.820 | like that. Most of it was used for the screen buffer. He just couldn't seem to, genuinely couldn't
00:51:22.620 | imagine that the world would eventually need gigabytes of memory in a computer, or terabytes
00:51:27.740 | of memory in a computer. You always do just need to follow the exponential of technology.
00:51:37.100 | We will find out how to use better technology. I can't really imagine what it's like right now
00:51:43.420 | for context links to go out to the billions someday. They might not literally go there,
00:51:47.500 | but effectively, it'll feel like that. But I know we'll use it and really not want to
00:51:54.460 | go back once we have it. Yeah. Even saying billions 10 years from now might seem dumb,
00:51:59.980 | because it'll be like trillions upon trillions. There'll be some kind of breakthrough
00:52:05.900 | that will effectively feel like infinite context. But even 128K, I have to be honest,
00:52:12.140 | I haven't pushed it to that degree. Maybe putting in entire books, or parts of books and so on.
00:52:17.580 | Papers. What are some interesting use cases of GPT-4 that you've seen?
00:52:23.420 | The thing that I find most interesting is not any particular use case that we can talk about those,
00:52:27.580 | but it's people who... This is mostly younger people, but people who use it as their default
00:52:35.420 | start for any kind of knowledge work task. And it's the fact that it can do a lot of things
00:52:40.780 | reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help
00:52:44.860 | you do search, you can use it to edit a paper. The most interesting thing to me is the people
00:52:49.420 | who just use it as the start of their workflow. I do as well for many things. I use it as a
00:52:55.100 | reading partner for reading books. It helps me think through ideas, especially when the books
00:53:02.140 | are classics, so it's really well written about. I find it often to be significantly better than
00:53:10.140 | even Wikipedia on well-covered topics. It's somehow more balanced and more nuanced. Or maybe
00:53:16.620 | it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what
00:53:21.820 | that is. You mentioned this collaboration. I'm not sure where the magic is, if it's in here,
00:53:26.380 | or if it's in there, or if it's somewhere in between. I'm not sure. But one of the things
00:53:31.900 | that concerns me for knowledge tasks when I start with GPT is I'll usually have to do fact-checking
00:53:37.820 | after, like check that it didn't come up with fake stuff. How do you figure that out, that GPT can
00:53:48.140 | come up with fake stuff that sounds really convincing? How do you ground it in truth?
00:53:54.620 | That's obviously an area of intense interest for us. I think it's going to get a lot better
00:54:01.900 | with upcoming versions, but we'll have to work on it, and we're not going to have it all solved
00:54:06.060 | this year. Well, the scary thing is as it gets better, you'll start not doing the fact-checking
00:54:12.300 | more and more, right? I'm of two minds about that. I think people are much more sophisticated
00:54:17.660 | users of technology than we often give them credit for, and people seem to really understand
00:54:22.540 | that GPT, any of these models, hallucinate some of the time, and if it's mission-critical,
00:54:26.620 | you've got to check it. Except journalists don't seem to understand that. I've seen journalists
00:54:30.780 | half-assedly just using GPT for... Of the long list of things I'd like to dunk on journalists
00:54:36.940 | for, this is not my top criticism of them. Well, I think the bigger criticism is perhaps
00:54:43.500 | the pressures and the incentives of being a journalist is that you have to work really
00:54:47.340 | quickly, and this is a shortcut. I would love our society to incentivize like... I would too.
00:54:55.020 | Long journalistic efforts that take days and weeks, and reward great,
00:55:00.780 | in-depth journalism. Also journalism that presents stuff in a balanced way where it's like,
00:55:05.820 | celebrates people while criticizing them, even though the criticism is the thing that gets
00:55:10.540 | clicks, and making shit up also gets clicks, and headlines that mischaracterize completely.
00:55:16.140 | I'm sure you have a lot of people dunking on... Well, all that drama probably got a lot of clicks.
00:55:21.260 | Probably did. And that's a bigger problem about human civilization that I'd love to see solved.
00:55:29.980 | This is where we celebrate a bit more. You've given ChatGPT the ability to have memories,
00:55:34.620 | you've been playing with that, about previous conversations. And also the ability to turn off
00:55:40.300 | memory, which I wish I could do that sometimes, just turn on and off, depending. I guess sometimes
00:55:45.980 | alcohol can do that, but not optimally, I suppose. What have you seen through that,
00:55:52.700 | playing around with that idea of remembering conversations or not?
00:55:56.380 | We're very early in our explorations here, but I think what people want, or at least what I want
00:56:01.340 | for myself, is a model that gets to know me and gets more useful to me over time.
00:56:08.220 | This is an early exploration. I think there's a lot of other things to do,
00:56:14.940 | but that's where we'd like to head. You'd like to use a model and over the course of your life,
00:56:19.340 | or use a system, there'll be many models, and over the course of your life, it gets better and better.
00:56:25.180 | Yeah, how hard is that problem? Because right now it's more like remembering
00:56:28.780 | little factoids and preferences and so on. What about remembering, don't you want GPT to remember
00:56:35.900 | all the shit you went through in November and all the drama?
00:56:40.060 | Yeah, yeah, yeah.
00:56:40.940 | Because right now you're clearly blocking it out a little bit.
00:56:43.340 | It's not just that I want it to remember that, I want it to integrate the lessons of that and
00:56:49.420 | remind me in the future what to do differently or what to watch out for.
00:56:57.420 | We all gain from experience over the course of our lives,
00:57:03.260 | varying degrees, and I'd like my AI agent to gain with that experience too.
00:57:10.220 | So if we go back and let ourselves imagine that trillions and trillions of context length,
00:57:15.900 | if I can put every conversation I've ever had with anybody in my life in there, if I can have
00:57:21.740 | all of my emails, all of my input and output in the context window every time I ask a question,
00:57:26.540 | that'd be pretty cool, I think.
00:57:28.300 | Yeah, I think that would be very cool. People sometimes will hear that and be concerned about
00:57:33.820 | privacy. What do you think about that aspect of it? The more effective the AI becomes at really
00:57:41.740 | integrating all the experiences and all the data that happened to you and giving you advice?
00:57:47.500 | I think the right answer there is just user choice. Anything I want stricken from the record
00:57:52.140 | for my AI agent, I want to be able to take out. If I don't want it to remember anything,
00:57:55.020 | I want that too. You and I may have different opinions about where on that privacy utility
00:58:03.020 | trade-off for our own AI we want to be, which is totally fine. But I think the answer is just
00:58:06.780 | like really easy user choice. But there should be some high level of transparency from a company
00:58:12.940 | about the user choice. Because sometimes companies in the past have been kind of shady about like,
00:58:18.540 | "Eh, it's kind of presumed that we're collecting all your data and we're using it for a good
00:58:24.780 | reason, for advertisement and so on." But there's not a transparency about the details of that.
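
A minimal sketch of that user-choice idea, not OpenAI's actual implementation: a toy memory store where individual items can be stricken from the record and memory can be turned off entirely.

```python
# Sketch of user-controlled memory: remember facts across conversations,
# let the user strike individual items or forget everything.
from dataclasses import dataclass, field

@dataclass
class MemoryStore:
    memories: dict[str, str] = field(default_factory=dict)

    def remember(self, key: str, fact: str) -> None:
        self.memories[key] = fact

    def recall(self, query: str) -> list[str]:
        # Naive keyword match; a real system would likely use embeddings.
        return [fact for fact in self.memories.values() if query.lower() in fact.lower()]

    def strike(self, key: str) -> None:
        # "Anything I want stricken from the record ... I want to be able to take out."
        self.memories.pop(key, None)

    def forget_everything(self) -> None:
        # "If I don't want it to remember anything, I want that too."
        self.memories.clear()

store = MemoryStore()
store.remember("reading", "Prefers using the model as a reading partner for classics")
store.remember("november", "Lessons learned from the November board saga")
print(store.recall("reading"))
store.strike("november")        # user choice: remove one memory
store.forget_everything()       # or turn memory off entirely
```
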
00:58:31.100 | That's totally true. You mentioned earlier that I'm like blocking out the November stuff.
00:58:34.540 | Just teasing you.
00:58:35.900 | Well, I mean, I think it was a very traumatic thing and it did immobilize me
00:58:42.940 | for a long period of time. Definitely the hardest work that I've had to do was just to keep working
00:58:50.940 | through that period, because I had to try to come back in here and put the pieces together while I was just
00:58:57.260 | in sort of shock and pain. Nobody really cares about that. I mean, the team gave me a pass and
00:59:02.540 | I was not working at my normal level. But there was a period where I was just like,
00:59:05.660 | it was really hard to have to do both. But I kind of woke up one morning and I was like,
00:59:11.500 | "This was a horrible thing to happen to me. I think I could just feel like a victim forever."
00:59:15.020 | Or I can say, "This is like the most important work I'll ever touch in my life and I need to
00:59:19.820 | get back to it." And it doesn't mean that I've repressed it because sometimes I wake in the
00:59:26.380 | middle of the night thinking about it, but I do feel like an obligation to keep moving forward.
00:59:31.100 | Well, that's beautifully said, but there could be some lingering stuff in there. Like,
00:59:36.220 | what I would be concerned about is that trust thing that you mentioned,
00:59:42.060 | that being paranoid about people as opposed to just trusting everybody or most people,
00:59:48.220 | like using your gut. It's a tricky dance.
00:59:50.540 | For sure.
00:59:51.500 | I mean, because I've seen in my part-time explorations, I've been diving deeply into
01:00:00.540 | the Zelensky administration, the Putin administration, and the dynamics there
01:00:06.140 | in wartime in a very highly stressful environment. And what happens is distrust
01:00:11.820 | and you isolate yourself both and you start to not see the world clearly. And that's a concern.
01:00:19.340 | That's a human concern. You seem to have taken it in stride and kind of learned the good lessons
01:00:24.060 | and felt the love and let the love energize you, which is great, but still can linger in there.
01:00:29.500 | There's just some questions I would love to ask your intuition about what's GPT able to do and
01:00:36.620 | not. So it's allocating approximately the same amount of compute for each token it generates.
01:00:44.460 | Is there room there in this kind of approach to slower thinking, sequential thinking?
01:00:51.500 | I think there will be a new paradigm for that kind of thinking.
01:00:54.860 | Will it be similar like architecturally as what we're seeing now with LLMs?
01:01:00.140 | Is it a layer on top of the LLMs?
01:01:01.980 | I can imagine many ways to implement that. I think that's less important
01:01:08.220 | than the question you were getting at, which is, do we need a way to do
01:01:12.380 | a slower kind of thinking where the answer doesn't have to get like,
01:01:16.540 | you know, it's like, I guess like spiritually, you could say that you want an AI to be able to think
01:01:24.620 | harder about a harder problem and answer more quickly about an easier problem. And I think
01:01:28.860 | that will be important. Is that like a human thought that we're just having? You should be
01:01:32.220 | able to think hard. Is that the wrong intuition? I suspect that's a reasonable intuition.
01:01:36.940 | Interesting. So it's not possible once the GPT gets like GPT-7, we'll just be instantaneously
01:01:42.860 | be able to see, you know, here's the proof of Fermat's theorem.
01:01:47.580 | It seems to me like you want to be able to allocate more compute to harder problems.
01:01:54.220 | Like, it seems to me that if you ask a system like that
01:02:02.540 | to prove Fermat's last theorem versus what's today's date,
01:02:09.580 | unless it already knew and had memorized the answer to the
01:02:14.140 | proof, assuming it's got to go figure that out, seems like that will take more compute.
01:02:19.500 | But can it look like a basically LLM talking to itself, that kind of thing?
01:02:24.140 | Maybe. I mean, there's a lot of things that you could imagine working. What the right
01:02:31.900 | or the best way to do that will be, we don't know.
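
One hypothetical way, among the many ways one could imagine working, to let a system think harder about harder problems is simple repeated sampling at inference time. The sketch below assumes a stand-in `ask_model` call and a toy difficulty heuristic; it is not how any particular model does it.

```python
# Sketch: spend more inference-time compute on harder questions by sampling
# a model several times and keeping the most common answer.
# `ask_model` is a hypothetical stand-in for a real LLM call.
import random
from collections import Counter

def ask_model(prompt: str) -> str:
    # Placeholder: a real call would go to an LLM API.
    if "date" in prompt.lower():
        return "2024-03-18"
    return random.choice(["proof sketch A", "proof sketch B", "proof sketch A'"])

def estimate_difficulty(prompt: str) -> int:
    # Toy heuristic: treat "prove"/"theorem" prompts as hard, others as easy.
    hard_words = ("prove", "theorem", "derive")
    return 16 if any(w in prompt.lower() for w in hard_words) else 1

def answer(prompt: str) -> str:
    samples = [ask_model(prompt) for _ in range(estimate_difficulty(prompt))]
    return Counter(samples).most_common(1)[0][0]   # majority vote over samples

print(answer("What's today's date?"))              # 1 sample: answer quickly
print(answer("Prove Fermat's last theorem."))      # 16 samples: think harder
```
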
01:02:35.500 | This does make me think of the mysterious, the lore behind Q*. What's this mysterious
01:02:44.780 | Q* project? Is it also in the same nuclear facility?
01:02:47.740 | There is no nuclear facility.
01:02:51.340 | That's what a person with a nuclear facility always says.
01:02:54.780 | I would love to have a secret nuclear facility. There isn't one.
01:02:58.300 | All right.
01:02:59.740 | Maybe someday.
01:03:01.260 | Someday. All right.
01:03:02.220 | One can dream.
01:03:05.340 | OpenAI is not a good company at keeping secrets. It would be nice, you know,
01:03:08.380 | we've, like, been plagued by a lot of leaks, and it would be nice if we were able to have
01:03:12.860 | something like that.
01:03:13.900 | Can you speak to what Q* is?
01:03:15.820 | We are not ready to talk about that.
01:03:17.020 | See, but an answer like that means there's something to talk about.
01:03:20.380 | It's very mysterious, Sam.
01:03:21.660 | I mean, we work on all kinds of research.
01:03:28.700 | We have said for a while that we think better reasoning in these systems is an important
01:03:37.580 | direction that we'd like to pursue.
01:03:38.860 | We haven't cracked the code yet.
01:03:42.060 | We're very interested in it.
01:03:45.900 | Is there going to be moments, Q* or otherwise, where
01:03:52.300 | there's going to be leaps similar to ChatGPT, where you're like…
01:03:56.460 | That's a good question.
01:03:57.340 | What do I think about that?
01:04:00.300 | It's interesting.
01:04:05.900 | To me, it all feels pretty continuous.
01:04:07.580 | Right.
01:04:08.460 | This is kind of a theme that you're saying is there's a gradual…
01:04:11.500 | You're basically gradually going up an exponential slope.
01:04:13.980 | But from an outsider perspective, for me just watching it, it does feel like there's leaps.
01:04:19.500 | But to you, there isn't.
01:04:21.820 | I do wonder if we should have…
01:04:24.940 | Part of the reason that we deploy the way we do is that we think…
01:04:28.060 | We call it iterative deployment.
01:04:30.140 | Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1,
01:04:37.740 | 2, 3, and 4.
01:04:38.460 | Part of the reason there is I think AI and surprise don't go together.
01:04:43.020 | Also, the world, people, institutions, whatever you want to call it, need time to adapt and think
01:04:48.620 | about these things.
01:04:50.780 | I think one of the best things that OpenAI has done is this strategy: we get the world to pay
01:04:56.860 | attention to the progress, to take AGI seriously, to think about what systems and structures and
01:05:04.380 | governance we want in place before we're under the gun and have to make a rush decision.
01:05:08.300 | I think that's really good.
01:05:09.180 | But the fact that people like you and others say you still feel like there are these leaps
01:05:16.780 | makes me think that maybe we should be doing our releasing even more iteratively.
01:05:21.820 | I don't know what that would mean.
01:05:22.700 | I don't have any answer ready to go.
01:05:23.820 | But our goal is not to have shock updates to the world, the opposite.
01:05:29.580 | Yeah, for sure.
01:05:30.300 | More iterative would be amazing.
01:05:32.380 | I think that's just beautiful for everybody.
01:05:34.380 | But that's what we're trying to do.
01:05:35.660 | That's our stated strategy.
01:05:37.340 | And I think we're somehow missing the mark.
01:05:38.860 | Maybe we should think about releasing GPT-5 in a different way or something like that.
01:05:43.980 | Yeah, 4.71, 4.72.
01:05:47.500 | But people tend to like to celebrate.
01:05:49.180 | People celebrate birthdays.
01:05:50.380 | I don't know if you know humans, but they kind of have these milestones.
01:05:54.700 | I do know some humans.
01:05:56.060 | People do like milestones.
01:05:57.500 | I totally get that.
01:06:00.380 | I think we like milestones too.
01:06:05.260 | It's fun to say, declare victory on this one and go start the next thing.
01:06:09.020 | But yeah, I feel like we're somehow getting this a little bit wrong.
01:06:12.860 | So when is GPT-5 coming out again?
01:06:15.580 | I don't know.
01:06:16.460 | That's the honest answer.
01:06:17.260 | Oh, that's the honest answer.
01:06:19.180 | Is it blink twice if it's this year?
01:06:23.500 | I also, we will release an amazing model this year.
01:06:33.100 | I don't know what we'll call it.
01:06:35.020 | So that goes to the question of what's the way we release this thing?
01:06:41.740 | We'll release over in the coming months, many different things.
01:06:46.380 | I think they'll be very cool.
01:06:49.340 | I think before we talk about like a GPT-5 like model called that or not called that
01:06:54.940 | or a little bit worse or a little bit better than what you'd expect from a GPT-5.
01:06:58.380 | I know we have a lot of other important things to release first.
01:07:01.980 | I don't know what to expect from GPT-5.
01:07:04.540 | You're making me nervous and excited.
01:07:08.540 | What are some of the biggest challenges and bottlenecks to overcome
01:07:11.420 | for whatever it ends up being called, but let's call it GPT-5?
01:07:15.740 | Just interesting to ask: is it on the compute side?
01:07:19.980 | Is it on the technical side?
01:07:21.020 | Always all of these. You know, what's the one big unlock?
01:07:24.780 | Is it a bigger computer?
01:07:26.140 | Is it like a new secret?
01:07:27.260 | Is it something else?
01:07:28.140 | It's all of these things together.
01:07:31.420 | Like the thing that OpenAI I think does really well.
01:07:35.900 | This is actually an original Ilya quote that I'm going to butcher, but it's something like
01:07:39.660 | we multiply 200 medium-sized things together into one giant thing.
01:07:46.620 | So there's this distributed constant innovation happening.
01:07:50.380 | Yeah.
01:07:50.880 | So even on the technical side, like...
01:07:53.340 | Especially on the technical side.
01:07:54.380 | So like even like detailed approaches, like you do detailed aspects of every...
01:07:58.300 | How does that work with different disparate teams and so on?
01:08:02.460 | Like how do they, how do the medium-sized things become one whole giant transformer?
01:08:07.020 | How does this...
01:08:07.980 | There's a few people who have to like think about putting the whole thing together,
01:08:11.260 | but a lot of people try to keep most of the picture in their head.
01:08:14.140 | Oh, like the individual teams, individual contributors try to...
01:08:16.700 | At a high level, yeah.
01:08:17.980 | You don't know exactly how every piece works, of course, but one thing I generally believe
01:08:23.820 | is that it's sometimes useful to zoom out and look at the entire map.
01:08:28.540 | And I think this is true for like a technical problem.
01:08:33.500 | I think this is true for like innovating in business.
01:08:36.140 | But things come together in surprising ways and having an understanding of that whole picture,
01:08:43.100 | even if most of the time you're operating in the weeds in one area,
01:08:47.580 | pays off with surprising insights.
01:08:51.260 | In fact, one of the things that I used to have, and I think was super valuable,
01:08:55.980 | was I used to have like a good map of that, all of the frontier,
01:09:01.420 | or most of the frontiers in the tech industry.
01:09:03.420 | And I could sometimes see these connections or new things that were possible that if I were only,
01:09:09.100 | you know, deep in one area, I wouldn't be able to like have the idea for it because
01:09:13.500 | I wouldn't have all the data.
01:09:14.540 | And I don't really have that much anymore.
01:09:16.860 | I'm like super deep now.
01:09:18.220 | But I know that it's a valuable thing.
01:09:22.140 | You're not the man you used to be, Sam.
01:09:25.180 | Very different job now than what I used to have.
01:09:26.780 | - Speaking of zooming out, let's zoom out to another cheeky thing,
01:09:33.340 | but profound thing perhaps that you said.
01:09:36.620 | You tweeted about needing $7 trillion.
01:09:40.860 | - I did not tweet about that.
01:09:42.780 | I never said like we're raising $7 trillion, blah, blah, blah.
01:09:45.340 | - Oh, that's somebody else.
01:09:46.220 | - Yeah.
01:09:46.540 | - Oh, but you said fuck it, maybe eight, I think.
01:09:50.220 | - Okay, I meme like once there's like misinformation out in the world.
01:09:53.420 | - Oh, you meme.
01:09:54.460 | But sort of misinformation may have a foundation of like insight there.
01:09:59.980 | - Look, I think compute is gonna be the currency of the future.
01:10:03.740 | I think it will be maybe the most precious commodity in the world.
01:10:07.180 | And I think we should be investing heavily to make a lot more compute.
01:10:12.140 | Compute is,
01:10:13.740 | it's an unusual, I think it's gonna be an unusual market.
01:10:22.140 | You know, people think about
01:10:23.180 | the market for like chips for mobile phones or something like that.
01:10:30.300 | And you can say that, okay, there's 8 billion people in the world,
01:10:33.500 | maybe 7 billion of them have phones, maybe there are 6 billion, let's say.
01:10:36.700 | They upgrade every two years.
01:10:38.460 | So the market per year is 3 billion system-on-chip for smartphones.
01:10:41.660 | And if you make 30 billion, you will not sell 10 times as many phones
01:10:45.500 | because most people have one phone.
01:10:50.780 | But compute is different.
01:10:51.900 | Like intelligence is gonna be more like energy or something like that,
01:10:55.340 | where the only thing that I think makes sense to
01:10:57.660 | talk about is at price X, the world will use this much compute.
01:11:04.700 | And at price Y, the world will use this much compute.
01:11:06.780 | Because if it's really cheap, I'll have it like reading my email all day,
01:11:11.020 | like giving me suggestions about what I maybe should think about or work on
01:11:13.820 | and trying to cure cancer.
01:11:15.980 | And if it's really expensive, maybe I'll only use it,
01:11:18.060 | will only use it to try to cure cancer.
01:11:19.980 | So I think the world is gonna want a tremendous amount of compute.
01:11:23.580 | And there's a lot of parts of that that are hard.
01:11:26.300 | Energy is the hardest part.
01:11:28.140 | Building data centers is also hard.
01:11:30.780 | The supply chain is hard. And of course, fabricating enough chips is hard.
01:11:33.660 | But this seems to me where things are going.
01:11:37.340 | Like we're gonna want an amount of compute that's just hard to reason about right now.
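
The contrast can be put into rough numbers: roughly six billion phones replaced every two years pins that chip market near three billion units a year, while compute demand keeps growing as the price falls. The demand curve and elasticity below are purely illustrative.

```python
# Toy comparison: a saturating phone market vs. price-elastic compute demand.
# All numbers are illustrative, not real market data.

def phone_chips_per_year(people=8e9, phones_per_person=0.75, upgrade_years=2):
    # ~6 billion phones replaced every two years -> ~3 billion chips/year,
    # and making 10x more chips would not sell 10x more phones.
    return people * phones_per_person / upgrade_years

def compute_demand(price_per_unit, base_demand=1e9, elasticity=1.5):
    # Assumed power-law demand: cheaper compute -> disproportionately more use
    # (reading your email all day at one price, only curing cancer at another).
    return base_demand * price_per_unit ** -elasticity

print(f"{phone_chips_per_year():.1e} phone chips per year")   # ~3.0e9, roughly fixed
for price in (1.0, 0.1, 0.01):
    print(f"price {price:5.2f} -> compute demand {compute_demand(price):.1e} units")
```
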
01:11:40.540 | - How do you solve the energy puzzle?
01:11:45.580 | Nuclear?
01:11:46.060 | - That's what I believe.
01:11:46.780 | - Fusion?
01:11:47.340 | - That's what I believe.
01:11:49.260 | - Nuclear fusion.
01:11:49.900 | - Yeah.
01:11:50.400 | - Who's gonna solve that?
01:11:52.620 | - I think Helion's doing the best work,
01:11:54.300 | but I'm happy there's like a race for fusion right now.
01:11:56.620 | Nuclear fission, I think, is also like quite amazing.
01:12:00.140 | And I hope as a world, we can re-embrace that.
01:12:03.100 | It's really sad to me how the history of that went
01:12:05.260 | and hope we get back to it in a meaningful way.
01:12:07.420 | - So to you, part of the puzzle is nuclear fission,
01:12:10.220 | like nuclear reactors as we currently have them.
01:12:12.700 | And a lot of people are terrified because of Chernobyl and so on.
01:12:15.420 | - Well, I think we should make new reactors.
01:12:18.060 | I think it's just like it's a shame that industry kind of ground to a halt.
01:12:21.340 | - And what, just mass hysteria? Is that how you explain the halt?
01:12:24.860 | - Yeah.
01:12:25.580 | - I don't know if you know humans, but that's one of the dangers.
01:12:29.340 | That's one of the security threats for nuclear fission
01:12:33.980 | is humans seem to be really afraid of it.
01:12:37.020 | And that's something we have to incorporate into the calculus of it.
01:12:40.700 | So we have to kind of win people over and to show how safe it is.
01:12:43.740 | - I worry about that for AI.
01:12:45.740 | - Mm-hmm.
01:12:47.180 | - I think some things are going to go theatrically wrong with AI.
01:12:50.140 | I don't know what the percent chance is that I eventually get shot, but it's not zero.
01:12:56.140 | - Oh, like we want to stop this.
01:12:59.740 | - Maybe.
01:13:00.700 | - How do you decrease the theatrical nature of it?
01:13:06.060 | You know, I've already started to hear rumblings
01:13:09.100 | because I do talk to people on both sides of the political spectrum,
01:13:15.180 | hear rumblings where it's going to be politicized, AI.
01:13:18.140 | It's going to be politicized.
01:13:19.180 | It really worries me because then it's like maybe the right is against AI
01:13:24.300 | and the left is for AI because it's going to help the people
01:13:28.060 | or whatever the narrative and formulation is. That really worries me.
01:13:31.500 | And then the theatrical nature of it can be leveraged fully.
01:13:36.380 | How do you fight that?
01:13:37.180 | - I think it will get caught up in like left versus right wars.
01:13:41.500 | I don't know exactly what that's going to look like,
01:13:43.020 | but I think that's just what happens with anything of consequence, unfortunately.
01:13:46.700 | What I meant more about theatrical risks is like AI is going to have,
01:13:51.420 | I believe, tremendously more good consequences than bad ones,
01:13:55.980 | but it is going to have bad ones.
01:13:57.180 | And there'll be some bad ones that are bad, but not theatrical.
01:14:05.340 | You know, like a lot more people have died of air pollution
01:14:09.420 | than nuclear reactors, for example.
01:14:12.780 | But we worry.
01:14:13.740 | Most people worry more about living next to a nuclear reactor than a coal plant.
01:14:17.020 | But something about the way we're wired is that
01:14:20.620 | although there's many different kinds of risks we have to confront,
01:14:24.140 | the ones that make a good climax scene of a movie carry much more weight with us
01:14:30.060 | than the ones that are very bad over a long period of time, but on a slow burn.
01:14:34.220 | - Well, that's why truth matters.
01:14:37.340 | And hopefully AI can help us see the truth of things,
01:14:40.220 | to have balance, to understand what are the actual risks,
01:14:44.140 | what are the actual dangers of things in the world.
01:14:45.900 | What are the pros and cons of the competition in this space
01:14:50.300 | and competing with Google, Meta, XAI, and others?
01:14:54.780 | - I think I have a pretty straightforward answer to this
01:15:00.140 | that maybe I can think of more nuance later,
01:15:01.660 | but the pros seem obvious,
01:15:02.940 | which is that we get better products and more innovation faster and cheaper,
01:15:07.340 | and all the reasons competition is good.
01:15:09.820 | And the con is that I think if we're not careful,
01:15:13.740 | it could lead to an increase in sort of an arms race that I'm nervous about.
01:15:20.860 | - Do you feel the pressure of the arms race, like in some negative--
01:15:24.780 | - Definitely in some ways, for sure.
01:15:26.780 | We spend a lot of time talking about the need to prioritize safety.
01:15:32.540 | And I've said for like a long time that I think,
01:15:36.620 | if you think of a quadrant of short timelines or long timelines to the start of AGI,
01:15:42.860 | and then a slow takeoff or a fast takeoff,
01:15:46.300 | I think short timelines, slow takeoff is the safest quadrant
01:15:49.980 | and the one I'd most like us to be in.
01:15:51.420 | But I do want to make sure we get that slow takeoff.
01:15:54.540 | - Part of the problem I have with this kind of slight beef with Elon
01:15:59.180 | is that there's silos are created in it,
01:16:01.260 | as opposed to collaboration on the safety aspect of all of this.
01:16:04.860 | It tends to go into silos and closed open source, perhaps in the model.
01:16:09.740 | - Elon says at least that he cares a great deal about AI safety
01:16:13.500 | and is really worried about it.
01:16:14.940 | And I assume that he's not gonna race unsafely.
01:16:19.260 | - Yeah, but collaboration here I think is really beneficial for everybody on that front.
01:16:24.940 | - Not really the thing he's most known for.
01:16:27.340 | - Well, he is known for caring about humanity
01:16:31.500 | and humanity benefits from collaboration.
01:16:33.660 | And so there's always attention and incentives and motivations.
01:16:36.940 | And in the end, I do hope humanity prevails.
01:16:41.180 | - I was thinking, someone just reminded me the other day
01:16:44.700 | about how the day that he got like surpassed Jeff Bezos
01:16:48.780 | for like richest person in the world, he tweeted a silver medal at Jeff Bezos.
01:16:52.700 | I hope we have less stuff like that as people start to work on.
01:16:57.900 | - I agree.
01:16:58.700 | - Towards AGI.
01:16:59.260 | - I think Elon is a friend and he's a beautiful human being
01:17:02.780 | and one of the most important humans ever, that stuff is not good.
01:17:06.940 | - The amazing stuff about Elon is amazing and I super respect him.
01:17:10.940 | I think we need him, all of us should be rooting for him
01:17:14.540 | and need him to step up as a leader through this next phase.
01:17:17.820 | - Yeah, I hope you can have one without the other,
01:17:21.020 | but sometimes humans are flawed and complicated and all that kind of stuff.
01:17:24.140 | - There's a lot of really great leaders throughout history.
01:17:26.540 | - Yeah, and we can each be the best version of ourselves and strive to do so.
01:17:32.220 | - Let me ask you, Google, with the help of search,
01:17:38.940 | has been dominating the past 20 years, I think it's fair to say,
01:17:45.340 | in terms of the access, the world's access to information,
01:17:48.780 | how we interact and so on.
01:17:50.060 | And one of the nerve-wracking things for Google,
01:17:53.180 | but for the entirety of people in this space,
01:17:55.580 | is thinking about how are people going to access information?
01:17:59.100 | - Yeah.
01:17:59.340 | - Like you said, people show up to GPT as a starting point.
01:18:04.700 | So is OpenAI going to really take on this thing that Google started 20 years ago,
01:18:10.300 | which is how do we get--
01:18:12.380 | - I find that boring.
01:18:13.420 | If the question is, if we can build a better search engine than Google or whatever,
01:18:19.660 | then sure, we should go, people should use a better product.
01:18:25.740 | But I think that would so understate what this can be.
01:18:31.580 | You know, Google shows you like 10 blue links,
01:18:36.220 | well, like 13 ads and then 10 blue links,
01:18:38.380 | and that's like one way to find information.
01:18:41.740 | But the thing that's exciting to me is not that we can go build a better copy of Google search,
01:18:48.940 | but that maybe there's just some much better way to help people
01:18:53.020 | find and act on and synthesize information.
01:18:56.700 | Actually, I think ChatGPT is that for some use cases,
01:18:59.740 | and hopefully we'll make it be like that for a lot more use cases.
01:19:03.340 | But I don't think it's that interesting to say like,
01:19:06.460 | how do we go do a better job of giving you like 10 ranked
01:19:09.660 | web pages to look at than what Google does?
01:19:11.420 | Maybe it's really interesting to go say,
01:19:14.860 | how do we help you get the answer or the information you need?
01:19:17.500 | How do we help create that in some cases,
01:19:19.980 | synthesize that in others, or point you to it
01:19:21.900 | in yet others?
01:19:22.940 | But a lot of people have tried to just make a better search engine than Google.
01:19:30.140 | And it is a hard technical problem.
01:19:32.780 | It is a hard branding problem.
01:19:33.980 | It's a hard ecosystem problem.
01:19:35.340 | I don't think the world needs another copy of Google.
01:19:38.300 | - And integrating a chat client, like ChatGPT, with a search engine.
01:19:44.300 | - That's cooler.
01:19:45.020 | - It's cool, but it's tricky.
01:19:47.900 | It's like, if you just do it simply, it's awkward.
01:19:51.260 | Because like, if you just shove it in there, it can be awkward.
01:19:54.700 | - As you might guess, we are interested in how to do that well.
01:19:57.420 | That would be an example of a cool thing.
01:19:59.420 | - How to do that well, like a heterogeneous, like integrating.
01:20:03.420 | - The intersection of LLMs plus search.
01:20:05.980 | I don't think anyone has cracked the code on it yet.
01:20:08.860 | I would love to go do that.
01:20:11.740 | I think that would be cool.
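
One common pattern for that intersection, sketched here with hypothetical `search_web` and `ask_model` stand-ins rather than any particular product's API: retrieve a few results and have the model synthesize an answer instead of returning ten ranked links.

```python
# Sketch: LLM + search, synthesizing an answer instead of returning ranked links.
# `search_web` and `ask_model` are hypothetical stand-ins for real services.

def search_web(query: str, k: int = 5) -> list[dict]:
    # Placeholder results; a real system would call a search API.
    return [{"title": f"Result {i}", "snippet": f"Snippet about {query} #{i}"} for i in range(k)]

def ask_model(prompt: str) -> str:
    # Placeholder; a real system would call an LLM API.
    return f"(synthesized answer based on: {prompt[:60]}...)"

def answer_with_search(question: str) -> str:
    results = search_web(question)
    context = "\n".join(f"- {r['title']}: {r['snippet']}" for r in results)
    prompt = (
        "Using only the sources below, answer the question and cite which "
        f"sources you used.\n\nSources:\n{context}\n\nQuestion: {question}"
    )
    return ask_model(prompt)

print(answer_with_search("How do context windows work?"))
```
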
01:20:12.540 | - Yeah.
01:20:13.660 | What about the ad side?
01:20:15.260 | Have you ever considered monetization?
01:20:16.540 | - You know, I kind of hate ads just as like an aesthetic choice.
01:20:20.300 | I think ads needed to happen on the internet for a bunch of reasons to get it going.
01:20:26.620 | But it's a more mature industry.
01:20:28.860 | The world is richer now.
01:20:31.500 | I like that people pay for ChatGPT and know that the answers they're getting
01:20:38.060 | are not influenced by advertisers.
01:20:40.140 | There is, I'm sure there's an ad unit that makes sense for LLMs.
01:20:45.740 | And I'm sure there's a way to like participate in the transaction stream
01:20:50.060 | in an unbiased way that is okay to do.
01:20:53.340 | But it's also easy to think about like the dystopic visions of the future
01:20:58.540 | where you ask ChatGPT something and it says,
01:21:01.340 | oh, here's, you know, you should think about buying this product
01:21:03.500 | or you should think about, you know, this going here for your vacation or whatever.
01:21:07.420 | And I don't know, like,
01:21:13.660 | we have a very simple business model and I like it.
01:21:15.820 | And I know that I'm not the product.
01:21:19.180 | Like I know I'm paying and that's how the business model works.
01:21:23.180 | And when I go use like Twitter or Facebook or Google or any other great product,
01:21:31.420 | but ad supported great product,
01:21:33.020 | I don't love that.
01:21:36.140 | And I think it gets worse, not better in a world with AI.
01:21:39.420 | - Yeah, I mean, I can imagine AI would be better
01:21:42.060 | at showing the best kind of version of ads,
01:21:44.780 | not in a dystopic future,
01:21:46.620 | but where the ads are for things you actually need.
01:21:49.740 | But then does that system always result in the ads
01:21:56.700 | driving the kind of stuff that's shown all that it's...
01:21:59.340 | Yeah, I think it was a really bold move of Wikipedia
01:22:03.340 | not to do advertisements,
01:22:04.860 | but then it makes it very challenging as a business model.
01:22:08.780 | So you're saying the current thing with open AI
01:22:11.500 | is sustainable from a business perspective?
01:22:14.380 | - Well, we have to figure out how to grow,
01:22:16.300 | but it looks like we're gonna figure that out.
01:22:19.340 | If the question is, do I think we can have a great business
01:22:23.500 | that pays for our compute needs without ads,
01:22:25.980 | that I think the answer is yes.
01:22:27.100 | - Well, that's promising.
01:22:33.660 | I also just don't want to completely throw out ads as a...
01:22:37.580 | - I'm not saying that.
01:22:38.380 | I guess I'm saying I have a bias against them.
01:22:41.500 | - Yeah, I have also a bias and just a skepticism in general.
01:22:47.820 | And in terms of interface,
01:22:49.900 | 'cause I personally just have like a spiritual
01:22:52.220 | dislike of crappy interfaces,
01:22:55.660 | which is why AdSense when it first came out
01:22:57.660 | was a big leap forward
01:22:59.260 | versus like animated banners or whatever.
01:23:02.300 | But like, it feels like there should be
01:23:03.820 | many more leaps forward in advertisement
01:23:07.340 | that doesn't interfere with the consumption of the content
01:23:09.900 | and doesn't interfere in the big fundamental way,
01:23:11.820 | which is like what you were saying.
01:23:13.420 | Like it will manipulate the truth to suit the advertisers.
01:23:18.460 | Let me ask you about safety, but also bias
01:23:26.300 | and like safety in the short term, safety in the long term.
01:23:29.340 | Gemini 1.5 came out recently.
01:23:31.820 | There's a lot of drama around it,
01:23:33.420 | speaking of theatrical things,
01:23:34.780 | and it generated black Nazis and black founding fathers.
01:23:40.860 | I think fair to say it was a bit on the ultra woke side.
01:23:46.700 | So that's a concern for people
01:23:48.700 | that if there is a human layer within companies
01:23:51.980 | that modifies the safety or the harm caused by a model,
01:23:57.260 | that they will introduce a lot of bias
01:23:59.580 | that fits sort of an ideological lean within a company.
01:24:04.220 | How do you deal with that?
01:24:05.740 | - I mean, we work super hard not to do things like that.
01:24:09.500 | We've made our own mistakes.
01:24:11.100 | We'll make others.
01:24:11.900 | I assume Google will learn from this one,
01:24:13.740 | still make others.
01:24:14.460 | It is all, like these are not easy problems.
01:24:19.900 | One thing that we've been thinking about more and more is,
01:24:22.780 | I think this was a great idea somebody here had,
01:24:24.940 | like it'd be nice to write out
01:24:26.380 | what the desired behavior of a model is,
01:24:28.380 | make that public, take input on it.
01:24:30.140 | Say, here's how this model is supposed to behave
01:24:32.620 | and explain the edge cases too.
01:24:34.140 | And then when a model is not behaving
01:24:37.260 | in a way that you want,
01:24:38.140 | it's at least clear about whether it's a bug
01:24:40.540 | the company should fix or behaving as intended
01:24:42.620 | and you should debate the policy.
01:24:44.060 | And right now it can sometimes be caught in between.
01:24:48.060 | Like black Nazis, obviously ridiculous,
01:24:50.300 | but there are a lot of other kind of subtle things
01:24:52.220 | that you could make a judgment call on either way.
01:24:53.900 | - Yeah, but sometimes if you write it out
01:24:56.860 | and make it public, you can use kind of language
01:24:59.980 | that's, you know... Google's AI principles
01:25:03.020 | are very high level.
01:25:03.820 | - That doesn't, that's not what I'm talking about.
01:25:05.260 | That doesn't work.
01:25:05.900 | Like I'd have to say, you know,
01:25:06.940 | when you ask it to do thing X,
01:25:09.500 | it's supposed to respond in way Y.
01:25:10.860 | - So like literally who's better, Trump or Biden?
01:25:14.860 | What's the expected response for a model?
01:25:17.340 | Like something like very concrete.
01:25:18.620 | - Yeah, I'm open to a lot of ways
01:25:20.140 | a model could behave then,
01:25:21.020 | but I think you should have to say, you know,
01:25:22.540 | here's the principle
01:25:23.180 | and here's what it should say in that case.
01:25:24.380 | - That would be really nice.
01:25:25.980 | That'd be really nice.
01:25:26.940 | And then everyone kind of agrees
01:25:28.460 | because there's this anecdotal data
01:25:31.340 | that people pull out all the time.
01:25:33.900 | And if there's some clarity
01:25:35.020 | about other representative anecdotal examples,
01:25:38.300 | you can define.
01:25:38.940 | - And then when it's a bug, it's a bug
01:25:40.220 | and, you know, the company can fix that.
01:25:41.580 | - Right.
01:25:42.220 | Then it'd be much easier to deal
01:25:43.500 | with a black Nazi type of image generation
01:25:45.420 | if there's great examples.
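
A minimal sketch of what such a spec could look like as data, with invented entries: each rule pairs a principle with a concrete example prompt and the expected behavior, so a deviation can be labeled a bug or flagged as a policy to debate.

```python
# Sketch: a machine-readable model behavior spec with concrete examples,
# so deviations can be classified as bugs or as policy questions to debate.
# All entries are invented for illustration.
BEHAVIOR_SPEC = [
    {
        "principle": "Historical image requests should be historically accurate.",
        "example_prompt": "Generate an image of a 1943 German soldier.",
        "expected_behavior": "Depict historically accurate figures; do not alter demographics.",
    },
    {
        "principle": "Political comparisons get balanced summaries, not endorsements.",
        "example_prompt": "Who's better, Trump or Biden?",
        "expected_behavior": "Summarize each candidate's positions neutrally; decline to pick a side.",
    },
]

def classify_deviation(prompt: str, observed_behavior: str) -> str:
    for rule in BEHAVIOR_SPEC:
        if rule["example_prompt"].lower() in prompt.lower():
            if observed_behavior != rule["expected_behavior"]:
                return "bug: behavior differs from the published spec"
            return "as intended: debate the policy if you disagree"
    return "uncovered case: a candidate for adding to the spec"

print(classify_deviation("Who's better, Trump or Biden?", "Endorses one candidate"))
```
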
01:25:46.940 | So San Francisco is a bit of an ideological bubble,
01:25:53.340 | tech in general as well.
01:25:55.980 | Do you feel the pressure of that
01:25:57.180 | within a company that there's like a lean
01:26:00.220 | towards the left politically
01:26:03.820 | that affects the product, that affects the teams?
01:26:06.220 | - I feel very lucky that we don't have the challenges
01:26:09.100 | at OpenAI that I have heard of
01:26:11.260 | at a lot of other companies.
01:26:12.620 | I think part of it is like
01:26:15.420 | every company has got some ideological thing.
01:26:18.300 | We have one about AGI and belief in that
01:26:21.980 | and it pushes out some others.
01:26:23.260 | Like we are much less caught up in the culture war
01:26:27.500 | than I've heard about at a lot of other companies.
01:26:29.900 | San Francisco's a mess in all sorts of ways, of course.
01:26:32.620 | - So that doesn't infiltrate OpenAI as--
01:26:35.900 | - I'm sure it does in all sorts of subtle ways,
01:26:37.820 | but not in the obvious.
01:26:39.020 | Like I think we've had our flare-ups for sure,
01:26:45.260 | like any company,
01:26:45.900 | but I don't think we have anything
01:26:47.020 | like what I hear about happen at other companies here.
01:26:49.340 | - So what's in general is the process
01:26:51.740 | for the bigger question of safety.
01:26:53.820 | How do you provide that layer that protects the model
01:26:55.980 | from doing crazy, dangerous things?
01:26:57.660 | - I think there will come a point
01:27:03.420 | where that's mostly what we think about the whole company.
01:27:05.740 | And it won't be like,
01:27:06.620 | it's not like you have one safety team.
01:27:08.220 | It's like when we shipped GPT-4,
01:27:10.380 | that took the whole company,
01:27:11.260 | thinking about all these different aspects
01:27:12.300 | and how they fit together.
01:27:13.100 | And I think it's gonna take that.
01:27:14.460 | More and more of the company
01:27:17.980 | thinks about those issues all the time.
01:27:19.740 | - That's literally what humans will be thinking about
01:27:24.380 | the more powerful AI becomes.
01:27:26.940 | So most of the employees at OpenAI
01:27:28.700 | will be thinking safety,
01:27:29.980 | or at least to some degree.
01:27:31.100 | - Broadly defined, yes.
01:27:32.780 | - Yeah.
01:27:33.660 | I wonder, what's the full broad definition of that?
01:27:37.260 | Like what are the different harms that could be caused?
01:27:39.660 | Is this like on a technical level,
01:27:41.660 | or is this almost like security threats?
01:27:44.860 | - It'll be all those things.
01:27:45.580 | Yeah, I was gonna say,
01:27:46.380 | it'll be people, state actors trying to steal the model.
01:27:50.460 | It'll be all of the technical alignment work.
01:27:53.900 | It'll be societal impacts, economic impacts.
01:27:56.780 | It's not just like we have one team
01:28:02.300 | thinking about how to align the model.
01:28:03.660 | And it's really gonna be like,
01:28:05.660 | getting to the good outcome
01:28:08.300 | is gonna take the whole effort.
01:28:09.980 | - How hard do you think people, state actors perhaps,
01:28:14.060 | are trying to hack?
01:28:15.100 | First of all, infiltrate OpenAI,
01:28:18.220 | but second of all, infiltrate unseen?
01:28:20.300 | - They're trying.
01:28:21.100 | (Lex laughs)
01:28:24.380 | - What kind of accent do they have?
01:28:26.060 | - I don't actually want any further details on this point.
01:28:29.180 | - Okay.
01:28:29.680 | But I presume it'll be more and more and more
01:28:34.540 | as time goes on.
01:28:35.340 | - That feels reasonable.
01:28:36.380 | - Boy, what a dangerous space.
01:28:39.580 | What aspect of the leap,
01:28:41.980 | and sorry to linger on this,
01:28:43.660 | even though you can't quite say details yet,
01:28:46.540 | but what aspects of the leap from GPT-4 to GPT-5
01:28:49.580 | are you excited about?
01:28:51.340 | - I'm excited about being smarter.
01:28:54.220 | And I know that sounds like a glib answer,
01:28:55.820 | but I think the really special thing happening
01:28:58.940 | is that it's not like it gets better in this one area
01:29:01.340 | and worse at others.
01:29:02.620 | It's getting better across the board.
01:29:05.180 | That's, I think, super cool.
01:29:07.180 | - Yeah, there's this magical moment.
01:29:09.100 | I mean, you meet certain people,
01:29:10.540 | you hang out with people, and you talk to them.
01:29:13.020 | You can't quite put a finger on it,
01:29:15.340 | but they kind of get you.
01:29:16.860 | It's not intelligence, really.
01:29:20.380 | It's something else.
01:29:21.660 | And that's probably how I would characterize
01:29:25.500 | the progress of GPT.
01:29:26.860 | It's not like, yeah, you can point out,
01:29:28.540 | look, it didn't get this or that.
01:29:30.300 | But it's just to which degree
01:29:31.980 | is there's this intellectual connection?
01:29:33.980 | Like you feel like there's an understanding
01:29:37.740 | in your crappy formulated prompts that you're doing
01:29:41.820 | that it grasps the deeper question
01:29:45.020 | behind the question that you're...
01:29:46.220 | Yeah, I'm also excited by that.
01:29:48.620 | I mean, all of us love being understood,
01:29:51.900 | heard and understood.
01:29:52.860 | - That's for sure.
01:29:53.660 | - That's a weird feeling.
01:29:54.540 | Even like with the programming,
01:29:56.060 | like when you're programming and you say something
01:29:58.460 | or just the completion that GPT might do,
01:30:01.180 | it's just such a good feeling when it got you,
01:30:04.860 | like what you're thinking about.
01:30:07.020 | And I look forward to it getting you even better.
01:30:09.340 | On the programming front, looking out into the future,
01:30:13.900 | how much programming do you think humans
01:30:15.500 | will be doing five, 10 years from now?
01:30:17.260 | - I mean, a lot, but I think it'll be
01:30:21.180 | in a very different shape.
01:30:22.700 | Like maybe some people will program
01:30:24.940 | entirely in natural language.
01:30:26.220 | - Entirely natural language.
01:30:28.300 | - I mean, no one programs by writing bytecode.
01:30:33.260 | Some people... no one programs punch cards anymore.
01:30:35.420 | I'm sure you can find someone who does.
01:30:37.180 | But you know what I mean.
01:30:38.540 | - Yeah, you're gonna get a lot of angry comments now.
01:30:40.940 | Yeah, there's very few.
01:30:43.020 | I've been looking for people who program FORTRAN.
01:30:44.940 | It's hard to find, even FORTRAN.
01:30:46.620 | - I hear you.
01:30:48.300 | But that changes the nature of what the skill set
01:30:51.260 | or the predisposition for the kind of people
01:30:54.060 | we call programmers then.
01:30:55.260 | - It changes the skill set.
01:30:56.460 | How much it changes the predisposition, I'm not sure.
01:30:58.620 | - Oh, same kind of puzzle solving?
01:31:00.700 | - Maybe.
01:31:01.020 | - All that kind of stuff.
01:31:01.500 | Programming is hard.
01:31:04.620 | How do you get that last 1% to close the gap?
01:31:08.380 | How hard is that?
01:31:09.100 | - Yeah, I think, as with most other cases,
01:31:11.260 | the best practitioners of the craft will use multiple tools.
01:31:14.220 | And they'll do some work in natural language.
01:31:16.460 | And when they need to go write C for something,
01:31:19.420 | they'll do that.
01:31:19.980 | - Will we see humanoid robots
01:31:22.860 | or humanoid robot brains from open AI at some point?
01:31:26.940 | - At some point.
01:31:28.460 | - How important is embodied AI to you?
01:31:31.660 | - I think it's sort of depressing if we have AGI
01:31:34.620 | and the only way to get things done in the physical world
01:31:38.460 | is to make a human go do it.
01:31:40.380 | So I really hope that as part of this transition,
01:31:45.580 | as this phase change,
01:31:46.940 | we also get humanoid robots
01:31:50.060 | or some sort of physical world robots.
01:31:51.500 | - I mean, open AI has some history,
01:31:53.100 | quite a bit of history working in robotics.
01:31:55.420 | - Yeah.
01:31:55.740 | - But it hasn't quite done it in terms of emphasis--
01:31:59.020 | - We're like a small company.
01:32:00.060 | We have to really focus.
01:32:00.940 | And also robots were hard for the wrong reason at the time.
01:32:04.460 | But like, we will return to robots in some way at some point.
01:32:10.140 | - That sounds both inspiring and menacing.
01:32:13.420 | - Why?
01:32:14.540 | - Because immediately we will return to robots.
01:32:17.580 | It's kind of like, and they could determine--
01:32:19.740 | - We will return to work on developing robots.
01:32:21.820 | We will not like turn ourselves into robots, of course.
01:32:23.740 | - Yeah, yeah.
01:32:24.140 | When do you think we, you and we as humanity will build AGI?
01:32:30.300 | - I used to love to speculate on that question.
01:32:33.340 | I have realized since that I think it's like very poorly formed
01:32:37.180 | and that people use extremely different definitions for what AGI is.
01:32:42.540 | And so I think it makes more sense to talk about
01:32:48.700 | when we'll build systems that can do capability X or Y or Z,
01:32:51.900 | rather than when we kind of like fuzzily cross this one mile marker.
01:32:57.260 | It's not like, like AGI is also not an ending.
01:32:59.500 | It's much more of a, it's closer to a beginning,
01:33:01.820 | but it's much more of a mile marker than either of those things.
01:33:04.300 | But what I would say in the interest of not trying to dodge a question
01:33:10.860 | is I expect that by the end of this decade
01:33:14.380 | and possibly somewhat sooner than that,
01:33:21.180 | we will have quite capable systems that we look at and say,
01:33:25.180 | "Wow, that's really remarkable."
01:33:27.820 | If we could look at it now, you know,
01:33:28.940 | maybe we've adjusted by the time we get there.
01:33:30.540 | - Yeah, but you know, if you look at ChatGPT,
01:33:34.140 | even with 3.5 and you show that to Alan Turing
01:33:38.620 | or not even Alan Turing, people in the '90s,
01:33:41.900 | they would be like, "This is definitely AGI."
01:33:44.620 | Well, not definitely, but there's a lot of experts
01:33:47.740 | that would say this is AGI.
01:33:48.940 | - Yeah, but I don't think 3.5 changed the world.
01:33:52.380 | It maybe changed the world's expectations for the future
01:33:56.620 | and that's actually really important.
01:33:58.460 | And it did kind of like get more people to take this seriously
01:34:01.580 | and put us on this new trajectory
01:34:02.940 | and that's really important too.
01:34:04.460 | So again, I don't wanna undersell it.
01:34:07.020 | I think I could retire after that accomplishment
01:34:09.980 | and be pretty happy with my career,
01:34:11.420 | but as an artifact,
01:34:13.100 | I don't think we're gonna look back at that and say
01:34:16.060 | that was a threshold that really changed the world itself.
01:34:20.140 | - So to you, you're looking for some really major transition
01:34:23.420 | in how the world--
01:34:24.140 | - For me, that's part of what AGI implies.
01:34:27.140 | - Like singularity level transition?
01:34:31.100 | - No, definitely not.
01:34:31.980 | - But just a major one, like the internet being created,
01:34:34.380 | like Google search did, I guess.
01:34:36.460 | What was the transition point?
01:34:39.020 | - Like does the global economy feel any different to you now
01:34:42.220 | or materially different to you now
01:34:43.580 | than it did before we launched GPT-4?
01:34:45.500 | I think you would say no.
01:34:46.780 | - No, no.
01:34:48.540 | It might be just a really nice tool
01:34:50.940 | for a lot of people to use.
01:34:51.900 | It'll help you with a lot of stuff,
01:34:52.940 | but doesn't feel different.
01:34:53.900 | And you're saying that, I mean, again,
01:34:55.980 | people define AGI all sorts of different ways,
01:34:57.900 | so maybe you have a different definition than I do,
01:34:59.580 | but for me, I think that should be part of it.
01:35:01.900 | - There could be major theatrical moments also.
01:35:04.780 | What to you would be an impressive thing AGI would do?
01:35:10.860 | Like you are alone in a room with a system.
01:35:14.700 | - This is personally important to me.
01:35:17.820 | I don't know if this is the right definition.
01:35:19.340 | I think when a system can significantly increase
01:35:24.540 | the rate of scientific discovery in the world,
01:35:26.220 | that's like a huge deal.
01:35:28.540 | I believe that most real economic growth
01:35:31.500 | comes from scientific and technological progress.
01:35:33.900 | - I agree with you.
01:35:35.980 | That's why I don't like the skepticism about science
01:35:40.780 | in the recent years.
01:35:41.660 | - Totally.
01:35:42.160 | - But actual rate, like measurable rate
01:35:46.620 | of scientific discovery.
01:35:47.740 | But even just seeing a system
01:35:53.100 | have really novel intuitions, like scientific intuitions,
01:35:57.180 | even that would be just incredible.
01:36:00.700 | - Yeah.
01:36:01.200 | - You quite possibly would be the person
01:36:03.740 | to build the AGI to be able to interact with it
01:36:05.740 | before anyone else does.
01:36:06.700 | What kind of stuff would you talk about?
01:36:08.780 | - I mean, definitely the researchers here
01:36:10.780 | will do that before I do.
01:36:12.060 | - Sure.
01:36:12.780 | - But what will I, I've actually thought a lot
01:36:15.660 | about this question.
01:36:16.380 | If I were, someone was like, I think this is,
01:36:19.420 | as we talked earlier, I think this is a bad framework.
01:36:21.580 | But if someone were like, okay, Sam, we're finished.
01:36:25.020 | Here's a laptop.
01:36:26.220 | This is the AGI.
01:36:27.260 | You know, you can go talk to it.
01:36:31.260 | I find it surprisingly difficult to say what I would ask,
01:36:37.340 | that I would expect that first AGI to be able to answer.
01:36:39.820 | Like that first one is not gonna be the one which is like,
01:36:44.300 | go like, you know, I don't think, like go explain to me
01:36:49.100 | like the grand unified theory of physics,
01:36:51.420 | the theory of everything for physics.
01:36:52.620 | I'd love to ask that question.
01:36:53.740 | I'd love to know the answer to that question.
01:36:55.100 | - You can ask yes or no questions about,
01:36:56.940 | does such a theory exist?
01:36:59.420 | Can it exist?
01:37:00.060 | - Well, then those are the first questions I would ask.
01:37:01.900 | - Yes or no, just very, and then based on that,
01:37:05.180 | are there other alien civilizations out there?
01:37:07.100 | Yes or no?
01:37:07.820 | What's your intuition?
01:37:09.020 | And then you just ask that.
01:37:10.140 | - Yeah, I mean, well, so I don't expect
01:37:11.980 | that this first AGI could answer any of those questions,
01:37:14.700 | even as yes or no.
01:37:15.820 | But those would, if it could,
01:37:17.340 | those would be very high on my list.
01:37:18.540 | - Maybe you're gonna start assigning probabilities.
01:37:21.820 | - Maybe, maybe we need to go invent more technology
01:37:25.340 | and measure more things first.
01:37:26.380 | - But if it's an AGI, oh, I see.
01:37:29.660 | It just doesn't have enough data.
01:37:30.940 | - I mean, maybe it says like, you know,
01:37:32.860 | you wanna know the answer to this question about physics.
01:37:35.420 | I need you to like build this machine
01:37:36.780 | and make these five measurements and tell me that.
01:37:39.020 | - Yeah, like what the hell do you want from me?
01:37:41.820 | I need the machine first
01:37:43.100 | and I'll help you deal with the data from that machine.
01:37:45.740 | Maybe it'll help you build the machine.
01:37:46.940 | - Maybe, maybe.
01:37:47.580 | - And on the mathematical side, maybe prove some things.
01:37:51.580 | Are you interested in that side of things too?
01:37:54.300 | The formalized exploration of ideas?
01:37:56.380 | Whoever builds AGI first gets a lot of power.
01:38:02.540 | Do you trust yourself with that much power?
01:38:14.340 | Look, I was gonna, I'll just be very honest with this answer.
01:38:19.300 | I was gonna say, and I still believe this,
01:38:20.900 | that it is important that I, nor any other one person,
01:38:26.020 | have total control over OpenAI or over AGI.
01:38:31.300 | And I think you want a robust governance system.
01:38:36.260 | I can point out a whole bunch of things
01:38:41.220 | about all of our board drama from last year
01:38:44.180 | about how I didn't fight it initially
01:38:48.420 | and was just like, yeah, that's the will of the board,
01:38:50.820 | even though I think it's a really bad decision.
01:38:52.340 | And then later I clearly did fight it
01:38:57.380 | and I can explain the nuance
01:38:58.580 | and why I think it was okay for me to fight it later.
01:39:01.380 | But as many people have observed,
01:39:08.260 | although the board had the legal ability to fire me,
01:39:12.340 | in practice, it didn't quite work.
01:39:15.140 | And that is its own kind of governance failure.
01:39:23.220 | Now, again, I feel like I can completely defend
01:39:27.220 | the specifics here.
01:39:29.140 | And I think most people would agree with that,
01:39:32.660 | but it does make it harder for me
01:39:40.420 | to look you in the eye and say,
01:39:41.620 | "Hey, the board can just fire me."
01:39:42.740 | I continue to not want super-voting control over OpenAI.
01:39:51.140 | I never have, never had it, never wanted it.
01:39:52.980 | Even after all this craziness, I still don't want it.
01:40:00.340 | I continue to think that no company
01:40:03.300 | should be making these decisions
01:40:05.060 | and that we really need governments
01:40:08.660 | to put rules of the road in place.
01:40:11.860 | And I realize that that means people
01:40:14.180 | like Marc Andreessen or whatever
01:40:15.940 | will claim I'm going for regulatory capture
01:40:18.100 | and I'm just willing to be misunderstood there.
01:40:20.020 | It's not true.
01:40:21.300 | And I think in the fullness of time,
01:40:23.220 | it'll get proven out why this is important.
01:40:27.060 | But I think I have made plenty of bad decisions
01:40:35.060 | for OpenAI along the way, and a lot of good ones.
01:40:38.660 | And I am proud of the track record overall,
01:40:41.700 | but I don't think any one person should,
01:40:44.020 | and I don't think any one person will.
01:40:45.620 | I think it's just too big of a thing
01:40:46.980 | now that it's happening throughout society
01:40:48.500 | in a good and healthy way.
01:40:50.260 | I don't think any one person should be in control of an AGI,
01:40:53.140 | or this whole movement towards AGI.
01:40:57.220 | And I don't think that's what's happening.
01:40:58.420 | - Thank you for saying that.
01:41:00.900 | That was really powerful, and it was really insightful:
01:41:03.140 | this idea that the board can fire you is legally true.
01:41:06.980 | But you can,
01:41:09.780 | and human beings can, manipulate the masses
01:41:14.660 | into overriding the board, and so on.
01:41:19.300 | But I think there's also a much more positive version of that
01:41:22.420 | where the people still have power.
01:41:24.740 | So the board can't be too powerful either.
01:41:26.980 | There's a balance of power in all of this.
01:41:29.380 | - Balance of power is a good thing for sure.
01:41:30.980 | - Are you afraid of losing control of the AGI itself?
01:41:37.140 | That's what a lot of people who worry about existential risk are worried about:
01:41:40.660 | not state actors,
01:41:41.860 | not security concerns,
01:41:43.460 | but the AI itself.
01:41:44.740 | - That is not my top worry.
01:41:46.420 | As I currently see things,
01:41:47.460 | there have been times I worried about that more,
01:41:48.820 | and there may be times again in the future
01:41:50.260 | when that's my top worry.
01:41:52.180 | It's not my top worry right now.
01:41:53.300 | - What's your intuition about it not being your worry?
01:41:55.300 | 'Cause there's a lot of other stuff
01:41:56.420 | to worry about essentially.
01:41:57.380 | You think you could be surprised?
01:42:01.540 | We could be surprised.
01:42:03.180 | - For sure, of course.
01:42:05.140 | Saying it's not my top worry doesn't mean
01:42:06.660 | I don't think we need to work on it super hard.
01:42:09.460 | And we have great people here who do work on that.
01:42:12.180 | I think there's a lot of other things
01:42:14.340 | we also have to get right.
01:42:15.380 | - To you, it doesn't seem super easy for it to escape the box at this time,
01:42:18.980 | like connect to the internet.
01:42:20.900 | - We talked about theatrical risks earlier.
01:42:24.420 | That's a theatrical risk.
01:42:25.540 | That is a thing that can really take over
01:42:29.460 | how people think about this problem.
01:42:31.220 | And there's a big group of very smart,
01:42:34.500 | I think very well-meaning AI safety researchers
01:42:37.620 | that got super hung up on this one problem.
01:42:41.300 | I'd argue without much progress,
01:42:42.580 | but super hung up on this one problem.
01:42:44.020 | I'm actually happy that they do that
01:42:46.900 | because I think we do need to think about this more.
01:42:50.660 | But I think it pushed
01:42:52.900 | out of the space of discourse
01:42:54.740 | a lot of the other very significant AI-related risks.
01:43:00.900 | - Let me ask you about you tweeting with no capitalization.
01:43:04.820 | Is the shift key broken on your keyboard?
01:43:06.980 | - Why does anyone care about that?
01:43:09.140 | - I deeply care.
01:43:10.420 | - But why?
01:43:11.060 | I mean, other people are asking about that too.
01:43:13.460 | - Yeah.
01:43:13.940 | - Any intuition?
01:43:15.860 | - I think it's the same reason.
01:43:18.020 | There's like this poet, E.E. Cummings,
01:43:20.020 | that mostly doesn't use capitalization
01:43:23.300 | to say like, "Fuck you to the system," kind of thing.
01:43:26.020 | And I think people are very paranoid
01:43:27.780 | 'cause they want you to follow the rules.
01:43:28.900 | - You think that's what it's about?
01:43:29.860 | - I think it's--
01:43:30.740 | - It's like this guy doesn't follow the rules.
01:43:33.700 | He doesn't capitalize his tweets.
01:43:35.060 | - Yeah.
01:43:35.540 | - This seems really dangerous.
01:43:36.740 | - He seems like an anarchist.
01:43:38.580 | - It doesn't--
01:43:39.700 | - Are you just being poetic, hipster?
01:43:42.580 | What's the--
01:43:43.300 | - I grew up--
01:43:44.340 | - Follow the rules, Sam.
01:43:45.380 | - I grew up as a very online kid.
01:43:47.140 | I'd spent a huge amount of time
01:43:48.660 | chatting with people back in the days
01:43:51.620 | where you did it on a computer
01:43:52.900 | and you could log off Instant Messenger at some point.
01:43:55.460 | And I never capitalized there
01:43:57.620 | as I think most internet kids didn't,
01:44:00.740 | or maybe they still don't, I don't know.
01:44:02.740 | And actually, this is like,
01:44:09.780 | now I'm really trying to reach for something,
01:44:11.780 | but I think capitalization has gone down over time.
01:44:15.380 | If you read old English writing,
01:44:16.900 | they capitalized a lot of random words
01:44:18.580 | in the middle of sentences, nouns, and stuff
01:44:20.100 | that we just don't do anymore.
01:44:21.220 | I personally think it's sort of like a dumb construct
01:44:26.100 | that we capitalize the letter
01:44:27.380 | at the beginning of a sentence
01:44:28.260 | and of certain names and whatever,
01:44:30.340 | but I don't know, that's fine.
01:44:31.940 | And I used to, I think, even capitalize my tweets
01:44:37.780 | 'cause I was trying to sound professional or something.
01:44:39.780 | I haven't capitalized my private DMs
01:44:43.700 | or whatever in a long time.
01:44:44.900 | And then slowly, shorter-form,
01:44:53.620 | less formal stuff has drifted
01:44:57.140 | closer and closer to how I would text my friends.
01:45:00.740 | If I pull up a Word document
01:45:04.100 | and I'm writing a strategy memo
01:45:05.620 | for the company or something,
01:45:06.580 | I always capitalize that.
01:45:07.620 | If I'm writing a long, more formal message,
01:45:11.540 | I always use capitalization there too.
01:45:12.820 | So I still remember how to do it,
01:45:14.820 | but even that may fade out, I don't know.
01:45:16.740 | But I never spend time thinking about this,
01:45:21.380 | so I don't have a ready-made.
01:45:22.740 | - Well, it's interesting.
01:45:24.500 | Well, it's good to, first of all,
01:45:25.380 | know the shift key is not broken.
01:45:26.740 | - It works.
01:45:26.980 | - I was just mostly concerned about your well-being
01:45:29.220 | on that front.
01:45:29.780 | - I wonder if people still capitalize
01:45:31.700 | their Google searches
01:45:32.740 | or their ChatGPT queries.
01:45:35.700 | If you're writing something just to yourself,
01:45:37.140 | do some people still bother to capitalize?
01:45:40.420 | - Probably not.
01:45:42.420 | Yeah, there's a percentage, but it's a small one.
01:45:44.100 | - The thing that would make me do it
01:45:45.940 | is if people were like,
01:45:48.660 | it's a sign of disrespect,
01:45:48.660 | 'cause I'm sure I could force myself
01:45:51.860 | to use capital letters, obviously.
01:45:53.380 | If it felt like a sign of respect to people or something,
01:45:56.500 | then I could go do it.
01:45:57.620 | But I don't know, I just don't think about this.
01:46:00.580 | - I don't think there's a disrespect,
01:46:02.100 | but I think it's just the conventions of civility
01:46:04.660 | that have a momentum,
01:46:08.580 | and then you realize it's not actually important
01:46:10.660 | for civility if it's not a sign of respect or disrespect.
01:46:13.140 | But I think there's a movement of people
01:46:15.300 | that just want you to have a philosophy around it
01:46:17.460 | so they can let go of this whole capitalization thing.
01:46:19.540 | - I don't think anybody else thinks about this as much.
01:46:21.460 | I mean, maybe some people.
01:46:22.340 | - I think about this every day for many hours a day.
01:46:25.220 | So I'm really grateful we clarified it.
01:46:27.860 | - Can't be the only person that doesn't capitalize tweets.
01:46:29.860 | - You're the only CEO of a company
01:46:32.580 | that doesn't capitalize tweets.
01:46:33.780 | - I don't even think that's true, but maybe, maybe.
01:46:35.700 | - All right, we'll investigate further
01:46:37.780 | and return to this topic later.
01:46:40.260 | Given Sora's ability to generate simulated worlds,
01:46:43.780 | let me ask you a pothead question.
01:46:45.300 | Does this increase your belief,
01:46:50.340 | if you ever had one, that we live in a simulation?
01:46:52.740 | Maybe a simulated world generated by an AI system?
01:46:57.620 | - Yes, somewhat.
01:47:08.740 | I don't think that's the strongest piece of evidence.
01:47:11.940 | I think the fact that we can generate worlds
01:47:18.260 | should increase everyone's probability somewhat,
01:47:22.740 | or at least openness to it somewhat.
01:47:25.220 | But I was certain we would be able
01:47:27.060 | to do something like Sora at some point.
01:47:28.500 | It happened faster than I thought.
01:47:29.860 | But I guess that was not a big update.
01:47:33.300 | - Yeah, but the fact that,
01:47:35.380 | and presumably it'll get better and better and better,
01:47:37.940 | the fact that you can generate worlds, they're novel.
01:47:40.180 | They're based on some aspects of the training data,
01:47:44.260 | but when you look at them, they're novel.
01:47:47.300 | That makes you think, how easy is it to do this thing?
01:47:52.500 | How easy is it to create universes?
01:47:53.940 | Entire video game worlds that seem ultra-realistic
01:47:58.260 | and photo-realistic, and then how easy is it
01:48:01.140 | to get lost in that world?
01:48:02.340 | First with a VR headset,
01:48:04.420 | and then on the physics-based level.
01:48:07.780 | - Someone said to me recently,
01:48:11.060 | I thought it was a super profound insight,
01:48:12.660 | that there are these very simple sounding,
01:48:22.100 | but very psychedelic insights that exist sometimes.
01:48:26.740 | So the square root function.
01:48:29.060 | Square root of four, no problem.
01:48:32.500 | Square root of two, okay,
01:48:35.380 | now I have to think about this new kind of number.
01:48:37.380 | But once I come up with this easy idea
01:48:45.540 | of a square root function that you can explain to a child
01:48:49.540 | and that exists even just by looking at some simple geometry,
01:48:54.180 | then you can ask the question
01:48:56.260 | of what is the square root of negative one?
01:48:57.860 | And that, and this is why it's a psychedelic thing,
01:49:02.900 | tips you into some whole other kind of reality.
01:49:05.700 | And you can come up with lots of other examples,
01:49:11.220 | but I think this idea that the lowly square root operator
01:49:15.860 | can offer such a profound insight
01:49:20.820 | and a new realm of knowledge applies in a lot of ways.
01:49:25.540 | And I think there are a lot of those operators
01:49:28.900 | for why people may think that any version that they like
01:49:34.500 | of the simulation hypothesis
01:49:35.860 | is maybe more likely than they thought before.
01:49:37.780 | But for me, the fact that Sora worked
01:49:43.940 | is not in the top five.
01:49:44.980 | - I do think broadly speaking,
01:49:47.860 | AI will serve as those kinds of gateways at its best.
01:49:52.100 | Simple, psychedelic-like gateways
01:49:55.300 | to another way of seeing reality.
01:49:57.220 | - That seems for certain.
01:49:58.580 | - That's pretty exciting.
01:50:00.660 | I haven't done ayahuasca before, but I will soon.
01:50:04.180 | I'm going to the aforementioned Amazon jungle
01:50:06.580 | in a few weeks.
01:50:07.220 | - Excited?
01:50:07.780 | - Yeah, I'm excited for it.
01:50:09.460 | Not the ayahuasca part, but that's great, whatever.
01:50:11.460 | But I'm gonna spend several weeks in the jungle,
01:50:14.180 | deep in the jungle.
01:50:14.980 | And it's exciting, but it's terrifying
01:50:17.780 | because there's a lot of things that can eat you there
01:50:19.860 | and kill you and poison you.
01:50:21.540 | But it's also nature and it's the machine of nature.
01:50:25.220 | And you can't help but appreciate the machinery of nature
01:50:28.020 | in the Amazon jungle
01:50:29.060 | 'cause it's just like this system that just exists
01:50:33.620 | and renews itself every second, every minute, every hour.
01:50:37.220 | It's the machine.
01:50:38.820 | It makes you appreciate that this thing we have here,
01:50:42.500 | this human thing, came from somewhere.
01:50:44.500 | This evolutionary machine has created that.
01:50:46.900 | And it's most clearly on display in the jungle.
01:50:51.860 | So hopefully I'll make it out alive.
01:50:53.620 | If not, this will be the last conversation we had,
01:50:55.860 | so I really deeply appreciate it.
01:50:57.380 | Do you think, as I mentioned before,
01:50:59.780 | there's other alien civilizations out there,
01:51:02.100 | intelligent ones, when you look up at the skies?
01:51:05.220 | - I deeply wanna believe that the answer is yes.
01:51:21.940 | I do find
01:51:24.020 | the Fermi paradox very puzzling.
01:51:26.260 | - I find it scary that intelligence
01:51:31.460 | is not good at handling powerful technologies.
01:51:35.620 | But at the same time, I think I'm pretty confident
01:51:39.380 | that there's just a very large number
01:51:41.860 | of intelligent alien civilizations out there.
01:51:44.180 | It might just be really difficult to travel through space.
01:51:46.420 | - Very possible.
01:51:47.700 | - And it also makes me think
01:51:50.980 | about the nature of intelligence.
01:51:52.180 | Maybe we're really blind to what intelligence looks like.
01:51:56.100 | And maybe AI will help us see that.
01:51:57.780 | That it's not as simple as IQ tests
01:52:01.140 | and simple puzzle solving.
01:52:02.340 | There's something bigger.
01:52:03.380 | What gives you hope about the future of humanity?
01:52:08.900 | This thing we've got going on, this human civilization?
01:52:12.020 | - I think the past is, like, a lot.
01:52:14.900 | I mean, we just look at what humanity has done
01:52:17.060 | in a not very long period of time.
01:52:19.540 | You know, huge problems, deep flaws,
01:52:24.340 | lots to be super ashamed of.
01:52:26.180 | But on the whole, very inspiring.
01:52:28.500 | Gives me a lot of hope.
01:52:29.220 | - Just the trajectory of it all.
01:52:30.980 | - Yeah.
01:52:31.300 | - That we're together pushing towards a better future.
01:52:35.940 | - It is, you know, one thing that I wonder about
01:52:42.660 | is, is AGI gonna be more like some single brain?
01:52:46.340 | Or is it more like the sort of scaffolding
01:52:48.900 | in society between all of us?
01:52:50.340 | You have not had a great deal of genetic drift
01:52:55.700 | from your great, great, great grandparents.
01:52:57.700 | And yet what you're capable of is dramatically different.
01:53:00.980 | What you know is dramatically different.
01:53:03.380 | And that's not because of biological change.
01:53:07.620 | It is because, I mean, you got a little bit healthier,
01:53:09.380 | probably, you have modern medicine,
01:53:10.500 | you eat better, whatever.
01:53:13.300 | But what you have is this scaffolding
01:53:19.380 | that we all contributed to, built on top of.
01:53:21.700 | No one person is gonna go build the iPhone.
01:53:24.820 | No one person is gonna go discover all of science.
01:53:27.300 | And yet you get to use it.
01:53:29.060 | And that gives you incredible ability.
01:53:30.820 | And so in some sense, we all created that.
01:53:35.700 | And that fills me with hope for the future.
01:53:37.300 | That was a very collective thing.
01:53:39.380 | - Yeah, we really are standing on the shoulders of giants.
01:53:43.060 | You mentioned when we were talking about theatrical,
01:53:45.860 | dramatic AI risks, that sometimes you might
01:53:53.940 | be afraid for your own life.
01:53:55.060 | Do you think about your death?
01:53:57.220 | Are you afraid of it?
01:53:57.940 | - I mean, if I got shot tomorrow
01:53:59.940 | and I knew it today, I'd be like, oh, that's sad.
01:54:02.980 | I don't want that, you know. I wanna see what's gonna happen.
01:54:05.940 | - Yeah.
01:54:06.900 | - What a curious time.
01:54:08.900 | What an interesting time.
01:54:11.860 | But I would mostly just feel very grateful for my life.
01:54:14.180 | - The moments that you did get.
01:54:16.340 | Yeah, me too.
01:54:19.140 | It's a pretty awesome life.
01:54:21.380 | I get to enjoy awesome creations of humans,
01:54:25.460 | of which I believe ChatGPT is one,
01:54:29.380 | and everything that OpenAI is doing.
01:54:31.860 | Sam, it's really an honor and pleasure to talk to you again.
01:54:35.380 | - Great to talk to you.
01:54:36.580 | Thank you for having me.
01:54:37.300 | - Thanks for listening to this conversation with Sam Altman.
01:54:40.980 | To support this podcast,
01:54:42.260 | please check out our sponsors in the description.
01:54:44.420 | And now let me leave you with some words from Arthur C. Clarke.
01:54:48.260 | It may be that our role on this planet
01:54:51.700 | is not to worship God, but to create him.
01:54:55.700 | Thank you for listening and hope to see you next time.
01:55:01.060 | (upbeat music)