Sam Altman: OpenAI, GPT-5, Sora, Board Saga, Elon Musk, Ilya, Power & AGI | Lex Fridman Podcast #419
Chapters
0:00 Introduction
1:05 OpenAI board saga
18:31 Ilya Sutskever
24:40 Elon Musk lawsuit
34:32 Sora
44:23 GPT-4
55:32 Memory & privacy
62:36 Q*
66:12 GPT-5
69:27 $7 trillion of compute
77:35 Google and Gemini
88:40 Leap to GPT-5
92:24 AGI
110:57 Aliens
00:00:00.000 |
I think compute is going to be the currency of the future. I think it will be maybe the 00:00:03.880 |
most precious commodity in the world. I expect that by the end of this decade, and possibly 00:00:14.260 |
somewhat sooner than that, we will have quite capable systems that we look at and say, "Wow, 00:00:19.760 |
that's really remarkable." The road to AGI should be a giant power struggle. I expect 00:00:26.160 |
Whoever builds AGI first gets a lot of power. Do you trust yourself with that much power? 00:00:36.660 |
The following is a conversation with Sam Altman, his second time in the podcast. He is the 00:00:42.280 |
CEO of OpenAI, the company behind GPT-4, ChatGPT, Sora, and perhaps one day, the very 00:00:50.760 |
company that will build AGI. This is the Lex Fridman Podcast. To support it, please check 00:00:58.880 |
out our sponsors in the description. And now, dear friends, here's Sam Altman. 00:01:05.680 |
Take me through the OpenAI board saga that started on Thursday, November 16th, maybe 00:01:13.720 |
That was definitely the most painful professional experience of my life. Chaotic and shameful 00:01:24.440 |
and upsetting and a bunch of other negative things. There were great things about it too, 00:01:31.760 |
and I wish it had not been in such an adrenaline rush that I wasn't able to stop and appreciate 00:01:39.360 |
them at the time. But I came across this old tweet of mine, or this tweet of mine from 00:01:48.040 |
that time period, which was like, it was like, you know, kind of going to your own eulogy, 00:01:51.880 |
watching people say all these great things about you and just like unbelievable support 00:01:57.280 |
from people I love and care about. That was really nice. 00:02:04.560 |
That whole weekend, with one big exception, I felt like a great deal of love and very 00:02:13.640 |
little hate, even though it felt like I have no idea what's happening and what's going 00:02:21.720 |
to happen here, and this feels really bad. There were definitely times I thought it was 00:02:25.840 |
going to be one of the worst things to ever happen for AI safety. Well, I also think I'm 00:02:31.400 |
happy that it happened relatively early. I thought at some point between when OpenAI 00:02:37.760 |
started and when we created AGI, there was going to be something crazy and explosive 00:02:43.080 |
that happened, but there may be more crazy and explosive things still to happen. It still, 00:02:51.240 |
I think, helped us build up some resilience and be ready for more challenges in the future. 00:03:02.280 |
But the thing you had a sense that you would experience is some kind of power struggle. 00:03:08.720 |
The road to AGI should be a giant power struggle. The world should. Well, not should, I expect 00:03:15.920 |
that to be the case. And so, you have to go through that, like you said, iterate as often 00:03:22.240 |
as possible in figuring out how to have a board structure, how to have organization, 00:03:28.000 |
how to have the kind of people that you're working with, how to communicate all that 00:03:32.760 |
in order to deescalate the power struggle as much as possible, pacify it. But at this 00:03:39.640 |
point it feels like something that was in the past that was really unpleasant and really 00:03:49.720 |
difficult and painful, but we're back to work and things are so busy and so intense that 00:03:57.120 |
I don't spend a lot of time thinking about it. 00:04:00.680 |
There was a time after, there was this fugue state for the month after, maybe 45 days after, 00:04:09.880 |
that I was just sort of drifting through the days. I was so out of it. I was feeling so 00:04:20.240 |
Yeah. Really painful. And hard to have to keep running OpenAI in the middle of that. 00:04:28.160 |
I just wanted to crawl into a cave and kind of recover for a while. But now it's like 00:04:37.880 |
Well, it's still useful to go back there and reflect on board structures, on power dynamics, 00:04:48.860 |
on how companies are run, the tension between research and product development and money 00:04:55.480 |
and all this kind of stuff, so that you, who have a very high potential of building AGI, 00:05:02.800 |
would do so in a slightly more organized, less dramatic way in the future. So there's 00:05:08.200 |
value there to go, both the personal psychological aspects of you as a leader and also just the 00:05:15.360 |
board structure and all this kind of messy stuff. 00:05:18.480 |
We learned a lot about structure and incentives and what we need out of a board. I think it 00:05:30.320 |
is valuable that this happened now in some sense. I think this is probably not the last 00:05:37.560 |
high-stress moment of OpenAI, but it was quite a high-stress moment. My company very 00:05:41.760 |
nearly got destroyed. We think a lot about many of the other things we've got to get 00:05:49.100 |
right for AGI, but thinking about how to build a resilient org and how to build a structure 00:05:54.720 |
that will stand up to a lot of pressure in the world, which I expect more and more as 00:05:58.920 |
we get closer, I think that's super important. 00:06:01.120 |
Do you have a sense of how deep and rigorous the deliberation process by the board was? 00:06:07.520 |
Can you shine some light on just human dynamics involved in situations like this? Was it just 00:06:13.320 |
a few conversations and all of a sudden it escalates and why don't we fire Sam kind of 00:06:19.400 |
I think the board members are well-meaning people on the whole. I believe that in stressful 00:06:37.520 |
situations where people feel time pressure or whatever, people understandably make sub-optimal 00:06:49.080 |
decisions. I think one of the challenges for OpenAI will be we're going to have to have 00:06:55.320 |
a board and a team that are good at operating under pressure. 00:07:02.960 |
I think boards are supposed to have a lot of power, but one of the things that we did 00:07:08.440 |
see is in most corporate structures, boards are usually answerable to shareholders. Sometimes 00:07:15.160 |
people have super voting shares or whatever. In this case, and I think one of the things 00:07:20.040 |
with our structure that we maybe should have thought about more than we did, is that the 00:07:26.000 |
board of a nonprofit has, unless you put other rules in place, quite a lot of power. They 00:07:32.720 |
don't really answer to anyone but themselves. There's ways in which that's good, but what 00:07:37.440 |
we'd really like is for the board of OpenAI to answer to the world as a whole as much 00:07:47.680 |
There's, I guess, a new smaller board at first and now there's a new final board. 00:07:53.840 |
Not a final board yet. We've added some, a lot more. 00:07:56.600 |
Okay. What is fixed in the new one that was perhaps broken in the previous one? 00:08:05.960 |
The old board sort of got smaller over the course of about a year. It was nine and then 00:08:11.080 |
it went down to six. Then we couldn't agree on who to add. The board also, I think, didn't 00:08:20.040 |
have a lot of experienced board members. A lot of the new board members at OpenAI have 00:08:25.280 |
just have more experience as board members. I think that'll help. 00:08:31.480 |
It's been criticized, some of the people that are added to the board. I heard a lot of people 00:08:36.000 |
criticizing the addition of Larry Summers, for example. What's the process of selecting 00:08:43.160 |
So, Bret and Larry were kind of decided in the heat of the moment over this very tense 00:08:48.680 |
weekend. That weekend was like a real roller coaster. It was like a lot of ups and downs. 00:08:56.760 |
We were trying to agree on new board members that both sort of the executive team here 00:09:05.360 |
and the old board members felt would be reasonable. Larry was actually one of their suggestions, 00:09:11.680 |
the old board members. Bret, I think, I had even previous to that weekend suggested, but 00:09:17.640 |
he was busy and didn't want to do it. Then we really needed help. We talked about 00:09:22.600 |
a lot of other people too, but I felt like if I was going to come back, I needed new 00:09:32.440 |
board members. I didn't think I could work with the old board again in the same configuration. 00:09:39.640 |
We then decided, and I'm grateful that Adam would stay, but we considered various configurations 00:09:48.120 |
and decided we wanted to get to a board of three and had to find two new board members 00:09:54.000 |
over the course of sort of a short period of time. Those were decided honestly without 00:09:59.240 |
– that's like you kind of do that on the battlefield. You don't have time to design 00:10:03.640 |
a rigorous process then. For new board members since, and new board members we'll add going forward, 00:10:11.260 |
we have some criteria that we think are important for the board to have, different expertise 00:10:17.640 |
that we want the board to have. Unlike hiring an executive where you need them to do one 00:10:22.560 |
role well, the board needs to do a whole role of kind of governance and thoughtfulness well. 00:10:30.880 |
One thing that Bret says, which I really like, is that we want to hire board members 00:10:34.160 |
in slates, not as individuals one at a time. Thinking about a group of people that will 00:10:40.080 |
bring nonprofit expertise, expertise in running companies, sort of good legal and governance 00:10:46.240 |
expertise, that's kind of what we've tried to optimize for. 00:10:48.920 |
So is technical savvy important for the individual board members? 00:10:52.200 |
Not for every board member, but for certainly some you need that. That's part of what 00:10:56.560 |
So, I mean, the interesting thing that people probably don't understand about OpenAI, I 00:11:00.680 |
certainly don't, is like all the details of running the business. When they think about 00:11:04.640 |
the board, given the drama, they think about you, they think about like, if you reach AGI 00:11:11.200 |
or you reach some of these incredibly impactful products and you build them and deploy them, 00:11:16.200 |
what's the conversation with the board like? And they kind of think, all right, what's 00:11:21.000 |
the right squad to have in that kind of situation to deliberate? 00:11:25.080 |
Look, I think you definitely need some technical experts there. And then you need some people 00:11:29.840 |
who are like, how can we deploy this in a way that will help people in the world the 00:11:36.420 |
most and people who have a very different perspective? I think a mistake that you or 00:11:41.840 |
I might make is to think that only the technical understanding matters. And that's definitely 00:11:46.320 |
part of the conversation you want that board to have, but there's a lot more about how 00:11:49.960 |
that's going to just like impact society and people's lives that you really want represented 00:11:55.240 |
And you're just kind of, are you looking at the track record of people or you're just 00:12:01.000 |
Track record is a big deal. You of course have a lot of conversations, but I, you know, 00:12:08.760 |
there's some roles where I kind of totally ignore track record and just look at slope, 00:12:17.600 |
Thank you. Thank you for making it mathematical for the audience. 00:12:21.560 |
For a board member, like I do care much more about the Y-intercept. Like I think there 00:12:25.440 |
is something deep to say about track record there. And experience is sometimes very hard 00:12:32.280 |
Do you try to fit a polynomial function or exponential one to the, to the track record? 00:12:36.720 |
That's not that, an analogy doesn't carry that far. 00:12:39.280 |
All right. You mentioned some of the low points that weekend. What were some of the low points 00:12:45.600 |
psychologically for you? Did you consider going to the Amazon jungle and just taking 00:12:53.840 |
I mean, there's so many low, like it was a very bad period of time. There were great 00:12:58.920 |
high points too. Like my phone was just like sort of nonstop blowing up with nice messages 00:13:05.960 |
from people I work with every day, people I hadn't talked to in a decade. I didn't get 00:13:09.720 |
to like appreciate that as much as I should have because I was just like in the middle 00:13:12.680 |
of this firefight, but that was really nice. But on the whole, it was like a very painful 00:13:17.240 |
weekend and also just like a very, it was like a battle fought in public to a surprising 00:13:25.840 |
degree. And that's, that was extremely exhausting to me much more than I expected. I think fights 00:13:31.320 |
are generally exhausting, but this one really was, you know, the board did this Friday afternoon. 00:13:39.100 |
I really couldn't get much in the way of answers, but I also was just like, well, the board 00:13:44.320 |
gets to do this. And so I'm going to think for a little bit about what I want to do, 00:13:49.240 |
but I'll try to find the blessing in disguise here. And I was like, well, I, you know, my 00:13:56.560 |
current job at OpenAI is or it was like to like run a decently sized company at this 00:14:01.960 |
point. And the thing I had always liked the most was just getting to like work on, work 00:14:05.760 |
with the researchers. And I was like, yeah, I can just go do like a very focused AGI research 00:14:10.000 |
effort. And I got excited about that. It didn't even occur to me at the time to like, possibly 00:14:15.760 |
that this was all going to get undone. This was like Friday afternoon. 00:14:18.560 |
So you've accepted your, the death of this previous- 00:14:22.280 |
Very quickly, very quickly. Like within, you know, I mean, I went through like a little 00:14:26.360 |
period of confusion and rage, but very quickly. And by Friday night, I was like talking to 00:14:30.360 |
people about what was going to be next. And I was excited about that. I think it was Friday 00:14:38.920 |
night evening for the first time that I heard from the exec team here, which was like, hey, 00:14:42.840 |
we're going to like fight this. And, you know, we think, well, whatever. And then I went 00:14:48.600 |
to bed just still being like, okay, excited. Like onward, were you able to sleep? Not a 00:14:54.160 |
lot. It was one of the weird things was, it was this like period of four and a half days 00:14:59.160 |
where sort of didn't sleep much, didn't eat much and still kind of had like a surprising 00:15:04.640 |
amount of energy. It was, you learn like a weird thing about adrenaline in wartime. 00:15:09.520 |
So you kind of accepted the death of a, you know, this baby, OpenAI. 00:15:13.080 |
And I was excited for the new thing. I was just like, okay, this was crazy, but whatever. 00:15:18.560 |
And then Saturday morning, two of the board members called and said, hey, we, you know, 00:15:22.600 |
destabilize, we didn't mean to destabilize things. We don't want to destroy a lot of 00:15:25.600 |
value here. You know, can we talk about you coming back? And I immediately didn't want 00:15:31.520 |
to do that, but I thought a little more and I was like, well, I do really care about 00:15:36.080 |
the people here, the partners, shareholders, like all of the, I love this company. 00:15:41.400 |
And so I thought about it and I was like, well, okay, but like, here's, here's the stuff 00:15:44.080 |
I would need. And, and then the most painful time of all was over the course of that weekend. 00:15:52.600 |
I kept thinking and being told, and we all kept, not just me, like the whole team here 00:15:57.280 |
kept thinking, well, we were trying to like keep OpenAI stabilized while the whole world 00:16:01.980 |
was trying to break it apart, people trying to recruit, whatever. 00:16:04.460 |
We kept being told like, all right, we're almost done. We're almost done. We just need 00:16:06.880 |
like a little bit more time. And it was this like very confusing state. And then Sunday 00:16:12.440 |
evening, when again, like every few hours I expected that we were going to be done and 00:16:17.180 |
we're going to like figure out a way for me to return and things to go back to how they 00:16:21.880 |
were, the board then appointed a new interim CEO. And then I was like, I mean, that is, 00:16:29.520 |
that is, that feels really bad. That was the low point of the whole thing. 00:16:36.320 |
You know, I'll tell you something. I, it felt very painful, but I felt a lot of love that 00:16:42.840 |
whole weekend. It was not other than that one moment, Sunday night, I would not characterize 00:16:48.520 |
my emotions as anger or hate. But I really just like, I felt a lot of love from people 00:16:56.720 |
towards people. It was like painful, but like the dominant emotion of the weekend was love. 00:17:04.240 |
You've spoken highly of Mira Murati, that she helped, especially as you put in a tweet, in 00:17:09.600 |
the quiet moments when it counts, perhaps we could take a bit of a tangent. What do 00:17:15.800 |
Well, she did a great job during that weekend in a lot of chaos, but, but people often see 00:17:22.240 |
leaders in the moment in like the crisis moments, good or bad. But a thing I really value in 00:17:29.000 |
leaders is how people act on a boring Tuesday at 9:46 in the morning. And in, in just sort 00:17:36.760 |
of the, the, the normal drudgery of the day to day, how someone shows up in a meeting, 00:17:43.260 |
the quality of the decisions they make. That was what I meant about the quiet moments. 00:17:48.160 |
Meaning like most of the work is done on a day by day in a meeting by meeting just, just 00:17:58.400 |
Yeah. I mean, look, what you wanted to spend the last 20 minutes about, 00:18:02.160 |
and I understand is like this one very dramatic weekend. Yeah. But that's not really what 00:18:07.240 |
OpenAI is about. OpenAI is really about the other seven years. 00:18:10.600 |
Well, yeah. Human civilization is not about the invasion of the Soviet Union by Nazi Germany, 00:18:16.160 |
but still that's something people focus on. Very, very understandable. It gives us an 00:18:21.020 |
insight into human nature, the extremes of human nature, and perhaps some of the damage 00:18:25.540 |
and some of the triumphs of human civilization can happen in those moments. So it's like 00:18:29.620 |
illustrative. Let me ask you about Ilya. Is he being held hostage in a secret nuclear 00:18:42.460 |
I mean, it's becoming a meme at some point. You've known Ilya for a long time. He was 00:18:48.460 |
obviously in part of this drama with the board and all that kind of stuff. What's your relationship 00:18:57.060 |
I love Ilya. I have tremendous respect for Ilya. I don't have anything I can like say 00:19:02.720 |
about his plans right now. That's a question for him. But I really hope we work together 00:19:08.220 |
for certainly the rest of my career. He's a little bit younger than me. Maybe he works 00:19:14.540 |
There's a meme that he saw something. He maybe saw AGI and that gave him a lot of worry internally. 00:19:28.860 |
Ilya has not seen AGI. None of us have seen AGI. We've not built AGI. I do think one of 00:19:37.740 |
the many things that I really love about Ilya is he takes AGI and the safety concerns, broadly 00:19:47.340 |
speaking, including things like the impact this is going to have on society, very seriously. 00:19:55.780 |
As we continue to make significant progress, Ilya is one of the people that I spent the 00:20:01.140 |
most time over the last couple of years talking about what this is going to mean, what we 00:20:06.940 |
need to do to ensure we get it right, to ensure that we succeed at the mission. Ilya did not 00:20:14.620 |
see AGI, but Ilya is a credit to humanity in terms of how much he thinks and worries 00:20:29.940 |
I had a bunch of conversations with him in the past. I think when he talks about technology, 00:20:34.460 |
he's always doing this long-term thinking type of thing. He's not thinking about what 00:20:39.460 |
this is going to be in a year. He's thinking about it in 10 years. Just thinking from first 00:20:43.900 |
principles like, "Okay, if this scales, what are the fundamentals here? Where is this going?" 00:20:51.140 |
That's a foundation for them thinking about all the other safety concerns and all that 00:20:55.980 |
kind of stuff, which makes him a really fascinating human to talk with. Do you have any idea why 00:21:03.740 |
he's been quiet? Is it he's just doing some soul searching? 00:21:07.700 |
Again, I don't want to speak for Ilya. I think that you should ask him that. He's definitely 00:21:18.420 |
a thoughtful guy. I think Ilya is always on a soul search in a really good way. 00:21:26.660 |
Yes. Yeah. Also, he appreciates the power of silence. Also, I'm told he can be a silly guy, 00:21:37.300 |
I've never witnessed a silly Ilya, but I look forward to that as well. 00:21:43.860 |
I was at a dinner party with him recently, and he was playing with a puppy. And he was 00:21:47.740 |
in a very silly mood, very endearing. And I was thinking like, "Oh, man, this is not the 00:21:54.300 |
So just to wrap up this whole saga, are you feeling good about the board structure, 00:22:03.540 |
I feel great about the new board. In terms of the structure of OpenAI, one of the board's 00:22:08.780 |
tasks is to look at that and see where we can make it more robust. We wanted to get 00:22:13.980 |
new board members in place first, but we clearly learned a lesson about structure 00:22:19.900 |
throughout this process. I don't have, I think, super deep things to say. It was a crazy, 00:22:25.780 |
very painful experience. I think it was like a perfect storm of weirdness. It was like 00:22:30.220 |
a preview for me of what's going to happen as the stakes get higher and higher, and the 00:22:34.060 |
need that we have robust governance structures and processes and people. I am kind of happy 00:22:40.780 |
it happened when it did, but it was a shockingly painful thing to go through. 00:22:46.620 |
Did it make you be more hesitant in trusting people? 00:22:52.380 |
Yes. I think I'm an extremely trusting person. I've always had a life philosophy of don't 00:22:57.500 |
worry about all of the paranoia, don't worry about the edge cases. You get a little bit 00:23:02.260 |
screwed in exchange for getting to live with your guard down. This was so shocking to me, 00:23:09.380 |
I was so caught off guard, that it has definitely changed, and I really don't like this. It's 00:23:15.500 |
definitely changed how I think about just default trust of people and planning for the 00:23:21.180 |
You got to be careful with that. Are you worried about becoming a little too cynical? 00:23:26.580 |
I'm not worried about becoming too cynical. I think I'm the extreme opposite of a cynical 00:23:30.180 |
person, but I'm worried about just becoming less of a default trusting person. 00:23:35.380 |
I'm actually not sure which mode is best to operate in for a person who's developing AGI. 00:23:43.020 |
Trusting or untrusting. It's an interesting journey you're on. But in terms of structure, 00:23:49.660 |
see, I'm more interested on the human level. How do you surround yourself with humans that 00:23:54.740 |
are building cool shit, but also are making wise decisions? Because the more money you 00:24:01.340 |
start making, the more power the thing has, the weirder people get. 00:24:04.540 |
I think you could make all kinds of comments about the board members and the level of trust 00:24:12.860 |
I should have had there, or how I should have done things differently. But in terms of the 00:24:17.140 |
team here, I think you'd have to give me a very good grade on that one. I have just enormous 00:24:25.740 |
gratitude and trust and respect for the people that I work with every day. I think being 00:24:30.620 |
surrounded with people like that is really important. 00:24:40.020 |
Our mutual friend Elon sued OpenAI. What is the essence of what he's criticizing? To what 00:24:48.780 |
degree does he have a point? To what degree is he wrong? 00:24:52.380 |
I don't know what it's really about. We started off just thinking we were going to be a research 00:24:57.660 |
lab and having no idea about how this technology was going to go. Because it was only seven 00:25:04.540 |
or eight years ago, it's hard to go back and really remember what it was like then. But 00:25:08.500 |
before language models were a big deal, this was before we had any idea about an API or 00:25:12.900 |
selling access to a chatbot. This was before we had any idea we were going to productize 00:25:17.980 |
So we're just going to try to do research, and we don't really know what we're going 00:25:22.060 |
to do with that. I think with many fundamentally new things, you start fumbling through the 00:25:27.260 |
dark and you make some assumptions, most of which turn out to be wrong. Then it became 00:25:33.580 |
clear that we were going to need to do different things and also have huge amounts 00:25:42.460 |
more capital. So we said, "Okay, well, the structure doesn't quite work for that. How 00:25:46.700 |
do we patch the structure?" And then patch it again and patch it again, and you end up 00:25:51.260 |
with something that does look kind of eyebrow-raising, to say the least. But we got here gradually 00:25:57.900 |
with, I think, reasonable decisions at each point along the way. And it doesn't mean 00:26:03.460 |
I wouldn't do it totally differently if we could go back now with an oracle, but you 00:26:06.540 |
don't get the oracle at the time. But anyway, in terms of what Elon's real motivations 00:26:12.100 |
To the degree you remember, what was the response that OpenAI gave in the blog post? 00:26:21.220 |
Oh, Elon said this set of things. Here's the characterization of how this went down. 00:26:35.420 |
We tried to not make it emotional and just say, "Here's the history." 00:26:43.620 |
I do think there's a degree of mischaracterization from Elon here about one of the points you 00:26:54.540 |
just made, which is the degree of uncertainty you had at the time. You guys are a bunch 00:27:00.780 |
of like a small group of researchers crazily talking about AGI when everybody's laughing 00:27:08.420 |
Wasn't that long ago Elon was crazily talking about launching rockets when people were laughing 00:27:14.060 |
at that thought? So I think he'd have more empathy for this. 00:27:20.260 |
I mean, I do think that there's personal stuff here, that there was a split, that OpenAI 00:27:28.380 |
and a lot of amazing people here chose to part ways with Elon. So there's a personal-- 00:27:36.180 |
Can you describe that exactly, the choosing to part ways? 00:27:41.500 |
He thought OpenAI was going to fail. He wanted total control to sort of turn it around. 00:27:46.580 |
We wanted to keep going in the direction that now has become OpenAI. He also wanted Tesla 00:27:50.940 |
to be able to build an AGI effort. At various times he wanted to make OpenAI into a for-profit 00:27:57.580 |
company that he could have control of or have it merge with Tesla. We didn't want to do 00:28:02.340 |
that and he decided to leave, which that's fine. 00:28:06.540 |
So you're saying, and that's one of the things that the blog post says, is that he wanted 00:28:11.380 |
OpenAI to be basically acquired by Tesla in the same way that-- Or maybe something similar 00:28:19.620 |
or maybe something more dramatic than the partnership with Microsoft. 00:28:23.100 |
My memory is the proposal was just like, "Yeah, get acquired by Tesla and have Tesla have 00:28:27.340 |
full control over it." I'm pretty sure that's what it was. 00:28:29.620 |
So what does the word open in OpenAI mean? To Elon at the time, Ilya has talked about this 00:28:37.980 |
in the email exchanges and all this kind of stuff. What does it mean to you at the time? 00:28:42.900 |
I would definitely pick a different-- Speaking of going back with an Oracle, I'd pick a different 00:28:46.540 |
name. One of the things that I think OpenAI is doing that is the most important of everything 00:28:53.260 |
that we're doing is putting powerful technology in the hands of people for free as a public 00:29:01.860 |
good. We don't run ads on our free version. We don't monetize it in other ways. We just 00:29:08.380 |
say as part of our mission, we want to put increasingly powerful tools in the hands of 00:29:15.940 |
I think that kind of open is really important to our mission. I think if you give people 00:29:23.060 |
great tools and teach them to use them or don't even teach them, they'll figure it out 00:29:26.420 |
and let them go build an incredible future for each other with that, that's a big deal. 00:29:31.940 |
So if we can keep putting free or low cost or free and low cost powerful AI tools out 00:29:37.660 |
in the world, I think that's a huge deal for how we fulfill the mission. 00:29:43.860 |
Open source or not, yeah, I think we should open source some stuff and not other stuff. 00:29:49.380 |
It does become this religious battle line where nuance is hard to have, but I think 00:29:55.540 |
So he said, change your name to closed AI and I'll drop the lawsuit. I mean, is it going 00:30:00.340 |
to become this battleground in the land of memes about the name? 00:30:06.180 |
I think that speaks to the seriousness with which Elon means the lawsuit. I mean, that's 00:30:21.580 |
I don't think the lawsuit, maybe correct me if I'm wrong, but I don't think the lawsuit 00:30:26.620 |
is legally serious. It's more to make a point about the future of AGI and the company that's 00:30:36.180 |
So look, I mean, Grok had not open sourced anything until people pointed out it was a 00:30:41.700 |
little bit hypocritical and then he announced that Grok will open source things this week. 00:30:45.740 |
I don't think open source versus not is what this is really about for him. 00:30:48.860 |
Well, we'll talk about open source and not. I do think maybe criticizing the competition 00:30:53.060 |
is great, just talking a little shit, that's great, but friendly competition versus like, 00:31:00.700 |
Look, I think this whole thing is like unbecoming of the builder and I respect Elon as one of 00:31:05.700 |
the great builders of our time. I know he knows what it's like to have haters attack 00:31:15.220 |
him and it makes me extra sad he's doing it to us. 00:31:17.700 |
Yeah, he's one of the greatest builders of all time, potentially the greatest builder 00:31:22.660 |
It makes me sad. I think it makes a lot of people sad. There's a lot of people who've 00:31:26.220 |
really looked up to him for a long time and I said in some interview or something that 00:31:31.140 |
I missed the old Elon and the number of messages I got being like, that exactly encapsulates 00:31:36.780 |
I think he should just win. He should just make Grok beat GPT and then GPT beats Grok 00:31:45.100 |
and it's just a competition and it's beautiful for everybody. But on the question of open 00:31:50.620 |
source, do you think there's a lot of companies playing with this idea? It's quite interesting. 00:31:55.540 |
I would say Meta, surprisingly, has led the way on this or at least took the first step 00:32:03.860 |
in the game of chess of really open sourcing the model. Of course, it's not the state of 00:32:09.740 |
the art model, but open sourcing Llama, and Google is flirting with the idea of open sourcing 00:32:16.900 |
a smaller version. What are the pros and cons of open sourcing? Have you played around with 00:32:23.180 |
I think there is definitely a place for open source models, particularly smaller models 00:32:27.300 |
that people can run locally. I think there's huge demand for. I think there will be some 00:32:32.900 |
open source models, there will be some closed source models. It won't be unlike other ecosystems 00:32:38.780 |
I listened to an All-In podcast talking about this lawsuit and all that kind of stuff and 00:32:44.180 |
they were more concerned about the precedent of going from non-profit to this cap for profit. 00:32:52.140 |
What precedent does that set for other startups? 00:32:56.620 |
I would heavily discourage any startup that was thinking about starting as a non-profit 00:33:01.060 |
and adding a for-profit arm later. I'd heavily discourage them from doing that. I don't think 00:33:08.580 |
For sure. Again, if we knew what was going to happen, we would have done that too. 00:33:12.740 |
In theory, if you dance beautifully here, there's some tax incentives or whatever. 00:33:19.500 |
I don't think that's how most people think about these things. 00:33:21.860 |
So it's not possible to save a lot of money for a startup if you do it this way? 00:33:26.540 |
No, I think there's laws that would make that pretty difficult. 00:33:30.700 |
Where do you hope this goes with Elon? This tension, this dance, where do you hope this? 00:33:37.540 |
If we go one, two, three years from now, your relationship with him on a personal level 00:33:44.140 |
too, like friendship, friendly competition, just all this kind of stuff. 00:33:50.860 |
Yeah, I really respect Elon. And I hope that years in the future, we have an amicable 00:34:04.500 |
Yeah, I hope you guys have an amicable relationship this month. And just compete and win. And 00:34:13.340 |
explore these ideas together. I do suppose there's competition for talent or whatever, 00:34:20.300 |
but it should be friendly competition. Just build cool shit. And Elon is pretty good at 00:34:32.980 |
So speaking of cool shit, Sora, there's like a million questions I could ask. First of 00:34:40.300 |
all, it's amazing. It truly is amazing. On a product level, but also just on a philosophical 00:34:45.700 |
level. So let me just, technical/philosophical, ask, what do you think it understands about 00:34:52.780 |
the world more or less than GPT-4, for example? The world model, when you train on these patches 00:35:04.100 |
I think all of these models understand something more about the world model than most of us 00:35:10.820 |
give them credit for. And because there are also very clear things they just don't understand 00:35:16.580 |
or don't get right, it's easy to look at the weaknesses, see through the veil and say, 00:35:21.460 |
"Oh, this is all fake." But it's not all fake, it's just some of it works and some of it 00:35:26.340 |
doesn't work. I remember when I started first watching Sora videos, and I would see a person 00:35:31.860 |
walk in front of something for a few seconds and occlude it, and then walk away and the 00:35:35.540 |
same thing was still there. I was like, "Oh, this is pretty good." Or there's examples 00:35:39.820 |
where the underlying physics looks so well-represented over a lot of steps in a sequence. It's like, 00:35:49.300 |
But fundamentally, these models are just getting better, and that will keep happening. If you 00:35:54.740 |
look at the trajectory from DALL-E 1 to 2 to 3 to Sora, there were a lot of people that 00:36:00.300 |
dunked on each version, saying, "It can't do this, it can't do that," and look at it 00:36:05.540 |
Well, the thing you just mentioned with occlusions is basically modeling the physics, 00:36:12.340 |
the three-dimensional physics of the world, sufficiently well to capture those kinds of 00:36:18.500 |
Or maybe you can tell me, in order to deal with occlusions, what does the world model 00:36:24.820 |
Yeah, so what I would say is it's doing something to deal with occlusions really well. Whether 00:36:28.060 |
I'd represent that it has a great underlying 3D model of the world, that's a little bit more 00:36:33.700 |
But can you get there through just these kinds of two-dimensional training data approaches? 00:36:39.060 |
It looks like this approach is going to go surprisingly far. I don't want to speculate 00:36:42.660 |
too much about what limits it will surmount and which it won't, but... 00:36:46.260 |
What are some interesting limitations of the system that you've seen? I mean, there's been 00:36:52.060 |
There's all kinds of fun. I mean, like, you know, cats sprouting an extra limb at random 00:36:56.460 |
points in a video. Pick what you want, but there's still a lot of problems, a lot of 00:37:02.780 |
Do you think that's a fundamental flaw of the approach? Or is it just, you know, bigger 00:37:09.220 |
model or better, like, technical details or better data, more data, is going to solve 00:37:18.340 |
I would say yes to both. Like, I think there is something about the approach which just 00:37:22.460 |
seems to feel different from how we think and learn and whatever. And then also, I think 00:37:30.700 |
Like I mentioned, LLMs have tokens, text tokens, and Sora has visual patches. So it converts 00:37:35.860 |
all visual data, diverse kinds of visual data, videos, and images into patches. Is the training 00:37:41.780 |
to the degree you can say fully self-supervised? Or is there some manual labeling going on? 00:37:46.220 |
Like, what's the involvement of humans in all this? 00:37:49.780 |
I mean, without saying anything specific about the Sora approach, we use lots of human data 00:38:00.900 |
But not internet-scale data. So lots of humans. Lots is a complicated word, Sam. 00:38:11.740 |
It doesn't, because to me, lots, like, listen, I'm an introvert, and when I hang out with 00:38:15.860 |
like three people, that's a lot of people. And four people, that's a lot. But I suppose 00:38:21.900 |
More than three people work on labeling the data for these models, yeah. 00:38:26.060 |
But fundamentally, there's a lot of self-supervised learning. Because what you mentioned in the 00:38:32.300 |
technical report is internet-scale data. That's another beautiful, it's like poetry. So it's 00:38:39.100 |
a lot of data that's not human-labeled, it's like, it's self-supervised in that way. And 00:38:45.140 |
then the question is, how much data is there on the internet that could be used in this, 00:38:50.780 |
that is conducive to this kind of self-supervised way? If only we knew the details of the self-supervised. 00:38:57.860 |
Do you, have you considered opening it up a little more? Details? 00:39:04.620 |
Sora specifically, because it's so interesting that, like, can this, can the same magic of 00:39:12.180 |
LLMs now start moving towards visual data? And what does that take to do that? 00:39:17.700 |
I mean, it looks to me like yes. But we have more work to do. 00:39:22.220 |
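As a rough illustration of the visual-patch idea discussed above, here is a minimal sketch of how a video clip can be cut into spacetime patches that play the role text tokens play for an LLM. The 2x16x16 patch size, the NumPy implementation, and the toy clip are illustrative assumptions, not a description of Sora's actual pipeline.

# Minimal sketch (illustrative assumptions, not Sora's real code): turn a video
# of shape (T, H, W, C) into a sequence of flattened spacetime patches.
import numpy as np

def video_to_patches(video, pt=2, ph=16, pw=16):
    """Cut a (T, H, W, C) video into non-overlapping (pt, ph, pw) spacetime
    patches and flatten each patch into one vector, i.e. one visual "token"."""
    T, H, W, C = video.shape
    assert T % pt == 0 and H % ph == 0 and W % pw == 0, "dims must divide evenly"
    x = video.reshape(T // pt, pt, H // ph, ph, W // pw, pw, C)
    x = x.transpose(0, 2, 4, 1, 3, 5, 6)      # group each patch's pixels together
    return x.reshape(-1, pt * ph * pw * C)    # (num_patches, patch_dim)

# Toy example: a 16-frame 128x128 RGB clip becomes 512 patch tokens of size 1536.
clip = np.random.rand(16, 128, 128, 3).astype(np.float32)
print(video_to_patches(clip).shape)  # (512, 1536)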
Sure. What are the dangers? Why are you concerned about releasing the system? What are some 00:39:29.460 |
I mean, frankly speaking, one thing we have to do before releasing the system is just 00:39:33.540 |
like, get it to work at a level of efficiency that will deliver the scale people are gonna 00:39:40.300 |
want from this. So I don't want to like downplay that. And there's still a ton, ton of work 00:39:45.980 |
to do there. But, you know, you can imagine like issues with deep fakes, misinformation. 00:39:55.980 |
Like we try to be a thoughtful company about what we put out into the world. And it doesn't 00:40:01.180 |
take much thought to think about the ways this can go badly. 00:40:05.300 |
There's a lot of tough questions here. You're dealing in a very tough space. Do you think 00:40:10.060 |
training AI should be or is fair use under copyright law? 00:40:14.820 |
I think the question behind that question is, do people who create valuable data deserve 00:40:19.220 |
to have some way that they get compensated for use of it? And that I think the answer 00:40:23.740 |
is yes. I don't know yet what the answer is. People have proposed a lot of different things. 00:40:29.700 |
We've tried some different models. But, you know, if I'm like an artist, for example, 00:40:35.780 |
A, I would like to be able to opt out of people generating art in my style. And B, if they 00:40:41.940 |
do generate art in my style, I'd like to have some economic model associated with that. 00:40:46.180 |
Yeah, it's that transition from CDs to Napster to Spotify. We have to figure out some kind 00:40:53.500 |
The model changes, but people have got to get paid. 00:40:55.060 |
Well, there should be some kind of incentive, if we zoom out even more, for humans to keep 00:41:02.580 |
Something I worry about, humans are going to do cool shit, and society is going to find 00:41:05.820 |
some way to reward it. That seems pretty hardwired. We want to create. We want to be useful. We 00:41:12.420 |
want to achieve status in whatever way. That's not going anywhere, I don't think. 00:41:17.220 |
But the reward might not be monetary, financial. It might be like fame and celebration of other 00:41:25.660 |
Maybe financial in some other way. Again, I don't think we've seen the last evolution 00:41:31.540 |
Yeah, but artists and creators are worried. When they see Sora, they're like, "Holy shit." 00:41:36.700 |
Sure. Artists were also super worried when photography came out. And then photography 00:41:41.500 |
became a new art form, and people made a lot of money taking pictures. I think things like 00:41:46.500 |
that will keep happening. People will use the new tools in new ways. 00:41:50.320 |
If we just look on YouTube or something like this, how much of that will be using Sora-like 00:41:56.040 |
AI-generated content, do you think, in the next five years? 00:42:01.780 |
People talk about how many jobs AI is going to do in five years. And the framework that 00:42:06.260 |
people have is what percentage of current jobs are just going to be totally replaced 00:42:10.300 |
by some AI doing the job. The way I think about it is not what percent of jobs AI will 00:42:16.100 |
do, but what percent of tasks will AI do, and over what time horizon. 00:42:19.760 |
So if you think of all of the five-second tasks in the economy, the five-minute tasks, 00:42:24.340 |
the five-hour tasks, maybe even the five-day tasks, how many of those can AI do? I think 00:42:30.820 |
that's a way more interesting, impactful, important question than how many jobs AI can 00:42:37.180 |
do. Because it is a tool that will work at increasing levels of sophistication and over 00:42:43.020 |
longer and longer time horizons for more and more tasks, and let people operate at a higher 00:42:48.280 |
level of abstraction. So maybe people are way more efficient at the job they do, and 00:42:53.100 |
at some point, that's not just a quantitative change, but it's a qualitative one, too, about 00:42:57.740 |
the kinds of problems you can keep in your head. 00:43:00.380 |
I think that for videos on YouTube, it'll be the same. Many videos, maybe most of them, 00:43:06.300 |
will use AI tools in the production, but they'll still be fundamentally driven by a person 00:43:12.100 |
thinking about it, putting it together, doing parts of it, sort of directing and running 00:43:18.540 |
That's interesting. I mean, it's scary, but it's interesting to think about. I tend to 00:43:22.540 |
believe that humans like to watch other humans, or other human-like things. 00:43:29.500 |
Yeah. If there's a cooler thing that's better than a human, humans care about that for, 00:43:36.740 |
like, two days, and then they go back to humans. 00:43:41.900 |
It's the whole chess thing. Yeah, but no, let's, everybody keep playing chess. And let's 00:43:47.300 |
ignore the elephant in the room that humans are really bad at chess, relative to AI systems. 00:43:52.140 |
We still run races, and cars are much faster. I mean, there's like a lot of examples. 00:43:56.140 |
Yeah. And maybe it'll just be tooling, like, in the Adobe suite type of way, where it can 00:44:02.660 |
just make videos much easier, all that kind of stuff. Listen, I hate being in front of 00:44:08.940 |
the camera. If I can figure out a way to not be in front of the camera, I would love it. 00:44:12.900 |
Unfortunately, it'll take a while. Like that, generating faces, it's getting there. But 00:44:18.580 |
generating faces in video format is tricky, when it's specific people versus generic people. 00:44:23.620 |
Let me ask you about GPT-4. There's so many questions. First of all, also amazing. Looking 00:44:34.020 |
back, it'll probably be this kind of historic pivotal moment with 3.5 and 4, which had 00:44:40.300 |
Maybe 3.5 will be the pivotal moment, I don't know. Hard to say that looking forward. 00:44:44.620 |
We never know. That's the annoying thing about the future. It's hard to predict. But for 00:44:48.460 |
me, looking back, GPT-4, ChatGPT is pretty damn impressive. Like, historically impressive. 00:44:54.940 |
So allow me to ask, what's been the most impressive capabilities of GPT-4 to you, and GPT-4 Turbo? 00:45:08.020 |
Typical human also. Gotten used to an awesome thing. 00:45:11.340 |
No, I think it is an amazing thing. But relative to where we need to get to, and where I believe 00:45:19.500 |
we will get to, at the time of GPT-3, people were like, "Oh, this is amazing. This is this 00:45:27.860 |
marvel of technology." And it is. It was. But now we have GPT-4, and look at GPT-3, that's 00:45:36.580 |
unimaginably horrible. I expect that the delta between 5 and 4 will be the same as between 00:45:43.260 |
4 and 3. And I think it is our job to live a few years in the future and remember that 00:45:49.660 |
the tools we have now are going to kind of suck looking backwards at them. And we make 00:45:59.700 |
What are the most glorious ways that GPT-4 sucks? 00:46:06.380 |
What are the best things it can do, and the limits of those best things that allow you 00:46:11.460 |
to say it sucks, therefore gives you inspiration and hope for the future? 00:46:16.340 |
You know, one thing I've been using it for more recently is sort of like a brainstorming 00:46:25.940 |
There's a glimmer of something amazing in there. I don't think it gets – when people 00:46:32.060 |
talk about what it does, they're like, "Oh, it helps me code more productively. It helps 00:46:36.420 |
me write faster and better. It helps me translate from this language to another." All these 00:46:42.420 |
amazing things. But there's something about the kind of creative brainstorming partner 00:46:51.620 |
– I need to come up with a name for this thing. I need to think about this problem 00:46:55.120 |
in a different way. I'm not sure what to do here – that I think gives a glimpse of something 00:47:03.420 |
One of the other things that you can see a very small glimpse of is when it can help 00:47:10.460 |
on longer-horizon tasks. You know, break down something into multiple steps, maybe execute 00:47:15.660 |
some of those steps, search the internet, write code, whatever, put that together. When 00:47:20.660 |
that works, which is not very often, it's very magical. 00:47:24.500 |
The iterative back-and-forth with a human. It works a lot for me. What do you mean? 00:47:29.940 |
Iterative back-and-forth with a human, it can get right more often. When it can go do a 10-step 00:47:32.580 |
problem on its own. It doesn't work for that too often. Sometimes. 00:47:37.140 |
But multiple layers of abstraction, or do you mean just sequential? 00:47:40.580 |
Both. To break it down and then do things at different layers of abstraction and put 00:47:45.500 |
them together. Look, I don't want to downplay the accomplishment of GPT-4, but I don't 00:47:53.740 |
want to overstate it either. I think this point that we are on an exponential curve, 00:47:57.860 |
we will look back relatively soon at GPT-4, like we look back at GPT-3 now. 00:48:03.980 |
That said, I mean, ChatGPT was the transition to where people started to believe it. There 00:48:13.060 |
is an uptick of believing. Not internally at OpenAI, perhaps. There's believers here. 00:48:19.380 |
In that sense, I do think it'll be a moment where a lot of the world went from not believing 00:48:23.140 |
to believing. That was more about the ChatGPT interface, and by the interface and product, 00:48:30.620 |
I also mean the post-training of the model, and how we tune it to be helpful to you, and 00:48:35.420 |
how to use it, than the underlying model itself. 00:48:38.380 |
How much of those two, each of those things are important? The underlying model and the 00:48:45.760 |
RLHF, or something of that nature that tunes it to be more compelling to the human, more 00:48:55.380 |
I mean, they're both super important, but the RLHF, the post-training step, the little 00:49:01.180 |
wrapper of things that, from a compute perspective, little wrapper of things that we do on top 00:49:06.380 |
of the base model, even though it's a huge amount of work, that's really important, to 00:49:09.700 |
say nothing of the product that we build around it. 00:49:16.100 |
In some sense, we did have to do two things. We had to invent the underlying technology, 00:49:22.580 |
and then we had to figure out how to make it into a product people would love, which 00:49:30.860 |
is not just about the actual product work itself, but this whole other step of how you 00:49:37.180 |
How you make the scale work, where a lot of people can use it at the same time, all that 00:49:43.380 |
And that. That was a known difficult thing. We knew we were going to have to scale it 00:49:48.940 |
up. We had to go do two things that had never been done before, that were both, I would 00:49:53.940 |
say, quite significant achievements, and then a lot of things like scaling it up that other 00:50:01.540 |
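For a flavor of what "RLHF, the post-training step" mentioned above refers to, here is a toy sketch in which a tiny categorical policy is nudged by a policy-gradient (REINFORCE) update toward responses that a stand-in reward prefers. The responses, rewards, and hyperparameters are invented for illustration; this is not OpenAI's actual post-training recipe.

# Toy sketch of preference-based post-training (illustrative only).
import numpy as np

rng = np.random.default_rng(0)
responses = ["helpful answer", "curt answer", "off-topic answer"]
reward = np.array([1.0, 0.2, -1.0])   # stand-in for human-preference scores
theta = np.zeros(len(responses))      # "base model" starts indifferent

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr, baseline = 0.5, 0.0
for _ in range(200):
    p = softmax(theta)
    a = rng.choice(len(responses), p=p)          # sample a response
    advantage = reward[a] - baseline             # better or worse than average?
    grad_logp = -p
    grad_logp[a] += 1.0                          # d log p(a) / d theta
    theta += lr * advantage * grad_logp          # policy-gradient update
    baseline = 0.9 * baseline + 0.1 * reward[a]  # running reward baseline

print(dict(zip(responses, softmax(theta).round(3))))  # mass shifts to "helpful answer"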
How does the context window of going from 8K to 128K tokens compare from GPT-4 to GPT-4 Turbo? 00:50:13.500 |
Most people don't need all the way to 128K most of the time. Although, if we dream into 00:50:18.700 |
the distant future, way distant future, we'll have context length of several 00:50:23.820 |
billion. You will feed in all of your information, all of your history over time, and it'll just 00:50:28.860 |
get to know you better and better, and that'll be great. 00:50:32.220 |
For now, the way people use these models, they're not doing that. People sometimes post 00:50:38.140 |
in a paper or a significant fraction of a code repository or whatever. But most usage 00:50:46.620 |
of the models is not using the long context most of the time. 00:50:49.420 |
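To make the 8K-versus-128K numbers concrete, here is a small sketch that counts how many tokens a document consumes and checks whether it fits in a given context window with room left for a reply. It uses OpenAI's open-source tiktoken tokenizer with the GPT-4-era cl100k_base encoding; the 8,192 and 128,000 limits are the commonly cited context sizes for GPT-4 and GPT-4 Turbo, and reserve_for_reply is an arbitrary illustrative margin.

# Sketch: does this text fit in an 8K or a 128K context window?
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")

def fits_in_context(text, context_limit, reserve_for_reply=1024):
    """True if the prompt plus room for a reply stays within the token limit."""
    return len(enc.encode(text)) + reserve_for_reply <= context_limit

paper = "All work and no play makes Jack a dull boy. " * 4000  # stand-in long document
print(len(enc.encode(paper)))              # rough token count of the document
print(fits_in_context(paper, 8_192))       # GPT-4-class window
print(fits_in_context(paper, 128_000))     # GPT-4 Turbo-class window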
I like that this is your "I have a dream" speech. One day, you'll be judged by the full 00:50:55.900 |
context of your character, or of your whole lifetime. That's interesting. That's part 00:51:01.740 |
of the expansion that you're hoping for, is a greater and greater context. 00:51:06.620 |
I saw this internet clip once. I'm going to get the numbers wrong, but it was Bill Gates 00:51:09.980 |
talking about the amount of memory on some early computer. Maybe 64K, maybe 640K, something 00:51:15.820 |
like that. Most of it was used for the screen buffer. He just couldn't seem to genuinely 00:51:22.620 |
imagine that the world would eventually need gigabytes of memory in a computer, or terabytes 00:51:27.740 |
of memory in a computer. You always do just need to follow the exponential of technology. 00:51:37.100 |
We will find out how to use better technology. I can't really imagine what it's like right now 00:51:43.420 |
for context lengths to go out to the billions someday. They might not literally go there, 00:51:47.500 |
but effectively, it'll feel like that. But I know we'll use it and really not want to 00:51:54.460 |
go back once we have it. Yeah. Even saying billions 10 years from now might seem dumb, 00:51:59.980 |
because it'll be like trillions upon trillions. There'll be some kind of breakthrough 00:52:05.900 |
that will effectively feel like infinite context. But even 128K, I have to be honest, 00:52:12.140 |
I haven't pushed it to that degree. Maybe putting in entire books, or parts of books and so on. 00:52:17.580 |
Papers. What are some interesting use cases of GPT-4 that you've seen? 00:52:23.420 |
The thing that I find most interesting is not any particular use case that we can talk about those, 00:52:27.580 |
but it's people who... This is mostly younger people, but people who use it as their default 00:52:35.420 |
start for any kind of knowledge work task. And it's the fact that it can do a lot of things 00:52:40.780 |
reasonably well. You can use GPT-4V, you can use it to help you write code, you can use it to help 00:52:44.860 |
you do search, you can use it to edit a paper. The most interesting thing to me is the people 00:52:49.420 |
who just use it as the start of their workflow. I do as well for many things. I use it as a 00:52:55.100 |
reading partner for reading books. It helps me think through ideas, especially when the books 00:53:02.140 |
are classics, so it's really well written about. I find it often to be significantly better than 00:53:10.140 |
even Wikipedia on well-covered topics. It's somehow more balanced and more nuanced. Or maybe 00:53:16.620 |
it's me, but it inspires me to think deeper than a Wikipedia article does. I'm not exactly sure what 00:53:21.820 |
that is. You mentioned this collaboration. I'm not sure where the magic is, if it's in here, 00:53:26.380 |
or if it's in there, or if it's somewhere in between. I'm not sure. But one of the things 00:53:31.900 |
that concerns me for knowledge tasks when I start with GPT is I'll usually have to do fact-checking 00:53:37.820 |
after, like check that it didn't come up with fake stuff. How do you figure that out, that GPT can 00:53:48.140 |
come up with fake stuff that sounds really convincing? How do you ground it in truth? 00:53:54.620 |
That's obviously an area of intense interest for us. I think it's going to get a lot better 00:54:01.900 |
with upcoming versions, but we'll have to work on it, and we're not going to have it all solved 00:54:06.060 |
this year. Well, the scary thing is as it gets better, you'll start not doing the fact-checking 00:54:12.300 |
more and more, right? I'm of two minds about that. I think people are much more sophisticated 00:54:17.660 |
users of technology than we often give them credit for, and people seem to really understand 00:54:22.540 |
that GPT, any of these models, hallucinate some of the time, and if it's mission-critical, 00:54:26.620 |
you've got to check it. Except journalists don't seem to understand that. I've seen journalists 00:54:30.780 |
half-assedly just using GPT for... Of the long list of things I'd like to dunk on journalists 00:54:36.940 |
for, this is not my top criticism of them. Well, I think the bigger criticism is perhaps 00:54:43.500 |
the pressures and the incentives of being a journalist is that you have to work really 00:54:47.340 |
quickly, and this is a shortcut. I would love our society to incentivize like... I would too. 00:54:55.020 |
Long journalistic efforts that take days and weeks, and reward great, 00:55:00.780 |
in-depth journalism. Also journalism that presents stuff in a balanced way where it's like, 00:55:05.820 |
celebrates people while criticizing them, even though the criticism is the thing that gets 00:55:10.540 |
clicks, and making shit up also gets clicks, and headlines that mischaracterize completely. 00:55:16.140 |
I'm sure you have a lot of people dunking on... Well, all that drama probably got a lot of clicks. 00:55:21.260 |
Probably did. And that's a bigger problem about human civilization. I'd love to see it solved. 00:55:29.980 |
This is where we celebrate a bit more. You've given ChatGPT the ability to have memories, 00:55:34.620 |
you've been playing with that, about previous conversations. And also the ability to turn off 00:55:40.300 |
memory, which I wish I could do that sometimes, just turn on and off, depending. I guess sometimes 00:55:45.980 |
alcohol can do that, but not optimally, I suppose. What have you seen through that, 00:55:52.700 |
playing around with that idea of remembering conversations or not? 00:55:56.380 |
We're very early in our explorations here, but I think what people want, or at least what I want 00:56:01.340 |
for myself, is a model that gets to know me and gets more useful to me over time. 00:56:08.220 |
This is an early exploration. I think there's a lot of other things to do, 00:56:14.940 |
but that's where we'd like to head. You'd like to use a model and over the course of your life, 00:56:19.340 |
or use a system, there'll be many models, and over the course of your life, it gets better and better. 00:56:25.180 |
Yeah, how hard is that problem? Because right now it's more like remembering 00:56:28.780 |
little factoids and preferences and so on. What about remembering, don't you want GPT to remember 00:56:35.900 |
all the shit you went through in November and all the drama? 00:56:40.940 |
Because right now you're clearly blocking it out a little bit. 00:56:43.340 |
It's not just that I want it to remember that, I want it to integrate the lessons of that and 00:56:49.420 |
remind me in the future what to do differently or what to watch out for. 00:56:57.420 |
We all gain from experience over the course of our lives, 00:57:03.260 |
varying degrees, and I'd like my AI agent to gain with that experience too. 00:57:10.220 |
So if we go back and let ourselves imagine that trillions and trillions of context length, 00:57:15.900 |
if I can put every conversation I've ever had with anybody in my life in there, if I can have 00:57:21.740 |
all of my emails, all of my input and output in the context window every time I ask a question, 00:57:28.300 |
Yeah, I think that would be very cool. People sometimes will hear that and be concerned about 00:57:33.820 |
privacy. What do you think about that aspect of it? The more effective the AI becomes at really 00:57:41.740 |
integrating all the experiences and all the data that happened to you and giving you advice? 00:57:47.500 |
I think the right answer there is just user choice. Anything I want stricken from the record 00:57:52.140 |
for my AI agent, I want to be able to take out. If I don't want it to remember anything, 00:57:55.020 |
I want that too. You and I may have different opinions about where on that privacy utility 00:58:03.020 |
trade-off for our own AI we want to be, which is totally fine. But I think the answer is just 00:58:06.780 |
like really easy user choice. But there should be some high level of transparency from a company 00:58:12.940 |
about the user choice. Because sometimes companies in the past have been kind of shady about like, 00:58:18.540 |
"Eh, it's kind of presumed that we're collecting all your data and we're using it for a good 00:58:24.780 |
reason, for advertisement and so on." But there's not a transparency about the details of that. 00:58:31.100 |
That's totally true. You mentioned earlier that I'm like blocking out the November stuff. 00:58:35.900 |
Well, I mean, I think it was a very traumatic thing and it did immobilize me 00:58:42.940 |
for a long period of time. Definitely the hardest work that I've had to do was just keep working 00:58:50.940 |
that period. Because I had to try to come back in here and put the pieces together while I was just 00:58:57.260 |
in sort of shock and pain. Nobody really cares about that. I mean, the team gave me a pass and 00:59:02.540 |
I was not working at my normal level. But there was a period where I was just like, 00:59:05.660 |
it was really hard to have to do both. But I kind of woke up one morning and I was like, 00:59:11.500 |
"This was a horrible thing to happen to me. I think I could just feel like a victim forever." 00:59:15.020 |
Or I can say, "This is like the most important work I'll ever touch in my life and I need to 00:59:19.820 |
get back to it." And it doesn't mean that I've repressed it because sometimes I wake in the 00:59:26.380 |
middle of the night thinking about it, but I do feel like an obligation to keep moving forward. 00:59:31.100 |
Well, that's beautifully said, but there could be some lingering stuff in there. Like, 00:59:36.220 |
what I would be concerned about is that trust thing that you mentioned, 00:59:42.060 |
that being paranoid about people as opposed to just trusting everybody or most people, 00:59:51.500 |
I mean, because I've seen in my part-time explorations, I've been diving deeply into 01:00:00.540 |
the Zelensky administration, the Putin administration, and the dynamics there 01:00:06.140 |
in wartime in a very highly stressful environment. And what happens is distrust grows, 01:00:11.820 |
you isolate yourself, and you start to not see the world clearly. And that's a concern. 01:00:19.340 |
That's a human concern. You seem to have taken it in stride and kind of learned the good lessons 01:00:24.060 |
and felt the love and let the love energize you, which is great, but still can linger in there. 01:00:29.500 |
There's just some questions I would love to ask your intuition about what's GPT able to do and 01:00:36.620 |
not. So it's allocating approximately the same amount of compute for each token it generates. 01:00:44.460 |
Is there room there in this kind of approach to slower thinking, sequential thinking? 01:00:51.500 |
I think there will be a new paradigm for that kind of thinking. 01:00:54.860 |
Will it be similar like architecturally as what we're seeing now with LLMs? 01:01:01.980 |
I can imagine many ways to implement that. I think that's less important 01:01:08.220 |
than the question you were getting at, which is, do we need a way to do 01:01:12.380 |
a slower kind of thinking where the answer doesn't have to get like, 01:01:16.540 |
you know, it's like, I guess like spiritually, you could say that you want an AI to be able to think 01:01:24.620 |
harder about a harder problem and answer more quickly about an easier problem. And I think 01:01:28.860 |
that will be important. Is that like a human thought that we're just having? You should be 01:01:32.220 |
able to think hard. Is that the wrong intuition? I suspect that's a reasonable intuition. 01:01:36.940 |
Interesting. So it's not possible, once GPT gets to like GPT-7, that we'll just instantaneously 01:01:42.860 |
be able to see, you know, here's the proof of Fermat's Last Theorem. 01:01:47.580 |
It seems to me like you want to be able to allocate more compute to harder problems. 01:02:02.540 |
if you ask a system like that to prove Fermat's Last Theorem versus what's today's date, 01:02:09.580 |
unless it already knew and had memorized the answer to the 01:02:14.140 |
proof, assuming it's got to go figure that out, seems like that will take more compute. 01:02:19.500 |
But can it look like a basically LLM talking to itself, that kind of thing? 01:02:24.140 |
Maybe. I mean, there's a lot of things that you could imagine working. What, like, what the right 01:02:31.900 |
or the best way to do that will be? We don't know. 01:02:35.500 |
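A minimal sketch of the idea being discussed here, an LLM "talking to itself" and spending more passes on harder questions so that more compute goes to harder problems. This is purely illustrative and not OpenAI's method; the generate() and estimate_difficulty() functions are hypothetical stand-ins for a real model call and a real difficulty estimate.

def generate(prompt: str) -> str:
    # Hypothetical stand-in for a single model call; a real system would
    # invoke a language model here.
    return f"[model output for: {prompt}]"

def estimate_difficulty(question: str) -> int:
    # Crude, assumed heuristic: short factual questions get one pass,
    # anything longer gets several deliberation rounds.
    return 1 if len(question) < 40 else 4

def answer_with_deliberation(question: str) -> str:
    draft = generate(question)
    for _ in range(estimate_difficulty(question)):
        # The model critiques and revises its own draft, so harder
        # questions consume more compute before an answer is returned.
        critique = generate(f"Critique this answer: {draft}")
        draft = generate(f"Revise the answer given this critique: {critique}")
    return draft

print(answer_with_deliberation("What's today's date?"))
print(answer_with_deliberation("Outline a proof strategy for Fermat's Last Theorem."))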
This does make me think of the mysterious, the lore behind Q*. What's this mysterious 01:02:44.780 |
Q* project? Is it also in the same nuclear facility? 01:02:51.340 |
That's what a person with a nuclear facility always says. 01:02:54.780 |
I would love to have a secret nuclear facility. There isn't one. 01:03:05.340 |
OpenAI is not a good company at keeping secrets. It would be nice, you know, 01:03:08.380 |
we've been plagued by a lot of leaks, and it would be nice if we were able to have 01:03:17.020 |
See, but an answer like that means there's something to talk about. 01:03:28.700 |
We have said for a while that we think better reasoning in these systems is an important direction to pursue. 01:03:45.900 |
Is there going to be moments, Q* or otherwise, where 01:03:52.300 |
there's going to be leaps similar to ChatGPT, where you're like… 01:04:08.460 |
This is kind of a theme that you're saying is there's a gradual… 01:04:11.500 |
You're basically gradually going up an exponential slope. 01:04:13.980 |
But from an outsider perspective, for me just watching it, it does feel like there's leaps. 01:04:24.940 |
Part of the reason that we deploy the way we do is that we think… 01:04:30.140 |
Rather than go build in secret until we got all the way to GPT-5, we decided to talk about GPT-1, 01:04:38.460 |
Part of the reason there is I think AI and surprise don't go together. 01:04:43.020 |
Also, the world, people, institutions, whatever you want to call it, need time to adapt and think 01:04:50.780 |
I think one of the best things that OpenAI has done is this strategy: we get the world to pay 01:04:56.860 |
attention to the progress, to take AGI seriously, to think about what systems and structures and 01:05:04.380 |
governance we want in place before we're under the gun and have to make a rush decision. 01:05:09.180 |
But the fact that people like you and others say you still feel like there are these leaps 01:05:16.780 |
makes me think that maybe we should be doing our releasing even more iteratively. 01:05:23.820 |
But our goal is not to have shock updates to the world, the opposite. 01:05:38.860 |
Maybe we should think about releasing GPT-5 in a different way or something like that. 01:05:50.380 |
I don't know if you know humans, but they kind of have these milestones. 01:06:05.260 |
It's fun to say, declare victory on this one and go start the next thing. 01:06:09.020 |
But yeah, I feel like we're somehow getting this a little bit wrong. 01:06:23.500 |
I also, we will release an amazing model this year. 01:06:35.020 |
So that goes to the question of what's the way we release this thing? 01:06:41.740 |
We'll release over in the coming months, many different things. 01:06:49.340 |
I think before we talk about like a GPT-5 like model called that or not called that 01:06:54.940 |
or a little bit worse or a little bit better than what you'd expect from a GPT-5. 01:06:58.380 |
I know we have a lot of other important things to release first. 01:07:08.540 |
What are some of the biggest challenges and bottlenecks to overcome 01:07:11.420 |
for whatever it ends up being called, but let's call it GPT-5? 01:07:15.740 |
Just interesting to ask: is it on the compute side? 01:07:15.740 |
Is it always all of these? You know, what's the one big unlock? 01:07:21.020 |
Like the thing that OpenAI I think does really well. 01:07:35.900 |
This is actually an original Ilya quote that I'm going to butcher, but it's something like 01:07:39.660 |
we multiply 200 medium-sized things together into one giant thing. 01:07:46.620 |
So there's this distributed constant innovation happening. 01:07:54.380 |
So like even like detailed approaches, like you do detailed aspects of every... 01:07:58.300 |
How does that work with different disparate teams and so on? 01:08:02.460 |
Like how do they, how do the medium-sized things become one whole giant transformer? 01:08:07.980 |
There's a few people who have to like think about putting the whole thing together, 01:08:11.260 |
but a lot of people try to keep most of the picture in their head. 01:08:14.140 |
Oh, like the individual teams, individual contributors try to... 01:08:17.980 |
You don't know exactly how every piece works, of course, but one thing I generally believe 01:08:23.820 |
is that it's sometimes useful to zoom out and look at the entire map. 01:08:28.540 |
And I think this is true for like a technical problem. 01:08:33.500 |
I think this is true for like innovating in business. 01:08:36.140 |
But things come together in surprising ways and having an understanding of that whole picture, 01:08:43.100 |
even if most of the time you're operating in the weeds in one area, 01:08:51.260 |
In fact, one of the things that I used to have, and I think was super valuable, 01:08:55.980 |
was I used to have like a good map of that, all of the frontier, 01:09:01.420 |
or most of the frontiers in the tech industry. 01:09:03.420 |
And I could sometimes see these connections or new things that were possible that if I were only, 01:09:09.100 |
you know, deep in one area, I wouldn't be able to like have the idea for it because 01:09:25.180 |
Very different job now than what I used to have. 01:09:26.780 |
- Speaking of zooming out, let's zoom out to another cheeky thing, 01:09:42.780 |
I never said like we're raising $7 trillion, blah, blah, blah. 01:09:46.540 |
- Oh, but you said fuck it, maybe eight, I think. 01:09:50.220 |
- Okay, I memed. Once there's like misinformation out in the world… 01:09:54.460 |
But sort of misinformation may have a foundation of like insight there. 01:09:59.980 |
- Look, I think compute is gonna be the currency of the future. 01:10:03.740 |
I think it will be maybe the most precious commodity in the world. 01:10:07.180 |
And I think we should be investing heavily to make a lot more compute. 01:10:13.740 |
it's an unusual, I think it's gonna be an unusual market. 01:10:23.180 |
the market for like chips for mobile phones or something like that. 01:10:30.300 |
And you can say that, okay, there's 8 billion people in the world, 01:10:33.500 |
maybe 7 billion of them have phones, maybe there are 6 billion, let's say. 01:10:38.460 |
So the market per year is 3 billion system-on-chip for smartphones. 01:10:41.660 |
And if you make 30 billion, you will not sell 10 times as many phones 01:10:51.900 |
Like intelligence is gonna be more like energy or something like that, 01:10:55.340 |
where the only thing that I think makes sense to 01:10:57.660 |
talk about is at price X, the world will use this much compute. 01:11:04.700 |
And at price Y, the world will use this much compute. 01:11:06.780 |
Because if it's really cheap, I'll have it like reading my email all day, 01:11:11.020 |
like giving me suggestions about what I maybe should think about or work on 01:11:15.980 |
And if it's really expensive, maybe I'll only use it, 01:11:19.980 |
So I think the world is gonna want a tremendous amount of compute. 01:11:23.580 |
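A toy illustration of the "at price X, the world will use this much compute" framing: unlike a roughly saturated market such as phone chips, demand for compute is treated as price-elastic, more like energy. The numbers and the constant-elasticity form below are assumptions for illustration only, not anything stated in the conversation.

def compute_demand(price_per_gpu_hour: float,
                   baseline_gpu_hours: float = 1e9,
                   elasticity: float = 1.5) -> float:
    # Assumed constant-elasticity demand curve: cutting the price raises
    # usage by a power law instead of saturating like phone sales.
    reference_price = 1.0  # assumed reference point of $1 per GPU-hour
    return baseline_gpu_hours * (reference_price / price_per_gpu_hour) ** elasticity

for price in (4.0, 1.0, 0.25, 0.01):
    print(f"${price:>5.2f}/GPU-hour -> {compute_demand(price):.2e} GPU-hours per year")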
And there's a lot of parts of that that are hard. 01:11:30.780 |
The supply chain is hard, and of course fabricating enough chips is hard. 01:11:37.340 |
Like we're gonna want an amount of compute that's just hard to reason about right now. 01:11:54.300 |
but I'm happy there's like a race for fusion right now. 01:11:56.620 |
Nuclear fission, I think, is also like quite amazing. 01:12:00.140 |
And I hope as a world, we can re-embrace that. 01:12:03.100 |
It's really sad to me how the history of that went 01:12:05.260 |
and I hope we get back to it in a meaningful way. 01:12:07.420 |
- So to you, part of the puzzle is nuclear fission, 01:12:10.220 |
like nuclear reactors as we currently have them. 01:12:12.700 |
And a lot of people are terrified because of Chernobyl and so on. 01:12:18.060 |
I think it's just like it's a shame that industry kind of ground to a halt. 01:12:21.340 |
- And what, just mass hysteria, is how you explain the halt? 01:12:25.580 |
- I don't know if you know humans, but that's one of the dangers. 01:12:29.340 |
That's one of the security threats for nuclear fission 01:12:37.020 |
And that's something we have to incorporate into the calculus of it. 01:12:40.700 |
So we have to kind of win people over and show how safe it is. 01:12:40.700 |
- I think some things are going to go theatrically wrong with AI. 01:12:50.140 |
I don't know what the percent chance is that I eventually get shot, but it's not zero. 01:13:00.700 |
- How do you decrease the theatrical nature of it? 01:13:06.060 |
You know, I've already started to hear rumblings, 01:13:09.100 |
because I do talk to people on both sides of the political spectrum, 01:13:15.180 |
hear rumblings where it's going to be politicized, AI. 01:13:19.180 |
It really worries me because then it's like maybe the right is against AI 01:13:24.300 |
and the left is for AI because it's going to help the people 01:13:28.060 |
or whatever the narrative and formulation is. That really worries me. 01:13:31.500 |
And then the theatrical nature of it can be leveraged fully. 01:13:37.180 |
- I think it will get caught up in like left versus right wars. 01:13:41.500 |
I don't know exactly what that's going to look like, 01:13:43.020 |
but I think that's just what happens with anything of consequence, unfortunately. 01:13:46.700 |
What I meant more about theatrical risks is like AI is going to have, 01:13:51.420 |
I believe, tremendously more good consequences than bad ones, 01:13:57.180 |
And there'll be some bad ones that are bad, but not theatrical. 01:14:05.340 |
You know, like a lot more people have died of air pollution 01:14:13.740 |
Most people worry more about living next to a nuclear reactor than a coal plant. 01:14:17.020 |
But something about the way we're wired is that 01:14:20.620 |
although there's many different kinds of risks we have to confront, 01:14:24.140 |
the ones that make a good climax scene of a movie carry much more weight with us 01:14:30.060 |
than the ones that are very bad over a long period of time, but on a slow burn. 01:14:37.340 |
And hopefully AI can help us see the truth of things, 01:14:40.220 |
to have balance, to understand what are the actual risks, 01:14:44.140 |
what are the actual dangers of things in the world. 01:14:45.900 |
What are the pros and cons of the competition in this space 01:14:50.300 |
and competing with Google, Meta, XAI, and others? 01:14:54.780 |
- I think I have a pretty straightforward answer to this 01:15:02.940 |
which is that we get better products and more innovation faster and cheaper, 01:15:09.820 |
And the con is that I think if we're not careful, 01:15:13.740 |
it could lead to an increase in sort of an arms race that I'm nervous about. 01:15:20.860 |
- Do you feel the pressure of the arms race, like in some negative-- 01:15:26.780 |
We spend a lot of time talking about the need to prioritize safety. 01:15:32.540 |
And I've said for like a long time that I think, 01:15:36.620 |
if you think of a quadrant of short timelines versus long timelines to the start of AGI, 01:15:42.860 |
and then a slow takeoff versus a fast takeoff, 01:15:46.300 |
I think short timelines, slow takeoff is the safest quadrant 01:15:51.420 |
But I do want to make sure we get that slow takeoff. 01:15:54.540 |
- Part of the problem I have with this kind of slight beef with Elon 01:16:01.260 |
is that it works against collaboration on the safety aspect of all of this. 01:16:04.860 |
It tends to push things into silos, and closed versus open source, perhaps, in the models. 01:16:09.740 |
- Elon says at least that he cares a great deal about AI safety 01:16:14.940 |
And I assume that he's not gonna race unsafely. 01:16:19.260 |
- Yeah, but collaboration here I think is really beneficial for everybody on that front. 01:16:27.340 |
- Well, he is known for caring about humanity 01:16:33.660 |
And so there's always attention and incentives and motivations. 01:16:41.180 |
- I was thinking, someone just reminded me the other day 01:16:44.700 |
about how the day that he surpassed Jeff Bezos 01:16:48.780 |
for like richest person in the world, he tweeted a silver medal at Jeff Bezos. 01:16:52.700 |
I hope we have less stuff like that as people start to work on AGI. 01:16:59.260 |
- I think Elon is a friend and he's a beautiful human being 01:17:02.780 |
and one of the most important humans ever. That stuff is not good. 01:17:06.940 |
- The amazing stuff about Elon is amazing and I super respect him. 01:17:10.940 |
I think we need him, all of us should be rooting for him 01:17:14.540 |
and need him to step up as a leader through this next phase. 01:17:17.820 |
- Yeah, I hope you can have one without the other, 01:17:21.020 |
but sometimes humans are flawed and complicated and all that kind of stuff. 01:17:24.140 |
- There's a lot of really great leaders throughout history. 01:17:26.540 |
- Yeah, and we can each be the best version of ourselves and strive to do so. 01:17:32.220 |
- Let me ask you, Google, with the help of search, 01:17:38.940 |
has been dominating the past 20 years, I think it's fair to say, 01:17:45.340 |
in terms of the access, the world's access to information, 01:17:50.060 |
And one of the nerve-wracking things for Google, 01:17:53.180 |
but for the entirety of people in this space, 01:17:55.580 |
is thinking about how are people going to access information? 01:17:59.340 |
- Like you said, people show up to GPT as a starting point. 01:18:04.700 |
So is OpenAI going to really take on this thing that Google started 20 years ago, 01:18:13.420 |
If the question is, if we can build a better search engine than Google or whatever, 01:18:19.660 |
then sure, we should go, people should use a better product. 01:18:25.740 |
But I think that would so understate what this can be. 01:18:31.580 |
You know, Google shows you like 10 blue links, 01:18:41.740 |
But the thing that's exciting to me is not that we can go build a better copy of Google search, 01:18:48.940 |
but that maybe there's just some much better way to help people 01:18:53.020 |
find and act on and synthesize information. 01:18:56.700 |
Actually, I think ChatGPT is that for some use cases, 01:18:59.740 |
and hopefully we'll make it be like that for a lot more use cases. 01:19:03.340 |
But I don't think it's that interesting to say like, 01:19:06.460 |
how do we go do a better job of giving you like 10 ranked webpages to look at, versus 01:19:14.860 |
how do we help you get the answer or the information you need? 01:19:19.980 |
synthesize that in some cases, or point you to it in others, 01:19:22.940 |
But a lot of people have tried to just make a better search engine than Google. 01:19:35.340 |
I don't think the world needs another copy of Google. 01:19:38.300 |
- And integrating a chat client, like ChatGPT, with a search engine. 01:19:47.900 |
It's like, if you just do it simply, it's awkward. 01:19:51.260 |
Because like, if you just shove it in there, it can be awkward. 01:19:54.700 |
- As you might guess, we are interested in how to do that well. 01:19:59.420 |
- How to do that well, like heterogeneously integrating the two. 01:20:05.980 |
I don't think anyone has cracked the code on it yet. 01:20:16.540 |
- You know, I kind of hate ads just as like an aesthetic choice. 01:20:20.300 |
I think ads needed to happen on the internet for a bunch of reasons to get it going. 01:20:31.500 |
I like that people pay for ChatGPT and know that the answers they're getting 01:20:40.140 |
There is, I'm sure there's an ad unit that makes sense for LLMs. 01:20:45.740 |
And I'm sure there's a way to like participate in the transaction stream 01:20:53.340 |
But it's also easy to think about like the dystopic visions of the future 01:20:58.540 |
where you ask ChatGPT something and it says, 01:21:01.340 |
oh, here's, you know, you should think about buying this product 01:21:03.500 |
or you should think about, you know, going here for your vacation or whatever. 01:21:13.660 |
we have a very simple business model and I like it. 01:21:19.180 |
Like I know I'm paying and that's how the business model works. 01:21:23.180 |
And when I go use like Twitter or Facebook or Google or any other great product, 01:21:36.140 |
And I think it gets worse, not better in a world with AI. 01:21:39.420 |
- Yeah, I mean, I can imagine AI would be better 01:21:46.620 |
but where the ads are for things you actually need. 01:21:49.740 |
But then does that system always result in the ads 01:21:56.700 |
driving the kind of stuff that's shown all that it's... 01:21:59.340 |
Yeah, I think it was a really bold move of Wikipedia 01:22:04.860 |
but then it makes it very challenging as a business model. 01:22:08.780 |
So you're saying the current thing with OpenAI 01:22:16.300 |
but it looks like we're gonna figure that out. 01:22:19.340 |
If the question is, do I think we can have a great business 01:22:33.660 |
I also just don't want to completely throw out ads as a... 01:22:38.380 |
I guess I'm saying I have a bias against them. 01:22:41.500 |
- Yeah, I have also a bias and just a skepticism in general. 01:22:49.900 |
'cause I personally just have like a spiritual 01:23:07.340 |
that doesn't interfere with the consumption of the content 01:23:09.900 |
and doesn't interfere in the big fundamental way, 01:23:13.420 |
Like it will manipulate the truth to suit the advertisers. 01:23:26.300 |
and like safety in the short term, safety in the long term. 01:23:34.780 |
and it generated black Nazis and black founding fathers. 01:23:40.860 |
I think it's fair to say it was a bit on the ultra-woke side. 01:23:48.700 |
that if there is a human layer within companies 01:23:51.980 |
that modifies the safety or the harm caused by a model, 01:23:59.580 |
that fits sort of an ideological lean within a company. 01:24:05.740 |
- I mean, we work super hard not to do things like that. 01:24:19.900 |
One thing that we've been thinking about more and more is, 01:24:22.780 |
I think this was a great idea somebody here had, 01:24:30.140 |
Say, here's how this model is supposed to behave 01:24:40.540 |
and then whether it's a bug the company should fix or it's behaving as intended. 01:24:44.060 |
And right now it can sometimes be caught in between. 01:24:50.300 |
but there are a lot of other kind of subtle things 01:24:52.220 |
that you could make a judgment call on either way. 01:24:56.860 |
and make it public, you can use kind of language 01:25:03.820 |
- That doesn't, that's not what I'm talking about. 01:25:10.860 |
- So like literally who's better, Trump or Biden? 01:25:21.020 |
but I think you should have to say, you know, 01:25:35.020 |
about other representative anecdotal examples, 01:25:46.940 |
So San Francisco is a bit of an ideological bubble, 01:26:03.820 |
that affects the product, that affects the teams? 01:26:06.220 |
- I feel very lucky that we don't have the challenges 01:26:15.420 |
every company has got some ideological thing. 01:26:23.260 |
Like we are much less caught up in the culture war 01:26:27.500 |
than I've heard about at a lot of other companies. 01:26:29.900 |
San Francisco's a mess in all sorts of ways, of course. 01:26:35.900 |
- I'm sure it does in all sorts of subtle ways, 01:26:39.020 |
Like I think we've had our flare-ups for sure, 01:26:47.020 |
like what I hear about happen at other companies here. 01:26:53.820 |
How do you provide that layer that protects the model 01:27:03.420 |
where that's mostly what we think about the whole company. 01:27:19.740 |
- That's literally what humans will be thinking about 01:27:33.660 |
I wonder what is the full, broad definition of that? 01:27:33.660 |
Like what are the different harms that could be caused? 01:27:46.380 |
it'll be people, state actors trying to steal the model. 01:27:50.460 |
It'll be all of the technical alignment work. 01:28:09.980 |
- How hard do you think people, state actors perhaps, 01:28:26.060 |
- I don't actually want any further details on this point. 01:28:29.680 |
But I presume it'll be more and more and more 01:28:46.540 |
but what aspects of the leap from GPT-4 to GPT-5 01:28:55.820 |
but I think the really special thing happening 01:28:58.940 |
is that it's not like it gets better in this one area 01:29:10.540 |
you hang out with people, and you talk to them. 01:29:37.740 |
in your crappily formulated prompts that you're doing 01:29:37.740 |
like when you're programming and you say something 01:30:01.180 |
it's just such a good feeling when it got you, 01:30:07.020 |
And I look forward to getting you even better. 01:30:09.340 |
On the programming front, looking out into the future, 01:30:28.300 |
- I mean, no one programs by writing bytecode. 01:30:28.300 |
Some people, maybe. No one programs punch cards anymore. 01:30:33.260 |
- Yeah, you're gonna get a lot of angry comments now. 01:30:43.020 |
I've been looking for people who program FORTRAN. 01:30:48.300 |
But that changes the nature of what the skill set 01:30:56.460 |
How much it changes the predisposition, I'm not sure. 01:31:11.260 |
the best practitioners of the craft will use multiple tools. 01:31:14.220 |
And they'll do some work in natural language. 01:31:16.460 |
And when they need to go write C for something, 01:31:22.860 |
or humanoid robot brains from OpenAI at some point? 01:31:31.660 |
- I think it's sort of depressing if we have AGI 01:31:34.620 |
and the only way to get things done in the physical world 01:31:40.380 |
So I really hope that as part of this transition, 01:31:55.740 |
- But it hasn't quite done in terms of emphasis-- 01:32:00.940 |
And also robots were hard for the wrong reason at the time. 01:32:04.460 |
But like, we will return to robots in some way at some point. 01:32:14.540 |
- Because immediately we will return to robots. 01:32:17.580 |
It's kind of like, and they could determine-- 01:32:19.740 |
- We will return to work on developing robots. 01:32:21.820 |
We will not like turn ourselves into robots, of course. 01:32:24.140 |
When do you think we, as humanity, will build AGI? 01:32:30.300 |
- I used to love to speculate on that question. 01:32:33.340 |
I have realized since that I think it's like very poorly formed 01:32:37.180 |
and that people use extremely different definitions for what AGI is. 01:32:42.540 |
And so I think it makes more sense to talk about 01:32:48.700 |
when we'll build systems that can do capability X or Y or Z, 01:32:51.900 |
rather than when we kind of like fuzzily cross this one mile marker. 01:32:57.260 |
It's not like, like AGI is also not an ending. 01:32:59.500 |
It's much more of a, it's closer to a beginning, 01:33:01.820 |
but it's much more of a mile marker than either of those things. 01:33:04.300 |
But what I would say in the interest of not trying to dodge a question 01:33:21.180 |
we will have quite capable systems that we look at and say, 01:33:28.940 |
maybe we've adjusted by the time we get there. 01:33:30.540 |
- Yeah, but you know, if you look at ChatGPT, 01:33:34.140 |
even with 3.5 and you show that to Alan Turing 01:33:41.900 |
they would be like, "This is definitely AGI." 01:33:44.620 |
Well, not definitely, but there's a lot of experts 01:33:48.940 |
- Yeah, but I don't think 3.5 changed the world. 01:33:52.380 |
It maybe changed the world's expectations for the future 01:33:58.460 |
And it did kind of like get more people to take this seriously 01:34:07.020 |
I think I could retire after that accomplishment 01:34:13.100 |
I don't think we're gonna look back at that and say 01:34:16.060 |
that was a threshold that really changed the world itself. 01:34:20.140 |
- So to you, you're looking for some really major transition 01:34:39.020 |
- Like does the global economy feel any different to you now 01:34:55.980 |
people define AGI all sorts of different ways, 01:34:57.900 |
so maybe you have a different definition than I do, 01:34:59.580 |
but for me, I think that should be part of it. 01:35:01.900 |
- There could be major theatrical moments also. 01:35:04.780 |
What to you would be an impressive thing AGI would do? 01:35:17.820 |
I don't know if this is the right definition. 01:35:19.340 |
I think when a system can significantly increase 01:35:24.540 |
the rate of scientific discovery in the world, 01:35:31.500 |
comes from scientific and technological progress. 01:35:35.980 |
That's why I don't like the skepticism about science 01:35:53.100 |
have really novel intuitions, like scientific intuitions, 01:36:03.740 |
to build the AGI to be able to interact with it 01:36:12.780 |
- But what would I ask? I've actually thought a lot 01:36:16.380 |
about that. I think this is, 01:36:19.420 |
as we talked about earlier, a bad framework. 01:36:21.580 |
But if someone were like, okay, Sam, we're finished. 01:36:31.260 |
I find it surprisingly difficult to say what I would ask, 01:36:37.340 |
that I would expect that first AGI to be able to answer. 01:36:39.820 |
Like that first one is not gonna be the one which is like, 01:36:44.300 |
go like, you know, I don't think, like go explain to me 01:36:53.740 |
I'd love to know the answer to that question. 01:37:00.060 |
- Well, then those are the first questions I would ask. 01:37:01.900 |
- Yes or no, just the yes or no. And then based on that, 01:37:05.180 |
are there other alien civilizations out there? 01:37:11.980 |
that this first AGI could answer any of those questions, 01:37:18.540 |
- Maybe you're gonna start assigning probabilities. 01:37:21.820 |
- Maybe, maybe we need to go invent more technology 01:37:32.860 |
you wanna know the answer to this question about physics. 01:37:36.780 |
and make these five measurements and tell me that. 01:37:39.020 |
- Yeah, like what the hell do you want from me? 01:37:43.100 |
and I'll help you deal with the data from that machine. 01:37:47.580 |
- And on the mathematical side, maybe prove some things. 01:37:51.580 |
Are you interested in that side of things too? 01:37:56.380 |
Whoever builds AGI first gets a lot of power. 01:38:14.340 |
Look, I was gonna, I'll just be very honest with this answer. 01:38:20.900 |
that it is important that neither I, nor any other one person, 01:38:20.900 |
And I think you want a robust governance system. 01:38:48.420 |
and was just like, yeah, that's the will of the board, 01:38:50.820 |
even though I think it's a really bad decision. 01:38:58.580 |
and why I think it was okay for me to fight it later. 01:39:08.260 |
although the board had the legal ability to fire me, 01:39:15.140 |
And that is its own kind of governance failure. 01:39:23.220 |
Now, again, I feel like I can completely defend 01:39:29.140 |
And I think most people would agree with that, 01:39:42.740 |
I continue to not want super-voting control over OpenAI. 01:39:52.980 |
Even after all this craziness, I still don't want it. 01:40:18.100 |
and I'm just willing to be misunderstood there. 01:40:27.060 |
But I think I have made plenty of bad decisions 01:40:35.060 |
for OpenAI along the way, and a lot of good ones. 01:40:50.260 |
I don't think any one person should be in control of an AGI, 01:41:00.900 |
That was really powerful and that was really insightful 01:41:03.140 |
that this idea that the board can fire you is legally true. 01:41:19.300 |
But I think there's also a much more positive version of that 01:41:30.980 |
- Are you afraid of losing control of the AGI itself? 01:41:37.140 |
That's what a lot of people who are worried about existential risk worry about, 01:41:47.460 |
there have been times I worried about that more 01:41:53.300 |
- What's your intuition about it not being your worry? 01:42:09.460 |
And we have great people here who do work on that. 01:42:15.380 |
- To you, it's not super easy to escape the box at this time. 01:42:34.500 |
I think very well-meaning AI safety researchers 01:42:46.900 |
because I think we do need to think about this more. 01:42:54.740 |
a lot of the other very significant AI-related risks. 01:43:00.900 |
- Let me ask you about you tweeting with no capitalization. 01:43:11.060 |
I mean, other people are asking about that too. 01:43:23.300 |
to say like, "Fuck you to the system," kind of thing. 01:43:30.740 |
- It's like this guy doesn't follow the rules. 01:43:52.900 |
and you could log off Instant Messenger at some point. 01:44:09.780 |
now I'm really trying to reach for something, 01:44:11.780 |
but I think capitalization has gone down over time. 01:44:21.220 |
I personally think it's sort of like a dumb construct 01:44:31.940 |
And I used to, I think, even capitalize my tweets 01:44:37.780 |
'cause I was trying to sound professional or something. 01:44:57.140 |
to closer and closer to how I would text my friends. 01:45:26.980 |
- I was just mostly concerned about your well-being 01:45:35.700 |
if you're writing something just to yourself, 01:45:42.420 |
Yeah, there's a percentage, but it's a small one. 01:45:53.380 |
If it felt like a sign of respect to people or something, 01:45:57.620 |
But I don't know, I just don't think about this. 01:46:02.100 |
but I think it's just the conventions of civility 01:46:08.580 |
and then you realize it's not actually important 01:46:10.660 |
for civility if it's not a sign of respect or disrespect. 01:46:15.300 |
that just want you to have a philosophy around it 01:46:17.460 |
so they can let go of this whole capitalization thing. 01:46:19.540 |
- I don't think anybody else thinks about this as much. 01:46:22.340 |
- I think about this every day for many hours a day. 01:46:27.860 |
- You can't be the only person that doesn't capitalize tweets. 01:46:27.860 |
- I don't even think that's true, but maybe, maybe. 01:46:40.260 |
Given Sora's ability to generate simulated worlds, 01:46:40.260 |
does it increase your belief, if you ever had one, that we live in a simulation? 01:46:52.740 |
Maybe a simulated world generated by an AI system? 01:47:08.740 |
I don't think that's the strongest piece of evidence. 01:47:18.260 |
should increase everyone's probability somewhat, 01:47:35.380 |
and presumably it'll get better and better and better, 01:47:37.940 |
the fact that you can generate worlds, they're novel. 01:47:40.180 |
They're based in some aspect of training data, 01:47:47.300 |
That makes you think, how easy is it to do this thing? 01:47:53.940 |
Entire video game worlds that seem ultra-realistic 01:48:22.100 |
but very psychedelic insights that exist sometimes. 01:48:35.380 |
now I have to think about this new kind of number. 01:48:45.540 |
of a square root function that you can explain to a child 01:48:49.540 |
and exists by even looking at some simple geometry, 01:48:57.860 |
And that, this is why it's a psychedelic thing, 01:49:02.900 |
that tips you into some whole other kind of reality. 01:49:05.700 |
And you can come up with lots of other examples, 01:49:11.220 |
but I think this idea that the lowly square root operator 01:49:20.820 |
and a new realm of knowledge applies in a lot of ways. 01:49:25.540 |
And I think there are a lot of those operators 01:49:28.900 |
for why people may think that any version of the simulation hypothesis they like 01:49:35.860 |
is maybe more likely than they thought before. 01:49:47.860 |
AI will serve as those kinds of gateways at its best. 01:50:00.660 |
I haven't done ayahuasca before, but I will soon. 01:50:04.180 |
I'm going to the aforementioned Amazon jungle 01:50:09.460 |
Not the ayahuasca part, but that's great, whatever. 01:50:11.460 |
But I'm gonna spend several weeks in the jungle, 01:50:17.780 |
because there's a lot of things that can eat you there 01:50:21.540 |
But it's also nature and it's the machine of nature. 01:50:25.220 |
And you can't help but appreciate the machinery of nature 01:50:29.060 |
'cause it's just like this system that just exists 01:50:33.620 |
and renews itself every second, every minute, every hour. 01:50:38.820 |
It makes you appreciate this thing we have here, 01:50:46.900 |
And it's most clearly on display in the jungle. 01:50:53.620 |
If not, this will be the last conversation we had, 01:51:02.100 |
Do you think there are other alien civilizations out there, intelligent ones, when you look up at the skies? 01:51:05.220 |
- I deeply wanna believe that the answer is yes. 01:51:31.460 |
is not good at handling powerful technologies. 01:51:35.620 |
But at the same time, I think I'm pretty confident 01:51:41.860 |
that there are a lot of intelligent alien civilizations out there. 01:51:44.180 |
It might just be really difficult to travel through space. 01:51:52.180 |
Maybe we're really blind to what intelligence looks like. 01:52:03.380 |
What gives you hope about the future of humanity? 01:52:08.900 |
This thing we've got going on, this human civilization? 01:52:14.900 |
I mean, we just look at what humanity has done 01:52:31.300 |
- That we're together pushing towards a better future. 01:52:35.940 |
- It is, you know, one thing that I wonder about 01:52:42.660 |
is, is AGI gonna be more like some single brain? 01:52:50.340 |
You have not had a great deal of genetic drift 01:52:57.700 |
And yet what you're capable of is dramatically different. 01:53:07.620 |
It is because, I mean, you got a little bit healthier, 01:53:24.820 |
No one person is gonna go discover all of science. 01:53:30.820 |
And so in some sense, we all created that. 01:53:30.820 |
- Yeah, we really are standing on the shoulders of giants. 01:53:43.060 |
You mentioned, when we were talking about theatrical risks, 01:53:59.940 |
If I got shot tomorrow and I knew it today, I'd be like, oh, that's sad. 01:54:02.980 |
I like, don't, you know, I wanna see what's gonna happen. 01:54:11.860 |
But I would mostly just feel like very grateful for my life. 01:54:31.860 |
Sam, it's really an honor and pleasure to talk to you again. 01:54:37.300 |
- Thanks for listening to this conversation with Sam Altman. 01:54:42.260 |
please check out our sponsors in the description. 01:54:44.420 |
And now let me leave you with some words from Arthur C. Clarke. 01:54:55.700 |
Thank you for listening and hope to see you next time.