Sam Altman's World Tour, in 16 Moments
Chapters
0:00
1:27 'Strange Decisions'
9:07 Customise ChatGPT
10:19 Open source 'unstoppable'
00:00:00.000 |
There have been 16 surprising and/or fascinating moments from Sam Altman's world tour. 00:00:06.880 |
I could have done a video on each of them, but after watching over 10 hours of interviews, 00:00:11.780 |
I decided, you know what, let's just show you everything in one video. 00:00:15.940 |
From AIs designing new AIs, to fresh ChatGPT leaks, shooting railguns, to open source, 00:00:23.740 |
here's all 16 things I learnt in no particular order. 00:00:27.480 |
Let's start with Sam Altman's warning about AIs designing their own architecture. 00:00:32.940 |
Seems like a good idea, but Ilya Sutskever could see one of their models designing the next model. 00:00:54.260 |
We are definitely very concerned about superintelligence. 00:00:57.460 |
It will be possible to build a computer, a computer cluster, a GPU farm, that is just smarter than any person, 00:01:04.820 |
that can do science and engineering much, much faster than even a large team of really experienced scientists and engineers. 00:01:11.700 |
And that is crazy. That is going to be unbelievably, extremely impactful. 00:01:17.060 |
It could engineer the next version of the system. 00:01:23.540 |
Let's return to Abu Dhabi where Sam Altman said he enjoys the power that being CEO of OpenAI brings, 00:01:32.180 |
but also mentioned strange decisions he might have to make. 00:01:36.020 |
I mean, I have like lots of selfish reasons for doing this. 00:01:38.220 |
And as you said, I get like all of the power of running OpenAI, but I can't think of like anything more fulfilling to work on. 00:01:44.900 |
I don't think it's like particularly altruistic because it would be if I like didn't already have a bunch of money. 00:01:49.300 |
Yeah, the money is going to like pile up faster than I can spend it anyway. 00:01:53.460 |
I don't think I can flick that on OpenAI because I think the chance that we have to make a very strange decision someday is non-trivial. 00:02:01.260 |
Speaking of big decisions, Sam Altman hinted twice, once in Jordan and once in India, 00:02:06.100 |
at possible regrets he might have over firing the starting gun in the AI race. 00:02:11.420 |
We're definitely going to have some huge regrets 20 years from now. 00:02:15.180 |
I hope what we can say is that we did far, far, far more good than bad. 00:02:25.620 |
Honestly, I think if we're going to regret something, it may be that we already pushed the button. 00:02:40.300 |
Like, this is out in the world, it's like out of the gates. 00:02:43.820 |
I guess the thing that I lose the most sleep over is the hypothetical idea that we have already done something really bad, 00:02:53.300 |
that we, by launching ChatGPT into the world, 00:02:59.540 |
and we now don't get to have much impact anymore. 00:03:10.240 |
and I think we're gonna address all the problems. 00:03:30.020 |
- To the last question, the superintelligent AI 00:03:32.000 |
that's out of control, yeah, that'd be pretty bad. 00:03:40.800 |
it would be a big mistake to build a superintelligent AI 00:03:49.240 |
- I think the world should treat that not as a, 00:03:52.100 |
you know, ha ha, never gonna come sci-fi risk, 00:03:59.220 |
- On a lighter note, Sam Altman didn't seem that perturbed by a deepfake video of himself, 00:04:04.640 |
but commented on society getting used to misinformation. 00:04:08.240 |
- I wanna play a clip, maybe you guys can put on a clip 00:04:10.960 |
of something I recently heard Sam speak somewhere 00:04:17.400 |
- Hi, my name is Sam and I'm happy to be here today. 00:04:21.780 |
I also wanted to say that the gentleman on stage with me 00:04:27.060 |
And I also want to say that you should be very careful 00:04:29.360 |
with videos generated with artificial intelligence technology. 00:04:36.100 |
but nonetheless, I think it raises a real question, right? 00:04:38.980 |
When, you know, this video, if you look closely, 00:04:41.880 |
you can see the lips aren't perfectly synced, 00:04:43.440 |
but like you said, this stuff is only gonna get better 00:04:46.900 |
- Yeah, so that was like deeply in the uncanny valley. 00:04:48.940 |
It's very strange to watch, but we're not that far away 00:04:53.400 |
There's a lot of fear right now about the impact 00:04:55.720 |
this is gonna have on elections and on our society 00:05:01.860 |
I have some fear there, but I think when it comes 00:05:04.160 |
to like a video like that, I think as a society, 00:05:07.520 |
we're gonna learn very quickly that we don't trust videos 00:05:13.240 |
If people are saying something really important, 00:05:18.680 |
Indeed, throughout the world tour, Sam Altman repeatedly stated that he didn't believe there should be any regulation of current models. 00:05:24.520 |
- Everybody wants great education, productivity gains, discovery of new science, 00:05:29.340 |
all of this stuff that's gonna happen, and no one wants to destroy the world. 00:05:32.580 |
No one wants to do things, not even that bad, but still bad. 00:05:35.460 |
I totally believe it is possible to not stifle innovation and to address the big risks. 00:05:42.020 |
I think it would be a mistake to go regulate the current models of today. 00:05:45.880 |
- And in Poland, his co-founder Wojciech Zaremba agreed that regulating today's models would be a mistake, 00:05:50.360 |
but said the risks of superintelligence were 10 years away. 00:05:52.800 |
- Also, I would say that the fear is the fear of AI of the future, not the AI of today. 00:06:01.100 |
If the trajectory that we are on continues, then in a decade or so, there will be systems built which are as powerful as today's corporations. 00:06:13.420 |
But if I could speak to Sam Altman, I would bring his attention to this paper published this week. 00:06:18.920 |
It's a study out of Harvard University and MIT, and it involved some non-scientist students working for one hour. 00:06:27.100 |
In that hour, they were able to get chatbots to suggest four potential pandemic pathogens, 00:06:32.740 |
explain how they can be generated from synthetic DNA using reverse genetics, 00:06:37.680 |
supply the names of DNA synthesis companies unlikely to screen orders, 00:06:42.320 |
and identify detailed protocols and how to troubleshoot them. 00:06:46.320 |
And they say that collectively, these results suggest that LLMs 00:06:50.360 |
will make pandemic class agents widely accessible, even to people with little or no lab training. 00:06:58.040 |
And then there's this, these results strongly suggest that the existing evaluation and training process for large language models 00:07:05.680 |
is inadequate to prevent them from providing malicious actors with accessible expertise relevant to inflicting mass death. 00:07:13.880 |
And that more immediately, if unmitigated LLM chatbots render pandemic class agents more accessible, 00:07:20.360 |
even to people without training in the life sciences, 00:07:22.920 |
the number of individuals capable of killing tens of millions of people will dramatically increase. 00:07:28.200 |
They recommend that, at a minimum, new LLMs larger than GPT-3 should undergo evaluation by third parties 00:07:35.800 |
skilled in assessing catastrophic biological risks, before controlled access is given to the general public. 00:07:44.720 |
So that strongly contradicts Sam Altman's assertion that current models like GPT-4 shouldn't have any regulation. 00:07:51.400 |
They say that even open source communities should welcome safeguards because a single instance of misuse and mass death 00:07:58.280 |
would trigger a backlash including the imposition of extremely harsh regulations. 00:08:03.480 |
One specific recommendation was that if biotech and information security experts 00:08:08.440 |
were able to identify the set of publications most relevant to causing mass death, 00:08:13.400 |
and companies like OpenAI and Google curated their training datasets to remove those publications, 00:08:20.360 |
then future models trained on the curated data would be far less capable of providing anyone intent on harm 00:08:26.760 |
with the "recipes for the creation or enhancement of pathogens". 00:08:30.920 |
This seems like an absolutely obvious move to me and I think Ilya Sutskever would agree. 00:08:35.560 |
We are talking about as time goes by and the capability keeps increasing, 00:08:39.960 |
you know, and eventually it goes all the way to here, right? 00:08:43.000 |
Right now we are here. Today, that's where we are. 00:08:50.360 |
it's very powerful technology. It can be used for amazing applications. 00:08:54.360 |
You can say cure all disease. On the flip side, you can say create a disease. 00:08:59.560 |
Much worse than anything that existed before. That'd be bad. 00:09:03.240 |
Moving on to the ChatGPT leak, it seems like we're going to get a new workspace 00:09:09.160 |
where we can customize our interaction with ChatGPT, giving it files and a profile with 00:09:15.000 |
any information that you'd like ChatGPT to remember about you and your preferences. 00:09:19.640 |
This was also hinted at on the world tour when one of Sam Altman's guests, 00:09:23.000 |
Johannes Heidecker from OpenAI Research, talked about customizing models. 00:09:27.320 |
We are trying to make our models both better at following certain guardrails that should 00:09:31.320 |
never be overwritten, not with jailbreaks, not if you ask nicely, not if you threaten it. 00:09:35.560 |
And we're also trying to make our models better at being customizable, 00:09:39.960 |
making them listen more to additional instructions of what kind of behavior the user or the developer 00:09:45.720 |
On a lighter note, the leaders of OpenAI were asked in Seoul, the capital of 00:09:50.360 |
South Korea, about the mixing of AI and religion. 00:09:54.120 |
Do you expect AI to replace the role of religious organizations like church? 00:09:58.520 |
I think that it's a good question how all human societies will integrate AI. 00:10:07.080 |
And we've already seen people building AI pastors, for example, and so the constituents 00:10:11.880 |
can ask questions to this pastor that can cite Bible verses and it can give advice. 00:10:15.880 |
But now back to Poland, where Sam Altman called open source "unstoppable": 00:10:20.360 |
"Realizing that open source is unstoppable and shouldn't be stopped, 00:10:24.280 |
and so this stuff is going to be out there and as a society we have to adapt." 00:10:27.400 |
But speaking of stopping AI, Sam Altman was asked about his own loved ones, 00:10:31.880 |
and in response he gave a utopian vision of the future and called the current world "barbaric". 00:10:37.400 |
If you truly believe that AI poses a danger to humankind, why keep developing it? 00:10:43.720 |
Aren't you afraid for your own dear ones and family? 00:10:51.720 |
And the most troublesome part of our jobs is that we have to balance this incredible 00:10:59.400 |
promise and this technology that I think humans really need, and we can talk about 00:11:05.240 |
why in a second, with confronting these very serious risks. 00:11:10.040 |
Number one, I do think that when we look back at the standard of living and what we tolerate 00:11:15.480 |
for people today, it will look even worse than 00:11:20.360 |
when we look back at how people lived 500 or 1000 years ago. 00:11:24.120 |
And we'll say like, man, can you imagine that people lived in poverty? 00:11:27.880 |
Can you imagine people suffered from disease? 00:11:30.440 |
Can you imagine that everyone didn't have a phenomenal education and wasn't able to live 00:11:36.360 |
I think everyone in the future is going to have better lives than the best people of today. 00:11:40.120 |
I think there's like a moral duty to figure out how to do that. 00:11:43.160 |
I also think this is like unstoppable, like this is the progress of technology. 00:11:50.920 |
And so we have to figure out how to manage the risk. 00:11:53.320 |
He doesn't seem to be 100% sure on this front though. 00:11:56.520 |
And here is an interview he gave to The Guardian when he was in London for his world tour. 00:12:01.320 |
Speaking of super intelligence, he said, "It's not that it's not stoppable. 00:12:05.400 |
If governments around the world decided to act in concert to limit AI development, as 00:12:09.800 |
they have in other fields such as human cloning or bioweapon research, they may be able to." 00:12:14.520 |
But then he repeated, "But that would be to give up all that is possible. 00:12:17.800 |
I think that this will be the most tremendous leap forward, 00:12:20.360 |
in terms of the quality of life for people that we've ever had." 00:12:22.600 |
I did try to get tickets for the London leg of his world tour, 00:12:28.040 |
Sam Altman does think that behaviour will change, however, 00:12:30.920 |
when these AGI labs stare existential risk in the face. 00:12:34.760 |
Sam Oldman: "One of the things we talked about is what's a structure that would let us 00:12:38.920 |
warmly embrace regulation that would hurt us the most. 00:12:42.760 |
And now that the time has come for that, we're out here advocating around the world for regulation 00:12:47.480 |
that will impact us the most. So, of course, we'll comply with it. 00:12:50.360 |
But I think it's more easy to get good behaviour out of people when they are staring existential 00:12:56.520 |
risk in the face. And so I think all of the people at the leading edge here, these different 00:13:01.480 |
companies, now feel this, and you will see a different collective response than you saw from 00:13:07.480 |
And in terms of opportunities, both Sam Altman and Ilya Sutskever talked about solving climate change. 00:13:13.240 |
Sam Oldman: "I don't want to say this because climate change is so serious and so hard of a 00:13:16.360 |
problem, but I think once we have a really powerful super intelligence, 00:13:20.360 |
addressing climate change will not be particularly difficult for a system like that." 00:13:24.040 |
Ilya Sutskova: "We can even explain how. Here's how you solve climate change. You need a very large 00:13:28.360 |
amount of efficient carbon capture. You need the energy for the carbon capture, you need the 00:13:33.320 |
technology to build it, and you need to build a lot of it. If you can accelerate the scientific 00:13:38.120 |
progress, which is something that a powerful AI could do, we could get to a very advanced carbon 00:13:43.400 |
capture much faster. We could get to a very cheap power much faster. We could get to cheaper 00:13:48.680 |
manufacturing much faster." Ilya Sutskever: "I think that's a very important point. 00:13:50.360 |
Combine those three: cheap power, cheap manufacturing, advanced carbon capture. 00:13:54.840 |
Now you build lots of them. And now you sucked out all the excess CO2 from the atmosphere." 00:13:59.880 |
Sam Oldman: "You know, if you think about a system where you can say, 00:14:02.120 |
'Tell me how to make a lot of clean energy cheaply. Tell me how to efficiently capture carbon. 00:14:07.960 |
And then tell me how to build a factory to do this at planetary scale.' If you can do that, 00:14:11.880 |
you can do a lot of other things too." Ilya Sutskever: "Yeah. With one addition 00:14:15.240 |
that not only you ask it to tell it, you ask it to do it." Sam Altman: "That would indeed be 00:14:20.360 |
amazing. But think of the power we would be giving to an AI if it was able to just do it, 00:14:25.960 |
just create those carbon capture factories." Ilya Sutskever: "If we did make that decision, 00:14:30.120 |
one thing that would help would be reducing hallucinations." Sam Altman: "I think we will 00:14:34.280 |
get the hallucination problem to a much, much better place. It will take us, 00:14:38.040 |
my colleagues weigh in, I think it'll take us a year and a half, two years, something like that. 00:14:43.400 |
But at that point, we won't still be talking about this." 00:14:45.560 |
Sam Altman talked about that in New Delhi. That timeframe of 18 months to two years is 00:14:50.360 |
ambitious and surprising. But now onto jobs, which Sam Altman was asked about on every leg of the 00:14:56.440 |
tour. On this front though, I do think it was Ilya Sutskever who gave the more honest answer. 00:15:01.320 |
Ilya Sutskever: "Economic dislocation indeed, like we already know that there are jobs that are 00:15:06.600 |
being impacted or they're being affected. In other words, some chunks of the jobs can be done. You 00:15:11.880 |
know, if you're a programmer, you don't write functions anymore, Copilot writes them for you. 00:15:15.800 |
If you're an artist though, it's a bit different, because a big chunk of the artist's economic 00:15:20.360 |
activity has been taken by some of the image generators. And while new jobs will be 00:15:25.240 |
created, it's going to be a long period of economic uncertainty. There is an argument to be 00:15:29.720 |
made that even when they have full human level AI, full AGI, people will still have economic activity 00:15:35.800 |
to do. I don't know whether that's the case, but in either event, we will need to have something 00:15:42.600 |
that will soften the blow to allow for a smoother transition either to the totally new professions 00:15:50.360 |
or even if not, then the government, the social systems, will need to kick in." 00:15:55.320 |
I do think the changes in the job market will be dramatic and we'll be following the story closely. 00:15:59.960 |
One thing I definitely agree with Sam Altman on though, is the deep, 00:16:03.560 |
almost philosophical change that this solving of intelligence has brought to humanity. 00:16:08.680 |
Sam Oldman: "I grew up implicitly thinking that intelligence was this like really special 00:16:16.120 |
human thing and kind of somewhat magical. And I now think that it's sort of a fundamental 00:16:20.360 |
property of matter. And that's definitely a change to my worldview. The history of scientific 00:16:28.440 |
discovery is that humans are less and less at the center. We used to think that the sun rotated around 00:16:33.880 |
us and then maybe at least we were, if not that, we were going to be the center of the galaxy and 00:16:38.280 |
there wasn't this big universe. And then the multiverse really is kind of weird and depressing. And 00:16:42.520 |
if intelligence isn't special, again, we're just further and further 00:16:45.560 |
away from main character energy. But that's all right. That's sort of like a 00:16:50.360 |
nice thing to realize actually." It's a bit like a Copernican and 00:16:53.880 |
Darwinian revolution all rolled in one. But I'll give the final word to Greg Brockman in 00:16:59.320 |
Seoul who talked about the unpredictability of scaling up models 10 times. 00:17:03.960 |
Greg Brockman: "That is the biggest theme in the history of AI is that it's full of surprises. 00:17:07.480 |
Every time you think you know something, you scale it up 10x, turns out you knew nothing. 00:17:11.080 |
And so I think that we as a humanity, as a species are really exploring this together." 00:17:15.240 |
Being all in it together and knowing nothing sounds about right. But thank you 00:17:20.360 |
for watching to the end. I know that Sam Altman has a couple more stops. I think it's Jakarta 00:17:25.720 |
and Melbourne on the world tour and I'll be watching those of course. But for now,