Back to Index

Sam Altman's World Tour, in 16 Moments


Chapters

0:0
1:27 'Strange Decisions'
9:7 Customise ChatGPT
10:19 Opensource 'unstoppable'

Transcript

There have been 16 surprising and or fascinating moments from Sam Altman's world tour. I could have done a video on each of them, but after watching over 10 hours of interviews, I decided, you know what, let's just show you everything in one video. From AIs designing new AIs, to fresh ChatGPT leaks, shooting railguns, to open source, here's all 16 things I learnt in no particular order.

Let's start with Sam Altman's warning about AIs designing their own architecture. Seems like a good idea, but Satsukawa could see one of their models designing the next model. We are definitely very concerned about superintelligence. It will be possible to build a computer, a computer cluster, a GPU farm, that is just smarter than any person, that can do science and engineering much, much faster than even a large team of really experienced scientists and engineers.

And that is crazy. That is going to be unbelievably, extremely impactful. It could engineer the next version of the system. AI building AI. It's just crazy. Let's return to Abu Dhabi where Sam Altman said he enjoys the power that being CEO of OpenAI brings, but also mentioned strange decisions he might have to make.

I mean, I have like lots of selfish reasons for doing this. And as you said, I get like all of the power of running OpenAI, but I can't think of like anything more fulfilling to work on. I don't think it's like particularly altruistic because it would be if I like didn't already have a bunch of money.

Yeah, the money is going to like pile up faster than I can spend it anyway. I like being non-confident. I don't think I can flick that on OpenAI because I think the chance that we have to make a very strange decision someday is non-trivial. Speaking of big decisions, Sam Altman hinted twice, once in Jordan and once in India, of possible regrets he might have had over firing the starting gun in the AI race.

We're definitely going to have some huge regrets 20 years from now. I hope what we can say is that we did far, far, far more good than bad. And I think we will. I think that's true. But the downside here is pretty big. And I think we feel that weight every day.

Honestly, I think if we're going to regret something, it may be that we already pushed the button. Like we've already launched this revolution. It's somewhat out of our hands. I think it's going to be great. But like this is going to happen now, right? Like this, we're out of the world is like out of the gates.

I guess the thing that I lose the most sleep over is that we already have done something really bad. I don't think we have. But the hypothetical that we're going to have to do something really bad is that we're going to have to do something really bad. I don't think we have.

But the hypothetical that we're going to have to do something really bad is that we're going to have to do something really bad. that we, by launching Chachi PT into the world, shot the industry out of a rail gun and we now don't get to have much impact anymore.

And there's gonna be an acceleration towards making these systems which again, I think will be used for tremendous good and I think we're gonna address all the problems. But maybe there's something in there that was really hard and complicated in a way we didn't understand, and we've now already kicked this off.

- But back to Tel Aviv where both Sam Altman and OpenAI's chief scientist Ilya Satskova agreed that the risks from superintelligence were not science fiction. - To the last question, the superintelligent AI that's out of control, yeah, that'd be pretty bad. Yeah, so it's like, it would be a big mistake to build a superintelligence AI that we don't know how to control.

- I think the world should treat that not as a, you know, ha ha, never gonna come sci-fi risk, but something that we may have to confront in the next decade, which is not very long. - On a lighter note, Sam Altman didn't seem that perturbed, not just about a deep fake of himself, but also on society getting used to misinformation.

- I wanna play a clip, maybe you guys can put on a clip of something I recently heard Sam speak somewhere and we can talk about it a bit. Could you play the clip please? - Hi, my name is Sam and I'm happy to be here today. Thank you all for joining.

I also wanted to say that the gentleman on stage with me is incredibly good looking. And I also want to say that you should be very careful with videos generated with artificial intelligence technology. - Okay, so you didn't say that recently, but nonetheless, I think it raises a real question, right?

When, you know, this video, if you look closely, you can see the lips aren't perfectly synced, but like you said, this stuff is only gonna get better and exponentially better. - Yeah, so that was like deeply in the uncanny valley. It's very strange to watch, but we're not that far away from something that looks perfect.

There's a lot of fear right now about the impact this is gonna have on elections and on our society and how we ever trust media that we see. I have some fear there, but I think when it comes to like a video like that, I think as a society, we're gonna rise to the occasion.

We're gonna learn very quickly that we don't trust videos unless we trust the sort of provenance. If people are saying something really important, they'll cryptographically sign it. - Indeed, throughout the world tour, Sam Altman repeatedly stated that he didn't believe there should be a video that looks perfect. Sam Altman repeatedly stated that he didn't believe there should be a video that looks perfect.

Indeed, throughout the world tour, Sam Altman repeatedly stated that he didn't believe there should be a video that looks perfect. Sam Altman repeatedly stated that he didn't believe there should be a video that looks perfect. Sam Altman repeatedly stated that he didn't believe there should be any regulation of current models.

- Everybody wants great education, productivity gains, discovery of new science, all of this stuff that's gonna happen, and no one wants to destroy the world. No one wants to do things, not even that bad, but still bad. I totally believe it is possible to not stifle innovation and to address the big risks.

I think it would be a mistake to go regulate the current models of today. - And in Poland, his co-founder Wozzech Zaremba agreed, saying the risks of a video that looks perfect would be a mistake. saying the risks of a video that looks perfect would be a mistake. But the risks of superintelligence were 10 years away.

- Also, I would say that the fear is the fear of AI of the future, not the AI of today. If the trajectory that we are on will continue, then in the decade or so, there will be built systems which are as powerful as today corporations. But if I could speak to Sam Altman, I would bring his attention to this paper published this week.

This is a study out of Harvard University, which is a university that is very well-versed in AI. This study was published in the Harvard University and MIT, and it involved some non-scientist students working for one hour. In that hour, they were able to get chatbots to suggest four potential pandemic pathogens, explain how they can be generated from synthetic DNA using reverse genetics, supplied the names of DNA synthesis companies unlikely to screen orders, and identify detailed protocols and how to troubleshoot them.

And they say that collectively, these results suggest that LLMs, will make pandemic class agents widely accessible, even to people they say with little or no lab training. And then there's this, these results strongly suggest that the existing evaluation and training process for large language models is inadequate to prevent them from providing malicious actors with accessible expertise relevant to inflicting mass death.

And that more immediately, if unmitigated LLM chatbots render pandemic class agents more accessible, even to people without training in the life sciences, the number of individuals capable of killing tens of millions of people will dramatically increase. They recommend that, at a minimum, new LLMs larger than GPT-3 should undergo evaluation by third parties, skilled in assessing catastrophic biological risks before controlled access is given to the general public.

Notice they said "larger than GPT-3", so that strongly contradicts Sam Altman's assertion that current models like GPT-4 shouldn't have any regulation. They say that even open source communities should welcome safeguards because a single instance of misuse and mass death would trigger a backlash including the imposition of extremely harsh regulations.

One specific recommendation was that if biotech and information security experts were able to identify the set of publications most relevant to causing mass death, and companies like OpenAI and Google curated their training datasets to remove those publications, then future models trained on the curated data would be far less capable of providing anyone intent on harm with the "recipes for the creation or enhancement of pathogens".

This seems like an absolutely obvious move to me and I think Ilya Satskova would agree. We are talking about as time goes by and the capability keeps increasing, you know, and eventually it goes all the way to here, right? Right now we are here. Today, that's where we are.

That's where we're going to get to. When we get to this point, then yeah, it's very powerful technology. It can be used for amazing applications. You can say cure all disease. On the flip side, you can say create a disease. Much more worse than anything that existed before. That'd be bad.

Moving on to the ChatGPT leak, it seems like we're going to get a new workspace where we can customize our interaction with ChatGPT, giving it files and a profile with any information that you'd like ChatGPT to remember about you and your preferences. This was hinted at in the chat, but I'm not sure if this is the right way to do it.

We were hinted at on the world tour when one of Sam Altman's guests, Johannes Heidecker from OpenAI Research, talked about customizing models. We are trying to make our models both better at following certain guardrails that should never be overwritten, not with jailbreaks, not if you ask nicely, not if you threaten it.

And we're also trying to make our models better at being customizable, making them listen more to additional instructions of what kind of behavior the user or the developer wants. On a lighter note, the leaders of OpenAI were asked in Seoul, the capital of South Korea, about the mixing of AI and religion.

Do you expect AI to replace the role of religious organizations like church? I think that it's a good question how all human societies will integrate AI. And we've already seen people building AI pastors, for example, and so the constituents can ask questions to this pastor that can cite Bible verses and it can give advice.

But now back to Poland, where Sam Altman called open source "unstable" "Realizing that open source is unstoppable and shouldn't be stopped, and so this stuff is going to be out there and as a society we have to adapt." But speaking of stopping AI, Sam Altman was asked about his own loved ones, and in response he gave a utopic vision of the future and called the current world "barbaric".

If you truly believe that AI imposes a danger to humankind, why keep developing it? Aren't you afraid for your own dear ones and family? I think it's a super fair and good question. And the most troublesome part of our jobs is that we have to balance this incredible promise and this technology that I think humans really need, and we can talk about why in a second, with confronting these very serious risks.

Why to build it? Number one, I do think that when we look back at the standard of living and what we tolerate for people today, it will look even worse than when we look back at how people lived 500 or 1000 years ago. And we'll say like, man, can you imagine that people lived in poverty?

Can you imagine people suffered from disease? Can you imagine that everyone didn't have a phenomenal education and were able to live their lives however they wanted? It's going to look barbaric. I think everyone in the future is going to have better lives than the best people of today. I think there's like a moral duty to figure out how to do that.

I also think this is like unstoppable, like this is the progress of technology. It won't work to some degree. It's going to stop it. And so we have to figure out how to manage the risk. He doesn't seem to be 100% sure on this front though. And here is an interview he gave with The Guardian when he was in London for his world tour.

Speaking of super intelligence, he said, "It's not that it's not stoppable. If governments around the world decided to act in concert to limit AI development, as they have in other fields such as human cloning or bioweapon research, they may be able to." But then he repeated, "But that would be to give up all that is possible.

I think that this will be the most tremendous leap forward, in terms of the quality of life for people that we've ever had." I did try to get tickets for the London leg of his world tour, but they were sold out within half an hour. Oh well. Sam Oldman does think that behaviour will change, however, when these AGI labs stare existential risk in the face.

Sam Oldman: "One of the things we talked about is what's a structure that would let us warmly embrace regulation that would hurt us the most. And now that the time has come for that, we're out here advocating around the world for regulation that will impact us the most. So, of course, we'll comply with it.

But I think it's more easy to get good behaviour out of people when they are staring existential risk in the face. And so I think all of the people at the leading edge here, these different companies, now feel this, and you will see a different collective response than you saw from the social media companies." And in terms of opportunities, both Sam Oldman and Ilya Sutskova talked about solving climate change.

Sam Oldman: "I don't want to say this because climate change is so serious and so hard of a problem, but I think once we have a really powerful super intelligence, addressing climate change will not be particularly difficult for a system like that." Ilya Sutskova: "We can even explain how.

Here's how you solve climate change. You need a very large amount of efficient carbon capture. You need the energy for the carbon capture, you need the technology to build it, and you need to build a lot of it. If you can accelerate the scientific progress, which is something that a powerful AI could do, we could get to a very advanced carbon capture much faster.

We could get to a very cheap power much faster. We could get to cheaper manufacturing much faster. Ilya Sutskova: "I think that's a very important point. Combine those three: cheap power, cheap manufacturing, advanced carbon capture. Now you build lots of them. And now you sucked out all the excess CO2 from the atmosphere." Sam Oldman: "You know, if you think about a system where you can say, 'Tell me how to make a lot of clean energy cheaply.

Tell me how to efficiently capture carbon. And then tell me how to build a factory to do this at planetary scale.' If you can do that, you can do a lot of other things too." Ilya Sutskova: "Yeah. With one addition that not only you ask it to tell it, you ask it to do it." Sam Oldman: "That would indeed be amazing.

But think of the power we would be giving to an AI if it was able to just do it, just create those carbon capture factories." Ilya Sutskova: "If we did make that decision, one thing that would help would be reducing hallucinations." Sam Oldman: "I think we will get the hallucination problem to a much, much better place.

It will take us, my colleagues weigh in, I think it'll take us a year and a half, two years, something like that. But at that point, we won't still talk about these." Ilya Sutskova: Sam Oldman talked about that in New Delhi. That timeframe of 18 months to two years is ambitious and surprising.

But now onto jobs, which Sam Oldman was asked about on every leg of the tour. On this front though, I do think it was Ilya Sutskova who gave the more honest answer. Sam Oldman: "Economic dislocation indeed, like we already know that there are jobs that are being impacted or they're being affected.

In other words, some chunks of the jobs can be done. You know, if you're a programmer, you don't write functions anymore, copilot writes them for you. If you're an artist though, it's a bit different because a big chunk of the artists' economic part of them activity has been taken by some of the image generators.

And while new jobs will be created, it's going to be a long period of economic uncertainty. There is an argument to be made that even when they have full human level AI, full AGI, people will still have economic activity to do. I don't know whether that's the case, but in either event, we will need to have something that will soften the blow to allow for a smoother transition either to the totally new professions or even if not, then we want government, the social systems will need to keep keen." I do think the changes in the job market will be dramatic and we'll be following the story closely.

One thing I definitely agree with Sam Oldman on though, is the deep, almost philosophical change that this solving of intelligence has brought to humanity. Sam Oldman: "I grew up implicitly thinking that intelligence was this like really special human thing and kind of somewhat magical. And I now think that it's sort of a fundamental property of matter.

And that's definitely a change to my worldview. The history of scientific discovery is that humans are less and less at the center. We used to think that sun rotated around us and then maybe at least we were, if not that, we were going to be the center of the galaxy and there wasn't this big universe.

And then multiverse really is kind of weird and depressing. And if intelligence is a special, again, we're just further and further away from main character energy. But that's all right. That's sort of like a nice thing to realize actually." It's a bit like a Copernican and Darwinian revolution all rolled in one.

But I'll give the final word to Greg Brockman in Seoul who talked about the unpredictability of scaling up models 10 times. Greg Brockman: "That is the biggest theme in the history of AI is that it's full of surprises. Every time you think you know something, you scale it up 10x, turns out you knew nothing.

And so I think that we as a humanity, as a species are really exploring this together." Being all in it together and knowing nothing sounds about right. But thank you for watching to the end. I know that Sam Altman has a couple more stops. I think it's Jakarta and Melbourne on the world tour and I'll be watching those of course.

But for now, thank you and have a wonderful day.