
Altman Out: Reasons, Reactions and the Repercussions for the Industry



00:00:00.000 | It's not often that personnel changes are worth a video, but these ones are as they look set to rearrange the generative AI landscape.
00:00:10.160 | I've read pretty much every report, article, exclusive and rumor out there on the firing of the CEO of OpenAI, Sam Altman,
00:00:18.960 | and the resignation of the president and co-founder, Greg Brockman.
00:00:23.040 | I'm going to try to distill the most interesting bits and cover angles that I don't think others have.
00:00:29.080 | But let's start with this bombshell tweet from Greg Brockman, again, the president and co-founder of OpenAI, or I should say the former president.
00:00:37.960 | He said that last night Sam got a text from Ilya Sutskever, the chief scientist of OpenAI and a key board member, asking to talk at noon on Friday.
00:00:47.840 | And at this point, you can tell that either Sam Altman or a third party, maybe a lawyer, drafted this bit, because it goes into the third person.
00:00:54.200 | Sam joined a Google Meet and the whole board, except Greg himself, was there.
00:00:59.520 | Ilya told Sam he was being fired and that the news was going out very soon.
00:01:03.440 | But that must have been a very short call because at 12:19pm, Greg got a text from Ilya asking for a quick call.
00:01:10.840 | Firing the public face of your company, the CEO and co-founder, Sam Altman, in a 15-minute, no-notice Google Meet call is quite a dramatic move.
00:01:21.560 | Anyway, Ilya Sutskever and the board were on a roll because four minutes later, Greg Brockman was told that he was being removed from the board.
00:01:29.680 | Interestingly, they told him he was vital to the company and he would retain his role.
00:01:34.840 | And I'll get to OpenAI's statement in a moment.
00:01:37.160 | It does seem, though, that literally one of the only people in the world who had any real notice was Mira Murati, the new interim CEO, who found out the night prior.
00:01:47.640 | Anyway, bear in mind that statement from Ilya Sutskever that Greg Brockman was vital to the company.
00:01:53.480 | And even the OpenAI blog post put it like this.
00:01:56.120 | As part of this transition, Greg Brockman will be stepping down as chairman of the board (which makes it seem voluntary) and will remain in his role at the company, reporting to the CEO.
00:02:07.440 | We now know that Greg Brockman had other ideas, saying, based on today's news, the firing of Sam Altman, I quit.
00:02:14.280 | So that's someone supposedly vital to the company who is no longer going to be there.
00:02:19.800 | Now, at this point, the obvious question is, well, what did Sam Altman do to get fired?
00:02:24.080 | It must have been bad enough to warrant firing him in a no-notice, 15-minute Google Meet call, but apparently not so bad that it stopped Greg Brockman from resigning in solidarity.
00:02:34.080 | Well, there are one or two leading theories, which I'll get to in a moment.
00:02:37.320 | But first, what did the board say?
00:02:39.240 | They say that his departure follows a deliberative review process by the board, which concluded that he was not consistently candid or honest in his communications with the board.
00:02:49.480 | And they go on: hindering its ability to exercise its responsibilities.
00:02:54.360 | Their responsibility, which they later reiterate, is building safe AGI that benefits all of humanity.
00:03:01.200 | So they must have believed that Sam Altman not being honest hindered their ability to build safe AGI.
00:03:08.200 | They say the board no longer has confidence in his ability to continue leading OpenAI.
00:03:13.560 | The natural follow on question, therefore, becomes what wasn't he honest about?
00:03:18.520 | It must have been something fairly dramatic that he wasn't candid about.
00:03:21.720 | Otherwise, you wouldn't fire the CEO, giving all of the investors literally no notice and doing it in such a way that multiple other employees have resigned, not just Greg Brockman.
00:03:32.800 | The Information, in an exclusive, put it like this: Jakub Pachocki, the company's director of research; Aleksander Madry, head of a team evaluating potential risks from AI;
00:03:42.320 | and Szymon Sidor, a seven-year researcher at the startup, have told associates that they have resigned, too.
00:03:47.720 | The departures are a sign of immense disappointment among some employees and underscore long-simmering divisions at the ChatGPT creator about AI safety practices.
00:03:58.040 | And don't forget, those aren't just any employees.
00:04:00.360 | Jakub Pachocki is the GPT-4 lead and director of research, or he was.
00:04:05.240 | So we have the guy responsible for the pre-training of GPT-4 resigning, and the "vital" Greg Brockman resigning too.
00:04:12.480 | Will there even be a viable OpenAI after this?
00:04:15.720 | Well, yes, according to many people.
00:04:17.680 | And I'll get to that in a second.
00:04:19.000 | But what about Aleksander Madry?
00:04:20.720 | Well, it was just in late October, about three weeks ago, when he was announced to be the head of their preparedness team.
00:04:27.480 | That's a super important role leading the team that will help track, evaluate, forecast and protect against catastrophic risks.
00:04:34.960 | So it seems if this was a safety play, it's quite strange for him to resign too.
00:04:40.080 | Something definitely doesn't add up here.
00:04:41.800 | It's not just the safety people versus the accelerationists.
00:04:44.960 | And in another exclusive from The Information, apparently there was an impromptu, unplanned all-hands meeting following the firing.
00:04:53.360 | Ilya Sutskever took questions, and at least two employees asked him this:
00:04:57.760 | Did the firing amount to a coup or hostile takeover?
00:05:02.000 | The people in the room felt the question implied that Sutskever may have felt Altman was moving too quickly to commercialise the software
00:05:09.680 | and that that was at the expense of potential safety concerns.
00:05:13.440 | Anyway, Sutskever replied like this.
00:05:15.800 | "You can call it that way," Sutskever said about the coup allegation, "and I can understand why you chose this word."
00:05:21.600 | But I disagree with this.
00:05:23.240 | This was the board doing its duty to the mission of the non-profit, which is to make sure that OpenAI builds AGI that benefits all of humanity.
00:05:31.480 | He was then asked whether these backroom removals are a good way to govern the most important company in the world.
00:05:37.720 | That's a somewhat hyped claim, but nevertheless, he answered, I mean, fair.
00:05:41.600 | I agree that there is not an ideal element to it, 100%.
00:05:45.920 | So far, we have a notion of Altman not being totally honest about something to do with generating profits for the company in a way that might put safety at risk.
00:05:54.280 | It feels to me that Sutskever doesn't do niceties, hence the 15-minute firing of Altman.
00:06:00.120 | And maybe he has an issue with what he perceives to be big egos.
00:06:04.080 | He recently tweeted this.
00:06:05.320 | Ego is the enemy of growth.
00:06:07.680 | At this point, I'll note that I feel there has been some amount of bullying of other members of the board who made the decision to fire Altman.
00:06:15.120 | But really, it seems pretty clear that the driving force behind it was Ilya Sutskever.
00:06:20.480 | So rather than blaming less experienced board members, I would direct questions towards Ilya Sutskever.
00:06:25.920 | Anyway, Microsoft, whose stock is down almost 2% in the wake of this news, has rushed out a statement.
00:06:32.600 | Remember, they were given literally one minute's notice of this development.
00:06:36.560 | The Microsoft CEO Satya Nadella and CTO Kevin Scott, who were instrumental in bringing OpenAI into partnership with Microsoft, expressed utmost confidence in OpenAI following the unexpected news.
00:06:49.320 | Nadella, who just two weeks ago was on stage with Altman when Altman said, "I look forward to building AGI together," put out a fairly cold statement.
00:06:57.520 | He said, "We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda.
00:07:05.360 | We remain committed to our partnership and to Mira and the team."
00:07:08.880 | Remember, Microsoft has not only invested $13 billion into OpenAI, but promised them the most compute in the world to create AGI.
00:07:18.320 | They still need OpenAI to do really well.
00:07:21.440 | But again, we're left with that central question: what did he do that was so bad he was fired so abruptly,
00:07:27.760 | yet apparently not so bad that it stopped plenty of his colleagues from resigning in sympathy?
00:07:34.520 | Well, one very well-connected journalist in San Francisco, Kara Swisher, gave us a bit more detail.
00:07:40.200 | As she understands it, it was a misalignment of the profit versus non-profit adherents at the company.
00:07:45.920 | We've seen hints of that already.
00:07:47.440 | The developer day was an issue.
00:07:49.880 | But what about the developer day?
00:07:51.800 | It couldn't have been the hype around the developer day because Ilya Sutskever retweeted this post by Sam Altman,
00:07:58.680 | talking about the great stuff that they have to show off to developers.
00:08:02.320 | So it can't have been the developer day happening or the hype around it,
00:08:05.800 | nor could it have been the iterative deployment of GPT-4 Turbo, the latest model.
00:08:11.280 | Mira Murati, the new interim CEO, said this about iterative deployment.
00:08:16.160 | She was asked by Wired Magazine whether she also believed in iterative, step-by-step deployment of models as a path toward AGI.
00:08:23.920 | And she said, "I haven't come up with a better way than iterative deployment.
00:08:28.080 | That way we get continuous adaptation and feedback from the real world feeding back into the technology."
00:08:33.560 | And it does seem unlikely for him to be fired over the technical implementation of Dev Day.
00:08:39.400 | That's despite the fact that there have been quite a few challenges with Dev Day.
00:08:43.200 | OpenAI have had to suspend new ChatGPT Plus signups for a bit.
00:08:47.400 | And plenty of people have been complaining not just about the Assistants API,
00:08:51.120 | but about things like using GPT-3.5 in production.
00:08:54.280 | Apparently, since Dev Day, there has been a lot more downtime for GPT-3.5.
00:08:59.360 | But there's a reason I highlighted this tweet in particular.
00:09:02.600 | Yes, it's from the founder of Lexica, a great creative tool.
00:09:06.200 | And the conclusion is dramatic.
00:09:07.920 | There's no point in using GPT-3.5 in production.
00:09:10.960 | But when I first saw this post on my mobile, quite an interesting person had liked it.
00:09:15.160 | It's off their likes now, but originally, Andrej Karpathy liked this tweet.
00:09:19.280 | Again, that's the conclusion.
00:09:20.440 | There's no point in using GPT-3.5 in production.
00:09:23.280 | Instead, I found Mistral 7B, fine-tuned and deployed, to be way cheaper and better.
00:09:28.120 | So maybe Dev Day included some rushed releases and the technical challenges go deeper than we thought.
00:09:33.760 | Having said that, it does seem like OpenAI are trying to bounce back
00:09:37.920 | with one of their engineering managers saying this.
00:09:40.360 | In response to the firing, he said,
00:09:41.680 | "For those wondering what will happen next, the answer is we'll keep shipping."
00:09:45.520 | Sam Altman and Greg Brockman weren't micro-managers.
00:09:48.720 | The sparkle comes from the many geniuses here in research, product, engineering, and design.
00:09:53.600 | There's a clear internal uniformity among these leaders that we're here for the bigger mission.
00:09:59.360 | Remember that safe AGI that's beneficial for all humanity.
00:10:03.520 | And even one of the co-founders of OpenAI, Wojciech Zaremba, said this.
00:10:07.960 | "It's been sad for me to see Sam Altman and Greg Brockman go.
00:10:11.440 | I love and respect them much.
00:10:12.840 | Despite all of these, the mission of OpenAI is bigger than any of us and stays the same.
00:10:19.000 | Build safe AGI to benefit humanity."
00:10:22.400 | Meanwhile, Sam Altman has been memeing a little bit.
00:10:25.360 | "If I start going off on a rant, the OpenAI board should go after me
00:10:30.080 | for the full value of my shares, which don't forget are zero.
00:10:34.000 | As in, they can't do anything to me."
00:10:36.280 | Of course, he did also describe how his day has been like reading his own eulogy
00:10:40.880 | and that he loved his time at OpenAI.
00:10:43.760 | It does seem like he loved being there as much as he loves lowercase letters.
00:10:48.000 | At this point, I do want to quickly bring in an enigmatic comment
00:10:52.200 | that Sam Altman made just two days ago.
00:10:55.080 | "I think this is like going to be the greatest leap forward that we've had yet far
00:10:58.880 | and the greatest leap forward of any of the big technological revolutions we've had so far.
00:11:03.560 | I'm super excited.
00:11:05.120 | I can't imagine anything more exciting to work on.
00:11:07.440 | And on a personal note, like four times now in the history of OpenAI,
00:11:10.800 | the most recent time was just in the last couple of weeks.
00:11:13.600 | I've gotten to be in the room when we sort of like push the front,
00:11:18.000 | the sort of the veil of ignorance back and the frontier of discovery forward.
00:11:22.480 | And getting to do that is like the professional honor of a lifetime."
00:11:25.640 | Now, remember that before training GPT-4,
00:11:28.480 | they trained smaller scale versions to test capabilities
00:11:32.480 | that allowed them to more accurately predict the full capabilities of GPT-4.
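As a rough illustration of that capability-prediction idea, here is a minimal sketch of fitting a power-law scaling curve to the losses of a few small training runs and extrapolating to a much larger compute budget. This is not OpenAI's actual code or data; the run sizes, loss values, and the power-law-plus-floor form are all illustrative assumptions.

```python
# Minimal sketch: fit a power-law scaling curve to the final losses of
# small training runs, then extrapolate to the compute of a full-scale run.
# All numbers below are invented for illustration.
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical small runs: log10 of training compute and the final loss observed.
log10_compute = np.array([18.0, 19.0, 20.0, 21.0])
loss = np.array([3.10, 2.65, 2.30, 2.02])

def scaling_law(log_c, log_a, b, floor):
    # loss = A * C^(-b) + irreducible floor, written in log10-compute form
    # so the fit stays numerically well behaved.
    return 10.0 ** (log_a - b * log_c) + floor

params, _ = curve_fit(scaling_law, log10_compute, loss, p0=[2.0, 0.1, 1.0])

# Predict the loss of a (hypothetical) run with 1000x the compute of the largest small run.
predicted = scaling_law(24.0, *params)
print(f"Predicted full-scale loss: {predicted:.3f}")
```

The prediction is only as good as the assumption that the same curve keeps holding at the larger scale, which is exactly why an unexpectedly capable small-scale run would be notable.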
00:11:36.680 | When he refers to the veil of ignorance being pushed back,
00:11:39.880 | is he talking about the miniature version of GPT-5?
00:11:43.400 | If so, did he get the board's approval before greenlighting the training of that miniature model?
00:11:48.360 | Obviously, at this point, we hit the end of the road for speculation.
00:11:52.560 | We simply don't know until he tells his story or the board does.
00:11:56.880 | I imagine the board might go first because they are under immense pressure to explain themselves.
00:12:02.480 | But finally, what next for the industry?
00:12:05.080 | Well, the obvious prediction which I share with Jim Fan is a rival OpenAI emerges.
00:12:10.880 | It's a bit like the last time there was a major split at OpenAI.
00:12:14.760 | That was when Dario Amodei and the Anthropic team split off.
00:12:18.480 | That eventually gave us, of course, Claude 2.
00:12:21.080 | If they did decide to set up a rival company,
00:12:23.960 | I am sure it would immediately attract billions of dollars of investments.
00:12:28.120 | Don't forget, as The Information reports,
00:12:30.120 | Greg Brockman didn't formally manage any employees
00:12:33.320 | and instead spent much of his time writing software.
00:12:36.320 | In that role, he was one of the startup's most influential figures
00:12:39.560 | and had a say in everything from product decisions to setting directions for engineering teams.
00:12:45.000 | He could apparently tackle the most difficult coding problems
00:12:47.920 | and was bent on optimizing the speed and efficiency of the company's models.
00:12:52.360 | He would know pretty much everything there is to know about the training of GPT-5.
00:12:57.320 | Also bear in mind, and I recently spoke to someone who was offered a job at OpenAI,
00:13:01.960 | Sam Altman personally interviewed basically every employee who joined,
00:13:06.520 | so they all have a direct personal connection to Sam Altman.
00:13:10.280 | He might well be able to bring over dozens of OpenAI employees.
00:13:14.840 | And yes, some people are speculating that this will give a head start to Google Gemini.
00:13:20.120 | That was a model rumored to be better than GPT-4.
00:13:23.080 | However, in the last 48 hours, not only has it been strongly hinted
00:13:26.680 | that it's been pushed back to 2024 instead of December as expected,
00:13:30.840 | but also expectations are being dampened.
00:13:33.560 | The CEO of Google said, "Our goal with Gemini is to put out a state-of-the-art model,
00:13:38.440 | as in better than GPT-4.
00:13:39.960 | That is where we're going to start with Gemini 1.0,
00:13:43.320 | but it's only after that that they're going to add more innovations,
00:13:46.600 | truly make it multimodal, bringing in features like memory and planning."
00:13:50.760 | But in his original announcement, those were supposed to be part of Gemini.
00:13:54.800 | So now the expectations are more of a model just slightly better than GPT-4,
00:13:59.080 | but nothing truly revolutionary.
00:14:00.920 | And bear in mind that talent flows somewhat freely in Silicon Valley.
00:14:04.920 | One of the leaders of the Gemini project, who specializes in developing models
00:14:09.160 | that can incorporate both text and images, joined OpenAI in October.
00:14:13.080 | And of course, people talk behind the scenes.
00:14:15.160 | So if there were some secret sauce in Gemini, expect it to be shared with GPT-5.
00:14:20.920 | So ultimately, what we are left with is a further fracturing of the industry.
00:14:25.480 | We might even have a new AGI lab on our hands before the end of the year.
00:14:30.040 | Of course, let me know in the comments what you think is going to happen.
00:14:33.160 | I'm also working on my own announcement, but that's a story for another day.
00:14:37.080 | Thank you so much for watching to the end and have a wonderful day.