
Altman Out: Reasons, Reactions and the Repercussions for the Industry


Transcript

It's not often that personnel changes are worth a video, but these ones are, as they look set to rearrange the generative AI landscape. I've read pretty much every report, article, exclusive, and rumor out there on the firing of the CEO of OpenAI, Sam Altman, and the resignation of the president and co-founder, Greg Brockman.

I'm going to try to distill the most interesting bits and cover angles that I don't think others have. But let's start with this bombshell tweet from Greg Brockman, again, the president and co-founder of OpenAI, or I should say the former president. He said that last night Sam got a text from Ilya Sutskever, that's the chief scientist of OpenAI and a key board member, asking to talk at noon on Friday.

And at this point, you can tell that either Sam Altman drafted this bit or a third party, maybe a lawyer, did, because it goes into the third person. Sam joined a Google Meet and the whole board, except Greg himself, was there. Ilya told Sam he was being fired and that the news was going out very soon.

But that must have been a very short call, because at 12:19 p.m., Greg got a text from Ilya asking for a quick call. Firing the public face of your company, the CEO and co-founder, Sam Altman, in a 15-minute, no-notice Google Meet call is quite a dramatic move. Anyway, Ilya Sutskever and the board were on a roll, because four minutes later, Greg Brockman was told that he was being removed from the board.

Interestingly, they told him he was vital to the company and that he would retain his role. And I'll get to OpenAI's statement in a moment. It does seem, though, that literally one of the only people in the world who had any real notice was Mira Murati, the new interim CEO, who found out the night prior.

Anyway, bear in mind that statement from Ilya Sutskever that Greg Brockman was vital to the company. And even the OpenAI blog post put it like this: as part of this transition, Greg Brockman will be stepping down as chairman of the board (which makes it seem voluntary) and will remain in his role at the company, reporting to the CEO.

We now know that Greg Brockman had other ideas, saying: based on today's news (the firing of Sam Altman), I quit. So that's someone supposedly vital to the company who is no longer going to be there. Now, at this point, the obvious question is: well, what did Sam Altman do to get fired?

It must have been bad enough to get him fired in a no-notice, 15-minute Google Meet call, but apparently not so bad that Greg Brockman wouldn't resign in solidarity. Well, there are one or two leading theories, which I'll get to in a moment. But first, what did the board say?

They say that his departure follows a deliberative review process by the board, which concluded that he was not consistently candid in his communications with the board, and they go on: hindering its ability to exercise its responsibilities. Their responsibility, which they later reiterate, is building safe AGI that benefits all of humanity.

So they must have believed that Sam Altman not being honest hindered their ability to build safe AGI. They say the board no longer has confidence in his ability to continue leading OpenAI. The natural follow-on question, therefore, becomes: what wasn't he honest about? It must have been something fairly dramatic that he wasn't candid about.

Otherwise, you wouldn't fire the CEO, giving all of the investors literally no notice and doing it in such a way that multiple other employees have resigned, not just Greg Brockman. The Information, in an exclusive, put it like this: Jakub Pachocki, the company's director of research; Aleksander Madry, head of a team evaluating potential risks from AI;

and Szymon Sidor, a seven-year researcher at the startup, have told associates that they have resigned, too. The departures are a sign of immense disappointment among some employees and underscore long-simmering divisions at the ChatGPT creator about AI safety practices. And don't forget, those aren't just any employees.

Jakub Pachocki is the GPT-4 lead and director of research, or he was. So we have the guy responsible for the pre-training of GPT-4 resigning, and the "vital" Greg Brockman resigning too. Will there even be a viable OpenAI after this? Well, yes, according to many people. And I'll get to that in a second.

But what about Aleksander Madry? Well, it was just in late October, about three weeks ago, that he was announced as the head of their Preparedness team. That's a super important role, leading the team that will help track, evaluate, forecast, and protect against catastrophic risks. So it seems, if this was a safety play, it's quite strange for him to resign too.

Something definitely doesn't add up here. It's not just the safety people versus the accelerationists. And in another exclusive from The Information, apparently there was an impromptu, unprepared all-hands meeting following the firing. Ilya Sutskever took questions, and at least two employees asked him this: did the firing amount to a coup or hostile takeover?

The people in the room felt the question implied that Sutskever may have felt Altman was moving too quickly to commercialise the software, and that that was at the expense of potential safety concerns. Anyway, Sutskever replied like this: "You can call it that way," Sutskever said about the coup allegation, "and I can understand why you chose this word."

But I disagree with this. This was the board doing its duty to the mission of the non-profit, which is to make sure that OpenAI builds AGI that benefits all of humanity. He was then asked whether these backroom removals are a good way to govern the most important company in the world.

That's a somewhat hyperbolic claim, but nevertheless, he answered: "I mean, fair. I agree that there is not an ideal element to it, 100%." So, so far, we have a notion of Altman not being totally honest about something to do with generating profits for the company that might put safety at risk.

It feels to me that Sutskever doesn't do niceties, hence the 15-minute firing of Altman. And maybe he has an issue with what he perceives to be big egos. He recently tweeted this: "Ego is the enemy of growth." At this point, I'll note that I feel there has been some amount of bullying of other members of the board who made the decision to fire Altman.

But really, it seems pretty clear that the driving force behind it was Ilya Sutskever. So rather than blaming less experienced board members, I would direct questions towards Ilya Sutskever. Anyway, Microsoft, whose stock is down almost 2% in the wake of this news, has rushed out a statement. Remember, they were given literally one minute's notice of this development.

The Microsoft CEO Satya Nadella and CTO Kevin Scott, who were instrumental in bringing OpenAI into partnership with Microsoft, expressed utmost confidence in OpenAI following the unexpected news. Nadella, who just two weeks ago was on stage with Altman when Altman said, "I look forward to building AGI together," put out a fairly cold statement.

He said, "We have a long-term agreement with OpenAI with full access to everything we need to deliver on our innovation agenda. We remain committed to our partnership and to Myra and the team." Remember, Microsoft has not only invested $13 billion into OpenAI, but promised them the most compute in the world to create AGI.

They still need OpenAI to do really well. But again, we're left with that central question: what did he do that was so bad as to get him fired so abruptly, yet not so bad that plenty of his colleagues wouldn't resign in sympathy? Well, one very well-connected journalist in San Francisco, Kara Swisher, gave us a bit more detail.

As she understands it, it was a misalignment of the profit versus non-profit adherents at the company. We've seen hints of that already. The developer day was an issue. But what about the developer day? It couldn't have been the hype around the developer day, because Ilya Sutskever retweeted this post by Sam Altman talking about the great stuff that they have to show off to developers.

So it can't have been the developer day happening or the hype around it, nor could it have been the iterative deployment of GPT-4 Turbo, the latest model. Mira Murati, the new interim CEO, said this about iterative deployment. She was asked by Wired magazine whether she also believed in iterative, step-by-step deployment of models as a path toward AGI.

And she said, "I haven't come up with a better way than iterative deployment. That way we get continuous adaptation and feedback from the real world feeding back into the technology." And it does seem unlikely for him to be fired over the technical implementation of Dev Day, despite the fact that there have been quite a few challenges with it.

OpenAI have had to suspend new ChatGPT Plus signups for a bit. And plenty of people have been complaining, not just about the Assistants API, but about things like using GPT-3.5 in production. Apparently, since Dev Day, there has been a lot more downtime for GPT-3.5. But there's a reason I highlighted this tweet in particular.

Yes, it's from the founder of Lexica, a great creative tool. And the conclusion is dramatic: there's no point in using GPT-3.5 in production. But when I first saw this post on my mobile, quite an interesting person had liked it. It's off his likes now, but originally, Andrej Karpathy liked this tweet.

Again, that's the conclusion: "There's no point in using GPT-3.5 in production. Instead, I found Mistral 7B, fine-tuned and deployed, to be way cheaper and better." A rough sketch of that kind of swap is below. So maybe Dev Day included some rushed releases, and the technical challenges go deeper than we thought.
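
To make that concrete, here's a minimal sketch of the swap being described: replacing a hosted GPT-3.5 call with a locally run Mistral 7B via the Hugging Face transformers library. To be clear, this is my illustration, not the Lexica founder's actual setup; the public instruct checkpoint stands in for a fine-tuned model, and the prompt is made up.

```python
# A minimal sketch (not the setup from the tweet) of swapping a hosted
# GPT-3.5 call for a locally served Mistral 7B via Hugging Face transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

# Public instruct checkpoint standing in for your own fine-tuned weights.
MODEL_ID = "mistralai/Mistral-7B-Instruct-v0.1"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

def complete(prompt: str, max_new_tokens: int = 128) -> str:
    """Generate a completion locally: no per-token fees, no provider downtime."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output = model.generate(**inputs, max_new_tokens=max_new_tokens)
    # Slice off the prompt tokens so only the newly generated text is returned.
    return tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                            skip_special_tokens=True)

print(complete("Summarize in one line: the board fired the CEO over a video call."))
```

The trade-off is operational: you take on serving a 7B model yourself, roughly 15 GB of weights in half precision, in exchange for lower marginal cost and independence from a provider's uptime.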

Having said that, it does seem like OpenAI are trying to bounce back, with one of their engineering managers saying this.

In response to the firing, he said, "For those wondering what will happen next, the answer is we'll keep shipping." Sam Altman and Greg Brockman weren't micro-managers. The sparkle comes from the many geniuses here in research, product, engineering, and design. There's a clear internal uniformity among these leaders that we're here for the bigger mission.

Remember that safe AGI that's beneficial for all humanity. And even one of the co-founders of OpenAI, Wojciech Zaremba, said this. "It's been sad for me to see Sam Altman and Greg Brockman go. I love and respect them much. Despite all of these, the mission of OpenAI is bigger than any of us and stays the same.

Build safe AGI to benefit humanity." Meanwhile, Sam Altman has been memeing a little bit. "If I start going off on a rant, the OpenAI board should go after me for the full value of my shares, which don't forget are zero. As in, they can't do anything to me." Of course, he did also describe how his day has been like reading his own eulogy and that he loved his time at OpenAI.

It does seem like he loved being there as much as he loves lowercase letters. At this point, I do want to quickly bring in an enigmatic comment that Sam Altman made just two days ago: "I think this is like going to be the greatest leap forward that we've had yet so far, and the greatest leap forward of any of the big technological revolutions we've had so far.

I'm super excited. I can't imagine anything more exciting to work on. And on a personal note, like four times now in the history of OpenAI, the most recent time was just in the last couple of weeks. I've gotten to be in the room when we sort of like push the front, the sort of the veil of ignorance back and the frontier of discovery forward.

And getting to do that is like the professional honor of a lifetime." Now, remember that before training GPT-4, they trained smaller-scale versions to test capabilities, which allowed them to more accurately predict the full capabilities of GPT-4. When he refers to the veil of ignorance being pushed back, is he talking about a miniature version of GPT-5?

If so, did he get the board's approval for training that miniature model? Obviously, at this point, we hit the end of the road for speculation. We simply don't know until he tells his story or the board does. I imagine the board might go first, because they are under immense pressure to explain themselves.
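
As an aside on that smaller-scale-versions point: the technique is usually framed as scaling-law extrapolation, fitting a power law to the losses of cheap training runs and reading off the predicted loss of a far bigger one; the GPT-4 technical report describes predictions made from runs using thousands of times less compute. Here's a toy sketch of the idea; the saturating power law is a standard choice of functional form, and every number below is invented purely for illustration.

```python
# Toy scaling-law extrapolation: fit a power law to the final losses of small
# training runs and predict the loss of a much larger run. All numbers are
# invented for illustration; none come from OpenAI.
import numpy as np
from scipy.optimize import curve_fit

def power_law(x, a, b, c):
    """Loss falls as a power law in compute, saturating at an irreducible floor c."""
    return a * x ** (-b) + c

# Hypothetical small-scale runs: compute (normalized units) vs. final loss.
compute = np.array([1.0, 10.0, 100.0, 1000.0])
loss = np.array([3.10, 2.58, 2.26, 2.05])

params, _ = curve_fit(power_law, compute, loss, p0=[1.0, 0.1, 2.0])

# Extrapolate three orders of magnitude past the largest small run.
big_run = 1e6
print(f"Predicted loss at {big_run:.0e} units of compute: "
      f"{power_law(big_run, *params):.2f}")
```

The notable part is that a handful of cheap runs can forecast, fairly tightly, the loss of a model that costs orders of magnitude more to train, which is what made GPT-4's performance predictable before the big run was ever launched.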

But finally, what next for the industry? Well, the obvious prediction, which I share with Jim Fan, is that a rival to OpenAI emerges. It's a bit like the last time there was a major split at OpenAI, when Dario Amodei and the Anthropic team split off. That eventually gave us, of course, Claude 2.

If they did decide to set up a rival company, I am sure it would immediately attract billions of dollars of investment. Don't forget, as The Information reports, Greg Brockman didn't formally manage any employees and instead spent much of his time writing software. In that role, he was one of the startup's most influential figures and had a say in everything from product decisions to setting directions for engineering teams.

He could apparently tackle the most difficult coding problems and was bent on optimizing the speed and efficiency of the company's models. He would know pretty much everything there is to know about the training of GPT-5. Also bear in mind (and I recently spoke to someone who was offered a job at OpenAI): Sam Altman personally interviewed basically every employee who joined, so they all have a direct personal connection to him.

He might well be able to bring over dozens of OpenAI employees. And yes, some people are speculating that this will give a head start to Google Gemini, a model rumored to be better than GPT-4. However, in the last 48 hours, not only has it been strongly hinted that Gemini has been pushed back to 2024 instead of December as expected, but expectations are also being dampened.

The CEO of Google said their goal with Gemini 1.0 is to put out a state-of-the-art model, as in better than GPT-4, and only after that will they add more innovations: truly making it multimodal, and bringing in features like memory and planning. But in his original announcement, those were supposed to be part of Gemini.

So now the expectations are more of a model just slightly better than GPT-4, but nothing truly revolutionary. And bear in mind that talent flows somewhat freely in Silicon Valley. One of the leaders of the Gemini project, who specializes in developing models that can incorporate both text and images, joined OpenAI in October.

And of course, people talk behind the scenes. So if there were some special sauce in Gemini, expect it to be shared with GPT-5. So ultimately, what we are left with is a further fracturing of the industry. We might even have a new AGI lab on our hands before the end of the year.

Of course, let me know in the comments what you think is going to happen. I'm also working on my own announcement, but that's a story for another day. Thank you so much for watching to the end and have a wonderful day.