Back to Index

Are We Back to Before? OpenAI 2.0, Inflection-2 and a Major AI Cancer Breakthrough


Transcript

OpenAI are back, but that doesn't mean nothing has changed. This video is about giving you more context, not only on what happened, but on what it means and what else has been going on with AI that's arguably more important. And yes, it seems Sam Altman is also back at the helm of OpenAI, although interestingly, he said this morning, "I'm looking forward to returning to OpenAI."

Seems like the terms of that arrangement aren't yet finalized. One thing that's definitely not back in fashion and has arguably been lost in the chaos is the original OpenAI charter. It said it would be wise to view any investment in OpenAI Global in the spirit of a donation, with the understanding that it may be difficult to know what role money will play in a post-AGI world.

Furthermore, the charter takes precedence over any obligation to generate a profit. The company may never make a profit and the company is under no obligation to do so. With a tentative $90 billion valuation, I wonder how many at OpenAI would sign up to that charter again now. Anyway, back to Las Vegas, where Sam Altman was attending the biggest party of the weekend, the Formula One race, when he was fired by the board.

Many of the possible reasons for that were covered in my previous video, but Ilya Sutskever was asked directly to provide a reason. An hour and a half after firing Sam Altman, Ilya Sutskever was asked by OpenAI employees whether they would ever find out what Sam Altman did, to which he replied, no.

This is according to the Wall Street Journal. That reluctance apparently went so deep that even when the new CEO, Emmett Shear, asked, indeed demanded, that the board provide concrete evidence of Sam Altman's wrongdoing, they did not do so. He then said he could not remain in the CEO role and tried to help both sides find common ground to get Sam Altman reinstated.

Speaking of the outgoing CEO, Emmett Shear: apparently, when he announced an all-hands meeting, fewer than a dozen people showed up, and several employees responded with middle finger emojis. That follows on from the executive team of OpenAI pressing the board for 40 minutes for specific examples of Sam Altman's lack of candor or honesty.

The board refused, apparently citing legal reasons. And it seems at one point, law enforcement entities such as the US Attorney's Office even got involved. At this point, it seems a bit more of the reasoning came out. The board intimated that there wasn't one incident that led to their decision to eject Altman, but instead a consistent, slow erosion of trust over time that made them increasingly uneasy.

And, as I mentioned in my previous video, there was also Sam Altman's mounting list of outside AI-related ventures, like creating an AI chip, which raised questions for the board about how OpenAI's technology or intellectual property could be used. The board then discussed the matter with their own legal counsel. And after a few hours, they returned, still unwilling to provide specifics.

They said that Sam Altman wasn't candid and often got his way. And they said that he'd been so deft that they couldn't even give a specific example. Indeed, the only person to go on record about any lying actually works at Google DeepMind. This is Geoffrey Irving, who used to work at OpenAI.

He said Sam Altman lied to him on various occasions, and was deceptive, manipulative and worse to others. He said that when Altman lied to him, it was about other people, in order to drive wedges. But according to the New York Times, the firing might not have even been about lying after all.

Apparently, Sam Altman met one of the board members, Helen Toner, to discuss a paper she had written. Sam Altman then wrote an email to his colleagues reprimanding Ms. Toner for that paper, saying it was dangerous to the company, particularly at a time when the FTC was investigating OpenAI. Ms. Toner defended her paper as an academic paper, which I have read in part and I'll get to in a moment, and said it was analyzing the challenges that the public faces when trying to understand the intentions of the companies developing AI.

But Sam Altman disagreed and said, "I did not feel we're on the same page on the damage of all of this. Any amount of criticism from a board member carries a lot of weight." Honestly, though, before today, I hadn't heard of the paper or heard anyone discussing it. So what does the paper say?

Well, on page 30, we have this. First, heaps of praise for OpenAI for publishing a system card outlining the risks of GPT-4. After that praise, though, they do say that the release of ChatGPT four months earlier overshadowed the import of that system card. The paper then describes how ChatGPT's release sparked a sense of urgency inside major tech companies.

One thing the paper did do, though, was praise Anthropic, the rival to OpenAI. It singled out their different approach of delaying model releases until the industry had caught up. It quoted how Anthropic's chatbot, Claude, had been deliberately delayed in order to avoid advancing the rate of AI capabilities progress.

Now, many people won't remember this, but this was actually cited in the GPT-4 technical report. That report, released in March, warned of acceleration as a risk to be addressed, and advisors that they employed told them to delay GPT-4 by a further six months and be a bit quieter in the communication strategy around deployment.

Toner's paper did then describe the, quote, "frantic corner cutting" that the release of ChatGPT appeared to spur. But honestly, it's a fairly nuanced report, and it then critiques Anthropic's approach, saying it has been ignored by many. The report says, "By burying the explanation of Claude's delayed release in the middle of a long, detailed document posted to the company's website, Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed." And before you think that she must be alone at OpenAI in critiquing the release of ChatGPT, Sam Altman himself listed it as a possible future regret.

We're definitely going to have some huge regrets 20 years from now. I hope what we can say is that we did far, far, far more good than bad. And I think we will. I think that's true. But the downside here is pretty big, and I think we feel that weight every day.

Honestly, I think if we're going to regret something, it may be that we already pushed the button. Like, we've already launched this revolution. It's somewhat out of our hands. I think it's going to be great, but this is going to happen now, right? We're out of the... the world is out of the gates.

I guess the thing that I lose the most sleep over is that we already have done something really bad. I don't think we have, but the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun, and we now don't get to have much impact anymore, and there's going to be an acceleration towards making these systems, which again, I think will be used for tremendous good, and I think we're going to address all the problems.

But maybe there's something in there that was really hard and complicated in a way we didn't understand. And, you know, we've now already kicked this off. But if that's more of the context, what about what actually happened over the weekend? Well, Sam Altman's house apparently became a war room filled with OpenAI employees, including Mira Murati, the interim CEO.

They began to use X or Twitter in a coordinated fashion for their campaign. This was also the time of the open letter to the board of directors from the employees of OpenAI. By my last count, it was signed by 747 out of the 770 OpenAI employees. It described how the board stated that allowing the company to be destroyed wouldn't necessarily be inconsistent with the mission of creating safe and beneficial AGI.

The employees then threatened to resign from OpenAI and all join Microsoft. Notice how the threat is to take this step imminently unless all current board members resign. As of now, though, it seems like Sam Altman has agreed not to return to the board while Adam D'Angelo will stay. Furthermore, Sam Altman has agreed to allow an investigation into his conduct leading up to the firing.

Remember that Adam D'Angelo, the CEO of Quora and one of the people who voted to remove Sam Altman, was described by Sam Altman in a blog post from a few years ago as one of the few names that people consistently mention when discussing the smartest CEOs in Silicon Valley.

And Sam Altman said he has a very long term focus, which has become a rare commodity in tech companies these days. I wonder if he would reaffirm that about Adam D'Angelo now. But back to Microsoft, who had not only announced the arrival of Sam Altman, but even gotten a floor of their office in San Francisco ready with laptops and clusters of GPUs.

Satya Nadella, the CEO of Microsoft, even said that Sam Altman and Greg Brockman were in the process of joining Microsoft. So how are you envisioning this role with this sort of advanced AI team that Sam and Greg would be joining and leading? Can you explain that? And are they actually Microsoft employees right now?

Like, who do they work for? Yeah, so they're all in the process of joining. And yes, I mean, the thing is, look, we have a ton of AI expertise in this company. And Kevin Scott, the CTO of Microsoft, said this. We have seen your petition and appreciate your desire potentially to join Sam Altman at Microsoft's new AI research lab.

Speaking as if Sam Altman had already joined, he continued: Know that if needed, you have a role at Microsoft that matches your compensation and advances our collective mission. So what does Microsoft now make of OpenAI not joining them and instead staying independent? Satya Nadella said this. We are encouraged by the changes to the OpenAI board.

We believe this is an essential first step on a path to a more stable, well-informed and effective governance. Sam, Greg and I have talked and agreed they have a key role to play (notice it's not really defined) with the OpenAI leadership team in ensuring OpenAI continues to thrive and build on its mission.

But a more revealing quote from Nadella came last night in an interview with Kara Swisher: "We're never going to get back into a situation where we're surprised like this again." And Bloomberg are reporting that the final reworked OpenAI board will have up to nine new directors, with Microsoft possibly getting one or more seats.

One of the new board members will be Larry Summers, and I'll get to him in a moment. But first, there's something that slipped out during the chaos that I want to pick up on. At one point during the weekend, OpenAI's board approached Anthropic about a merger. And I'll give myself a little bit of credit for predicting that in one of my previous videos.

Anyway, apparently that was quickly turned down by Dario Amodei, the CEO of Anthropic. I'm going to get to other AI news in a second, but first a quick look at the views of Larry Summers on AI. He views language models as a major wake-up call for the white-collar worker.

I think it's coming for the cognitive class. ChatGPT is going to replace what doctors do, hearing symptoms and making diagnoses, before it changes what nurses do. Before we leave this OpenAI saga, there's one more thing I wanted to point out. I'm going all the way back to that OpenAI charter that I introduced at the beginning.

And the fifth clause says this, "The board determines when we've reached AGI." Again, by AGI, we mean a highly autonomous system that outperforms humans at most economically valuable work. When the board determines that we have attained AGI, such a system is excluded from IP licenses and other commercial terms with Microsoft.

But unless OpenAI and the board want this entire saga to repeat again, I think they really need to better define AGI. What does highly autonomous mean? More importantly, what does outperforming humans at most economically valuable work even mean? The AI and broader technology we have today, if we transported it back 200 years in the past, would be able to outperform most humans at most economically valuable work.

Obviously, what happened is that humans reacted to that and changed how they work. So this debate could arise again in the not too distant future. OpenAI talk about aligning superintelligence within four years. But if the board declares AGI, Microsoft might be able to say, "Well, we haven't yet reached post-labor economics.

People are still contributing in economically valuable ways as carers, or as entertainers, or YouTube hosts, or whatever else happens." I do feel the world needs to come up with a much more rigorous definition of AGI. So yes, I agree with Mustafa Suleyman that it has been an utterly insane weekend.

But it's time at last to move on to other AI news. Because apparently, Inflection AI have finished training the second-best large language model in the world. Some will be surprised that it's only the second-best LLM, still behind GPT-4. They do, though, say that they're scaling much further.

And they are more than capable of doing so. To train Inflection-2, the startup used only 5,000 NVIDIA H100 GPUs, when they have a supercomputer with 22,000 H100s. Nevertheless, you can see the improvements from Inflection-1 to Inflection-2, beating PaLM 2, which now powers Bard. It also apparently beats xAI's Grok-1 and Anthropic's Claude 2.0 model.

And Anthropic, who we've already mentioned multiple times today, have introduced Claude 2.1. Notice the number there, not going for the dramatic 2.5 or 3, just 2.1. That's kind of an admission that this is quite a small iterative update. Essentially, they've scaled up their context window to 200,000 tokens. That's maybe 150,000 words.

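To give a sense of what a 200,000-token window looks like in practice, here's a minimal sketch of asking Claude 2.1 a question about a very long document, assuming the Anthropic Python SDK's text completions interface as it existed around that time; the file name and question are hypothetical.

```python
# Minimal sketch: asking Claude 2.1 a question about a very long document.
# Assumes an ANTHROPIC_API_KEY environment variable and the Anthropic Python
# SDK's text completions interface; the file and question are made up.
import anthropic

client = anthropic.Anthropic()

with open("annual_report.txt") as f:  # hypothetical ~150,000-word document
    document = f.read()

response = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=500,
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here is a document:\n\n{document}\n\n"
        f"What were the three largest expenses mentioned?{anthropic.AI_PROMPT}"
    ),
)
print(response.completion)
```
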
If this were OpenAI, this would probably be an entire event, but it's just a blog post. Apparently, they've made it so that Claude 2.1 is much more honest about when it doesn't know the answer to a hard question. Instead of just blurting out a hallucination, it will be much more likely just to say, "I don't know." The number of errors that Claude 2.1 makes about questions relating to the beginning, middle and end of long documents seems to have gone down.

But that definitely doesn't mean that it's reliable for questions, particularly if those questions pertain to the middle of the document. Notice that if the fact you're trying to find comes right at the beginning (this top row) or right at the end of a document, it will do well. But especially when you're dealing with those really long books or multiple documents up to 200,000 tokens, the errors for the middle can be quite dramatic.

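For context, results like these typically come from positional retrieval probes: a known fact is planted at a chosen depth inside long stretches of filler text, and the model is asked to recall it. Here's a rough sketch of how such a probe might be constructed; the fact, filler and depths are all made up.

```python
# Rough sketch of a positional retrieval probe for long-context models.
# A made-up fact is planted at a chosen depth in filler text; the model is
# then asked to recall it, and accuracy is tracked per depth.
FACT = "The access code for the archive room is 7243."
QUESTION = "What is the access code for the archive room?"
FILLER = "The weather report noted mild temperatures and light winds. "

def build_probe(total_sentences: int, depth_fraction: float) -> str:
    """Return a prompt with FACT inserted depth_fraction of the way through."""
    insert_at = int(total_sentences * depth_fraction)
    body = FILLER * insert_at + FACT + " " + FILLER * (total_sentences - insert_at)
    return f"{body}\n\nQuestion: {QUESTION}"

# Probe the beginning, middle and end of a long synthetic document.
for depth in (0.0, 0.5, 1.0):
    prompt = build_probe(total_sentences=10_000, depth_fraction=depth)
    # Send `prompt` to the model and check whether "7243" appears in the reply.
```
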
Compare that to a shorter, say, 15,000-word document. This column is all green, even for the middle of that document. But all of what I've covered in this video pales in comparison to AI advances like this one. This was published in Nature on Monday. AI detection of a certain type of pancreatic cancer, pancreatic ductal adenocarcinoma, apparently the most deadly solid malignancy, has now overtaken mean radiologist performance. Translated: the AI is doing better than the average human doctor.

And by a fair margin, 34% in sensitivity and 6% in specificity. Sensitivity is about not missing cancers that are there, and specificity is about not saying there is a cancer when there isn't. But in both cases, it's outperforming mean radiologist performance. This is an AI advance that could genuinely save thousands of lives.

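If you want those two terms pinned down, here's a tiny illustration with made-up counts, purely to show the definitions rather than the figures from the Nature paper.

```python
# Sensitivity and specificity from a confusion matrix, using made-up counts
# purely to illustrate the definitions (not numbers from the Nature study).
true_positives = 90    # scans with cancer that were correctly flagged
false_negatives = 10   # scans with cancer that were missed
true_negatives = 950   # healthy scans correctly given the all-clear
false_positives = 50   # healthy scans wrongly flagged as cancer

sensitivity = true_positives / (true_positives + false_negatives)  # 0.90
specificity = true_negatives / (true_negatives + false_positives)  # 0.95
print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```
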
And on that crazy optimistic note, I want to end the video. Thank you as always for watching to the end. I promise to get to more non-OpenAI AI news in the future, and have a wonderful day.