Are We Back to Before? OpenAI 2.0, Inflection-2 and a Major AI Cancer Breakthrough
OpenAI are back, but that doesn't mean nothing has changed. 00:00:03.720 |
This video is about giving you more context, not only on what happened, 00:00:07.840 |
but what it means and what else has been going on with AI. 00:00:13.320 |
And yes, it seems Sam Altman is also back at the helm of OpenAI, 00:00:17.920 |
although interestingly, he said, "I'm looking forward to returning to OpenAI." 00:00:22.760 |
Seems like the terms of that arrangement aren't yet finalized. 00:00:26.040 |
One thing that's definitely not back in fashion 00:00:28.480 |
and has arguably been lost in the chaos is the original OpenAI charter. 00:00:33.600 |
It said it would be wise to view any investment in OpenAI Global 00:00:37.800 |
in the spirit of a donation, with the understanding that it may be difficult 00:00:41.480 |
to know what role money will play in a post-AGI world. 00:00:45.240 |
Furthermore, the charter takes precedence over any obligation to generate a profit. 00:00:50.280 |
The company may never make a profit, and the company is under no obligation to do so. 00:00:58.280 |
I wonder how many at OpenAI would sign up to that charter again now. 00:01:02.520 |
Anyway, back to Las Vegas, where Sam Altman was attending 00:01:05.480 |
the biggest party of the weekend, the Formula One race. 00:01:10.440 |
Many of the possible reasons for the firing were covered in my previous video, 00:01:14.040 |
but Ilya Sutskever was asked directly to provide a reason. 00:01:17.560 |
An hour and a half after firing Sam Altman, Ilya Sutskever was asked 00:01:21.240 |
by OpenAI employees whether they would ever find out what Sam Altman did. 00:01:27.680 |
His answer, reportedly, was no. This is according to the Wall Street Journal. 00:01:30.240 |
That reluctance apparently went so deep that even when the new CEO, 00:01:34.200 |
Emmett Shear, demanded that the board provide concrete evidence 00:01:38.760 |
of Sam Altman's wrongdoing, they did not do so. 00:01:41.640 |
He then said he could not remain in the CEO role and tried to help 00:01:45.800 |
both sides find common ground to get Sam Altman reinstated. 00:01:49.600 |
Speaking of the outgoing CEO, Emmett Shear, apparently when he announced 00:01:53.160 |
an all-hands meeting, fewer than a dozen people showed up 00:01:56.520 |
and apparently several employees responded with middle finger emojis. 00:02:00.560 |
That follows on from the executive team of OpenAI pressing the board 00:02:05.080 |
for 40 minutes for specific examples of Sam Altman's lack of candor or honesty. 00:02:10.520 |
The board refused, apparently citing legal reasons. 00:02:13.440 |
And it seems that at one point, even US law enforcement entities made inquiries. 00:02:19.480 |
At this point, it seems a bit more of the reasoning came out. 00:02:22.800 |
The board intimated that there wasn't one incident that led 00:02:26.160 |
to their decision to eject Altman, but instead a consistent, 00:02:29.400 |
slow erosion of trust over time that made them increasingly uneasy. 00:02:33.640 |
And as I mentioned in my previous video, there was also Sam Altman's mounting list 00:02:37.400 |
of outside AI-related ventures, like creating an AI chip, 00:02:41.400 |
which raised questions for the board about how OpenAI's technology might end up being used. 00:02:46.800 |
The board then discussed the matter with their own legal counsel. 00:02:50.160 |
And after a few hours, they returned, still unwilling to provide specifics. 00:02:54.440 |
They said that Sam Altman wasn't candid and often got his way. 00:02:57.960 |
And they said that he'd been so deft that they couldn't even give a specific example. 00:03:03.640 |
Indeed, the only person to go on record about any lying actually works at Google DeepMind. 00:03:08.400 |
This is Geoffrey Irving, who used to work at OpenAI. 00:03:11.840 |
He said Sam Altman lied to him on various occasions, was deceptive, manipulative and worse to others. 00:03:17.360 |
He said that when Altman lied to him, 00:03:20.120 |
it was about other people, in an attempt to drive wedges. 00:03:23.040 |
But according to the New York Times, the firing might not have even been about lying after all. 00:03:27.680 |
Apparently, Sam Altman met one of the board members, Ms. Toner, to discuss a paper she had written. 00:03:33.040 |
Sam Altman then wrote an email to his colleagues reprimanding Ms. Toner for that paper, 00:03:38.000 |
saying it was dangerous to the company, particularly at a time when the FTC was investigating OpenAI. 00:03:43.400 |
Ms. Toner defended her paper as an academic paper, which I have read in part and I'll get to in a moment, 00:03:48.280 |
and said it was analyzing the challenges that the public faces when trying to understand the intentions of the companies developing AI. 00:03:55.280 |
But Sam Altman disagreed and said, "I did not feel we're on the same page on the damage of all of this. 00:04:00.760 |
Any amount of criticism from a board member carries a lot of weight." 00:04:04.120 |
Honestly, though, before today, I hadn't heard of the paper or heard anyone discussing it. 00:04:12.120 |
So let's take a look. First, the paper heaps praise on OpenAI for publishing a system card outlining the risks of GPT-4. 00:04:19.360 |
After that praise, though, they do say that the release of ChatGPT four months earlier overshadowed the import of that system card. 00:04:27.920 |
The paper then describes how ChatGPT's release sparked a sense of urgency inside major tech companies. 00:04:34.160 |
One thing the paper did do, though, was praise Anthropic, the rival to OpenAI. 00:04:40.240 |
It singled out their different approach of delaying model releases until the industry had caught up. 00:04:46.720 |
It quoted how Anthropic's chatbot, Claude, had been deliberately delayed in order to avoid advancing the rate of AI capabilities progress. 00:04:55.360 |
Now, many people won't remember this, but this was actually cited in the GPT-4 technical report. 00:05:00.760 |
That report, released in March, warned of acceleration as a risk to be addressed, 00:05:06.840 |
and advisors that they employed told them to delay GPT-4 by a further six months and be a bit quieter in the communication strategy around deployment. 00:05:16.480 |
Toner's paper did then describe the, quote, "frantic corner cutting" that the release of ChatGPT appeared to spur. 00:05:24.560 |
But honestly, it's a fairly nuanced report, and it then critiques Anthropic's approach, saying it was ignored by many. 00:05:31.480 |
The report says, "By burying the explanation of Claude's delayed release in the middle of a long, detailed document posted to the company's website, 00:05:39.640 |
Anthropic appears to have ensured that this signal of its intentions around AI safety has gone largely unnoticed." 00:05:45.960 |
And before you think that she must be alone at OpenAI in critiquing the release of ChatGPT, Sam Altman himself listed it as a possible future regret. 00:05:55.960 |
We're definitely going to have some huge regrets 20 years from now. I hope what we can say is that we did far, far, far more good than bad. 00:06:04.160 |
And I think we will. I think that's true. But the downside here is pretty big, and I think we feel that weight every day. 00:06:10.360 |
Honestly, I think if we're going to regret something, it may be that we already pushed the button. 00:06:16.840 |
Like, we've already launched this revolution. It's somewhat out of our hands. 00:06:21.000 |
I think it's going to be great, but this is going to happen now, right? We're out of the... the world is out of the gates. 00:06:28.600 |
I guess the thing that I lose the most sleep over is that we already have done something really bad. 00:06:34.680 |
I don't think we have, but the hypothetical that we, by launching ChatGPT into the world, shot the industry out of a railgun, 00:06:44.040 |
and we now don't get to have much impact anymore, and there's going to be an acceleration towards making these systems, 00:06:52.280 |
which again, I think will be used for tremendous good, and I think we're going to address all the problems. 00:06:57.320 |
But maybe there's something in there that was really hard and complicated in a way we didn't understand. 00:07:02.280 |
And, you know, we've now already kicked this off. 00:07:04.520 |
But if that's more of the context, what about what actually happened on the weekend? 00:07:08.640 |
Well, Sam Altman's house apparently became a war room filled with OpenAI employees, including Mira Murati, the interim CEO. 00:09:17.240 |
They began to use X or Twitter in a coordinated fashion for their campaign. 00:07:22.280 |
This was also the time of the open letter to the board of directors from the employees of OpenAI. 00:07:29.120 |
By my last count, it was signed by 747 out of the 770 OpenAI employees. 00:07:36.480 |
It described how the board stated that allowing the company to be destroyed wouldn't necessarily be inconsistent with the mission of creating safe and beneficial AGI. 00:07:47.240 |
The employees then threatened to resign from OpenAI and all join Microsoft. 00:07:52.920 |
Notice how the threat is to take this step imminently unless all current board members resign. 00:07:59.200 |
As of now, though, it seems like Sam Altman has agreed not to return to the board while Adam D'Angelo will stay. 00:08:05.560 |
Furthermore, Sam Altman has agreed to allow an investigation into his affairs before the firing. 00:08:10.960 |
Remember that Adam D'Angelo, the CEO of Quora and one of the people who voted to remove Sam Altman, 00:08:17.200 |
was described by Sam Altman in a blog post from a few years ago as one of the few names that people consistently mention when discussing the smartest CEOs in Silicon Valley. 00:08:27.560 |
And Sam Altman said he has a very long term focus, which has become a rare commodity in tech companies these days. 00:08:34.360 |
I wonder if he would reaffirm that about Adam D'Angelo now. 00:08:37.920 |
But back to Microsoft, who had not only announced the arrival of Sam Altman, 00:08:42.280 |
but even gotten a floor of their office in San Francisco ready with laptops and clusters of GPUs. 00:08:48.840 |
Satya Nadella, the CEO of Microsoft, even said that Sam Altman and Greg Brockman were in the process of joining Microsoft. 00:08:57.280 |
So how are you envisioning this role with this sort of advanced AI team that Sam and Greg would be joining and leading? 00:09:04.560 |
Can you explain that? And are they actually Microsoft employees right now? 00:09:11.600 |
Yeah, so they're all in the process of joining. 00:09:14.480 |
And yes, I mean, the thing is, look, we have a ton of AI expertise in this company. 00:09:19.840 |
And Kevin Scott, the CTO of Microsoft, said this. 00:09:22.800 |
We have seen your petition and appreciate your desire potentially to join Sam Altman at Microsoft's new AI research lab. 00:09:30.080 |
Speaking as if Sam Altman had already joined. 00:09:32.320 |
Know that if needed, you have a role at Microsoft that matches your compensation and advances our collective mission. 00:09:38.160 |
So what does Microsoft now make of OpenAI not joining them and instead staying independent? 00:09:45.200 |
We are encouraged by the changes to the OpenAI board. 00:09:48.640 |
We believe this is an essential first step on a path to a more stable, well-informed and effective governance. 00:09:55.520 |
Sam, Greg and I have talked and agreed they have a key role to play (notice it's not really defined) 00:09:59.680 |
with the OpenAI leadership team in ensuring OpenAI continues to thrive and build on its mission. 00:10:06.640 |
But a more revealing quote from Nadella came last night in an interview with Kara Swisher. 00:10:11.280 |
We're never going to get back into a situation where we're surprised like this again. 00:10:16.400 |
And Bloomberg are reporting that the final reworked OpenAI board will have up to nine new directors, 00:10:23.280 |
with Microsoft getting possibly one or more seats. 00:10:26.960 |
One of the new board members will be Larry Summers, and I'll get to him in a moment. 00:10:31.440 |
But first, there's something that slipped out during the chaos that I want to pick up on. 00:10:35.840 |
At one point during the weekend, OpenAI's board approached Anthropic about a merger. 00:10:41.440 |
And I'll give myself a little bit of credit for predicting that in one of my previous videos. 00:10:46.560 |
Anyway, apparently that was quickly turned down by Dario Amodei, the CEO of Anthropic. 00:10:51.920 |
I'm going to get to other AI news in a second, but first a quick look at the views of Larry Summers on AI. 00:10:58.480 |
He views language models as a major wake-up call for the white-collar worker. 00:11:07.520 |
ChatGPT is going to replace what doctors do, 00:11:11.280 |
hearing symptoms and making diagnoses, before it changes what nurses do. 00:11:18.000 |
Before we leave this OpenAI saga, there's one more thing I wanted to point out. 00:11:22.320 |
I'm going all the way back to that OpenAI charter that I introduced in the beginning. 00:11:26.560 |
And the fifth clause says this, "The board determines when we've reached AGI." 00:11:31.600 |
Again, by AGI, we mean a highly autonomous system 00:11:35.280 |
that outperforms humans at most economically valuable work. 00:11:38.880 |
When the board determines that we have attained AGI, 00:11:42.160 |
such a system is excluded from IP licenses and other commercial terms with Microsoft. 00:11:47.520 |
But unless OpenAI and the board want this entire saga to repeat again, 00:11:52.720 |
I think they really need to better define AGI. 00:11:57.920 |
More importantly, what does outperforming humans at most economically valuable work even mean? 00:12:02.720 |
The AI and broader technology we have today, if we transported it back 200 years in the past, 00:12:08.560 |
would be able to outperform most humans at most economically valuable work. 00:12:13.200 |
Obviously, what happened is that humans reacted to that and changed how they work. 00:12:17.360 |
So this debate could arise again in the not too distant future. 00:12:20.720 |
OpenAI talk about aligning superintelligence within four years. 00:12:24.240 |
But if the board declares AGI, Microsoft might be able to say, 00:12:27.760 |
"Well, we haven't yet reached post-labor economics. 00:12:30.320 |
People are still contributing in economically valuable ways as carers, 00:12:34.800 |
or as entertainers, or YouTube hosts, or whatever else happens." 00:12:38.560 |
I do feel the world needs to come up with a much more rigorous definition of AGI. 00:12:43.920 |
So yes, I agree with Mustafa Suleyman that it has been an utterly insane weekend. 00:12:48.960 |
But it's time at last to move on to other AI news. 00:12:52.960 |
Because apparently, Inflection AI have finished training 00:12:56.000 |
the second best large language model in the world. 00:12:59.440 |
Some will be surprised that it's only the second best LLM, still behind GPT-4. 00:13:04.160 |
Their announcement does, though, say that they are scaling much further. 00:13:10.000 |
To train Inflection-2, the startup used only 5,000 NVIDIA H100 GPUs, 00:13:17.280 |
when they have a supercomputer with 22,000 H100s. 00:13:22.240 |
Nevertheless, you can see the improvements from Inflection-1 to Inflection-2 across the benchmarks. 00:13:30.400 |
It also apparently beats xAI's Grok-1 and Anthropic's Claude 2.0 model. 00:13:35.760 |
And Anthropic, who we've already mentioned multiple times today, have released Claude 2.1. 00:13:41.520 |
Notice the number there, not going for the dramatic 2.5 or 3, just 2.1. 00:13:47.040 |
That's kind of an admission that this is quite a small iterative update. 00:13:51.520 |
Essentially, they've scaled up their context window to 200,000 tokens. 00:13:58.400 |
If this was OpenAI, this would probably have been an entire event. 00:14:03.040 |
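To make that 200,000-token claim concrete, here is a minimal sketch of how a long document might be passed to Claude 2.1, assuming the Anthropic Python SDK's text completions interface from that era; the file name and the question are hypothetical placeholders:

```python
import anthropic

# The client reads ANTHROPIC_API_KEY from the environment by default.
client = anthropic.Anthropic()

# Hypothetical long document: 200k tokens is roughly 150,000 words,
# i.e. several novels' worth of text in a single prompt.
with open("long_report.txt") as f:
    document = f.read()

completion = client.completions.create(
    model="claude-2.1",
    max_tokens_to_sample=300,
    prompt=(
        f"{anthropic.HUMAN_PROMPT} Here is a document:\n\n{document}\n\n"
        f"What does this document say about third-quarter revenue?"
        f"{anthropic.AI_PROMPT}"
    ),
)
print(completion.completion)
```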
Apparently, they've made it so that Claude 2.1 is much more honest 00:14:07.120 |
about when it doesn't know the answer to a hard question. 00:14:09.840 |
Instead of just blurting out a hallucination, 00:14:12.400 |
it will be much more likely just to say, "I don't know." 00:14:15.040 |
The number of errors that Claude 2.1 makes about questions relating to the middle, 00:14:19.840 |
beginning and end of long documents seems to have gone down. 00:14:24.160 |
But that definitely doesn't mean that it's reliable for such questions, 00:14:27.600 |
particularly when they pertain to the middle of the document. 00:14:31.680 |
Notice that if the fact you're trying to find comes right at the beginning, 00:14:35.120 |
this top row, or right at the end of a document, it will do well. 00:14:39.280 |
But especially when you're dealing with those really long books, 00:14:46.240 |
the errors for the middle can be quite dramatic. 00:14:48.800 |
Compare that to a shorter, say, 15,000 word document. 00:14:52.800 |
This column is all green, even for the middle of that document. 00:14:56.960 |
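For context, grids like that come from planting a known fact at different depths in a long document and checking whether the model can retrieve it. Here's a rough sketch of that kind of recall test, under the same SDK assumptions as above; the needle sentence, the filler text and the helper function are illustrative stand-ins, and a real test would also vary the total document length:

```python
import anthropic

client = anthropic.Anthropic()

# One known fact (the "needle") buried in filler text (the "haystack").
NEEDLE = "The magic number for this report is 42617."
FILLER = "The quick brown fox jumps over the lazy dog. " * 10000

def build_document(depth: float) -> str:
    """Plant the needle at a relative depth: 0.0 = start, 1.0 = end."""
    cut = int(len(FILLER) * depth)
    return FILLER[:cut] + NEEDLE + " " + FILLER[cut:]

def ask_claude(question: str, document: str) -> str:
    resp = client.completions.create(
        model="claude-2.1",
        max_tokens_to_sample=50,
        prompt=f"{anthropic.HUMAN_PROMPT} {document}\n\n{question}{anthropic.AI_PROMPT}",
    )
    return resp.completion

# Probe five depths; a red cell in the grid corresponds to a "missed" here.
for depth in (0.0, 0.25, 0.5, 0.75, 1.0):
    answer = ask_claude("What is the magic number for this report?",
                        build_document(depth))
    verdict = "recalled" if "42617" in answer else "missed"
    print(f"needle at depth {depth:.2f}: {verdict}")
```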
But all of what I've covered in this video pales in comparison to this. 00:15:05.280 |
AI detection of a certain type of pancreatic cancer, 00:15:08.800 |
pancreatic ductal adenocarcinoma, apparently the most deadly solid malignancy, 00:15:14.400 |
has now overtaken mean radiologist performance, 00:15:17.760 |
translated: AI doing better than human doctor performance. 00:15:21.840 |
And by a fair margin, 34% in sensitivity and 6% in specificity. 00:15:26.640 |
Sensitivity is about not missing cancers that are there, 00:15:29.840 |
and specificity is about not saying there is a cancer when there isn't. 00:15:33.680 |
But in both cases, it's outperforming mean radiologist performance. 00:15:37.520 |
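To make those two terms concrete, here's a toy calculation; the confusion-matrix counts below are invented for illustration, not taken from the study:

```python
# Hypothetical screening results for 300 scans (illustrative numbers only).
true_positives = 90    # scans with cancer that the model flagged
false_negatives = 10   # scans with cancer that the model missed
true_negatives = 180   # healthy scans the model correctly cleared
false_positives = 20   # healthy scans the model wrongly flagged

# Sensitivity: of all the real cancers, what fraction did we catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Specificity: of all the healthy scans, what fraction did we correctly clear?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")  # 90%
print(f"specificity = {specificity:.0%}")  # 90%
```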
This is an AI advance that could genuinely save thousands of lives. 00:15:41.840 |
And on that crazy optimistic note, I want to end the video. 00:15:47.680 |
I promise to get to more non-OpenAI AI news in the future.