In the last few days, Sam Altman, the CEO of OpenAI, has publicly stated how much money he expects the company to make and how he intends to distribute it. Many people will assume he is bluffing, but I think GPT-4 shows that he's not. This video will cover his plans, his predictions of massive inequality, and OpenAI's new paper on job impacts, together with just-released studies that back it all up.
But let's start with money. This week in the New York Times, he said that his grand idea is that OpenAI will capture much of the world's wealth through the creation of AGI, and then redistribute this wealth to the people. And yes, he mentioned several figures: $100 billion, $1 trillion, even $100 trillion.
If OpenAI makes even a fraction of these figures, Sam Altman will become one of the most important people on the planet. That's not to say that he would become that rich. The Wall Street Journal this week reported that he has no direct financial stake in the business. But deciding where trillions of dollars of wealth goes does make you incredibly powerful.
So where does he want all the money to go? Well, he seems to have two main ideas, plus a third one that I'll touch on at the end. His first idea is UBI, or Universal Basic Income. As he put it: "We also have funded the largest and most comprehensive universal basic income study, sponsored by OpenAI. And I think it's like an area we should just be looking into."
How exactly would that work? Well, he laid out his theory in this blog post, which he began by saying he's reminded every day about the magnitude of socioeconomic change that is coming sooner than most people believe.
He said that the price of many kinds of labor, which drives the costs of goods and services, will fall towards zero once sufficiently powerful AI joins the workforce. That's great for people buying products, he argued, but not so good for those working to earn a wage. So where would their money come from?
He proposed something called the American Equity Fund. It would be capitalized by taxing companies above a certain valuation 2.5% of their market value each year, and by taxing 2.5% of the value of all privately held land. By his calculation, that would be worth around $13,500 a year for every adult American by about 2030.
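To get a feel for where a number like that could come from, here is a very rough back-of-the-envelope sketch. To be clear, only the 2.5% rate is his; the market cap, land value and population figures below are illustrative assumptions of mine, not numbers from his essay.

```python
# Rough back-of-the-envelope sketch of an "American Equity Fund"-style payout.
# The 2.5% rate is Altman's; every other input below is an illustrative guess.

TAX_RATE = 0.025  # 2.5% of market value / land value, taken each year

market_cap_of_taxed_companies = 50e12   # assume ~$50 trillion of taxable market cap
privately_held_land_value = 30e12       # assume ~$30 trillion of privately held land
us_adults = 250e6                       # roughly 250 million American adults

annual_fund = TAX_RATE * (market_cap_of_taxed_companies + privately_held_land_value)
payout_per_adult = annual_fund / us_adults

print(f"Annual fund:      ${annual_fund / 1e12:.1f} trillion")
print(f"Payout per adult: ${payout_per_adult:,.0f} per year")
```

With those assumed inputs you get a roughly $2 trillion annual fund and about $8,000 per adult; plug in the larger market cap and land values Altman expects by 2030 and you get closer to the $13,500 he cites.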
He argued that that money would have much greater purchasing power than it does now, because technology would have greatly reduced the cost of goods and services. It does raise a question for me, though: for countries that aren't home to massive AI companies, where is that wealth going to come from?
On Lex Fridman's podcast, he admitted it wasn't a full solution: "I think it is a component of something we should pursue. It is not a full solution. I think people work for lots of reasons besides money." He thinks much more will be needed, because the cost of intelligence could fall to almost zero.
"My basic model of the next decade is that the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly towards zero, like surprisingly far." So what is his other main idea? Simply to use the money to fund the economy. Even with these two ideas, he admits there's still a big problem.
As he put it recently, he sees a lot of people who are not going to be able to afford to pay for their own services: a lot of people getting very rich in the short to medium term, but others not faring as well. "If it is as divergent as I think it could be, for like some people doing incredibly well and others not, I think society just won't tolerate it this time."
But Sam Altman isn't the only one making predictions. OpenAI itself released this paper around 10 days ago. It calculated that with access to a large language model, about 15% of all worker tasks in the US could be completed significantly faster at the same level of quality. But crucially, when incorporating software and tooling built on top of LLMs, this share increases to around 50% of all tasks.
That is a colossal impact, for better or worse, just with GPT-4 plus software. Page 17 of the paper has this table, which I think captures a lot of the interesting analysis. Let me briefly explain what it shows. There is a column of example occupations in the middle, along with the education and job preparation required for each of them.
But the columns on the right are where it gets interesting. These are the percentages of task exposure, graded alpha, beta and zeta. The columns titled H are the human assessments of exposure, and M is the machine assessment: they actually got GPT-4 to do an assessment too. Notice that, for the most part, GPT-4 agrees with the human assessors.
So what are these three grades? Alpha is the proportion of tasks in these occupations affected by current language models alone without any further advances or integrations. Beta represents the percentage of tasks exposed in a realistic scenario of language models plus a bit of software integration and a few advances.
You could think of it as their median prediction. Finally, zeta is a bit like their most extreme scenario, with full adoption of software plus advances in LLMs. By the way, we're not talking GPT-5 here or text-to-video, just basic software integration and modest advances, like a longer context window or text-to-image. The trend that immediately stuck out to me: as you go up the educational levels and salary ranges, the exposure of tasks to these large language models climbs and climbs, until you reach the master's degree or higher levels.
Then it seems to dip down a little. Maybe this is why Sam Altman predicted inequality: the people at the very cutting edge of science would still get paid well, probably better than ever, but there may be a further hollowing out of the middle class, with working-class occupations left largely untouched.
The paper also touches on why so few people might currently be focused on language models. I don't know about you, but have you noticed that it often feels like we're the ones super interested in this technology, while most people aren't that interested? Well, here might be one reason why.
Currently, only 3% of US workers have over half of their tasks exposed to LLMs. But that's only when considering existing language and code capabilities, without additional software or modalities. So not that many people are seeing a massive change in their work yet. But the paper says that when we account for other generative models and complementary technologies, "our human estimates indicate that up to 49% of workers could have half or more of their tasks exposed to LLMs."
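To make those two kinds of statistics concrete, the share of an occupation's tasks that are exposed versus the share of workers with at least half their tasks exposed, here is a toy sketch. The occupations, worker counts, task lists and labels are invented by me purely for illustration; the paper's actual method, using O*NET task lists and both human and GPT-4 raters, is far more involved.

```python
# Toy illustration of the two kinds of exposure statistics in the OpenAI paper.
# The occupations, worker counts, task lists and labels are invented for illustration.

# Each occupation has a list of tasks, marked True if "exposed", i.e. an LLM
# (plus tooling) could cut the time to complete that task by 50% or more.
occupations = {
    "interpreter":      {"workers": 50_000,  "tasks": [True, True, True, False]},
    "tax_preparer":     {"workers": 80_000,  "tasks": [True, True, False, False]},
    "short_order_cook": {"workers": 400_000, "tasks": [False, False, False, False]},
}

# Statistic 1: the share of tasks exposed within each occupation
# (the kind of number behind the alpha / beta / zeta columns).
for name, occ in occupations.items():
    share = sum(occ["tasks"]) / len(occ["tasks"])
    print(f"{name}: {share:.0%} of tasks exposed")

# Statistic 2: the share of *workers* with at least half of their tasks exposed
# (the kind of number behind the "3% today, up to 49% with tooling" figures).
total_workers = sum(occ["workers"] for occ in occupations.values())
half_exposed_workers = sum(
    occ["workers"] for occ in occupations.values()
    if sum(occ["tasks"]) / len(occ["tasks"]) >= 0.5
)
print(f"Workers with >=50% of tasks exposed: {half_exposed_workers / total_workers:.0%}")
```

The point is just that occupation-level exposure and worker-level exposure are different statistics, and the second depends heavily on where you set the threshold and which tooling you assume.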
Whether this means doubling the amount of work done or halving the number of workers doing it is something I'll talk more about later in the video. But maybe this was the dramatic economic impact that Ilya Sutskever once predicted on Lex Fridman's podcast. Fridman asked: "What do you think is the bar for impressing us? Do you think that bar will continuously be moved?"
Sutskever replied: "Definitely. I think when you start to see really dramatic economic impact, that's when, I think, that's in some sense the next barrier. Because right now, if you think about the work in AI, it's really confusing. It's really hard to know what to make of all these advances."
The paper also points out that the growing economic effect of LLMs is expected to persist and increase even if we halt the development of new capabilities today. It refers to recent studies revealing the potential of LLMs to program and control other digital tools, such as APIs, search engines, and even other generative AI systems.
In my previous video on self-improvement in GPT-4, I mentioned HuggingGPT, and I am doing a lot of research on the new Microsoft JARVIS model and Auto-GPT that I'm hoping to bring to you soon. But interestingly, there were some tasks where neither GPT-4 nor the human assessors could quite pin down how much impact LLMs would have.
Even GPT-4 couldn't quite figure out to what extent meetings and negotiations would be affected, or how far counselling and other jobs that involve empathy would be. And the paper rounds off by picking up on a particular survey of worker adoption of LLMs. Here is that survey, with the rather dramatic headline that 1 in 4 companies have already replaced workers with ChatGPT.
I don't think that assertion is fully backed up by the evidence, but they did survey 1,000 US business leaders, and there were some interesting findings. On the question of replacing workers, it says that when asked whether ChatGPT will lead to any workers being laid off by the end of 2023, 33% of business leaders say definitely, while 26% say probably.
Others are a bit more optimistic. Goldman Sachs, in an economic analysis published only a few days ago, says that about 7% of workers will be fully displaced over the next 10 years, but that most will be able to find new employment in only slightly less productive positions.
They also predict that generative AI will raise overall labor productivity growth by around 1.5 percentage points per year, which would effectively double the recent rate. Going back to Sam Altman: last week he was asked about this augmentation-versus-replacement question. "So in terms of really replaced jobs, is that a worry for you?"
"It is. I'm trying to think of, like, a big category that I believe can be massively impacted. I guess I would say customer service is a category that I could see there are just way fewer jobs relatively soon. I'm not even certain about that, but I could believe it. Whatever call center employees are doing now."
I found that last comment on call centers quite interesting, given that the GPT-4 technical report, which is a very important document, talked about using language models for upskilling in call centers. So does this mean immense productivity in the short term, but replacement in the long term?
A couple of days ago, Sam Altman put it like this: "I always try to be honest and say, in the very long term, I don't know what's going to happen here, and no one does, and I'd like to at least acknowledge that. In the short term, it certainly seems like there was a huge overhang of the amount of output the world wanted.
And if people are way more effective, they're just doing way more. And I think that's what's happening with coding; people that got early access to Copilot reported this, and now that the tools are much better, people report it even more. But we're now in this sort of GPT-4 era, seeing it in all sorts of other jobs as well, where you give people better tools and they just do more stuff, better stuff."
The productivity point is backed up by experiments like this one: when developers were split into two groups, half using GitHub Copilot (which is built on an OpenAI model) and half not, not only did more of those who used Copilot finish the task, they finished it in less than half the time. And this paper from a few weeks ago shows that when white-collar professionals were given a language model like ChatGPT, the time they took to do writing tasks dropped massively.
Compared to the control group, they took less than 20 minutes versus almost 30. And when the assisted and control groups were blindly graded, the mean grade was higher for those who used the language model. But surely, if productivity goes up, that means more jobs? Not necessarily. And what about the wages for those jobs? Well, again, not necessarily.
A couple of days ago, Sam Altman laid out how it might be more efficient to use one worker to do the tasks of two or three: "There's a huge cost premium on work that has to be split across two people.
There's the communication overhead. There's the miscommunication. There's everything else. And if you can make one person twice as productive, you don't just do as much as two people could do; maybe you do as much as three and a half or four people could do, for many kinds of tasks." So, is there anything that might slow this economic impact down?
I think there might be a few things, starting with politics. This survey from YouGov America was released only three days ago, and while I think it asks a somewhat leading question, it does show that over 69% of Americans would support a six-month pause on some kinds of AI development.
And if we see dramatic negative economic impact, I expect that figure would go higher. Politicians would then be incentivized to slow down, tax and/or regulate AI development. Indeed, two days ago, President Biden tweeted this: "When it comes to AI, we must both support responsible innovation and ensure appropriate guardrails." And also, don't forget, if you live in a country where English is not the main spoken language, GPT-4 isn't as good.
Notice that in many languages spoken in India, GPT-4 performs worse than the previous model, GPT-3.5, does in English. This is just one reason why Goldman Sachs predicted different levels of automation in different countries. Another factor could be cultural pushback: when Levi's wanted to test AI-generated clothing models, saying their reason was to "increase diversity", the announcement was met with backlash.
They then had to back down slightly and say that they're not replacing the job of any model. If people vote with their wallets for human-made goods and services, that could have a massive impact. And there is another big factor: people seem to intrinsically prefer human-made output to machine-generated output.
This piece came out recently in Wired, and in it they test the brain's chemical reaction to human-made art and computer-made art. The pictures were identical; it's just that sometimes people were told they were made by humans, and other times that they were made by computers.
It says a clear winner emerged: people not only claimed to prefer the identical human-made pictures, their brains' pleasure centres actually lit up more brightly. So human goods and services may have the edge simply by virtue of being human-made. But I want to end the video where I began it: with Sam Altman's piece in the New York Times.
Some of you may have noticed that I said Sam Altman had a third idea of how to distribute the wealth, one that I would mention at the end. Well, he has admitted that, if AGI does create all that wealth, he is not sure how the company will redistribute it. Money could mean something very different in this new world.
But what's the idea? He said, "I feel like the AGI can help with that." Maybe GPT-4 can help with that. Maybe GPT-5 will decide where the money made using GPT-5 will go. Thank you so much for watching to the end, and have a wonderful day.