
Do We Get the $100 Trillion AI Windfall? Sam Altman's Plans, Jobs & the Falling Cost of Intelligence


Chapters

0:00 Intro
0:28 Money
1:21 Altman's Theory
3:39 OpenAI
8:31 Worker Adoption
9:59 Productivity
11:54 Slow Down

Whisper Transcript

00:00:00.000 | In the last few days, Sam Altman, the CEO of OpenAI, has publicly stated how much money he
00:00:06.320 | expects the company to make and how he intends to distribute it. Many people will assume he is
00:00:12.540 | bluffing, but I think GPT-4 shows that he's not. This video will cover his plans, his predictions
00:00:18.780 | of massive inequality, and OpenAI's new paper on job impacts, together with just released studies
00:00:25.560 | that back it all up. But let's start with money. This week in the New York Times, he said that his
00:00:30.180 | grand idea is that OpenAI will capture much of the world's wealth through the creation of AGI,
00:00:35.660 | and then redistribute this wealth to the people. And yes, he mentioned several figures,
00:00:40.560 | $100 billion, $1 trillion, even $100 trillion. If OpenAI makes even a fraction of these figures,
00:00:47.080 | Sam Altman will become one of the most important people on the planet. That's not to say that he
00:00:51.400 | would become that rich. The Wall Street Journal this week reported that he has no direct
00:00:55.540 | financial stake in the business. But deciding where trillions of dollars of wealth goes does
00:01:00.460 | make you incredibly powerful. So where does he want all the money to go? Well, he seems to have
00:01:04.900 | two main ideas, plus a third one that I'll touch on at the end. His first idea is UBI, or Universal
00:01:11.200 | Basic Income. We also have funded the largest and most comprehensive universal basic income study
00:01:16.580 | sponsored by OpenAI. And I think it's like an area we should just be looking into.
00:01:21.260 | How exactly would that work? Well, he laid out his theory in this blog post.
00:01:25.520 | And he began it with this. He says he's reminded every day about the magnitude of socioeconomic
00:01:30.780 | change that is coming sooner than most people believe. He said that the price of many kinds
00:01:35.660 | of labor, which drives the costs of goods and services, will fall towards zero once sufficiently
00:01:41.340 | powerful AI joins the workforce. He said that that was great for people buying products,
00:01:45.740 | but not so much for those working to earn a wage. So where would their money come from? He proposed
00:01:51.020 | something called the American Equity Fund. It would be capitalized by taxing companies
00:01:55.500 | above a certain valuation 2.5% of their market value each year, and it would also be funded
00:02:01.660 | by taxing 2.5% of the value of all privately held land. By his calculation, that would be worth
00:02:08.320 | around $13,500 per adult per year by about 2030. And he said that money would have much greater
00:02:15.220 | purchasing power than it does now, because technology would have greatly reduced the cost of goods
00:02:20.180 | and services. It does raise the question for me, though, about countries that aren't home to
00:02:25.480 | massive AI companies: where are they going to get the wealth from?
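To make that arithmetic concrete, here is a minimal sketch of the levy as described, with purely illustrative inputs (the taxable base and adult population below are my placeholder assumptions, not figures from Altman's post):

```python
def equity_fund_payout(company_value, land_value, adults, rate=0.025):
    """Annual per-adult payout from the proposed American Equity Fund:
    a 2.5% yearly levy on large-company market value plus a 2.5% levy
    on privately held land, split equally among adults."""
    fund = rate * (company_value + land_value)
    return fund / adults

# Placeholder 2030 inputs: $105T of taxable company value, $30T of land,
# and 250 million US adults -- chosen only to show the shape of the math.
print(equity_fund_payout(105e12, 30e12, 250e6))  # 13500.0 dollars per year
```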
00:02:29.860 | On Lex Fridman's podcast, he admitted it wasn't a full solution. I think it is a component of something we should pursue. It is not a full
00:02:34.120 | solution. I think people work for lots of reasons besides money. He thinks much more will be needed
00:02:38.980 | because the cost of intelligence could fall to almost zero. My basic model of the next decade
00:02:43.800 | is that the marginal cost of intelligence and the marginal cost of energy are going to trend rapidly
00:02:48.180 | towards zero, like surprisingly far. So what is his other main idea? Simply use the money to fund
00:02:55.460 | the economy. Even with these two ideas, he admits there's still a big problem. As he put it recently,
00:03:24.600 | he sees a lot of people who are not going to be able to afford to pay for their own
00:03:25.440 | services. There are a lot of people getting very rich in the short to medium term, but others might
00:03:29.420 | not fare as well. If it is as divergent as I think it could be for like some people doing incredibly
00:03:35.820 | well and others not, I think society just won't tolerate it this time. But Sam Altman isn't the
00:03:40.320 | only one making predictions. OpenAI itself released this paper around 10 days ago. It
00:03:45.640 | calculated that with access to a large language model, about 15% of all worker tasks in the US
00:03:51.680 | could be completed significantly faster at the same level of quality.
00:03:55.420 | But crucially, when incorporating software and tooling built on top of LLMs, this share increases
00:04:01.960 | to around 50% of all tasks. That is a colossal impact for better or worse just with GPT-4 plus
00:04:09.460 | software. On page 17 of the paper, it had this table, which I think captures a lot of the
00:04:14.360 | interesting analysis. Let me briefly explain what it shows. We have a column of example occupations
00:04:19.820 | in the middle, alongside the education and job preparation each requires. But the
00:04:25.400 | things on the right are where it gets interesting. These are the percentages of exposure graded
00:04:29.940 | alpha, beta and zeta. The human assessment of exposure is titled H and the M is for the machine
00:04:36.280 | assessment. They actually got GPT-4 to do an assessment too. Notice that for the most part, GPT-4
00:04:41.720 | agrees with the human assessors. So what are these three grades? Alpha is the proportion of tasks in these
00:04:48.140 | occupations affected by current language models alone without any further advances or integrations. Beta represents the
00:04:55.380 | percentage of tasks exposed in a realistic scenario of language models plus a bit of software
00:05:01.200 | integration and a few advances. You could think of it as their median prediction. Finally, zeta is a
00:05:06.800 | bit like their most extreme scenario, with full adoption of software plus further advances of LLMs.
00:05:13.320 | By the way, we're not talking GPT-5 here or text-to-video, just basic software integration
00:05:18.480 | like a longer context window or text-to-image.
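As a rough sketch of how those three grades relate numerically (my reading of the paper's definitions, not OpenAI's code): each task is labeled E1 if the raw model alone helps, or E2 if it only helps once LLM-powered software is layered on top, and the three measures weight the E2 tasks at 0, 0.5, and 1 respectively.

```python
from collections import Counter

def exposure_measures(task_labels):
    """Alpha/beta/zeta exposure shares for one occupation's task list.
    Labels: "E0" = no exposure, "E1" = exposed to the LLM alone,
    "E2" = exposed once LLM-powered software is added. The half-weight
    on E2 tasks for beta is my reading of the paper, not official code."""
    counts = Counter(task_labels)
    n = len(task_labels)
    e1 = counts["E1"] / n          # share exposed to the raw model
    e2 = counts["E2"] / n          # extra share exposed via tooling
    alpha = e1                     # current LLMs alone
    beta = e1 + 0.5 * e2           # the "median" scenario
    zeta = e1 + e2                 # full software adoption
    return alpha, beta, zeta

# A hypothetical ten-task occupation, purely for illustration:
print(exposure_measures(["E1", "E1", "E2", "E2", "E2",
                         "E0", "E0", "E0", "E0", "E0"]))  # (0.2, 0.35, 0.5)
```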
00:05:25.360 | The trend that immediately stuck out for me was that as you go up the educational levels and salary ranges,
00:05:31.980 | the effect of these large language models on task exposure goes up and up and up, until you reach master's degree or higher levels. Then it seems to dip down a little. Maybe this is why Sam Altman
00:05:38.600 | predicted inequality. The people on the very cutting edge of science would still get paid well, probably better than
00:05:43.980 | ever, but there may be a further hollowing out of the middle class with working class occupations left largely
00:05:50.440 | untouched. The paper also touches on why so few people might be currently focused on
00:05:55.340 | language models. I don't know about you, but have you noticed that feeling where it seems to be us being super
00:06:00.340 | interested in this technology with most people not being that interested? Well, here might be one reason why. Currently,
00:06:06.500 | only 3% of US workers have over half of their tasks exposed to LLMs. But that's only when considering existing language and
00:06:15.380 | code capabilities without additional software or modalities. So not that many people are seeing a massive change in their work. But it says that when we account for other
00:06:25.320 | generative models and complementary technologies, our human estimates indicate that up to 49% of workers could have half or more of their tasks exposed to LLMs.
00:06:36.040 | Whether this means doubling the amount of work done or halving the number of workers doing it is something I'll talk more about later in the video. But maybe this was the dramatic economic impact that Ilya Sutskever once predicted on Lex Fridman's podcast.
00:06:48.920 | What do you think is the bar for impressing us? Do you think that bar will continuously be moved?
00:06:54.640 | Definitely.
00:06:55.300 | I think when you start to see really dramatic economic impact, that's when I think that's in some sense the next barrier. Because right now, if you think about the work in AI, it's really confusing. It's really hard to know what to make of all these advances.
00:07:08.540 | The paper also points out that the growing economic effect of LLMs is expected to persist and increase even if we halt the development of new capabilities today. They refer to recent studies revealing the potential of LLMs to program and control other digital tools,
00:07:25.280 | such as APIs, search engines, and even other generative AI systems.
00:07:29.640 | In my previous video on self-improvement in GPT-4, I mentioned HuggingGPT. But I am doing a lot of research on the new Microsoft JARVIS model and Auto-GPT that I'm hoping to bring to you soon.
00:07:43.000 | But interestingly, there were some tasks where neither GPT-4 nor the human assessors could quite agree on the impact that LLMs would have. Even GPT-4 couldn't quite figure out how meetings and negotiations would be affected,
00:07:55.260 | or to what extent counselling or other jobs that involve empathy would be.
00:08:00.180 | And the paper concludes with this.
00:08:25.240 | The paper then picks up on a particular survey that shows worker adoption of LLMs.
00:08:31.300 | Here is the survey with the rather dramatic headline of 1 in 4 companies have already replaced workers with ChatGPT.
00:08:38.500 | I don't think that assertion is fully backed up by the evidence, but they did survey 1,000 US business leaders.
00:08:44.940 | And there were some interesting findings.
00:08:47.100 | On the question of replacing workers, it says that when asked if ChatGPT will lead to any workers being laid off by the end of 2023,
00:08:55.220 | 33% of business leaders say definitely, while 26% say probably.
00:09:00.240 | Others are a bit more optimistic.
00:09:01.820 | Goldman Sachs said this.
00:09:04.160 | This economic analysis was published only a few days ago, and they say about 7% of workers will be fully displaced over the next 10 years,
00:09:12.240 | but that most are able to find new employment in only slightly less productive positions.
00:09:16.860 | They also predict that generative AI will raise overall labor productivity growth by around 1.5 percentage points per year,
00:09:25.200 | which, against a recent baseline of roughly 1.5% a year, effectively doubles the rate.
00:09:26.800 | Going back to Sam Altman, last week he was asked about this augmentation versus replacement question.
00:09:32.580 | So in terms of really replaced jobs, is that a worry for you?
00:09:35.480 | It is. I'm trying to think of like a big category that I believe can be massively impacted.
00:09:40.280 | I guess I would say customer service is a category that I could see.
00:09:44.020 | There are just way fewer jobs relatively soon.
00:09:46.620 | I'm not even certain about that, but I could believe it.
00:09:48.720 | Whatever call center employees are doing now.
00:09:50.400 | I found that last comment on call centers quite interesting, given that the GPT-4 technical report,
00:09:55.240 | which is a very important document, talked about using language models for upskilling in call centers.
00:10:00.420 | So does this mean immense productivity in the short term, but replacement in the long term?
00:10:05.800 | A couple of days ago, Sam Altman put it like this.
00:10:08.420 | I always try to be honest and say in the very long term, I don't know what's going to happen here.
00:10:11.960 | And no one does.
00:10:12.860 | And I'd like to at least acknowledge that.
00:10:14.940 | In the short term, it certainly seems like there was a huge overhang of the amount of output the world wanted.
00:10:21.900 | And if people are way more effective, they're just doing way more.
00:10:25.160 | And I think that's what's happening with coding, and people that got early access to Copilot reported this.
00:10:30.340 | And now that the tools are much better, people report it even more.
00:10:32.480 | Yeah.
00:10:33.340 | But we're now in this sort of GPT-4 era, seeing it in all sorts of other jobs as well, where you give people better tools and they just do more stuff, better stuff.
00:10:42.660 | The productivity point is backed up by experiments like this.
00:10:46.060 | Developers were split into two groups, half that used GitHub Copilot and half that didn't.
00:10:51.860 | Not only did more of those who used Copilot finish,
00:10:55.140 | but they finished in less than half the time.
00:10:57.900 | This paper from a few weeks ago shows that when white collar professionals were given a language model like ChatGPT,
00:11:04.440 | the time they took to do writing tasks dropped massively.
00:11:08.080 | Compared to the control group, you can see that they took less than 20 minutes versus almost 30.
00:11:13.580 | And when the assisted group and control group were blindly graded,
00:11:17.520 | you can see that the mean grade was higher for those who used the language models.
00:11:22.420 | But surely if productivity goes up, that means higher wages for those jobs?
00:11:27.120 | Well, not necessarily.
00:11:28.120 | A couple of days ago, Sam Altman laid out how it might be more efficient to use one worker to do the tasks of two or three.
00:11:35.120 | There's a huge cost premium on work that has to be split across two people.
00:11:40.120 | There's the communication overhead.
00:11:42.120 | There's the miscommunication.
00:11:43.120 | There's everything else.
00:11:44.120 | And if you can make one person twice as productive, you don't just do as much as two people could do.
00:11:50.120 | Maybe you do as much as three and a half or four people could do for many kinds of tasks.
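As a toy model of that claim (my own sketch, not Altman's math): suppose every pair of collaborators loses a fixed slice of output to communication overhead. Then one person made twice as productive beats a pair outright, and with the arbitrary overhead assumed below, even a four-person team nets only around three people's worth of work.

```python
def effective_output(workers, per_person=1.0, pair_overhead=0.15):
    """Toy team-output model: each worker adds per_person units, but
    every pair of workers loses pair_overhead units to communication.
    The 0.15 overhead is an arbitrary assumption for illustration."""
    pairs = workers * (workers - 1) / 2
    return workers * per_person - pair_overhead * pairs

print(effective_output(1, per_person=2.0))  # 2.0  (one doubled worker)
print(effective_output(2))                  # 1.85 (a pair, with overhead)
print(effective_output(4))                  # 3.1  (four people net ~3x)
```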
00:11:55.100 | Is there anything that might slow this economic impact down?
00:11:58.100 | I think there might be a few things, starting with politics.
00:12:01.100 | This survey from YouGov America was released only three days ago.
00:12:05.100 | And while I think it is a somewhat leading question, it does show that over 69% of Americans would support a six month pause on some kinds of AI development.
00:12:14.100 | And if we see dramatic negative economic impact, I expect that figure would go higher.
00:12:20.100 | Politicians would then be incentivized to slow down, tax and/or regulate AI development.
00:12:25.080 | Indeed, two days ago, President Biden tweeted this:
00:12:28.080 | "When it comes to AI, we must both support responsible innovation and ensure appropriate guardrails."
00:12:33.080 | And also, don't forget, if you live in a country where English is not the main spoken language, GPT-4 isn't as good.
00:12:40.080 | Notice that in many languages found in India, GPT-4 performs worse than the previous model, GPT-3.5, does in English.
00:12:48.080 | This is just one reason why Goldman Sachs predicted different levels of automation in different countries.
00:12:55.060 | The other factor could be cultural pushback.
00:12:57.060 | When Levi's wanted to test AI-generated clothing models and said their reason was to "increase diversity", that announcement was met with backlash.
00:13:07.060 | They then had to back down slightly and say that they're not replacing the job of any model.
00:13:11.060 | If people vote with their wallets for human made goods and services, that could have a massive impact.
00:13:17.060 | And there is another big factor. People seem to intrinsically prefer human-made output to machine-generated output.
00:13:25.040 | This piece came out recently from Wired, and in it they test the brain's chemical reaction to human-made art and computer-made art.
00:13:33.040 | These were the same pictures, it's just that sometimes people were told they were made by humans, and other times they were told they were made by computers.
00:13:41.040 | It says a clear winner emerged. People not only claimed to prefer the identical human-made pictures, their brains' pleasure sensors actually lit up more brightly.
00:13:51.040 | So human-made goods and services may have the edge simply by virtue of being human-made.
00:13:55.020 | But I want to end the video where I began it with Sam Altman's piece in the New York Times.
00:14:01.020 | Some of you may have noticed that I said Sam Altman had a third idea of how to distribute the wealth that I would mention at the end.
00:14:08.020 | Well, he admitted, if AGI does create all that wealth, he is not sure how the company will redistribute it.
00:14:15.020 | Money could mean something very different in this new world.
00:14:18.020 | But what's the idea? He said, "I feel like the AGI can help with that."
00:14:23.020 | Maybe GPT-4 can help with that.
00:14:25.000 | Maybe GPT-5 will decide where the money made using GPT-5 will go.
00:14:30.000 | Thank you so much for watching to the end, and have a wonderful day.