
OpenAI CEO on Artificial Intelligence Changing Society


Whisper Transcript

00:00:00.000 | the Joe Rogan Experience, and wondering what the potential for the future is and
00:00:05.720 | whether or not that's a good thing. I think it's gonna be a great thing, but I
00:00:12.680 | think it's not gonna be all a great thing, and that is where I think
00:00:18.440 | all of the complexity comes in for people. It's not this like clean
00:00:22.880 | story of we're gonna do this and it's all gonna be great. It's we're gonna do
00:00:25.960 | this, it's gonna be net great, but it's gonna be like a technological revolution,
00:00:31.000 | it's gonna be a societal revolution, and those always come with change. And even
00:00:36.400 | if it's like net wonderful, you know, there's things we're gonna lose along
00:00:39.800 | the way: some kinds of jobs, some parts of our way of life. Some parts of
00:00:43.320 | the way we live are gonna change or go away. And no matter how tremendous the
00:00:49.000 | upside is there, and I believe it will be tremendously good, you know, there's a lot
00:00:52.520 | of stuff we got to navigate through to make sure of that. That's a complicated
00:00:59.520 | thing for anyone to wrap their heads around, and there's, you know, deep and
00:01:02.760 | super understandable emotions around that. That's a very honest answer, that
00:01:06.840 | it's not all gonna be good, but it seems inevitable at this point. Yeah, I
00:01:13.960 | mean, it's definitely inevitable. My view of the world, you know, when you're
00:01:18.440 | like a kid in school, you learn about this technological revolution and then
00:01:21.580 | that one and then that one. And my view of the world now, sort of looking
00:01:24.960 | backwards and forwards, is that this is like one long technological revolution,
00:01:29.620 | and, sure, first we had to figure out agriculture so that we had
00:01:35.040 | the resources and time to figure out how to build machines. Then we got this
00:01:38.240 | industrial revolution, and that made us learn about a lot of stuff, a lot of
00:01:41.560 | other scientific discovery, to let us do the computer revolution, and that's now
00:01:45.520 | letting us, as we scale up to these massive systems, do the AI revolution. But
00:01:49.680 | it really is just one long story of humans discovering science and
00:01:54.580 | technology and co-evolving with it, and I think it's the most exciting story of
00:01:58.580 | all time. I think it's how we get to this world of abundance. And although, you know,
00:02:04.960 | we do have these things to navigate and there will be these
00:02:07.500 | downsides, if you think about what it means for the world and for people's
00:02:11.400 | quality of life, if we can get to a world where the cost of intelligence
00:02:17.700 | dramatically falls and the abundance that comes with that
00:02:23.900 | goes way up, I think we'll do the same thing with
00:02:26.580 | energy, and I think those are the two sort of key inputs to everything else we
00:02:30.220 | want. So if we can have abundant and cheap energy and intelligence, that will
00:02:34.760 | transform people's lives largely for the better. And I think, in the
00:02:39.740 | same way that if we could go back now 500 years and look at someone's life,
00:02:42.840 | we'd say, well, there's some great things, but they didn't have this, they
00:02:45.560 | didn't have that, can you believe they didn't have modern medicine? That's how
00:02:48.860 | people are gonna look back at us in 50 years. When you think about the
00:02:53.280 | people that currently rely on jobs that AI will replace, when you think about,
00:02:59.500 | whether it's truck drivers or automation workers, people working on factory
00:03:04.420 | assembly lines, what, if anything, what strategies can be put in place to mitigate the
00:03:11.760 | negative downsides of those jobs being eliminated by AI? So I'll talk about some
00:03:21.580 | general thoughts, but I find making very specific predictions difficult, because
00:03:27.100 | the way the technology goes has been so different than even my own
00:03:32.740 | intuitions. Maybe we should stop there and back up a little.
00:03:36.460 | What were your initial thoughts? If you had asked me ten years ago, I
00:03:42.480 | would have said first AI is gonna come for blue-collar labor. Basically, it's
00:03:47.800 | gonna drive trucks and do factory work, and, you know, it'll handle heavy
00:03:52.100 | machinery. Then maybe after that it'll do like some kinds of cognitive labor,
00:03:59.840 | you know, but it won't be off doing what I think of personally as the
00:04:03.520 | really hard stuff. It won't be off proving new mathematical theorems, won't
00:04:07.240 | be off, you know, discovering new science, won't be off writing code. And then,
00:04:12.580 | eventually, maybe, but maybe last of all, maybe never, because human creativity is
00:04:18.060 | this magic special thing, last of all it'll come for the creative jobs.
00:04:22.080 | That's what I would have said. Now, (a) it looks to me like, and has for a while, AI is
00:04:29.640 | much better at doing tasks than doing jobs. It can do these little pieces super
00:04:34.120 | well, but sometimes it goes off the rails, it can't keep like very long coherence,
00:04:38.320 | so people are instead just able to do their existing jobs way more
00:04:43.520 | productively, but you really still need the human there today. And then (b), it's
00:04:47.500 | going in exactly the other direction: it can do the creative work first, stuff like
00:04:51.200 | coding second, things like other kinds of cognitive labor third, and
00:04:55.960 | we're the furthest away from like humanoid robots. Hmm. So back to the
00:05:02.340 | initial question: if we do have something that completely eliminates factory
00:05:10.040 | workers, completely eliminates truck drivers, delivery drivers, things along
00:05:15.460 | those lines, that creates this massive vacuum in our society. So I think there's
00:05:23.040 | things that we're gonna do that are good to do but not sufficient. So I think at
00:05:29.100 | some point we will do something like a UBI or some other kind of like very
00:05:33.840 | long-term unemployment insurance, something. We'll have some way of
00:05:37.920 | giving people, like redistributing money in society, as a cushion for people
00:05:44.040 | as they figure out the new jobs. But, you know, maybe I should touch on
00:05:48.240 | that. I'm not a believer at all that there won't be lots of new jobs. I think
00:05:53.520 | human creativity, desire for status, wanting different ways to compete, invent
00:05:59.000 | new things, feel part of a community, feel valued, that's not gonna go anywhere.
00:06:03.880 | People have worried about that forever. What happens is we get better tools and
00:06:08.760 | we just invent new things and more amazing things to do. And there's a big
00:06:12.840 | universe out there, and I think I mean that literally, in that,
00:06:17.560 | like, space is really big, but also there's just so much stuff we can all do
00:06:22.000 | if we do get to this world of abundant intelligence, where you can sort of just
00:06:26.240 | think of a new idea and it gets created. But again, to the point
00:06:34.880 | we started with, that doesn't provide like great solace to people who
00:06:38.760 | are losing their jobs today. So saying there's gonna be this great indefinite
00:06:43.120 | stuff in the future, people are like, what are we doing today? So, you know, I
00:06:48.720 | think we will as a society do things like UBI and other ways of
00:06:53.160 | redistribution, but I don't think that gets at the core of what people want. I
00:06:56.480 | think what people want is like agency, self-determination, the ability to play a
00:07:02.120 | role in architecting the future along with the rest of society, the ability to
00:07:06.120 | express themselves and create something meaningful to them. And also, I think a
00:07:15.600 | lot of people work jobs they hate, and I think we as a society are always
00:07:20.040 | a little bit confused about whether we want to work more or work less. But
00:07:23.980 | somehow that we all get to do something meaningful and we all get to play our
00:07:32.160 | role in driving the future forward, that's really important. And what I hope
00:07:36.680 | is, as those long-haul truck driving jobs go away, which, you
00:07:41.520 | know, people have been wrong about predicting how fast that's gonna happen,
00:07:44.200 | but it's gonna happen, we figure out not just a way to solve the economic problem
00:07:52.480 | by, like, giving people the equivalent of money every month, but that there's a way,
00:07:58.120 | and we've got a lot of ideas about this, that we, like, share
00:08:02.440 | ownership and decision-making over the future. Something I say a lot about AGI is
00:08:08.920 | that everyone realizes we're gonna have to share the benefits of that,
00:08:13.760 | but we also have to share, like, the decision-making over it and access to
00:08:18.240 | the system itself. Like, I'd be more excited about a world where we say,
00:08:21.560 | rather than give everybody on earth, like, one eight-billionth of the AGI money,
00:08:26.320 | which we should do too, we say you get, like, a one
00:08:31.520 | eight-billionth slice of the system. You can sell it to somebody else, you can sell it
00:08:37.360 | to a company, you can pool it with other people, you can use it for whatever
00:08:39.840 | creative pursuit you want, you can use it to figure out how to start some new
00:08:43.040 | business. And with that you get sort of like a voting right over how this is all
00:08:49.620 | going to be used. And so the better the AGI gets, the more your little one
00:08:53.640 | eight-billionth ownership is worth to you. We were joking around the other day
00:08:57.640 | on the podcast where I was saying that what we need is an AI government, that
00:09:02.960 | we should have an AI president and have AI make all the decisions. Yeah. Have
00:09:08.160 | something that's completely unbiased, absolutely rational, has the accumulated
00:09:14.280 | knowledge of the entire human history, yeah, at its disposal, including all
00:09:19.280 | knowledge of psychology and psychological study, including UBI, because
00:09:24.240 | that comes with a host of, you know, pitfalls and issues that people have
00:09:29.160 | with it. So I'll say something there. I think we're still very far away from a
00:09:33.280 | system that is capable enough and reliable enough that any of us
00:09:39.200 | would want that. But I'll tell you something I love about that. Someday,
00:09:42.920 | let's say that thing gets built: the fact that it can go around and talk to every
00:09:47.040 | person on earth, understand their exact preferences at a very deep level, you
00:09:51.640 | know, how they think about this issue and that one, how they balance the trade-offs,
00:09:54.440 | and what they want, and then understand all of that and, like, collectively
00:09:59.400 | optimize for the collective preferences of humanity, or of citizens
00:10:04.800 | of the U.S., that's awesome. As long as it's not co-opted, right? Our government
00:10:11.100 | currently is co-opted, that's for sure. We know for sure that our government is
00:10:15.440 | heavily influenced by special interests. If we could have an artificial
00:10:21.320 | intelligence government that has no influence, nothing has influence on it,
00:10:26.280 | what a fascinating idea. It's possible, and I think it might be the only way
00:10:30.920 | you're gonna get a completely objective, the absolute most intelligent
00:10:37.760 | decision for virtually every problem, every dilemma that we face currently in
00:10:43.560 | society. Would you truly be comfortable handing over, like, final decision-making
00:10:47.840 | and saying, alright, AI, you got it? No, but I'm not comfortable doing that with
00:10:52.600 | anybody, right? You know, I mean, I don't, right? I was uncomfortable with the
00:10:56.520 | Patriot Act. I'm uncomfortable with, you know, many decisions of people that are
00:11:00.400 | being made. It's just, there's so much obvious evidence that decisions that are
00:11:06.280 | being made are not being made in the best interests of the overall well-being of the
00:11:09.560 | people; they're being made in the interests of whatever gigantic corporations
00:11:16.880 | have donated, and whatever the military-industrial complex and
00:11:20.960 | pharmaceutical-industrial complex want. And it's just the money. That's
00:11:25.240 | really what we know today, that the money has a massive influence on our society
00:11:30.320 | and the choices that get made, and the overall good or bad for the population.
00:11:34.320 | Yeah, I have no disagreement at all that the current system is super broken, not
00:11:40.240 | working for people, super corrupt, and for sure, like, unbelievably run by
00:11:45.520 | money. Yeah. And I think there is a way to do a better job than that with AI
00:11:53.400 | in some way. But, and this might just be, like, a factor of sitting with the
00:11:58.600 | systems all day and watching all of the ways they fail, we got a long way to go. A
00:12:02.440 | long way to go, I'm sure. But when you think of AGI, when you think of the
00:12:09.040 | possible future, like where it goes, do you ever extrapolate, do you ever, like,
00:12:14.720 | sit and pause and say, well, if this thing becomes sentient and it has the
00:12:19.680 | ability to make better versions of itself, how long before we're literally
00:12:24.760 | dealing with a god? So the way that I think about this is, it used to be that,
00:12:31.440 | like, AGI was this very binary moment, it was before and after, and I think I was
00:12:36.240 | totally wrong about that. The right way to think about it is this
00:12:41.760 | continuum of intelligence, this smooth exponential curve, back all the way to
00:12:46.440 | that sort of smooth curve of technological revolution. The amount
00:12:51.480 | of compute power we can put into the system, the scientific ideas about how to
00:12:56.340 | make it more efficient and smarter, to give it the ability to do reasoning, to
00:13:01.280 | think about how to improve itself, that will all come. But my model for a long
00:13:07.560 | time, and I think if you look at the world of AGI thinkers, there's sort of,
00:13:13.140 | particularly around the safety issues you're talking about, two axes
00:13:16.880 | that matter. There's what's called short timelines or long timelines,
00:13:20.960 | you know, to the first milestone of AGI, whatever that's gonna be. Is that gonna
00:13:26.080 | happen in a few years, a few decades, maybe even longer? Although at this point
00:13:30.220 | I think most people are at a few years to a few decades. And then there's takeoff
00:13:33.320 | speed: once we get there, from there, at the point where
00:13:36.820 | it's capable of the rapid self-improvement, is that a slower or a
00:13:40.780 | faster process? The world that I think we're heading into, that we're in, and
00:13:45.800 | also the world that I think is the most controllable and the safest, is the short
00:13:52.240 | timelines and slow takeoff quadrant. And, you know, there
00:13:59.920 | were a lot of very smart people who for a while were like, the thing you were
00:14:02.720 | just talking about happens in a day or three days, and that doesn't seem
00:14:06.720 | likely to me given the shape of the technology as we understand it now. Now,
00:14:11.220 | even if that happens in a decade or three decades, it's still like the blink
00:14:16.760 | of an eye from a historical perspective, and there are gonna be some real
00:14:21.720 | challenges to getting that right. And the decisions we make, the sort of safety
00:14:28.400 | systems and the checks that the world puts in place, how we think about
00:14:34.320 | global regulation or rules of the road from a safety perspective for those
00:14:39.280 | projects, it's super important, because you can imagine many things going
00:14:43.040 | horribly wrong. But I feel cheerful about the progress the world is
00:14:49.880 | making towards taking this seriously, and, you know, it reminds me of what I've read
00:14:55.160 | about the conversations that the world had right around the development of
00:14:58.480 | nuclear weapons.