
Elon Musk: Regulation of AI Safety


Whisper Transcript

00:00:00.000 | So, on a darker topic, you've expressed serious concern about existential threats of AI.
00:00:09.120 | It's perhaps one of the greatest challenges our civilization faces, but since we are,
00:00:14.160 | I would say, kind of optimistic descendants of apes, perhaps we can find several paths
00:00:18.840 | of escaping the harm of AI.
00:00:21.120 | So if I can give you three options, maybe you can comment on which you think is the
00:00:25.240 | most promising.
00:00:26.840 | So one is scaling up efforts on AI safety and beneficial AI research in hope of finding
00:00:33.600 | an algorithmic or maybe a policy solution.
00:00:37.240 | Two is becoming a multi-planetary species as quickly as possible.
00:00:41.200 | And three is merging with AI and riding the wave of that increasing intelligence as it
00:00:49.120 | continuously improves.
00:00:51.160 | What do you think is most promising, most interesting as a civilization that we should
00:00:54.960 | invest in?
00:00:57.640 | I think there's a tremendous amount of investment going on in AI.
00:01:01.040 | Where there's a lack of investment is in AI safety.
00:01:05.600 | And there should be, in my view, a government agency that oversees anything related to AI
00:01:12.120 | to confirm that it does not represent a public safety risk.
00:01:16.160 | Just as there is the Food and Drug Administration for food and drug safety, there's
00:01:21.400 | the NHTSA for automotive safety, and there's the FAA for aircraft safety.
00:01:27.360 | We generally come to the conclusion that it is important to have a government referee
00:01:31.160 | or a referee that is serving the public interest in ensuring that things are safe when there's
00:01:38.880 | a potential danger to the public.
00:01:41.160 | I would argue that AI is unequivocally something that has potential to be dangerous to the
00:01:47.520 | public and therefore should have a regulatory agency just as other things that are dangerous
00:01:52.120 | to the public have a regulatory agency.
00:01:53.960 | But let me tell you, the problem with this is that government moves very slowly.
00:02:00.360 | Usually, the way a regulatory agency comes into being is that something
00:02:08.080 | terrible happens.
00:02:09.280 | There's a huge public outcry.
00:02:12.360 | And years after that, there's a regulatory agency or a rule put in place.
00:02:17.640 | Take something like seatbelts.
00:02:19.560 | It was known for a decade or more that seatbelts would have a massive impact on safety,
00:02:29.360 | saving many lives and preventing serious injuries.
00:02:32.320 | And the car industry fought the requirement to put seatbelts in tooth and nail.
00:02:38.360 | That's crazy.
00:02:39.360 | And hundreds of thousands of people probably died because of that.
00:02:45.200 | And they said people wouldn't buy cars if they had seatbelts, which is obviously absurd.
00:02:50.120 | Or look at the tobacco industry and how long they fought anything about smoking.
00:02:55.840 | That's part of why I helped make that movie, Thank You For Smoking.
00:03:00.120 | You can sort of see just how pernicious it can be when you have these companies effectively
00:03:09.360 | achieve regulatory capture of government.
00:03:12.440 | People in the AI community refer to the advent of digital superintelligence as a singularity.
00:03:22.520 | That is not to say that it is good or bad, but that it is very difficult to predict what
00:03:28.640 | will happen after that point.
00:03:30.200 | And that there's some probability it will be bad, some probability it will be good.
00:03:34.320 | But obviously I want to affect that probability and have it be more good than bad.