Judea Pearl: Human-Level AI and the Test of Free Will | AI Podcast Clips


00:00:00.000 | I know you're not a futurist, but are you excited?
00:00:05.000 | Have you, when you look back at your life,
00:00:08.580 | longed for the idea of creating
00:00:10.600 | a human-level intelligence system?
00:00:12.000 | - Yeah, I'm driven by that.
00:00:14.400 | All my life I've been driven by just one thing.
00:00:16.540 | (laughs)
00:00:18.620 | But I go slowly, I go from what I know
00:00:22.920 | to the next step incrementally.
00:00:25.440 | - So without imagining what the end goal looks like,
00:00:28.500 | do you imagine what--
00:00:30.600 | - The end goal is gonna be a machine
00:00:33.840 | that can answer sophisticated questions,
00:00:36.960 | counterfactuals of regret, compassion,
00:00:39.920 | responsibility, and free will.
00:00:43.920 | - So what is a good test?
00:00:47.680 | Is the Turing test a reasonable test?
00:00:50.960 | - A test of free will doesn't exist yet.
00:00:52.920 | - How would you test free will?
00:00:57.000 | - So far we know only one thing.
00:00:58.600 | If robots can communicate with reward and punishment
00:01:06.040 | among themselves, hitting each other on the wrist
00:01:11.540 | and saying you shouldn't have done that,
00:01:13.400 | playing better soccer because they can do that.
00:01:18.360 | - What do you mean because they can do that?
00:01:22.060 | - Because they can communicate among themselves.
00:01:24.220 | - Because of the communication they can do this?
00:01:26.180 | - Because they communicate like us.
00:01:28.740 | Reward and punishment: yes, you didn't pass the ball
00:01:31.740 | at the right time and so forth, therefore you're gonna sit
00:01:35.300 | on the bench for the next two.
00:01:37.620 | If they start communicating like that,
00:01:39.740 | the question is will they play better soccer?
00:01:42.460 | As opposed to what?
00:01:43.740 | As opposed to what they do now.
00:01:45.740 | Without this ability to reason about reward and punishment,
00:01:50.740 | responsibility.
00:01:52.500 | - And counterfactuals.
00:01:54.460 | - So far I can only think about communication.
00:01:57.780 | - Communication is not necessarily natural language,
00:02:01.460 | but just communication?
00:02:02.300 | - No, just communication.
00:02:03.660 | And it's important to have a quick and effective means
00:02:08.060 | of communicating knowledge.
00:02:10.160 | If the coach tells you you should have passed the ball,
00:02:12.540 | ping, he conveys so much knowledge to you.
00:02:14.820 | As opposed to what?
00:02:16.580 | Go down and change your software.
00:02:18.640 | That's the alternative.
00:02:21.380 | But the coach doesn't know your software.
00:02:23.820 | So how can a coach tell you you should have passed the ball?
00:02:27.620 | But our language is very effective.
00:02:30.340 | You should have passed the ball.
00:02:31.680 | You know your software, you tweak the right module,
00:02:34.780 | and next time you don't do it.
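An aside on the exchange above: the coach's remark is a counterfactual claim, and one minimal way to make it concrete is the three-step counterfactual recipe from Pearl's structural causal models (abduction, action, prediction). The toy soccer model below, including the `play` equation and its noise variable, is an illustrative assumption for this sketch, not anything specified in the conversation.

```python
import random

# Toy structural equation (an assumed model, for illustration only): whether a
# scoring chance is created depends on the action (pass or hold) and an
# unobserved noise term standing in for everything else about the situation.
def play(pass_ball: bool, noise: float) -> int:
    return 1 if (pass_ball and noise > 0.3) else 0  # 1 = scoring chance created

def counterfactual(observed_noise: float, alternative_action: bool) -> int:
    """Pearl's three steps: abduction (keep the noise inferred from the factual
    episode), action (impose the alternative choice), prediction (rerun the
    same structural equation)."""
    return play(alternative_action, observed_noise)

# Factual episode: the player held the ball.
noise = random.random()          # in practice, abduced from the observed episode
factual = play(pass_ball=False, noise=noise)

# "You should have passed the ball": same situation, different action.
had_passed = counterfactual(noise, alternative_action=True)

print(f"factual: {factual}, had you passed: {had_passed}")
```

The point of the sketch is the one Pearl makes next: the coach's short sentence only works because the listener can hold the situation fixed and rerun it under a different action, then adjust the relevant part of itself.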
00:02:37.360 | - Now that's for playing soccer where the rules
00:02:40.120 | are well defined.
00:02:41.260 | - No, no, no, no, the rules are not well defined.
00:02:43.500 | When you should pass the ball--
00:02:44.900 | - It's not well defined.
00:02:46.100 | - No, it's very soft, very noisy.
00:02:50.500 | - Yeah, the mystery.
00:02:51.340 | - You have to do it under pressure.
00:02:52.740 | - It's art.
00:02:54.020 | But in terms of aligning values between computers and humans,
00:02:59.020 | do you think this cause-and-effect type of thinking
00:03:06.300 | is important for aligning the values,
00:03:08.380 | morals, and ethics under which the machines
00:03:11.600 | make decisions? Is cause and effect
00:03:14.540 | where the two can come together?
00:03:18.180 | - Cause and effect is a necessary component
00:03:20.820 | to build an ethical machine,
00:03:24.340 | because the machine has to empathize,
00:03:26.540 | to understand what's good for you,
00:03:28.700 | to build a model of you as a recipient,
00:03:33.220 | which should be very much... What is compassion?
00:03:36.980 | It is to imagine that you suffer pain as much as me.
00:03:41.980 | - As much as me.
00:03:43.100 | - I already have a model of myself, right?
00:03:46.360 | So it's very easy for me to map you onto mine.
00:03:48.820 | I don't have to rebuild the model.
00:03:50.700 | It's much easier to say, oh, you're like me.
00:03:52.980 | Okay, therefore I will not hate you.
00:03:54.780 | - And the machine has to imagine,
00:03:58.140 | it has to try to pretend to be human,
00:04:00.060 | essentially so you can imagine that you're like me.
00:04:04.020 | - And moreover, who is me?
00:04:07.740 | That's the first thing, that's consciousness:
00:04:10.260 | to have a model of yourself.
00:04:11.820 | Where do you get this model?
00:04:14.180 | You look at yourself as if you are a part of the environment.
00:04:18.460 | If you build a model of yourself versus the environment,
00:04:21.500 | then you can say, I need to have a model of myself.
00:04:24.300 | I have abilities, I have desires and so forth.
00:04:26.740 | I have a blueprint of myself, though.
00:04:30.420 | Not in full detail, because I cannot get the whole thing,
00:04:34.100 | but I have a blueprint.
00:04:36.780 | So on that level of a blueprint, I can modify things.
00:04:40.300 | I can look at myself in the mirror and say,
00:04:42.660 | hmm, if I change this, tweak this model,
00:04:45.260 | I'm going to perform differently.
00:04:48.420 | That is what we mean by free will.
00:04:50.420 | - And consciousness.
00:04:53.020 | - And consciousness.
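A hedged sketch of the "blueprint" idea discussed above: an agent that holds a coarse model of its own decision rule, which it can inspect and tweak in response to verbal feedback rather than being reprogrammed from outside. The class, the threshold knob, and the feedback string are all hypothetical names invented for this illustration.

```python
class Agent:
    """Toy agent with a coarse self-model (a 'blueprint'), not its full code."""

    def __init__(self) -> None:
        # The blueprint: a few named knobs the agent can reason about and adjust.
        self.blueprint = {"pass_threshold": 0.8}

    def decide(self, teammate_openness: float) -> str:
        # Pass only if a teammate is sufficiently open, otherwise shoot.
        return "pass" if teammate_openness >= self.blueprint["pass_threshold"] else "shoot"

    def accept_feedback(self, message: str) -> None:
        # Map the coach's verbal counterfactual onto the right module and tweak it.
        if message == "you should have passed the ball":
            self.blueprint["pass_threshold"] = max(0.0, self.blueprint["pass_threshold"] - 0.1)

agent = Agent()
print(agent.decide(teammate_openness=0.75))   # "shoot"
agent.accept_feedback("you should have passed the ball")
print(agent.decide(teammate_openness=0.75))   # "pass", after tweaking the blueprint
```

The design choice mirrors the transcript: the feedback does not rewrite the agent's software wholesale; it is mapped onto a blueprint-level module that the agent itself can modify, which is the sense of "free will" Pearl describes.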
00:04:54.220 | (upbeat music)