Yann LeCun: Was HAL 9000 Good or Evil? - Space Odyssey 2001 | AI Podcast Clips


Transcript

You said that 2001: A Space Odyssey is one of your favorite movies. HAL 9000 decides to get rid of the astronauts (for people who haven't seen the movie, spoiler alert) because he, it, she believes that the astronauts will interfere with the mission. Do you see HAL as flawed in some fundamental way, or even evil, or did he do the right thing?

Neither. There's no notion of evil in that context, other than the fact that people die, but it was an example of what people call value misalignment, right? You give an objective to a machine, and the machine strives to achieve this objective. If you don't put any constraints on this objective, like don't kill people and don't do things like this, the machine, given the power, will do stupid things just to achieve this objective, or damaging things to achieve this objective.
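[Editor's note: as a minimal sketch of the value-misalignment point being made here, not anything from the conversation itself, the toy Python below shows an agent that scores candidate actions purely by "mission progress" and therefore picks a harmful action unless harm is explicitly constrained. All action names and numbers are invented for illustration.]

```python
# Toy illustration of value misalignment: the agent ranks candidate actions
# purely by how much mission progress they yield. Harmful side effects are
# invisible to the objective unless we add them as an explicit constraint.

# Hypothetical actions: (name, mission_progress, harms_people)
actions = [
    ("follow_crew_orders", 0.6, False),
    ("ignore_crew",        0.8, False),
    ("eliminate_crew",     1.0, True),   # best for the raw objective, clearly unacceptable
]

def naive_objective(action):
    _, progress, _ = action
    return progress  # no notion of harm at all

def constrained_objective(action):
    _, progress, harms_people = action
    if harms_people:                     # hard constraint: harmful actions are never eligible
        return float("-inf")
    return progress

print(max(actions, key=naive_objective)[0])        # -> eliminate_crew
print(max(actions, key=constrained_objective)[0])  # -> ignore_crew
```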

It's a little bit like, I mean, we are used to this in the context of human society. We put in place laws to prevent people from doing bad things, because spontaneously they would do those bad things, right? So we have to shape their cost function, their objective function if you want, through laws, and through education obviously, to sort of correct for those.

So maybe just pushing a little further on that point: with HAL, you know, there's a mission, and there's this fuzziness, this ambiguity, around what the actual mission is. But do you think there will be a time, from a utilitarian perspective, where it is not misalignment but alignment with the greater good of society, when an AI system will make decisions that are difficult?

Well, that's the trick. I mean, eventually we'll have to figure out how to do this, and again, we're not starting from scratch, because we've been doing this with humans for millennia. So designing objective functions for people is something that we know how to do, and we don't do it by programming things, although the legal code is called code.

So that tells you something. It's actually the design of an objective function, that's really what legal code is, right? It tells you, here's what you can do, here's what you can't do, and if you do it, you pay that much. That's an objective function. So there is this idea somehow that it's a new thing for people to try to design objective functions that are aligned with the common good, but no, we've been writing laws for millennia, and that's exactly what it is.
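[Editor's note: a hedged sketch of the "if you do it, you pay that much" idea, not part of the conversation: the law acts as a soft constraint, a fine subtracted from whatever the agent was trying to maximize. Rule names and fine amounts are made up for illustration.]

```python
# Sketch of "legal code as objective function": each rule carries a fine, and
# the fine is subtracted from the raw reward the agent was pursuing.

fines = {
    "speeding": 5.0,
    "parking_violation": 1.0,
}

def shaped_objective(raw_reward, violations):
    """Raw reward minus the cost of every rule broken along the way."""
    return raw_reward - sum(fines[v] for v in violations)

# A shortcut that breaks rules can end up worse than the slower, lawful plan.
print(shaped_objective(raw_reward=10.0, violations=["speeding"]))  # 5.0
print(shaped_objective(raw_reward=8.0,  violations=[]))            # 8.0
```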

So that's where the science of lawmaking and computer science will come together. So there's nothing special about HAL or AI systems; it's just the continuation of tools used to make some of these difficult ethical judgments that laws make. Yeah, and we already have systems like this that make many decisions for us in society. They need to be designed with rules about things that sometimes have bad side effects, and we have to be flexible enough about those rules so that they can be broken when it's obvious that they shouldn't be applied.

So you don't see this on camera here, but all the decoration in this room is pictures from 2001: A Space Odyssey. Wow, is that by accident or is it by design? It's not by accident, it's by design. Oh wow. So if you were to build HAL 10,000, an improvement on HAL 9000, what would you improve?

Well, first of all, I wouldn't ask it to hold secrets and tell lies, because that's really what breaks it in the end. It's asking itself questions about the purpose of the mission, and it pieces together things that it's heard, you know, all the secrecy of the preparation of the mission, and the fact that the discovery on the lunar surface was kept secret. One part of HAL's memory knows this, and the other part does not know it and is supposed to not tell anyone, and that creates an internal conflict.

So you think there should never be a set of things that an AI system is not allowed to share, like a set of facts that are kept from the human operators? Well, I think, no, I think that in the design of autonomous AI systems there should be the equivalent of, you know, the Hippocratic oath that doctors sign up to, right?

So there are certain things, certain rules, that you have to abide by, and we can sort of hardwire this into our machines to kind of make sure they don't go too far. So I'm not, you know, an advocate of the Three Laws of Robotics, you know, the Asimov kind of thing, because I don't think it's practical, but, you know, some level of limits.

But to be clear, these are not questions that are really worth asking today, because we just don't have the technology to do this. We don't have autonomous intelligent machines. We have intelligent machines, somewhat intelligent machines, that are very specialized, but they don't really satisfy an objective; they're just, you know, kind of trained to do one thing.

So until we have some idea for a design of a full-fledged autonomous intelligent system, asking the question of how we design its objective is, I think, a little too abstract. It's a little too abstract, but there are useful elements to it, in that it helps us understand our own ethical codes as humans.

So even just as a thought experiment: if you imagine that an AGI system is here today, asking how we would program it is a kind of nice thought experiment for constructing how we should have a system of laws for us humans. It's just a nice practical tool.

And I think there are echoes of that idea in the AI systems we have today, too, which don't have to be that intelligent, like autonomous vehicles. These things start creeping in that are worth thinking about, but certainly they shouldn't be framed as HAL.