
Judea Pearl: Concerns for AGI | AI Podcast Clips


Transcript

- Do you have concerns about the future of AI, all the different trajectories of all of our research? - Yes. - Where's your hope, where the movement heads? Where are your concerns? - I'm concerned, because I know we are building a new species that has the capability of exceeding us, exceeding our capabilities, and can breed itself and take over the world, absolutely.

It's a new species that is uncontrolled. We don't know the degree to which we control it; we don't even understand what it means to be able to control this new species. So I'm concerned. I don't have anything to add to that, because it's such a gray area that's unknown.

It never happened in history. The only time it happened in history was evolution with human beings. - And it wasn't very successful, was it? - Some people say it was a great success. - For us it was. But a few people along the way, a few creatures along the way, would not agree.

- So just because it's such a gray area, there's nothing else to say. - We have a sample of one. - Sample of one. - That's us. - But some people would look at you and say, yeah, but we were looking to you to help us make sure that sample two works out okay.

- We have more than a sample of one. We have theories, and that's good. We don't need to be statisticians, so a sample of one doesn't mean poverty of knowledge. It's not. A sample of one plus a theory, a conjectural theory of what could happen: that we do have. But I really feel helpless in contributing to this argument, because I know so little, and my imagination is limited, and I know how much I don't know. But I'm concerned.
