
Origin of the term AGI (Ben Goertzel) | AI Podcast Clips



00:00:00.000 | - Maybe it's good to step back a little bit.
00:00:03.680 | I mean, we've been using the term AGI.
00:00:05.640 | People often cite you as the creator,
00:00:08.520 | or at least the popularizer of the term AGI,
00:00:10.760 | artificial general intelligence.
00:00:13.380 | Can you tell the origin story of the term?
00:00:16.240 | - Sure, sure.
00:00:17.080 | So yeah, I would say I launched the term AGI
00:00:21.720 | upon the world for what it's worth
00:00:24.280 | without ever fully being in love with the term.
00:00:29.320 | What happened is I was editing a book,
00:00:33.040 | and this process started around 2001 or 2002.
00:00:35.520 | I think the book came out in 2005, finally.
00:00:38.160 | I was editing a book which I provisionally
00:00:40.800 | was titling "Real AI."
00:00:43.480 | And I mean, the goal was to gather together
00:00:46.480 | fairly serious academic-ish papers
00:00:49.320 | on the topic of making thinking machines
00:00:51.560 | that could really think in the sense like people can,
00:00:54.400 | or even more broadly than people can, right?
00:00:56.880 | So then I was reaching out to other folks
00:01:00.400 | that I had encountered here or there
00:01:01.720 | who were interested in that,
00:01:05.040 | which included some other folks
00:01:07.960 | who I knew from the transhumanist and singularitarian world,
00:01:12.000 | like Peter Voss, who has a company,
00:01:14.080 | AGI Incorporated still in California,
00:01:17.440 | and included Shane Legg, who had worked for me
00:01:21.800 | at my company WebMind in New York in the late '90s,
00:01:25.240 | who by now has become rich and famous.
00:01:28.120 | He was one of the co-founders of Google DeepMind.
00:01:30.440 | But at that time, Shane was,
00:01:32.960 | I think he may have just started doing his PhD
00:01:39.440 | with Marcus Hutter, who at that time
00:01:43.560 | hadn't yet published his book "Universal AI,"
00:01:46.360 | which sort of gives a mathematical foundation
00:01:48.720 | for artificial general intelligence.
00:01:51.080 | So I reached out to Shane and Marcus and Peter Voss
00:01:53.800 | and Pei Wang, who was another former employee of mine
00:01:57.120 | who had been Douglas Hofstadter's PhD student,
00:01:59.520 | who had his own approach to AGI,
00:02:00.920 | and a bunch of Russian folks.
00:02:03.360 | I reached out to these guys
00:02:05.680 | and they contributed papers for the book.
00:02:09.000 | But that was my provisional title, but I never loved it
00:02:12.080 | because in the end, I was doing some,
00:02:16.960 | what we would now call narrow AI as well,
00:02:19.720 | like applying machine learning to genomics data
00:02:22.280 | or chat data for sentiment analysis.
00:02:25.520 | I mean, that work is real.
00:02:27.400 | In a sense, it's really AI.
00:02:30.400 | It's just a different kind of AI.
00:02:33.640 | Ray Kurzweil wrote about narrow AI versus strong AI.
00:02:37.960 | But that seemed weird to me because,
00:02:41.320 | first of all, narrow and strong are not antonyms.
00:02:44.440 | (laughing)
00:02:45.280 | - That's right.
00:02:46.360 | - But secondly, strong AI was used
00:02:49.560 | in the cognitive science literature
00:02:51.000 | to mean the hypothesis that digital computer AIs
00:02:54.240 | could have true consciousness like human beings.
00:02:57.760 | So there was already a meaning to strong AI,
00:03:00.160 | which was completely different but related, right?
00:03:04.080 | So we were tossing around on an email list
00:03:08.160 | what the title should be.
00:03:10.840 | And so we talked about narrow AI, broad AI, wide AI,
00:03:15.200 | general AI.
00:03:17.400 | And I think it was either Shane Legg or Peter Voss
00:03:22.400 | on the private email discussion we had.
00:03:25.760 | He said, "Well, why don't we go with AGI,
00:03:27.800 | "artificial general intelligence?"
00:03:29.440 | And Pei Wang wanted to do GAI,
00:03:31.920 | general artificial intelligence,
00:03:33.400 | 'cause in Chinese it goes in that order.
00:03:35.520 | But we figured "GAI" (sounding like "gay") wouldn't work
00:03:37.840 | in US culture at that time, right?
00:03:40.880 | So we went with AGI.
00:03:44.960 | We used it for the title of that book.
00:03:47.120 | And part of Peter and Shane's reasoning
00:03:51.080 | was you have the G factor in psychology,
00:03:53.080 | which is IQ, general intelligence, right?
00:03:55.120 | So you have a meaning of GI,
00:03:57.120 | general intelligence in psychology.
00:03:59.840 | So then you're looking like artificial GI.
00:04:03.040 | So then--
00:04:03.880 | - Oh, that makes a lot of sense, I think.
00:04:05.640 | - Yeah, we used that for the title of the book.
00:04:08.040 | And so I think, maybe both Shane and Peter
00:04:11.680 | think they invented the term.
00:04:12.840 | But then later, after the book was published,
00:04:15.960 | this guy Mark Gubrud came up to me,
00:04:18.800 | and he's like, "Well, I published an essay
00:04:21.240 | "with the term AGI in like 1997 or something."
00:04:24.760 | And so I'm just waiting for some Russian to come out
00:04:28.160 | and say they published that in 1953, right?
00:04:31.040 | I mean, that term-- - For sure.
00:04:32.880 | - That term is not dramatically innovative or anything.
00:04:35.960 | It's one of these obvious, in hindsight, things,
00:04:39.240 | which is also annoying in a way,
00:04:42.520 | because, you know, Joscha Bach, who you interviewed,
00:04:47.120 | is a close friend of mine.
00:04:48.040 | He likes the term synthetic intelligence,
00:04:50.880 | which I like much better,
00:04:51.920 | but it hasn't actually caught on, right?
00:04:54.680 | Because, I mean, artificial is a bit off to me,
00:04:59.400 | 'cause artifice is like a tool or something,
00:05:02.240 | but not all AGIs are gonna be tools.
00:05:05.400 | I mean, they may be now,
00:05:06.320 | but we're aiming toward making them agents
00:05:08.240 | rather than tools.
00:05:10.440 | And in a way, I don't like the distinction
00:05:12.480 | between artificial and natural,
00:05:14.880 | because, I mean, we're part of nature also,
00:05:17.040 | and machines are part of nature.
00:05:19.800 | I mean, you can look at evolved versus engineered,
00:05:22.520 | but that's a different distinction.
00:05:24.840 | Then it should be engineered general intelligence, right?
00:05:27.680 | And then general, well,
00:05:29.600 | if you look at Marcus Hutter's book "Universal AI,"
00:05:33.080 | what he argues there is, you know,
00:05:35.880 | within the domain of computation theory,
00:05:38.180 | which is limited but interesting,
00:05:39.600 | so if you assume computable environments,
00:05:41.360 | or computable reward functions,
00:05:43.280 | then he articulates
00:05:44.920 | what would be a truly general intelligence,
00:05:47.720 | a system called AIXI, which is quite beautiful.
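[For context: the AIXI agent mentioned here is usually written in one line. This is a sketch in Hutter's standard notation, where a denotes actions, o observations, r rewards, m the horizon, and U a universal Turing machine; it is not Goertzel's own formalism.]

```latex
% AIXI action selection: at each step the agent picks the action
% maximizing expected future reward, where candidate environments q
% are weighted by their simplicity 2^{-\ell(q)} on a universal
% Turing machine U (shorter programs get more weight).
a_t := \arg\max_{a_t} \sum_{o_t r_t} \cdots \max_{a_m} \sum_{o_m r_m}
  \left( r_t + \cdots + r_m \right)
  \sum_{q \,:\, U(q,\, a_1 \ldots a_m) \,=\, o_1 r_1 \ldots o_m r_m} 2^{-\ell(q)}
```

The outer max/sum alternation is an expectimax over the agent's own future actions and the environment's responses, which is why computing it exactly requires unbounded computation, as discussed below.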
00:05:50.840 | - AIXI.
00:05:51.680 | - AIXI, and that's the middle name
00:05:53.680 | of my latest child, actually.
00:05:57.000 | - What's the first name?
00:05:57.840 | - First name is QORXI, Q-O-R-X-I,
00:06:00.080 | which my wife came up with,
00:06:01.440 | but that's an acronym for
00:06:03.080 | quantum organized rational expanding intelligence.
00:06:06.560 | And his middle name is XIPHONES, actually,
00:06:11.320 | which refers to the formal principle underlying AIXI.
00:06:16.020 | But in any case--
00:06:17.160 | - You're giving Elon Musk's new child a run for his money.
00:06:19.800 | - Well, I did it first.
00:06:21.440 | He copied me with this new freakish name.
00:06:24.960 | But now if I have another baby,
00:06:26.240 | I'm gonna have to outdo him.
00:06:27.680 | - Outdo him.
00:06:28.520 | - It's become an arms race of weird, geeky baby names.
00:06:32.200 | We'll see what the babies think about it, right?
00:06:34.000 | - Yeah.
00:06:34.840 | - But, I mean, my oldest son, Zarathustra, loves his name,
00:06:37.840 | and my daughter, Sherazade, loves her name.
00:06:40.920 | So, so far, basically, if you give your kids weird names--
00:06:44.600 | - They live up to it.
00:06:45.520 | - Well, you're obliged to make the kids weird enough
00:06:47.480 | that they like the names, right?
00:06:49.240 | It directs their upbringing in a certain way.
00:06:51.600 | But, yeah, anyway, I mean, what Marcus showed in that book
00:06:55.360 | is that a truly general intelligence,
00:06:58.200 | theoretically, is possible,
00:06:59.480 | but would take infinite computing power.
00:07:01.520 | So then the artificial is a little off.
00:07:04.020 | The general is not really achievable within physics,
00:07:07.480 | as we know it.
00:07:08.960 | And, I mean, physics, as we know it, may be limited,
00:07:11.160 | but that's what we have to work with now.
00:07:12.960 | Intelligence--
00:07:13.800 | - Infinitely general, you mean, like,
00:07:15.280 | from an information processing perspective, yeah.
00:07:18.120 | - Yeah, intelligence is not very well-defined, either.
00:07:22.440 | I mean, what does it mean?
00:07:24.440 | I mean, in AI now, it's fashionable to look at it
00:07:27.200 | as maximizing an expected reward over the future,
00:07:31.000 | but that sort of definition is pathological in various ways.
00:07:35.480 | And my friend David Weinbaum, aka Weaver,
00:07:38.960 | he had a beautiful PhD thesis on open-ended intelligence,
00:07:42.520 | trying to conceive intelligence in a--
00:07:44.520 | - Without a reward.
00:07:45.880 | Without-- - Yeah, he's just looking
00:07:47.120 | at it differently.
00:07:47.960 | He's looking at complex self-organizing systems
00:07:50.360 | and looking at an intelligence system as being one
00:07:53.040 | that revises and grows and improves itself
00:07:56.560 | in conjunction with its environment
00:07:59.400 | without necessarily there being one objective function
00:08:02.560 | it's trying to maximize.
00:08:03.720 | Although, over certain intervals of time,
00:08:06.200 | it may act as if it's optimizing
00:08:07.640 | a certain objective function.
00:08:09.040 | Very much Solaris from Stanislaw Lem's novels, right?
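[For context: the "maximizing an expected reward over the future" definition of intelligence alluded to above is, in standard reinforcement-learning notation, a sketch like the following; the symbols are the usual RL ones, not Goertzel's.]

```latex
% Expected discounted return of a policy \pi, with per-step
% rewards r_t and discount factor 0 \le \gamma < 1.
J(\pi) = \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{\infty} \gamma^{t}\, r_{t} \right]
```

One often-cited pathology of taking this as the definition of intelligence is that any behavior maximizing J counts as "intelligent" regardless of how it does so, which is part of what the open-ended-intelligence view above pushes back against.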
00:08:12.240 | So yeah, the point is, artificial general and intelligence--
00:08:15.560 | - Don't work. - They're all bad.
00:08:16.840 | On the other hand, everyone knows what AI is,
00:08:19.720 | and AGI seems immediately comprehensible
00:08:23.560 | to people with a technical background.
00:08:25.240 | So I think that the term has served
00:08:27.080 | a sociological function.
00:08:28.400 | Now it's out there everywhere, which baffles me.
00:08:32.440 | - It's like KFC, I mean, that's it.
00:08:34.760 | We're stuck with AGI probably for a very long time
00:08:37.880 | until AGI systems take over and rename themselves.
00:08:41.320 | - Yeah, and then we'll be--
00:08:43.400 | - We're stuck with GPUs too,
00:08:45.240 | which mostly have nothing to do with graphics anymore.
00:08:48.200 | - I wonder what the AGI system will call us humans.
00:08:50.920 | That was maybe-- - Grandpa.
00:08:52.640 | (laughing)
00:08:54.280 | - GPUs. (laughing)
00:08:56.040 | - Grandpa processing unit.
00:08:58.040 | - Biological grandpa processing units.
00:09:00.000 | (laughing)
00:09:02.280 | (upbeat music)
00:09:04.880 | (upbeat music)
00:09:07.480 | (upbeat music)
00:09:10.080 | (upbeat music)
00:09:12.680 | (upbeat music)
00:09:15.280 | (upbeat music)