Pamela McCorduck: Machines Who Think and the Early Days of AI | Lex Fridman Podcast #34
Chapters
0:00 Introduction
5:12 The Four Founding Fathers
11:44 The Iliad
20:10 The Literary Problem
20:59 Frankenstein
21:39 The Villain in Frankenstein
47:14 The Geriatric Robot
52:06 Concerns about AI, the Existential Threats
55:04 The Four Possible Futures for Women in Tech
00:00:00.000 |
The following is a conversation with Pamela McCorduck. She's an author who has written on 00:00:04.800 |
the history and the philosophical significance of artificial intelligence. Her books include 00:00:10.400 |
Machines Who Think in 1979, The Fifth Generation in 1983 with Ed Feigenbaum, who's considered to 00:00:18.160 |
be the father of expert systems, The Edge of Chaos, The Futures of Women, and many more books. 00:00:24.800 |
I came across her work in an unusual way by stumbling on a quote from Machines Who Think 00:00:30.320 |
that is something like, "Artificial intelligence began with the ancient wish to forge the gods." 00:00:36.880 |
That was a beautiful way to draw a connecting line between our societal relationship with AI 00:00:42.960 |
from the grounded day-to-day science, math, and engineering to popular stories and science fiction 00:00:50.000 |
and myths of automatons that go back for centuries. Through her literary work, 00:00:55.520 |
she has spent a lot of time with the seminal figures of artificial intelligence, including 00:01:01.040 |
the founding fathers of AI from the 1956 Dartmouth Summer Workshop where the field was launched. 00:01:08.960 |
I reached out to Pamela for a conversation in hopes of getting a sense of what those early 00:01:14.400 |
days were like and how their dreams continue to reverberate through the work of our community today. 00:01:20.160 |
I often don't know where the conversation may take us, but I jump in and see. Having no constraints, 00:01:26.880 |
rules, or goals is a wonderful way to discover new ideas. This is the Artificial Intelligence Podcast. 00:01:33.760 |
If you enjoy it, subscribe on YouTube, give it five stars on iTunes, support it on Patreon, 00:01:39.680 |
or simply connect with me on Twitter @lexfridman, spelled F-R-I-D-M-A-N. 00:01:45.600 |
And now, here's my conversation with Pamela McCorduck. 00:01:49.760 |
In 1979, your book, Machines Who Think, was published. In it, you interview some of the 00:01:58.240 |
early AI pioneers and explore the idea that AI was born not out of maybe math and computer science, 00:02:06.320 |
but out of myth and legend. So, tell me if you could the story of how you first 00:02:13.360 |
arrived at the book, the journey of beginning to write it. 00:02:18.000 |
I had been a novelist. I'd published two novels. And I was sitting under the portal at Stanford 00:02:29.680 |
one day, the house we were renting for the summer, and I thought, "I should write a novel about these 00:02:34.480 |
weird people in AI I know." And then I thought, "Ah, don't write a novel, write a history. Simple, 00:02:41.920 |
just go around, interview them, splice it together, voila, instant book. Ha, ha, ha." It was 00:02:48.880 |
much harder than that. But nobody else was doing it. And so, I thought, "Oh, this is a great 00:02:54.960 |
opportunity." And there were people who, John McCarthy, for example, thought it was a nutty 00:03:04.400 |
idea. The field had not evolved yet, so on. And he had some mathematical thing he thought I should 00:03:11.680 |
write instead. And I said, "No, John, I am not a woman in search of a project. This is what I want 00:03:18.480 |
to do. I hope you'll cooperate." And he said, "Oh, mutter, mutter. Well, okay, it's your time." 00:03:24.800 |
What was the pitch for the, I mean, such a young field at that point? How do you write a personal 00:03:31.920 |
history of a field that's so young? I said, "This is wonderful. The founders of the field 00:03:37.920 |
are alive and kicking and able to talk about what they're doing." Did they sound or feel like 00:03:43.120 |
founders at the time? Did they know that they have founded something? Oh, yeah. They knew what 00:03:49.760 |
they were doing was very important. Very. What I now see in retrospect is that they were at the 00:03:58.560 |
height of their research careers. And it's humbling to me that they took time out from all the things 00:04:06.240 |
that they had to do as a consequence of being there. And to talk to this woman who said, 00:04:12.720 |
"I think I'm going to write a book about you." No, it was amazing. Just amazing. 00:04:17.040 |
- So who stands out to you? Maybe looking back 63 years, to the Dartmouth conference. 00:04:25.120 |
So Marvin Minsky was there. McCarthy was there. Claude Shannon, Allen Newell, Herb Simon, 00:04:32.960 |
some of the folks you've mentioned. Then there's other characters, right? One of your co-authors. 00:04:40.160 |
- He wasn't at Dartmouth. - He wasn't at Dartmouth. 00:04:43.920 |
- No. He was, I think, an undergraduate then. - And of course, Joe Traub. I mean, 00:04:50.960 |
all of these are players, not at Dartmouth, but in that era. It's CMU and so on. So who are the 00:04:59.280 |
characters, if you could paint a picture, that stand out to you from memory? Those people you've 00:05:04.880 |
interviewed and maybe not, people that were just in the... - In the atmosphere. 00:05:09.680 |
- In the atmosphere. - Of course, the four founding 00:05:13.200 |
fathers were extraordinary guys. They really were. - Who are the founding fathers? 00:05:17.040 |
- Allen Newell, Herbert Simon, Marvin Minsky, John McCarthy. They were the four who were not only at 00:05:24.800 |
the Dartmouth conference, but Newell and Simon arrived there with a working program called the 00:05:29.920 |
Logic Theorist. Everybody else had great ideas about how they might do it, but they weren't 00:05:37.520 |
going to do it yet. And you mentioned Joe Traub, my husband. I was immersed in AI before I met Joe, 00:05:48.640 |
because I had been Ed Feigenbaum's assistant at Stanford. And before that, I had worked on a book 00:05:56.080 |
edited by Feigenbaum and Julian Feldman called Computers and Thought. It was the first textbook 00:06:05.600 |
of readings of AI. And they only did it because they were trying to teach AI to people at Berkeley, 00:06:12.080 |
and there was nothing, you'd have to send them to this journal and that journal. This was not 00:06:15.920 |
the internet where you could go look at an article. So, I was fascinated from the get-go by AI. I was 00:06:23.920 |
an English major. What did I know? And yet I was fascinated. And that's why you saw that 00:06:32.320 |
historical, that literary background, which I think is very much a part of the continuum of AI. 00:06:44.640 |
- Yeah, that tradition. What drew you to AI? How did you even think of it back then? What 00:06:53.040 |
was the possibilities, the dreams? What was interesting to you? 00:06:56.880 |
- The idea of intelligence outside the human cranium, this was a phenomenal idea. And even 00:07:06.480 |
when I finished Machines Who Think, I didn't know if they were going to succeed. In fact, 00:07:11.600 |
the final chapter is very wishy-washy, frankly. Well, succeed the field did. 00:07:19.040 |
- Yeah, so was there the idea that AI began with the wish to forge the gods? So, the 00:07:27.040 |
spiritual component that we crave to create this other thing greater than ourselves. 00:07:34.240 |
- For those guys, I don't think so. Newell and Simon were cognitive psychologists. What they 00:07:42.720 |
wanted was to simulate aspects of human intelligence, and they found they could do it 00:07:50.640 |
on the computer. Minsky just thought it was a really cool thing to do. Likewise, McCarthy. 00:08:00.080 |
McCarthy had got the idea in 1949 when he was a Caltech student. And he listened to somebody's 00:08:10.400 |
lecture. It's in my book. I forget who it was. And he thought, "Oh, that would be fun to do. How 00:08:17.200 |
do we do that?" And he took a very mathematical approach. Minsky was hybrid, and Newell and Simon 00:08:24.960 |
were very much cognitive psychology. How can we simulate various things about human cognition? 00:08:34.000 |
What happened over the many years is, of course, our definition of intelligence expanded tremendously. 00:08:41.680 |
I mean, these days, biologists are comfortable talking about the intelligence of a cell, 00:08:47.360 |
the intelligence of the brain, not just human brain, but the intelligence of any kind of brain. 00:08:56.720 |
Cephalopods, I mean, an octopus is really intelligent by any account. We wouldn't have 00:09:03.600 |
thought of that in the '60s, even the '70s. So, all these things have worked in. And I did hear 00:09:11.120 |
one behavioral primatologist, Frans de Waal, say, "AI taught us the questions to ask." 00:09:20.640 |
- Yeah, this is what happens, right? When you try to build it is when you start to actually 00:09:27.760 |
ask questions. It puts a mirror to ourselves. - Yeah, right. 00:09:31.040 |
- So, you were there in the middle of it. It seems like not many people were asking the questions 00:09:38.640 |
that you were, or just trying to look at this field the way you were. 00:09:42.640 |
- I was solo. When I went to get funding for this, because I needed somebody to transcribe the 00:09:50.320 |
interviews, and I needed travel expenses, I went to everything you could think of: the NSF, 00:10:01.120 |
DARPA. There was an Air Force place that doled out money. And each of them said, "Well, that was 00:10:12.720 |
very interesting. That's a very interesting idea, but we'll think about it." And the National Science 00:10:22.000 |
Foundation actually said to me in plain English, "Hey, you're only a writer. You're not a historian 00:10:27.600 |
of science." And I said, "Yeah, that's true, but the historians of science will be crawling all 00:10:32.960 |
over this field. I'm writing for the general audience." So, I thought. And they still wouldn't 00:10:40.480 |
budge. I finally got a private grant without knowing who it was from, from Ed Fredkin at MIT. 00:10:47.280 |
He was a wealthy man, and he liked what he called crackpot ideas. And he considered 00:10:54.720 |
this a crackpot idea, and he was willing to support it. I am ever grateful, let me say that. 00:10:59.920 |
- You know, some would say that a history of science approach to AI, or even just a history, 00:11:07.040 |
or anything like the book that you've written, hasn't been written since. Maybe I'm not familiar, 00:11:14.400 |
but it's certainly not many. If we think about bigger than just these couple of decades, 00:11:20.720 |
few decades, what are the roots of AI? - Oh, they go back so far. Yes, of course, 00:11:30.320 |
there's all the legendary stuff, the golem and the early robots of the 20th century. 00:11:38.800 |
But they go back much further than that. If you read Homer, Homer has robots in the Iliad. And 00:11:46.560 |
a classical scholar was pointing out to me just a few months ago, "Well, you said you just read 00:11:52.800 |
the Odyssey. The Odyssey is full of robots." "It is?" I said. "Yeah, how do you think Odysseus' 00:11:59.120 |
ship gets from one place to another? He doesn't have the crew people to do that, the crewmen." 00:12:03.760 |
"Yeah, it's magic, it's robots." "Oh," I thought, "how interesting." 00:12:09.840 |
So we've had this notion of AI for a long time. And then toward the end of the 19th century, 00:12:19.200 |
the beginning of the 20th century, there were scientists who actually tried to 00:12:24.000 |
make this happen some way or another. Not successfully, they didn't have the technology 00:12:29.200 |
for it. And of course, Babbage, in the 1850s and '60s, he saw that what he was building was capable 00:12:40.080 |
of intelligent behavior. And when he ran out of funding, the British government finally said, 00:12:47.040 |
"That's enough." He and Lady Lovelace decided, "Oh, well, why don't we play the ponies with this?" 00:12:54.720 |
He had other ideas for raising money too. - But if we actually reach back once again, 00:13:01.520 |
I think people don't actually really know that robots do appear, or the ideas of robots. You 00:13:07.440 |
talk about the Hellenic and the Hebraic points of view. - Oh, yes. - Can you tell me about each? 00:13:14.240 |
- I defined it this way. The Hellenic point of view is robots are great. They are party help. 00:13:22.400 |
They help this guy Hephaestus, this god Hephaestus in his forge. I presume he made them to help him, 00:13:30.240 |
and so on and so forth. And they welcome the whole idea of robots. The Hebraic view has to do with, 00:13:39.360 |
I think it's the second commandment, "Thou shalt not make any graven image." In other words, 00:13:46.640 |
you better not start imitating humans because that's just forbidden. It's the second commandment. 00:13:54.480 |
And a lot of the reaction to artificial intelligence has been a sense that this is 00:14:07.440 |
somehow wicked. This is somehow blasphemous. We shouldn't be going there. Now, you can say, 00:14:16.880 |
"Yeah, but there are going to be some downsides." And I say, "Yes, there are, 00:14:20.080 |
but blasphemy is not one of them." - You know, there is a kind of fear that feels to be almost 00:14:25.920 |
primal. Is there religious roots to that? Because so much of our society has religious roots, 00:14:33.760 |
and so there is a feeling of, like you said, blasphemy, of creating the other, of creating 00:14:40.240 |
something, you know, it doesn't have to be artificial intelligence, it's creating life 00:14:45.520 |
in general. It's the Frankenstein idea. - There's the annotated Frankenstein on my 00:14:51.440 |
coffee table. It's a tremendous novel. It really is just beautifully perceptive. Yes, 00:15:01.280 |
we do fear this, and we have good reason to fear it, because it can get out of hand. 00:15:06.880 |
- Maybe you can speak to that fear, the psychology, if you've thought about it. You know, 00:15:11.360 |
there's a practical set of fears, concerns in the short term. You can think of, if we actually think 00:15:16.320 |
about artificial intelligence systems, you can think about bias, of discrimination in algorithms, 00:15:24.400 |
you can think about how social networks have algorithms that recommend the content you see, 00:15:33.120 |
and thereby these algorithms control the behavior of the masses. There are these concerns. 00:15:37.760 |
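The dynamic being pointed at here, a feed ranked by engagement that ends up steering the very engagement it measures, can be seen in a toy simulation. This is a minimal sketch under invented assumptions; the item names and the click model are made up, not any real platform's algorithm.

```python
# Toy feedback loop: a recommender that ranks purely by past clicks.
import random

random.seed(0)
clicks = {item: 1 for item in "ABCDEFGH"}  # all items start out equal

for _ in range(10_000):
    # Show the user the three most-clicked items so far.
    shown = sorted(clicks, key=clicks.get, reverse=True)[:3]
    # The user clicks something they were shown (uniformly, for simplicity).
    clicks[random.choice(shown)] += 1

print(clicks)
```

After the first round, only the three items that happened to rank first ever get shown again, so they absorb all subsequent clicks: the ranking manufactured the popularity it was supposedly measuring.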
But to me, it feels like the fear that people have is deeper than that. So, 00:15:44.320 |
have you thought about the psychology of it? - I think in a superficial way I have. 00:15:50.640 |
There is this notion that if we produce a machine that can think, it will outthink us, 00:15:59.760 |
and therefore replace us. - I guess that's a primal fear of almost 00:16:05.840 |
a kind of mortality. So, around the time you said you worked at Stanford with Ed Feigenbaum. 00:16:18.640 |
So, let's look at that one person throughout his history, clearly a key person, one of the many 00:16:24.640 |
in the history of AI. How has he changed? How has the field around him changed? How has Stanford changed in the last, 00:16:33.680 |
how many years are we talking about here? - Oh, since '65. 00:16:38.400 |
- '65. So, maybe it doesn't have to be about him, it could be bigger, but because he was a key person 00:16:45.200 |
in expert systems, for example. How are these folks who you've interviewed 00:16:51.440 |
in the '70s, '79, changed through the decades? - 00:16:59.480 |
In Ed's case, I know him well, we are dear friends, we see each other 00:17:09.760 |
every month or so. He told me that when Machines Who Think first came out, he really thought all 00:17:15.760 |
the front matter was kind of baloney. And 10 years later, he said, "No, I see what you're getting at. 00:17:25.200 |
Yes, this is an impulse that has been, this has been a human impulse for thousands of years, 00:17:31.440 |
to create something outside the human cranium that has intelligence." 00:17:36.720 |
- I think it's very hard when you're down at the algorithmic level, and you're just trying to make 00:17:47.680 |
something work, which is hard enough, to step back and think of the big picture. It reminds me of 00:17:55.920 |
when I was in Santa Fe, I knew a lot of archaeologists, which was a hobby of mine. And 00:18:04.400 |
I would say, "Yeah, yeah, well, you can look at the shards and say, oh, this came from this 00:18:09.040 |
tribe, and this came from this trade route, and so on, but what about the big picture?" 00:18:14.560 |
And a very distinguished archaeologist said to me, "They don't think that way. No, they're trying to 00:18:23.120 |
match the shard to where it came from. That's, you know, where did this corn, the remainder of this 00:18:30.160 |
corn come from? Was it grown here? Was it grown elsewhere?" And I think this is part of AI, of 00:18:36.320 |
any scientific field. You're so busy doing the hard work, and it is hard work, 00:18:45.200 |
that you don't step back and say, "Oh, well, now let's talk about the, you know, 00:18:49.920 |
the general meaning of all this." - So, none of the, even Minsky and McCarthy, 00:18:58.080 |
they-- - Oh, those guys did, yeah. The founding fathers did. - Early on, or-- - Pretty early on. 00:19:04.560 |
- Yeah. - But in a different way from how I looked at it, the two cognitive psychologists, 00:19:11.200 |
Newell and Simon, they wanted to reform cognitive psychology so that we would really, 00:19:20.960 |
really understand the brain. Minsky was more speculative, and John McCarthy saw it as, 00:19:31.520 |
I think I'm doing him right by this, he really saw it as a great boon for human beings to have 00:19:40.320 |
this technology, and that was reason enough to do it. And he had wonderful, wonderful 00:19:48.880 |
fables about how if you do the mathematics, you will see that these things are really good for 00:19:56.800 |
human beings. And if you had a technological objection, he had an answer, a technological 00:20:03.440 |
answer, but here's how we could get over that, and then blah, blah, blah, blah. And one of his 00:20:09.600 |
favorite things was what he called the literary problem, which of course he presented to me 00:20:14.960 |
several times, that is, everything in literature, there are conventions in literature. One of the 00:20:22.800 |
conventions is that you have a villain and a hero, and the hero in most literature is human, 00:20:36.160 |
and the villain in most literature is a machine. And he said, "No, that's just not the way it's 00:20:41.600 |
going to be." But that's the way we're used to it, so when we tell stories about AI, 00:20:46.000 |
it's always with this paradigm. I thought, "Yeah, he's right." You know, looking back, 00:20:51.840 |
the classics, R.U.R. is certainly the machines trying to overthrow the humans. 00:20:59.680 |
Frankenstein is different. The creature never has a name. Frankenstein, 00:21:10.160 |
of course, is the guy who created him, the human, Dr. Frankenstein. This creature wants to be loved, 00:21:19.280 |
wants to be accepted, and it is only when Frankenstein turns his head, in fact, 00:21:26.400 |
runs the other way, and the creature is without love, that he becomes the monster that he later 00:21:38.000 |
becomes. So who's the villain in Frankenstein? It's unclear, right? Oh, it is unclear, yeah. 00:21:45.520 |
It's really the people who drive him. By driving him away, they bring out the worst. That's right. 00:21:54.720 |
They give him no human solace, and he is driven away, you're right. He becomes, at one point, 00:22:04.560 |
the friend of a blind man. He serves this blind man, and they become very friendly. 00:22:12.000 |
But when the sighted people of the blind man's family come in, "Ah, you've got a monster here." 00:22:19.760 |
So it's very didactic in its way. What I didn't know is that Mary Shelley and Percy Shelley were 00:22:28.640 |
great readers of the literature surrounding abolition in the United States, the abolition 00:22:34.480 |
of slavery, and they picked that up wholesale. You are making monsters of these people because 00:22:42.000 |
you won't give them the respect and love that they deserve. Do you have, if we get philosophical for 00:22:50.080 |
a second, do you worry that once we create machines that are a little bit more intelligent, 00:22:57.280 |
let's look at the Roomba vacuum cleaner, that this darker part of human nature where we 00:23:04.720 |
abuse the other, the somebody who's different, will come out? I don't worry about it. I could 00:23:15.120 |
imagine it happening. But I think that what AI has to offer the human race will be so attractive 00:23:25.680 |
that people will be won over. - So you have looked deep into these people, 00:23:33.200 |
had deep conversations, and it's interesting to get a sense of stories of the way they were 00:23:40.640 |
thinking and the way it was changed, the way your own thinking about AI has changed. So you mentioned 00:23:45.040 |
Mr. McCarthy. What about the years at CMU, Carnegie Mellon, with Joe? - Sure. Joe was not in AI, he was 00:23:58.960 |
in algorithmic complexity. - Was there always a line between AI and computer science, for example? 00:24:07.280 |
Is AI its own place of outcasts? Was that the feeling? - There was a kind of outcast 00:24:14.320 |
period for AI. For instance, in 1974, the new field of computer science was hardly 00:24:21.360 |
10 years old. It was asked by the National Science Foundation, 00:24:31.680 |
I believe, but it may have been the National Academies, I can't remember, 00:24:34.400 |
to tell their fellow scientists where computer science is and what it means. 00:24:44.160 |
And they wanted to leave out AI. And they only agreed to put it in because Don Knuth said, 00:24:53.520 |
"Hey, this is important. You can't just leave that out." - Really? Don, dude? - Don Knuth, yes. 00:24:59.280 |
- I talked to him recently, too. So out of all the people. - Yes. But you see, an AI person couldn't 00:25:06.080 |
have made that argument. He wouldn't have been believed. But Knuth was believed, yes. - So 00:25:11.840 |
Joe Traub worked on the real stuff. - Joe was working on algorithmic complexity, but he would 00:25:19.680 |
say in plain English again and again, "The smartest people I know are in AI." - Really? - Oh, yes. No 00:25:26.080 |
question. Anyway, Joe loved these guys. What happened was that, I guess it was as I started 00:25:36.480 |
to write Machines Who Think, Herb Simon and I became very close friends. He would walk past our 00:25:42.960 |
house on Northumberland Street every day after work. And I would just be putting my cover on my 00:25:48.720 |
typewriter. And I would lean out the door and say, "Herb, would you like a sherry?" And Herb almost 00:25:55.840 |
always would like a sherry. So he'd stop in. And we'd talk for an hour, two hours. My journal says 00:26:04.000 |
we talked this afternoon for three hours. - What was on his mind at the time in terms of 00:26:10.000 |
on the AI side of things? - Oh, we didn't talk too much about AI. We talked about other things. 00:26:14.720 |
- Just life. - We both love literature. And Herb had read Proust in the original French 00:26:23.280 |
twice all the way through. I can't. I read it in English in translation. So we talked about 00:26:29.680 |
literature. We talked about languages. We talked about music because he loved music. We talked 00:26:36.000 |
about art because he was actually enough of a painter that he had to give it up because he was 00:26:44.720 |
afraid it was interfering with his research and so on. So no, it was really just chat, chat, 00:26:50.960 |
but it was very warm. So one summer I said to Herb, "You know, my students have all the really 00:26:59.360 |
interesting conversations." I was teaching at the University of Pittsburgh then in the English 00:27:03.920 |
department. You know, they get to talk about the meaning of life and that kind of thing. 00:27:08.880 |
And what do I have? I have university meetings where we talk about the photocopying budget and, 00:27:15.120 |
you know, whether the course on romantic poetry should be one semester or two. So Herb laughed. 00:27:22.080 |
He said, "Yes, I know what you mean." He said, "But you know, you could do something about that." 00:27:27.680 |
Dot, that was his wife, Dot and I used to have a salon at the University of Chicago every Sunday 00:27:34.320 |
night and we would have essentially an open house and people knew it wasn't for a small talk. It was 00:27:43.200 |
really for some topic of depth. He said, "But my advice would be that you choose the topic ahead 00:27:52.240 |
of time." "Fine," I said. So, we exchanged mail over the summer. That was US Post in those days 00:28:01.280 |
because you didn't have personal email. And I decided I would organize it and there would be 00:28:10.720 |
eight of us--Allen Newell and his wife, Herb Simon and his wife, Dorothea. There was a novelist in 00:28:20.880 |
town, a man named Mark Harris. He had just arrived. And his wife, Josephine. Mark was most famous 00:28:29.040 |
then for a novel called Bang the Drum Slowly, which was about baseball. And Joe and me, so eight 00:28:35.600 |
people. And we met monthly and we just sank our teeth into really hard topics and it was great 00:28:45.440 |
fun. How have your own views around artificial intelligence changed through the process of 00:28:54.480 |
writing Machines Who Think and afterwards, the ripple effects? I was a little skeptical that 00:29:01.760 |
this whole thing would work out. It didn't matter. To me, it was so audacious. The whole thing being 00:29:06.640 |
AI. AI generally, yeah. And in some ways, it hasn't worked out the way I expected so far. 00:29:17.680 |
That is to say, there's this wonderful lot of apps, thanks to deep learning and so on. 00:29:26.880 |
But those are algorithmic. And in the part of symbolic processing, there's very little yet. 00:29:37.920 |
And that's a field that lies waiting for industrious graduate students. 00:29:45.600 |
Maybe you can tell me some figures that popped up in your life in the 80s with expert systems, 00:29:53.040 |
where there was the symbolic AI possibilities of what's... What most people think of as AI, 00:30:00.320 |
if you dream of the possibilities of AI, it's really expert systems. And those hit a few walls 00:30:07.520 |
and there were challenges there. And I think, yes, they will reemerge again with some new 00:30:12.160 |
breakthroughs and so on. But what did that feel like, both the possibility and the winter that 00:30:17.760 |
followed, the slowdown in research? - Ah, you know, this whole thing 00:30:22.560 |
about AI winter is to me, a crock. - Snow winters. 00:30:26.960 |
- Because I look at the basic research that was being done in the 80s, which is supposed to be, 00:30:33.760 |
my God, it was really important. It was laying down things that nobody had thought about before, 00:30:40.320 |
but it was basic research. You couldn't monetize it. Hence the winter. 00:30:47.600 |
You know, research, scientific research goes in fits and starts. It isn't this nice, smooth, 00:30:53.600 |
"Oh, this follows this, follows this." No, it just doesn't work that way. 00:30:59.280 |
- The interesting thing, the way winters happen, it's never the fault of the researchers. 00:31:03.600 |
It's some source of hype, of over-promising. Well, no, let me take that back. Sometimes it 00:31:12.000 |
is the fault of the researchers. Sometimes certain researchers might over-promise the 00:31:17.280 |
possibilities. They themselves believe that we're just a few years away. Sort of just recently 00:31:23.600 |
talked to Elon Musk, and he believes he'll have autonomous 00:31:27.760 |
vehicles in a year. And he believes it. - A year? 00:31:30.640 |
- A year, yeah. With mass deployment of autonomous. - For the record, this is 2019 right now. 00:31:37.040 |
So he's talking 2020. - To do the impossible, 00:31:40.480 |
you really have to believe it. And I think what's going to happen when you believe it, 00:31:45.200 |
'cause there's a lot of really brilliant people around him, is some good stuff will come out of 00:31:50.160 |
it. Some unexpected, brilliant breakthroughs will come out of it when you really believe it, 00:31:55.360 |
when you work that hard. - I believe that. And I believe 00:31:58.560 |
autonomous vehicles will come. I just don't believe it'll be in a year. I wish. 00:32:02.720 |
- But nevertheless, there is, autonomous vehicles is a good example. There's a feeling many companies 00:32:09.920 |
have promised by 2021, by 2022, Ford, GM, basically every single automotive company has promised 00:32:17.520 |
they'll have autonomous vehicles. So that kind of over promise is what leads to the winter. 00:32:22.480 |
Because we'll come to those dates, there won't be autonomous vehicles. And there'll be a feeling, 00:32:28.320 |
well wait a minute, if we took your word at that time, that means we just spent billions of dollars, 00:32:35.440 |
had made no money, and there's a counter response to where everybody gives up on it. 00:32:41.600 |
Sort of intellectually, at every level, the hope just dies. And all that's left is a few 00:32:49.200 |
basic researchers. So you're uncomfortable with some aspects of this idea? - Well, it's the 00:32:56.320 |
difference between science and commerce. - So you think science goes on the way it does? 00:33:04.240 |
- Oh, science can really be killed by not getting proper funding, or timely funding. 00:33:11.920 |
I think Great Britain was a perfect example of that. The Lighthill Report in, 00:33:19.360 |
I don't remember the year, essentially said, there's no use Great Britain putting any money 00:33:26.480 |
into this, it's going nowhere. And this was all about social factions in Great Britain. 00:33:35.520 |
Edinburgh hated Cambridge, and Cambridge hated Manchester, and somebody else can write that 00:33:44.320 |
story. But it really did have a hard effect on research there. Now, they've come roaring back 00:33:53.760 |
with DeepMind, but that's one guy and his visionaries around him. 00:34:01.360 |
- But just to push on that, it's kind of interesting, you have this dislike of the 00:34:06.720 |
idea of an AI winter. Where's that coming from? - Oh, because I just don't think it's true. 00:34:15.440 |
- There was particular periods of time. It's a romantic notion, certainly. 00:34:21.280 |
- Yeah, well. No, I admire science perhaps more than I admire commerce. Commerce is fine. Hey, 00:34:32.880 |
we all got to live. But science has a much longer view than commerce, and continues 00:34:46.640 |
almost regardless. It can't continue totally regardless, but almost regardless of what's 00:34:56.400 |
saleable and what's not, what's monetizable and what's not. - So the winter is just something 00:35:01.680 |
that happens on the commerce side, and the science marches on. That's a beautifully 00:35:08.720 |
optimistic and inspiring message. I agree with you. I think if we look at the key people that 00:35:15.440 |
work in AI, the key scientists in most disciplines, they continue working out of the 00:35:21.200 |
love for science. You can always scrape up some funding to stay alive, and they continue working 00:35:28.560 |
diligently. But there certainly is a huge amount of funding now, and there's a concern on the AI 00:35:37.520 |
side and deep learning. There's a concern that we might, with overpromising, hit another slowdown 00:35:44.000 |
in funding, which does affect the number of students, that kind of thing. - Yeah, I know it 00:35:48.480 |
does. - So the kind of ideas you had in Machines Who Think, did you continue that curiosity through 00:35:55.120 |
the decades that followed? - Yes, I did. - And what was your view, historical view, of how the AI 00:36:01.840 |
community evolved, the conversations about it, the work? Has it persisted the same way from its 00:36:08.960 |
birth? - No, of course not. It's just, we were just talking, the symbolic AI really kind of 00:36:19.200 |
dried up, and it all became algorithmic. I remember a young AI student telling me what he was doing, 00:36:27.200 |
and I had been away from the field long enough. I'd gotten involved with complexity at the Santa 00:36:33.280 |
Fe Institute. I thought, algorithms, yeah, they're in the service of, but they're not the main event. 00:36:40.960 |
No, they became the main event. That surprised me. And we all know the downside of this. We all know 00:36:49.840 |
that if you're using an algorithm to make decisions based on a gazillion human decisions, 00:36:58.960 |
baked into it are all the mistakes that humans make, the bigotries, the short-sightedness, 00:37:05.040 |
so on and so on. - So you mentioned Santa Fe Institute. So you've written the novel, 00:37:13.760 |
"Edge of Chaos," but it's inspired by the ideas of complexity, a lot of which have been extensively 00:37:22.400 |
explored at the Santa Fe Institute. It's another fascinating topic of just sort of emergent 00:37:32.240 |
complexity from chaos. Nobody knows how it happens, really, but it seems to where all the interesting 00:37:38.240 |
stuff does happen. So how did first, not your novel, but just complexity in general and the 00:37:45.280 |
work at Santa Fe fit into the bigger puzzle of the history of AI? Or maybe even your personal 00:37:52.400 |
journey through that. - One of the last projects I did concerning AI in particular was looking at 00:38:02.640 |
the work of Harold Cohen, the painter. And Harold was deeply involved with AI. He was a painter first. 00:38:15.360 |
And what his project, "Aaron," which was a lifelong project, did was reflect 00:38:25.440 |
his own cognitive processes. Harold and I, even though I wrote a book about it, we had a lot of 00:38:33.840 |
friction between us. And I thought, "This is it." The book died. It was published and fell into a 00:38:44.480 |
ditch. "This is it. I'm finished. It's time for me to do something different." By chance, this was a 00:38:53.600 |
sabbatical year for my husband. And we spent two months at the Santa Fe Institute and two months 00:38:59.840 |
at Caltech, and then the spring semester in Munich, Germany. Okay. Those two months at 00:39:11.760 |
the Santa Fe Institute were so restorative for me. And I began to... The Institute was very small 00:39:19.920 |
then. It was in some kind of office complex on Old Santa Fe Trail. Everybody kept their door open. 00:39:26.560 |
So you could crack your head on a problem, and if you finally didn't get it, you could walk in to 00:39:33.920 |
see Stuart Kaufman or any number of people and say, "I don't get this. Can you explain?" 00:39:42.480 |
And one of the people that I was talking to about complex adaptive systems was Murray Gell-Mann. 00:39:51.120 |
And I told Murray what Harold Cohen had done. And I said, "You know, this sounds to me like 00:39:59.200 |
a complex adaptive system." And he said, "Yeah, it is." Well, what do you know? Harold's Aaron 00:40:06.480 |
had all these kids and cousins all over the world, in science and in economics and so on and so forth. 00:40:13.040 |
I was so relieved. I thought, "Okay, your instincts are okay. You're doing the right thing." 00:40:19.680 |
I didn't have the vocabulary. And that was one of the things that the Santa Fe Institute gave me. 00:40:25.840 |
If I could have rewritten that book -- no, it had just come out, I couldn't rewrite it -- I would 00:40:30.880 |
have had a vocabulary to explain what Aaron was doing. Okay, so I got really interested in 00:40:37.680 |
what was going on at the Institute. The people were, again, bright and funny and willing to 00:40:46.960 |
explain anything to this amateur. George Cowan, who was then the head of the Institute, 00:40:54.080 |
said he thought it might be a nice idea if I wrote a book about the Institute. 00:40:57.600 |
And I thought about it, and I had my eye on some other project, God knows what. And I said, 00:41:06.640 |
"I'm sorry, George. Yeah, I'd really love to do it, but, you know, just not going to work for me at 00:41:11.760 |
this moment." And he said, "Oh, too bad. I think it would make an interesting book." Well, he was 00:41:16.560 |
right and I was wrong. I wish I'd done it. But that's interesting. I hadn't thought about that, 00:41:21.600 |
that that was a road not taken that I wish I'd taken. 00:41:24.960 |
Well, you know what, just on that point, it's quite brave for you as a writer, as sort of 00:41:34.560 |
coming from a world of literature, the literary thinking, historical thinking, I mean, just from 00:41:40.080 |
that world and bravely talking to quite, I assume, large egos in AI or in complexity and so on. How 00:41:54.160 |
did you do it? Like, where did you, I mean, I suppose they could be intimidated of you as well, 00:42:00.560 |
because it's two different worlds coming together. 00:42:02.880 |
I never picked up that anybody was intimidated by me. 00:42:06.160 |
But how were you brave enough? Where did you find the guts to sort of-- 00:42:08.720 |
God, just dumb, dumb luck. I mean, this is an interesting rock to turn over. I'm going to write 00:42:14.320 |
a book about it. And you know, people have enough patience with writers if they think they're going 00:42:19.920 |
to end up at a book that they let you flail around and so on. 00:42:24.240 |
Well, but they also look at whether the writer has a sparkle in their eye, 00:42:31.680 |
Right? When were you at the Santa Fe Institute? 00:42:35.920 |
The time I'm talking about is 1990, yeah, 1990, '91, '92. But we then, because Joe was an external 00:42:45.840 |
faculty member, we're in Santa Fe every summer. We bought a house there. And I didn't have that 00:42:52.160 |
much to do with the Institute anymore. I was writing my novels. I was doing whatever I was doing. 00:43:00.560 |
But I loved the Institute. And I loved the, again, the audacity of the ideas. 00:43:12.960 |
I think that there's this feeling, much like in great, great institutes of neuroscience, for 00:43:21.680 |
example, that it's, they're in it for the long game of understanding something fundamental about 00:43:29.840 |
reality and nature. And that's really exciting. So if we start now to look a little bit more 00:43:36.320 |
recently, how AI is really popular today. How is this world, you mentioned algorithmic, but in 00:43:48.800 |
general, is the spirit of the people, the kind of conversations you hear through the grapevine and 00:43:54.400 |
so on, is that different than the roots that you remember? 00:43:57.520 |
No. The same kind of excitement. The same kind of, "This is really going to make a difference." 00:44:05.840 |
You know, a lot of folks, especially young, 20 years old or something, 00:44:10.240 |
they think, "We've just found something special here. We're going to change the world tomorrow." 00:44:16.640 |
On a time scale, do you have a sense of the time scale at which breakthroughs in AI happen? 00:44:28.080 |
I really don't. Because look at deep learning. That was, Geoffrey Hinton came up with the algorithm 00:44:39.040 |
in '86. But it took all these years for the technology to be good enough to actually 00:44:51.440 |
be applicable. So, no, I can't predict that at all. I can't. I wouldn't even try. 00:44:57.680 |
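The algorithm she's referring to is presumably backpropagation, which Rumelhart, Hinton, and Williams published in 1986. A minimal sketch of it, training a tiny two-layer network on XOR; the layer sizes, learning rate, and step count are arbitrary illustrative choices.

```python
# Backpropagation on a 2-4-1 sigmoid network learning XOR.
import numpy as np

rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(10_000):
    # Forward pass.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: chain rule, layer by layer (squared-error loss).
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient-descent updates.
    W2 -= 0.5 * (h.T @ d_out); b2 -= 0.5 * d_out.sum(axis=0)
    W1 -= 0.5 * (X.T @ d_h);   b1 -= 0.5 * d_h.sum(axis=0)

print(out.round(2).ravel())  # should approach [0, 1, 1, 0]
```

The point of her remark is visible in the sketch itself: the 1986 algorithm is a few lines of calculus, and what changed between then and the deep learning era was mostly the data and hardware needed to make it applicable.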
Well, let me ask you to, not to try to predict, but to speak to the, 00:45:02.480 |
you know, I'm sure in the '60s, as it continues now, there's people that think, 00:45:08.160 |
let's call it, we can call it this fun word, the singularity. When there's a phase shift, 00:45:14.560 |
there's some profound feeling where we're all really surprised by what's able to be achieved. 00:45:21.600 |
I'm sure those dreams are there. I remember reading quotes in the '60s and those continued. 00:45:26.240 |
How have your own views, maybe if you look back, about the timeline of a singularity changed? 00:45:34.960 |
Well, I'm not a big fan of the singularity as Ray Kurzweil has presented it. 00:45:52.560 |
If I understand Kurzweil's view, it's sort of, there's going to be this moment when machines 00:45:59.280 |
are smarter than humans and, you know, game over. However, the game over is, I mean, 00:46:06.560 |
do they put us on a reservation? Do they, et cetera, et cetera. And first of all, 00:46:13.280 |
machines are smarter than humans in some ways all over the place. And they have been since 00:46:20.160 |
adding machines were invented. So, it's not going to come like some great Oedipal crossroads, 00:46:28.640 |
you know, where they meet each other and our offspring Oedipus says, "You're dead." 00:46:37.920 |
Yeah, so it's already game over with calculators, right? They're already able to do much better at 00:46:45.600 |
basic arithmetic than us. But, you know, there's a human-like intelligence. And it's not the ones 00:46:53.680 |
that destroy us, but, you know, somebody that you can have as a friend, you can have deep 00:46:59.600 |
connections with, that kind of passing the Turing test and beyond, those kinds of ideas. Have you thought about those? 00:47:10.240 |
In a book I wrote with Ed Feigenbaum, there's a little story called the geriatric robot. 00:47:16.160 |
And how I came up with the geriatric robot is a story in itself. But here's what the geriatric 00:47:24.880 |
robot does. It doesn't just clean you up and feed you and wheel you out into the sun. 00:47:30.640 |
Its great advantage is it listens. It says, "Tell me again about the great coup of '73. 00:47:42.720 |
Tell me again about how awful or how wonderful your grandchildren are," and so on and so forth. 00:47:52.080 |
And it isn't hanging around to inherit your money. It isn't hanging around because it can't get any 00:47:59.680 |
other job. This is its job, and so on and so forth. Well, I would love something like that. 00:48:08.320 |
- Yeah, I mean, for me, that deeply excites me. So, I think there's a lot of us-- 00:48:15.200 |
- Lex, you gotta know, it was a joke. I dreamed it up because I needed to talk to college students, 00:48:20.880 |
and I needed to give them some idea of what AI might be. And they were rolling in the aisles 00:48:26.880 |
as I elaborated and elaborated and elaborated. When it went into the book, 00:48:32.160 |
they took my hide off in the New York Review of Books. This is just what we have thought about 00:48:40.240 |
these people in AI. They're inhuman. Oh, come on, get over it. 00:48:44.320 |
- Don't you think that's a good thing for the world, that AI could potentially-- 00:48:49.040 |
- Moi, I do, absolutely. And furthermore, I want, you know, I'm pushing 80 now. By the time I need 00:48:58.480 |
help like that, I also want it to roll itself in a corner and shut the fuck up. 00:49:04.080 |
- Let me linger on that point. Do you really, though? 00:49:10.960 |
- Don't you want it to push back a little bit? 00:49:13.120 |
- A little, but I have watched my friends go through the whole issue around having help in 00:49:21.040 |
the house. And some of them have been very lucky and had fabulous help. And some of them have had 00:49:29.360 |
people in the house who want to keep the television going on all day, who want to talk on their phones 00:49:34.640 |
all day. No. Just roll yourself in the corner and shut up. 00:49:39.440 |
- Unfortunately, us humans, when we're assistants, we care, we're still, even when we're 00:49:46.320 |
assisting others, we care about ourselves more. And so you create more frustration. And a robot, 00:49:53.200 |
AI assistant can really optimize the experience for you. I was just speaking to the point, 00:50:01.600 |
you actually bring up a very, very good point, but I was speaking to the fact that 00:50:05.440 |
us humans are a little complicated, that we don't necessarily want a perfect servant. 00:50:11.200 |
I don't, maybe you disagree with that, but there's, I think there's a push and pull with humans. 00:50:21.440 |
A little tension, a little mystery that, of course, that's really difficult for AI to get right. 00:50:28.080 |
But I do sense, especially in today with social media, that people are getting more and more 00:50:35.120 |
lonely, even young folks, and sometimes especially young folks, that loneliness, there's a longing 00:50:42.960 |
for connection and AI can help alleviate some of that loneliness. Some, just somebody who listens, 00:50:55.600 |
- So to speak, yeah, so to speak. Yeah, that to me is really exciting. But so if we look at that 00:51:05.520 |
level of intelligence, which is exceptionally difficult to achieve actually, 00:51:10.640 |
as the singularity or whatever, that's the human level bar, that people have dreamt of that too. 00:51:18.320 |
Turing dreamt of it. He had a date, a timeline. How has your own timeline 00:51:25.280 |
evolved? - I don't even think about it. The 00:51:32.880 |
field has been so full of surprises for me. - You're just taking it in and seeing-- 00:51:39.440 |
- Yeah, whoa, whoa, that's great. Yeah, I just can't. Maybe that's because I've been around 00:51:46.960 |
the field long enough to think, don't go that way. Herb Simon was terrible about making these 00:51:54.000 |
predictions of when this and that would happen. And he was a sensible guy. 00:51:59.040 |
- Yeah. And his quotes are often used, right, as a-- 00:52:03.600 |
- As a bludgeon, yeah. - Yeah. Do you have concerns about 00:52:11.280 |
AI, the existential threats, as many people like Elon Musk and Sam Harris and others are thinking 00:52:21.440 |
about? - It takes up half a chapter in my book. I call it the male gaze. 00:52:26.560 |
- (laughs) - Well, you hear me out. The male gaze is 00:52:33.120 |
actually a term from film criticism. And I'm blocking on the woman who dreamed it up. But she 00:52:41.760 |
pointed out how most movies were made from the male point of view, that women were objects, not 00:52:50.720 |
subjects. They didn't have any agency, so on and so forth. So when Elon and his pals Hawking and 00:53:00.160 |
so on came out saying, "AI's gonna eat our lunch and our dinner and our midnight snack too," I thought, 00:53:07.120 |
"What?" And I said to Ed Feigenbaum, "Oh, this is the first guy, first, these guys have always been 00:53:13.120 |
the smartest guy on the block, and here comes something that might be smarter. Ooh, let's stamp 00:53:18.800 |
it out before it takes over." And Ed laughed. He said, "I didn't think about it that way." 00:53:24.080 |
But I did. I did. And it is the male gaze. Okay, suppose these things do have agency. 00:53:34.480 |
Well, let's wait and see what happens. Can we imbue them with ethics? Can we imbue them with 00:53:45.520 |
a sense of empathy? Or are they just gonna be, "I know we've had centuries of guys like that." 00:53:54.960 |
That's interesting that the ego, the male gaze is immediately threatened. 00:54:03.680 |
And so you can't think in a patient, calm way of how the tech could evolve. 00:54:13.360 |
Speaking of which, your '96 book, "The Futures of Women," I think at the time and now, 00:54:20.720 |
certainly now, I mean, I'm sorry, maybe at the time, but I'm more cognizant of now, 00:54:25.680 |
is extremely relevant. You and Nancy Ramsey talk about four possible futures of women in science 00:54:33.760 |
and tech. So if we look at the decades before and after the book was released, can you tell a 00:54:41.920 |
history, sorry, of women in science and tech and how it has evolved? How have things changed? 00:54:50.000 |
Where do we stand? - Not enough. They have not changed enough. The way that women are 00:54:58.080 |
ground down in computing is simply unbelievable. - But what are the four possible futures for women 00:55:08.160 |
in tech from the book? - What you're really looking at are various aspects of the present. 00:55:14.240 |
So for each of those, you could say, "Oh yeah, we do have backlash. Look at what's happening 00:55:21.200 |
with abortion," and so on and so forth. We have one step forward, one step back. 00:55:26.640 |
The golden age of equality was the hardest chapter to write. And I used something from 00:55:33.440 |
the Santa Fe Institute, which is the sandpile effect, that you drop sand very slowly onto a pile 00:55:41.760 |
and it grows and it grows and it grows until suddenly it just breaks apart. 00:55:45.920 |
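The pile she's describing is the Bak-Tang-Wiesenfeld sandpile, the canonical model of self-organized criticality studied at the Santa Fe Institute. A minimal sketch follows; the grid size, toppling threshold, and number of grains are illustrative choices, not anything from her book.

```python
# Bak-Tang-Wiesenfeld sandpile: grains accumulate quietly, then avalanche.
import random

SIZE, THRESHOLD = 20, 4
grid = [[0] * SIZE for _ in range(SIZE)]

def topple(x, y):
    """Resolve an overfull cell; return how many topplings cascaded."""
    count, stack = 0, [(x, y)]
    while stack:
        i, j = stack.pop()
        if grid[i][j] < THRESHOLD:
            continue
        grid[i][j] -= THRESHOLD
        count += 1
        if grid[i][j] >= THRESHOLD:   # still unstable, revisit later
            stack.append((i, j))
        for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
            if 0 <= ni < SIZE and 0 <= nj < SIZE:  # edge grains fall off
                grid[ni][nj] += 1
                if grid[ni][nj] >= THRESHOLD:
                    stack.append((ni, nj))
    return count

random.seed(0)
avalanches = []
for _ in range(5_000):
    x, y = random.randrange(SIZE), random.randrange(SIZE)
    grid[x][y] += 1          # one grain of sand at a time
    avalanches.append(topple(x, y))

print("drops causing no avalanche:", avalanches.count(0))
print("largest avalanche:", max(avalanches))
```

Most grains change nothing; every so often a single grain reorganizes a large part of the pile, which is exactly the shape of the #MeToo analogy she draws next.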
And in a way, #MeToo has done that. That was the last drop of sand that broke everything apart. 00:55:58.240 |
That was a perfect example of the sandpile effect. And that made me feel good. It didn't change all 00:56:04.400 |
of society, but it really woke a lot of people up. - But are you in general optimistic about 00:56:10.800 |
maybe after #MeToo? I mean, #MeToo is about a very specific kind of thing. 00:56:16.560 |
- Boy, solve that and you'll solve everything. - But are you in general optimistic about the future? 00:56:22.800 |
- Yes, I'm a congenital optimist. I can't help it. - What about AI? What are your thoughts 00:56:30.800 |
about the future of AI? - Of course, I get asked, "What do you worry about?" 00:56:36.720 |
And the one thing I worry about is the things we can't anticipate. 00:56:41.280 |
There's going to be something out of left field that we will just say, "We weren't prepared for 00:56:48.480 |
that." I am generally optimistic. When I first took up being interested in AI, 00:56:59.680 |
like most people in the field, more intelligence was like more virtue. What could be bad? 00:57:06.880 |
And in a way, I still believe that, but I realize that my notion of intelligence 00:57:15.280 |
has broadened. There are many kinds of intelligence, and we need to imbue our 00:57:20.880 |
machines with those many kinds. - So you've now just finished, or in the process of finishing 00:57:29.520 |
the book that you've been working on, the memoir. How have you changed? I know it's just writing, 00:57:38.320 |
but how have you changed the process? If you look back, what kind of stuff did it bring up 00:57:44.000 |
to you that surprised you, looking at the entirety of it all? 00:57:49.360 |
- The biggest thing, and it really wasn't a surprise, is how lucky I was. Oh my! To be, 00:58:01.200 |
to have access to the beginning of a scientific field that is going to change the world. 00:58:10.480 |
How did I luck out? And yes, of course, my view of things has widened a lot. 00:58:20.880 |
If I can get back to one feminist part of our conversation, without knowing it, 00:58:31.680 |
it really was subconscious. I wanted AI to succeed because I was so tired of hearing 00:58:39.280 |
that intelligence was inside the male cranium. And I thought if there was something out there 00:58:46.240 |
that wasn't a male thinking and doing well, then that would put a lie to this whole notion of 00:58:55.920 |
intelligence resides in the male cranium. I did not know that until one night, Harold Cohen and I 00:59:04.400 |
were having a glass of wine, maybe two, and he said, "What drew you to AI?" And I said, "Oh, 00:59:12.320 |
you know, smartest people I knew, great project, blah, blah, blah." And I said, "And I wanted 00:59:17.600 |
something besides male smarts." And it just bubbled up out of me, Lex. And I, "What? What's this?" 00:59:27.440 |
- It's kind of brilliant, actually. So AI really humbles all of us, and humbles the people that... 00:59:37.600 |
- Wow, that is so beautiful. Pamela, thank you so much for talking to me. It was really a huge honor. 00:59:44.400 |
- Oh, it's been a great pleasure. - Thank you.