
Donald Knuth: Programming, Algorithms, Hard Problems & the Game of Life | Lex Fridman Podcast #219


Chapters

0:00 Introduction
0:48 First programs
24:11 Literate programming
27:20 Beauty in programming
33:15 OpenAI
42:26 Optimization
48:31 Consciousness
57:14 Conway's game of life
70:01 Stable marriage
73:21 Richard Feynman
84:15 Knuth-Morris-Pratt Algorithm
93:47 Hardest problem
111:26 Open source
116:39 Favorite symbols
126:12 Productivity
133:53 Meaning of life

Transcript

The following is a conversation with Donald Knuth, his second time on this podcast. Don is a legendary computer scientist, Turing Award winner, father of the analysis of algorithms, author of The Art of Computer Programming, creator of TeX, which led to LaTeX, and one of the kindest and most fascinating human beings I've ever gotten the chance to talk to.

I wrote him a letter a long time ago. He responded, and the rest, as they say, is history. We've interacted many times since then, and every time has been joyful and inspiring. To support this podcast, please check out our sponsors in the description. This is the Lex Fridman Podcast, and here is my conversation with Donald Knuth.

Your first large-scale program, you wrote it in IBM 650 Assembler in the summer of 1957? I wrote it in decimal machine language. I didn't know about Assembler until a year later. But the year, 1957, and the program is Tic-Tac-Toe. Yeah, and I might have learned about assembly language later that summer. I probably did.

In 1957, hardly anybody had heard of Assembler. You looked at the user manuals. How would you write a program for this machine? It would say 69, which meant load the distributor, and then you would give the address of the number you wanted to load into the distributor. Yesterday, my friend Dag Spicer at the Computer History Museum sent me a link to something that just went on YouTube.

It was IBM's progress report from 1956, which is very contemporary with 1957. And in 1956, IBM had donated to Stanford University an IBM 650, one of the first ones, and they showed a picture of the assembly line for IBM 650s. And they said, "This is number 500 or something coming off the assembly line." And I had never seen so many IBM 650s.

As I did in this movie that's on YouTube now. And it showed the picture from Stanford. They said, "Look, we donated one of these to Stanford, one to MIT," and they mentioned one other college. And in December of '56, they donated one to my university, Case Tech. But anyway, they showed a picture then of a class session where a guy was teaching programming.

And on the blackboard, it said 69, 8,000. I mean, he was teaching them how to write code for this IBM 650, which was in decimal numbers. So the instructions were 10 decimal digits. You had two digits that said what to do, four digits to say what to do it to, and four more digits to say where to get your next instruction.
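For concreteness, here's a small C sketch of how one of those 10-digit words splits into its three fields, following the layout just described; the example word is made up, echoing the 69 and 8000 on the blackboard:

```c
#include <stdio.h>

/* Sketch: splitting a 10-digit IBM 650-style instruction word into
   its fields, per the layout described above: a 2-digit operation
   code, a 4-digit data address, and a 4-digit address of the next
   instruction.  The example word is hypothetical. */
int main(void) {
    long long word = 6980001234LL;            /* hypothetical instruction */
    int op   = (int)(word / 100000000LL);     /* 69: "load the distributor" */
    int data = (int)(word / 10000 % 10000);   /* what to do it to */
    int next = (int)(word % 10000);           /* where the next instruction is */
    printf("op=%02d data=%04d next=%04d\n", op, data, next);
    return 0;
}
```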

And there's a manual that describes what each of the numbers mean. And the manual was actually... If the manual had been well-written, I probably never would have gone into computer science, but it was so badly written, I figured that I must have a talent for it because I'm only a freshman and I could write a better manual.

And- That's what you did. And so I started working at the computer center and wrote some manuals then. But yeah, but this was the way we did it. And my first program then was June of 1957. The Tic-Tac-Toe. No, that was the second program. The first program was factoring a number.

Okay. So you dial a number on the switches. I mean, you sat at this big mainframe and you turn the dials, set a number, and then it would punch out the factors of that number on cards. So the input is the number? The input was, yeah, the input was a number, a 10-digit number, and the output was its factors.

And I wrote that program. I still have a copy of it somewhere. How many lines of code? Do you remember? Well, yeah, it started out as about 20, but then I kept having to debug it. And I discovered debugging, of course, when I wrote my first program. What does debugging look like on a program with just all numbers?

Well, you sit there and you... I don't remember how I got it into the machine, but I think there was a way to punch it on cards. So each instruction would be one card. Maybe I could get seven instructions on a card, eight instructions, I don't know. But anyway, so I'm sitting there at the console of the machine.

I mean, I'm doing this at night when nobody else is around. Of course. And so you have one set of switches where you dial the number I'm inputting, but there's another switch that says, "Okay, now execute one instruction and show me what you did." Or there were another four switches where I'd say, "Stop if you get to that instruction." So I could say, "Now go until you get there again and watch." Okay, so I could watch.

It would take that number and it would divide it by two. And if there's no remainder, then okay, two is a factor. So then I work on... But if not divisible by two, divide by three. Okay, keep trying until you know you're at the end. And you would find a bug if you were just surprised that something weird happened?

Well, certainly. I mean, first of all, I might have tried to divide by one instead of two. An off-by-one error, as people make all the time. But maybe I go to the wrong instruction. Maybe I left something in a register that I shouldn't have done. But the first bugs were pretty...

Probably on the first night, I was able to get the factors of 30 as equal to two, three, and five. Okay? Sorry to interrupt. So you're sitting there late at night. It feels like you spent many years late at night working on a computer. Oh, yeah. So what's that like?

So most of the world is sleeping, and you have to be there at night because that's when you get access to the computer. Between my freshman and sophomore year, I didn't need sleep. I used to do all nighters. When I was in high school, I used to do the whole student newspaper every Monday night.

I would just stay up all night, and it would be done on Tuesday morning. I didn't get ulcers and stuff like that until later. I don't know if you know Rodney Brooks. Rod Brooks, of course. Yeah. He told me a story. He really looked up to you. He was actually afraid of you.

Well, vice versa, I must say. But he tells a story when you were working on TeX, that they screwed up something with a machine. I think this might have been MIT. I don't know. And you were waiting for them to fix the machine so you could get back to work late at night.

Oh, that happened all the time. He was really intimidated. He's like, "Dr. Knuth is not happy with this." That's interesting. But no, no. The machine at the Stanford AI Lab was down an awful lot because they had many talented programmers changing the operating system every day. And so the operating system was getting better every day, but it was also crashing.

So I wrote almost the entire manual for TeX during downtime of that machine. But that's another story. Well, he was saying it's a hardware problem. They tried to fix it and they reinserted something and smoke was everywhere. Oh, wow. Because he was hurt. Well, that didn't happen as often as the operating system crashes.

Anyway, it's a funny story because you're saying there's this tall Don Knuth that I look up to and there was pressure to fix the computer. It's funny. The kind of things we remember that stick in our memory. Well, okay. Yeah. Well, I could tell you a bunch of Rod Brooks stories too, but let's go back to the 650.

So I'm debugging my first program and I had more bugs in it than number of lines of code. I mean, the number of lines of code kept growing. And let me explain. So I had to punch the answers on cards. So suppose I'm factoring the number 30, then I got to put two somewhere on the card.

I got to put a three somewhere on the card. I got to put a five somewhere on the card. And you know what? It was my first program. I probably screwed up and it fell off the edge of the card or something like that. But I didn't realize that there are some ten-digit numbers that have more than eight factors.

And the card has only 80 columns. And so I need 10 columns for every factor. So my first program didn't take account for the fact that I would have to punch more than one card. My first program just lined the stuff up in memory and then I punched the card.

So by the time I finished, I had to deal with lots of things. Also, if you put a large prime number in there, my program might have sat there for 10 minutes. The 650 was pretty slow. And so it would sit there spinning its wheels and you wouldn't know if it was in a loop or whatever.
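The method he describes is what we'd now call trial division. A minimal modern sketch in C, with the caveats from the story folded in as comments; the original, of course, was raw decimal machine code:

```c
#include <stdio.h>

/* Trial-division factoring, as described above: divide by 2 as long
   as it goes, then 3, and so on.  Knuth's 650 program also had to
   punch the factors onto 80-column cards at 10 columns per factor,
   so at most 8 factors fit on one card; and for a 10-digit prime
   this loop runs up to about 100,000 times, which on a 650 meant
   minutes of apparently doing nothing. */
void factor(unsigned long long n) {
    for (unsigned long long d = 2; d * d <= n; d++) {
        while (n % d == 0) {        /* d divides n: print a prime factor */
            printf("%llu ", d);
            n /= d;
        }
    }
    if (n > 1) printf("%llu", n);   /* whatever is left is prime */
    printf("\n");
}

int main(void) {
    factor(30);   /* prints: 2 3 5, the first night's success */
    return 0;
}
```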

You said 10 digits is the input? 10 digits, yeah. So I think the largest is sort of 9999999967 or something like that. That would have taken a while for that first program. Anyway, that was my first program. Well, what was your goal with that program? Was there something you were hoping to find, a large prime maybe, or the opposite?

No, my goal was to see the lights flashing and understand how this magical machine would be able to do something that took so long by hand. So what was your second program? My second program converted a number from binary to decimal or something like that. It was much simpler.

It didn't have that many bugs in it. My third program was Tic-Tac-Toe. Yeah. And it had some machine learning in it. So the Tic-Tac-Toe program is interesting on many levels, but one of them is that it had some, you could call it, machine learning in it. That's, yeah, that's right. I don't know how long it's going to be before the name of our field has changed from computer science to machine learning.

But anyway, it was my first experience with machine learning. Okay, so here we had... Yeah. How does the program... Well, first of all, what is the problem you were solving? What is Tic-Tac-Toe? What are we talking about? And then how is it designed? Right. So you've got a three by three grid and each can be in three states.

It can be empty or it can have an X or an O. Yeah. All right. So three to the ninth is a... Well, what is... How big is it? I should know. But it's 81 times 81 times three. So anyway, eight is like two to the third. So that would be like two to the sixth.

But that'd be 64. Then you have to... Anyway. I love how you're doing the calculation. Anyway, the three comes from the fact that it's either empty, an X, or an O. Right. And the 650 was a machine that had only 2,000 ten-digit words. You go from 0000 to 1999 and that's it.

And in each word, you have a 10-digit number. So that's not many bits. I mean, in order to have a memory of every position I've seen, I need three to the ninth bits. Okay. But it was a decimal machine too. It didn't have bits. But it did have a strange instruction where, if you had a 10-digit number whose digits were all either eight or nine, it'd be 89988 or something like that.

You could make a test whether it was eight or nine. That was one of the strange things IBM engineers put into the machine. I have no idea why. It was hardly ever used. But anyway, I needed one digit for every position I'd seen. Zero meant it was a bad position.

Nine meant it was a good position. I think I started out at five or six. But if you win a game, then you increase the value of that position for you, but you decrease it for your opponent. But all the memory I had for every possible position was one digit.

And I had a total of 20,000 digits, which had to also include my program and all the logic and everything, including how to ask the user what the moves are and things like this. Okay. So I think I had to work it out. Every position in tic-tac-toe is equivalent to roughly eight others because you can rotate the board, which gives you a factor of four, and you can also flip it over.

And that's another factor of two. So I might have needed only three to the ninth over eight positions plus a little bit. But anyway, that was a part of the program to squeeze it into this tiny... So you tried to find an efficient representation that took account of that kind of rotation?

I had to, otherwise I couldn't do the learning. Wow. But I had three parts to my tic-tac-toe program, and I called it brain one, brain two, and brain three. So brain one just played a... Let's see. At random. Okay. It's your turn. Okay. You got to put an X somewhere.

It has to go in an empty space, but that's it. Okay. Choose one and play there. Brain two had a canned routine, and I think it also... Maybe it assumed you were the first player, or maybe it allowed you to be first. I think you're allowed to be either first or second, but it had a canned built-in strategy known to be optimum for tic-tac-toe.
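For illustration, here is a rough C sketch of the bookkeeping described here: a base-3 encoding of the board, the factor-of-eight symmetry reduction, and one value digit per position. All names and details are invented rather than taken from the original 650 code:

```c
#include <string.h>

/* A board is 9 cells, each 0 (empty), 1 (X), or 2 (O), so there are
   3^9 = 19683 raw positions.  One "value digit" 0..9 is kept per
   canonical position; rotations and reflections fold roughly eight
   boards onto one.  (Even uncompressed, 19,683 digits nearly fills
   a 650's 20,000 digits of memory, hence the squeezing.) */
static unsigned char value[19683];   /* 5 = neutral, 9 = good, 0 = bad */

static int encode(const int b[9]) {  /* base-3 encoding of a board */
    int code = 0;
    for (int i = 0; i < 9; i++) code = code * 3 + b[i];
    return code;
}

static void rotate(const int in[9], int out[9]) {  /* 90 degrees clockwise */
    static const int from[9] = {6,3,0, 7,4,1, 8,5,2};
    for (int i = 0; i < 9; i++) out[i] = in[from[i]];
}

static void reflect(const int in[9], int out[9]) { /* mirror the columns */
    static const int from[9] = {2,1,0, 5,4,3, 8,7,6};
    for (int i = 0; i < 9; i++) out[i] = in[from[i]];
}

/* Canonical code: the smallest encoding over all 8 symmetries. */
static int canonical(const int b[9]) {
    int cur[9], tmp[9], best = encode(b);
    memcpy(cur, b, sizeof cur);
    for (int r = 0; r < 4; r++) {
        rotate(cur, tmp); memcpy(cur, tmp, sizeof cur);
        if (encode(cur) < best) best = encode(cur);
        reflect(cur, tmp);
        if (encode(tmp) < best) best = encode(tmp);
    }
    return best;
}

/* After a game, nudge every position that was seen: up for the
   winner, down for the loser -- the learning that brain three did. */
static void learn(const int codes[], int n, int won) {
    for (int i = 0; i < n; i++) {
        unsigned char *v = &value[codes[i]];
        if (won  && *v < 9) (*v)++;
        if (!won && *v > 0) (*v)--;
    }
}
```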

Before I forget, by the way, I learned many years later that Charles Babbage had thought about programming tic-tac-toe for his dream machine that he was never able to finish. Wow. So that was the program he thought about? More than 100 years ago. Yeah. Wow. Yeah. He did that. Okay.

And I had, however, been influenced by a demonstration at the Museum of Science and Industry in Chicago. It's like Boston's science museum. I think Bell Labs had prepared a special exhibit about telephones and relay technology, and they had a tic-tac-toe playing machine as part of that exhibit. So that had been one of my...

Something I'd seen before I was a freshman in college and inspired me to see if I could write a program for it. Okay. So anyway, brain one, random, knowing nothing. Brain two, knowing everything. Then brain three was the learning one. And I could play brain one against brain one, brain one against brain two, and so on.

And you could also play against the user, against a live user. So I started the learning thing going, and I said, "Okay, take two random people just playing tic-tac-toe, knowing nothing." And after about... I forget the number now, but it converged after about 600 games to a safe draw.

The way my program learned was actually, it learned how not to make mistakes. It didn't try to do anything toward winning; it just tried not to lose. Yeah, draw. Yeah, not lose. So that was probably because of the way I designed the learning thing. I could have had a different reinforcement function that would reward brilliant play.

But anyway, it didn't... And if I took a novice against a skilled player, it was able to learn how to play a good game. And that was really my... But after I finished that, I felt I understood programming. Was there... Did a curiosity and interest in learning systems persist for you?

So why did you want Brain 3 to learn? Yeah, I think naturally, we're talking about Rod Brooks. He was teaching all kinds of very small devices to learn stuff. If a leaf drops off of a tree, he was saying something, "Well, it learns if there's wind or not." But I mean, he pushed that a little bit too far, but he said he could probably train some little mini bugs to scour out dishes if he had enough financial support.

I don't know. Can I ask you about that? Because he also mentioned that during those years, there was discussion about, inspired by Turing, about computation, of what is computation. Yeah. And... Yeah, I never thought about any stuff like that. That was way too philosophical. I mean, I was a freshman after all.

I mean, I didn't... I was pretty much a machine. So it's almost like, yeah, I got you. It's a tinkering mindset, not a philosophical mindset. It was just exciting to me to be able to control something, but not to say, "Hmm, am I solving a big problem or something like that?

Or is this a step for humankind or anything?" No, no way. When did you first start thinking about computation in the big sense? You know, like the universal Turing machine sense? Well, I mean, I had to pass an exam. I had to take classes on computability when I was a senior.

So we read this book by Martin Davis and, yeah, this is cool stuff. But I learned about it because I needed to pass the exams, but I didn't invent any of that boring stuff. But I had great fun playing with the machine. I wrote programs because it was fun to write programs and get this...

I mean, it was like watching miracles happen. You mentioned in an interview that when reading a program, you can tell when the author of the program changed. How the heck can you do that? What makes a distinct style for a programmer, do you think? There's different... Hemingway has a style of writing versus James Joyce or something.

Yeah, those are pretty easy to imitate, but it's the same with music and whatever. During the pandemic, I spent a lot more time playing the piano and I found something that I'd had when I was taking lessons before I was a teenager. And it was Yankee Doodle played in the style of...

You had Beethoven and you had Debussy and Chopin, and the last one was Gershwin. And I played over and over again. I thought it was so brilliant, but it was so easy. But also to appreciate how this author, Mario, somebody or other, had been able to reverse engineer the styles of those composers.

But now, particularly to your question, I mean, it was pretty obvious in this program I was reading. It was a compiler and it had been written by a team at Carnegie Mellon. And I have no idea which programmer was responsible for which part, but you would get to a part where the guy would just not know how to move things between registers very efficiently.

And so everything that could be done in one instruction would take three or something like that. That would be a pretty obvious change in style. But there were also flashes of brilliance where somebody could do in one instruction what would normally take two, because he knew enough about the way the machine worked that he could accomplish two goals in one step.

So it was mostly the brilliance of the concept more than the semicolons or the use of short sentences versus long sentences or something like that. So you would see the idea in the code, and you can see the different style of thinking expressed. Right. It was stylistic. I mean, I could identify authors by the amount of technical aptitude they had, but not by style in the sense of rhythm or something like that.

So if you think about Mozart, Beethoven, Bach, if somebody looked at Don Knuth's code, would they be able to tell that this is a distinct style of thinking going on here? What do you think? And what would be the defining characteristic of the style? Well, my code now is literate programming, so it's a combination of English and C mostly.

But if you just looked at the C part of it, you would also probably notice that I use a lot of global variables that other people don't. And I expand things inline more, instead of calling subroutines. Anyway, I have a different subset of C that I use. Okay. But that's a little bit stylistic.

But with literate programming, you alternate between English and C or whatever. And by the way, people listening to this should look up literate programming. It's a very interesting concept that you proposed and developed over the years. Yeah. That's the most significant thing, I think, to come out of the TeX project.
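For a taste of what that alternation looks like, here is a minimal CWEB-flavored fragment, invented for illustration rather than taken from Knuth's actual sources; the prose section explains the why, and the `@c` part is ordinary C:

```
@ Here we compute the greatest common divisor by Euclid's algorithm:
we repeatedly replace the pair |(m,n)| by |(n, m mod n)| until |n|
vanishes, at which point |m| holds the answer.

@c
int gcd(int m, int n)
{
  while (n) {
    int r = m % n;  /* the remainder drives the iteration */
    m = n;
    n = r;
  }
  return m;
}
```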

I realized that my programs were to be read by people and not just by computers, and that typography could massively enhance that. And so, for those who are going to look it up, they should also look up this book called Physically Based Rendering by Matt Pharr.

And gosh, anyway, it got an Academy Award. But all the graphic effects you see in movies are accomplished by algorithms. And the whole book is a literate program. It tells you not only how you do all the shading and bring images in that you need for animation and textures and so on, but it also, you can run the code.

And so, I find it an extension of how to teach programming by telling a story as part of the program. So, it works as a program, but it's also readable by humans. Yes, and especially by me a week later or a year later. That's a good test. You yourself understand the code easily a week or a month or a year later.

So, it's the greatest thing since sliced bread. Programming? Or literate? Literate programming. Okay. You heard it here first. Okay. You dodged this question in an interview I listened to. So, let me ask you again here. What makes for a beautiful program? What makes for a beautiful program? Yeah. What are the characteristics you see?

Like you just said, literate programming. What are the characteristics you see in a program that make you sit back and say, "That's pretty good"? Well, the reason I didn't answer is because there are dozens and dozens of answers to that. The same person will define beauty a different way from hour to hour.

I mean, it depends on what you're looking for. At one level, it's beautiful just if it works at all. Another level, it's beautiful if it can be understood easily. It's beautiful if it's literate programming, it's beautiful, it makes you laugh. I mean… Yeah. I'm actually… So, I'm with you.

I think beauty, if it's readable… Readable, yeah. If you understand what's going on and also understand the elegance of thought behind it. And then also, as you said, wit and humor. I was always… I remember having this conversation. I had this conversation on Stack Overflow, whether humor is good in comments.

And I think it is. Whether humor is good in comments. Like when you add comments in the code. Yeah. I always thought a little bit of humor is good. It shows personality. It shows character, shows wit and fun and all those kinds of things of the personality of the programmer.

Yeah. Okay. So, a couple days ago, I received a wonderful present from my former editor at Addison-Wesley. He's downsizing his house and he found that somebody at the company had found all of their internal files about The Art of Computer Programming from the 1960s, and they gave it to him.

Rather than throwing it in the garbage. He had planned to keep it for posterity, but now he realized that posterity is a bit too much for him to handle, so he sent it to me. And so, I just received this big stack of letters, some of which I had written to them, but many of which they had written to early guinea pigs who were telling them whether they should publish or not.

And one of the things was, in the comments to Volume One, the major reader was Bob Floyd, who was my great co-worker in the '60s; he died early, unfortunately. And he commented about the humor in it. And so he ran it by me: keep this joke in or not? They also sent it out to focus groups.

What do you think about humor in a book about computer programming? What's the conclusion? And I stated my philosophy is that the ideal thing is that it's something where the reader knows that there's probably a joke here if he only understood it, and this is a motivation to understand, to think about it a little bit.

But anyway, very delicate humor. I mean, really, each century invents a different kind of humor too. Different cultures have different kinds of humor. Yeah, like we talked about Russia a little bit offline. There's dark humor. And when a country goes through something difficult- Right, "Saturday Night Live" and stuff like this.

And Jack Benny, I mean, Steve Allen wrote this book about humor, and it was the most boring book. But he was one of my idols. But yeah, it's called "The Funny Men" or something like that. But yeah, okay. So anyway, I think it's important to know that this is part of life, and it should be fun and not.

And so I wrote this organ composition, which is based on the Bible, but I didn't refrain from putting little jokes in it also in the music. It's hidden in the music. It's there, yeah. A little humor is okay? Yeah. I mean, not egregious humor. So in this correspondence, there were things I said, "Yeah, I really shouldn't have done that." But other ones I insisted on.

And I've got jokes in there that nobody has figured out yet. In fact, in volume two, I've got a cryptogram, a message, enciphered. And in order to decipher it, you're going to have to break an RSA key, which is larger than people know how to break. And so if computers keep getting faster and faster, then it might be 100 years, but somebody will figure out what this message is, and they will laugh.

I mean, I've got a joke in there. So that one you really have to work for. I don't know if you've heard about this. Let me explain it. Maybe you'll find it interesting. So OpenAI is a company that does AI work, and they have this language model. It's a neural network that can generate language pretty well.

But they also, on top of that, developed something called OpenAI Codex. And together with GitHub, they developed a system called GitHub Copilot. Let me explain what it does. There's echoes of literate programming in it. So what you do is you start writing code, and it completes the code for you.

So for example, you start, let's go to your factoring program. You write in JavaScript or Python or any language that it trained on. You write the first line and some comments, like what this code does, and it generates the function for you. And it does an incredibly good job.

It's not provably right, but it often does a really good job of completing the code for you. - I see. But how do you know whether it did a good job or not? - You could see a lot of examples where it did a good job. And so it's not a thing that generates the code for you.

It starts, it gives you, so it puts the human in the seat of fixing issues versus writing from scratch. Do you find that kind of idea at all interesting? - Every year we're going to be losing more and more control over what machines are doing. And people are saying, well, it seems to, when I was a professor at Caltech, in the sixties, we had this guy who talked a good game.

He could give inspiring lectures and you'd think, well, certainly he said something, but an hour later, you'd say, well, what did he say? But he really felt that it didn't matter whether computers got the right answer or not. It just mattered whether it made you happy or not.

In other words, if your boss paid for it, then you had a job, you could take care of your wife. - So happiness is more important than truth. - Exactly. He didn't believe in truth, but he was a philosopher. - I like it. And somehow you see... - We're going that way.

I mean, so many more things are taken over by saying, well, this seems to work. And when there's a competing interest involved, neither side understands why the decision is being made. We realize now that it's bad, but consider what happens five years, 10 years down the line, when things get even further detached and each thing is based on something from the previous year.

- Yeah. So you start to lose... The more you automate, the more you start to lose track of some deep human thing. - Exponentially. - Exponentially. So that's the dark side. The positive side is the more you automate, the more you let humans do what humans do best. So maybe programming, maybe humans should focus on a small part of programming that requires that genius, the magic of the human mind, and the mess you let the machine generate.

I mean, that's the positive, but of course, it does come with the darkness of automation. What's better, correctness? - I'm never going to try to write a book about that. I'm never going to recommend to any of my students to work for them. - So you're on the side of correctness, not beauty.

- I'm on the side of understanding. - Understanding. - And I think these things are really marvelous if what they do is, all of a sudden we have a better medical diagnosis or it'll help guide some scientific experiment or something like this, curing diseases or whatever. But when it affects people's lives in a serious way, so if you're writing code for, "Oh yeah, here, this is great.

This will make a slaughter bot." - Okay. So I see. So you have to be very careful. Like right now, it seems like fun and games. It's useful to write a little JavaScript program that helps you with a website. But like you said, one year passes, two years passes, five years, and you forget.

You start building on top of it, and then all of a sudden you have autonomous weapon systems. - Well, we're all dead. It doesn't matter in that sense. - Well, in the end, this whole thing ends anyway. - There is a heat death of the universe predicted, but I'm trying to postpone that for...

- For a little bit? Well, it'd be nice that at the end, as we approach the heat death of the universe, there's still some kind of consciousness there to appreciate it. Hopefully human consciousness. - I'll settle for 10 to the 10 to the 10 to the 10th year, some finite number.

But things like this might be the reason we don't pick up any signals from extraterrestrial... - They don't want anything to do with us. Oh, because they... - Because they invented it too. - So you do have a little bit of worry on the existential threats of AI and automation.

So like removing the human from the picture. - Et cetera, yeah. People have more potential to do harm now by far than they did 100 years ago. - But are you optimistic about... So humans are good at creating destructive things, but also humans are good at solving problems. - Yeah, I mean, there's half empty and half full, you know.

- So are we half full? - I can go... So let me put it this way, because it's the only way I can be optimistic. But think of things that have changed because of civilization. They don't occur just in nature. So just imagine the room we're in, for example.

Okay, we've got pencils, we've got books, we've got tables, we've got microphones, clothing, food. All these things were added. Somebody invented them one by one, and millions of things that we inherit, okay? And it's inconceivable that so many millions of billions of things wouldn't have problems, and we'd get it all right.

And each one would have no negative effects and so on. So it's very amazing that as much works as does work. - It's incredibly amazing. And actually, that's the source of my optimism as well, including for artificial intelligence. So we drive over bridges, we use all kinds of technology, we don't know how it works, and there's millions of brilliant people involved in building a small part of that, and it doesn't go wrong, and it works.

And I mean, it works, and it doesn't go wrong often enough for us to suffer. - And we can identify things that aren't working and try to improve on them. - In a often suboptimal way. - Oh, absolutely. - But it's still so- - But the kind of things that I know how to improve require human beings to be rational, and I'm losing my confidence that human beings are rational.

- Yeah, yeah. Now, here you go again with the worst-case analysis. They may not be rational, but they're clever and beautiful in their own kind of way. I tend to think that most people have the desire and the capacity to be good to each other, and love will ultimately win out.

If they're given the opportunity, that's where they lean. In "The Art of Computer Programming," you wrote, "The real problem is that programmers have spent far too much time worrying about efficiency in the wrong places and at the wrong times. Premature optimization is the root of all evil," in parentheses, or at least most of it, in programming.

Can you explain this idea? What's the wrong time? What is the wrong place for optimization? - So, first of all, the word optimization. I started out writing software, and optimization was, I was a compiler writer, so optimization meant making a better translation so that it would run faster on a machine.

So, an optimized program is just like, you run a program and you set the optimization level to the compiler. So, that's one word for optimization. At that time, I happened to be looking in an unabridged dictionary for some reason or other, and I came to the word optimize. What's the meaning of the word optimize?

And it says, "To view with optimism." And you look in Webster's dictionary of English language in the early 1960s, that's what optimize meant. Okay. So, people started doing cost optimization, other kinds of things, whole subfields of algorithms and economics and whatever are based on what they call optimization now.

But to me, optimization, when I was saying that, was changing a program to make it more tuned to the machine. And I found out that when a person writes a program, he or she tends to think that the parts that were hardest to write are going to be hardest for the computer to execute.

So, maybe I have 10 pages of code, but I had to work a week writing this page. I mentally think that when the computer gets to that page, it's going to slow down. Right. It's going to say, "Oh, I don't understand what I'm doing. I better be more careful." Anyway, this is, of course, silly, but it's something that we don't know when we write a piece of code.

We don't know whether the computer is actually going to be executing that code very much. So, people had a very poor understanding of what the computer was actually doing. I made one test where we studied a Fortran compiler, and it was spending more than 80% of its time reading the comment cards.

But as a programmer, we were really concerned about how fast it could take a complicated expression that had lots of levels of parentheses and convert that into something. But that was just less than 1%. So, if we optimized that, we didn't know what we were doing. But once we knew that it was spending 80% of its time on the comment cards, in 10 minutes we could make the compiler run more than twice as fast.
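The moral translates directly into practice: measure before tuning. A minimal C sketch, with dummy loops standing in for real program phases:

```c
#include <stdio.h>
#include <time.h>

/* Measure first, optimize second.  The two "phases" below are
   hypothetical stand-ins for real work; the point is that intuition
   about where the time goes (like the parenthesis-parsing worry
   above) is often off by orders of magnitude, so let the clock
   tell you which phase dominates. */
int main(void) {
    volatile unsigned long sink = 0;

    clock_t t0 = clock();
    for (unsigned long i = 0; i < 1000000UL; i++) sink += i;  /* phase 1 */
    clock_t t1 = clock();
    for (unsigned long i = 0; i < 9000000UL; i++) sink += i;  /* phase 2 */
    clock_t t2 = clock();

    printf("phase 1: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("phase 2: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);
    return 0;
}
```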

You could only do that once you've completed the program, and then you empirically study where the time goes. I had some kind of profiling, so I knew what was important. Yeah. So, you don't think this applies generally? I mean, there's something that rings true to this across programming. I'm glad that it applied generally, but it was only my good luck.

I said it, but I said it in a limited context, and I'm glad if it makes people think about stuff. But it applies in another sense too, that is, sometimes I will do optimization in a way that does help the actual running time, but makes the program impossible to change next week, because I've changed my data structure or something that made it less adaptable.

So, one of the great principles of computer science is laziness, or whatever you call it, late binding. Hold off decisions when you can. And we understand now, quantitatively, how valuable that is. What do you mean we understand? People have written theses about how late binding will improve the... I mean, just-in-time manufacturing or whatever, you can defer a decision instead of doing your advanced planning and say, "I'm going to allocate 30% to this and 50%..." So, in all kinds of domains, there's an optimality to laziness in many cases.

The decision is not made in advance. So, instead, you design in order to be flexible, to change with the way the wind is blowing. Yeah. But so, the reason that line resonated with a lot of people is because there's something about the programmer's mind that enjoys optimization. So, it's a constant struggle to balance laziness and late binding with the desire to optimize. The elegance of well-optimized code is something that's compelling to programmers.

Yeah. It's another concept of beauty. Let me ask you a weird question. So, Roger Penrose has talked about computation, computers, and he proposed that the way the human mind discovers mathematical ideas is something more than a computer, that a universal Turing machine cannot do everything that a human mind can do.

Now, this includes discovering mathematical ideas, and it also includes, he's written a book about it, consciousness. So, I don't know if you know, Roger, but... My daughter's kids played with his kids in Oxford. Nice. So, do you think there is such a limit to the computer? Do you think consciousness is more than a computation?

Do you think the human mind, the way it thinks, is more than a computation? I mean, I can say yes or no, but I have no reason. I mean... So, you don't find it useful to have an intuition in one way or the other? Like, when you think about algorithms, isn't it useful to think about...

I think it's an unanswerable question, and my opinion is no better than anybody else's. You think it's unanswerable. So, you don't think eventually science... How many angels can dance on the head of a... I mean, I don't know. But angels... Anyway, there are lots of things that we can speculate about, but I don't want somebody to say, "Oh, yeah, Knuth said this, and so he's smart, and so that must be..." I mean, I say it's something that we'll never know.

Interesting. Okay, that's a strong statement. I personally think it's something we will know eventually. Like, there's no reason to me why the workings of the human mind are not within the reach of science. That's absolutely possible, and I'm not denying it. Yeah. But right now, you don't have a good intuition one way or the other.

That's also possible that an AI created the universe. Intelligent design has all been done by an AI. All of these things are... But you're asking me to pronounce on it, and I don't have any expertise. I'm a teacher that passes on knowledge, but I don't know... The fact that I vote yes or no on...

Well, you do have expertise as a human, not as a teacher or a scholar of computer science. I mean, that's ultimately the realm of where the discussion of human thought and consciousness is. I know where Penrose is coming from. I'm sure he has no... He might even have thought he proved it, but...

No, he doesn't. He doesn't prove it. He is following intuition. I mean, you have to ask John McCarthy. John McCarthy, I think, was totally unimpressed by these statements. So you don't think... So even like the Turing paper on the Turing test that starts by asking, "Can machines think?" You don't think these kind of...

So Turing doesn't like that question. Yeah. I don't consider it important, let's just put it that way, because it's in the category of things that it would be nice to know, but I think it's beyond knowledge. And so I don't... I'm more interested in knowing about the Riemann hypothesis or something.

So when you say... It's an interesting statement, "beyond knowledge." Yeah. I think what you mean is it's not sufficiently well... It's not even known well enough to be able to formalize it in order to ask a clear question. And so that's why it's beyond knowledge, but that doesn't mean it's not eventually going to be formalized.

Yeah. Maybe consciousness will be understood some day, but the last time I checked, it was still 200 years away. I haven't been specializing in this by any means, but I went to lectures about it 20 years ago when I was... There was a symposium at the American Academy in Cambridge, and it started out by saying essentially, "Everything that's been written about consciousness is hogwash." I tend to disagree with that a little bit.

So consciousness for the longest time still is in the realm of philosophy. So it's just conversations without any basis and understanding. Still, I think once you start creating artificial intelligence systems that interact with humans, and they have personality, they have identity, you start flirting with the question of consciousness, not from a philosophical perspective, but from an engineering perspective.

Yeah. And then it starts becoming much more... I feel like... Yeah. Don't misunderstand me. I certainly don't disagree with that at all. Even at these lectures that we had 20 years ago, there were neurologists pointing out that that human beings had actually decided to do something before they were conscious of making that decision.

I mean, they could tell that signals were being sent to their arms before they knew that they were sick, and things like this are true. And Les Valiant has an architecture for the brain, and more recently, Christos Papadimitriou, in the Academy of Science Proceedings a year ago with two other people, but I know Christos very well.

And he's got this model of this architecture by which you could create things that correlate well with experiments that are done on consciousness. And he actually has a machine language in which you can write code and test hypotheses. And so we might have a big breakthrough. My personal feeling is that the best model I've heard of to explain the miracle of consciousness is that somehow inside of our brains we're having a continual survival-of-the-fittest competition.

As I'm speaking to you, all the possible things I might be wanting to say... Are all in there. There's like a voting going on. Yeah, right. And one of them is winning, and that's affecting the next sentence and so on. And there was this book, Machine Intelligence or something?

On Intelligence. On Intelligence. Yeah. Bill Atkinson was a total devotee of that book. Well, whether it's consciousness or something else, I like the storytelling part. It feels like for us humans, it feels like there's a concrete... It's almost like literate programming. I don't know what programming is going on on the inside, but I'm getting a nice story here about what happened.

And it feels like I'm in control and I'm getting a nice clear story. But it's also possible there's a computation going on that's really messy. There's a bunch of different competing ideas. And in the end, it just kind of generates a story for you to... A consistent story for you to believe.

And that makes it all nice. Yeah. And so I prefer to talk about things that I have some expertise in than things which I'm only on the sideline. So there's a tricky thing. I don't know if you have any expertise in this. You might be a little bit on the sideline.

It'd be interesting to ask though. What are your thoughts on cellular automata in the game of life? Have you ever played with those kinds of little games? I think the game of life is wonderful and shows all kinds of stuff about how things can evolve without the creator understanding anything more than the power of learning in a way.

But to me, the most important thing about the game of life is how it focused for me what it meant to have free will or not. Because the game of life is obviously totally deterministic. Yes. I find it hard to believe that anybody who's ever had children cannot believe in free will.

On the other hand, this makes it crystal clear. John Conway said he wondered whether it was immoral to shut the computer off after he got into a particularly interesting play of the game of life. Wow. Yeah. So to me, the reason I love the game of life is exactly as you said, a clear illustration that from simple initial conditions with simple rules, you know exactly how the system is operating, is deterministic.

And yet, if you allow yourself to lose that knowledge a little bit, enough to see the bigger organisms that emerge, then all of a sudden they seem conscious. They seem not conscious, but living. If the universe is finite, we're all living in the game of life, slowed down.

I mean, it's sped up a lot. But do you think technically some of the ideas that you used for the analysis of algorithms can be used to analyze the game of life? Can we make sense of it, or is it too weird? Yeah. I mean, I've got a dozen exercises in Volume 4, Fascicle 6 that actually work rather well for that purpose.
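For reference, the rule itself is tiny; here is a naive one-generation step in C. Golly's speed comes from Gosper's Hashlife algorithm, which is far more sophisticated than this:

```c
#include <stdio.h>

#define N 8

/* Naive Game of Life step on a small fixed grid: a live cell
   survives with 2 or 3 live neighbors, and a dead cell is born
   with exactly 3.  Cells beyond the edge count as dead. */
void life_step(const unsigned char in[N][N], unsigned char out[N][N]) {
    for (int r = 0; r < N; r++)
        for (int c = 0; c < N; c++) {
            int live = 0;
            for (int dr = -1; dr <= 1; dr++)
                for (int dc = -1; dc <= 1; dc++) {
                    if (dr == 0 && dc == 0) continue;
                    int rr = r + dr, cc = c + dc;
                    if (rr >= 0 && rr < N && cc >= 0 && cc < N)
                        live += in[rr][cc];
                }
            out[r][c] = (live == 3) || (in[r][c] && live == 2);
        }
}

int main(void) {
    unsigned char a[N][N] = {{0}}, b[N][N];
    a[3][2] = a[3][3] = a[3][4] = 1;   /* a horizontal "blinker" */
    life_step(a, b);                   /* one step: it turns vertical */
    for (int r = 0; r < N; r++, putchar('\n'))
        for (int c = 0; c < N; c++) putchar(b[r][c] ? '*' : '.');
    return 0;
}
```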

Bill Gosper came up with the algorithm that allowed Golly to run thousands and thousands of times faster. You know the website called Golly? G-O-L-L-Y. It simulates cellular automata, like the game of life. Yeah. You've got to check it out. Yeah. Can I ask you about John Conway? Yes. In fact, I'm just reading now the issue of The Mathematical Intelligencer that came in last week.

It's a whole issue devoted to remembrance of him. Did you know him? I slept overnight in his house several times. Yeah. He recently passed away. Yeah. He died a year ago, May, I think it was, of COVID. What are some memories of him, of his work that stand out for you?

On a technical level, did any of his work inspire you? On a personal level, did he himself inspire you in some way? Absolutely, all of those things. But let's see, when did I first meet him? I guess I first met him at Oxford in 1967 when I was... Wow.

Okay. That's a long time ago. Yeah. You were minus 20 years old or something. I don't know, 1967. But there was a conference where I think I was speaking about something now known as the Knuth-Bendix algorithm. But he gave a famous talk about knots. I didn't know at the time, but that talk... It was the source of thousands and thousands of papers since then.

It was the source of thousands and thousands of papers since then. He was reported on something that he had done in high school, almost 10 years earlier, before this conference, but he never published it. He climaxed his talk by building some knots. You have these little plastic things that you can stick together.

It's something like Lego, but easier. He made a whole bunch of knots in front of the audience and so on, and then disassembled it. It was a dramatic lecture. He later learned how to give even more dramatic lectures. All right. Were you at that lecture? I was there, yeah, because I was at the same conference.

For some reason, I happened to be in Calgary at the same day that he was visiting Calgary. It was the spring of '72, I'm pretty sure. And we had lunch together. He wrote down during the lunch on a napkin, all of the facts about what he called numbers. He covered the napkin with the theorems about his idea of numbers.

I thought it was incredibly beautiful. Later in 1972, my sabbatical year began and I went to Norway. In December of that year, in the middle of the night, the thought came to me, Conway's theory about numbers would be a great thing to teach students how to invent research and what the joys are of research.

I had also read a book in dialogue by Alfred Rényi, kind of a Socratic thing where the two characters were talking to each other about mathematics. At the end, in the morning, I woke up my wife and said, "Jill, I think I want to write a book about Conway's theory.

You know, I'm supposed to be writing the Art of Computer Programming and doing all this other stuff, but I really want to write this other book." And so we made this plan, but I said, "I thought I could write it in a week." And we made the plan then.

So in January, I rented a room in a hotel in downtown Oslo. We were in sabbatical in Norway. And I rented the hotel in downtown Oslo and did nothing else except write up Conway's theory. And I changed the name to Surreal Numbers. So this book is now published as Surreal Numbers.

And we'd always wondered what it would be like to have an affair in a hotel room. So we figured out that she would visit me twice during the week. Things like this, you know, we would try to sneak in. This hotel was run by a mission organization. These ladies were probably very strict.

But anyway, so- Yeah. And- It was a wild week in every way. But the thing is, I had lost that napkin on which he wrote the theory. I looked for it, but I couldn't find it. So I tried to recreate from memory what he had told me at that lunch in Calgary.

And as I wrote the book, I was going through exactly what the characters in the book were supposed to be doing. So I start with the two axioms and start out the whole thing. And everything is defined, flows from that, but you have to discover why. And every mistake that I make as I'm trying to discover it, my characters make too.

And so it's a long, long story. But I worked through this week. And it was one of the most intense weeks of my life. And I described it in other places. But anyway, after six days, I finished it. And on the seventh day, I rested. And I sent it to my secretary to type.

It was flowing as I was writing it, faster than I could think, almost. But after I finished it and tried to write a letter to my secretary telling her how to type it, I couldn't write anymore. You gave it everything. The muse had left me completely. Can you explain how that week could have happened?

Like, why? It seems like such a magical week. I have no idea. But anyway, there was some... It was almost as if I was channeling. So the book was typed, I sent it to Conway. And he said, "Well, Don, you got the one axiom wrong. There's a difference between less than or equal and not greater than."

I don't know. The opposite of being greater than, and less than or equal. But anyway, technically, it can make a difference when you're developing a logical theory. And the way I had chosen was harder to do than John's original. So we visited him at his house in Cambridge.

In April, we took a boat actually from Norway, across the channel and so on, and stayed with him for some days. And we talked about all kinds of things. He had puzzles that I'd never heard of before. He had a great way to solve the game of solitaire.

Many of the common interests that we had, you know, he'd never written them up. But anyway, then in the summertime, I took another week off and went to a place in the mountains of Norway and rewrote the book using the correct axiom. So that was the most intensive connection with Conway.

After that- It started with a napkin. It started with a napkin. But we would run into each other. Yeah, the next time, really, I was giving lectures in Montreal. I was giving a series of seven lectures about a topic called stable marriages. And he arrived in Montreal between my sixth and seventh lecture.

And we met at a party. And I started telling him about the topic I was doing. And he sat and thought about it. He came up with a beautiful theory to show that, in technical terms, the set of all stable marriages forms a lattice.

And there was a simple way to find the greatest lower bound of two stable pairings and the least upper bound of two stable pairings. And so I could use it in my lecture the next day. And he came up with this theorem during the party. And it's brilliant. It's a distributive lattice.
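Conway's observation is concrete enough to sketch. Given two stable matchings, letting every man take whichever of his two assigned partners he prefers yields another stable matching, which is one of the two lattice operations (taking the less preferred partner gives the other). A C sketch with an invented data layout:

```c
/* One lattice operation on stable matchings, after Conway.
   m1[i] and m2[i] are the women matched to man i in two stable
   matchings of size n; rank[i*n + w] says how man i ranks woman w
   (lower is better).  Each man takes the partner he prefers; by
   Conway's theorem the result is again a stable matching, and in
   particular no two men end up with the same woman. */
void stable_lattice_op(int n, const int m1[], const int m2[],
                       const int rank[], int out[]) {
    for (int i = 0; i < n; i++) {
        int a = m1[i], b = m2[i];
        out[i] = (rank[i * n + a] <= rank[i * n + b]) ? a : b;
    }
}
```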

I mean, it added greatly to the theory of stable matching. So you mentioned your wife, Jill, you mentioned stable marriage. Can you tell the story of how you two met? So we celebrated 60 years of wedded bliss last month. And we met because I was dating her roommate. This was my sophomore year, her freshman year.

I was dating her roommate and I wanted her advice on strategy or something like this. And anyway, I found I enjoyed her advice better than I enjoyed her roommate. You guys were majoring in the same thing? No, no, no. Because I read something about you working on a computer in grad school, on a difficult computer science topic.

So she's an artist and I'm a geek. Okay. What was she doing with a computer science book? Or was it the manual that she was reading? What was she reading? I wrote the manual for a class in computer science that she had to take. Okay. And so... You're the tutor.

No, no. Yeah. She had terrible times trying to learn certain concepts, but I learned art from her. And so we worked together occasionally on design projects, and every year we write a Christmas card and we each have to compromise our own notions of beauty. Yes. When did you fall in love with her?

That day that I asked her about her roommate. Okay. I mean, no, I... Okay. So you're... I don't mind telling these things, depending on how far you go, but... But let me tell you... I promise. I promise not to go too far. Let me tell you this, that I never really enjoyed kissing until I found how she did it.

Wow. And 60 years. Is there a secret you can say in terms of stable marriages of how you stayed together so long? The topic stable marriage, by the way, is not... It is a technical term. Yes. It's a joke, Don. But two different people will have to learn how to compromise and work together, and you're going to have ups and downs and crises and so on.

And so as long as you don't set your expectation on having 24 hours of bliss, then there's a lot of hope for stability. But if you decide that it's... That there's going to be no frustration, then... So you're going to have to compromise on your notions of beauty when you write Christmas cards.

That's it. You mentioned that Richard Feynman was someone you looked up to. Yep. Probably you've met him in Caltech. Well, we knew each other. Yeah, at Caltech for sure. You are one of the seminal personalities of computer science. He's one for physics. Is there specific things you picked up from him by way of inspiration or...?

So we used to go to each other's lectures, but if I saw him sitting in the front row, it would throw me for a loop, actually. I would miss a few sentences. What unique story do I have? I mean, I often refer to his time in Brazil, where he essentially said they were teaching all the physics students the wrong way.

They were just learning how to pass exams and not learning any physics. And he said, "If you want me to prove it, here, I'll turn to any page of this textbook, and I'll tell you what's wrong with this page." And he did so. And the textbook had been written by his host, and it was a big embarrassing incident, but he had previously asked his host if he was supposed to tell the truth.

But anyway, it epitomizes the way education goes wrong in all kinds of fields, and has to periodically be brought back from a process of giving credentials to a process of giving knowledge. Yeah, that's probably a story that continues to this day in a bunch of places. Where it's too easy for educational institutions to fall into credentialism versus inspirationalism.

I don't know if those are words, but sort of understanding versus just giving a little plaque. And it's very much like what we were talking about, if you want to be able to believe the answer a computer is giving. One of the things Bob Floyd showed me in the '60s, he loved this cartoon.

There were two guys standing in front of, in those days, a computer was a big thing, you know. And the first guy says to the other guy, "This machine can do in one second what it would take a million people to do in a hundred years." And the other guy says, "Oh, so how do you know it's right?" That's a good line.

Is there some interesting distinction between physics and math to you? Have you looked at physics much? Like speaking of Richard Feynman, the difference between the physics community, the physics way of thinking, the physics intuition versus the computer science, the theoretical computer science, the mathematical sciences, do you see that as a gap or are they strongly overlapping?

It's quite different, in my opinion. I started as a physics major and I switched into math. And probably the reason was that I could get A+ on the physics exam, but I never had any idea why I would have been able to come up with the problems that were on those exams.

But in math, I knew why the teacher set those problems and I thought of other problems that I could set too. And I believe it's quite a different mentality. It has to do with your philosophy of geekdom? Some of my computer scientist friends are really good at physics and others are not.

And I'm really good at algebra, but not at geometry. Talk about different parts of mathematics. They involve different kinds of thinking. But physicists think of things in terms of waves. And I can think of things in terms of waves, but it's like a dog walking on hind legs if I'm thinking about it.

So you basically, you like to see the world in discrete ways and then physics is more continuous. Yeah. I'm not sure if Turing would have been a great physicist. I think he was a pretty good chemist, but I don't know. But anyway, I see things. I believe that computer science is largely driven by people who have brains who are good at resonating with certain kinds of concepts.

And quantum computers, it takes a different kind of brain. Yeah, that's interesting. Yeah. Well, quantum computers is almost like at the intersection in terms of brain between computer science and physics, because it involves both at least at this time. But there is like the physicists I've known, they have incredibly powerful intuition.

And I mean, statistical mechanics. So I study statistical mechanics, and random processes are related to algorithms in a lot of ways. But there's lots of different flavors of physics, as there are different flavors of mathematics as well. But the thing is, when physicists talk to each other, they use a completely different language than when they're writing expository papers.

And so I didn't understand quantum mechanics at all from reading about it in Scientific American. But when I read how they described it to each other, talking about eigenvalues and various mathematical terms that made sense, then it made sense to me. But Hawking said that every formula you put in a book, you lose half of your readers.

And so he didn't put any formulas in the book. So I couldn't understand his book at all. You could say you understood it, but I really didn't. Well, Feynman also spoke in this way. So Feynman, I think, prided himself on a really strong intuition, but at the same time, he was hiding all the really good, the deep computation he was doing.

So there was one thing that I was never able to, I wish I'd had more time to work out with him, but I guess I could describe it for you. There's something that got my name attached to it called Knuth-Arrow notation, but it's a notation for very large numbers.

And so I find out that somebody invented it in the 1830s. It's fairly easy to understand anyway. So you start with X plus X plus X plus X, N times, and you can call that XN. So XN is multiplication. Then you take X times X, times X times X, and N times.

That gives you exponentiation, X to the Nth power. So that's one arrow. So XN with no arrows is multiplication; X arrow N is X to the Nth power. Yeah. So just to clarify: X plus X plus X, N times, is XN.

Oh yeah. Okay, XN, no arrows. And then repeated multiplication, X times X times X, N times, with one arrow, is X to the N. And then here, the arrow is when you're doing the same kind of repetitive operation on the exponentials. So I put in one arrow and I get X to the Nth power. Now I put in two arrows, and that takes X to the X to the X to the X, N times: a tower of exponentials.

So in other words, if it's two double arrow three, that would be two to the two to the two. So that would be two to the fourth power. That'd be 16. Okay. So that's the double arrow. And now you can do a triple arrow, of course, and so on.
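To make the notation concrete, here is a minimal sketch in Python. The function name and the recursion are mine, not anything from the conversation, but they follow the definition just described: zero arrows means multiplication, and each additional arrow repeats the next operation down, N times.

```python
def arrow(x: int, n: int, arrows: int) -> int:
    """Knuth's up-arrow notation (a sketch): arrows == 0 is multiplication,
    arrows == 1 is exponentiation, arrows == 2 is a power tower, and so on."""
    if arrows == 0:
        return x * n                  # x + x + ... + x, n times
    if n == 0:
        return 1                      # empty repetition of the lower operation
    # One more arrow: apply the (arrows - 1) operation to x, n times.
    return arrow(x, arrow(x, n - 1, arrows), arrows - 1)

print(arrow(2, 3, 1))  # 2^3 = 8: one arrow is exponentiation
print(arrow(2, 3, 2))  # 2 double-arrow 3 = 2^(2^2) = 16, as in the conversation
```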

And I had this paper called, well, essentially big numbers. You try to impress your friend by naming a number they never thought of before. And I gave a special name for it, and designed a font for it that has script K and so on. But it really is 10, I think, like 10 quadruple arrow three or something like that.

And I claim that that number is so mind boggling that you can't comprehend how large it is. But anyway, I talked to Feynman about this, and he said, "Oh, let's just use double arrow. But instead of taking integers, let's consider complex numbers." So you have, I mean, okay, X arrow, arrow two, that means X to the X.

But what about X double arrow 2.5? Well, that's not too hard to figure out; you can interpolate between those. But what about X double arrow I, or one plus I, or some complex number? And so he claimed that there was no analytic function that would do the job. But I didn't know how he could claim that, and I couldn't tell whether it was true.

And his next question was, can you then have a complex number of arrows? Yeah, okay. Wow, okay. So that's Feynman. Can you describe what the Knuth-Morris-Pratt algorithm does, and how did you come to develop it? It's one of the many things that you're known for, and it has your name attached to it.

Yeah, all right. So it should be actually Morris-Pratt-Knuth, but we decided to use alphabetical order when we published the paper. The problem is something that everybody knows now if they're using a search engine. You have a large collection of text, and you want to know if the word Knuth appears anywhere in the text, let's say, or some other word that's less interesting than Knuth.

That's the most interesting word. Like Morris or something. Like Morris, right. So anyway, we have a large piece of text, and it's all one long, one-dimensional thing. You know, first letter, second letter, et cetera, et cetera, et cetera. And so the question, we'd like to be able to do this quickly.

And the obvious way is, let's say we're looking for Morris. So we would go through and wait till we get to letter M. Then we look at the next word, and sure enough, it's an O, and then an R. But then, too bad, the next letter is E. So we missed out on Morris.

And so we go back and start looking for another M, and go over it again. So that's the obvious way to do it. All right. And Jim Morris noticed there was a more clever way to do it. The obvious way would have started, let's say, we found that letter M at character position 1000.

So it would start next at character position 1001. But he said, no, look, we already read the O and the R, and we know that they aren't M's. So we don't have to read those over again. And this gets pretty tricky when the word isn't Morris, but it's more like abracadabra, where you have patterns that are occurring.

Like repeating patterns, at the beginning, at the middle, and-- Right. So he worked it out, and he put it into the system software at Berkeley, I think it was, where he was writing some Berkeley Unix, I think it was some routine that was supposed to find occurrences of patterns in text.
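To see Morris's skipping idea in runnable form, here is a short sketch of the search as it is usually presented today. The variable names and the test string are mine; this is the textbook formulation, not a reconstruction of the original Berkeley code.

```python
def kmp_search(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1.

    A sketch of the Knuth-Morris-Pratt idea: precompute, for each prefix of
    the pattern, the length of the longest proper prefix that is also a
    suffix, so the scan never re-reads a character of the text.
    """
    if not pattern:
        return 0
    fail = [0] * len(pattern)      # fail[i]: longest border of pattern[:i+1]
    k = 0
    for i in range(1, len(pattern)):
        while k > 0 and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    k = 0
    for i, ch in enumerate(text):  # scan the text once, never backing up
        while k > 0 and ch != pattern[k]:
            k = fail[k - 1]        # fall back in the pattern, not the text
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("the moose and morris met", "morris"))  # 14
```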

And we didn't explain it. And so he found out that several months later, somebody had looked at it, decided it didn't look right, and ripped it out. So he had this algorithm, but it didn't make it through, because it wasn't understood. Nobody knew about this particularly. Vaughn Pratt had also independently discovered it a year or two later.

I forget why. I think Vaughn was studying some technical problem about palindromes or something like that. Vaughn wasn't working on text searching, but he was working on an abstract problem that was related. Well, at that time, Steve Cook was a professor at Berkeley, and it was the greatest mistake the Berkeley CS department made, not to give him tenure.

So Steve went to Toronto. But I knew Steve while he was at Berkeley, and he had come up with a very peculiar theorem about a technical concept called a stack automaton. And a stack automaton is a machine that can't do everything a Turing machine can do, but it can only look at something at the top of a stack, or it can put more things on the stack, or it can take things off the stack.

It can't remember a long string of symbols, but it can remember them in reverse order. So if you tell a stack automaton A, B, C, D, E, it can tell you afterwards E, D, C, B, A. It doesn't have any other memory except this one thing that it can see.
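That reversal property is just the last-in, first-out behavior of a stack; a tiny illustration (mine, not from the conversation):

```python
# Push A, B, C, D, E; popping returns them in reverse order,
# which is the only kind of memory the stack automaton has.
stack = list("ABCDE")
print("".join(stack.pop() for _ in range(len(stack))))  # EDCBA
```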

And Steve Cook proved this amazing thing that says, if a stack automaton can recognize a language where the strings of the language are length N, in any amount of time whatsoever (the stack automaton might use a zillion steps), then a regular computer can recognize that same language in time N log N.

So Steve had a way of transforming a computation that goes on and on and on and on, into using different data structures, into something that you can do on a regular computer, fast. The stack automaton goes slow, but somehow the fact that it can do it at all means that there has to be a fast way.

So I thought this was a pretty cool theorem. And so I tried it out on a problem where I knew a stack automaton could do it, but I couldn't figure out a fast way to do it on a regular computer. I thought I was a pretty good programmer, but by golly, I couldn't think of any way to recognize this language efficiently.

So I went through Steve Cook's construction. I filled my blackboard with everything that the stack automaton did; I wrote it all down. And then I tried to see patterns in that: how did he convert that into a computer program on a regular machine? And finally, I psyched it out.

What was the thing I was missing, so that I could say, oh yeah, this is what I should do in my program. And now I have an efficient program. And so I would never have thought about that if I hadn't had his theorem, which was a purely abstract thing.

So you used this theorem to try to intuit how to use the stack automaton for the string matching problem? Yeah. So the problem I had started with was not the string matching problem, but then I realized that the string matching problem was another thing which could be done by a stack automaton.

And so when I looked at what that told me, then I had a nice algorithm for this string matching problem. And it told me exactly what I should remember as I'm going through the string. And I worked it out, and I wrote this little paper called "Automata Theory Can Be Useful." And the reason was that I had been reading all kinds of papers about automata theory, but it had never improved my programming for everyday problems.

It was something that you published in journals and it was interesting stuff. But here was a case where I couldn't figure out how to write the program; once I had a theorem from automata theory, then I knew how to write the program. So this was, for me, a change in life.

I started to say, maybe I should learn more about automata theory. And I showed this note to Vaughn Pratt, and he said, "Hey, that's similar to something I was working on." And Jim Morris was at Berkeley too at the time. Anyway, he's had an illustrious career, but I haven't kept track of Jim. Vaughn became my colleague at Stanford, and my student later.

But this was before that; Vaughn was still a graduate student and hadn't come to Stanford yet. So we found out that we'd all been working on the same thing.

And so we could put our things together. It was my job to write the paper. How did the elephant spring to life? It sprang to life because I had drafted this paper, "Automata Theory Can Be Useful," which was seen by Vaughn and then by Jim. And then we combined our work, because maybe they had also been thinking of writing something up about it.

About specifically the string match? Specifically the string match problem, period. Let me ask a ridiculous question. Last time we talked, you told me what the most beautiful algorithm is actually for strongly connected graphs. What is the hardest problem, puzzle, idea in computer science for you personally that you had to work through?

Just something that was just a... The hardest thing that I've ever been involved with? Yeah. Okay. Well, I don't know how to answer questions like that, but in this case, it's pretty clear, because it's called the birth of the giant component. Okay, so now let me explain that, because this actually gets into physics too.

And it gets into something called Bose-Einstein statistics, but anyway, it's got some interesting stories and it connected with Berkeley again. So start with the idea of a random graph. Here we just say we have N points that are totally unconnected, and there's no geometry involved; there's no saying some points are further apart than others.

All points are exactly alike. And let's say we have a hundred points, and we number them from 00 to 99. All right. Now let's take the digits of pi, two at a time. So we have 31, 41, 59, 26, as we go through pi. We take the first two pairs, 31 and 41, and put a connection between point 31 and point 41.

That's an edge in the graph. Then we take 59 and 26 and make another edge, and the graph gets more and more connected as we add these things one at a time. Okay. We started out with N points and we add M edges. Each edge is chosen completely at random; we forget about the edges we had before.

We might make an edge twice; we might even make an edge from a point to itself, if pi happens to have the right run of four digits in there. But anyway, we're evolving a graph at random. And a magical thing happens when the number of edges is about 0.49 N. So maybe N is a million and I have 490,000 edges.

Then almost all the time, it consists of isolated trees, not even any loops. It's a very small number of edges so far, a little less than half N. But if I have 0.51 N edges, a little more than half N, so a million points and 510,000 edges, now it probably has one component that's much bigger than the others.
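This process is easy to simulate with a union-find structure. The sketch below is mine (the helper names are hypothetical, and it rolls dice instead of reading pi); exact sizes fluctuate from run to run, but the jump in the size of the largest component around N/2 edges shows up clearly.

```python
import random

def largest_component(n: int, m: int, seed: int = 1) -> int:
    """Largest component size after adding m random edges to n isolated points.

    A simulation sketch of the process described above, not Knuth's code:
    each edge joins two endpoints chosen uniformly at random, and duplicate
    edges and self-loops are allowed, just as in the pi-digits version.
    """
    rng = random.Random(seed)
    parent = list(range(n))   # union-find forest
    size = [1] * n            # component sizes, valid at the roots

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for _ in range(m):
        a, b = find(rng.randrange(n)), find(rng.randrange(n))
        if a != b:                         # union by size
            if size[a] < size[b]:
                a, b = b, a
            parent[b] = a
            size[a] += size[b]
    return max(size[find(v)] for v in range(n))

n = 1_000_000
for m in (400_000, 490_000, 510_000, 600_000):
    # Below n/2 edges the biggest component stays tiny relative to n;
    # past n/2 it grows explosively into the giant component.
    print(m, largest_component(n, m))
```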

And we call that the giant component. >> Can you clarify, first of all, is there a name for this kind of super cool pi random graph? >> Well, I call it the pi graph. No, no, the pi graph is actually, my pi graph is based on binary representation of pi, not the decimal representation of pi.

But anyway, let's suppose I was rolling dice instead. So it doesn't have to be pi? It could be any source. The point is, at every step, choose one of those endpoints totally at random, choose another one totally at random, and make that an edge. That's the process.

Yeah. So there's nothing magical about pi; you were just giving us an example? No, no. I was using pi to say: pi is sort of random, nobody knows a pattern in it. Exactly. Got it. But yeah, I could have just as well drawn straws or something.

This was a concept invented by Erdős and Rényi, and they called it the evolution of random graphs. And if you start out with a large number n and you repeat this process, all of a sudden a big bang happens at one half n. There'll be two points together, then maybe we'll have three, and then they maybe branch out a little bit.

But they'll all be separate until we get to one half n. And when we pass one half n, all of a sudden there's substance to it. There's a big clump of stuff that's all joined together. So it's almost like a phase transition of some kind. It's exactly a phase transition, but it's a double phase transition.

It turns out there are actually two things going on at once at this phase transition, which is very remarkable. Okay. So a lot of the most important algorithms are based on random processes, and I want to understand random processes; there are data structures that sort of grow this way.

Okay. So Dick Karp, one of the leading experts on randomized algorithms, had his students looking at this at Berkeley. And we heard a rumor that the students had found something interesting happening. The students are generating this, or simulating this random evolution of graphs, and they're taking snapshots every so often to take a look at what the graph is.

And the rumor was that every time they looked, there was only one component that had loops in it, almost always. They did a million experiments, and only three or four times did they ever happen to see more than one component with a loop at this point.

So they watch until the graph gets completely full. So it starts out totally empty and gets more and more edges all the time. And so, okay, certainly a loop comes along at some point, but all the later loops somehow stay joined to that one component. There never were two separate components with loops.

Wow, interesting. In these experiments. Okay. So anyway, almost always, certainly not always. But with very high probability, this seemed to be true. So we heard about this rumor at Stanford, and we said, "Hmm, if that's true, then a lot more must also be true. So there's a whole theory out there waiting to be discovered that we haven't ever thought about.

So let's take a look at it." And so we looked closer and we found out, no, actually, it's not true. But in fact, it's almost true. Namely, there's a very short interval of time when it's true. And if you don't happen to look at it during that short interval of time, then you miss it.

So in other words, there'll be a period where there are two or three components that have loops, but they join together pretty soon. Okay. So if you don't have a real fast shutter speed, you're going to miss that instant. So separate loops don't exist for long. That's it. Yeah.

I started looking at this to make it quantitative. And the basic problem was to slow down the Big Bang so that I could watch it happening. I think I can explain it actually in fairly elementary terms, even without writing a formula- Let's try. Like Hawking would do. And so let's watch the evolution.

And at first, these edges are coming along and they're just making things without loops, which we call trees. So then all of a sudden, a loop first appears. So at that point, I have one component that has a loop. Now I say that the complexity of a component is the number of edges minus the number of vertices.

So if I have a loop, like a loop of length five, it has five edges and five vertices. Mm-hmm. Or I could put a tail on that, and that would be another edge, another vertex. It's like a zero, one, two complexity kind of thing. So if the complexity is zero, we have one loop, and I call it a cycle, or a cyclic component.

So a cyclic component looks like a wheel to which you attach fibers or trees. They go branching, but there are no more loops. There's only one loop and everything else feeds into that loop. And that has complexity zero. But a tree itself has complexity minus one, because it has, like it might have 10 vertices and nine edges to tie them together.

So nine minus 10 is minus one. So complexity minus one is a tree. It's got to be connected; that's what I mean by a component. If I have 10 things connected, I have to have nine edges. Can you clarify why the complexity can go above zero?

I'm a little confused why. Right. So the complexity plus one is the number of loops. So if complexity is zero, I have one loop. If complexity is one, that means I have one more edge than I have vertices. So I might have, like, 11 edges and 10 vertices. We call it a bicycle, because it's got two loops; it has to have two loops in it.

Why can't it be trees just going off of the loop? I would need more edges than that. Oh, right. Right. Okay. I got it. So every time I get another loop, I get another excess of edges over vertices. I got you. Okay. So in other words, we start out and after I have one loop, I have one component that has a cycle in it.
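Here is a small sketch of that bookkeeping, with a toy example graph of my own: the function computes edges minus vertices for each connected component, exactly the complexity being defined.

```python
def component_complexities(n: int, edges: list[tuple[int, int]]) -> list[int]:
    """Complexity (edges minus vertices) of each connected component:
    -1 for a tree, 0 for one cycle, +1 for a bicycle, and so on.
    A sketch to mirror the definition above, not code from the conversation."""
    parent = list(range(n))

    def find(x: int) -> int:
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x

    for a, b in edges:
        parent[find(a)] = find(b)
    vertices: dict[int, int] = {}
    edge_count: dict[int, int] = {}
    for v in range(n):
        vertices[find(v)] = vertices.get(find(v), 0) + 1
    for a, b in edges:
        edge_count[find(a)] = edge_count.get(find(a), 0) + 1
    return [edge_count.get(root, 0) - count for root, count in vertices.items()]

# A 3-vertex tree, a 3-cycle, and a 4-vertex "bicycle" with two loops:
edges = [(0, 1), (1, 2),                          # tree: 2 edges - 3 vertices = -1
         (3, 4), (4, 5), (5, 3),                  # cycle: 3 - 3 = 0
         (6, 7), (7, 8), (8, 6), (8, 9), (9, 7)]  # bicycle: 5 - 4 = +1
print(sorted(component_complexities(10, edges)))  # [-1, 0, 1]
```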

Now the next step, according to the rumor, would be that at the next step, I would have a bicycle in the evolution of almost all graphs. It would go from cycle to bicycle. But in fact, there's a certain probability it goes from cycle to two different cycles. And I worked out the probability was something like five out of 24.

That's pretty high. It was substantial. But still, soon they're going to merge together, almost always. Okay. That's so cool. But then it branches again: after you have either a bicycle or two separate cycles, the next step is you either have three loops, or two and one, or one, one, one.

Okay. And so I worked out the probability for those transitions, up to the first five transitions. And I had these strange numbers, like 5/24. And I stayed up all night, and about 3 a.m. I had the numbers computed, and I looked at them, and the denominator was something like 23023.

So the probability was something over 23023. I don't know how you worked that out, but... I had a formula to calculate the probability, and I could find the limiting probability as n goes to infinity. And it turned out to be this number, with denominator 23023.

And I looked at the denominator and I said, wait a minute, this number factors because 1001 is equal to seven times 11 times 13. I had learned that in my first computer program. So 23023 is seven times 11 times 13 times 23. That's not a random number. There has to be a reason why those small primes appear in the denominator.
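The arithmetic is easy to check:

```python
# The factorization that caught Knuth's eye at 3 a.m.:
# 23023 is built from small primes, so it is not a random number.
assert 7 * 11 * 13 == 1001
assert 1001 * 23 == 23023
```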

So all of a sudden that suggested another way of looking at the problem, one where small prime factors would occur. So what would that be? It said: oh yeah, let me take the logarithm of this formula. And sure enough, it's going to simplify. And it happened.

And I wouldn't have noticed it except for this factorization. So I go to bed and I say, oh, okay, this looks like I'm slowing down the big bang. I can figure out what's going on here. And the next day it turned out Bill Gates comes to Stanford to visit.

They're trying to sell him on donating money for a new computer science building. Sure. And so they gave me an appointment to talk to Bill. And I wrote down on the blackboard this evolutionary diagram, going from one to two, the 5/24ths and all this business.

And anyway, at the end of the day, he was discussing people with the development office. And he said, boy, I was really impressed with what Professor Knuth said about this giant component. And so, I love this story because it shows that theoretical computer science is really worthwhile. Have you ever talked to Bill Gates about it since then?

Yeah. That's a cool little moment in history. Anyway, he happened to visit on exactly the day after I had found this pattern. And that allowed me to crack the problem so that I could develop the theory some more and understand what's happening because I could now write down explicit formulas for stuff.

And so it would work not only for the first few steps, but for the whole process. And I worked further and further with two co-authors. And we finally figured out the probability that the rumor was true. In other words, look at the evolution of a random graph going from zero to complete and ask: what's the probability that at every point in time, there was only one component with a cycle?

We started with this rumor saying there's only one component with a cycle. And... So the rumor was that it's 100%? The rumor was that it was 100%. It turned out the actual number is like 87%. I should remember the number, but I don't have it with me. But anyway, it turned out to be something like 12 over pi squared, or eight over pi.

Anyway, it was nice; it related to pi, and we could never have done that without... So the hardest problem I ever solved in my life was to prove what this probability is. It was proven? The probability was proven. Yeah, I was able to prove this, and it shed light on a whole bunch of other things about random graphs.

That was sort of the major thing we were after. That's super cool. What was the connection to physics that you mentioned? Well, Bose-Einstein statistics is a study of how molecules bond together without geometry, without distance. You created the TeX typesetting system and released it as open source. Just on that little aspect, why did you release it as open source?

What is your vision for open source? Okay. Well, the word open source didn't exist at that time, but I didn't want proprietary rights over it because I saw how proprietary rights were holding things back. In the late '50s, people at IBM developed a language called Fortran. They could have kept it proprietary.

They could have said, "Only IBM can use this language. Everybody else has to." But they didn't. They said, "Anybody who can translate Fortran into the language of their machines is allowed to make Fortran compilers, too." On the other hand, in the typography industry, I had seen a lot of languages that were developed for composing pages.

Each manufacturer had his own language for composing pages. That was holding everything back, because people were tied to a particular manufacturer, and then new equipment is invented a year later. But printing machines have to expect to amortize the cost over 20, 30 years. So you didn't want that for TeX?

I didn't need the income. I already had a good job. People were buying enough books that it would bring me plenty of supplemental income for everything my kids needed for education, whatever. So there was no reason for me to try to maximize income any further. Income is sort of a threshold function.

If you don't have enough, you're starving. But if you get over the threshold, then you start thinking about philanthropy or else, or trying to take it with you. But anyway, my income was over the threshold. So I didn't need to keep it. And so, I specifically could see the advantage of making it open for everybody.

Do you think most software should be open? So I think that people should charge for non-trivial software, but not for trivial software. Yeah, you gave an example of, I think, Adobe Photoshop versus GIMP on Linux; Photoshop has value, so it's definitely worth paying for. I mean, well, they keep adding stuff that my wife and I don't care about, but somebody obviously does.

But they have built in a fantastic undo feature, for example, in Photoshop, where you can go through a sequence of a thousand complicated steps on graphics, and it can take you back anywhere in that sequence. Yeah, that's a long history. With really beautiful algorithms. I mean, yeah, it's... Oh, that's interesting.

I didn't think about what algorithm; it must be some kind of efficient representation. Yeah, I mean, there's a lot of really subtle, Nobel Prize-class creation of intellectual property in there. And with patents, you get a limited time to... I mean, eventually, the idea of patents is that you publish so that it's not a trade secret.

That said, you've said that, "I currently use Ubuntu Linux on a standalone laptop. It has no internet connection. I occasionally carry flash memory drives between the machine and the Macs that I use for network surfing and graphics. But I trust my family jewels only to Linux." Why do you love Linux?

The version of Linux that I use is stable. Actually, I'm going to have to upgrade one of these days, but... To a newer version of Ubuntu? Yeah, I'll stick with Ubuntu. But right now, I'm running something that doesn't support a lot of the new software. The last stable... I don't remember the number, like 14.

Anyway, it's quite... And I'm going to get a new computer. I'm getting new solid state memory instead of a hard disk. Yeah, the basics. Well, let me ask you, sticking on the topic of TeX, when thinking about beautiful typography, what is your favorite letter, number, or symbol? I know, I know, ridiculous question, but is there a...

Let me show you here. Look at the last page. The very end of the index. What is that? There's a book by Dr. Seuss called On Beyond Zebra, and he gave a name to that. Did you say Dr. Seuss gave a name to that? Dr. Seuss, this is S-E-U-S-S.

He wrote children's books in the '40s and '50s. Wait, are you talking about Cat in the Hat? Cat in the Hat, yeah. That's it, yeah. I like how you hit the spelling. On Beyond Zebra. Did it get to the Soviet Union? Yeah, Dr. Seuss did not come to the Soviet Union, but...

Since you... Oh, actually, I think he did, actually, a little bit when we were... There was a book, maybe Cat in the Hat or Green Eggs and Ham, I think, that was used to learn English. Okay. So I think it made it in that way. Okay. I didn't like those as much as Bartholomew Cubbins, but I used to know Bartholomew Cubbins by heart when I was young.

So what the heck is this symbol we're looking at? There's so much going on. He has a name for it at the end of his book On Beyond Zebra. Who made it? He did. He did. So it looks like a bunch of vines. Does that symbol exist anywhere else? By the way, he made a movie in the early '50s.

I don't remember the name of the movie now. You can probably find it easily enough. But it features dozens and dozens of pianos all playing together at the same time. And all the scenery is sort of based on the kind of artwork that was in his books, the fantasy world of Seussland or something.

And I saw the movie only once or twice, but it's quite... I'd like to see it again. That's really fascinating; you gave him a shout-out here. Okay. Is there some elegant basic symbol that you're attracted to? Something that gives you pleasure? Something used a lot?

Pi? Pi, of course. I try to use pi as often as I can when I need a random example, because it doesn't have any known patterns. So for instance, I don't have it here to show you, but do you know the game called Masyu? M-A-S-Y-U? No. It's a great recreation.

I mean, Sudoku is easier to understand, but Masyu is more addictive. You have black and white stones like on a go board, and you have to draw a path that goes straight through a white stone and makes a right angle turn at the black stone. And it turns out to be really, really nice puzzle because it doesn't involve numbers, but it's visual, but it's really pleasant to play with.

So I wanted to use it as an example in The Art of Computer Programming, and I have an exercise on how to design cool Masyu puzzles. You can find it on Wikipedia, certainly, as an example: M-A-S-Y-U. And so I decided I would take pi, the actual image of it, in pixels, and I would put a stone wherever it belongs in the Greek letter pi.

But the problem was to find a way to make some of the stones white and some of the stones black, so that there's a unique solution to the Masyu puzzle. That was a good test case for my algorithm on how to design Masyu puzzles, because I insisted in advance that the stones had to be placed in exactly the positions that make the Greek letter pi.

All right. That's cool. And it turned out there was a unique way to do that. And so pi is a source of examples where I can prove that I'm starting with something that isn't canned. And most recently, I was writing about something called graceful graphs. Graceful graphs are the following.

You have a graph that has M edges. And you attach numbers to every vertex in the following way: every time you have an edge between two vertices, you take the difference between those numbers, and those differences have to tell you which edge it is; every difference from 1 to M occurs exactly once.

So one edge, two numbers will be one apart. There'll be another edge where the numbers are two apart. And so it's a great computer problem: can you find a graceful way to label a graph? So I started with an organic graph, not a mathematically symmetric graph or anything.

And I take the 49 states of the United States. The edges go from one state to the next state. So for example, California is next to Oregon, Nevada, Arizona. And I include the District of Columbia, so I have 49. I can't get Alaska and Hawaii in there because they don't touch.

You have to be able to drive from one to the other. So is there a graceful labeling of the United States? Each state gets a number. And then if California is number 30 and Oregon is number 11, that edge is going to be number 19. The difference between those.
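For concreteness, here is a small checker for the standard definition of a graceful labeling (labels are distinct integers from 0 to M); the function and the tiny example are mine, not the 49-state graph.

```python
def is_graceful(edges: list[tuple[int, int]], labels: dict[int, int]) -> bool:
    """Check a graceful labeling, per the definition above: with M edges,
    vertex labels are distinct integers drawn from 0..M, and the M absolute
    differences across the edges are exactly 1, 2, ..., M."""
    m = len(edges)
    if len(set(labels.values())) != len(labels):
        return False                                   # labels must be distinct
    if not all(0 <= x <= m for x in labels.values()):
        return False                                   # labels drawn from 0..M
    diffs = sorted(abs(labels[a] - labels[b]) for a, b in edges)
    return diffs == list(range(1, m + 1))              # each difference once

# A path on four vertices (3 edges) is graceful: labels 0, 3, 1, 2
# give edge differences 3, 2, 1.
print(is_graceful([(0, 1), (1, 2), (2, 3)], {0: 0, 1: 3, 2: 1, 3: 2}))  # True
```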

So is there a way to do this for all the states? And so I was thinking of having a contest for people to get it as graceful as they could. But my friend Tom Rokicki actually solved the problem. I was able to get it down to within seven or something like that.

He was able to get a perfect solution. The actual solution or to prove that a solution exists? More precisely, I had figured out a way to put labels on so that all the edges were labeled somewhere between one and 117. But there were some gaps in there because I should really have gone from one to 105 or whatever the number is.

So I'd given myself a lot of slack. He did it without any slack whatsoever: a perfect graceful labeling. And so I called off the contest, because the problem was already solved, and too easy in a sense, because Tom was able to do it in an afternoon. Sorry, did he give an algorithm, or solve this particular one?

For the United States. For the United States. This problem is incredibly hard, I mean, for the general case. For the general case, okay. But it's like coloring: it was very lucky that it worked for the United States, I think. But the theory is still very incomplete. But anyway, then Tom came back a couple of days later, and he had been able to not only find a graceful labeling, but one where the label of Washington was 31,

the label of Idaho was 41, following the digits of pi. Going across the top of the United States, he has the digits of pi perfectly. Did he do it on purpose? He was able to still get a graceful labeling with that extra constraint. What? Wow. It's a miracle. Okay.

But I like to use pi in my book, you see, and this is the- All roads lead to pi. Yeah. Somehow often hidden in the middle of like the most difficult problems. Can I ask you about productivity? Productivity. Yeah. You said that, "My scheduling principle is to do the thing I hate most on my to-do list.

By week's end, I'm very happy." Can you explain this process to a productive life? Oh, I see. But all the time I'm working on what I don't want to do, but still, I'm glad to have all those unpleasant tasks finished. Yes. Is that something you would advise to others?

Yeah. I don't know how to say it. During the pandemic, I feel my productivity actually went down by half because I have to communicate by writing, which is slow. I don't like to send out a bad sentence, so I go through and reread what I've written and edit and fix it.

So everything takes a lot longer when I'm communicating by text messages instead of just together with somebody in a room. It's also slower because the libraries are closed and stuff. But there's another thing about scheduling that I learned from my mother that I should probably tell you, and that is different from what people in the robotics field do, which is called planning.

So she had this principle that was, "See something that needs to be done and do it." Instead of saying, "I'm going to do this first, and I'm going to do this first," just, you know... Just do it. Oh, yeah. Pick this up, you know. But you're... At any one moment, there's a set of tasks that you can do, and you're saying a good heuristic is to do the one you want to do least.

Right. The one I haven't got any good reason to... I'll never be able to do it any better than I am now. There are some things that I know, if I do something else first, I'll be able to do that one better. But there's some that are going to be harder because, you know, I've forgotten some of the groundwork that went into it or something like that.

So I just finished a pretty tough part of the book, and so now I'm, you know, doing the parts that are more fun. But the other thing is, as I'm writing the book, of course, I want the reader to think that I'm happy all the time I'm writing the book.

You know, it's upbeat. I can have humor. I can say, "This is cool." You know, "Wow." And this. I have to disguise the fact that it was painful in any way to come up with this. The road to that excitement is painful. Yeah, it's laden with pain. Okay. Is there...

You've given some advice to people before, but can you... You give me too much credit, but anyway, this is my turn to say things that I believe. But I want to preface it by saying I also believe that other people do a lot of these things much better than I do, so I can only tell you my side of it.

So can I ask you to give advice to young people today, to high school students, to college students, whether they're geeks or the other kind, about how to live a life they can be proud of, how to have a successful career, how to have a successful life? It's always the same as I've said before, I guess, not to do something because it's trendy, but it's something that you personally feel that you were called to do, rather than somebody else expects you to do.

How do you know you're called to do something? You try it and it works or it doesn't work. I mean, you learn about yourself. Life is a binary search. You try something and you find out, oh yeah, I have a background that helped me with this, or maybe I could do this if I worked a little bit harder.

But you try something else and you say, well, I have really no intuition for this and it looks like it doesn't have my name on it. Was there advice along the way that you got about what you should and shouldn't work on, or do you just try to listen to yourself?

Yeah, I probably overreacted another way. When something, when I see everybody else going some way, I probably say, hmm, too much competition. But mostly I played with things that were interesting to me. And then later on I found, actually the most important thing I learned was how to be interested in almost anything.

I mean, not to be bored. It hurts. It makes me very sad when I see kids talking to each other and they say, that was boring. And to me, a person should feel upset if he had to admit that he wasn't able to find something interesting. It's a skill.

They're saying, I haven't learned how to enjoy life. I have to have somebody entertain me instead of... That's really interesting. It is a skill. David Foster Wallace, I really like the thing he says about this, which is the key to life is to be unboreable. And I do really like you saying that it's a skill, because I think that's really good advice, which is if you find something boring, that's not, I don't believe it's because it's boring.

It's because you haven't developed the skill of how to find the beauty in it, how to find the fun in it. That's a really good point. Sometimes it's more difficult than others to do this. I mean, during COVID, there were lots of days when I never saw another human being, but I still found other ways to...

It still was a pretty fun time. Yeah. I came a few minutes early today and I walked around Foster City. I didn't know what was going on in Foster City. I saw some beautiful flowers at the nursery at Home Depot a few blocks away. Life is amazing. It's full of amazing things like this.

Yeah. Sometimes I'll sit there and just stare at a tree. Nature is beautiful. Let me ask you the big ridiculous question. I don't think I asked you last time. So I have to ask this time in case you have a good answer. What is the meaning of life? Our existence here on earth, the whole thing.

No, no, you can't. I will not allow you to try to escape answering this question. You have to answer definitively, because surely, surely, Don Knuth, there must be an answer. What is the answer? Is it 42? Yes. Well, I don't think it's numerical; that was said in jest.

Okay. But all right. So anyway, this is only my take, but I personally hold the belief that God exists, although I have no idea what that means. But I believe that there is something beyond human capabilities. It might be some AI, but whatever it is, I do believe that there is something that goes beyond the realm of human understanding, and that I can try to learn more about how to resonate with whatever that being would like me to do.

So you think you can have occasional glimpses of that being? I strive for that. Not that I ever think I'm going to get close to it, but it's not about me. It's asking: what should I do that that being wants me to do? I'm trying to ask, I mean, does that being want me to be talking to Lex Fridman right now?

And I said, yes. Okay. Thank you. Well, thank you. But what I'm trying to say is, I'm not trying to say, of all the strategies I could choose or something, which one? I try to do it not strategically, but I try to imagine that I'm following somebody's wishes. Even though you're not smart enough to...

To know what they are. Yeah. That funny little dance. Well, I mean, this AI or whatever, it probably is smart enough to help to give me clues. And to make the whole journey from clue to clue a fun one. Yeah. I mean, as so many people have said, it's the journey, not the destination.

And people live through crises and help each other. Things come up, history repeats itself. You look at the world today and ask, is there any government that's working? I read history. I know that things were a lot worse in many ways. There are a lot of bad things all the time.

And I read about people who had good ideas and were working on great projects, and then I know that it didn't succeed in the end. But the new insight I've gotten, actually, came from something I was reading... What book was I reading recently?

It was by Ken Follett, and it was called The Man from St. Petersburg. It was talking about the prelude to World War I. And Winston Churchill, according to this book, sees that Germany has been spending all its gold reserves building up a huge military. And there's no question that if Germany were to attack England, England would be wiped out.

So he wants Russia to help attack Germany from the other side, because Germany doesn't have enough of an army to be fighting two wars at once. Okay, now, there's an anarchist in Russia who sees that wars are something that leaders start, but actually people get killed. And so he wants to stop any alliance between England and Russia, because that would mean that thousands and thousands of people in Russia would be killed who wouldn't otherwise be killed.

All right, and so his life's goal is to assassinate a Russian prince who's visiting England, because that will mean the Tsar will not form the alliance. All right. So we have this question about what the government should do. Should it actually do something that will lead to... you know, is the war inevitable? Or is there a way to have peace?

Or is there a way to have peace? And it struck me that if I were in a position of responsibility for people's lives, in most cases, I wouldn't have any confidence that any of my decisions were good. That these questions are too hard, probably for any human being, but certainly for me.

Well, I think not being sure that the decisions are right is actually a really good thing, coupled with the fact that you do have to make a decision and carry the burden of it. And ultimately, I have faith in human beings, in the great leaders who arise and help build a better world.

I mean, that's the hope of democracy. Yeah, and let's hope that we can enhance their abilities with algorithms. Well put, Don. It's such a huge honor. You've been an inspiration to me and to millions for such a long time. Thank you for spending your really valuable time with me.

Once again, it's a huge honor. I really enjoyed this conversation. Thanks for listening to this conversation with Donald Knuth. To support this podcast, please check out our sponsors in the description. And now, let me leave you with some words from Don Knuth himself. Science is what we understand well enough to explain to a computer.

Art is everything else we do. Thank you for listening. I hope to see you next time. Bye.