Back to Index

Was This Man Responsible For Inventing The Computer?


Chapters

0:00 Cal's intro
1:00 Sam Harris and quitting Twitter
3:35 Who is responsible for the invention of the computer
4:35 Alan Turing
5:45 The Turing Machine
10:20 The Imitation Game
10:50 Claude Shannon

Transcript

So the topic I want to discuss might seem unusual at first, but there is a backstory here, which I'll quickly tell you. Here it is: did Alan Turing invent the computer? All right, why are we talking about this? Well, it goes back to last week. I was a guest on Sam Harris's podcast, Making Sense.

I don't know if you've heard that one yet, Jesse, but it's an episode that's creating quite a stir. Not because of me, but because the episode begins with a 45-minute monologue from Sam, where he explains why he decided, a few days before that episode posted, to quit Twitter.

So here is where, if we had an applause sound effect, a confetti-and-applause sound effect, we would hear it. So Sam quit Twitter, and it sort of made sense to attach his announcement about that to this interview I'd recorded with him a few weeks earlier, because obviously when you talk to Cal Newport, you're going to hear a lot about tech and society, and not necessarily a very positive view of Twitter.

Now, interestingly, later in that interview, when we're talking about Twitter and Sam is cataloging a lot of his concerns about it, I took a swing and made a pitch directly: Sam, you should quit Twitter. Now, I don't get to take credit for that.

Sam actually specifically addresses this at the end of his monologue. He says, as you will hear, I already had doubts when I was doing this interview and Cal was pushing me to quit it. I didn't quit Twitter because Cal told me to, but he was one of the voices in my head when I made the decision.

So let's take a partial W on that, Jesse. Um, but anyways, obviously I'm glad he did it, not just for his own sanity, but because I think he's a role model. As long-time listeners know, my issue is not with the existence of these social media platforms.

It is with the assumption of ubiquity. That, I think, is the problem: the idea that everyone has to use these platforms. That's what I mind. I don't care that Twitter exists. I care that it's a big deal that I don't use it. And so when we have more high-profile people like Sam opt out, it opens up that possibility to others, others who maybe feel like the costs are outweighing the benefits.

It makes it easier and easier for those who follow in his wake to say, you know what, I'm going to do something similar. I got a lot of heat when I originally left Twitter. Sam, who's way more well-known than me, is getting a huge amount of heat right now. But the more of us who go through this, the easier it will become for those who follow.

So I think that's all good news. Anyways, it's a long interview. A lot of it deals with tech issues, a lot of tech and society, a lot of tech criticism. Very interesting stuff, a lot of new theories you haven't heard me talk about before. Worth listening to. But earlier in the show, Sam and I wandered across a bunch of esoteric topics that just sort of popped into our heads as we were chatting.

And one of the topics that came up relatively early in the conversation was a thought experiment Sam had considered before: if you could go back in time, somewhere between the 1930s and the 1940s, and, let's say, kill a single individual, with the goal of delaying as long as possible the development of the modern computer, which individual would you kill? Now, obviously that's a violent construction for what's actually a very interesting question: who was probably most singly, influentially responsible for the development of the modern computer?

I thought that was a cool thought experiment, so we got into it. Now, one of the names that often comes to people's minds when they think about the invention of the computer is Alan Turing. And as Sam and I talked about, I think Turing gets too much credit as an initiator of the development of modern digital computing.

I'm a huge Alan Turing fan. I teach Turing to our doctoral students at Georgetown. I know his work very well. He's the father of theoretical computer science, an incredibly influential and very original thinker. But his role in the invention of the computer, I think, has been inflated in recent decades. He would not be, in other words, my choice of who to go back and rub out.

If, that is, I was trying to delay the development of the computer. Now, I'll tell you soon who I think that person is, but first let's return to the question of Turing. So what did he do that became so connected to computing in the modern mind? Well, it really comes down to the notion of the Turing machine.

If you want to understand the notion of the Turing machine (and I promise you, I'm not going to get into professor mode here, I'll be very brief), you have to go back to the paper he wrote called "On Computable Numbers, with an Application to the Entscheidungsproblem." The Entscheidungsproblem is the German name for the decision problem posed in 1928 by David Hilbert.

Now, this problem had nothing to do with computers; it predates them. What it asked is: can we come up with what they would have called back then an effective procedure? Today we might call this an algorithm, but back then they would call it an effective procedure. That is, a step-by-step series of instructions for solving any math problem we might want to solve.

Does every math problem have a step-by-step way to solve it? This was a big question in mathematical logic. A lot of people were working on it and Turing came up with an answer. And the way he came up with an answer is he said, let's have a formal definition of an effective procedure.

And that's when he came up with this thought experiment of the Turing machine. It's a set of instructions, an infinite tape, and a read head that can move from position to position on the tape, read what's there, look up in the instructions what to do, maybe overwrite what's there, and move one direction or the other.

Turing made the argument that this abstract machine could, in theory, implement any possible effective procedure. So every effective procedure has a corresponding Turing machine. He then did a bit of mathematical-logic trickery, where he said, look, we could describe any such Turing machine with a sequence of whole numbers.

And we could just put those whole numbers together and get one really big whole number. So every Turing machine, and therefore every effective procedure, has a corresponding whole number. Now, it might be really big, it might be a couple hundred thousand digits long, but conceptually speaking, there is a way to label every possible effective procedure with a whole number.
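As a rough illustration, the mechanics being described (instructions, tape, read head) and the numbering trick can both be sketched in a few lines of Python. The toy machine and its transition table here are my own example, not anything from Turing's paper:

```python
# A toy sketch of a Turing machine: instructions, a tape, and a read head
# that moves, reads, looks up a rule, writes, and moves on.
# The example machine below is illustrative, not from Turing's paper.

def run_turing_machine(transitions, tape, state="start", max_steps=1000):
    """transitions maps (state, symbol) -> (new_state, write_symbol, move),
    where move is +1 (right) or -1 (left). Halts when no rule applies."""
    cells = dict(enumerate(tape))  # sparse tape; unvisited cells read as "_"
    head = 0
    for _ in range(max_steps):
        symbol = cells.get(head, "_")
        if (state, symbol) not in transitions:
            break  # no matching instruction: the machine halts
        state, write, move = transitions[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example machine: flip every bit, moving right, until hitting a blank.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
}
print(run_turing_machine(flip, "1011"))  # -> 0100

# The numbering trick: the finite transition table can be serialized
# and read off as one (very large) whole number.
machine_number = int.from_bytes(repr(sorted(flip.items())).encode(), "big")
```

The point of the last line is just that any finite rule table can be flattened into a single whole number, which is all the counting argument below needs.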

Then he looked at what we mean by a problem, and he focused in on a subset of problems you might try to solve, called decision problems. He did a little bit of mathematical logic and argued that every such problem can be represented by a real number. That is, a number with a decimal point and an infinite number of decimal places.

You know, 1.0146578... on to infinity. And in fact, there's a one-to-one correspondence there: you could take every real number and it exactly describes a particular decision problem. This was a big deal, because there is a well-known result going back to Cantor (now we really are back in the 19th century) that says there are many more real numbers than natural numbers.

There exists no way to map every natural number onto a real number such that you've covered all the real numbers. The impact of that is: if we map every possible effective procedure to the problem it solves, there will be many problems left over that aren't mapped to by any effective procedure.

Math, math, math, logic, logic, logic. And the conclusion is that there are many more problems out there in the universe than there are algorithms, or effective procedures, that can solve them. Most problems out there can't be solved by effective procedures. This was the question that Hilbert was trying to answer.
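For reference, the counting argument being summarized here can be stated compactly (this is my compression of the standard Cantor-style argument, not Turing's own notation):

```latex
\begin{align*}
|\{\text{Turing machines}\}| &\le |\mathbb{N}| = \aleph_0
  && \text{(each machine is labeled by a whole number)}\\
|\{\text{decision problems}\}| &= |\mathbb{R}| = 2^{\aleph_0}
  && \text{(each problem corresponds to a real number)}\\
\aleph_0 &< 2^{\aleph_0}
  && \text{(Cantor: the reals are uncountable)}
\end{align*}
```

So countably many effective procedures cannot cover uncountably many problems; some problems must go unsolved.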

Turing answered it. So this was all about logic, mathematical logic, the foundational math that was going on right then. None of it had to do with computers. The reason we connect this to modern digital computing is that you can say Turing's notion of a Turing machine is an abstract notion of a computer, because the tape could have instructions on it that a Turing machine could run.

He talked about in his original paper, something called the universal Turing machine, where the input on the tape is a description of another Turing machine and it simulates it. So you do have some of the conceptual basics there of a computer reading a program and executing it. Okay, fair enough.

Also, we do know that von Neumann at Princeton was familiar with Turing's work. He met Turing when Turing was visiting the Institute for Advanced Study in Princeton. Von Neumann later advanced the von Neumann architecture for modern computers, which is the one we use today. So there's a little bit of an influence there as well.

But the idea that Turing single-handedly introduced this idea that we could have universal computing machines, that's just not true. Before Turing even did this work, and well before this work was known outside of esoteric mathematical circles, we already had general-purpose analog computers. We had, for example, Vannevar Bush's differential analyzer at MIT.

In the mid-1930s, we began to get the very first ideas being proposed for making fully electronic computing machines. As the war went on, there was a huge push for more advanced electronic computing. They were using these machines to calculate artillery tables and to help aim anti-aircraft guns with sort of cybernetic sensors.

It was a huge research effort. And while it's true that Turing, after the war, got involved in a project in the UK to develop an electronic computer, this was one of at least a half dozen ongoing projects, many of which finished sooner. There was the ENIAC at Penn, for example, and there was von Neumann's project at Princeton.

There was a project going on at Harvard. A lot of people were working on this problem, and they didn't need Turing to do it. The final thing people point to is that they saw that movie, what was it called, The Imitation Game, and think, well, didn't he invent those computing machines to break the Enigma code?

No, those were developed by the Poles. The Polish codebreakers developed those. Turing was just building a more advanced version of those machines; the British had more funding. So they used the initial work that the Poles had put into breaking Enigma and then expanded it. I love Turing, but he didn't invent the computer.

So who would I go back and rub out if I was trying to delay the computer? I would say Claude Shannon. In 1937, Claude Shannon wrote the most important master's thesis that anyone has ever written. It was called "A Symbolic Analysis of Relay and Switching Circuits."

This master's thesis is what founded the entire field of digital electronics. It was the really key breakthrough that everything else was built on. Shannon had been interning at Bell Labs, where he was seeing electromagnetic relays; phone networks used these relays to automatically connect calls using electrical signals. He was also studying for a degree in mathematics at MIT.

He put those two things together and said, wait a second, you can take purely logical statements expressed in Boolean algebra and implement them with electrical circuits using these electromechanical relays. So you can take an arbitrary mathematical specification of a logical circuit and build it. Anything you can come up with, any Boolean algebra statement, we have a systematic way of building with wires and magnets.

We can build it as an electrical circuit. I have a quote; he said this later in life: it just happened that no one else was familiar with both of these fields at the same time. So he happened to be in both worlds; math and the phone company came together. That was probably the single biggest innovation, because once we realized we could build arbitrary logic into electrical circuits, that opened up the whole hope that whatever idea we have that we want to implement, whatever adding circuit or logic circuit we need for our conceptual design of a computer, if we can specify it mathematically, we can build it.
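To make that concrete, here's a small sketch of the idea in Python. The relay model and the half-adder example are mine, chosen to illustrate the thesis's point, not taken from Shannon's notation:

```python
# Shannon's insight, sketched: Boolean algebra statements map directly onto
# switching circuits built from relays (this model is illustrative only).

def relay(control, signal):
    """A normally-open relay: passes `signal` only while `control` energizes the coil."""
    return signal if control else False

def AND(a, b):
    # Two relays in series: current flows only if both coils are energized.
    return relay(a, relay(b, True))

def OR(a, b):
    # Two relays in parallel: current flows if either coil is energized.
    return relay(a, True) or relay(b, True)

def NOT(a):
    # A normally-closed relay: opens the circuit when the coil is energized.
    return not a

def XOR(a, b):
    # Composed from the primitives above, as any Boolean formula can be.
    return OR(AND(a, NOT(b)), AND(NOT(a), b))

# The Boolean specification "sum = a XOR b, carry = a AND b" realized
# entirely with relay circuits: a half adder, the seed of an adding machine.
def half_adder(a, b):
    return XOR(a, b), AND(a, b)

print(half_adder(True, True))  # -> (False, True): 1 + 1 = binary 10
```

The design point is the one in the quote: once AND, OR, and NOT exist as physical circuits, any Boolean specification, however large, can be assembled mechanically from them.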

So if we're going to follow this oddly martial exploration of early computing: getting rid of Shannon in the thirties would probably have a bigger impact than getting rid of Turing in the thirties. (upbeat music)