
Biological versus Artificial Neural Networks (John Hopfield) | AI Podcast Clips


Chapters

0:00 Intro
3:25 Evolutionary Process
6:17 Adaptation
7:24 Mathematical system
9:07 Evolutionary adaptation
12:22 Understanding
13:51 Feedback
15:49 Generation after generation
17:41 Collective properties

Transcript

- What difference between biological neural networks and artificial neural networks is most captivating and profound to you? At the higher philosophical level, let's not get technical just yet. - One of the things that very much intrigues me is the fact that neurons have all kinds of components, properties to them.

In evolutionary biology, if you have some little quirk in how a molecule works or how a cell works, and it can be made use of, evolution will sharpen it up and make it into a useful feature rather than a glitch. And so you expect in neurobiology for evolution to have captured all kinds of possibilities of getting neurons, of how you get neurons to do things for you.

And that aspect has been completely suppressed in artificial neural networks. - Do the glitches become features in the biological neural network? - They can. Look, let me take one of the things that I used to do research on. If you take things which oscillate, they have rhythms which are sort of close to each other.

Under some circumstances, these things will have a phase transition and suddenly the rhythm will, everybody will fall into step. There was a marvelous physical example of that in the Millennium Bridge across the Thames in London, opened in 2000. And pedestrians walking across, pedestrians don't walk synchronized, they don't walk in lockstep.

But they all walk at about the same frequency. And the bridge could sway at that frequency and the slight sway made pedestrians tend a little bit to lock into step. And after a while, the bridge was oscillating back and forth and the pedestrians were walking in step to it.

And you could see it in the movies made of the bridge. And the engineers made a simple-minded mistake. They assumed that when you walk, it's step, step, step, a back and forth motion. But when you walk, it's also right foot, left foot, a side to side motion. And it's the side to side motion for which the bridge was strong enough, but it wasn't stiff enough.

And as a result, you would feel the motion and you'd fall into step with it. And people were very uncomfortable with it. They closed the bridge for two years while they built stiffening for it. Now, nerves, look, nerve cells produce action potentials. You have a bunch of cells which are loosely coupled together producing action potentials of the same rate.

There'll be some circumstances under which these things can lock together, other circumstances in which they won't. And if they fire together, you can be sure that other cells are gonna notice it. So you can make a computational feature out of this in an evolving brain. Most artificial neural networks don't even have action potentials, let alone the possibility of synchronizing them.
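The phase-locking Hopfield describes, loosely coupled oscillators with nearby rhythms suddenly falling into step, is the behavior of the classic Kuramoto model. A minimal sketch (all numbers and names here are illustrative, not from the conversation): with weak coupling the oscillators drift independently, while strong coupling pulls them into synchrony.

```python
import math

def kuramoto(freqs, coupling, steps=2000, dt=0.01):
    """Euler-integrate coupled phase oscillators:
    dtheta_i/dt = w_i + (K/N) * sum_j sin(theta_j - theta_i)."""
    n = len(freqs)
    phases = [(1.7 * i) % (2 * math.pi) for i in range(n)]  # arbitrary spread-out start
    for _ in range(steps):
        rates = [
            freqs[i] + coupling * sum(math.sin(phases[j] - phases[i]) for j in range(n)) / n
            for i in range(n)
        ]
        phases = [(p + dt * r) % (2 * math.pi) for p, r in zip(phases, rates)]
    return phases

def coherence(phases):
    """Magnitude of the mean phase vector: near 0 if incoherent, near 1 if locked."""
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)

freqs = [1.0, 1.02, 0.98, 1.01, 0.99]  # rhythms "sort of close to each other"
weak = coherence(kuramoto(freqs, coupling=0.001))   # stays incoherent
strong = coherence(kuramoto(freqs, coupling=1.0))   # everybody falls into step
```

The phase transition is in the coupling strength: below a critical value the rhythms wander, above it they lock, which is exactly the collective event a downstream cell could detect.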

- And you mentioned the evolutionary process. So the evolutionary process that builds on top of biological systems leverages the weird mess of it somehow. So how do you make sense of that ability to leverage all the different kinds of complexities in the biological brain? - Well, look, at the biological molecule level, you have a piece of DNA which encodes for a particular protein.

You could duplicate that piece of DNA, and now one part of it can code for that protein, but the other one could itself change a little bit and thus start coding for a molecule which is slightly different. Now, if that molecule was just slightly different and had a function which helped in any old chemical reaction that was important to the cell, evolution would go ahead and let that try, and would slowly improve that function.
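The duplicate-then-drift mechanism can be caricatured as two hill climbs (a toy model of my own, not from the conversation; a gene's "function" is just a number, selection just keeps mutations that don't hurt it). One copy stays pinned to the old function; the freed copy drifts to a new one.

```python
import random

def selected_drift(value, target, rng, sigma=0.05, generations=2000):
    """Propose small mutations; keep one only if it leaves the gene's
    'function' (a number) at least as close to what selection favors."""
    for _ in range(generations):
        mutated = value + rng.gauss(0, sigma)
        if abs(mutated - target) <= abs(value - target):
            value = mutated
    return value

rng = random.Random(0)
old_function, new_function = 0.0, 1.0
# Duplication: both copies start out identical, encoding the old function.
copy_a = selected_drift(0.0, old_function, rng)  # still under the old selection
copy_b = selected_drift(0.0, new_function, rng)  # free to drift toward a new role
```

Copy A barely moves (any change hurts, so selection rejects it), while copy B walks all the way to the new function, one small accepted mutation at a time.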

And so you have the possibility of duplicating and then having things drift apart, one of them retaining the old function, the other one doing something new for you. And there's evolutionary pressure to improve. Look, there is in computers too, but its improvement has to do with closing some companies and opening some others.

The evolutionary process looks a little different. - Yeah, similar time scale perhaps. - Much shorter in time scale. - Companies close, yeah, go bankrupt and are born. Yeah, shorter, but not much shorter. Some companies last a century, but yeah, you're right. I mean, if you think of companies as a single organism that builds and, you know, yeah, it's a fascinating dual correspondence there between biological- - And companies have difficulty having a new product compete with an old product.

When IBM built its first PC, you probably read the book, they made a little isolated internal unit to make the PC. And for the first time in IBM's history, they didn't insist that you build it out of IBM components. But they understood that they could get into this market, which is a very different thing, by completely changing their culture.

And biology finds other markets in a more adaptive way. - Yeah, it's better at it, it's better at that kind of integration. So maybe you've already said it, but what to you is the most beautiful aspect or mechanism of the human mind? Is it the ability to adapt as you've described, or is there some other little quirk that you particularly like?

- Adaptation is everything when you get down to it. But there are differences between adaptation where the learning goes on only over generations and over evolutionary time, and adaptation where the learning goes on at the timescale of one individual, who must learn from the environment during that individual's lifetime.

And biology has both kinds of learning in it. And the thing which makes neurobiology hard is that it's a mathematical system, as it were, built on top of this other kind of evolutionary system. - What do you mean by mathematical system? Where's the math in the biology? - Well, when you talk to a computer scientist about neural networks, it's all math.

The fact is that biology actually came about from evolution, and that biology is about a system which you can build in three dimensions. If you look at computer chips, computer chips are basically two-dimensional structures, maybe 2.1 dimensions, but they really have difficulty doing three-dimensional wiring.

In biology, the neocortex is actually also sheet-like, and it sits on top of the white matter, which is about 10 times the volume of the gray matter and contains all of what you might call the wires. But the effect of computer structure on what is easy and what is hard is immense.

So-- - And biology makes some things easy that are very difficult to understand how to do computationally. On the other hand, it can't do simple floating-point arithmetic, 'cause it's awfully stupid. - Yeah, and you're saying this kind of three-dimensional, complicated structure makes, it's still math, it's still doing math.

The kind of math it's doing enables you to solve problems of a very different kind. - That's right, that's right. - So you mentioned two kinds of adaptation, the evolutionary adaptation and the adaptation, or learning, at the scale of a single human life. Which is particularly beautiful to you and interesting, from a research and from just a human perspective?

And which is more powerful? - I find things most interesting that I begin to see how to get into the edges of them and tease them apart a little bit and see how they work. And since I can't see the evolutionary process going on, I'm in awe of it.

But I find it just a black hole as far as trying to understand what to do. And so in a certain sense, I'm in awe of it, but I couldn't be interested in working on it. - The human lifetime scale is, however, something you can tease apart and study.

- Yeah, you can do, there's the developmental neurobiology, which understands how the connections and the structure evolve from a combination of what the genetics is like and the fact that you're building a system in three dimensions. - In just days and months, those early days of a human life are really interesting.

- They are, and of course, there are times of immense cell multiplication. The time of greatest cell death in the brain is also during infancy. It's turnover. - So what is not effective, what is not wired well enough to use at the moment, throw it out. - It's a mysterious process.

Let me ask, from what field do you think the biggest breakthroughs in understanding the mind will come in the next decades? Is it neuroscience, computer science, neurobiology, psychology, physics, maybe math, maybe literature? - Well, of course I see the world always through a lens of physics. I grew up in physics, and the way I pick problems is very characteristic of physics and of an intellectual background which is not psychology, which is not chemistry, and so on and so on.

- Yeah, both of your parents were physicists. - Both of my parents were physicists, and the real thing I got out of that was a feeling that the world is an understandable place. And if you do enough experiments and think about what they mean, and structure things so you can do the mathematics relevant to the experiments, you'll also be able to understand how things work.

- But that was a few years ago. Did you change your mind at all through many decades of trying to understand the mind, of studying it in different kinds of ways, not even the mind, just biological systems? Do you still have hope that through physics you can understand? - There's the question of what you mean by understand.

- Of course. - When I taught freshman physics, I used to say I wanted the students to understand the subject, to understand Newton's laws. I didn't want them simply to memorize a set of examples to which they knew the equations to write down to generate the answers. I had this nebulous idea of understanding, so that if you looked at a situation, you could say, "Oh, I expect the ball to make that trajectory," or have some intuitive notion of understanding. And I don't know how to express that very well.

I've never known how to express it well. And you run smack up against it when you look at these simple neural nets, feedforward neural nets, which do amazing things, and yet you know contain nothing of the essence of what I would have felt was understanding. Understanding is more than just an enormous lookup table.

- Let's linger on that. How sure are you of that? What if the table gets really big? So, I mean, asked another way: these feedforward neural networks, do you think they'll ever understand? - I could answer that in two ways. I think if you look at real systems, feedback is an essential aspect of how these real systems compute.

On the other hand, if I have a mathematical system with feedback, I know I can unroll this into layers and do it. But I have an exponential expansion in the amount of stuff I have to build if I solve the problem that way. - So feedback is essential. So we can talk even about recurrent neural nets, so recurrence.
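The unrolling Hopfield alludes to can be sketched with a toy recurrence (hypothetical numbers and function names, not from the conversation): a feedback loop run for T steps computes exactly what T stacked copies of the same step compute, trading a small piece of reused machinery for a feedforward structure that grows with the horizon.

```python
def step(state, x, w_rec=0.5, w_in=1.0):
    """One feedback update: the new state mixes the old state and the input."""
    return w_rec * state + w_in * x

def with_feedback(x, steps):
    """Run the loop: one small piece of machinery, reused every time step."""
    state = 0.0
    for _ in range(steps):
        state = step(state, x)
    return state

def unrolled(x, steps):
    """The same computation with the loop removed: one copy of the
    machinery per time step, so the structure grows with the horizon."""
    state = 0.0
    for layer in [step] * steps:  # `steps` feedforward "layers", shared weights
        state = layer(state, x)
    return state
```

The two functions agree for any input and horizon; the cost of removing feedback is the duplicated structure, which is Hopfield's point about the blow-up in the amount of stuff you have to build.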

But do you think all the pieces are there to achieve understanding through these simple mechanisms? Like, back to our original question, is there a fundamental difference between artificial neural networks and biological ones? Or is it just a bunch of surface stuff? - Suppose you ask a neurosurgeon, when is somebody dead?

- Yeah. - They'll probably go back to saying, well, I can look at the brain rhythms and tell you this is a brain which is never gonna function again. This other one is one which, if we treat it well, is still recoverable. And they can do that just with some electrodes, looking at simple electrical patterns which don't look in any detail at all at what individual neurons are doing.

These rhythms are utterly absent from anything which goes on at Google. - Yeah, but the rhythms-- - But the rhythms what? - So, well, that's like comparing, okay, I'll tell you. It's like you're comparing the greatest classical musician in the world to a child first learning to play. The question I'm asking is, they're still both playing the piano.

I'm asking, will it ever go on at Google? Do you have a hope? Because you're one of the seminal figures in launching both disciplines, both sides of the river. - I think it's going to go on generation after generation the way it has, where what you might call the AI computer science community says, let's take the following.

This is our model of neurobiology at the moment, let's pretend it's good enough and do everything we can with it. And it does interesting things. And after a while, it sort of grinds into the sand, and you say, ah, something else is needed from neurobiology, and some other grand thing comes in.

And that enables you to go a lot further, but it'll go into the sand again. And I think there could be generations of this evolution, I don't know how many of them, and each one is going to get you further into what our brain does, in some sense getting past the Turing test in longer and broader aspects.

And how many of these there are going to have to be before you say, "I've made something, I've made a human," I don't know. - But your sense is it might be a couple. - My sense is it might be a couple more. - Yeah. - And going back to my brain waves, as it were.

From the AI point of view, they would say, ah, maybe these are an epiphenomenon and not important at all. The first car I had, a real wreck of a 1936 Dodge: go above 45 miles an hour and the wheels would shimmy. - Yeah. - A good speedometer, that. Now, nobody designed the car that way.

The car is malfunctioning to have that. But in biology, if it were useful to know when you're going more than 45 miles an hour, you'd just capture that, and you wouldn't worry about where it came from. - Yeah. - It's going to be a long time before that kind of thing, which can take place in large complex networks of things, is actually used in the computation.

Look, how many transistors are there in your laptop these days? - Actually, I don't know the number. It's- - It's on the scale of 10 to the 10. I can't remember the number either. - Yeah. - And all the transistors are somewhat similar. And most physical systems with that many parts, all of which are similar, have collective properties.

- Yes. - Sound waves in air, earthquakes, weather, what have you, have collective properties. There are no collective properties used in artificial neural networks in AI. - Yeah, it's very- - If biology uses them, it's going to take us two more generations of things for people to actually dig in and see how they are used and what they mean.

- See, you're very right. We might have to return several times to neurobiology and try to make our transistors more messy. - Yeah, yeah. At the same time, the simple ones will conquer big aspects. And I think one of the biggest surprises to me was how well learning systems which are manifestly non-biological can do, and how important and how useful they can be in AI.
