
François Chollet: Scientific Progress is Not Exponential | AI Podcast Clips


Chapters

0:00 Scientific Progress is Not Exponential
1:30 How to measure scientific progress
3:35 Temporal density of significance

Transcript

- What is your intuition for why an intelligence explosion is not possible? Taking all the scientific revolutions, why can't we slightly accelerate that process?

- So you can absolutely accelerate any problem-solving process. Recursive self-improvement is absolutely a real thing. But what happens with a recursively self-improving system is typically not an explosion, because no system exists in isolation, and so tweaking one part of the system means that suddenly another part of the system becomes a bottleneck. And if you look at science, for instance, which is clearly recursively self-improving, clearly a problem-solving system, scientific progress is not actually exploding. If you look at science, what you see is the picture of a system that is consuming an exponentially increasing amount of resources, but having a linear output in terms of scientific progress. And maybe that will seem like a very strong claim. Many people are actually saying that scientific progress is exponential, but when they claim this, they're actually looking at indicators of resource consumption by science. For instance, the number of papers being published, the number of patents being filed and so on, which are just completely correlated with how many people are working on science today. So it's actually an indicator of resource consumption. But what you should look at is the output: progress in terms of the knowledge that science generates, in terms of the scope and significance of the problems that we solve. And some people have actually been trying to measure that. Like Michael Nielsen, for instance. He had a very nice paper about it, I think from last year. His approach to measuring scientific progress was to look at the timeline of scientific discoveries over the past 100, 150 years, and for each major discovery, ask a panel of experts to rate the significance of the discovery. If the output of science as an institution were exponential, you would expect the temporal density of significance to go up exponentially, maybe because there's a faster rate of discoveries, maybe because the discoveries are increasingly important. And what actually happens, if you plot this temporal density of significance measured in this way, is that you see very much a flat graph. You see a flat graph across all disciplines: across physics, biology, medicine, and so on. And it actually makes a lot of sense if you think about it, because think about the progress of physics 110 years ago, right? It was a time of crazy change. Think about the progress of technology 170 years ago, when we started replacing horses with cars, when we started having electricity and so on. It was a time of incredible change. And today is also a time of very, very fast change, but it would be an unfair characterization to say that today technology and science are moving way faster than they did 50 years ago, 100 years ago. And if you do try to rigorously plot the temporal density of significance, you do see very flat curves.

- That's fascinating.

- And you can check out the paper that Michael Nielsen had about this idea.
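As an illustration of the metric being described, here is a minimal sketch of how such a "temporal density of significance" could be computed. The discovery list and ratings below are hypothetical placeholders, not Nielsen's actual data or methodology:

```python
# Toy illustration of the "temporal density of significance" metric
# described above. The discoveries and ratings are hypothetical.
from collections import defaultdict

# (year, expert significance rating on an arbitrary 0-10 scale)
discoveries = [
    (1905, 9.5), (1915, 9.0), (1927, 8.5), (1953, 9.0),
    (1964, 7.5), (1983, 7.0), (1998, 7.5), (2012, 7.0),
]

def temporal_density(discoveries, bucket=25):
    """Sum significance per `bucket`-year window.

    A roughly constant total per window is the "flat graph"
    described above; exponential output would instead show the
    per-window totals growing over time.
    """
    totals = defaultdict(float)
    for year, significance in discoveries:
        totals[(year // bucket) * bucket] += significance
    return dict(sorted(totals.items()))

print(temporal_density(discoveries))
# {1900: 18.5, 1925: 8.5, 1950: 16.5, 1975: 14.5, 2000: 7.0}
```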
- So the way I interpret it is: as you make progress in a given field, or in a given subfield of science, it becomes exponentially more difficult to make further progress. Like the very first person to work on information theory: if you enter a new field in its very early years, there's a lot of low-hanging fruit you can pick.

- That's right, yeah.

- But the next generation of researchers is gonna have to dig much harder to make smaller discoveries, probably a larger number of smaller discoveries, and to achieve the same amount of impact, you're gonna need a much greater headcount. And that's exactly the picture you're seeing with science: the number of scientists and engineers is in fact increasing exponentially, the amount of computational resources available to science is increasing exponentially, and so on. So the resource consumption of science is exponential, but the output in terms of progress, in terms of significance, is linear. And that holds even though science is recursively self-improving, meaning that scientific progress turns into technological progress, which in turn helps science. Computers, for instance, are products of science, and computers are tremendously useful in speeding up science. The internet, same thing: the internet is a technology made possible by very recent scientific advances, and itself, because it enables scientists to network, to communicate, to exchange papers and ideas much faster, it is a way to speed up scientific progress. So even though you're looking at a recursively self-improving system, it is consuming exponentially more resources to produce the same amount of problem-solving.

- So that's a fascinating way to paint it, and certainly that holds for the deep learning community. If you look at the temporal, what did you call it, the temporal density of significant ideas in deep learning? I'd have to think about that, but if you really look at significant ideas in deep learning, they might even be decreasing.

- So I do believe the per-paper significance is decreasing, but the number of papers is still increasing exponentially today. So I think if you look at the aggregate, my guess is that you would see linear progress. If you were to sum the significance of all papers, you would see roughly linear progress. And in my opinion, it is not a coincidence that you're seeing linear progress in science despite exponential resource consumption. I think the resource consumption is dynamically adjusting itself to maintain linear progress, because we as a community expect linear progress, meaning that if we start investing less and seeing less progress, it means that suddenly some lower-hanging fruit becomes available and someone's gonna step up and pick it. So it's very much like a market for discoveries and ideas.
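The claim that exponential input can yield only linear output follows from one assumption: that the cost of the next unit of progress grows exponentially with cumulative progress. A minimal numerical sketch, with made-up constants, shows how the two exponentials cancel:

```python
# Toy model of the argument above: if the cost of the next unit of
# progress grows exponentially with cumulative progress, then an
# exponentially growing research budget yields only linear progress.
# All constants are made up for illustration.
import math

k = 0.5      # how fast difficulty compounds with cumulative progress
g = 0.5      # growth rate of resources (researchers, compute)
dt = 0.01    # integration time step

progress = 0.0
for step in range(10001):                      # simulate t = 0 .. 100
    t = step * dt
    resources = math.exp(g * t)                # exponential input
    cost_per_unit = math.exp(k * progress)     # exponential friction
    if step % 2000 == 0:
        print(f"t={t:5.1f}  resources={resources:10.3e}  progress={progress:6.2f}")
    progress += resources / cost_per_unit * dt # dP/dt = R(t) / cost(P)

# For these constants the closed-form solution is progress(t) = t:
# resources grow like e^(0.5 t) while progress advances linearly.
```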
- But there's another fundamental part which you're highlighting, which is the hypothesis that in science, or in the space of ideas, any one path you travel down gets exponentially more difficult to make progress along. In your sense, is that gonna hold across our mysterious universe?

- Yes. Well, exponential progress triggers exponential friction, so that if you tweak one part of the system, suddenly some other part becomes a bottleneck. For instance, let's say you develop some device that measures its own acceleration, and then it has some engine and it outputs even more acceleration in proportion to its own acceleration, and you drop it somewhere. It's not gonna reach infinite speed, because it exists in a certain context: the air around it is gonna generate friction and cap it at some top speed. And even if you were to consider the broader context and lift that bottleneck, the bottleneck of friction, then some other part of the system would step in and create exponential friction, maybe the speed of light or whatever. And this definitely holds true when you look at the problem-solving algorithm that is being run by science as an institution, science as a system. As you make more and more progress, despite having this recursive self-improvement component, you are encountering exponential friction. The more researchers you have working on different ideas, the more overhead you have in terms of communication across researchers. You were mentioning quantum mechanics, right? Well, if you want to start making significant discoveries today, significant progress in quantum mechanics, there is an amount of knowledge you have to ingest which is huge. So there's a very large overhead to even start to contribute. There's a large overhead to synchronize across researchers and so on. And of course, the significant practical experiments are going to require exponentially expensive equipment, because the easier ones have already been run, right?

- So in your sense, there's no way of escaping this kind of friction with artificial intelligence systems.

- Yeah, no, I think science is a very good way to model what would happen with a superhuman, recursively self-improving AI.

- That's your sense, I mean, the-

- That's my intuition. It's not like a mathematical proof of anything. That's not my point; I'm not trying to prove anything. I'm just trying to make an argument to question the narrative of intelligence explosion, which is quite a dominant narrative, and you do get a lot of pushback if you go against it. Because, for many people, AI is not just a subfield of computer science. It's more like a belief system: this belief that the world is headed towards an event, the singularity, past which AI will go exponential, the world will be transformed, and humans will become obsolete. Because it is not really a scientific argument but more of a belief system, it is part of the identity of many people. And if you go against this narrative, it's like you're attacking the identity of the people who believe in it. It's almost like saying God doesn't exist, or something.

- Right.

- So you do get a lot of pushback if you try to question these ideas.
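Returning to the falling-device analogy from earlier in the transcript: a minimal numerical sketch with arbitrary constants, assuming the thrust scales with the device's current speed (a simplification of "acceleration in proportion to its own acceleration") and that friction is quadratic air drag, shows the speed ceiling emerging once friction overtakes the self-amplifying thrust:

```python
# Numerical sketch of the falling-device analogy: thrust that feeds
# back on the system's own motion still hits a ceiling once quadratic
# air drag overtakes it. All constants are arbitrary.

feedback_gain = 2.0   # thrust per unit of current speed
drag_coeff = 0.1      # drag grows with the square of speed
dt = 0.01             # integration time step

v = 1.0  # small initial speed to seed the feedback loop
for step in range(5001):
    if step % 1000 == 0:
        print(f"t={step * dt:5.1f}  v={v:8.3f}")
    thrust = feedback_gain * v    # the self-amplifying component
    drag = drag_coeff * v * v     # friction that grows faster than thrust
    v += (thrust - drag) * dt

# The speed plateaus at feedback_gain / drag_coeff = 20.0 rather than
# exploding: the bottleneck simply moved from thrust to friction.
```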