
Scott Aaronson: Quantum Supremacy | AI Podcast Clips


Chapters

0:00 Intro
0:16 What is Quantum Supremacy
1:47 Quantum Computers vs Classical Computers
2:35 What Quantum Computers Do
3:50 What Quantum Computers Want
5:56 Sampling Problems
7:42 Sampling with Quantum Computers
10:46 We don't want 100 qubits
13:36 Reductions

Transcript

Google announced, with their paper in Nature, quantum supremacy. Can you describe, going back to the basics, what is, perhaps not so basic, quantum supremacy? - Absolutely. So quantum supremacy is a term that was coined by John Preskill in 2012. Not everyone likes the name, you know, but it sort of stuck.

We sort of haven't found a better alternative. - It's technically quantum computational supremacy. - Yeah, yeah, that's right. But the basic idea is actually one that goes all the way back to the beginnings of quantum computing, when Richard Feynman and David Deutsch, people like that, were talking about it in the early '80s.

And quantum supremacy just refers to the point in history when you can first use a quantum computer to do some well-defined task much faster than any known algorithm running on any of the classical computers that are available. Okay, so notice that I did not say a useful task, okay?

You know, it could be something completely artificial, but it's important that the task be well-defined. So in other words, it is something that has right and wrong answers that are knowable independently of this device, and we can then run the device and see if it gets the right answer or not.

- Can you clarify a small point? You said much faster than the classical implementation. What about the case where there doesn't even exist a classical algorithm to-- - So maybe I should clarify: everything that a quantum computer can do, a classical computer can also eventually do, okay?

And the reason why we know that is that a classical computer could always, if it had no limits of time and memory, just store the entire quantum state of the quantum computer, that is, store a list of all the amplitudes, and then just do some linear algebra to update that state, right?

And so anything that quantum computers can do can also be done by classical computers, albeit exponentially slower in some cases. - So quantum computers don't go into some magical place outside of Alan Turing's definition of computation. - Precisely, they do not solve the halting problem. They cannot solve anything that is uncomputable in Alan Turing's sense.
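To make that brute-force simulation concrete, here is a minimal sketch in Python/NumPy of the "store every amplitude and update it with linear algebra" idea; the Hadamard-on-every-qubit circuit is purely illustrative and is not taken from any particular experiment.

```python
import numpy as np

def apply_single_qubit_gate(state, gate, target, n):
    """Apply a 2x2 gate to the `target` qubit of an n-qubit statevector by
    building the full 2^n x 2^n operator with Kronecker products.
    This is the brute-force simulation: memory and time grow like 2^n."""
    op = np.array([[1.0 + 0j]])
    for q in range(n):
        op = np.kron(op, gate if q == target else np.eye(2))
    return op @ state

n = 3                                    # 3 qubits -> 2^3 = 8 stored amplitudes
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # start in |000>

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate
for q in range(n):
    state = apply_single_qubit_gate(state, H, q, n)

print(np.abs(state) ** 2)                # uniform distribution over all 8 outcomes
```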

What we think they do change is what is efficiently computable, okay? And since the 1960s, the word "efficiently" has been a central word in computer science, but it's sort of a code word for something technical, which is basically polynomial scaling: as you get to larger and larger inputs, you would like an algorithm that uses an amount of time that scales only like the size of the input raised to some power, and not exponentially with the size of the input, right?
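As a purely arithmetic illustration of how polynomial and exponential scaling diverge (the input sizes below are arbitrary):

```python
# Illustrative only: polynomial (n^2) versus exponential (2^n) step counts.
for n in (10, 20, 30, 40, 50):
    print(f"n = {n:2d}   n^2 = {n**2:>6,}   2^n = {2**n:>20,}")
```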

- Yeah, so I do hope we get to talk again, because one of the many topics that there's probably several hours' worth of conversation on is complexity, which we probably won't even get a chance to touch today, but you briefly mentioned it. But let's maybe try to continue. So you said the definition of quantum supremacy is basically achieving a point where a quantum computer is much faster on a formal, well-defined problem that may or may not be useful.

- Yeah, right. And I would say that we really want three things, right? We want, first of all, the quantum computer to be much faster, just in the literal sense of number of seconds, at solving this well-defined problem. Secondly, we want it to be for a problem where we really believe that a quantum computer has better scaling behavior, right?

So it's not just an incidental matter of hardware, but it's that, as you went to larger and larger inputs, the classical scaling would be exponential and the scaling for the quantum algorithm would only be polynomial. And then thirdly, we want the first thing, the actual observed speedup, to only be explainable in terms of the scaling behavior, right?

So I want a real problem to get solved, let's say by a quantum computer with 50 qubits or so, and for no one to be able to explain that in any way other than, well, this computer involved a quantum state with two to the 50th power amplitudes.

And a classical simulation, at least any that we know today, would require keeping track of two to the 50th numbers, and this is the reason why it was faster. - So the intuition is that if you demonstrate it on 50 qubits, then once you get to 100 qubits, it'll be even more dramatically faster.

- Precisely, precisely. Yeah, and quantum supremacy does not require error correction, right? We don't have, you could say, true scalability yet, or true error correction yet, but you could say quantum supremacy is already enough by itself to refute the skeptics who said a quantum computer will never outperform a classical computer for anything.

- But one, how do you demonstrate quantum supremacy? And two, what's up with these news articles I'm reading that Google did so? - Yeah, all right, well-- - What did they actually do? - Great questions, 'cause now you get into a lot of the work that I and my students have been doing for the last decade, which was precisely about how do you demonstrate quantum supremacy using technologies that we thought would be available in the near future?

And so one of the main things that we realized around 2011, and this was me and my student Alex Arkhipov at MIT at the time, and independently some others, including Bremner, Jozsa, and Shepherd, okay? The realization that we came to was that if you just want to prove that a quantum computer is faster, and not do something useful with it, then there are huge advantages to switching your attention from problems like factoring numbers, which have a single right answer, to what we call sampling problems.

So these are problems where the goal is just to output a sample from some probability distribution, let's say over strings of 50 bits, right? So there are many, many possible valid outputs. Your computer will probably never even produce the same output twice, even assuming it's running perfectly, okay?

But the key is that some outputs are supposed to be likelier than other ones. - So, sorry, to clarify, is there a set of outputs that are valid and a set that are not? Or is it more that there's a specific distribution of a particular kind of output?

- Yeah, there's a specific distribution that you're trying to hit, right? Or, you know, that you're trying to sample from. Now, there are a lot of questions about this. How do you do that, right? Well, it turns out that with a quantum computer, even with the noisy quantum computers that we have today, what you can do is basically just apply a randomly chosen sequence of operations, right?

So in some sense that part is almost trivial, right? We just sort of get the qubits to interact in some random way, although a precisely specified random way, so we can repeat the exact same random sequence of interactions again and get another sample from that same distribution.

And what this does is it basically, well, it creates a lot of garbage, but very specific garbage, right? So if we're going to talk about Google's device, there were 53 qubits there, okay? And so there were two to the 53rd power possible outputs.

Now, for some of those outputs, there was a little bit more destructive interference in their amplitude, okay? So their amplitudes were a little bit smaller. And for others, there was a little more constructive interference; the amplitudes were a little bit more aligned with each other, and so those were a little bit likelier, okay?

All of the outputs are exponentially unlikely, but some are, let's say, two times or three times unlikelier than others, okay? And so you can define this sequence of operations that gives rise to this probability distribution. Okay, now, the next question would be, well, even if you're sampling from it, how do you verify that?
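Here is a toy sketch of the kind of random-circuit sampling just described, with the caveat that the gate set, layout, and depth below are invented for illustration and are nothing like Google's actual device: a fixed seed makes the "random" circuit precisely specified and repeatable, and the final lines show that every output is unlikely, but some outputs are a few times likelier than others.

```python
import numpy as np

def op_on(gate, target, n):
    """Embed a single-qubit gate acting on `target` into the full 2^n-dimensional space."""
    full = np.array([[1.0 + 0j]])
    for q in range(n):
        full = np.kron(full, gate if q == target else np.eye(2))
    return full

def cz_on(a, b, n):
    """Controlled-Z between qubits a and b (diagonal, so easy to write down)."""
    diag = np.ones(2**n, dtype=complex)
    for idx in range(2**n):
        bits = format(idx, f"0{n}b")
        if bits[a] == "1" and bits[b] == "1":
            diag[idx] = -1.0
    return np.diag(diag)

n, depth = 6, 8
rng = np.random.default_rng(seed=1234)   # fixed seed: the "random" circuit is precisely specified
state = np.zeros(2**n, dtype=complex)
state[0] = 1.0                           # start in |000000>

for layer in range(depth):
    for q in range(n):                   # a layer of random single-qubit rotations
        theta, phi = rng.uniform(0, 2 * np.pi, size=2)
        ry = np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                       [np.sin(theta / 2),  np.cos(theta / 2)]])
        rz = np.diag([1.0, np.exp(1j * phi)])
        state = op_on(rz @ ry, q, n) @ state
    for q in range(layer % 2, n - 1, 2): # entangling gates between neighboring qubits
        state = cz_on(q, q + 1, n) @ state

probs = np.abs(state) ** 2               # the distribution the device is "trying" to sample from
samples = rng.choice(2**n, size=10, p=probs)
print([format(s, f"0{n}b") for s in samples])          # ten sampled 6-bit strings
print(f"max / mean output probability: {probs.max() / probs.mean():.1f}")
```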

- Right, exactly. - How do you know? And so my students and I, and also the people at Google who were doing the experiment, came up with statistical tests that you can apply to the outputs in order to try to verify that, at the least, some hard problem is being solved.

The test that Google ended up using was something that they called the linear cross-entropy benchmark, okay? The drawback of this test is that it requires you to do a two-to-the-53-time calculation with your classical computer, okay? So it's very expensive to do the test on a classical computer.

- Yeah. - The good news is-- - How big of a number is two to the 53? - It's about nine quadrillion. - Okay. - That doesn't help. - Well, do you want it in scientific notation? - No, no, what I mean is, is it impossible to run on a-- - Yeah, so we will come back to that.

It is just barely possible to run, we think, on the largest supercomputer that currently exists on Earth, which is called Summit at Oak Ridge National Lab, okay? (laughing) - Great, this is exciting. - That's the short answer. So ironically, for this type of experiment, we don't want 100 qubits, okay?

Because with 100 qubits, even if it works, we don't know how to verify the results, okay? So we want a number of qubits that is enough that the biggest classical computers on Earth will have to sweat, and will just barely be able to keep up with the quantum computer, using much more time, but will still be able to do it, so that we can verify the results.
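To put those numbers in perspective, a couple of lines of arithmetic; the 16-bytes-per-amplitude figure assumes double-precision complex numbers, which is my assumption and not something stated in the conversation.

```python
n = 53
print(f"2^{n} = {2**n:,}")                # 9,007,199,254,740,992 -- about nine quadrillion
bytes_needed = (2**n) * 16                # complex128: 16 bytes per amplitude (assumption)
print(f"full statevector at 16 bytes/amplitude: ~{bytes_needed / 1e15:.0f} petabytes")
```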

- Which is where the 53 comes from, for the number of qubits. - Basically, well, that's sort of where they are now in terms of scaling, you know? And soon that point will be passed. And then when you get to larger numbers of qubits, these types of sampling experiments will no longer be so interesting, because we won't even be able to verify the results, and we'll have to switch to other types of computation.

So with the sampling thing, the test that Google applied, this linear cross-entropy benchmark, is basically: take the samples that were generated, which are a very small subset of all the possible samples that there are, and for those, you calculate, with your classical computer, the probabilities that they should have been output.

And you see, are those probabilities larger than the mean? So is the quantum computer biased toward outputting the strings that you want it to be biased toward? Okay, and then, finally, we come to a very crucial question, which is: supposing that it does that, how do we know that a classical computer could not have quickly done the same thing, right?
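A rough sketch of that kind of verification statistic, using the commonly quoted linear cross-entropy form (2^n times the mean ideal probability of the observed samples, minus 1); the toy "ideal" distribution below is made up for the self-check, whereas in a real experiment those probabilities would come from a classical simulation of the circuit.

```python
import numpy as np

def linear_xeb(samples, ideal_probs, n):
    """Linear cross-entropy benchmark (one common normalization):
    roughly 0 for uniformly random bitstrings, roughly 1 for samples drawn
    from the ideal output distribution of the circuit. `samples` are integer
    bitstring indices; `ideal_probs[x]` is the classically computed p(x)."""
    mean_p = np.mean([ideal_probs[x] for x in samples])
    return (2**n) * mean_p - 1.0

# Tiny self-check with a made-up, skewed "ideal" distribution over 2^4 outcomes:
n = 4
rng = np.random.default_rng(0)
ideal = rng.exponential(size=2**n)       # exponentially distributed weights, then normalized
ideal /= ideal.sum()

good = rng.choice(2**n, size=100_000, p=ideal)   # samples that follow the ideal distribution
spoof = rng.integers(0, 2**n, size=100_000)      # uniformly random "spoofed" samples
print(linear_xeb(good, ideal, n))    # noticeably above 0
print(linear_xeb(spoof, ideal, n))   # close to 0
```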

How do we know that this couldn't have been spoofed by a classical computer, right? And so, well, the first answer is we don't know for sure, because this takes us into questions of complexity theory, I mean, questions of the magnitude of the P versus NP question, and things like that, right?

We don't know how to rule out definitively that there could be fast classical algorithms for simulating quantum mechanics, and for simulating experiments like these, but we can give some evidence against that possibility, and that was sort of the main thrust of a lot of the work that my colleagues and I did over the last decade, which, around 2015 or so, is what led to Google deciding to do this experiment.

- So, first of all, regarding the hard P versus NP problem that you mentioned, and the kind of evidence that you were looking at: is that something you come to on a sheet of paper, or are these empirical experiments? - It's math, for the most part.

I mean, we also have a bunch of methods that are known for simulating quantum circuits, or quantum computations, with classical computers, and so we have to try them all out and make sure that they don't work, make sure that they have exponential scaling on these problems, and not just theoretically, but with the actual range of parameters that are arising in Google's experiment.

Okay, so there is an empirical component to it, right? But now, on the theoretical side, basically what we know how to do in theoretical computer science and computational complexity is this: we don't know how to prove that most of the problems we care about are hard, but we know how to pass the blame to someone else.

Okay, we know how to say, well, look, I can't prove that this problem is hard, but if it is easy, then all these other things that you were probably much more confident were hard would be easy as well, okay? So we can give what are called reductions.

And this has been the basic strategy in NP-completeness, right, in all of theoretical computer science and cryptography since the 1970s, really. And so we were able to give some reduction evidence for the hardness of simulating these sampling experiments, these sampling-based quantum supremacy experiments. The reduction evidence is not as satisfactory as it should be.

One of the biggest open problems in this area is to make it better. But we can do something. Certainly we can say that if there is a fast classical algorithm to spoof these experiments, then it has to be very, very unlike any of the algorithms that we know.

- Which is kind of in the same space of reasoning as when people say P is not equal to NP. - Yeah, it's in the same spirit.