*Epistemic status: Summarizing research literature*

Is there a task that quantum computers can achieve beyond what any classical algorithm can do, even those yet undiscovered? These days we use the term *quantum supremacy*, coined by John Preskill, to mean such an achievement. I believe that the search for a rigorous, complexity-theoretic proof of quantum supremacy^{1} goes back to the 2002 paper by Terhal and DiVincenzo (published in 2004). Then BosonSampling by Aaronson and Arkhipov in 2010 kick-started the experimental race for quantum supremacy on near-term quantum devices that continues to this day. These theoretical proofs of quantum supremacy all rule out classical simulations of certain quantum systems by arguing that if such simulations were possible, then there would be a devastating consequence, a “collapse of the polynomial hierarchy”, which is deemed unlikely by computer scientists. No such argument is known for factoring; if tomorrow someone figures out a fast way to factor integers on a classical computer^{2}, we will have to switch to a different encryption scheme, but that’s about it. The concept of reality as we know it will remain unchallenged, whereas a collapse of the polynomial hierarchy would bring us closer to the world in which \(\cc{P=NP}\). From Impagliazzo’s *A Personal View of Average-Case Complexity*:

> Algorithmica is the world where \(\cc{P=NP}\) or its moral equivalent (e.g., \(\cc{NP\subseteq BPP}\)). To be more concrete, let’s define Algorithmica as the world where there exists a simple and magical linear time algorithm for the SAT problem. […] this world is a computational utopia. We would be able to automate various tasks that currently require significant creativity: engineering, programming, mathematics, and perhaps even writing, composing, and painting. On the other hand, this algorithm could also be used to break cryptographic schemes, and hence almost all of the cryptographic applications currently used will disappear.

Now switching from a very general exposition to a technical question that I’m interested in: what are the ingredients for proofs of quantum supremacy, particularly in a *random quantum circuit* architecture? In this setting, there is an ensemble \(\mathcal{E}\) of quantum circuits \(U\), and we want to “simulate” the outcome probabilities \(p_x = |\av{x|U|0}|^2\) for \(x \in \{0,1\}^{n}\). There are two major “axes” of possible kinds of simulations.

By “simulate”, we may want to estimate \(p_x\) up to a multiplicative precision \[ |p_x - q_x| \le \eta p_x,\] *or* we may just want to estimate \(p_x\) up to an additive precision \[ |p_x - q_x| \le \epsilon. \] We may want to simulate all quantum circuits in \(\mathcal{E}\), *or* just some fraction of them.
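To make these notions concrete, here is a minimal numerical sketch, using a Haar-random unitary as a stand-in for a circuit drawn from \(\mathcal{E}\) (the names and parameters below are illustrative, not from any of the papers discussed):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3                        # number of qubits
N = 2**n

def haar_unitary(d):
    # Haar-random unitary via QR of a complex Gaussian matrix
    z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
    q, r = np.linalg.qr(z)
    return q * (np.diag(r) / np.abs(np.diag(r)))  # fix column phases

U = haar_unitary(N)          # stand-in for a circuit drawn from E
psi = U[:, 0]                # U|0...0> is the first column of U
p = np.abs(psi) ** 2         # outcome probabilities p_x

# A sampler's empirical frequencies q_x are an additive estimate of p_x.
counts = np.bincount(rng.choice(N, size=200_000, p=p), minlength=N)
q = counts / counts.sum()

additive_err = np.abs(p - q).sum()       # total additive (L1) error: small
rel_err = np.max(np.abs(p - q) / p)      # worst-case relative error
print(additive_err, rel_err)
```

Note that for a fixed number of samples the frequencies \(q_x\) are additively close to \(p_x\), while their relative (multiplicative) error on small-probability outcomes is much worse, which is why the two axes are genuinely different.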

Ideally, we would like a result that says that additive approximation of \(p_x\) by a classical machine is hard on a constant fraction of the circuits, i.e., hard on average. What has been proven, and what is only a conjecture?

# Good multiplicative approximation in \(\cc{BPP^{NP}}\)

Multiplicative approximation is much stronger than additive approximation; if \(p_x = 0\), then a multiplicative estimate \(q_x\) must exactly equal \(p_x\). I believe that when a problem is \(\cc{\#P}\)-complete, a multiplicative approximation to its solution is still \(\cc{\#P}\)-complete. (This is claimed for a slight generalization of the complexity class \(\cc{\#P}\) to quantum sampling problems in this review paper.) So for our purposes, a multiplicative approximation is as good as an exact solution.

All existing proposals of quantum supremacy make use of the existence of a \(\cc{\#P}\)-hard problem that can’t be solved efficiently on any classical computer. It is a fairly common misconception that these hard problems can, in contrast, be solved efficiently on a quantum computer (by simulating BosonSampling, for example). So let me state this clearly: **quantum computers are not believed to be able to solve \(\cc{NP}\)-complete problems efficiently**, let alone \(\cc{\#P}\)-complete problems. The arguments for quantum supremacy are more subtle and rely on an unphysical process known as *Stockmeyer’s counting*: given a classical machine that outputs samples according to \(q_x\), there is a \(\cc{BPP^{NP}}\) machine that produces a multiplicative estimate \(\tilde{q}_x\) of \(q_x\). Aaronson and Arkhipov’s insight was that sometimes \(\tilde{q}_x\) is also a multiplicative estimate of the probabilities \(p_x\) from a quantum device, to which the output \(q_x\) of the classical device is an additive estimate. In other words, Stockmeyer’s counting can sometimes improve an additive estimate to a multiplicative one. Therefore, an additive approximation to \(p_x\) alone doesn’t let you solve a \(\cc{\#P}\)-complete problem in polynomial time; it’s an additive approximation together with an \(\cc{NP}\) oracle that lets you solve a \(\cc{\#P}\)-complete problem *on average*.

We follow the argument in

- Bremner, Montanaro and Shepherd, *Average-case complexity versus approximate simulation of commuting quantum computations* (2016)

also repeated for random quantum circuits in Appendix G of

- Boixo *et al.*, *Characterizing Quantum Supremacy in Near-Term Devices* (2018).

Suppose that there exists a classical sampler that gives an additive estimate \(q_x\) of \(p_x\): \[\sum_{x\in\{0,1\}^{n}} |p_x - q_x| \le \epsilon.\] By Stockmeyer’s counting, \(q_x\) can be approximated with multiplicative precision by a \(\cc{BPP^{NP}}\) machine: \[ |q_x - \tilde{q}_x| \le \frac{q_x}{\mathrm{poly}(n)}. \] Let’s combine these two facts together. \[\begin{align} |p_x-\tilde{q}_x| &\le |p_x-q_x| + |q_x - \tilde{q}_x| &\text{Triangle inequality}\\ &\le |p_x-q_x| + \frac{q_x}{\mathrm{poly}(n)} &\text{Stockmeyer's counting}\\ &\le |p_x-q_x| + \frac{|p_x - q_x| + p_x}{\mathrm{poly}(n)} &\text{Triangle inequality}\\ &= \frac{p_x}{\mathrm{poly}(n)} + |p_x-q_x|\left(1 + \frac{1}{\mathrm{poly}(n)}\right) \end{align}\] If \(|p_x-q_x|\) can be bounded above by (a small multiple of) \(p_x\), then \(\tilde{q}_x\) is the multiplicative approximation to \(p_x\) that we want. This is done in two steps:

1. Use Markov’s inequality \[\begin{align} \mathrm{Pr} \left[ |p_x-q_x| \ge \frac{\epsilon}{2^n\delta} \right] \le \frac{\mathbb{E}(|p_x-q_x|)2^n\delta}{\epsilon} \le \delta \iff \mathrm{Pr} \left[ |p_x-q_x| \le \frac{\epsilon}{2^n\delta} \right] \ge 1-\delta \end{align}\] to bound \(|p_x-q_x|\) by a fixed quantity for a \(1-\delta\) fraction of quantum circuits.

2. Invoke “anti-concentration” of probabilities (conjectured for BosonSampling, proved for random quantum circuits and more) \[\begin{align} \mathrm{Pr} \left[ p_x > \frac{\epsilon}{2^n\delta\eta} \right] &\ge \gamma \end{align}\] to bound that quantity by \(\eta p_x\), where \(\eta\) is some small constant.
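Anti-concentration is easy to see numerically for Haar-random states, where \(p_x\) approximately follows the Porter–Thomas (exponential) distribution with mean \(2^{-n}\). A small sketch (an illustration of the phenomenon, not part of the proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n, trials = 6, 4000
N = 2**n

# For a Haar-random state, p_x = |<x|psi>|^2 is approximately
# exponentially distributed with mean 1/N (Porter-Thomas), so
# Pr[p_x > 1/N] = (1 - 1/N)^(N-1) ~ 1/e: a constant fraction of
# instances has p_x above the uniform value.
hits = 0
for _ in range(trials):
    z = rng.normal(size=N) + 1j * rng.normal(size=N)
    p0 = np.abs(z[0]) ** 2 / np.linalg.norm(z) ** 2  # p_x for x = 0...0
    hits += p0 > 1.0 / N
frac = hits / trials
print(frac)  # close to 1/e ~ 0.37
```

Here roughly a \(1/e\) fraction of random instances has \(p_x\) above \(2^{-n}\), which is the kind of constant \(\gamma\) the argument needs.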

Now, by a union bound, at least a \(\gamma - \delta\) fraction of quantum circuits will satisfy \[\begin{align} |p_x-\tilde{q}_x| &\le p_x\left(\eta + \frac{1}{\mathrm{poly}(n)}\right). \end{align}\] For this fraction to be nonzero, we need to pick \(\delta < \gamma\), i.e., \((1- \delta) + \gamma > 1\).

If \(p_x\) is \(\cc{\#P}\)-hard to compute for this fraction of quantum circuits, then we have shown that \(\cc{P^{\#P}}\subseteq\cc{BPP^{NP}}\), which is contained in the third level of \(\cc{PH}\) by the Sipser–Gács–Lautemann theorem. On the other hand, \(\cc{PH}\subseteq\cc{P^{\#P}}\) by Toda’s theorem, so \(\cc{PH}\) collapses to the third level.

Fun fact: exactly computing probabilities, even for a subclass of quantum circuits that does only \(\cc{BPP}\) computation (a layer of Hadamards to simulate coin flips, followed by diagonal gates), is \(\cc{\#P}\)-complete^{3}. In particular, you can create the state \(\propto \sum_x \ket{x}\ket{f(x)}\) for any Boolean function \(f\). Counting the number \(\#f\) of inputs such that \(f(x)=0\) is by definition \(\cc{\#P}\)-complete. Exactly computing the outcome probability \(\mathrm{Pr}(0) = \#f/2^n\) when measuring the second register is therefore \(\cc{\#P}\)-complete. Since it’s a \(\cc{BPP}\) computation, obviously you can just estimate that probability by sampling. What prevents PH from collapsing given this result?
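To make the fun fact concrete, here is a minimal sketch checking that \(\mathrm{Pr}(0) = \#f/2^n\) for the state \(\propto \sum_x \ket{x}\ket{f(x)}\) (the Boolean function \(f\) below is a hypothetical placeholder; any \(f\) works):

```python
from itertools import product

n = 3

def f(x):
    # A hypothetical Boolean function standing in for the circuit's
    # diagonal/classical part; the choice is arbitrary.
    return (x[0] & x[1]) ^ x[2]

# A layer of Hadamards on |0>^n gives sum_x |x> / 2^{n/2}; writing f(x)
# into an ancilla yields sum_x |x>|f(x)> / 2^{n/2}.
inputs = list(product([0, 1], repeat=n))
amplitude = 2 ** (-n / 2)

# Measuring the ancilla: Pr(0) = #{x : f(x) = 0} / 2^n
pr0 = sum(amplitude ** 2 for x in inputs if f(x) == 0)
count0 = sum(1 for x in inputs if f(x) == 0)
print(pr0, count0 / 2**n)  # equal
```

Exactly evaluating this probability would count the zeros of \(f\), which is why it is \(\cc{\#P}\)-complete, while sampling only gives an additive estimate.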

# Anti-concentration

## By unitary 2-designs

- Hangleiter, Bermejo-Vega, Schwarz and Eisert, *Anticoncentration theorems for schemes showing a quantum speedup* (2018)

Paley-Zygmund inequality
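Presumably what is relevant here is the standard second-moment argument; a sketch under the assumption that the circuit ensemble forms an exact unitary 2-design (so that the first two moments of \(p_x\) match the Haar values). The Paley–Zygmund inequality states, for a nonnegative random variable \(Z\) and \(0 \le \theta \le 1\), \[ \mathrm{Pr}\left[ Z > \theta\,\mathbb{E}Z \right] \ge (1-\theta)^2\,\frac{(\mathbb{E}Z)^2}{\mathbb{E}[Z^2]}. \] For a 2-design, \(\mathbb{E}[p_x] = 2^{-n}\) and \(\mathbb{E}[p_x^2] = \frac{2}{2^n(2^n+1)}\), so \[ \mathrm{Pr}\left[ p_x > \frac{\theta}{2^n} \right] \ge (1-\theta)^2\,\frac{2^{-2n}\,2^n(2^n+1)}{2} = \frac{(1-\theta)^2}{2}\cdot\frac{2^n+1}{2^n} \ge \frac{(1-\theta)^2}{2}, \] i.e., anti-concentration with \(\gamma = (1-\theta)^2/2\).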

Counterexample: Clifford circuits with \(\ket{0}^{\otimes n}\) input state.

## Do “anti-concentrated” probabilities actually anti-concentrate?

- Hangleiter, Kliesch, Eisert and Gogolin, *Sample complexity of device-independently certified “quantum supremacy”* (2019)

# Average-case hardness for random quantum circuits

- Bouland, Fefferman, Nirkhe and Vazirani, *Quantum Supremacy and the Complexity of Random Circuit Sampling* (arXiv:1803.04402)

in a non-black-box setting. Quantum advantage in a black-box setting was already established long ago, in the noiseless case by the Deutsch–Jozsa algorithm and in the noisy case by Simon’s algorithm.↩

Henry Cohn, “Factoring may be easier than you think”↩

The ability to sprinkle Hadamards anywhere among the Toffolis allows universal quantum computation.↩