February 2019 Archive

This is an archive of what I found useful, interesting, or simply enjoyable from February 2019.

Research

Quantum state verification

  • Why are amplitudes complex?, Shtetl-Optimized (An update on Scott’s personal conundrum)

    In the quantum foundations community, it is popular to derive complex amplitudes (as opposed to real or quaternionic ones) from the “local tomography” principle 1, which Scott has never found intuitive. Now he stumbles upon the following question:

    suppose we have a two-outcome measurement $E$ (let’s say, on $n$ qubits), and suppose it accepts every product state with the same probability $p$. Must $E$ then accept every entangled state with probability $p$ as well? Or, a closely-related question: suppose we know $E$’s acceptance probabilities on every product state. Is that enough to determine its acceptance probabilities on all $n$-qubit states?

    The answer is that, yes, the acceptance probabilities are determined and, in fact, constant for all states. This follows from a result in Braunstein et al. 1999, specifically that there is always a ball of constant size around the maximally mixed state, all states in which are separable. This holds only when the amplitudes are complex.

    However, disproving the statement “the acceptance probabilities are constant for all states” when it is false requires knowing the acceptance probabilities on product states to exponential accuracy. A non-example illustrating this: the measurement projecting an $n$-qubit state onto $n/2$ Bell pairs accepts every product state with exponentially small probability (and the probabilities differ between product states), even though it accepts the state of $n/2$ Bell pairs with probability 1.

    Quotable: “quantum mechanics over the quaternions is a flaming garbage fire”.
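    The Bell-pair non-example is easy to check numerically. A minimal sketch (operator and variable names are mine, not from the post), projecting onto $m = n/2$ Bell pairs:

```python
import numpy as np

m = 3  # number of Bell pairs (n = 2m qubits)

# |Phi+> = (|00> + |11>)/sqrt(2) and the projector onto it
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
proj_pair = np.outer(bell, bell)

# Two-outcome measurement E: projector onto m Bell pairs
E = proj_pair
for _ in range(m - 1):
    E = np.kron(E, proj_pair)

# Acceptance probability of the product state |0...0>
zeros = np.zeros(4**m); zeros[0] = 1.0
p_product = zeros @ E @ zeros   # (1/2)^m, exponentially small in m

# Acceptance probability of the state of m Bell pairs
bells = bell
for _ in range(m - 1):
    bells = np.kron(bells, bell)
p_bell = bells @ E @ bells      # exactly 1

print(p_product, p_bell)
```

    For $m=3$ this prints $1/8$ versus $1$, matching the claim that the product-state acceptance probability falls off as $(1/2)^{n/2}$.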

  • Single-copy entanglement detection, Dimić and Dakić 2018 (The idea of a test in which the acceptance probability for product states falls off exponentially can be used to detect entanglement.)
  • Optimal verification of entangled states with local measurements, Pallister, Linden, and Montanaro 2018

    Verification (“is the state $\ket{\psi}$?”) is easier than tomography. Given a state $\sigma$, we want to know whether it is “close” to $\ket{\psi}$, so that we can accept it and otherwise reject it. This can also be phrased as a promise problem: either the given state $\sigma$ is the true state $\ket{\psi}$, or $\ampU{\psi}{\sigma}{\psi} \le 1 - \epsilon$. The quantity $1 - \epsilon$ is the (test-independent) worst-case fidelity that we’re willing to accept.

    The authors consider a binary hypothesis test that disallows type I errors (rejecting the true state) while having a minimum type II error probability $\delta_n$, possibly depending on $n$. Their argument goes as follows: suppose that the test has probability $p$ of accepting the true state and $p-\Delta_{\epsilon}$ of accepting a wrong state. Then our task is to distinguish between Bernoulli trials drawn from the i.i.d. distributions $\{p,1-p\}$ and $\{p-\Delta_{\epsilon}, 1-(p-\Delta_{\epsilon})\}$. The Chernoff-Stein lemma says that \(\lim_{n\to\infty} \frac{\ln \delta_n}{n} = - D(p||p-\Delta_{\epsilon}),\) where $D(Q||P)$ is the relative entropy \(D(Q||P) = \sum_x Q(x) \log \frac{Q(x)}{P(x)}.\) Rearranging the equation, we get a bound on the number of samples \(n \ge \frac{\ln \delta^{-1}}{D(p||p-\Delta_{\epsilon})}.\)

    They argue that the “worst case” scaling of $D(p||p-\Delta_{\epsilon})$ is quadratic in $\Delta_{\epsilon}$ because infinitesimally the relative entropy is just the Fisher information metric, which scales as the square of the distance $\sum_x \Delta_{\epsilon}^2/4p$ 2, whereas setting $p=1$ gives you a linear scaling across all values of $\Delta_{\epsilon}$. (Small $\Delta_{\epsilon}$ makes it hard to distinguish the two distributions, but we probably expect it to be small in practice, in both adversarial and non-adversarial scenarios.)

    Given all that, the task of finding an optimal verification scheme is reduced to minimizing the probability of accepting the “worst” wrong state (the one most likely to fool us) \(\min_{\Omega} \max_{\substack{\sigma \\ \ampU{\psi}{\sigma}{\psi} \le 1 - \epsilon}} \tr (\Omega \sigma) := 1-\Delta_{\epsilon}\) over the “test operator” $\Omega = \sum_j \mu_j P_j$, a convex sum of all the tests $P_j$ having eigenvalue 1 on the target state. The (test-dependent) $\delta$ is no worse than the worst-case error $(1-\Delta_{\epsilon})^n$, thus giving \(n \ge \frac{\ln \delta^{-1}}{\ln (1-\Delta_{\epsilon})^{-1}} \approx \frac{\ln \delta^{-1}}{\Delta_{\epsilon}},\) where we have again assumed $\Delta_{\epsilon}$ to be small.

    The following paper shows that $\Delta_{\epsilon}$, which determines the efficiency of the test, is linearly related to the worst-case fidelity parameter $\epsilon$, the proportionality constant being the gap between the two largest eigenvalues of $\Omega$. The paper also explores the adversarial scenario, in which case all eigenvalues matter, not just the second largest one. All in all, I need to read more of it.
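    The sample-count argument above can be sketched in a few lines of code: compute the relative entropy of the two Bernoulli distributions and the implied number of samples, and compare the generic ($p<1$, quadratic in $\Delta_{\epsilon}$) case against the perfect-completeness ($p=1$, linear) case. The numerical values of $p$, $\Delta_{\epsilon}$, and $\delta$ below are mine, chosen only for illustration:

```python
import numpy as np

def kl(q, p):
    """Relative entropy D({q,1-q} || {p,1-p}) in nats."""
    total = 0.0
    for a, b in [(q, p), (1 - q, 1 - p)]:
        if a > 0:
            total += a * np.log(a / b)
    return total

delta = 0.01   # target type II error probability
Delta = 0.05   # gap in acceptance probabilities

# Generic case p < 1: D(p || p - Delta) is quadratic in Delta,
# so many samples are needed
p = 0.5
n_generic = np.log(1 / delta) / kl(p, p - Delta)

# Perfect completeness p = 1: D(1 || 1 - Delta) = -ln(1 - Delta) ~ Delta,
# linear in Delta, so far fewer samples suffice
n_perfect = np.log(1 / delta) / kl(1.0, 1 - Delta)

print(round(n_generic), round(n_perfect))
```

    With these numbers the $p=1$ test needs roughly an order of magnitude fewer samples, which is the point of insisting on zero type I error.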

  • Efficient verification of pure quantum states with applications to hypergraph states, Zhu and Hayashi 1806.05565

    The Pallister paper gives the optimal local measurement for all two-qubit pure states and optimal stabilizer measurements for all $n$-qubit stabilizer states: $\Delta_{\epsilon}\approx \epsilon/2$ for measuring all stabilizer operators and $\epsilon/n$ for measuring only a set of stabilizer generators.

  • Validating and certifying stabilizer states, Kalev and Kyrillidis 1808.10786

    Direct fidelity estimation is great because no matter how many qubits you want to verify, the number of samples stays the same. However, it is also not great for the same reason: even though all we have now are on the order of 10-100 qubits, you still need the same number of samples as if you had a thousand qubits.

    The problem considered in this paper is in a sense the opposite of that in the Pallister paper. In the Pallister paper, we are given the worst-case fidelity $\epsilon$ and are asked to find an optimal measurement, whereas here we are given a (stabilizer) measurement and are asked for the worst-case fidelity $1-\epsilon$ to a desired stabilizer state. The latter can be done with $n^3\ln(2\delta^{-1})/2r^2$ samples, where $r$ is the radius away from $1-\epsilon$.
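    Plugging hypothetical numbers into that sample-count formula gives a feel for the cost. The specific values of $n$, $\delta$, and $r$ below are mine, for illustration only:

```python
from math import ceil, log

def samples(n_qubits, delta, r):
    """Sample count n^3 ln(2/delta) / (2 r^2) from the formula quoted above."""
    return ceil(n_qubits**3 * log(2 / delta) / (2 * r**2))

# e.g. 10 qubits, confidence parameter delta = 0.01, radius r = 0.05
print(samples(10, 0.01, 0.05))
```

    Even at 10 qubits this is already about a million samples; the $n^3$ prefactor, not the logarithm, dominates the cost.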

Representation theory

  • Displacement operator by beam splitter, Paris 1996

    How to implement a displacement operator (a Heisenberg group element) \(D(\alpha) = \exp(\alpha b\dgg - \alpha^* b)\) on an arbitrary quantum state using an auxiliary laser mode and a beam splitter (a symplectic group element)? The beam splitter unitary \(\mathrm{BS} = \exp \left[ i \tan^{-1} \sqrt{\frac{1-\tau}{\tau}} (ab\dgg + a\dgg b) \right]\) looks almost like a displacement operator, if only we could replace $a$ by $\alpha$ and flip the sign. Here comes a cool trick: notice that $ab\dgg$ and $a\dgg b$ are respectively the raising and lowering operators in $\mathfrak{sl}(2)$ made out of Schwinger’s bosons. ($a\dgg a - b\dgg b$ is $Z$.) So I can do the Gauss decomposition to move the exponentials of $ab\dgg$ and $a\dgg b$ to the left and the right, where $\bra{z}$ and $\ket{z}$ are waiting to turn $b$ into $z$. This gives (barring some mistake) \(\av{z|\mathrm{BS}|z} = \exp (\beta a\dgg a/2) \otimes \exp (\xi z^* b) \exp(-\beta b\dgg b/2) \exp (-\xi^* zb\dgg).\) Then rotate the linear term by the quadratic Hamiltonian $\beta b\dgg b/4$ before taking the appropriate limits to obtain the result.
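    The upshot of the paper can be verified numerically in a truncated Fock space: a beam splitter with a strong coherent ancilla acts on the signal mode approximately as a displacement, up to photon loss at finite transmissivity. This sketch is mine, not the paper's derivation; the sign of the effective displacement is tied to the particular generator convention chosen in the code, and all numerical parameters are illustrative:

```python
import numpy as np
from math import factorial
from scipy.linalg import expm

cut = 30                                        # Fock-space cutoff per mode
a1 = np.diag(np.sqrt(np.arange(1, cut)), 1)     # single-mode annihilation op
I = np.eye(cut)
a, b = np.kron(a1, I), np.kron(I, a1)           # signal and ancilla modes

def coherent(z):
    n = np.arange(cut)
    v = z**n / np.sqrt([float(factorial(k)) for k in n])
    return v / np.linalg.norm(v)                # renormalize after truncation

tau = 0.96                                      # beam-splitter transmissivity
theta = np.arctan(np.sqrt((1 - tau) / tau))
z = 1.5                                         # ancilla coherent amplitude
alpha = -z * np.sin(theta)                      # effective displacement for the
                                                # generator convention below

BS = expm(theta * (a @ b.conj().T - a.conj().T @ b))

# Send Fock state |1> through the beam splitter with the coherent ancilla
fock1 = np.zeros(cut); fock1[1] = 1.0
out = BS @ np.kron(fock1, coherent(z))
rho = out.reshape(cut, cut)                     # signal index first
rho_a = rho @ rho.conj().T                      # reduced state of the signal mode

# Compare with the exactly displaced Fock state D(alpha)|1>
D = expm(alpha * a1.conj().T - np.conj(alpha) * a1)
target = D @ fock1
fidelity = np.real(target.conj() @ rho_a @ target)
print(fidelity)   # close to tau: the residual error is photon loss at finite tau
```

    Taking $\tau \to 1$ and $z \to \infty$ with $z\sqrt{1-\tau}$ fixed drives the fidelity to 1, which is the limit in which the beam splitter becomes an exact displacement.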

  • Representation Theory and Complex Geometry, Chriss and Ginzburg
  • The Borel-Weil theorem in Peter Woit’s lectures (Old lectures 13-16)

    Associated to each weight $\lambda$ under a maximal torus $T$ is a line bundle $L_{\lambda}$ over the base space $G/T$ (the line is the one-dimensional irrep $e^{\lambda(H)}$ of $T$, thinking of the weight $\lambda$ as a linear functional on $\mathfrak{h}$) and the space of sections $\Gamma (G/T,L_{\lambda})$.

    The Borel-Weil theorem gives every irrep of a semisimple Lie group (or an algebraic group) over $\C$ a concrete realization as a subspace of $\Gamma (G/T,L_{\lambda})$. As an induced representation, $\Gamma (G/T,L_{\lambda})$ has the isotypic decomposition \(\Gamma(G/T,L_{\lambda}) = \bigoplus_{\mu} V_{\mu} \otimes (V_{\mu}^*)_{-\lambda},\) where $(V_{\mu}^*)_{-\lambda}$ is the $-\lambda$-weight space of $V_{\mu}^*$. (The minus sign comes about because we’re acting on $V_{\mu}^*$.) We could single out the irrep $V_{\lambda}$ by putting in by hand the condition that $-\lambda$ has to be a lowest weight, or equivalently, $\lambda$ has to be a highest weight 3. What is beautiful about the Borel-Weil theorem is that this condition has a simple meaning in complex geometry (no pun intended): the sections have to be holomorphic, for the following reason. The tangent space of $G/T$ at the identity is isomorphic to the even-dimensional Lie algebra of negative and positive roots consisting of holomorphic and antiholomorphic vectors: \(\begin{align} E_{\alpha} = \frac{\partial}{\partial z_{\alpha}}, && E_{-\alpha} = \frac{\partial}{\partial \overline{z}_{\alpha}}. \end{align}\) But $-\lambda$ is a lowest weight iff the corresponding section $f$ is annihilated by $E_{-\alpha}$ for all positive roots $\alpha$. In other words, $f$ is a holomorphic function.

    The $\SU(2)$ example is given in Lecture 16, in which case the space of sections is the space of homogeneous polynomials of degree $k$ in two variables, a standard textbook result.
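    To spell that example out in the notation above (the standard computation, modulo sign conventions for the weight): for $G = \SU(2)$ we have $G/T \cong \mathbb{CP}^1$, and a section of $L_k$ is a function on $\SU(2)$ satisfying the equivariance condition $f(gt) = t^{-k} f(g)$ for $t \in T \cong U(1)$. The holomorphic sections are exactly the homogeneous polynomials of degree $k$: \(\Gamma_{\mathrm{hol}}(G/T, L_k) = \operatorname{span}\{ z_1^{j} z_2^{k-j} : 0 \le j \le k \}, \qquad \dim = k+1,\) with $\SU(2)$ acting by linear substitution in $(z_1, z_2)$. This $(k+1)$-dimensional space is the spin-$k/2$ irrep, with $z_1^{j} z_2^{k-j}$ a weight vector of weight $2j-k$.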

  • MO threads on complex and Kählerian flag manifolds (Why are flag manifolds important to physicists? They exhaust all the possibilities of compact homogeneous Kähler manifolds associated with $G$ connected, compact, and semisimple.)
    • Flag Manifolds, Alekseevsky 1996 (11th Yugoslav Geometric Seminar)
  • Trying to trace the history of $G/T \cong G^{\C}/B$

Media

Meditation

Anime

Physics/quantum

  • Ink propulsion engine

  • Linking plasma formation in grapes to microwave resonances of aqueous dimers, Khattak, Bianucci and Slepkov 2019 (Two grapes touching at a point create a plasma ball inside a microwave oven because wet, grape-sized objects sustain and amplify standing microwaves, focusing the energy into a single point.)
  • Daniel Harlow’s Facebook discussion on the NY Times piece by Sabine Hossenfelder. I agree with one of the comments that there are at least two social consequences of not building a next-generation collider that are tangled together: (1) punishing theorists (because science is not self-correcting) for not learning from their mistakes and for hyping a next-generation collider up for the wrong reasons, and (2) punishing experimentalists for the mistakes made by theorists (which we don’t want).
  • Someone wrote a blog post on what he did to get a job in quantum computing.
  • Andy Matuschak and Michael Nielsen recruit alpha testers for spaced-repetition-based “Quantum Computing for the Very Curious”
  • Find your ideal quantum interpretation.
  • Quantum Game Jam at Helsinki
  • QIP 2019 videos are now online.
  • Pan Jianwei in MIT Technology Review: The China Issue
  • Quantum cryptanalysis of symmetric, public-key and hash-based cryptographic schemes, Vlad Gheorghiu and Michele Mosca

    Roughly speaking, the physical footprint $nq$ is a space resource and the quantum security parameter $qs$ is a time resource. $nq$ is the number of qubits per CPU parallelizing Grover’s search, the best known quantum attack against symmetric cryptographic protocols and hash functions. $qs$ in this case is the number of surface code cycles, assuming that fault tolerance is achieved via the surface code.

    Breaking RSA and ECC schemes in 24 hours may require at least ten million qubits, $10^{11}$ $T$ gates, and $10^{13}$ cycles. Symmetric schemes may require $>2^{100}$ cycles to break.
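    As a rough sanity check on the square-root speedup underlying such estimates, here is a hedged sketch of the Grover query count (the $\pi/4$ constant and key sizes are the textbook Grover analysis, not numbers taken from the report, and this counts oracle queries only, not surface code cycles):

```python
from math import pi, sqrt, ceil, log2

def grover_queries(key_bits):
    """Optimal number of Grover iterations, ~ (pi/4) sqrt(N) for N = 2^k keys."""
    return ceil(pi / 4 * sqrt(2**key_bits))

# Effective security of a k-bit symmetric key against Grover is roughly k/2 bits
for k in (128, 256):
    print(k, log2(grover_queries(k)))
```

    This is why doubling symmetric key lengths (e.g. AES-128 to AES-256) is the standard prescription against quantum attacks, while RSA and ECC need to be replaced outright.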

Music

Finnish power metal

Similarity/homage

  1. Here and here, for example 

  2. Note that while the relative entropy is asymmetric, the Fisher information is symmetric and thus is a true distance function. 

  3. A potential point of confusion: the highest weight of $V_{\lambda}^*$ is generally not $-\lambda$; we need to map the negative weight to the positive, highest weight by an element of the Weyl group (called the “longest element”). Here we only need the fact that $-\lambda$ is the lowest weight of $V_{\lambda}^*$. 
