Quantum random sampling is the leading proposal for demonstrating a computational advantage of quantum computers over classical computers. Recently, the first large-scale implementations of quantum random sampling have arguably surpassed the boundary of what can be simulated on existing classical hardware. In this article, we comprehensively review the theoretical underpinning of quantum random sampling in terms of computational complexity and verifiability, as well as the practical aspects of its experimental implementation using superconducting and photonic devices and its classical simulation. We discuss in detail open questions in the field and provide perspectives for the road ahead, including potential applications of quantum random sampling.
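A standard verification tool for quantum random sampling is the linear cross-entropy benchmark (XEB). The following is a minimal numpy sketch, not any experiment's pipeline; system size, depth, and shot count are illustrative choices. It builds a small brickwork circuit from Haar-random two-qubit gates and compares the XEB score of an ideal sampler (close to 1 in expectation) with that of a uniform sampler (close to 0).

```python
import numpy as np

def haar_2q_gate(rng):
    """Haar-random 4x4 unitary via QR of a complex Ginibre matrix."""
    G = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
    Q, R = np.linalg.qr(G)
    return Q * (np.diag(R) / np.abs(np.diag(R)))  # phase fix for Haar measure

def apply_2q(psi, U, q, n):
    """Apply a 4x4 unitary to qubits (q, q+1) of an n-qubit statevector."""
    psi = psi.reshape(2 ** q, 4, 2 ** (n - q - 2))
    return np.einsum('ab,xbz->xaz', U, psi).reshape(-1)

def random_circuit_probs(n, depth, rng):
    """Output distribution of a brickwork circuit of Haar-random 2q gates."""
    psi = np.zeros(2 ** n, dtype=complex)
    psi[0] = 1.0
    for layer in range(depth):
        for q in range(layer % 2, n - 1, 2):
            psi = apply_2q(psi, haar_2q_gate(rng), q, n)
    return np.abs(psi) ** 2

def linear_xeb(probs, samples):
    """F_XEB = d * E[p(x)] - 1 averaged over the observed bitstrings x."""
    d = len(probs)
    return d * probs[samples].mean() - 1.0

rng = np.random.default_rng(42)
n, depth, shots = 6, 10, 4000
probs = random_circuit_probs(n, depth, rng)
ideal = rng.choice(len(probs), size=shots, p=probs)   # noiseless sampler
uniform = rng.integers(len(probs), size=shots)        # trivial spoofer
xeb_ideal, xeb_uniform = linear_xeb(probs, ideal), linear_xeb(probs, uniform)
```

At these sizes the ideal distribution can be computed exactly, which is precisely what ceases to be possible at the system sizes targeted by quantum-advantage experiments.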

UR - https://arxiv.org/abs/2206.04079
ER -

TY - JOUR
T1 - Linear growth of quantum circuit complexity
JF - Nat. Phys.
Y1 - 2022
A1 - Jonas Haferkamp
A1 - Philippe Faist
A1 - Naga B. T. Kothakonda
A1 - Jens Eisert
A1 - Nicole Yunger Halpern
AB - The complexity of quantum states has become a key quantity of interest across various subfields of physics, from quantum computing to the theory of black holes. The evolution of generic quantum systems can be modelled by considering a collection of qubits subjected to sequences of random unitary gates. Here we investigate how the complexity of these random quantum circuits increases by considering how to construct a unitary operation from Haar-random two-qubit quantum gates. Implementing the unitary operation exactly requires a minimal number of gates -- this is the operation's exact circuit complexity. We prove a conjecture that this complexity grows linearly, before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits. Our proof overcomes difficulties in establishing lower bounds for the exact circuit complexity by combining differential topology and elementary algebraic geometry with an inductive construction of Clifford circuits.

U5 - https://doi.org/10.1038/s41567-022-01539-6
ER -

TY - JOUR
T1 - Quantum computational advantage via high-dimensional Gaussian boson sampling
JF - Science Advances
Y1 - 2022
A1 - Abhinav Deshpande
A1 - Arthur Mehta
A1 - Trevor Vincent
A1 - Nicolas Quesada
A1 - Marcel Hinsche
A1 - Marios Ioannou
A1 - Lars Madsen
A1 - Jonathan Lavoie
A1 - Haoyu Qi
A1 - Jens Eisert
A1 - Dominik Hangleiter
A1 - Bill Fefferman
A1 - Ish Dhand
AB - A programmable quantum computer based on fiber optics outperforms classical computers with a high level of confidence. Photonics is a promising platform for demonstrating a quantum computational advantage (QCA) by outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing proposals and demonstrations face challenges. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make progress in improving both the theoretical evidence and experimental prospects. We provide evidence for the hardness of GBS, comparable to the strongest theoretical proposals for QCA. We also propose a QCA architecture we call high-dimensional GBS, which is programmable and can be implemented with low loss using few optical components. We show that particular algorithms for simulating GBS are outperformed by high-dimensional GBS experiments at modest system sizes. This work thus opens the path to demonstrating QCA with programmable photonic processors.
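The classical hardness of GBS traces back to the hafnian: output probabilities of a Gaussian boson sampler are proportional to modulus-squared hafnians of submatrices of a matrix encoding the Gaussian state. As a self-contained illustration of why this is expensive, here is a didactic brute-force hafnian that sums over all perfect matchings, with (2m-1)!! terms; it is not one of the optimized algorithms used in simulation benchmarks. The check at the end uses the standard identity haf([[0, W], [W^T, 0]]) = perm(W), connecting back to the permanent underlying ordinary boson sampling.

```python
import numpy as np

def hafnian(A):
    """Hafnian of a symmetric 2m x 2m matrix: the sum over all perfect
    matchings of the index set of the product of matched entries.
    Cost grows as (2m - 1)!!, so this is feasible only for tiny matrices."""
    n = A.shape[0]
    if n % 2:
        return 0.0  # odd dimension: no perfect matchings
    def match(rem):
        if not rem:
            return 1.0
        i, rest = rem[0], rem[1:]
        # Pair index i with each remaining index j and recurse on the rest
        return sum(A[i, j] * match(rest[:k] + rest[k + 1:])
                   for k, j in enumerate(rest))
    return match(tuple(range(n)))

# 4x4 all-ones matrix: three perfect matchings, each contributing 1
J4 = np.ones((4, 4))

# The hafnian generalizes the permanent: haf([[0, W], [W.T, 0]]) = perm(W)
W = np.array([[1.0, 2.0], [3.0, 4.0]])
A = np.block([[np.zeros((2, 2)), W], [W.T, np.zeros((2, 2))]])
```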

VL - 8
U4 - eabi7894
UR - https://www.science.org/doi/abs/10.1126/sciadv.abi7894
U5 - 10.1126/sciadv.abi7894
ER -

TY - JOUR
T1 - A single T-gate makes distribution learning hard
Y1 - 2022
A1 - Marcel Hinsche
A1 - Marios Ioannou
A1 - Alexander Nietner
A1 - Jonas Haferkamp
A1 - Yihui Quek
A1 - Dominik Hangleiter
A1 - Jean-Pierre Seifert
A1 - Jens Eisert
A1 - Ryan Sweke
AB - The task of learning a probability distribution from samples is ubiquitous across the natural sciences. The output distributions of local quantum circuits form a particularly interesting class of distributions, of key importance both to quantum advantage proposals and a variety of quantum machine learning algorithms. In this work, we provide an extensive characterization of the learnability of the output distributions of local quantum circuits. Our first result yields insight into the relationship between the efficient learnability and the efficient simulatability of these distributions. Specifically, we prove that the density modelling problem associated with Clifford circuits can be efficiently solved, while for circuits of depth d = n^{Ω(1)} the injection of a single T-gate into the circuit renders this problem hard. This result shows that efficient simulatability does not imply efficient learnability. Our second set of results provides insight into the potential and limitations of quantum generative modelling algorithms. We first show that the generative modelling problem associated with local quantum circuits of depth d = n^{Ω(1)} is hard for any learning algorithm, classical or quantum. As a consequence, one cannot use a quantum algorithm to gain a practical advantage for this task. We then show that, for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with Clifford circuits of depth d = ω(log(n)) is hard. This result places limitations on the applicability of near-term hybrid quantum-classical generative modelling algorithms.

UR - https://arxiv.org/abs/2207.03140
ER -

TY - JOUR
T1 - Learnability of the output distributions of local quantum circuits
Y1 - 2021
A1 - Marcel Hinsche
A1 - Marios Ioannou
A1 - Alexander Nietner
A1 - Jonas Haferkamp
A1 - Yihui Quek
A1 - Dominik Hangleiter
A1 - Jean-Pierre Seifert
A1 - Jens Eisert
A1 - Ryan Sweke
AB - There is currently a large interest in understanding the potential advantages quantum devices can offer for probabilistic modelling. In this work we investigate, within two different oracle models, the probably approximately correct (PAC) learnability of quantum circuit Born machines, i.e., the output distributions of local quantum circuits. We first show a negative result, namely, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable in the statistical query model, i.e., when given query access to empirical expectation values of bounded functions over the sample space. This immediately implies the hardness, for both quantum and classical algorithms, of learning from statistical queries the output distributions of local quantum circuits using any gate set which includes the Clifford group. As many practical generative modelling algorithms use statistical queries -- including those for training quantum circuit Born machines -- our result is broadly applicable and strongly limits the possibility of a meaningful quantum advantage for learning the output distributions of local quantum circuits. As a positive result, we show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable by a classical learner. Our results are equally applicable to the problems of learning an algorithm for generating samples from the target distribution (generative modelling) and learning an algorithm for evaluating its probabilities (density modelling). They provide the first rigorous insights into the learnability of output distributions of local quantum circuits from the probabilistic modelling perspective.
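The statistical query model in the negative result above can be made concrete: the learner never sees samples, only expectation values of bounded functions, each answered up to an additive tolerance. A minimal sketch, in which the GHZ-type target distribution, the tolerance, and the uniform noise model are illustrative assumptions:

```python
import numpy as np

def make_sq_oracle(probs, tol, rng):
    """Statistical-query oracle for a distribution over {0, ..., d-1}:
    answers E_{x~p}[f(x)] for a bounded function f, up to an additive
    error of magnitude at most tol (any answer within tol is legal;
    here the error is modelled as uniform noise)."""
    d = len(probs)
    def oracle(f):
        exact = sum(probs[x] * f(x) for x in range(d))
        return exact + rng.uniform(-tol, tol)
    return oracle

rng = np.random.default_rng(1)
# Toy target: output distribution of a 3-qubit GHZ-type circuit,
# with all probability mass on bitstrings 000 and 111
probs = np.zeros(8)
probs[0] = probs[7] = 0.5
oracle = make_sq_oracle(probs, tol=0.05, rng=rng)

# Bias of the first qubit, f(x) = +/-1: exactly 0 for this distribution
bias = oracle(lambda x: 1.0 if x & 4 else -1.0)
# Witness that is +1 on 000 and 111 and -1 elsewhere: exactly 1 here
witness = oracle(lambda x: 1.0 if x in (0, 7) else -1.0)
```

The hardness results quoted above say that, for sufficiently deep circuits, no learner restricted to such queries can succeed with polynomially many queries at inverse-polynomial tolerance.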

UR - https://arxiv.org/abs/2110.05517
ER -

TY - JOUR
T1 - Precise Hamiltonian identification of a superconducting quantum processor
Y1 - 2021
A1 - Dominik Hangleiter
A1 - Ingo Roth
A1 - Jens Eisert
A1 - Pedram Roushan
AB - The required precision to perform quantum simulations beyond the capabilities of classical computers imposes major experimental and theoretical challenges. Here, we develop a characterization technique to benchmark the implementation precision of a specific quantum simulation task. We infer all parameters of the bosonic Hamiltonian that governs the dynamics of excitations in a two-dimensional grid of nearest-neighbour coupled superconducting qubits. We devise a robust algorithm for identification of Hamiltonian parameters from measured time series of the expectation values of single-mode canonical coordinates. Using super-resolution and denoising methods, we first extract eigenfrequencies of the governing Hamiltonian from the complex time-domain measurement; next, we recover the eigenvectors of the Hamiltonian via constrained manifold optimization over the orthogonal group. For five and six coupled qubits, we identify Hamiltonian parameters with sub-MHz precision and construct a spatial implementation error map for a grid of 27 qubits. Our approach enables us to distinguish and quantify the effects of state preparation and measurement errors and show that they are the dominant sources of errors in the implementation. Our results quantify the implementation accuracy of analog dynamics and introduce a diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.
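The pipeline above uses super-resolution and denoising to extract eigenfrequencies from time-domain data. A much cruder but self-contained illustration of that first step is plain FFT peak-picking on a simulated two-mode signal; the frequencies, sampling rate, and noise level below are made-up illustrative values, not taken from the experiment.

```python
import numpy as np

def dominant_frequencies(signal, dt, num_modes):
    """Return the num_modes strongest frequencies (in Hz) of a real time
    series, by picking the largest-magnitude peaks of its Fourier spectrum."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    strongest = np.argsort(spectrum)[::-1][:num_modes]
    return np.sort(freqs[strongest])

rng = np.random.default_rng(0)
dt = 0.01                      # 100 Hz sampling rate
t = np.arange(1000) * dt       # 10 s of data -> 0.1 Hz frequency resolution
# Two "eigenfrequency" modes at 3.0 Hz and 7.5 Hz plus weak Gaussian noise
x = np.cos(2 * np.pi * 3.0 * t) + 0.6 * np.cos(2 * np.pi * 7.5 * t)
x = x + 0.05 * rng.normal(size=t.size)
est = dominant_frequencies(x, dt, num_modes=2)
```

Super-resolution methods such as those used in the paper go beyond this by resolving frequencies closer together than the 1/T Fourier limit.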

UR - https://arxiv.org/abs/2108.08319
ER -

TY - JOUR
T1 - Quantum Computational Supremacy via High-Dimensional Gaussian Boson Sampling
Y1 - 2021
A1 - Abhinav Deshpande
A1 - Arthur Mehta
A1 - Trevor Vincent
A1 - Nicolas Quesada
A1 - Marcel Hinsche
A1 - Marios Ioannou
A1 - Lars Madsen
A1 - Jonathan Lavoie
A1 - Haoyu Qi
A1 - Jens Eisert
A1 - Dominik Hangleiter
A1 - Bill Fefferman
A1 - Ish Dhand
AB - Photonics is a promising platform for demonstrating quantum computational supremacy (QCS) by convincingly outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing photonics proposals and demonstrations face significant hurdles. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make significant progress in improving both the theoretical evidence and experimental prospects. On the theory side, we provide strong evidence for the hardness of GBS, placing it on par with the strongest theoretical proposals for QCS. On the experimental side, we propose a new QCS architecture, high-dimensional GBS, which is programmable and can be implemented with low loss rates using few optical components. We show that particular classical algorithms for simulating GBS are vastly outperformed by high-dimensional GBS experiments at modest system sizes. This work thus opens the path to demonstrating QCS with programmable photonic processors.

UR - https://arxiv.org/abs/2102.12474
ER -

TY - JOUR
T1 - Resource theory of quantum uncomplexity
Y1 - 2021
A1 - Nicole Yunger Halpern
A1 - Naga B. T. Kothakonda
A1 - Jonas Haferkamp
A1 - Anthony Munson
A1 - Jens Eisert
A1 - Philippe Faist
AB - Quantum complexity is emerging as a key property of many-body systems, including black holes, topological materials, and early quantum computers. A state's complexity quantifies the number of computational gates required to prepare the state from a simple tensor product. The greater a state's distance from maximal complexity, or "uncomplexity," the more useful the state is as input to a quantum computation. Separately, resource theories -- simple models for agents subject to constraints -- are burgeoning in quantum information theory. We unite the two domains, confirming Brown and Susskind's conjecture that a resource theory of uncomplexity can be defined. The allowed operations, fuzzy operations, are slightly random implementations of two-qubit gates chosen by an agent. We formalize two operational tasks, uncomplexity extraction and expenditure. Their optimal efficiencies depend on an entropy that we engineer to reflect complexity. We also present two monotones, uncomplexity measures that decline monotonically under fuzzy operations, in certain regimes. This work unleashes on many-body complexity the resource-theory toolkit from quantum information theory.

UR - https://arxiv.org/abs/2110.11371
ER -

TY - JOUR
T1 - Recovering quantum gates from few average gate fidelities
JF - Phys. Rev. Lett.
Y1 - 2018
A1 - Ingo Roth
A1 - Richard Kueng
A1 - Shelby Kimmel
A1 - Yi-Kai Liu
A1 - David Gross
A1 - Jens Eisert
A1 - Martin Kliesch
AB - Characterising quantum processes is a key task in, and constitutes a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors and can be used to benchmark Clifford gates. A complementary approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation-theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.
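For context on the quantity being measured: between two unitaries U and V on a d-dimensional space, the average gate fidelity has the closed form F_avg = (|Tr(U†V)|² + d) / (d(d + 1)) (Nielsen's formula for unitary channels). The sketch below only evaluates this quantity; it does not implement the paper's reconstruction of U from few such fidelities measured with respect to random Cliffords.

```python
import numpy as np

def average_gate_fidelity(U, V):
    """Average gate fidelity between two unitaries on a d-dimensional space:
    F_avg = (|Tr(U^dag V)|^2 + d) / (d * (d + 1))."""
    d = U.shape[0]
    overlap = abs(np.trace(U.conj().T @ V)) ** 2
    return (overlap + d) / (d * (d + 1))

I = np.eye(2)
Z = np.diag([1.0, -1.0])
H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
```

For example, a perfect implementation gives F_avg = 1, while implementing Z in place of the identity gives F_avg = 1/3 for a single qubit.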

VL - 121
U4 - 170502
UR - https://arxiv.org/abs/1803.00572
U5 - https://doi.org/10.1103/PhysRevLett.121.170502
ER -

TY - JOUR
T1 - Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators
JF - New Journal of Physics
Y1 - 2012
A1 - Steven T. Flammia
A1 - David Gross
A1 - Yi-Kai Liu
A1 - Jens Eisert
AB - Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank.
VL - 14
U4 - 095022
UR - http://arxiv.org/abs/1205.2300v2
CP - 9
J1 - New J. Phys.
U5 - 10.1088/1367-2630/14/9/095022
ER -

TY - JOUR
T1 - Continuous-variable quantum compressed sensing
Y1 - 2011
A1 - Matthias Ohliger
A1 - Vincent Nesme
A1 - David Gross
A1 - Yi-Kai Liu
A1 - Jens Eisert
AB - We significantly extend recently developed methods to faithfully reconstruct unknown quantum states that are approximately low-rank, using only a few measurement settings. Our new method is general enough to allow for measurements from a continuous family, and is also applicable to continuous-variable states. As a technical result, this work generalizes quantum compressed sensing to the situation where the measured observables are taken from a so-called tight frame (rather than an orthonormal basis) -- hence covering most realistic measurement scenarios. As an application, we discuss the reconstruction of quantum states of light from homodyne detection and other types of measurements, and we present simulations that show the advantage of the proposed compressed sensing technique over present methods. Finally, we introduce a method to construct a certificate which guarantees the success of the reconstruction with no assumption on the state, and we show how slightly more measurements give rise to "universal" state reconstruction that is highly robust to noise.
UR - http://arxiv.org/abs/1111.0853v3
ER -

TY - JOUR
T1 - Quantum state tomography via compressed sensing
JF - Physical Review Letters
Y1 - 2010
A1 - David Gross
A1 - Yi-Kai Liu
A1 - Steven T. Flammia
A1 - Stephen Becker
A1 - Jens Eisert
AB - We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.
VL - 105
UR - http://arxiv.org/abs/0909.3304v4
CP - 15
J1 - Phys. Rev. Lett.
U5 - 10.1103/PhysRevLett.105.150401
ER -
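Several of the tomography entries above mention certifying an estimate via direct fidelity estimation. A minimal sketch of the idea, assuming noiseless Pauli expectation values (a real experiment would estimate each Tr(ρP) from finite measurement shots): expand both states in the Pauli basis and importance-sample Pauli labels according to the pure target state's weights.

```python
import itertools
import numpy as np

PAULIS_1Q = [np.eye(2),
             np.array([[0, 1], [1, 0]]),
             np.array([[0, -1j], [1j, 0]]),
             np.diag([1.0, -1.0])]

def pauli_basis(n):
    """All 4^n n-qubit Pauli tensor products."""
    out = []
    for ops in itertools.product(PAULIS_1Q, repeat=n):
        P = ops[0]
        for M in ops[1:]:
            P = np.kron(P, M)
        out.append(P)
    return out

def direct_fidelity_estimate(psi, rho, shots, rng):
    """Monte Carlo estimate of <psi|rho|psi>.

    Writes both states in the Pauli basis, chi_sigma(k) = Tr(sigma P_k)/sqrt(d),
    samples labels k with probability chi_psi(k)^2 (these sum to 1 for a pure
    target), and averages the unbiased estimator chi_rho(k) / chi_psi(k)."""
    d = len(psi)
    paulis = pauli_basis(int(np.log2(d)))
    proj = np.outer(psi, psi.conj())
    chi_psi = np.array([np.trace(proj @ P).real for P in paulis]) / np.sqrt(d)
    chi_rho = np.array([np.trace(rho @ P).real for P in paulis]) / np.sqrt(d)
    ks = rng.choice(len(paulis), size=shots, p=chi_psi ** 2)
    return float(np.mean(chi_rho[ks] / chi_psi[ks]))

rng = np.random.default_rng(7)
d = 4
psi = rng.normal(size=d) + 1j * rng.normal(size=d)
psi /= np.linalg.norm(psi)
rho = 0.8 * np.outer(psi, psi.conj()) + 0.2 * np.eye(d) / d  # depolarized copy
estimate = direct_fidelity_estimate(psi, rho, shots=20000, rng=rng)
true_fid = float(np.real(psi.conj() @ rho @ psi))            # 0.8 + 0.2/4 = 0.85
```

The point of the importance sampling is that the number of Pauli settings needed for a fixed-precision fidelity estimate is independent of the dimension, even though the full Pauli basis has d^2 elements.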