Quantum random sampling is the leading proposal for demonstrating a computational advantage of quantum computers over classical computers. Recently, the first large-scale implementations of quantum random sampling have arguably surpassed the boundary of what can be simulated on existing classical hardware. In this article, we comprehensively review the theoretical underpinning of quantum random sampling in terms of computational complexity and verifiability, as well as the practical aspects of its experimental implementation using superconducting and photonic devices and its classical simulation. We discuss in detail open questions in the field and provide perspectives for the road ahead, including potential applications of quantum random sampling.
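A standard verification statistic for quantum random sampling experiments of the kind this review covers is the linear cross-entropy benchmark (XEB). The following minimal numpy sketch (variable and function names are mine, not from the review) draws samples either from the ideal output distribution of a Haar-random unitary or from the uniform distribution, and shows how the XEB score separates the two cases:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                       # qubits
d = 2 ** n

# Haar-random unitary via QR decomposition of a complex Gaussian matrix
z = rng.normal(size=(d, d)) + 1j * rng.normal(size=(d, d))
q, r = np.linalg.qr(z)
u = q * (np.diag(r) / np.abs(np.diag(r)))   # fix column phases for Haar measure

# Ideal output distribution p(x) = |<x|U|0>|^2
p = np.abs(u[:, 0]) ** 2

def linear_xeb(samples):
    """Linear cross-entropy benchmark: F = d * mean_x p(x) - 1 over samples."""
    return d * p[samples].mean() - 1

ideal_samples = rng.choice(d, size=4000, p=p)   # a device sampling perfectly
uniform_samples = rng.integers(d, size=4000)    # a device outputting pure noise

print(linear_xeb(ideal_samples))    # close to 1 for Porter-Thomas statistics
print(linear_xeb(uniform_samples))  # close to 0
```

The score is an estimator of fidelity only under additional assumptions discussed in the review; here it simply illustrates how samples alone, combined with classically computed ideal probabilities, yield a verification statistic.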

Authors: Hangleiter, Dominik; Eisert, Jens
URL: https://arxiv.org/abs/2206.04079

Linear growth of quantum circuit complexity (2022, published 3/28/2022)

The complexity of quantum states has become a key quantity of interest across various subfields of physics, from quantum computing to the theory of black holes. The evolution of generic quantum systems can be modelled by considering a collection of qubits subjected to sequences of random unitary gates. Here we investigate how the complexity of these random quantum circuits increases by considering how to construct a unitary operation from Haar-random two-qubit quantum gates. Implementing the unitary operation exactly requires a minimal number of gates—this is the operation's exact circuit complexity. We prove a conjecture that this complexity grows linearly, before saturating when the number of applied gates reaches a threshold that grows exponentially with the number of qubits. Our proof overcomes difficulties in establishing lower bounds for the exact circuit complexity by combining differential topology and elementary algebraic geometry with an inductive construction of Clifford circuits.
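The exponential saturation threshold in this abstract can be made plausible by a back-of-envelope parameter count (my heuristic, not the paper's proof): SU(2^n) has 4^n - 1 real parameters, while each two-qubit gate contributes at most dim SU(4) = 15 of them, so a circuit of R gates explores a set of dimension at most 15R and generic unitaries need exponentially many gates:

```python
def saturation_threshold(n: int) -> int:
    """Dimension-counting lower bound on the gate count needed before
    R two-qubit gates could possibly parametrize all of SU(2^n):
    R >= (4^n - 1) / 15, i.e. exponential in the number of qubits n."""
    return -(-(4 ** n - 1) // 15)   # ceiling division

for n in (2, 4, 8, 16):
    print(n, saturation_threshold(n))
```

The paper's actual contribution is the matching statement that complexity really does keep growing linearly in the gate count all the way up to a threshold of this exponential order.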

Authors: Haferkamp, Jonas; Faist, Philippe; Kothakonda, Naga B. T.; Eisert, Jens; Yunger Halpern, Nicole
URL: https://www.quics.umd.edu/publications/linear-growth-quantum-circuit-complexity

Quantum computational advantage via high-dimensional Gaussian boson sampling (2022, published 1/5/2022; vol. 8, article eabi7894)

A programmable quantum computer based on fiber optics outperforms classical computers with a high level of confidence. Photonics is a promising platform for demonstrating a quantum computational advantage (QCA) by outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing proposals and demonstrations face challenges. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make progress in improving both the theoretical evidence and experimental prospects. We provide evidence for the hardness of GBS, comparable to the strongest theoretical proposals for QCA. We also propose a QCA architecture we call high-dimensional GBS, which is programmable and can be implemented with low loss using few optical components. We show that particular algorithms for simulating GBS are outperformed by high-dimensional GBS experiments at modest system sizes. This work thus opens the path to demonstrating QCA with programmable photonic processors.
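The classical hardness of GBS discussed in this record stems from the fact that output probabilities are proportional to squared hafnians of submatrices of the Gaussian state's covariance data. A brute-force hafnian (a sketch of mine, usable only for tiny matrices) makes the combinatorial cost concrete: the recursion sums over all (n-1)!! perfect matchings.

```python
import numpy as np

def hafnian(a):
    """Hafnian of a symmetric matrix, by recursing over perfect matchings:
    haf(A) = sum_j A[0, j] * haf(A with rows/cols 0 and j removed).
    Cost grows as (n-1)!!, which is why only very small n are feasible."""
    n = len(a)
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0
    total = 0.0
    for j in range(1, n):
        rest = [k for k in range(1, n) if k != j]
        total += a[0, j] * hafnian(a[np.ix_(rest, rest)])
    return total

# For the all-ones matrix, the hafnian counts perfect matchings: (n-1)!!
print(hafnian(np.ones((4, 4))))   # 3 perfect matchings on 4 vertices
print(hafnian(np.ones((6, 6))))   # 15 on 6 vertices
```

Production GBS simulators use far more sophisticated hafnian algorithms, but even the best known exact methods scale exponentially, which is the computational gap the experiments target.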

Authors: Deshpande, Abhinav; Mehta, Arthur; Vincent, Trevor; Quesada, Nicolas; Hinsche, Marcel; Ioannou, Marios; Madsen, Lars; Lavoie, Jonathan; Qi, Haoyu; Eisert, Jens; Hangleiter, Dominik; Fefferman, Bill; Dhand, Ish
URL: https://www.science.org/doi/abs/10.1126/sciadv.abi7894

A single T-gate makes distribution learning hard (2022, published 7/7/2022)

The task of learning a probability distribution from samples is ubiquitous across the natural sciences. The output distributions of local quantum circuits form a particularly interesting class of distributions, of key importance both to quantum advantage proposals and a variety of quantum machine learning algorithms. In this work, we provide an extensive characterization of the learnability of the output distributions of local quantum circuits. Our first result yields insight into the relationship between the efficient learnability and the efficient simulatability of these distributions. Specifically, we prove that the density modelling problem associated with Clifford circuits can be efficiently solved, while for depth d = n^{Ω(1)} circuits the injection of a single T-gate into the circuit renders this problem hard. This result shows that efficient simulatability does not imply efficient learnability. Our second set of results provides insight into the potential and limitations of quantum generative modelling algorithms. We first show that the generative modelling problem associated with depth d = n^{Ω(1)} local quantum circuits is hard for any learning algorithm, classical or quantum. As a consequence, one cannot use a quantum algorithm to gain a practical advantage for this task.
We then show that, for a wide variety of the most practically relevant learning algorithms -- including hybrid quantum-classical algorithms -- even the generative modelling problem associated with depth d = ω(log n) Clifford circuits is hard. This result places limitations on the applicability of near-term hybrid quantum-classical generative modelling algorithms.
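The Clifford/T dichotomy in this abstract has a concrete small-scale signature: stabilizer states have flat output distributions (every nonzero outcome probability is equal), which is part of what makes their density modelling tractable, while sandwiching a single T gate between Hadamards already produces a biased distribution. A statevector check on one qubit (an illustration of mine, not the paper's construction):

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.diag([1, 1j])
T = np.diag([1, np.exp(1j * np.pi / 4)])

def probs(gates, state=np.array([1.0, 0j])):
    """Output probabilities of a single-qubit circuit applied to |0>."""
    for g in gates:
        state = g @ state
    return np.abs(state) ** 2

# Clifford-only circuit: the output distribution is flat on its support
p_clifford = probs([H, S, H])
# One T gate between the Hadamards: the distribution is no longer flat
p_t = probs([H, T, H])

nz = p_clifford[p_clifford > 1e-12]
print(np.allclose(nz, nz[0]))   # True: all nonzero probabilities are equal
print(p_t)                      # biased: (2 + sqrt(2))/4 versus (2 - sqrt(2))/4
```

Flatness of stabilizer-state distributions holds for any number of qubits; the hardness result in the record is of course about learning, not about simulating one qubit, but the example shows the structural change a single T gate induces.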

Authors: Hinsche, Marcel; Ioannou, Marios; Nietner, Alexander; Haferkamp, Jonas; Quek, Yihui; Hangleiter, Dominik; Seifert, Jean-Pierre; Eisert, Jens; Sweke, Ryan
URL: https://arxiv.org/abs/2207.03140

Learnability of the output distributions of local quantum circuits (2021, published 10/11/2021)

There is currently considerable interest in understanding the potential advantages quantum devices can offer for probabilistic modelling. In this work we investigate, within two different oracle models, the probably approximately correct (PAC) learnability of quantum circuit Born machines, i.e., the output distributions of local quantum circuits. We first show a negative result, namely, that the output distributions of super-logarithmic depth Clifford circuits are not sample-efficiently learnable in the statistical query model, i.e., when given query access to empirical expectation values of bounded functions over the sample space. This immediately implies the hardness, for both quantum and classical algorithms, of learning from statistical queries the output distributions of local quantum circuits using any gate set which includes the Clifford group. As many practical generative modelling algorithms use statistical queries -- including those for training quantum circuit Born machines -- our result is broadly applicable and strongly limits the possibility of a meaningful quantum advantage for learning the output distributions of local quantum circuits. As a positive result, we show that in a more powerful oracle model, namely when directly given access to samples, the output distributions of local Clifford circuits are computationally efficiently PAC learnable by a classical learner.
Our results are equally applicable to the problems of learning an algorithm for generating samples from the target distribution (generative modelling) and learning an algorithm for evaluating its probabilities (density modelling). They provide the first rigorous insights into the learnability of output distributions of local quantum circuits from the probabilistic modelling perspective.
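The statistical query model underlying the negative result above gives the learner only noisy expectation values E_{x~P}[f(x)], never raw samples. A toy oracle (class and function names are mine, purely illustrative) makes the interface concrete; here it is simulated by an empirical mean plus bounded noise, and first-moment queries recover the marginals of a product distribution up to the tolerance:

```python
import numpy as np

rng = np.random.default_rng(7)

class StatisticalQueryOracle:
    """Answers queries E_{x~P}[f(x)] up to additive tolerance tau.
    Simulated here by an empirical mean plus bounded noise; any answer
    within tau of the true expectation is a legal oracle response."""

    def __init__(self, sampler, tau, n_samples=20000):
        self.data = sampler(n_samples)
        self.tau = tau

    def query(self, f):
        est = np.mean([f(x) for x in self.data])
        return est + rng.uniform(-self.tau / 2, self.tau / 2)

# Target: 3-bit strings from a biased product distribution
def sampler(m):
    return rng.random((m, 3)) < np.array([0.9, 0.5, 0.1])

oracle = StatisticalQueryOracle(sampler, tau=0.05)
marginals = [oracle.query(lambda x, i=i: float(x[i])) for i in range(3)]
print(marginals)   # approximately [0.9, 0.5, 0.1]
```

The hardness result in the record says that for super-logarithmic-depth Clifford output distributions, no polynomial number of such tolerance-limited queries suffices to learn the distribution, even though direct sample access does suffice classically.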

1 aHinsche, Marcel1 aIoannou, Marios1 aNietner, Alexander1 aHaferkamp, Jonas1 aQuek, Yihui1 aHangleiter, Dominik1 aSeifert, Jean-Pierre1 aEisert, Jens1 aSweke, Ryan uhttps://arxiv.org/abs/2110.0551701886nas a2200145 4500008004100000245007800041210006900119260001400188520142500202100002401627700001501651700001701666700002001683856003701703 2021 eng d00aPrecise Hamiltonian identification of a superconducting quantum processor0 aPrecise Hamiltonian identification of a superconducting quantum c8/18/20213 aThe required precision to perform quantum simulations beyond the capabilities of classical computers imposes major experimental and theoretical challenges. Here, we develop a characterization technique to benchmark the implementation precision of a specific quantum simulation task. We infer all parameters of the bosonic Hamiltonian that governs the dynamics of excitations in a two-dimensional grid of nearest-neighbour coupled superconducting qubits. We devise a robust algorithm for identification of Hamiltonian parameters from measured times series of the expectation values of single-mode canonical coordinates. Using super-resolution and denoising methods, we first extract eigenfrequencies of the governing Hamiltonian from the complex time domain measurement; next, we recover the eigenvectors of the Hamiltonian via constrained manifold optimization over the orthogonal group. For five and six coupled qubits, we identify Hamiltonian parameters with sub-MHz precision and construct a spatial implementation error map for a grid of 27 qubits. Our approach enables us to distinguish and quantify the effects of state preparation and measurement errors and show that they are the dominant sources of errors in the implementation. Our results quantify the implementation accuracy of analog dynamics and introduce a diagnostic toolkit for understanding, calibrating, and improving analog quantum processors.

Authors: Hangleiter, Dominik; Roth, Ingo; Eisert, Jens; Roushan, Pedram
URL: https://arxiv.org/abs/2108.08319

Quantum Computational Supremacy via High-Dimensional Gaussian Boson Sampling (2021, published 2/24/2021)

Photonics is a promising platform for demonstrating quantum computational supremacy (QCS) by convincingly outperforming the most powerful classical supercomputers on a well-defined computational task. Despite this promise, existing photonics proposals and demonstrations face significant hurdles. Experimentally, current implementations of Gaussian boson sampling (GBS) lack programmability or have prohibitive loss rates. Theoretically, there is a comparative lack of rigorous evidence for the classical hardness of GBS. In this work, we make significant progress in improving both the theoretical evidence and experimental prospects. On the theory side, we provide strong evidence for the hardness of Gaussian boson sampling, placing it on par with the strongest theoretical proposals for QCS. On the experimental side, we propose a new QCS architecture, high-dimensional Gaussian boson sampling, which is programmable and can be implemented with low loss rates using few optical components. We show that particular classical algorithms for simulating GBS are vastly outperformed by high-dimensional Gaussian boson sampling experiments at modest system sizes. This work thus opens the path to demonstrating QCS with programmable photonic processors.

Authors: Deshpande, Abhinav; Mehta, Arthur; Vincent, Trevor; Quesada, Nicolas; Hinsche, Marcel; Ioannou, Marios; Madsen, Lars; Lavoie, Jonathan; Qi, Haoyu; Eisert, Jens; Hangleiter, Dominik; Fefferman, Bill; Dhand, Ish
URL: https://arxiv.org/abs/2102.12474

Resource theory of quantum uncomplexity (2021, published 10/21/2021)

Quantum complexity is emerging as a key property of many-body systems, including black holes, topological materials, and early quantum computers. A state's complexity quantifies the number of computational gates required to prepare the state from a simple tensor product. The greater a state's distance from maximal complexity, or "uncomplexity," the more useful the state is as input to a quantum computation. Separately, resource theories -- simple models for agents subject to constraints -- are burgeoning in quantum information theory. We unite the two domains, confirming Brown and Susskind's conjecture that a resource theory of uncomplexity can be defined. The allowed operations, fuzzy operations, are slightly random implementations of two-qubit gates chosen by an agent. We formalize two operational tasks, uncomplexity extraction and expenditure. Their optimal efficiencies depend on an entropy that we engineer to reflect complexity. We also present two monotones, uncomplexity measures that decline monotonically under fuzzy operations, in certain regimes. This work unleashes on many-body complexity the resource-theory toolkit from quantum information theory.

Authors: Yunger Halpern, Nicole; Kothakonda, Naga B. T.; Haferkamp, Jonas; Munson, Anthony; Eisert, Jens; Faist, Philippe
URL: https://arxiv.org/abs/2110.11371

Recovering quantum gates from few average gate fidelities (2018, published 2018/03/01; vol. 121, 170502)

Characterising quantum processes is a key task in, and constitutes a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors and can be used to benchmark Clifford gates. A complementing approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process.
In our proofs we exploit recent representation-theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.
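For unitary target U and implemented unitary V, the average gate fidelity that serves as the data in this record admits the well-known closed form F_avg = (|Tr(U†V)|² + d)/(d² + d). A short check (helper name mine) verifies the formula against a direct Haar-random Monte Carlo average of state fidelities:

```python
import numpy as np

rng = np.random.default_rng(1)

def average_gate_fidelity(u, v, mc_samples=0):
    """Average fidelity between unitary channels U and V on dimension d.

    Closed form: F_avg = (|Tr(U^dag V)|^2 + d) / (d^2 + d).
    With mc_samples > 0, estimate it instead by averaging
    |<psi| U^dag V |psi>|^2 over Haar-random pure states."""
    d = u.shape[0]
    if mc_samples == 0:
        return (np.abs(np.trace(u.conj().T @ v)) ** 2 + d) / (d ** 2 + d)
    w = u.conj().T @ v
    total = 0.0
    for _ in range(mc_samples):
        psi = rng.normal(size=d) + 1j * rng.normal(size=d)
        psi /= np.linalg.norm(psi)    # normalized Gaussian vector is Haar
        total += np.abs(psi.conj() @ w @ psi) ** 2
    return total / mc_samples

# Compare closed form and Monte Carlo for a random gate and a perturbed copy
z = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
u, _ = np.linalg.qr(z)
v = u @ np.diag(np.exp(1j * rng.normal(scale=0.1, size=4)))   # slightly wrong U
print(average_gate_fidelity(u, v))
print(average_gate_fidelity(u, v, mc_samples=20000))
```

The paper's reconstruction method takes such fidelities, measured with respect to random Clifford unitaries rather than computed, and inverts them to recover the full gate.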

Authors: Roth, Ingo; Kueng, Richard; Kimmel, Shelby; Liu, Yi-Kai; Gross, David; Eisert, Jens; Kliesch, Martin
URL: https://arxiv.org/abs/1803.00572

Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators (2012, published 2012/09/27; vol. 14, 095022)

Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE).
We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank.

Authors: Flammia, Steven T.; Gross, David; Liu, Yi-Kai; Eisert, Jens
URL: http://arxiv.org/abs/1205.2300v2

Continuous-variable quantum compressed sensing (2011, published 2011/11/03)

We significantly extend recently developed methods to faithfully reconstruct unknown quantum states that are approximately low-rank, using only a few measurement settings. Our new method is general enough to allow for measurements from a continuous family, and is also applicable to continuous-variable states. As a technical result, this work generalizes quantum compressed sensing to the situation where the measured observables are taken from a so-called tight frame (rather than an orthonormal basis), hence covering most realistic measurement scenarios. As an application, we discuss the reconstruction of quantum states of light from homodyne detection and other types of measurements, and we present simulations that show the advantage of the proposed compressed sensing technique over present methods. Finally, we introduce a method to construct a certificate which guarantees the success of the reconstruction with no assumption on the state, and we show how slightly more measurements give rise to "universal" state reconstruction that is highly robust to noise.
Authors: Ohliger, Matthias; Nesme, Vincent; Gross, David; Liu, Yi-Kai; Eisert, Jens
URL: http://arxiv.org/abs/1111.0853v3

Quantum state tomography via compressed sensing (2010, published 2010/10/4; vol. 105)

We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations.

Authors: Gross, David; Liu, Yi-Kai; Flammia, Steven T.; Becker, Stephen; Eisert, Jens
URL: http://arxiv.org/abs/0909.3304v4
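The compressed-sensing tomography records above all build on the Pauli-basis expansion rho = (1/d) * sum_P tr(rho P) P. As a baseline (not the compressed method itself, which subsamples O(rd log^2 d) of these settings and exploits low rank), the following sketch reconstructs a two-qubit state exactly by full linear inversion from all d^2 Pauli expectation values:

```python
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.diag([1.0, -1.0]).astype(complex)

def pauli_basis(n):
    """Yield all 4^n n-qubit Pauli strings as matrices."""
    for ops in itertools.product([I2, X, Y, Z], repeat=n):
        p = np.array([[1.0]], dtype=complex)
        for op in ops:
            p = np.kron(p, op)
        yield p

def linear_inversion(expectations, n):
    """rho = (1/2^n) * sum_P <P> P  -- full, non-compressed tomography."""
    d = 2 ** n
    return sum(e * p for e, p in zip(expectations, pauli_basis(n))) / d

# Rank-1 (pure) test state on 2 qubits
rng = np.random.default_rng(3)
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)
rho = np.outer(psi, psi.conj())

expectations = [np.trace(rho @ p).real for p in pauli_basis(2)]
rho_hat = linear_inversion(expectations, 2)
print(np.max(np.abs(rho_hat - rho)))   # ~ machine precision
```

The point of the compressed schemes is that for a rank-r state most of these d^2 expectation values are redundant: a randomly chosen subset of size O(rd log^2 d), combined with a trace-norm-minimizing estimator, recovers rho with certified accuracy.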