Qubit noise spectroscopy is an important tool for the experimental investigation of open quantum systems. However, conventional techniques for implementing noise spectroscopy are time-consuming, because they require multiple measurements of the noise spectral density at different frequencies. Here we describe an alternative method for quickly characterizing the spectral density. Our method utilizes random pulse sequences, with carefully controlled correlations among the pulses, to measure arbitrary linear functionals of the noise spectrum. Such measurements allow us to estimate k-th order moments of the noise spectrum, as well as to reconstruct sparse noise spectra via compressed sensing. Our simulations of the performance of the random pulse sequences on a realistic physical system, self-assembled quantum dots, reveal a speedup of an order of magnitude in extracting the noise spectrum compared to conventional dynamical decoupling approaches.
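As a toy illustration of the mechanism in this abstract (an illustrative sketch, not the paper's actual protocol): the dephasing accumulated under a piecewise-constant pulse pattern is an overlap integral of the noise spectrum with the pattern's filter function, so each random pulse sequence samples one linear functional of the spectrum. All numerical choices below (bin count, spectrum shape, frequency grid) are assumptions made for the demo.

```python
import cmath
import random

# Toy filter-function calculation: a random +/-1 switching pattern defines
# a filter |F(w)|^2, and the measured decay exponent is the overlap
# integral of the noise spectrum S(w) with that filter.
random.seed(0)
T, n = 1.0, 64                  # total evolution time, number of time bins
dt = T / n
signs = [random.choice([-1, 1]) for _ in range(n)]   # random pulse pattern

def filter_weight(w):
    """|F(w)|^2 for the piecewise-constant switching function."""
    F = sum(s * cmath.exp(1j * w * k * dt) * dt for k, s in enumerate(signs))
    return abs(F) ** 2

def spectrum(w):
    """Example Lorentzian-like noise spectrum (illustrative choice)."""
    return 1.0 / (1.0 + w ** 2)

# Decay exponent = discretized overlap integral of S(w) with |F(w)|^2
dw = 0.5
ws = [dw * j for j in range(1, 200)]
chi = sum(spectrum(w) * filter_weight(w) * dw for w in ws)
print(f"one measured linear functional of S: {chi:.4f}")
```

Repeating this with many random patterns, whose correlations are chosen deliberately, gives the family of linear functionals the abstract refers to.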

}, url = {https://arxiv.org/abs/2303.00909}, author = {Kaixin Huang and Demitry Farfurnik and Alireza Seif and Mohammad Hafezi and Yi-Kai Liu} } @article {3354, title = {An Uncertainty Principle for the Curvelet Transform, and the Infeasibility of Quantum Algorithms for Finding Short Lattice Vectors}, year = {2023}, month = {10/5/2023}, abstract = {The curvelet transform is a special type of wavelet transform, which is useful for estimating the locations and orientations of waves propagating in Euclidean space. We prove an uncertainty principle that lower-bounds the variance of these estimates, for radial wave functions in n dimensions.

As an application of this uncertainty principle, we show the infeasibility of one approach to constructing quantum algorithms for solving lattice problems, such as the approximate shortest vector problem (approximate-SVP), and bounded distance decoding (BDD). This gives insight into the computational intractability of approximate-SVP, which plays an important role in algorithms for integer programming, and in post-quantum cryptosystems.

In this approach to solving lattice problems, one prepares quantum superpositions of Gaussian-like wave functions centered at lattice points. A key step in this procedure requires finding the center of each Gaussian-like wave function, using the quantum curvelet transform. We show that, for any choice of the Gaussian-like wave function, the error in this step will be above the threshold required to solve BDD and approximate-SVP.

Long-range correlated errors can severely impact the performance of NISQ (noisy intermediate-scale quantum) devices, and fault-tolerant quantum computation. Characterizing these errors is important for improving the performance of these devices, via calibration and error correction, and to ensure correct interpretation of the results. We propose a compressed sensing method for detecting two-qubit correlated dephasing errors, assuming only that the correlations are sparse (i.e., at most s pairs of qubits have correlated errors, where s << n(n-1)/2, and n is the total number of qubits). In particular, our method can detect long-range correlations between any two qubits in the system (i.e., the correlations are not restricted to be geometrically local).

Our method is highly scalable: it requires as few as m = O(s log n) measurement settings, and efficient classical postprocessing based on convex optimization. In addition, when m = O(s log^4(n)), our method is highly robust to noise, and has sample complexity O(max(n,s)^2 log^4(n)), which can be compared to conventional methods that have sample complexity O(n^3). Thus, our method is advantageous when the correlations are sufficiently sparse, that is, when s < O(n^(3/2) / log^2(n)). Our method also performs well in numerical simulations on small system sizes, and has some resistance to state-preparation-and-measurement (SPAM) errors. The key ingredient in our method is a new type of compressed sensing measurement, which works by preparing entangled Greenberger-Horne-Zeilinger states (GHZ states) on random subsets of qubits, and measuring their decay rates with high precision.
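A minimal sketch of the compressed-sensing idea at work here, under heavy simplifying assumptions: the vector of pairwise correlation strengths is sparse, so it can be recovered from far fewer random measurements than unknowns. The toy below recovers a 1-sparse vector with a simple matched filter rather than the paper's GHZ-decay measurements and convex program, and all sizes are illustrative.

```python
import random

# Toy sparse recovery: y = A w, where w has a single nonzero entry
# (one "correlated pair") and A has random +/-1 rows. With m << N
# measurements, correlating y against each column finds the support.
random.seed(1)
N = 45                      # number of qubit pairs, n(n-1)/2 for n = 10
m = 40                      # measurement settings, ~ O(s log N) for s = 1
true_idx, true_val = 17, 0.8

A = [[random.choice([-1, 1]) for _ in range(N)] for _ in range(m)]
y = [A[i][true_idx] * true_val for i in range(m)]   # 1-sparse signal

# Matched filter: the column most correlated with the data wins.
scores = [abs(sum(A[i][j] * y[i] for i in range(m))) / m for j in range(N)]
recovered = max(range(N), key=lambda j: scores[j])
print("recovered pair index:", recovered)   # matches true_idx w.h.p.
```

The real protocol replaces the matched filter with convex optimization (e.g., an l1-regularized fit), which handles s > 1 and noisy decay-rate estimates.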

The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through a public, competition-like process. The new public-key cryptography standards will specify one or more additional digital signatures, public-key encryption, and key-establishment algorithms to augment Federal Information Processing Standard (FIPS) 186-4, Digital Signature Standard (DSS), as well as NIST Special Publication (SP) 800-56A Revision 3, Recommendation for Pair-Wise Key-Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B Revision 2, Recommendation for Pair-Wise Key Establishment Using Integer Factorization Cryptography. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.

The NIST Post-Quantum Cryptography Standardization Process began in 2017 with 69 candidate algorithms that met both the minimum acceptance criteria and submission requirements. The first round lasted until January 2019, during which candidate algorithms were evaluated based on their security, performance, and other characteristics. NIST selected 26 algorithms to advance to the second round for more analysis. This report describes the evaluation and selection process, based on public feedback and internal review, of the second-round candidates. The report summarizes the 26 second-round candidate algorithms and identifies those selected to move forward to the third round of the competition. The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community's attention during the next round.

}, doi = {https://doi.org/10.6028/NIST.IR.8309}, author = {Gorjan Alagic and Jacob Alperin-Sheriff and Daniel Apon and David Cooper and Quynh Dang and John Kelsey and Yi-Kai Liu and Carl Miller and Dustin Moody and Rene Peralta and Ray Perlner and Angela Robinson and Daniel Smith-Tone} } @article {2451, title = {Classifying single-qubit noise using machine learning}, year = {2019}, month = {8/30/2019}, abstract = {Quantum characterization, validation, and verification (QCVV) techniques are used to probe, characterize, diagnose, and detect errors in quantum information processors (QIPs). An important component of any QCVV protocol is a mapping from experimental data to an estimate of a property of a QIP. Machine learning (ML) algorithms can help automate the development of QCVV protocols, creating such maps by learning them from training data. We identify the critical components of "machine-learned" QCVV techniques, and present a rubric for developing them. To demonstrate this approach, we focus on the problem of determining whether noise affecting a single qubit is coherent or stochastic (incoherent) using the data sets originally proposed for gate set tomography. We leverage known ML algorithms to train a classifier distinguishing these two kinds of noise. The accuracy of the classifier depends on how well it can approximate the "natural" geometry of the training data. We find that GST data sets generated by a noisy qubit can reliably be separated by linear surfaces, although feature engineering can be necessary. We also show that the classifier learned by a support vector machine (SVM) is robust under finite-sample noise.

}, url = {https://arxiv.org/abs/1908.11762}, author = {Travis L. Scholten and Yi-Kai Liu and Kevin Young and Robin Blume-Kohout} } @article {2624, title = {Status Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process}, institution = {National Institute of Standards and Technology}, year = {2019}, type = {techreport}, abstract = {The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through a public competition-like process. The new public-key cryptography standards will specify one or more additional digital signature, public-key encryption, and key-establishment algorithms to augment FIPS 186-4, Digital Signature Standard (DSS), as well as special publications SP 800-56A Revision 2, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B, Recommendation for Pair-Wise Key-Establishment Schemes Using Integer Factorization. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.

In November 2017, 82 candidate algorithms were submitted to NIST for consideration. Among these, 69 met both the minimum acceptance criteria and our submission requirements, and were accepted as First-Round Candidates on Dec. 20, 2017, marking the beginning of the First Round of the NIST Post-Quantum Cryptography Standardization Process. This report describes the evaluation criteria and selection process, based on public feedback and internal review of the first-round candidates, and summarizes the 26 candidate algorithms announced on January 30, 2019 for moving forward to the second round of the competition. The 17 Second-Round Candidate public-key encryption and key-establishment algorithms are BIKE, Classic McEliece, CRYSTALS-KYBER, FrodoKEM, HQC, LAC, LEDAcrypt (merger of LEDAkem/LEDApkc), NewHope, NTRU (merger of NTRUEncrypt/NTRU-HRSS-KEM), NTRU Prime, NTS-KEM, ROLLO (merger of LAKE/LOCKER/Ouroboros-R), Round5 (merger of Hila5/Round2), RQC, SABER, SIKE, and Three Bears. The 9 Second-Round Candidates for digital signatures are CRYSTALS-DILITHIUM, FALCON, GeMSS, LUOV, MQDSS, Picnic, qTESLA, Rainbow, and SPHINCS+.

From dice to modern complex circuits, there have been many attempts to build increasingly better devices to generate random numbers. Today, randomness is fundamental to security and cryptographic systems, as well as safeguarding privacy. A key challenge with random number generators is that it is hard to ensure that their outputs are unpredictable. For a random number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model describing the underlying physics is required to assert unpredictability. Such a model must make a number of assumptions that may not be valid, thereby compromising the integrity of the device. However, it is possible to exploit the phenomenon of quantum nonlocality with a loophole-free Bell test to build a random number generator that can produce output that is unpredictable to any adversary limited only by general physical principles. With recent technological developments, it is now possible to carry out such a loophole-free Bell test. Here we present certified randomness obtained from a photonic Bell experiment and extract 1024 random bits uniform to within 10^(-12). These random bits could not have been predicted within any physical theory that prohibits superluminal signaling and allows one to make independent measurement choices. To certify and quantify the randomness, we describe a new protocol that is optimized for apparatuses characterized by a low per-trial violation of Bell inequalities. We thus enlisted an experimental result that fundamentally challenges the notion of determinism to build a system that can increase trust in random sources. In the future, random number generators based on loophole-free Bell tests may play a role in increasing the security and trust of our cryptographic systems and infrastructure.

}, doi = {https://doi.org/10.1038/s41586-018-0019-0}, url = {https://arxiv.org/abs/1803.06219}, author = {Peter Bierhorst and Emanuel Knill and Scott Glancy and Yanbao Zhang and Alan Mink and Stephen Jordan and Andrea Rommal and Yi-Kai Liu and Bradley Christensen and Sae Woo Nam and Martin J. Stevens and Lynden K. Shalm} } @article {2139, title = {Phase Retrieval Without Small-Ball Probability Assumptions}, journal = {IEEE Transactions on Information Theory }, volume = {64}, year = {2018}, month = {2018/01/01}, pages = {485-500}, abstract = {In the context of the phase retrieval problem, it is known that certain natural classes of measurements, such as Fourier measurements and random Bernoulli measurements, do not lead to the unique reconstruction of all possible signals, even in combination with certain practically feasible random masks. To avoid this difficulty, the analysis is often restricted to measurement ensembles (or masks) that satisfy a small-ball probability condition, in order to ensure that the reconstruction is unique. This paper shows a complementary result: for random Bernoulli measurements, there is still a large class of signals that can be reconstructed uniquely, namely, those signals that are non-peaky. In fact, this result is much more general: it holds for random measurements sampled from any subgaussian distribution, without any small-ball conditions. This is demonstrated in two ways: 1) a proof of stability and uniqueness, and 2) a uniform recovery guarantee for the PhaseLift algorithm. In both cases, the number of measurements m approaches the information-theoretic lower bound. Finally, for random Bernoulli measurements with erasures, it is shown that PhaseLift achieves uniform recovery of all signals (including peaky ones).
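The role of the non-peakiness assumption can be seen in a tiny experiment (an illustrative sketch, not code from the paper): every coordinate of a Bernoulli +/-1 measurement vector squares to 1, so maximally peaky signals such as the standard basis vectors e1 and e2 produce the constant measurement 1 and can never be told apart.

```python
import random

# Why peaky signals break uniqueness under Bernoulli measurements:
# for a in {-1,+1}^n, |<a, e1>|^2 = a[0]^2 = 1 = a[1]^2 = |<a, e2>|^2,
# so e1 and e2 yield identical phaseless data for every measurement.
random.seed(0)
n, trials = 8, 1000
e1 = [1] + [0] * (n - 1)
e2 = [0, 1] + [0] * (n - 2)

def phaseless(a, x):
    return sum(ai * xi for ai, xi in zip(a, x)) ** 2

for _ in range(trials):
    a = [random.choice([-1, 1]) for _ in range(n)]
    assert phaseless(a, e1) == phaseless(a, e2) == 1
print("e1 and e2 are indistinguishable under Bernoulli measurements")
```

Non-peaky signals spread their mass over many coordinates, which is exactly what rules out this degeneracy in the paper's recovery guarantees.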

}, doi = {10.1109/TIT.2017.2757520}, url = {http://ieeexplore.ieee.org/document/8052535/}, author = {Felix Krahmer and Yi-Kai Liu} } @article {2098, title = {Pseudorandom States, Non-Cloning Theorems and Quantum Money}, journal = {In: Shacham H., Boldyreva A. (eds) Advances in Cryptology {\textendash} CRYPTO 2018. CRYPTO 2018. Lecture Notes in Computer Science.}, volume = {10993}, year = {2018}, month = {2017/11/01}, abstract = {We propose the concept of pseudorandom states and study their constructions, properties, and applications. Under the assumption that quantum-secure one-way functions exist, we present concrete and efficient constructions of pseudorandom states. The non-cloning theorem plays a central role in our study: it motivates the proper definition and characterizes one of the important properties of pseudorandom quantum states. Namely, there is no efficient quantum algorithm that can create more copies of the state from a given number of pseudorandom states. As the main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme.

}, doi = {https://doi.org/10.1007/978-3-319-96878-0_5}, url = {https://arxiv.org/abs/1711.00385}, author = {Zhengfeng Ji and Yi-Kai Liu and Fang Song} } @article {2326, title = {Quantum Cryptanalysis: Shor, Grover, and Beyond}, journal = {IEEE Security \& Privacy }, volume = {16}, year = {2018}, month = {2018/09}, pages = {14-21}, doi = {10.1109/MSP.2018.3761719}, author = {Stephen P. Jordan and Yi-Kai Liu} } @article {2138, title = {Recovering quantum gates from few average gate fidelities}, journal = {Phys. Rev. Lett. }, volume = {121}, year = {2018}, month = {2018/03/01}, pages = {170502}, abstract = {Characterising quantum processes is a key task in, and constitutes a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors, and can be used to benchmark Clifford gates. A complementing approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries. Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case.
As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.

}, doi = {https://doi.org/10.1103/PhysRevLett.121.170502}, url = {https://arxiv.org/abs/1803.00572}, author = {Ingo Roth and Richard Kueng and Shelby Kimmel and Yi-Kai Liu and David Gross and Jens Eisert and Martin Kliesch} } @article {1945, title = {Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling}, year = {2017}, month = {2017/02/16}, abstract = {Random numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment for the generation of randomness that cannot be predicted within any physical theory that allows one to make independent measurement choices and prohibits superluminal signaling. To certify and quantify the randomness, we describe a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtained 256 new random bits, uniform to within 0.001.

}, url = {https://arxiv.org/abs/1702.05178}, author = {Peter Bierhorst and Emanuel Knill and Scott Glancy and Alan Mink and Stephen P. Jordan and Andrea Rommal and Yi-Kai Liu and Bradley Christensen and Sae Woo Nam and Lynden K. Shalm} } @article {2201, title = {Exponential improvements for quantum-accessible reinforcement learning}, year = {2017}, abstract = {Quantum computers can offer dramatic improvements over classical devices for data analysis tasks such as prediction and classification. However, less is known about the advantages that quantum computers may bring in the setting of reinforcement learning, where learning is achieved via interaction with a task environment. Here, we consider a special case of reinforcement learning, where the task environment allows quantum access. In addition, we impose certain "naturalness" conditions on the task environment, which rule out the kinds of oracle problems that are studied in quantum query complexity (and for which quantum speedups are well-known). Within this framework of quantum-accessible reinforcement learning environments, we demonstrate that quantum agents can achieve exponential improvements in learning efficiency, surpassing previous results that showed only quadratic improvements. A key step in the proof is to construct task environments that encode well-known oracle problems, such as Simon's problem and Recursive Fourier Sampling, while satisfying the above "naturalness" conditions for reinforcement learning. Our results suggest that quantum agents may perform well in certain game-playing scenarios, where the game has recursive structure, and the agent can learn by playing against itself.

}, url = {https://arxiv.org/abs/1710.11160}, author = {Vedran Dunjko and Yi-Kai Liu and Xingyao Wu and J. M. Taylor} } @conference {2140, title = {Phase retrieval using unitary 2-designs}, booktitle = {SampTA 2017}, year = {2017}, month = {2017/07}, abstract = {We consider a variant of the phase retrieval problem, where vectors are replaced by unitary matrices, i.e., the unknown signal is a unitary matrix U, and the measurements consist of squared inner products |tr(C†U)|^2 with unitary matrices C that are chosen by the observer. This problem has applications to quantum process tomography, when the unknown process is a unitary operation. We show that PhaseLift, a convex programming algorithm for phase retrieval, can be adapted to this matrix setting, using measurements that are sampled from unitary 4- and 2-designs. In the case of unitary 4-design measurements, we show that PhaseLift can reconstruct all unitary matrices, using a near-optimal number of measurements. This extends previous work on PhaseLift using spherical 4-designs. In the case of unitary 2-design measurements, we show that PhaseLift still works pretty well on average: it recovers almost all signals, up to a constant additive error, using a near-optimal number of measurements. These 2-design measurements are convenient for quantum process tomography, as they can be implemented via randomized benchmarking techniques. This is the first positive result on PhaseLift using 2-designs.
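A concrete sketch of the measurement model, using a hypothetical 2x2 example rather than code from the paper: the observed data are squared inner products |tr(C†U)|^2, which are invariant under a global phase on U, and that invisible phase is exactly what makes this a phase retrieval problem.

```python
import cmath
import math

# |tr(C^dagger U)|^2 for 2x2 matrices, showing global-phase invariance.
def dagger(M):
    return [[M[j][i].conjugate() for j in range(2)] for i in range(2)]

def trace_prod(C, U):
    """tr(C^dagger U) for 2x2 matrices given as nested lists."""
    Cd = dagger(C)
    return sum(Cd[i][k] * U[k][i] for i in range(2) for k in range(2))

s = 1 / math.sqrt(2)
H = [[s, s], [s, -s]]                     # Hadamard as the unknown U

y = abs(trace_prod(H, H)) ** 2            # measure U = H with C = H
print(y)                                  # tr(H^dagger H) = tr(I) = 2, so y = 4

phase = cmath.exp(1j * 0.7)               # attach a global phase to U
U2 = [[phase * u for u in row] for row in H]
assert abs(abs(trace_prod(H, U2)) ** 2 - y) < 1e-12   # phase is invisible
```

PhaseLift lifts U to the rank-one matrix U U†, on which these measurements become linear, which is what makes the convex-programming analysis possible.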

}, doi = {10.1109/SAMPTA.2017.8024414}, url = {http://ieeexplore.ieee.org/document/8024414/}, author = {Shelby Kimmel and Yi-Kai Liu} } @article {2099, title = {Super-polynomial and exponential improvements for quantum-enhanced reinforcement learning}, year = {2017}, month = {2017/12/12}, abstract = {Recent work on quantum machine learning has demonstrated that quantum computers can offer dramatic improvements over classical devices for data mining, prediction and classification. However, less is known about the advantages using quantum computers may bring in the more general setting of reinforcement learning, where learning is achieved via interaction with a task environment that provides occasional rewards. Reinforcement learning can incorporate data-analysis-oriented learning settings as special cases, but also includes more complex situations where, e.g., reinforcing feedback is delayed. In a few recent works, Grover-type amplification has been utilized to construct quantum agents that achieve up-to-quadratic improvements in learning efficiency. These encouraging results have left open the key question of whether super-polynomial improvements in learning times are possible for genuine reinforcement learning problems, that is, problems that go beyond the other more restricted learning paradigms. In this work, we provide a family of such genuine reinforcement learning tasks. We construct quantum-enhanced learners which learn super-polynomially, and even exponentially faster than any classical reinforcement learning model, and we discuss the potential impact our results may have on future technologies.

}, url = {https://arxiv.org/abs/1710.11160}, author = {Vedran Dunjko and Yi-Kai Liu and Xingyao Wu and J. M. Taylor} } @article {2100, title = {Thermodynamic Analysis of Classical and Quantum Search Algorithms}, year = {2017}, month = {2017/09/29}, abstract = {We analyze the performance of classical and quantum search algorithms from a thermodynamic perspective, focusing on resources such as time, energy, and memory size. We consider two examples that are relevant to post-quantum cryptography: Grover's search algorithm, and the quantum algorithm for collision-finding. Using Bennett's "Brownian" model of low-power reversible computation, we show classical algorithms that have the same asymptotic energy consumption as these quantum algorithms. Thus, the quantum advantage in query complexity does not imply a reduction in these thermodynamic resource costs. In addition, we present realistic estimates of the resource costs of quantum and classical search, for near-future computing technologies. We find that, if memory is cheap, classical exhaustive search can be surprisingly competitive with Grover's algorithm.
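A back-of-envelope comparison in the spirit of this analysis (illustrative constants only, not the paper's detailed resource model): Grover's algorithm uses about (pi/4)*sqrt(N) oracle queries versus about N/2 classically on average, and the paper's point is that query counts alone do not capture energy or memory costs.

```python
import math

# Query-count comparison for unstructured search over N items.
# These are the standard asymptotic counts; energy and memory costs,
# which the paper argues can erase the advantage, are not modeled here.
def grover_queries(N):
    return (math.pi / 4) * math.sqrt(N)

def classical_queries(N):
    return N / 2

for k in (40, 80, 128):                  # e.g. key-search spaces N = 2^k
    N = 2 ** k
    print(f"N = 2^{k}: Grover ~2^{math.log2(grover_queries(N)):.1f} queries, "
          f"classical ~2^{math.log2(classical_queries(N)):.1f} queries")
```

The quadratic gap in queries is real, but in Bennett's Brownian model a slow reversible classical search can match the quantum algorithm's asymptotic energy use, which is the thermodynamic point of the abstract.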

}, url = {https://arxiv.org/abs/1709.10510}, author = {Ray Perlner and Yi-Kai Liu} } @article {1909, title = {Optimized tomography of continuous variable systems using excitation counting}, journal = {Physical Review A}, volume = {94}, year = {2016}, month = {2016/11/21}, pages = {052327}, abstract = {We propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound on the reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.
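A small numerical sketch of the measurement primitive (this is standard quantum optics used as an assumed illustration, not code from the paper): displacing the vacuum yields a coherent state, whose excitation-counting statistics are Poissonian with mean |alpha|^2; the protocol applies the same displaced counting to an unknown state.

```python
import math

# Excitation-counting statistics of the displaced vacuum D(alpha)|0>:
# a coherent state, so P(n) = exp(-|alpha|^2) |alpha|^(2n) / n!.
def count_prob(alpha_sq, n):
    return math.exp(-alpha_sq) * alpha_sq ** n / math.factorial(n)

alpha_sq = 1.5                          # |alpha|^2 = mean photon number
probs = [count_prob(alpha_sq, n) for n in range(20)]

print("total probability:", sum(probs))                     # ~ 1
print("mean photon number:", sum(n * p for n, p in enumerate(probs)))
```

For an unknown state, the count distribution after each displacement gives one column of the sensing map, and the paper's optimization chooses the displacements alpha that keep that map well conditioned.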

}, doi = {10.1103/PhysRevA.94.052327}, url = {http://link.aps.org/doi/10.1103/PhysRevA.94.052327}, author = {Shen, Chao and Heeres, Reinier W. and Reinhold, Philip and Jiang, Luyao and Yi-Kai Liu and Schoelkopf, Robert J. and Jiang, Liang} } @article {1585, title = {Phase Retrieval Without Small-Ball Probability Assumptions: Stability and Uniqueness}, journal = {SampTA}, year = {2015}, month = {2015/05/25}, pages = {411-414}, abstract = {We study stability and uniqueness for the phase retrieval problem. That is, we ask when a signal x ∈ R^n is stably and uniquely determined (up to small perturbations), when one performs phaseless measurements of the form y_i = |a_i^T x|^2 (for i = 1, ..., N), where the vectors a_i ∈ R^n are chosen independently at random, with each coordinate a_{ij} ∈ R being chosen independently from a fixed sub-Gaussian distribution D. It is well known that for many common choices of D, certain ambiguities can arise that prevent x from being uniquely determined. In this note we show that for any sub-Gaussian distribution D, with no additional assumptions, most vectors x cannot lead to such ambiguities. More precisely, we show stability and uniqueness for all sets of vectors T ⊂ R^n which are not too peaky, in the sense that at most a constant fraction of their mass is concentrated on any one coordinate. The number of measurements needed to recover x ∈ T depends on the complexity of T in a natural way, extending previous results of Eldar and Mendelson [12].}, url = {http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=\&arnumber=7148923\&tag=1}, author = {Felix Krahmer and Yi-Kai Liu} } @article {1596, title = {Quantum Compressed Sensing Using 2-Designs}, year = {2015}, month = {2015/10/29}, abstract = {We develop a method for quantum process tomography that combines the efficiency of compressed sensing with the robustness of randomized benchmarking.
Our method is robust to state preparation and measurement errors, and it achieves a quadratic speedup over conventional tomography when the unknown process is a generic unitary evolution. Our method is based on PhaseLift, a convex programming technique for phase retrieval. We show that this method achieves approximate recovery of almost all signals, using measurements sampled from spherical or unitary 2-designs. This is the first positive result on PhaseLift using 2-designs. We also show that exact recovery of all signals is possible using unitary 4-designs. Previous positive results for PhaseLift required spherical 4-designs, while PhaseLift was known to fail in certain cases when using spherical 2-designs.}, url = {http://arxiv.org/abs/1510.08887}, author = {Shelby Kimmel and Yi-Kai Liu} } @article {1429, title = {Privacy Amplification in the Isolated Qubits Model}, journal = {Eurocrypt}, year = {2014}, month = {2014/10/15}, pages = {785-814}, abstract = { Isolated qubits are a special class of quantum devices, which can be used to implement tamper-resistant cryptographic hardware such as one-time memories (OTM{\textquoteright}s). Unfortunately, these OTM constructions leak some information, and standard methods for privacy amplification cannot be applied here, because the adversary has advance knowledge of the hash function that the honest parties will use. In this paper we show a stronger form of privacy amplification that solves this problem, using a fixed hash function that is secure against all possible adversaries in the isolated qubits model. This allows us to construct single-bit OTM{\textquoteright}s which only leak an exponentially small amount of information. We then study a natural generalization of the isolated qubits model, where the adversary is allowed to perform a polynomially-bounded number of entangling gates, in addition to unbounded local operations and classical communication (LOCC). 
We show that our technique for privacy amplification is also secure in this setting. }, doi = {10.1007/978-3-662-46803-6_26}, url = {http://arxiv.org/abs/1410.3918v2}, author = {Yi-Kai Liu} } @article {1428, title = {Single-shot security for one-time memories in the isolated qubits model}, journal = {CRYPTO}, volume = {Part II}, year = {2014}, month = {2014/02/01}, pages = {19-36}, abstract = { One-time memories (OTM{\textquoteright}s) are simple, tamper-resistant cryptographic devices, which can be used to implement sophisticated functionalities such as one-time programs. Can one construct OTM{\textquoteright}s whose security follows from some physical principle? This is not possible in a fully-classical world, or in a fully-quantum world, but there is evidence that OTM{\textquoteright}s can be built using "isolated qubits" -- qubits that cannot be entangled, but can be accessed using adaptive sequences of single-qubit measurements. Here we present new constructions for OTM{\textquoteright}s using isolated qubits, which improve on previous work in several respects: they achieve a stronger "single-shot" security guarantee, which is stated in terms of the (smoothed) min-entropy; they are proven secure against adversaries who can perform arbitrary local operations and classical communication (LOCC); and they are efficiently implementable. These results use Wiesner{\textquoteright}s idea of conjugate coding, combined with error-correcting codes that approach the capacity of the q-ary symmetric channel, and a high-order entropic uncertainty relation, which was originally developed for cryptography in the bounded quantum storage model. 
}, doi = {10.1007/978-3-662-44381-1_2}, url = {http://arxiv.org/abs/1402.0049v2}, author = {Yi-Kai Liu} } @article {1426, title = {Building one-time memories from isolated qubits}, journal = {Innovations in Theoretical Computer Science (ITCS)}, year = {2013}, month = {2013/04/18}, pages = {269-286}, abstract = { One-time memories (OTM{\textquoteright}s) are simple tamper-resistant cryptographic devices, which can be used to implement one-time programs, a very general form of software protection and program obfuscation. Here we investigate the possibility of building OTM{\textquoteright}s using quantum mechanical devices. It is known that OTM{\textquoteright}s cannot exist in a fully-quantum world or in a fully-classical world. Instead, we propose a new model based on "isolated qubits" -- qubits that can only be accessed using local operations and classical communication (LOCC). This model combines a quantum resource (single-qubit measurements) with a classical restriction (on communication between qubits), and can be implemented using current technologies, such as nitrogen vacancy centers in diamond. In this model, we construct OTM{\textquoteright}s that are information-theoretically secure against one-pass LOCC adversaries that use 2-outcome measurements. Our construction resembles Wiesner{\textquoteright}s old idea of quantum conjugate coding, implemented using random error-correcting codes; our proof of security uses entropy chaining to bound the supremum of a suitable empirical process. In addition, we conjecture that our random codes can be replaced by some class of efficiently-decodable codes, to get computationally-efficient OTM{\textquoteright}s that are secure against computationally-bounded LOCC adversaries. 
In addition, we construct data-hiding states, which allow an LOCC sender to encode an (n-O(1))-bit message into n qubits, such that at most half of the message can be extracted by a one-pass LOCC receiver, but the whole message can be extracted by a general quantum receiver. }, doi = {10.1145/2554797.2554823}, url = {http://arxiv.org/abs/1304.5007v2}, author = {Yi-Kai Liu} } @article {1587, title = {Multilingual Summarization: Dimensionality Reduction and a Step Towards Optimal Term Coverage}, journal = {MultiLing (Workshop on Multilingual Multi-document Summarization)}, year = {2013}, month = {2013/08/09}, pages = {55-63}, abstract = {In this paper we present three term weighting approaches for multi-lingual document summarization and give results on the DUC 2002 data as well as on the 2013 Multilingual Wikipedia feature articles data set. We introduce a new interval-bounded nonnegative matrix factorization. We use this new method, latent semantic analysis (LSA), and latent Dirichlet allocation (LDA) to give three term-weighting methods for multi-document multi-lingual summarization. Results on DUC and TAC data, as well as on the MultiLing 2013 data, demonstrate that these methods are very promising, since they achieve oracle coverage scores in the range of humans for 6 of the 10 test languages. Finally, we present three term weighting approaches for the MultiLing13 single document summarization task on the Wikipedia featured articles. Our submissions significantly outperformed the baseline in 19 out of 41 languages. }, url = {http://aclweb.org/anthology/W/W13/W13-3108.pdf}, author = {John M. Conroy and Sashka T. Davis and Jeff Kubina and Yi-Kai Liu and Dianne P. O{\textquoteright}Leary and Judith D. 
Schlesinger} } @article {1402, title = {Testing quantum expanders is co-QMA-complete}, journal = {Physical Review A}, volume = {87}, year = {2013}, month = {2013/4/15}, abstract = { A quantum expander is a unital quantum channel that is rapidly mixing, has only a few Kraus operators, and can be implemented efficiently on a quantum computer. We consider the problem of estimating the mixing time (i.e., the spectral gap) of a quantum expander. We show that this problem is co-QMA-complete. This has applications to testing randomized constructions of quantum expanders, and studying thermalization of open quantum systems. }, doi = {10.1103/PhysRevA.87.042317}, url = {http://arxiv.org/abs/1210.0787v2}, author = {Adam D. Bookatz and Stephen P. Jordan and Yi-Kai Liu and Pawel Wocjan} } @article {1434, title = {Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators }, journal = {New Journal of Physics}, volume = {14}, year = {2012}, month = {2012/09/27}, pages = {095022}, abstract = { Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. 
Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank. }, doi = {10.1088/1367-2630/14/9/095022}, url = {http://arxiv.org/abs/1205.2300v2}, author = {Steven T. Flammia and David Gross and Yi-Kai Liu and Jens Eisert} } @article {1435, title = {A Spectral Algorithm for Latent Dirichlet Allocation}, journal = {Algorithmica}, year = {2012}, month = {2012/04/30}, pages = {193-214}, abstract = { The problem of topic modeling can be seen as a generalization of the clustering problem, in that it posits that observations are generated due to multiple latent factors (e.g., the words in each document are generated as a mixture of several active topics, as opposed to just one). This increased representational power comes at the cost of a more challenging unsupervised learning problem of estimating the topic probability vectors (the distributions over words for each topic), when only the words are observed and the corresponding topics are hidden. 
We provide a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of mixture models, including the popular latent Dirichlet allocation (LDA) model. For LDA, the procedure correctly recovers both the topic probability vectors and the prior over the topics, using only trigram statistics (i.e., third order moments, which may be estimated with documents containing just three words). The method, termed Excess Correlation Analysis (ECA), is based on a spectral decomposition of low order moments (third and fourth order) via two singular value decompositions (SVDs). Moreover, the algorithm is scalable since the SVD operations are carried out on $k\times k$ matrices, where $k$ is the number of latent factors (e.g. the number of topics), rather than in the $d$-dimensional observed space (typically $d \gg k$). }, url = {http://arxiv.org/abs/1204.6703v4}, author = {Animashree Anandkumar and Dean P. Foster and Daniel Hsu and Sham M. Kakade and Yi-Kai Liu} } @article {1433, title = {Continuous-variable quantum compressed sensing}, year = {2011}, month = {2011/11/03}, abstract = { We significantly extend recently developed methods to faithfully reconstruct unknown quantum states that are approximately low-rank, using only a few measurement settings. Our new method is general enough to allow for measurements from a continuous family, and is also applicable to continuous-variable states. As a technical result, this work generalizes quantum compressed sensing to the situation where the measured observables are taken from a so-called tight frame (rather than an orthonormal basis) --- hence covering most realistic measurement scenarios. As an application, we discuss the reconstruction of quantum states of light from homodyne detection and other types of measurements, and we present simulations that show the advantage of the proposed compressed sensing technique over present methods. 
Finally, we introduce a method to construct a certificate which guarantees the success of the reconstruction with no assumption on the state, and we show how slightly more measurements give rise to "universal" state reconstruction that is highly robust to noise. }, url = {http://arxiv.org/abs/1111.0853v3}, author = {Matthias Ohliger and Vincent Nesme and David Gross and Yi-Kai Liu and Jens Eisert} } @article {1430, title = {Direct Fidelity Estimation from Few Pauli Measurements}, journal = {Physical Review Letters}, volume = {106}, year = {2011}, month = {2011/6/8}, abstract = { We describe a simple method for certifying that an experimental device prepares a desired quantum state rho. Our method is applicable to any pure state rho, and it provides an estimate of the fidelity between rho and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels. }, doi = {10.1103/PhysRevLett.106.230501}, url = {http://arxiv.org/abs/1104.4695v3}, author = {Steven T. Flammia and Yi-Kai Liu} } @article {1427, title = {Universal low-rank matrix recovery from Pauli measurements}, journal = {Advances in Neural Information Processing Systems (NIPS)}, year = {2011}, month = {2011/03/14}, pages = {1638-1646}, abstract = { We study the problem of reconstructing an unknown matrix M of rank r and dimension d using O(rd poly log d) Pauli measurements. This has applications in quantum state tomography, and is a non-commutative analogue of a well-known problem in compressed sensing: recovering a sparse vector from a few of its Fourier coefficients. We show that almost all sets of O(rd log^6 d) Pauli measurements satisfy the rank-r restricted isometry property (RIP). 
This implies that M can be recovered from a fixed ("universal") set of Pauli measurements, using nuclear-norm minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error. A similar result holds for any class of measurements that use an orthonormal operator basis whose elements have small operator norm. Our proof uses Dudley{\textquoteright}s inequality for Gaussian processes, together with bounds on covering numbers obtained via entropy duality. }, url = {http://arxiv.org/abs/1103.2816v2}, author = {Yi-Kai Liu} } @article {1431, title = {Efficient Direct Tomography for Matrix Product States}, year = {2010}, month = {2010/02/24}, abstract = { In this note, we describe a method for reconstructing matrix product states from a small number of efficiently-implementable measurements. Our method is exponentially faster than standard tomography, and it can also be used to certify that the unknown state is an MPS. The basic idea is to use local unitary operations to measure in the Schmidt basis, giving direct access to the MPS representation. This compares favorably with recently and independently proposed methods that recover the MPS tensors by performing a variational minimization, which is computationally intractable in certain cases. Our method also has the advantage of recovering any MPS, while other approaches were limited to special classes of states that exclude important examples such as GHZ and W states. }, url = {http://arxiv.org/abs/1002.4632v1}, author = {Olivier Landon-Cardinal and Yi-Kai Liu and David Poulin} } @article {1436, title = {Efficient quantum state tomography}, journal = {Nature Communications}, volume = {1}, year = {2010}, month = {2010/12/21}, pages = {149}, abstract = { Quantum state tomography, the ability to deduce the state of a quantum system from measured data, is the gold standard for verification and benchmarking of quantum devices. 
It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of quantum measurements and the amount of computation required to process them grows exponentially in the system size. Here we show that we can do exponentially better than direct state tomography for a wide range of quantum states, in particular those that are well approximated by a matrix product state ansatz. We present two schemes for tomography in 1-D quantum systems and touch on generalizations. One scheme requires unitary operations on a constant number of subsystems, while the other requires only local measurements together with more elaborate post-processing. Both schemes rely only on a linear number of experimental operations and classical postprocessing that is polynomial in the system size. A further strength of the methods is that the accuracy of the reconstructed states can be rigorously certified without any a priori assumptions. }, doi = {10.1038/ncomms1147}, url = {http://arxiv.org/abs/1101.4366v1}, author = {Marcus Cramer and Martin B. Plenio and Steven T. Flammia and David Gross and Stephen D. Bartlett and Rolando Somma and Olivier Landon-Cardinal and Yi-Kai Liu and David Poulin} } @article {1243, title = {Quantum property testing for bounded-degree graphs}, journal = {Proc. RANDOM}, year = {2010}, month = {2010/12/14}, pages = {365-376}, abstract = { We study quantum algorithms for testing bipartiteness and expansion of bounded-degree graphs. We give quantum algorithms that solve these problems in time O(N^(1/3)), beating the Omega(sqrt(N)) classical lower bound. For testing expansion, we also prove an Omega(N^(1/4)) quantum query lower bound, thus ruling out the possibility of an exponential quantum speedup. Our quantum algorithms follow from a combination of classical property testing techniques due to Goldreich and Ron, derandomization, and the quantum algorithm for element distinctness. 
The quantum lower bound is obtained by the polynomial method, using novel algebraic techniques and combinatorial analysis to accommodate the graph structure. }, doi = {10.1007/978-3-642-22935-0_31}, url = {http://arxiv.org/abs/1012.3174v3}, author = {Andris Ambainis and Andrew M. Childs and Yi-Kai Liu} } @article {1432, title = {Quantum state tomography via compressed sensing}, journal = {Physical Review Letters}, volume = {105}, year = {2010}, month = {2010/10/4}, abstract = { We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations. }, doi = {10.1103/PhysRevLett.105.150401}, url = {http://arxiv.org/abs/0909.3304v4}, author = {David Gross and Yi-Kai Liu and Steven T. Flammia and Stephen Becker and Jens Eisert} } @article {1425, title = {Quantum Algorithms Using the Curvelet Transform}, journal = {Proc. ACM Symposium on Theory of Computing (STOC)}, year = {2009}, month = {2009/10/28}, pages = {391-400}, abstract = { The curvelet transform is a directional wavelet transform over R^n, which is used to analyze functions that have singularities along smooth surfaces (Candes and Donoho, 2002). I demonstrate how this can lead to new quantum algorithms. 
I give an efficient implementation of a quantum curvelet transform, together with two applications: a single-shot measurement procedure for approximately finding the center of a ball in R^n, given a quantum-sample over the ball; and, a quantum algorithm for finding the center of a radial function over R^n, given oracle access to the function. I conjecture that these algorithms succeed with constant probability, using one quantum-sample and O(1) oracle queries, respectively, independent of the dimension n -- this can be interpreted as a quantum speed-up. To support this conjecture, I prove rigorous bounds on the distribution of probability mass for the continuous curvelet transform. This shows that the above algorithms work in an idealized "continuous" model. }, url = {http://arxiv.org/abs/0810.4968v2}, author = {Yi-Kai Liu} } @article {1424, title = {The Complexity of the Consistency and N-representability Problems for Quantum States}, year = {2007}, month = {2007/12/18}, type = {PhD Thesis, Univ. of California, San Diego}, abstract = { QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most notably the {\textquoteleft}{\textquoteleft}Local Hamiltonian{\textquoteright}{\textquoteright} problem introduced by Kitaev. In this dissertation we show some new QMA-complete problems. The first one is {\textquoteleft}{\textquoteleft}Consistency of Local Density Matrices{\textquoteright}{\textquoteright}: given several density matrices describing different (constant-size) subsets of an n-qubit system, decide whether these are consistent with a single global state. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from Local Hamiltonian. This uses algorithms for convex optimization with a membership oracle, due to Yudin and Nemirovskii. 
Next we show that two problems from quantum chemistry, {\textquoteleft}{\textquoteleft}Fermionic Local Hamiltonian{\textquoteright}{\textquoteright} and {\textquoteleft}{\textquoteleft}N-representability,{\textquoteright}{\textquoteright} are QMA-complete. These problems arise in calculating the ground state energies of molecular systems. N-representability is a key component in recently developed numerical methods using the contracted Schrodinger equation. Although these problems have been studied since the 1960{\textquoteright}s, it is only recently that the theory of quantum computation has allowed us to properly characterize their complexity. Finally, we study some special cases of the Consistency problem, pertaining to 1-dimensional and {\textquoteleft}{\textquoteleft}stoquastic{\textquoteright}{\textquoteright} systems. We also give an alternative proof of a result due to Jaynes: whenever local density matrices are consistent, they are consistent with a Gibbs state. }, url = {http://arxiv.org/abs/0712.3041v1}, author = {Yi-Kai Liu} } @article {1423, title = {The Local Consistency Problem for Stoquastic and 1-D Quantum Systems}, year = {2007}, month = {2007/12/10}, abstract = { The Local Hamiltonian problem (finding the ground state energy of a quantum system) is known to be QMA-complete. The Local Consistency problem (deciding whether descriptions of small pieces of a quantum system are consistent) is also known to be QMA-complete. Here we consider special cases of Local Hamiltonian, for {\textquoteleft}{\textquoteleft}stoquastic{\textquoteright}{\textquoteright} and 1-dimensional systems, that seem to be strictly easier than QMA. We show that there exist analogous special cases of Local Consistency, that have equivalent complexity (up to poly-time oracle reductions). Our main technical tool is a new reduction from Local Consistency to Local Hamiltonian, using SDP duality. 
}, url = {http://arxiv.org/abs/0712.1388v2}, author = {Yi-Kai Liu} } @article {1588, title = {N-representability is QMA-complete}, journal = {Phys. Rev. Lett.}, volume = {98}, year = {2007}, month = {2007/03/16}, abstract = {We study the computational complexity of the N-representability problem in quantum chemistry. We show that this problem is quantum Merlin-Arthur complete, which is the quantum generalization of nondeterministic polynomial time complete. Our proof uses a simple mapping from spin systems to fermionic systems, as well as a convex optimization technique that reduces the problem of finding ground states to N-representability.}, doi = {10.1103/PhysRevLett.98.110503}, url = {http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.110503}, author = {Yi-Kai Liu and Matthias Christandl and F. Verstraete} } @article {1589, title = {On Bounded Distance Decoding for General Lattices}, journal = {Proc. RANDOM}, year = {2006}, month = {2006/01/01}, pages = {450-461}, abstract = {A central problem in the algorithmic study of lattices is the closest vector problem: given a lattice L represented by some basis, and a target point y⃗, find the lattice point closest to y⃗. Bounded Distance Decoding is a variant of this problem in which the target is guaranteed to be close to the lattice, relative to the minimum distance λ1(L) of the lattice. Specifically, in the α-Bounded Distance Decoding problem (α-BDD), we are given a lattice L and a vector y⃗ (within distance α·λ1(L) from the lattice), and we are asked to find a lattice point x⃗ ∈ L within distance α·λ1(L) from the target. In coding theory, the lattice points correspond to codewords, and the target points correspond to lattice points being perturbed by noise vectors. Since in coding theory the lattice is usually fixed, we may {\textquotedblleft}pre-process{\textquotedblright} it before receiving any targets, to make the subsequent decoding faster. This leads us to consider α-BDD with pre-processing. 
We show how a recent technique of Aharonov and Regev [2] can be used to solve α-BDD with pre-processing in polynomial time for α = O(√((log n)/n)). This improves upon the previously best known algorithm due to Klein [13], which solved the problem for α = O(1/n). We also establish hardness results for α-BDD and α-BDD with pre-processing, as well as generalize our results to other ℓp norms.}, url = {http://link.springer.com/chapter/10.1007/11830924_41$\#$page-1}, author = {Yi-Kai Liu and Vadim Lyubashevsky and Daniele Micciancio} } @article {1422, title = {Consistency of Local Density Matrices is QMA-complete}, journal = {Proc. RANDOM}, year = {2006}, month = {2006/04/21}, pages = {438-449}, abstract = { Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes a subset C_i of the qubits. We say that the rho_i are {\textquoteleft}{\textquoteleft}consistent{\textquoteright}{\textquoteright} if there exists some global state sigma (on all n qubits) that matches each of the rho_i on the subsets C_i. This generalizes the classical notion of the consistency of marginal probability distributions. We show that deciding the consistency of local density matrices is QMA-complete (where QMA is the quantum analogue of NP). This gives an interesting example of a hard problem in QMA. Our proof is somewhat unusual: we give a Turing reduction from Local Hamiltonian, using a convex optimization algorithm by Bertsimas and Vempala, which is based on random sampling. Unlike in the classical case, simple mapping reductions do not seem to work here. 
}, url = {http://arxiv.org/abs/quant-ph/0604166v3}, author = {Yi-Kai Liu} } @article {1421, title = {Gibbs States and the Consistency of Local Density Matrices}, year = {2006}, month = {2006/03/02}, abstract = { Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes some subset of the qubits. We say that rho_1,...,rho_m are "consistent" if there exists a global state sigma (on all n qubits) whose reduced density matrices match rho_1,...,rho_m. We prove the following result: if rho_1,...,rho_m are consistent with some state sigma > 0, then they are also consistent with a state sigma{\textquoteright} of the form sigma{\textquoteright} = (1/Z) exp(M_1+...+M_m), where each M_i is a Hermitian matrix acting on the same qubits as rho_i, and Z is a normalizing factor. (This is known as a Gibbs state.) Actually, we show a more general result, on the consistency of a set of expectation values