Long-range correlated errors can severely impact the performance of NISQ (noisy intermediate-scale quantum) devices and of fault-tolerant quantum computation. Characterizing these errors is important for improving the performance of these devices, via calibration and error correction, and for ensuring correct interpretation of the results. We propose a compressed sensing method for detecting two-qubit correlated dephasing errors, assuming only that the correlations are sparse (i.e., at most s pairs of qubits have correlated errors, where s << n(n-1)/2 and n is the total number of qubits). In particular, our method can detect long-range correlations between any two qubits in the system (i.e., the correlations are not restricted to be geometrically local).

Our method is highly scalable: it requires as few as m = O(s log n) measurement settings, and efficient classical postprocessing based on convex optimization. In addition, when m = O(s log^4(n)), our method is highly robust to noise, and has sample complexity O(max(n,s)^2 log^4(n)), which can be compared to conventional methods that have sample complexity O(n^3). Thus, our method is advantageous when the correlations are sufficiently sparse, that is, when s < O(n^(3/2) / log^2(n)). Our method also performs well in numerical simulations on small system sizes, and has some resistance to state-preparation-and-measurement (SPAM) errors. The key ingredient in our method is a new type of compressed sensing measurement, which works by preparing entangled Greenberger-Horne-Zeilinger states (GHZ states) on random subsets of qubits, and measuring their decay rates with high precision.

The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through a public, competition-like process. The new public-key cryptography standards will specify one or more additional digital signatures, public-key encryption, and key-establishment algorithms to augment Federal Information Processing Standard (FIPS) 186-4, Digital Signature Standard (DSS), as well as NIST Special Publication (SP) 800-56A Revision 3, Recommendation for Pair-Wise Key-Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B Revision 2, Recommendation for Pair-Wise Key Establishment Using Integer Factorization Cryptography. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.

The NIST Post-Quantum Cryptography Standardization Process began in 2017 with 69 candidate algorithms that met both the minimum acceptance criteria and submission requirements. The first round lasted until January 2019, during which candidate algorithms were evaluated based on their security, performance, and other characteristics. NIST selected 26 algorithms to advance to the second round for more analysis. This report describes the evaluation and selection process, based on public feedback and internal review, of the second-round candidates. The report summarizes the 26 second-round candidate algorithms and identifies those selected to move forward to the third round of the competition. The third-round finalist public-key encryption and key-establishment algorithms are Classic McEliece, CRYSTALS-KYBER, NTRU, and SABER. The third-round finalists for digital signatures are CRYSTALS-DILITHIUM, FALCON, and Rainbow. These finalists will be considered for standardization at the end of the third round. In addition, eight alternate candidate algorithms will also advance to the third round: BIKE, FrodoKEM, HQC, NTRU Prime, SIKE, GeMSS, Picnic, and SPHINCS+. These additional candidates are still being considered for standardization, although this is unlikely to occur at the end of the third round. NIST hopes that the announcement of these finalists and additional candidates will serve to focus the cryptographic community’s attention during the next round.

%B NISTIR 8309 %8 07/2020 %G eng %R https://doi.org/10.6028/NIST.IR.8309 %0 Journal Article %D 2019 %T Classifying single-qubit noise using machine learning %A Travis L. Scholten %A Yi-Kai Liu %A Kevin Young %A Robin Blume-Kohout %X Quantum characterization, validation, and verification (QCVV) techniques are used to probe, characterize, diagnose, and detect errors in quantum information processors (QIPs). An important component of any QCVV protocol is a mapping from experimental data to an estimate of a property of a QIP. Machine learning (ML) algorithms can help automate the development of QCVV protocols, creating such maps by learning them from training data. We identify the critical components of "machine-learned" QCVV techniques, and present a rubric for developing them. To demonstrate this approach, we focus on the problem of determining whether noise affecting a single qubit is coherent or stochastic (incoherent) using the data sets originally proposed for gate set tomography (GST). We leverage known ML algorithms to train a classifier distinguishing these two kinds of noise. The accuracy of the classifier depends on how well it can approximate the "natural" geometry of the training data. We find that GST data sets generated by a noisy qubit can reliably be separated by linear surfaces, although feature engineering can be necessary. We also show that the classifier learned by a support vector machine (SVM) is robust under finite-sample noise.

%8 8/30/2019 %G eng %U https://arxiv.org/abs/1908.11762 %0 Journal Article %J National Institute of Standards and Technology %D 2019 %T Status Report on the First Round of the NIST Post-Quantum Cryptography Standardization Process %A Gorjan Alagic %A J. Alperin-Sheriff %A D. Apon %A D. Cooper %A Q. Dang %A Carl Miller %A D. Moody %A R. Peralta %A R. Perlner %A A. Robinson %A D. Smith-Tone %A Yi-Kai Liu %X The National Institute of Standards and Technology is in the process of selecting one or more public-key cryptographic algorithms through a public competition-like process. The new public-key cryptography standards will specify one or more additional digital signature, public-key encryption, and key-establishment algorithms to augment FIPS 186-4, Digital Signature Standard (DSS), as well as special publications SP 800-56A Revision 2, Recommendation for Pair-Wise Key Establishment Schemes Using Discrete Logarithm Cryptography, and SP 800-56B, Recommendation for Pair-Wise Key-Establishment Schemes Using Integer Factorization. It is intended that these algorithms will be capable of protecting sensitive information well into the foreseeable future, including after the advent of quantum computers.

In November 2017, 82 candidate algorithms were submitted to NIST for consideration. Among these, 69 met both the minimum acceptance criteria and our submission requirements, and were accepted as First-Round Candidates on Dec. 20, 2017, marking the beginning of the First Round of the NIST Post-Quantum Cryptography Standardization Process. This report describes the evaluation criteria and selection process, based on public feedback and internal review of the first-round candidates, and summarizes the 26 candidate algorithms announced on January 30, 2019 for moving forward to the second round of the competition. The 17 Second-Round Candidate public-key encryption and key-establishment algorithms are BIKE, Classic McEliece, CRYSTALS-KYBER, FrodoKEM, HQC, LAC, LEDAcrypt (merger of LEDAkem/LEDApkc), NewHope, NTRU (merger of NTRUEncrypt/NTRU-HRSS-KEM), NTRU Prime, NTS-KEM, ROLLO (merger of LAKE/LOCKER/Ouroboros-R), Round5 (merger of Hila5/Round2), RQC, SABER, SIKE, and Three Bears. The 9 Second-Round Candidates for digital signatures are CRYSTALS-DILITHIUM, FALCON, GeMSS, LUOV, MQDSS, Picnic, qTESLA, Rainbow, and SPHINCS+.

From dice to modern complex circuits, there have been many attempts to build increasingly better devices to generate random numbers. Today, randomness is fundamental to security and cryptographic systems, as well as safeguarding privacy. A key challenge with random number generators is that it is hard to ensure that their outputs are unpredictable. For a random number generator based on a physical process, such as a noisy classical system or an elementary quantum measurement, a detailed model describing the underlying physics is required to assert unpredictability. Such a model must make a number of assumptions that may not be valid, thereby compromising the integrity of the device. However, it is possible to exploit the phenomenon of quantum nonlocality with a loophole-free Bell test to build a random number generator that can produce output that is unpredictable to any adversary limited only by general physical principles. With recent technological developments, it is now possible to carry out such a loophole-free Bell test. Here we present certified randomness obtained from a photonic Bell experiment and extract 1024 random bits uniform to within 10^-12. These random bits could not have been predicted within any physical theory that prohibits superluminal signaling and allows one to make independent measurement choices. To certify and quantify the randomness, we describe a new protocol that is optimized for apparatuses characterized by a low per-trial violation of Bell inequalities. We thus enlisted an experimental result that fundamentally challenges the notion of determinism to build a system that can increase trust in random sources. In the future, random number generators based on loophole-free Bell tests may play a role in increasing the security and trust of our cryptographic systems and infrastructure.
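The final step of protocols like this one, turning weakly random certified bits into nearly uniform output, is typically done with a seeded extractor. The sketch below shows generic two-universal Toeplitz hashing; it is only an illustration of that extraction step, not the paper's certified protocol, and the biased-coin "raw bits" are a stand-in for real Bell-trial data.

```python
import numpy as np

rng = np.random.default_rng(2)

def toeplitz_extract(raw, seed, m):
    """Extract m nearly uniform bits from n raw bits via Toeplitz hashing.

    seed must hold m + n - 1 uniform bits; T[i, j] = seed[i - j + n - 1]
    makes T an m x n Toeplitz (diagonal-constant) 0/1 matrix, a standard
    two-universal hash family.
    """
    n = len(raw)
    assert len(seed) == m + n - 1
    T = np.empty((m, n), dtype=np.uint8)
    for i in range(m):
        for j in range(n):
            T[i, j] = seed[i - j + n - 1]
    return (T @ raw) % 2

# Raw bits: biased coin flips standing in for weakly random Bell-trial data.
n, m = 4096, 256
raw = (rng.random(n) < 0.6).astype(np.uint8)            # biased source
seed = rng.integers(0, 2, m + n - 1, dtype=np.uint8)     # uniform seed
out = toeplitz_extract(raw, seed, m)
print(out.mean())        # close to 1/2 after hashing
```

Each output bit is the XOR of roughly half the raw bits, which is why even a strongly biased source hashes to nearly unbiased output.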

%B Nature %V 556 %P 223-226 %8 2018/04/11 %G eng %U https://arxiv.org/abs/1803.06219 %R https://doi.org/10.1038/s41586-018-0019-0 %0 Journal Article %J IEEE Transactions on Information Theory %D 2018 %T Phase Retrieval Without Small-Ball Probability Assumptions %A Felix Krahmer %A Yi-Kai Liu %X In the context of the phase retrieval problem, it is known that certain natural classes of measurements, such as Fourier measurements and random Bernoulli measurements, do not lead to the unique reconstruction of all possible signals, even in combination with certain practically feasible random masks. To avoid this difficulty, the analysis is often restricted to measurement ensembles (or masks) that satisfy a small-ball probability condition, in order to ensure that the reconstruction is unique. This paper shows a complementary result: for random Bernoulli measurements, there is still a large class of signals that can be reconstructed uniquely, namely, those signals that are non-peaky. In fact, this result is much more general: it holds for random measurements sampled from any subgaussian distribution, without any small-ball conditions. This is demonstrated in two ways: 1) a proof of stability and uniqueness and 2) a uniform recovery guarantee for the PhaseLift algorithm. In all of these cases, the number of measurements m approaches the information-theoretic lower bound. Finally, for random Bernoulli measurements with erasures, it is shown that PhaseLift achieves uniform recovery of all signals (including peaky ones).
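The Bernoulli-measurement setting above can be exercised numerically: a non-peaky signal is recovered from phaseless +/-1 measurements by spectral initialization followed by alternating minimization. This is a standard reconstruction heuristic, not the PhaseLift algorithm the paper analyzes, and the problem sizes are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 30, 300

# Non-peaky signal: a Gaussian vector spreads its mass over all coordinates.
x = rng.normal(size=n)

# Random Bernoulli (+/-1) measurement vectors, as in the paper's setting.
A = rng.choice([-1.0, 1.0], size=(m, n))
y = (A @ x) ** 2                          # phaseless measurements

# Spectral initialization: top eigenvector of (1/m) sum_i y_i a_i a_i^T,
# rescaled using E[y_i] = ||x||^2 for Bernoulli measurements.
M = (A * y[:, None]).T @ A / m
_, vecs = np.linalg.eigh(M)
z = vecs[:, -1] * np.sqrt(np.mean(y))

# Alternating minimization: fix the signs of A z, then solve least squares.
for _ in range(50):
    s = np.sign(A @ z)
    z, *_ = np.linalg.lstsq(A, s * np.sqrt(y), rcond=None)

err = min(np.linalg.norm(z - x), np.linalg.norm(z + x)) / np.linalg.norm(x)
print(err)    # relative error up to the unavoidable global sign
```

A peaky signal such as a standard basis vector would genuinely be ambiguous under these measurements, which is exactly the distinction the paper draws.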

%B IEEE Transactions on Information Theory %V 64 %P 485-500 %8 2018/01/01 %G eng %U http://ieeexplore.ieee.org/document/8052535/ %N 1 %R 10.1109/TIT.2017.2757520 %0 Journal Article %J In: Shacham H., Boldyreva A. (eds) Advances in Cryptology – CRYPTO 2018. CRYPTO 2018. Lecture Notes in Computer Science. %D 2018 %T Pseudorandom States, Non-Cloning Theorems and Quantum Money %A Zhengfeng Ji %A Yi-Kai Liu %A Fang Song %XWe propose the concept of pseudorandom states and study their constructions, properties, and applications. Under the assumption that quantum-secure one-way functions exist, we present concrete and efficient constructions of pseudorandom states. The non-cloning theorem plays a central role in our study—it motivates the proper definition and characterizes one of the important properties of pseudorandom quantum states. Namely, there is no efficient quantum algorithm that can create more copies of the state from a given number of pseudorandom states. As the main application, we prove that any family of pseudorandom states naturally gives rise to a private-key quantum money scheme.
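A pseudorandom state can be sketched numerically as a phase state whose phases come from a keyed function. The paper's construction uses complex PRF-generated phases; the binary-phase variant below, with a keyed hash standing in for a quantum-secure PRF, is a simplified illustration, and all names are assumptions.

```python
import hashlib
import numpy as np

def phase_state(key: bytes, n_qubits: int) -> np.ndarray:
    """Binary-phase state |psi_k> = 2^{-n/2} sum_x (-1)^{f_k(x)} |x>.

    f_k is modeled by a keyed hash -- a stand-in for a quantum-secure PRF,
    and a simplification of the paper's complex-phase construction.
    """
    dim = 2 ** n_qubits
    amps = np.empty(dim)
    for x in range(dim):
        h = hashlib.blake2b(x.to_bytes(8, "big"), key=key, digest_size=1).digest()
        amps[x] = -1.0 if h[0] & 1 else 1.0
    return amps / np.sqrt(dim)

psi = phase_state(b"secret-key", 8)       # 8-qubit state, 256 amplitudes
phi = phase_state(b"other-key", 8)
print(abs(psi @ psi))                      # 1.0: properly normalized
print(abs(psi @ phi))                      # small: near-orthogonal, Haar-like
```

States generated under different keys are nearly orthogonal, which is the statistical behavior one expects from Haar-random states and the intuition behind using them as quantum money.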

%B In: Shacham H., Boldyreva A. (eds) Advances in Cryptology – CRYPTO 2018. CRYPTO 2018. Lecture Notes in Computer Science. %V 10993 %8 2017/11/01 %G eng %U https://arxiv.org/abs/1711.00385 %R https://doi.org/10.1007/978-3-319-96878-0_5 %0 Journal Article %J IEEE Security & Privacy %D 2018 %T Quantum Cryptanalysis: Shor, Grover, and Beyond %A Stephen P. Jordan %A Yi-Kai Liu %B IEEE Security & Privacy %V 16 %P 14-21 %8 2018/09 %G eng %N 5 %R 10.1109/MSP.2018.3761719 %0 Journal Article %J Phys. Rev. Lett. %D 2018 %T Recovering quantum gates from few average gate fidelities %A Ingo Roth %A Richard Kueng %A Shelby Kimmel %A Yi-Kai Liu %A David Gross %A Jens Eisert %A Martin Kliesch %X Characterising quantum processes is a key task in, and constitutes a challenge for, the development of quantum technologies, especially at the noisy intermediate scale of today's devices. One method for characterising processes is randomised benchmarking, which is robust against state preparation and measurement (SPAM) errors, and can be used to benchmark Clifford gates. A complementing approach asks for full tomographic knowledge. Compressed sensing techniques achieve full tomography of quantum channels essentially at optimal resource efficiency. So far, guarantees for compressed sensing protocols rely on unstructured random measurements and cannot be applied to the data acquired from randomised benchmarking experiments. It has been an open question whether or not the favourable features of both worlds can be combined. In this work, we give a positive answer to this question. For the important case of characterising multi-qubit unitary gates, we provide a rigorously guaranteed and practical reconstruction method that works with an essentially optimal number of average gate fidelities measured with respect to random Clifford unitaries.
Moreover, for general unital quantum channels we provide an explicit expansion into a unitary 2-design, allowing for a practical and guaranteed reconstruction also in that case. As a side result, we obtain a new statistical interpretation of the unitarity -- a figure of merit that characterises the coherence of a process. In our proofs we exploit recent representation theoretic insights on the Clifford group, develop a version of Collins' calculus with Weingarten functions for integration over the Clifford group, and combine this with proof techniques from compressed sensing.

%B Phys. Rev. Lett. %V 121 %P 170502 %8 2018/03/01 %G eng %U https://arxiv.org/abs/1803.00572 %R https://doi.org/10.1103/PhysRevLett.121.170502 %0 Journal Article %D 2017 %T Experimentally Generated Random Numbers Certified by the Impossibility of Superluminal Signaling %A Peter Bierhorst %A Emanuel Knill %A Scott Glancy %A Alan Mink %A Stephen P. Jordan %A Andrea Rommal %A Yi-Kai Liu %A Bradley Christensen %A Sae Woo Nam %A Lynden K. Shalm %XRandom numbers are an important resource for applications such as numerical simulation and secure communication. However, it is difficult to certify whether a physical random number generator is truly unpredictable. Here, we exploit the phenomenon of quantum nonlocality in a loophole-free photonic Bell test experiment for the generation of randomness that cannot be predicted within any physical theory that allows one to make independent measurement choices and prohibits superluminal signaling. To certify and quantify the randomness, we describe a new protocol that performs well in an experimental regime characterized by low violation of Bell inequalities. Applying an extractor function to our data, we obtained 256 new random bits, uniform to within 0.001.

%8 2017/02/16 %G eng %U https://arxiv.org/abs/1702.05178 %0 Journal Article %D 2017 %T Exponential improvements for quantum-accessible reinforcement learning %A Vedran Dunjko %A Yi-Kai Liu %A Xingyao Wu %A J. M. Taylor %X Quantum computers can offer dramatic improvements over classical devices for data analysis tasks such as prediction and classification. However, less is known about the advantages that quantum computers may bring in the setting of reinforcement learning, where learning is achieved via interaction with a task environment. Here, we consider a special case of reinforcement learning, where the task environment allows quantum access. In addition, we impose certain "naturalness" conditions on the task environment, which rule out the kinds of oracle problems that are studied in quantum query complexity (and for which quantum speedups are well-known). Within this framework of quantum-accessible reinforcement learning environments, we demonstrate that quantum agents can achieve exponential improvements in learning efficiency, surpassing previous results that showed only quadratic improvements. A key step in the proof is to construct task environments that encode well-known oracle problems, such as Simon's problem and Recursive Fourier Sampling, while satisfying the above "naturalness" conditions for reinforcement learning. Our results suggest that quantum agents may perform well in certain game-playing scenarios, where the game has recursive structure, and the agent can learn by playing against itself.

%G eng %U https://arxiv.org/abs/1710.11160 %0 Conference Paper %B SampTA 2017 %D 2017 %T Phase retrieval using unitary 2-designs %A Shelby Kimmel %A Yi-Kai Liu %X We consider a variant of the phase retrieval problem, where vectors are replaced by unitary matrices, i.e., the unknown signal is a unitary matrix U, and the measurements consist of squared inner products |tr(C†U)|^2 with unitary matrices C that are chosen by the observer. This problem has applications to quantum process tomography, when the unknown process is a unitary operation. We show that PhaseLift, a convex programming algorithm for phase retrieval, can be adapted to this matrix setting, using measurements that are sampled from unitary 4- and 2-designs. In the case of unitary 4-design measurements, we show that PhaseLift can reconstruct all unitary matrices, using a near-optimal number of measurements. This extends previous work on PhaseLift using spherical 4-designs. In the case of unitary 2-design measurements, we show that PhaseLift still works pretty well on average: it recovers almost all signals, up to a constant additive error, using a near-optimal number of measurements. These 2-design measurements are convenient for quantum process tomography, as they can be implemented via randomized benchmarking techniques. This is the first positive result on PhaseLift using 2-designs.

%B SampTA 2017 %8 2017/07 %G eng %U http://ieeexplore.ieee.org/document/8024414/ %R 10.1109/SAMPTA.2017.8024414 %0 Journal Article %D 2017 %T Super-polynomial and exponential improvements for quantum-enhanced reinforcement learning %A Vedran Dunjko %A Yi-Kai Liu %A Xingyao Wu %A J. M. Taylor %X Recent work on quantum machine learning has demonstrated that quantum computers can offer dramatic improvements over classical devices for data mining, prediction and classification. However, less is known about the advantages that quantum computers may bring in the more general setting of reinforcement learning, where learning is achieved via interaction with a task environment that provides occasional rewards. Reinforcement learning can incorporate data-analysis-oriented learning settings as special cases, but also includes more complex situations where, e.g., reinforcing feedback is delayed. In a few recent works, Grover-type amplification has been utilized to construct quantum agents that achieve up-to-quadratic improvements in learning efficiency. These encouraging results have left open the key question of whether super-polynomial improvements in learning times are possible for genuine reinforcement learning problems, that is, problems that go beyond the other, more restricted learning paradigms. In this work, we provide a family of such genuine reinforcement learning tasks. We construct quantum-enhanced learners which learn super-polynomially, and even exponentially faster than any classical reinforcement learning model, and we discuss the potential impact our results may have on future technologies.

%8 2017/12/12 %G eng %U https://arxiv.org/abs/1710.11160 %0 Journal Article %D 2017 %T Thermodynamic Analysis of Classical and Quantum Search Algorithms %A Ray Perlner %A Yi-Kai Liu %X We analyze the performance of classical and quantum search algorithms from a thermodynamic perspective, focusing on resources such as time, energy, and memory size. We consider two examples that are relevant to post-quantum cryptography: Grover’s search algorithm, and the quantum algorithm for collision-finding. Using Bennett’s “Brownian” model of low-power reversible computation, we show classical algorithms that have the same asymptotic energy consumption as these quantum algorithms. Thus, the quantum advantage in query complexity does not imply a reduction in these thermodynamic resource costs. In addition, we present realistic estimates of the resource costs of quantum and classical search, for near-future computing technologies. We find that, if memory is cheap, classical exhaustive search can be surprisingly competitive with Grover’s algorithm.
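The query-complexity advantage that this thermodynamic analysis puts in context can be seen in a small statevector simulation of Grover's algorithm: roughly (pi/4)*sqrt(N) oracle queries suffice, versus ~N/2 classical queries on average. This is a textbook sketch of Grover search itself, not of the paper's energy-cost model (its point is precisely that fewer queries need not mean less energy).

```python
import numpy as np

# Statevector simulation of Grover search over N = 2^10 items, one marked.
N = 1024
marked = 123

state = np.full(N, 1 / np.sqrt(N))            # uniform superposition
iters = int(round(np.pi / 4 * np.sqrt(N)))    # ~ (pi/4) sqrt(N) queries
for _ in range(iters):
    state[marked] *= -1                        # oracle: flip marked amplitude
    state = 2 * state.mean() - state           # diffusion: invert about mean

p_success = state[marked] ** 2
print(iters, p_success)    # 25 queries, success probability near 1
```

A classical exhaustive search over the same 1024 items needs ~512 queries on average, which is the quadratic gap; the paper asks what that gap costs in joules rather than in queries.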

%8 2017/09/29 %G eng %U https://arxiv.org/abs/1709.10510 %0 Journal Article %J Physical Review A %D 2016 %T Optimized tomography of continuous variable systems using excitation counting %A Shen, Chao %A Heeres, Reinier W. %A Reinhold, Philip %A Jiang, Luyao %A Yi-Kai Liu %A Schoelkopf, Robert J. %A Jiang, Liang %XWe propose a systematic procedure to optimize quantum state tomography protocols for continuous variable systems based on excitation counting preceded by a displacement operation. Compared with conventional tomography based on Husimi or Wigner function measurement, the excitation counting approach can significantly reduce the number of measurement settings. We investigate both informational completeness and robustness, and provide a bound of reconstruction error involving the condition number of the sensing map. We also identify the measurement settings that optimize this error bound, and demonstrate that the improved reconstruction robustness can lead to an order-of-magnitude reduction of estimation error with given resources. This optimization procedure is general and can incorporate prior information of the unknown state to further simplify the protocol.

%B Physical Review A %V 94 %P 052327 %8 2016/11/21 %G eng %U http://link.aps.org/doi/10.1103/PhysRevA.94.052327 %R 10.1103/PhysRevA.94.052327 %0 Journal Article %J SampTA %D 2015 %T Phase Retrieval Without Small-Ball Probability Assumptions: Stability and Uniqueness %A Felix Krahmer %A Yi-Kai Liu %X We study stability and uniqueness for the phase retrieval problem. That is, we ask when is a signal x ∈ R^n stably and uniquely determined (up to small perturbations), when one performs phaseless measurements of the form y_i = |a_i^T x|^2 (for i = 1, ..., N), where the vectors a_i ∈ R^n are chosen independently at random, with each coordinate a_{ij} ∈ R being chosen independently from a fixed sub-Gaussian distribution D. It is well known that for many common choices of D, certain ambiguities can arise that prevent x from being uniquely determined. In this note we show that for any sub-Gaussian distribution D, with no additional assumptions, most vectors x cannot lead to such ambiguities. More precisely, we show stability and uniqueness for all sets of vectors T ⊂ R^n which are not too peaky, in the sense that at most a constant fraction of their mass is concentrated on any one coordinate. The number of measurements needed to recover x ∈ T depends on the complexity of T in a natural way, extending previous results of Eldar and Mendelson [12]. %B SampTA %P 411-414 %8 2015/05/25 %G eng %U http://ieeexplore.ieee.org/stamp/stamp.jsp?tp=&arnumber=7148923&tag=1 %0 Journal Article %D 2015 %T Quantum Compressed Sensing Using 2-Designs %A Shelby Kimmel %A Yi-Kai Liu %X We develop a method for quantum process tomography that combines the efficiency of compressed sensing with the robustness of randomized benchmarking. Our method is robust to state preparation and measurement errors, and it achieves a quadratic speedup over conventional tomography when the unknown process is a generic unitary evolution.
Our method is based on PhaseLift, a convex programming technique for phase retrieval. We show that this method achieves approximate recovery of almost all signals, using measurements sampled from spherical or unitary 2-designs. This is the first positive result on PhaseLift using 2-designs. We also show that exact recovery of all signals is possible using unitary 4-designs. Previous positive results for PhaseLift required spherical 4-designs, while PhaseLift was known to fail in certain cases when using spherical 2-designs. %8 2015/10/29 %G eng %U http://arxiv.org/abs/1510.08887 %0 Journal Article %J Eurocrypt %D 2014 %T Privacy Amplification in the Isolated Qubits Model %A Yi-Kai Liu %X Isolated qubits are a special class of quantum devices, which can be used to implement tamper-resistant cryptographic hardware such as one-time memories (OTM's). Unfortunately, these OTM constructions leak some information, and standard methods for privacy amplification cannot be applied here, because the adversary has advance knowledge of the hash function that the honest parties will use. In this paper we show a stronger form of privacy amplification that solves this problem, using a fixed hash function that is secure against all possible adversaries in the isolated qubits model. This allows us to construct single-bit OTM's which only leak an exponentially small amount of information. We then study a natural generalization of the isolated qubits model, where the adversary is allowed to perform a polynomially-bounded number of entangling gates, in addition to unbounded local operations and classical communication (LOCC). We show that our technique for privacy amplification is also secure in this setting. %B Eurocrypt %P 785-814 %8 2014/10/15 %G eng %U http://arxiv.org/abs/1410.3918v2 %! 
EUROCRYPT 2015 %R 10.1007/978-3-662-46803-6_26 %0 Journal Article %J CRYPTO %D 2014 %T Single-shot security for one-time memories in the isolated qubits model %A Yi-Kai Liu %X One-time memories (OTM's) are simple, tamper-resistant cryptographic devices, which can be used to implement sophisticated functionalities such as one-time programs. Can one construct OTM's whose security follows from some physical principle? This is not possible in a fully-classical world, or in a fully-quantum world, but there is evidence that OTM's can be built using "isolated qubits" -- qubits that cannot be entangled, but can be accessed using adaptive sequences of single-qubit measurements. Here we present new constructions for OTM's using isolated qubits, which improve on previous work in several respects: they achieve a stronger "single-shot" security guarantee, which is stated in terms of the (smoothed) min-entropy; they are proven secure against adversaries who can perform arbitrary local operations and classical communication (LOCC); and they are efficiently implementable. These results use Wiesner's idea of conjugate coding, combined with error-correcting codes that approach the capacity of the q-ary symmetric channel, and a high-order entropic uncertainty relation, which was originally developed for cryptography in the bounded quantum storage model. %B CRYPTO %V Part II %P 19-36 %8 2014/02/01 %G eng %U http://arxiv.org/abs/1402.0049v2 %! CRYPTO 2014 %R 10.1007/978-3-662-44381-1_2 %0 Journal Article %J Innovations in Theoretical Computer Science (ITCS) %D 2013 %T Building one-time memories from isolated qubits %A Yi-Kai Liu %X One-time memories (OTM's) are simple tamper-resistant cryptographic devices, which can be used to implement one-time programs, a very general form of software protection and program obfuscation. Here we investigate the possibility of building OTM's using quantum mechanical devices. 
It is known that OTM's cannot exist in a fully-quantum world or in a fully-classical world. Instead, we propose a new model based on "isolated qubits" -- qubits that can only be accessed using local operations and classical communication (LOCC). This model combines a quantum resource (single-qubit measurements) with a classical restriction (on communication between qubits), and can be implemented using current technologies, such as nitrogen vacancy centers in diamond. In this model, we construct OTM's that are information-theoretically secure against one-pass LOCC adversaries that use 2-outcome measurements. Our construction resembles Wiesner's old idea of quantum conjugate coding, implemented using random error-correcting codes; our proof of security uses entropy chaining to bound the supremum of a suitable empirical process. In addition, we conjecture that our random codes can be replaced by some class of efficiently-decodable codes, to get computationally-efficient OTM's that are secure against computationally-bounded LOCC adversaries. In addition, we construct data-hiding states, which allow an LOCC sender to encode an (n-O(1))-bit message into n qubits, such that at most half of the message can be extracted by a one-pass LOCC receiver, but the whole message can be extracted by a general quantum receiver. %B Innovations in Theoretical Computer Science (ITCS) %P 269-286 %8 2013/04/18 %G eng %U http://arxiv.org/abs/1304.5007v2 %! Proceedings of the 5th conference on Innovations in Theoretical Computer Science (ITCS 2014) %R 10.1145/2554797.2554823 %0 Journal Article %J MultiLing (Workshop on Multilingual Multi-document Summarization) %D 2013 %T Multilingual Summarization: Dimensionality Reduction and a Step Towards Optimal Term Coverage %A John M. Conroy %A Sashka T. Davis %A Jeff Kubina %A Yi-Kai Liu %A Dianne P. O'Leary %A Judith D.
Schlesinger %X In this paper we present three term weighting approaches for multi-lingual document summarization and give results on the DUC 2002 data as well as on the 2013 Multilingual Wikipedia feature articles data set. We introduce a new interval-bounded nonnegative matrix factorization. We use this new method, latent semantic analysis (LSA), and latent Dirichlet allocation (LDA) to give three term-weighting methods for multi-document multi-lingual summarization. Results on DUC and TAC data, as well as on the MultiLing 2013 data, demonstrate that these methods are very promising, since they achieve oracle coverage scores in the range of humans for 6 of the 10 test languages. Finally, we present three term weighting approaches for the MultiLing13 single document summarization task on the Wikipedia featured articles. Our submissions significantly outperformed the baseline in 19 out of 41 languages. %B MultiLing (Workshop on Multilingual Multi-document Summarization) %P 55-63 %8 2013/08/09 %G eng %U http://aclweb.org/anthology/W/W13/W13-3108.pdf %0 Journal Article %J Physical Review A %D 2013 %T Testing quantum expanders is co-QMA-complete %A Adam D. Bookatz %A Stephen P. Jordan %A Yi-Kai Liu %A Pawel Wocjan %X A quantum expander is a unital quantum channel that is rapidly mixing, has only a few Kraus operators, and can be implemented efficiently on a quantum computer. We consider the problem of estimating the mixing time (i.e., the spectral gap) of a quantum expander. We show that this problem is co-QMA-complete. This has applications to testing randomized constructions of quantum expanders, and studying thermalization of open quantum systems. %B Physical Review A %V 87 %8 2013/4/15 %G eng %U http://arxiv.org/abs/1210.0787v2 %N 4 %! Phys. Rev. A %R 10.1103/PhysRevA.87.042317 %0 Journal Article %J New Journal of Physics %D 2012 %T Quantum Tomography via Compressed Sensing: Error Bounds, Sample Complexity, and Efficient Estimators %A Steven T.
Flammia %A David Gross %A Yi-Kai Liu %A Jens Eisert %X Intuitively, if a density operator has small rank, then it should be easier to estimate from experimental data, since in this case only a few eigenvectors need to be learned. We prove two complementary results that confirm this intuition. First, we show that a low-rank density matrix can be estimated using fewer copies of the state, i.e., the sample complexity of tomography decreases with the rank. Second, we show that unknown low-rank states can be reconstructed from an incomplete set of measurements, using techniques from compressed sensing and matrix completion. These techniques use simple Pauli measurements, and their output can be certified without making any assumptions about the unknown state. We give a new theoretical analysis of compressed tomography, based on the restricted isometry property (RIP) for low-rank matrices. Using these tools, we obtain near-optimal error bounds, for the realistic situation where the data contains noise due to finite statistics, and the density matrix is full-rank with decaying eigenvalues. We also obtain upper bounds on the sample complexity of compressed tomography, and almost-matching lower bounds on the sample complexity of any procedure using adaptive sequences of Pauli measurements. Using numerical simulations, we compare the performance of two compressed sensing estimators with standard maximum-likelihood estimation (MLE). We find that, given comparable experimental resources, the compressed sensing estimators consistently produce higher-fidelity state reconstructions than MLE. In addition, the use of an incomplete set of measurements leads to faster classical processing with no loss of accuracy. Finally, we show how to certify the accuracy of a low-rank estimate using direct fidelity estimation, and we describe a method for compressed quantum process tomography that works for processes with small Kraus rank. 
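The estimator described above can be sketched in a few lines. The following is a minimal numpy illustration (not the authors' code) of a matrix-Lasso-style reconstruction: a least-squares fit to Pauli expectation values, with singular-value soft-thresholding as the proximal step for a nuclear-norm penalty. The 2-qubit test state, the choice of 12 of the 16 Pauli settings, the threshold tau, and the iteration count are all illustrative assumptions.

```python
import numpy as np

# single-qubit Pauli matrices I, X, Y, Z
PAULIS = [np.eye(2, dtype=complex),
          np.array([[0, 1], [1, 0]], dtype=complex),
          np.array([[0, -1j], [1j, 0]], dtype=complex),
          np.array([[1, 0], [0, -1]], dtype=complex)]

def pauli(idx):
    """Tensor product of single-qubit Paulis, e.g. (1, 3) -> X tensor Z."""
    P = np.array([[1.0 + 0j]])
    for i in idx:
        P = np.kron(P, PAULIS[i])
    return P

def matrix_lasso(obs, vals, d, tau=0.02, iters=300):
    """Proximal-gradient sketch: least-squares fit to Pauli expectation
    values, with singular-value soft-thresholding (the proximal step for
    a nuclear-norm penalty) promoting a low-rank estimate."""
    rho = np.zeros((d, d), dtype=complex)
    for _ in range(iters):
        grad = sum(2 * (np.trace(P @ rho).real - v) * P
                   for P, v in zip(obs, vals))
        rho = rho - grad / (2 * d)                  # gradient step
        U, s, Vh = np.linalg.svd(rho)               # soft-threshold step
        rho = U @ np.diag(np.maximum(s - tau, 0.0)) @ Vh
    return rho

# toy 2-qubit example: a pure (rank-1) state, measured with 12 of the
# 16 Pauli settings rather than all of them
rng = np.random.default_rng(1)
n, d = 2, 4
psi = rng.standard_normal(d) + 1j * rng.standard_normal(d)
psi /= np.linalg.norm(psi)
rho_true = np.outer(psi, psi.conj())

settings = {(0, 0)}                                 # always measure identity
while len(settings) < 12:
    settings.add(tuple(rng.integers(0, 4, size=n)))
obs = [pauli(idx) for idx in sorted(settings)]
vals = [np.trace(P @ rho_true).real for P in obs]   # noiseless data
rho_hat = matrix_lasso(obs, vals, d)
```

With noiseless data on a rank-1 state, `rho_hat` fits the measured expectation values and has nearly unit trace; with real (noisy) data, `tau` would be tuned to the noise level.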
%B New Journal of Physics %V 14 %P 095022 %8 2012/09/27 %G eng %U http://arxiv.org/abs/1205.2300v2 %N 9 %! New J. Phys. %R 10.1088/1367-2630/14/9/095022 %0 Journal Article %J Algorithmica %D 2012 %T A Spectral Algorithm for Latent Dirichlet Allocation %A Animashree Anandkumar %A Dean P. Foster %A Daniel Hsu %A Sham M. Kakade %A Yi-Kai Liu %X The problem of topic modeling can be seen as a generalization of the clustering problem, in that it posits that observations are generated due to multiple latent factors (e.g., the words in each document are generated as a mixture of several active topics, as opposed to just one). This increased representational power comes at the cost of a more challenging unsupervised learning problem of estimating the topic probability vectors (the distributions over words for each topic), when only the words are observed and the corresponding topics are hidden. We provide a simple and efficient learning procedure that is guaranteed to recover the parameters for a wide class of mixture models, including the popular latent Dirichlet allocation (LDA) model. For LDA, the procedure correctly recovers both the topic probability vectors and the prior over the topics, using only trigram statistics (i.e., third order moments, which may be estimated with documents containing just three words). The method, termed Excess Correlation Analysis (ECA), is based on a spectral decomposition of low order moments (third and fourth order) via two singular value decompositions (SVDs). Moreover, the algorithm is scalable since the SVD operations are carried out on $k\times k$ matrices, where $k$ is the number of latent factors (e.g. the number of topics), rather than in the $d$-dimensional observed space (typically $d \gg k$). 
%B Algorithmica %P 193-214 %8 2012/04/30 %G eng %U http://arxiv.org/abs/1204.6703v4 %0 Journal Article %D 2011 %T Continuous-variable quantum compressed sensing %A Matthias Ohliger %A Vincent Nesme %A David Gross %A Yi-Kai Liu %A Jens Eisert %X We significantly extend recently developed methods to faithfully reconstruct unknown quantum states that are approximately low-rank, using only a few measurement settings. Our new method is general enough to allow for measurements from a continuous family, and is also applicable to continuous-variable states. As a technical result, this work generalizes quantum compressed sensing to the situation where the measured observables are taken from a so-called tight frame (rather than an orthonormal basis) --- hence covering most realistic measurement scenarios. As an application, we discuss the reconstruction of quantum states of light from homodyne detection and other types of measurements, and we present simulations that show the advantage of the proposed compressed sensing technique over present methods. Finally, we introduce a method to construct a certificate which guarantees the success of the reconstruction with no assumption on the state, and we show how slightly more measurements give rise to "universal" state reconstruction that is highly robust to noise. %8 2011/11/03 %G eng %U http://arxiv.org/abs/1111.0853v3 %0 Journal Article %J Physical Review Letters %D 2011 %T Direct Fidelity Estimation from Few Pauli Measurements %A Steven T. Flammia %A Yi-Kai Liu %X We describe a simple method for certifying that an experimental device prepares a desired quantum state rho. Our method is applicable to any pure state rho, and it provides an estimate of the fidelity between rho and the actual (arbitrary) state in the lab, up to a constant additive error. The method requires measuring only a constant number of Pauli expectation values, selected at random according to an importance-weighting rule. 
Our method is faster than full tomography by a factor of d, the dimension of the state space, and extends easily and naturally to quantum channels. %B Physical Review Letters %V 106 %8 2011/6/8 %G eng %U http://arxiv.org/abs/1104.4695v3 %N 23 %! Phys. Rev. Lett. %R 10.1103/PhysRevLett.106.230501 %0 Journal Article %J Advances in Neural Information Processing Systems (NIPS) %D 2011 %T Universal low-rank matrix recovery from Pauli measurements %A Yi-Kai Liu %X We study the problem of reconstructing an unknown matrix M of rank r and dimension d using O(rd poly log d) Pauli measurements. This has applications in quantum state tomography, and is a non-commutative analogue of a well-known problem in compressed sensing: recovering a sparse vector from a few of its Fourier coefficients. We show that almost all sets of O(rd log^6 d) Pauli measurements satisfy the rank-r restricted isometry property (RIP). This implies that M can be recovered from a fixed ("universal") set of Pauli measurements, using nuclear-norm minimization (e.g., the matrix Lasso), with nearly-optimal bounds on the error. A similar result holds for any class of measurements that use an orthonormal operator basis whose elements have small operator norm. Our proof uses Dudley's inequality for Gaussian processes, together with bounds on covering numbers obtained via entropy duality. %B Advances in Neural Information Processing Systems (NIPS) %P 1638-1646 %8 2011/03/14 %G eng %U http://arxiv.org/abs/1103.2816v2 %! Advances in Neural Information Processing Systems (NIPS) 24 %0 Journal Article %D 2010 %T Efficient Direct Tomography for Matrix Product States %A Olivier Landon-Cardinal %A Yi-Kai Liu %A David Poulin %X In this note, we describe a method for reconstructing matrix product states from a small number of efficiently-implementable measurements. Our method is exponentially faster than standard tomography, and it can also be used to certify that the unknown state is an MPS. 
The basic idea is to use local unitary operations to measure in the Schmidt basis, giving direct access to the MPS representation. This compares favorably with recently and independently proposed methods that recover the MPS tensors by performing a variational minimization, which is computationally intractable in certain cases. Our method also has the advantage of recovering any MPS, while other approaches were limited to special classes of states that exclude important examples such as GHZ and W states. %8 2010/02/24 %G eng %U http://arxiv.org/abs/1002.4632v1 %0 Journal Article %J Nature Communications %D 2010 %T Efficient quantum state tomography %A Marcus Cramer %A Martin B. Plenio %A Steven T. Flammia %A David Gross %A Stephen D. Bartlett %A Rolando Somma %A Olivier Landon-Cardinal %A Yi-Kai Liu %A David Poulin %X Quantum state tomography, the ability to deduce the state of a quantum system from measured data, is the gold standard for verification and benchmarking of quantum devices. It has been realized in systems with few components, but for larger systems it becomes infeasible because the number of quantum measurements and the amount of computation required to process them grows exponentially in the system size. Here we show that we can do exponentially better than direct state tomography for a wide range of quantum states, in particular those that are well approximated by a matrix product state ansatz. We present two schemes for tomography in 1-D quantum systems and touch on generalizations. One scheme requires unitary operations on a constant number of subsystems, while the other requires only local measurements together with more elaborate post-processing. Both schemes rely only on a linear number of experimental operations and classical postprocessing that is polynomial in the system size. A further strength of the methods is that the accuracy of the reconstructed states can be rigorously certified without any a priori assumptions. 
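As a concrete illustration of the matrix product state ansatz that these tomography schemes target (not code from the paper): the n-qubit GHZ state is an MPS of bond dimension 2, so it is described by O(n) parameters rather than 2^n amplitudes, and any amplitude can be read off by contracting the chain of tensors.

```python
import itertools
import numpy as np

def mps_amplitude(tensors, left, right, bits):
    """Amplitude <bits|psi> = left . A_1[s_1] ... A_n[s_n] . right."""
    v = left
    for A, s in zip(tensors, bits):
        v = v @ A[s]
    return float(v @ right)

n = 8
# GHZ as a bond-dimension-2 MPS: A[s] projects the bond space onto |s><s|
A = np.zeros((2, 2, 2))
A[0] = np.diag([1.0, 0.0])
A[1] = np.diag([0.0, 1.0])
tensors = [A] * n                              # 8*n numbers vs 2**n amplitudes
boundary = np.array([1.0, 1.0]) / 2 ** 0.25    # splits the 1/sqrt(2) factor

amps = {bits: mps_amplitude(tensors, boundary, boundary, bits)
        for bits in itertools.product((0, 1), repeat=n)}
```

Contracting the chain gives amplitude 1/sqrt(2) on the all-zeros and all-ones strings and zero elsewhere, exactly the GHZ state; a W state admits a similarly small MPS description.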
%B Nature Communications %V 1 %P 149 %8 2010/12/21 %G eng %U http://arxiv.org/abs/1101.4366v1 %N 9 %! Nat Comms %R 10.1038/ncomms1147 %0 Journal Article %J Proc. RANDOM %D 2010 %T Quantum property testing for bounded-degree graphs %A Andris Ambainis %A Andrew M. Childs %A Yi-Kai Liu %X We study quantum algorithms for testing bipartiteness and expansion of bounded-degree graphs. We give quantum algorithms that solve these problems in time O(N^(1/3)), beating the Omega(sqrt(N)) classical lower bound. For testing expansion, we also prove an Omega(N^(1/4)) quantum query lower bound, thus ruling out the possibility of an exponential quantum speedup. Our quantum algorithms follow from a combination of classical property testing techniques due to Goldreich and Ron, derandomization, and the quantum algorithm for element distinctness. The quantum lower bound is obtained by the polynomial method, using novel algebraic techniques and combinatorial analysis to accommodate the graph structure. %B Proc. RANDOM %P 365-376 %8 2010/12/14 %G eng %U http://arxiv.org/abs/1012.3174v3 %! Proceedings of RANDOM 2011 %R 10.1007/978-3-642-22935-0_31 %0 Journal Article %J Physical Review Letters %D 2010 %T Quantum state tomography via compressed sensing %A David Gross %A Yi-Kai Liu %A Steven T. Flammia %A Stephen Becker %A Jens Eisert %X We establish methods for quantum state tomography based on compressed sensing. These methods are specialized for quantum states that are fairly pure, and they offer a significant performance improvement on large quantum systems. In particular, they are able to reconstruct an unknown density matrix of dimension d and rank r using O(rd log^2 d) measurement settings, compared to standard methods that require d^2 settings. 
Our methods have several features that make them amenable to experimental implementation: they require only simple Pauli measurements, use fast convex optimization, are stable against noise, and can be applied to states that are only approximately low-rank. The acquired data can be used to certify that the state is indeed close to pure, so no a priori assumptions are needed. We present both theoretical bounds and numerical simulations. %B Physical Review Letters %V 105 %8 2010/10/4 %G eng %U http://arxiv.org/abs/0909.3304v4 %N 15 %! Phys. Rev. Lett. %R 10.1103/PhysRevLett.105.150401 %0 Journal Article %J Proc. ACM Symposium on Theory of Computing (STOC) %D 2009 %T Quantum Algorithms Using the Curvelet Transform %A Yi-Kai Liu %X The curvelet transform is a directional wavelet transform over R^n, which is used to analyze functions that have singularities along smooth surfaces (Candes and Donoho, 2002). I demonstrate how this can lead to new quantum algorithms. I give an efficient implementation of a quantum curvelet transform, together with two applications: a single-shot measurement procedure for approximately finding the center of a ball in R^n, given a quantum-sample over the ball; and, a quantum algorithm for finding the center of a radial function over R^n, given oracle access to the function. I conjecture that these algorithms succeed with constant probability, using one quantum-sample and O(1) oracle queries, respectively, independent of the dimension n -- this can be interpreted as a quantum speed-up. To support this conjecture, I prove rigorous bounds on the distribution of probability mass for the continuous curvelet transform. This shows that the above algorithms work in an idealized "continuous" model. %B Proc. ACM Symposium on Theory of Computing (STOC) %P 391-400 %8 2009/10/28 %G eng %U http://arxiv.org/abs/0810.4968v2 %! Proc. 
ACM Symposium on Theory of Computing (STOC) %0 Journal Article %D 2007 %T The Complexity of the Consistency and N-representability Problems for Quantum States %A Yi-Kai Liu %X QMA (Quantum Merlin-Arthur) is the quantum analogue of the class NP. There are a few QMA-complete problems, most notably the ``Local Hamiltonian'' problem introduced by Kitaev. In this dissertation we show some new QMA-complete problems. The first one is ``Consistency of Local Density Matrices'': given several density matrices describing different (constant-size) subsets of an n-qubit system, decide whether these are consistent with a single global state. This problem was first suggested by Aharonov. We show that it is QMA-complete, via an oracle reduction from Local Hamiltonian. This uses algorithms for convex optimization with a membership oracle, due to Yudin and Nemirovskii. Next we show that two problems from quantum chemistry, ``Fermionic Local Hamiltonian'' and ``N-representability,'' are QMA-complete. These problems arise in calculating the ground state energies of molecular systems. N-representability is a key component in recently developed numerical methods using the contracted Schrodinger equation. Although these problems have been studied since the 1960's, it is only recently that the theory of quantum computation has allowed us to properly characterize their complexity. Finally, we study some special cases of the Consistency problem, pertaining to 1-dimensional and ``stoquastic'' systems. We also give an alternative proof of a result due to Jaynes: whenever local density matrices are consistent, they are consistent with a Gibbs state. %8 2007/12/18 %G eng %U http://arxiv.org/abs/0712.3041v1 %9 PhD Thesis, Univ. of California, San Diego %0 Journal Article %D 2007 %T The Local Consistency Problem for Stoquastic and 1-D Quantum Systems %A Yi-Kai Liu %X The Local Hamiltonian problem (finding the ground state energy of a quantum system) is known to be QMA-complete. 
The Local Consistency problem (deciding whether descriptions of small pieces of a quantum system are consistent) is also known to be QMA-complete. Here we consider special cases of Local Hamiltonian, for ``stoquastic'' and 1-dimensional systems, that seem to be strictly easier than QMA. We show that there exist analogous special cases of Local Consistency, that have equivalent complexity (up to poly-time oracle reductions). Our main technical tool is a new reduction from Local Consistency to Local Hamiltonian, using SDP duality. %8 2007/12/10 %G eng %U http://arxiv.org/abs/0712.1388v2 %0 Journal Article %J Phys. Rev. Lett. %D 2007 %T N-representability is QMA-complete %A Yi-Kai Liu %A Matthias Christandl %A F. Verstraete %X We study the computational complexity of the N-representability problem in quantum chemistry. We show that this problem is quantum Merlin-Arthur complete, which is the quantum generalization of nondeterministic polynomial time complete. Our proof uses a simple mapping from spin systems to fermionic systems, as well as a convex optimization technique that reduces the problem of finding ground states to N representability. %B Phys. Rev. Lett. %V 98 %8 2007/03/16 %G eng %U http://journals.aps.org/prl/abstract/10.1103/PhysRevLett.98.110503 %N 11 %R 10.1103/PhysRevLett.98.110503 %0 Journal Article %J Proc. RANDOM %D 2006 %T On Bounded Distance Decoding for General Lattices %A Yi-Kai Liu %A Vadim Lyubashevsky %A Daniele Micciancio %X A central problem in the algorithmic study of lattices is the closest vector problem: given a lattice L represented by some basis, and a target point y⃗ , find the lattice point closest to y⃗ . Bounded Distance Decoding is a variant of this problem in which the target is guaranteed to be close to the lattice, relative to the minimum distance λ1(L) of the lattice. 
Specifically, in the α-Bounded Distance Decoding problem (α-BDD), we are given a lattice L and a vector y⃗ (within distance α⋅λ1(L) from the lattice), and we are asked to find a lattice point x⃗ ∈ L within distance α⋅λ1(L) from the target. In coding theory, the lattice points correspond to codewords, and the target points correspond to lattice points being perturbed by noise vectors. Since in coding theory the lattice is usually fixed, we may “pre-process” it before receiving any targets, to make the subsequent decoding faster. This leads us to consider α-BDD with pre-processing. We show how a recent technique of Aharonov and Regev [2] can be used to solve α-BDD with pre-processing in polynomial time for α = O(√((log n)/n)). This improves upon the previously best known algorithm due to Klein [13] which solved the problem for α = O(1/n). We also establish hardness results for α-BDD and α-BDD with pre-processing, as well as generalize our results to other ℓ_p norms. %B Proc. RANDOM %P 450-461 %8 2006/01/01 %G eng %U http://link.springer.com/chapter/10.1007/11830924_41#page-1 %0 Journal Article %J Proc. RANDOM %D 2006 %T Consistency of Local Density Matrices is QMA-complete %A Yi-Kai Liu %X Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes a subset C_i of the qubits. We say that the rho_i are ``consistent'' if there exists some global state sigma (on all n qubits) that matches each of the rho_i on the subsets C_i. This generalizes the classical notion of the consistency of marginal probability distributions. We show that deciding the consistency of local density matrices is QMA-complete (where QMA is the quantum analogue of NP). This gives an interesting example of a hard problem in QMA. Our proof is somewhat unusual: we give a Turing reduction from Local Hamiltonian, using a convex optimization algorithm by Bertsimas and Vempala, which is based on random sampling. 
Unlike in the classical case, simple mapping reductions do not seem to work here. %B Proc. RANDOM %P 438-449 %8 2006/04/21 %G eng %U http://arxiv.org/abs/quant-ph/0604166v3 %0 Journal Article %D 2006 %T Gibbs States and the Consistency of Local Density Matrices %A Yi-Kai Liu %X Suppose we have an n-qubit system, and we are given a collection of local density matrices rho_1,...,rho_m, where each rho_i describes some subset of the qubits. We say that rho_1,...,rho_m are "consistent" if there exists a global state sigma (on all n qubits) whose reduced density matrices match rho_1,...,rho_m. We prove the following result: if rho_1,...,rho_m are consistent with some state sigma > 0, then they are also consistent with a state sigma' of the form sigma' = (1/Z) exp(M_1+...+M_m), where each M_i is a Hermitian matrix acting on the same qubits as rho_i, and Z is a normalizing factor. (This is known as a Gibbs state.) Actually, we show a more general result, on the consistency of a set of expectation values