Zapata Computing Inc.
The study developed and evaluated algorithmic protocols for decomposing Matrix Product States (MPS) into shallow parametrized quantum circuits (PQCs). Researchers found that an iterative approach combining analytical layer initialization with global optimization of all circuit unitaries consistently yielded superior fidelity for 12-qubit MPS of various types, enabling efficient transfer of quantum states to near-term quantum hardware.
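The abstract does not spell the protocol out, but its overall structure can be sketched in a few lines of numpy/scipy: layers of two-qubit gates are appended one at a time, and all gate parameters are then re-optimized jointly against the target state. In this illustrative sketch the target is a random state standing in for an MPS, the new layer is initialized near the identity rather than by the analytical MPS-based construction, and the qubit count is kept small; it is a minimal stand-in under those assumptions, not the authors' implementation.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

n = 4                                          # qubits (kept small for the sketch)
rng = np.random.default_rng(0)
target = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
target /= np.linalg.norm(target)               # toy stand-in for an MPS-encoded state

P = 32                                         # real parameters per two-qubit gate

def gate(p):
    """General two-qubit unitary from 32 real parameters via a Hermitian generator."""
    m = p[:16].reshape(4, 4) + 1j * p[16:].reshape(4, 4)
    return expm(1j * (m + m.conj().T))

def apply_gate(psi, U, q):
    """Apply a two-qubit gate to adjacent qubits (q, q+1)."""
    psi = psi.reshape(2**q, 4, -1)
    return np.einsum('ab,xbz->xaz', U, psi).reshape(-1)

def circuit_state(params, n_layers):
    psi = np.zeros(2**n, dtype=complex)
    psi[0] = 1.0                               # start from |0...0>
    for layer in params.reshape(n_layers, n - 1, P):
        for q in range(n - 1):                 # staircase of nearest-neighbour gates
            psi = apply_gate(psi, gate(layer[q]), q)
    return psi

def infidelity(params, n_layers):
    return 1.0 - abs(np.vdot(target, circuit_state(params, n_layers)))**2

params = np.zeros(0)
for n_layers in (1, 2):
    # Append a new layer (near-identity init here, instead of the analytical
    # MPS-based initialization), then optimize *all* layers globally.
    params = np.concatenate([params, 1e-2 * rng.normal(size=(n - 1) * P)])
    res = minimize(infidelity, params, args=(n_layers,), method='L-BFGS-B')
    params = res.x
    print(f"layers={n_layers}  fidelity={1 - res.fun:.4f}")

In the protocol described above, the near-identity initialization would be replaced by an analytically constructed layer obtained from the MPS itself, which is what the study found makes the iterative scheme converge reliably.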
Practical challenges in simulating quantum systems on classical computers have been widely recognized in the quantum physics and quantum chemistry communities over the past century. Although many approximation methods have been introduced, the complexity of quantum mechanics remains hard to tame. The advent of quantum computation brings new pathways to navigate this challenging complexity landscape. By manipulating quantum states of matter and taking advantage of their unique features such as superposition and entanglement, quantum computers promise to efficiently deliver accurate results for many important problems in quantum chemistry, such as the electronic structure of molecules. In the past two decades significant advances have been made in developing algorithms and physical hardware for quantum computing, heralding a revolution in the simulation of quantum systems. This article is an overview of the algorithms and results that are relevant for quantum chemistry. The intended audience is both quantum chemists who seek to learn more about quantum computing and quantum computing researchers who would like to explore applications in quantum chemistry.
Generating high-quality data (e.g. images or video) is one of the most exciting and challenging frontiers in unsupervised machine learning. Utilizing quantum computers in such tasks to potentially enhance conventional machine learning algorithms has emerged as a promising application, but poses big challenges due to the limited number of qubits and the level of gate noise in available devices. In this work, we provide the first practical and experimental implementation of a quantum-classical generative algorithm capable of generating high-resolution images of handwritten digits with state-of-the-art gate-based quantum computers. In our quantum-assisted machine learning framework, we implement a quantum-circuit-based generative model to learn and sample the prior distribution of a Generative Adversarial Network. We introduce a multi-basis technique that leverages the unique possibility of measuring quantum states in different bases, hence enhancing the expressivity of the prior distribution. We train this hybrid algorithm on an ion-trap device based on $^{171}$Yb$^{+}$ ion qubits to generate high-quality images and quantitatively outperform comparable classical Generative Adversarial Networks trained on the popular MNIST data set for handwritten digits.
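The multi-basis idea can be illustrated with a toy numpy simulation: the same circuit state is sampled once in the computational (Z) basis and once after a change of basis (here a Hadamard on every qubit, i.e. the X basis), and the two bitstrings are concatenated into a single latent vector handed to the classical generator. The 3-qubit random state and all names below are illustrative stand-ins; the experiment in the paper samples from a trained ion-trap circuit rather than a random state.

import numpy as np

rng = np.random.default_rng(1)
n = 3                                           # toy qubit count

# Stand-in for the state prepared by the trained quantum circuit.
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

H1 = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
Hn = H1
for _ in range(n - 1):
    Hn = np.kron(Hn, H1)                        # Hadamard on every qubit -> X basis

def sample_bits(state, shots):
    """Sample computational-basis bitstrings from a state vector."""
    probs = np.abs(state)**2
    idx = rng.choice(len(probs), size=shots, p=probs)
    return (idx[:, None] >> np.arange(n)[::-1]) & 1

shots = 4
z_bits = sample_bits(psi, shots)                # Z-basis measurements
x_bits = sample_bits(Hn @ psi, shots)           # X-basis measurements
latent = np.concatenate([z_bits, x_bits], axis=1)  # 2n-bit latent vectors for the GAN prior
print(latent)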
The computational power of real-world quantum computers is limited by errors. When using quantum computers to perform algorithms which cannot be efficiently simulated classically, it is important to quantify the accuracy with which the computation has been performed. In this work we introduce a machine-learning-based technique to estimate the fidelity between the state produced by a noisy quantum circuit and the target state corresponding to ideal noise-free computation. Our machine learning model is trained in a supervised manner, using smaller or simpler circuits for which the fidelity can be estimated using other techniques like direct fidelity estimation and quantum state tomography. We demonstrate that, for simulated random quantum circuits with a realistic noise model, the trained model can predict the fidelities of more complicated circuits for which such methods are infeasible. In particular, we show the trained model may make predictions for circuits with higher degrees of entanglement than were available in the training set, and that the model may make predictions for non-Clifford circuits even when the training set included only Clifford-reducible circuits. This empirical demonstration suggests classical machine learning may be useful for making predictions about beyond-classical quantum circuits for some non-trivial problems.
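As a schematic of the supervised setup, the toy script below trains a random-forest regressor on simple (circuit-size, gate-count) features of small circuits whose "fidelities" are generated from an assumed exponential-decay model, and then queries it on larger circuits. The feature set, the decay model, and the use of scikit-learn are all illustrative assumptions; in the paper the training labels come from direct fidelity estimation or tomography on circuits small or simple enough to allow it.

import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

def make_dataset(n_samples, max_qubits, max_depth):
    """Toy circuits summarized by simple features; labels from an assumed decay model."""
    n_q = rng.integers(2, max_qubits + 1, n_samples)
    depth = rng.integers(1, max_depth + 1, n_samples)
    one_q = n_q * depth
    two_q = n_q * depth // 2                       # rough two-qubit-gate count
    # Assumed noise model: per-gate infidelities with a little scatter.
    fid = np.exp(-(1e-3 * one_q + 1e-2 * two_q)) * (1 + 0.01 * rng.normal(size=n_samples))
    X = np.column_stack([n_q, depth, one_q, two_q])
    return X, np.clip(fid, 0.0, 1.0)

# Train on small/shallow circuits (where DFE/tomography labels would be available) ...
X_train, y_train = make_dataset(2000, max_qubits=6, max_depth=10)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)

# ... and predict fidelities for larger, deeper circuits.
X_test, y_test = make_dataset(200, max_qubits=10, max_depth=25)
pred = model.predict(X_test)
print("mean absolute error on held-out circuits:", np.abs(pred - y_test).mean())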
Measuring quantum observables by grouping terms that can be rotated to sums of only products of Pauli $\hat{z}$ operators (Ising form) has proven efficient in near-term quantum computing algorithms. This approach requires extra unitary transformations to rotate the state of interest so that measurement of a fragment's Ising form is equivalent to measurement of the fragment for the unrotated state. These extra rotations allow one to perform fewer measurements by grouping more terms into the measurable fragments, with a lower overall estimator variance. However, previous estimations of the number of measurements did not take into account the non-unit fidelity of quantum gates implementing the additional transformations. By reducing circuit fidelity, the additional transformations introduce extra uncertainty and increase the required number of measurements. Here we consider a simple model for errors introduced by the additional gates needed in schemes involving grouping of commuting Pauli products. For a set of molecular electronic Hamiltonians, we confirm that the numbers of measurements in schemes using non-local qubit rotations are still lower than those in their local qubit rotation counterparts, even after accounting for uncertainties introduced by additional gates.
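A minimal way to make this accounting explicit, assuming a simple global-depolarizing model for the extra rotation circuit of each measurable fragment (an illustrative assumption, not necessarily the error model used in the paper): if fragment $g$ is measured after a rotation implemented with circuit fidelity $F_g$, the raw estimator is biased and must be rescaled by $1/F_g$, which inflates its variance and hence the total shot count,

\langle \hat{O}_g \rangle_{\mathrm{noisy}} = F_g \langle \hat{O}_g \rangle + (1 - F_g)\,\frac{\mathrm{Tr}\,\hat{O}_g}{2^{N}},
\qquad
M_{\mathrm{total}} \approx \frac{1}{\epsilon^{2}} \left( \sum_{g} \frac{\sqrt{\mathrm{Var}\,\hat{O}_g}}{F_g} \right)^{2}.

Grouping more terms lowers $\sum_g \sqrt{\mathrm{Var}\,\hat{O}_g}$ but also lowers $F_g$ through the longer rotation circuits; the finding above is that the first effect still dominates for the molecular Hamiltonians studied.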
The ability of near-term quantum computers to represent classically-intractable quantum states has brought much interest in using such devices for estimating the ground and excited state energies of fermionic Hamiltonians. The usefulness of such near-term techniques, generally based on the Variational Quantum Eigensolver (VQE), however, is limited by device noise and the need to perform many circuit repetitions. This paper addresses these challenges by generalizing VQE to consider wavefunctions in a subspace spanned by classically tractable states and states that can be prepared on a quantum computer. The manuscript shows how the ground and excited state energies can be estimated using such "classical-boosting" and how this approach can be combined with VQE Hamiltonian decomposition techniques. Unlike existing VQE approaches, the sensitivity to sampling error and device noise approaches zero in the limit where the classically tractable states are able to describe an eigenstate. A detailed analysis of the measurement requirements in the simplest case, where a single computational basis state is used to boost conventional VQE, shows that the ground-state energy estimation of several closed-shell homonuclear diatomic molecules can be accelerated by a factor of approximately 10-1000. The analysis also shows that the measurement reduction of such single basis state boosting, relative to conventional VQE, can be estimated using only the overlap between the ground state and the computational basis state used for boosting.
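In its simplest two-state form, the subspace construction reduces to a 2x2 generalized eigenvalue problem mixing a classically tractable state with the quantum-prepared state. The toy script below builds that problem for a random 2-qubit Hamiltonian, with |phi> a computational basis state and |psi> an arbitrary normalized vector standing in for the PQC state; the matrix elements a quantum computer would supply are computed here by direct linear algebra, and all specifics are illustrative.

import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)

# Toy 2-qubit Hamiltonian (stand-in for a molecular qubit Hamiltonian).
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
Ham = (A + A.conj().T) / 2

phi = np.zeros(4); phi[0] = 1.0                  # classically tractable basis state
psi = rng.normal(size=4) + 1j * rng.normal(size=4)
psi /= np.linalg.norm(psi)                       # stand-in for the PQC state |psi(theta)>

basis = [phi, psi]                               # non-orthogonal two-state subspace
H = np.array([[np.vdot(a, Ham @ b) for b in basis] for a in basis])
S = np.array([[np.vdot(a, b) for b in basis] for a in basis])

# Generalized eigenvalue problem H c = E S c; the lowest root is the boosted energy.
E = eigh(H, S, eigvals_only=True)
print("boosted ground-state estimate:", E[0])
print("exact ground-state energy    :", np.linalg.eigvalsh(Ham)[0])

When |phi> is already close to an eigenstate, the lowest root depends only weakly on sampling noise in the quantum matrix elements, which is the mechanism behind the vanishing sensitivity described above.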
While recent breakthroughs have proven the ability of noisy intermediate-scale quantum (NISQ) devices to achieve quantum advantage in classically-intractable sampling tasks, the use of these devices for solving more practically relevant computational problems remains a challenge. Proposals for attaining practical quantum advantage typically involve parametrized quantum circuits (PQCs), whose parameters can be optimized to find solutions to diverse problems throughout quantum simulation and machine learning. However, training PQCs for real-world problems remains a significant practical challenge, largely due to the phenomenon of barren plateaus in the optimization landscapes of randomly-initialized quantum circuits. In this work, we introduce a scalable procedure for harnessing classical computing resources to provide pre-optimized initializations for PQCs, which we show significantly improves the trainability and performance of PQCs on a variety of problems. Given a specific optimization task, this method first utilizes tensor network (TN) simulations to identify a promising quantum state, which is then converted into gate parameters of a PQC by means of a high-performance decomposition procedure. We show that this learned initialization avoids barren plateaus, and effectively translates increases in classical resources to enhanced performance and speed in training quantum circuits. By demonstrating a means of boosting limited quantum resources using classical computers, our approach illustrates the promise of this synergy between quantum and quantum-inspired models in quantum computing, and opens up new avenues to harness the power of modern quantum hardware for realizing practical quantum advantage.
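The first stage of this procedure hands the decomposition step a tensor-network (MPS) representation of the promising state. As a minimal illustration of that object, the snippet below compresses an arbitrary state vector into an MPS of bounded bond dimension via sequential truncated SVDs; in the paper the MPS would instead come out of a classical TN optimization, so this is only a sketch of the data structure involved, not the training procedure itself.

import numpy as np

def state_to_mps(psi, n, max_bond):
    """Compress an n-qubit state vector into an MPS via sequential truncated SVDs."""
    tensors, chi = [], 1
    rest = psi.reshape(1, -1)                        # (left bond, remaining qubits)
    for site in range(n - 1):
        rest = rest.reshape(chi * 2, -1)
        u, s, vh = np.linalg.svd(rest, full_matrices=False)
        keep = min(max_bond, len(s))                 # truncate the bond dimension
        tensors.append(u[:, :keep].reshape(chi, 2, keep))
        rest = s[:keep, None] * vh[:keep]            # absorb weights to the right
        chi = keep                                   # (truncation discards weight; a
    tensors.append(rest.reshape(chi, 2, 1))          #  real pipeline would renormalize)
    return tensors

rng = np.random.default_rng(0)
n = 6
psi = rng.normal(size=2**n) + 1j * rng.normal(size=2**n)
psi /= np.linalg.norm(psi)

mps = state_to_mps(psi, n, max_bond=4)
print([t.shape for t in mps])                        # bond dimensions capped at 4

The decomposition stage then converts tensors like these into gate parameters of the PQC, after which training continues on the quantum device from that initialization.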
Research by Zhou et al. from Zapata Computing and bp Technology explores the practical implications of finite scalability in early fault-tolerant quantum computing (EFTQC) for simulating homogeneous catalysts. It quantifies how hardware characteristics and error correction codes influence resource requirements, demonstrating that high-fidelity architectures require lower minimum scalability and that LDPC codes can significantly enhance their runtime competitiveness for chemically relevant problems.
A milestone in the field of quantum computing will be solving problems in quantum chemistry and materials faster than state-of-the-art classical methods. The current understanding is that achieving quantum advantage in this area will require some degree of fault tolerance. While hardware is improving towards this milestone, optimizing quantum algorithms also brings it closer to the present. Existing methods for ground state energy estimation are costly in that they require a number of gates per circuit that grows exponentially with the desired number of bits in precision. We reduce this cost exponentially, by developing a ground state energy estimation algorithm for which this cost grows linearly in the number of bits of precision. Relative to recent resource estimates of ground state energy estimation for the industrially relevant molecules ethylene carbonate and PF$_6^-$, the estimated gate count and circuit depth are reduced by factors of 43 and 78, respectively. Furthermore, the algorithm can use additional circuit depth to reduce the total runtime. These features make our algorithm a promising candidate for realizing quantum advantage in the era of early fault-tolerant quantum computing.
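Restating the precision scaling in symbols: for a target precision of $b$ bits, i.e. $\epsilon = 2^{-b}$, prior methods use circuits whose per-circuit gate count scales as $O(1/\epsilon) = O(2^{b})$, whereas here the per-circuit gate count scales as $O(b) = O(\log(1/\epsilon))$; this is the exponential reduction referred to above.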
It is expected that the simulation of correlated fermions in chemistry and material science will be one of the first practical applications of quantum processors. Given the rapid evolution of quantum hardware, it is increasingly important to develop robust benchmarking techniques to gauge the capacity of quantum hardware specifically for the purpose of fermionic simulation. Here we propose using the one-dimensional Fermi-Hubbard model as an application benchmark for variational quantum simulations on near-term quantum devices. Since the one-dimensional Hubbard model is both strongly correlated and exactly solvable with the Bethe ansatz, it provides a reference ground state energy that a given device with limited coherence will be able to approximate up to a maximal size. The length of the largest chain that can be simulated provides an effective fermionic length. We use the variational quantum eigensolver to approximate the ground state energy values of Fermi-Hubbard instances and show how the fermionic length benchmark can be used in practice to assess the performance of bounded-depth devices in a scalable fashion.
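For chains small enough to fit in memory, the Bethe-ansatz reference energy can equally be obtained by exact diagonalization, which makes the benchmark concrete. The snippet below builds the open-boundary, half-filled 1D Hubbard Hamiltonian via a Jordan-Wigner construction and reports the reference ground-state energy a VQE run would be compared against; the chain length and couplings are illustrative choices, not those of the paper.

import numpy as np
from functools import reduce

L, t, U = 2, 1.0, 4.0           # sites, hopping, on-site repulsion (toy values)
n_modes = 2 * L                 # one spin-up and one spin-down mode per site

I2 = np.eye(2); Z = np.diag([1.0, -1.0]); sm = np.array([[0.0, 1.0], [0.0, 0.0]])

def annihilation(p):
    """Jordan-Wigner annihilation operator for fermionic mode p."""
    ops = [Z] * p + [sm] + [I2] * (n_modes - p - 1)
    return reduce(np.kron, ops)

a = [annihilation(p) for p in range(n_modes)]
dim = 2 ** n_modes
H = np.zeros((dim, dim))

mode = lambda site, spin: 2 * site + spin   # spin: 0 = up, 1 = down
for i in range(L - 1):                      # hopping between neighbouring sites
    for s in (0, 1):
        p, q = mode(i, s), mode(i + 1, s)
        H -= t * (a[p].T @ a[q] + a[q].T @ a[p])
for i in range(L):                          # on-site repulsion
    H += U * (a[mode(i, 0)].T @ a[mode(i, 0)]) @ (a[mode(i, 1)].T @ a[mode(i, 1)])

# Restrict to half filling (L particles) and take the lowest eigenvalue as the reference.
occ = np.array([bin(k).count("1") for k in range(dim)])
sector = np.where(occ == L)[0]
E0 = np.linalg.eigvalsh(H[np.ix_(sector, sector)]).min()
print(f"reference ground-state energy (L={L}): {E0:.6f}")

A VQE ansatz of bounded depth is then judged by how long a chain it can push to within a chosen tolerance of this reference, which is what defines the effective fermionic length.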
This study explores hardware implementation of Robust Amplitude Estimation (RAE) on IBM quantum devices, demonstrating its application in quantum chemistry for one- and two-qubit Hamiltonian systems. Known for potentially offering quadratic speedups over traditional methods in estimating expectation values, RAE is evaluated under realistic noisy conditions. Our experiments provide detailed insights into the practical challenges associated with RAE. We achieved a significant reduction in sampling requirements compared to direct measurement techniques. In estimating the ground state energy of the hydrogen molecule, the RAE implementation demonstrated two orders of magnitude better accuracy in the two-qubit experiments and achieved chemical accuracy. These findings reveal its potential to enhance computational efficiency in quantum chemistry applications despite the inherent limitations posed by hardware noise. We also found that RAE's performance can be adversely impacted by coherent errors and limited device stability, and does not always correlate with the average gate error. These results underscore the importance of adapting quantum computational methods to hardware specifics to realize their full potential in practical scenarios.
Quantum algorithms for Noisy Intermediate-Scale Quantum (NISQ) machines have recently emerged as promising routes towards demonstrating near-term quantum advantage (or supremacy) over classical systems. In these systems, samples are typically drawn from probability distributions which, under plausible complexity-theoretic conjectures, cannot be efficiently generated classically. Rather than first define a physical system and then determine computational features of the output state, we ask the converse question: given direct access to the quantum state, what features of the generating system can we efficiently learn? In this work we introduce the Variational Quantum Unsampling (VQU) protocol, a nonlinear quantum neural network approach for verification and inference of near-term quantum circuit outputs. In our approach one can variationally train a quantum operation to unravel the action of an unknown unitary on a known input state; essentially learning the inverse of the black-box quantum dynamics. While the principle of our approach is platform independent, its implementation will depend on the unique architecture of a specific quantum processor. Here, we experimentally demonstrate the VQU protocol on a quantum photonic processor. Alongside quantum verification, our protocol has broad applications, including optimal quantum measurement and tomography, quantum sensing and imaging, and ansatz validation.
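In its simplest form, the unsampling objective is to train a variational operation V(theta) so that V(theta) U |psi_in> returns to the known input state, i.e. to maximize |<psi_in| V(theta) U |psi_in>|^2. The toy script below does this for a random two-qubit "black-box" unitary, parametrizing V as a fully general unitary for brevity (the paper instead uses a layered, platform-specific ansatz on a photonic processor); everything here is an illustrative stand-in.

import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(0)
n = 2
dim = 2**n

# Random "unknown" unitary standing in for the black-box circuit to be unsampled.
M = rng.normal(size=(dim, dim)) + 1j * rng.normal(size=(dim, dim))
U_blackbox = expm(1j * (M + M.conj().T))

psi_in = np.zeros(dim, dtype=complex)
psi_in[0] = 1.0                              # known input state |00>
psi_out = U_blackbox @ psi_in                # the only access needed to the black box

def V(params):
    """Variational unitary from a Hermitian generator (fully general ansatz, for brevity)."""
    m = params[:dim * dim].reshape(dim, dim) + 1j * params[dim * dim:].reshape(dim, dim)
    return expm(1j * (m + m.conj().T))

def cost(params):
    # Infidelity between V(theta) U |psi_in> and |psi_in>.
    return 1.0 - abs(np.vdot(psi_in, V(params) @ psi_out))**2

res = minimize(cost, 1e-2 * rng.normal(size=2 * dim * dim), method='L-BFGS-B')
print("final unsampling infidelity:", res.fun)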
Recent advances in quantum computing devices have brought attention to hybrid quantum-classical algorithms like the Variational Quantum Eigensolver (VQE) as a potential route to practical quantum advantage in chemistry. However, it is not yet clear whether such algorithms, even in the absence of device error, could actually achieve quantum advantage for systems of practical interest. We have performed an exhaustive analysis to estimate the number of qubits and number of measurements required to compute the combustion energies of small organic molecules and related systems to within chemical accuracy of experimental values using VQE. We consider several key modern improvements to VQE, including low-rank factorizations of the Hamiltonian. Our results indicate that although these techniques are useful, they will not be sufficient to achieve practical quantum computational advantage for our molecular set, or for similar molecules. This suggests that novel approaches to operator estimation leveraging quantum coherence, such as Enhanced Likelihood Functions [arXiv:2006.09350, arXiv:2006.09349], may be required.
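The shot-count arithmetic behind this kind of analysis, stated schematically (the numbers are illustrative, not the paper's estimates): writing the qubit Hamiltonian as $\hat{H} = \sum_i c_i \hat{P}_i$ and estimating each Pauli expectation by direct sampling with optimally allocated shots,

M \approx \frac{\left(\sum_i |c_i|\,\sigma_i\right)^{2}}{\epsilon^{2}} \le \frac{\left(\sum_i |c_i|\right)^{2}}{\epsilon^{2}},

so for an illustrative Hamiltonian one-norm of $\sum_i |c_i| \approx 10$ Ha and chemical accuracy $\epsilon \approx 1.6 \times 10^{-3}$ Ha, a single energy evaluation already requires on the order of $4 \times 10^{7}$ shots, before multiplying by the many evaluations an optimizer needs. Techniques such as low-rank factorization reduce the effective one-norm and variances, but per the conclusion above, not by enough for this molecular set.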
Model-based optimization, in concert with conventional black-box methods, can quickly solve large-scale combinatorial problems. Recently, quantum-inspired modeling schemes based on tensor networks have been developed which have the potential to better identify and represent correlations in datasets. Here, we use a quantum-inspired model-based optimization method, TN-GEO, to assess the efficacy of these quantum-inspired methods when applied to realistic problems. In this case, the problem of interest is the optimization of a realistic assembly line based on BMW's currently utilized manufacturing schedule. Through a comparison of optimization techniques, we found that quantum-inspired model-based optimization, when combined with conventional black-box methods, can find lower-cost solutions in certain contexts.
Quantum chemistry and materials science are among the most promising applications of quantum computing. Yet much work is still to be done in matching industry-relevant problems in these areas with quantum algorithms that can solve them. Most previous efforts have carried out resource estimations for quantum algorithms run on large-scale fault-tolerant architectures, which include the quantum phase estimation algorithm. In contrast, few have assessed the performance of near-term quantum algorithms, which include the variational quantum eigensolver (VQE) algorithm. Recently, a large-scale benchmark study [Gonthier et al. 2020] found evidence that the performance of the variational quantum eigensolver for a set of industry-relevant molecules may be too inefficient to be of practical use. This motivates the need for developing and assessing methods that improve the efficiency of VQE. In this work, we predict the runtime of the energy estimation subroutine of VQE when using robust amplitude estimation (RAE) to estimate Pauli expectation values. Under conservative assumptions, our resource estimation predicts that RAE can reduce the runtime over the standard estimation method in VQE by one to two orders of magnitude. Despite this improvement, we find that the runtimes are still too large to be practical. These findings motivate two complementary efforts towards quantum advantage: 1) the investigation of more efficient near-term methods for ground state energy estimation and 2) the development of problem instances that are of industrial value and classically challenging, but better suited to quantum computation.
Quantum metrology allows for measuring properties of a quantum system at the optimal Heisenberg limit. However, when the relevant quantum states are prepared using digital Hamiltonian simulation, the accrued algorithmic errors will cause deviations from this fundamental limit. In this work, we show how algorithmic errors due to Trotterized time evolution can be mitigated through the use of standard polynomial interpolation techniques. Our approach is to extrapolate to zero Trotter step size, akin to zero-noise extrapolation techniques for mitigating hardware errors. We perform a rigorous error analysis of the interpolation approach for estimating eigenvalues and time-evolved expectation values, and show that the Heisenberg limit is achieved up to polylogarithmic factors in the error. Our work suggests that accuracies approaching those of state-of-the-art simulation algorithms may be achieved using Trotter and classical resources alone for a number of relevant algorithmic tasks.
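The extrapolation idea can be reproduced in a few lines: evaluate a time-evolved expectation value with first-order Trotterization at several step sizes s = t/r, fit a polynomial in s, and read off the value at s = 0. The two-spin Hamiltonian, observable, and fit degree below are illustrative choices; the paper's rigorous analysis also covers eigenvalue estimation and establishes the Heisenberg-limit scaling up to polylogarithmic factors.

import numpy as np
from scipy.linalg import expm

# Toy two-qubit Hamiltonian H = A + B and an observable O.
X = np.array([[0.0, 1.0], [1.0, 0.0]]); Z = np.diag([1.0, -1.0]); I = np.eye(2)
A = np.kron(X, X)
B = np.kron(Z, I) + np.kron(I, Z)
O = np.kron(Z, I)

psi0 = np.zeros(4, dtype=complex); psi0[1] = 1.0   # initial state |01>
t = 2.0

def trotter_expectation(r):
    """<O(t)> from first-order Trotter evolution with r steps."""
    step = expm(-1j * A * t / r) @ expm(-1j * B * t / r)
    psi = np.linalg.matrix_power(step, r) @ psi0
    return np.vdot(psi, O @ psi).real

psi_exact = expm(-1j * (A + B) * t) @ psi0
exact = np.vdot(psi_exact, O @ psi_exact).real

steps = np.array([4, 6, 8, 12, 16])
s = t / steps                                       # Trotter step sizes
vals = np.array([trotter_expectation(r) for r in steps])

# Polynomial interpolation in s, evaluated at s = 0 (zero-step-size extrapolation).
extrapolated = np.polyval(np.polyfit(s, vals, deg=2), 0.0)
print(f"exact {exact:+.6f}   best Trotter {vals[-1]:+.6f}   extrapolated {extrapolated:+.6f}")

This is the Trotter analogue of zero-noise extrapolation: instead of extrapolating in a hardware noise parameter, one extrapolates in the algorithmic step size, using only classical post-processing of the measured values.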
Progress in fault-tolerant quantum computation (FTQC) has driven the pursuit of practical applications with early fault-tolerant quantum computers (EFTQC). These devices, limited in their qubit counts and fault-tolerance capabilities, require algorithms that can accommodate some degree of error, which are known as EFTQC algorithms. To predict the onset of early quantum advantage, a comprehensive methodology is needed to develop and analyze EFTQC algorithms, drawing insights from both the methodologies of noisy intermediate-scale quantum (NISQ) and traditional FTQC. To address this need, we propose such a methodology for modeling algorithm performance on EFTQC devices under varying degrees of error. As a case study, we apply our methodology to analyze the performance of Randomized Fourier Estimation (RFE), an EFTQC algorithm for phase estimation. We investigate the runtime performance and the fault-tolerant overhead of RFE in comparison to the traditional quantum phase estimation algorithm. Our analysis reveals that RFE achieves significant savings in physical qubit counts while having a much higher runtime upper bound. We anticipate even greater physical qubit savings when considering more realistic assumptions about the performance of EFTQC devices. By providing insights into the performance trade-offs and resource requirements of EFTQC algorithms, our work contributes to the development of practical and efficient quantum computing solutions on the path to quantum advantage.
Constrained combinatorial optimization problems abound in industry, from portfolio optimization to logistics. One of the major roadblocks in solving these problems is the presence of non-trivial hard constraints which limit the valid search space. In some heuristic solvers, these are typically addressed by introducing certain Lagrange multipliers in the cost function, by relaxing them in some way, or, worse yet, by generating many samples and only keeping valid ones, which leads to very expensive and inefficient searches. In this work, we encode arbitrary integer-valued equality constraints of the form Ax=b directly into U(1)-symmetric tensor networks (TNs) and leverage their applicability as quantum-inspired generative models to assist in the search for solutions to combinatorial optimization problems. This allows us to exploit the generalization capabilities of TN generative models while constraining them so that they only output valid samples. Our constrained TN generative model efficiently captures the constraints by reducing the number of parameters and computational costs. We find that, for tasks with constraints given by arbitrary equalities, symmetric Matrix Product States outperform their standard unconstrained counterparts at finding novel and better solutions to combinatorial optimization problems.
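A schematic picture of how the encoding works (the precise tensor construction is given in the paper): each bond of the MPS carries an integer charge recording the running partial sum of the constraint, $q_k = \sum_{j \le k} A_{:,j}\, x_j$, and only tensor blocks that propagate these charges consistently are retained, with the final bond pinned to $q_n = b$. Any bitstring with nonzero amplitude therefore satisfies $Ax = b$ by construction, which is why no samples need to be discarded after generation.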
The hope of the quantum computing field is that quantum architectures are able to scale up and realize fault-tolerant quantum computing. Due to engineering challenges, such "cheap" error correction may be decades away. In the meantime, we anticipate an era of "costly" error correction, or early fault-tolerant quantum computing. Costly error correction might warrant settling for error-prone quantum computations. This motivates the development of quantum algorithms which are robust to some degree of error, as well as methods to analyze their performance in the presence of error. Several such algorithms have recently been developed; what is missing is a methodology to analyze their robustness. To this end, we introduce a randomized algorithm for the task of phase estimation and give an analysis of its performance under two simple noise models. In both cases the analysis leads to a noise threshold, below which arbitrarily high accuracy can be achieved by increasing the number of samples used in the algorithm. As an application of this general analysis, we compute the maximum ratio of the largest circuit depth and the dephasing scale such that performance guarantees hold. We calculate that the randomized algorithm can succeed with arbitrarily high probability as long as the required circuit depth is less than 0.916 times the dephasing scale.
We formalize the problem of dissipative quantum encoding, and explore the advantages of using Markovian evolution to prepare a quantum code in the desired logical space, with emphasis on discrete-time dynamics and the possibility of exact finite-time convergence. In particular, we investigate robustness of the encoding dynamics and their ability to tolerate initialization errors, thanks to the existence of non-trivial basins of attraction. As a key application, we show that for stabilizer quantum codes on qubits, a finite-time dissipative encoder may always be constructed, by using at most a number of quantum maps determined by the number of stabilizer generators. We find that even in situations where the target code lacks gauge degrees of freedom in its subsystem form, dissipative encoders afford nontrivial robustness against initialization errors, thus overcoming a limitation of purely unitary encoding procedures. Our general results are illustrated in a number of relevant examples, including Kitaev's toric code.