USRA Research Institute for Advanced Computer Science (RIACS)
We explore the interplay between two emerging paradigms: reservoir computing and quantum computing. We observe how quantum systems featuring beyond-classical correlations and vast computational spaces can serve as non-trivial, experimentally viable reservoirs for typical tasks in machine learning. With a focus on neutral atom quantum processing units, we describe and exemplify a novel quantum reservoir computing (QRC) workflow. We conclude with an exploratory discussion of the main challenges ahead, arguing that QRC is a natural candidate for pushing reservoir computing applications forward.
We provide practical simulation methods for scalar field theories on a quantum computer that yield improved asymptotics as well as concrete gate estimates for the simulation and physical qubit estimates using the surface code. We achieve these improvements through two optimizations. First, we consider a different approach for estimating the elements of the S-matrix. This approach is appropriate in general for 1+1D and for certain low-energy elastic collisions in higher dimensions. Second, we implement our approach using a series of different fault-tolerant simulation algorithms for Hamiltonians formulated both in the field occupation basis and field amplitude basis. Our algorithms are based on either second-order Trotterization or qubitization. The cost of Trotterization in occupation basis scales as $\widetilde{O}(\lambda N^7 |\Omega|^3/(M^{5/2} \epsilon^{3/2}))$, where $\lambda$ is the coupling strength, $N$ is the occupation cutoff, $|\Omega|$ is the volume of the spatial lattice, $M$ is the mass of the particles, and $\epsilon$ is the uncertainty in the energy calculation used for the $S$-matrix determination. Qubitization in the field basis scales as $\widetilde{O}(|\Omega|^2 (k^2 \Lambda + kM^2)/\epsilon)$, where $k$ is the cutoff in the field and $\Lambda$ is a scaled coupling constant. We find in both cases that the bounds suggest physically meaningful simulations can be performed using on the order of $4\times 10^6$ physical qubits and $10^{12}$ $T$-gates, which corresponds to roughly one day on a superconducting quantum computer with surface code and a cycle time of 100 ns, placing simulation of scalar field theory within striking distance of the gate counts for the best available chemistry simulation results.
We present two deterministic algorithms to approximate single-qutrit gates. These algorithms utilize the Clifford + $\mathbf{R}$ group to find the best approximation of diagonal rotations. The first algorithm exhaustively searches over the group, while the second searches only over Householder reflections. The exhaustive search algorithm yields an average $\mathbf{R}$ count of $2.193(11) + 8.621(7) \log_{10}(1/\varepsilon)$, albeit with a time complexity of $\mathcal{O}(\varepsilon^{-4.4})$. The Householder search algorithm results in a larger average $\mathbf{R}$ count of $3.20(13) + 10.77(3) \log_{10}(1/\varepsilon)$ at a reduced time complexity of $\mathcal{O}(\varepsilon^{-0.42})$, greatly extending the reach in $\varepsilon$. These costs correspond asymptotically to 35% and 69% more non-Clifford gates compared to synthesizing the same unitary with two qubits. Such initial results are encouraging for using the $\mathbf{R}$ gate as the non-transversal gate for qutrit-based computation.
Recent demonstrations on specialized benchmarks have reignited excitement for quantum computers, yet whether they can deliver an advantage for practical real-world problems remains an open question. Here, we show that probabilistic computers (p-computers), when co-designed with hardware to implement powerful Monte Carlo algorithms, provide a compelling and scalable classical pathway for solving hard optimization problems. We focus on two key algorithms applied to 3D spin glasses: discrete-time simulated quantum annealing (DT-SQA) and adaptive parallel tempering (APT). We benchmark these methods against the performance of a leading quantum annealer on the same problem instances. For DT-SQA, we find that increasing the number of replicas improves residual energy scaling, in line with expectations from extreme value theory. We then show that APT, when supported by non-local isoenergetic cluster moves, exhibits a more favorable scaling and ultimately outperforms DT-SQA. We demonstrate these algorithms are readily implementable in modern hardware, projecting that custom Field-Programmable Gate Arrays (FPGAs) or specialized chips can leverage massive parallelism to accelerate these algorithms by orders of magnitude while drastically improving energy efficiency. Our results establish a new, rigorous classical baseline, clarifying the landscape for assessing a practical quantum advantage and presenting p-computers as a scalable platform for real-world optimization challenges.
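As an illustration of the Monte Carlo machinery discussed above, here is a minimal parallel tempering (replica exchange) sketch. It uses a 1D random-bond Ising chain as a stand-in for the 3D spin glasses in the abstract, and omits the isoenergetic cluster moves and adaptive temperature selection of APT; all parameters are illustrative.

```python
import math
import random

def energy(s, J):
    # Ising energy E(s) = -sum_{<ij>} J_ij s_i s_j
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

def parallel_tempering(J, n, betas, sweeps, rng):
    nbrs = {i: [] for i in range(n)}
    for (i, j), Jij in J.items():
        nbrs[i].append((j, Jij))
        nbrs[j].append((i, Jij))
    reps = [[rng.choice([-1, 1]) for _ in range(n)] for _ in betas]
    best = min(energy(s, J) for s in reps)
    for _ in range(sweeps):
        # Metropolis sweeps, one replica per temperature
        for s, beta in zip(reps, betas):
            for _ in range(n):
                i = rng.randrange(n)
                dE = 2.0 * s[i] * sum(Jij * s[j] for j, Jij in nbrs[i])
                if dE <= 0 or rng.random() < math.exp(-beta * dE):
                    s[i] = -s[i]
        # replica-exchange moves between adjacent temperatures
        for k in range(len(betas) - 1):
            dB = betas[k + 1] - betas[k]
            dE = energy(reps[k + 1], J) - energy(reps[k], J)
            if rng.random() < min(1.0, math.exp(dB * dE)):
                reps[k], reps[k + 1] = reps[k + 1], reps[k]
        best = min(best, min(energy(s, J) for s in reps))
    return best

rng = random.Random(0)
n = 16
# random +/-1 couplings on a chain (a 1D stand-in for the 3D glasses above)
J = {(i, i + 1): float(rng.choice([-1, 1])) for i in range(n - 1)}
best_E = parallel_tempering(J, n, betas=[0.2, 0.5, 1.0, 2.0], sweeps=200, rng=rng)
print("best energy found:", best_E)
```

On a 1D chain every bond can be satisfied, so the ground-state energy is $-(n-1)$; the abstract's point is that on 3D instances the scaling of such residual energies, and the hardware parallelism behind each sweep, become the interesting quantities.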
We identify a period-4 measurement schedule for the checks of the Bacon-Shor code that fully covers spacetime with constant-weight detectors, and is numerically observed to provide the code with a threshold. Unlike previous approaches, our method does not rely on code concatenation and instead arises as the solution to a coloring game on a square grid. Under a uniform circuit-level noise model, we observe a threshold of approximately $0.3\%$ when decoding with minimum weight perfect matching, and we conjecture that this could be improved using a more tailored decoder.
Recent Large Language Models (LLMs) have demonstrated impressive capabilities at tasks that require human intelligence and are a significant step towards human-like artificial intelligence (AI). Yet the performance of LLMs at reasoning tasks has been subpar, and the reasoning capability of LLMs is a matter of significant debate. While it has been shown that the choice of the prompting technique to the LLM can alter its performance on a multitude of tasks, including reasoning, the best-performing techniques require human-made prompts with knowledge of the tasks at hand. We introduce a framework for what we call Combinatorial Reasoning (CR), a fully-automated prompting method, where reasons are sampled from an LLM pipeline and mapped into a Quadratic Unconstrained Binary Optimization (QUBO) problem. The framework investigates whether QUBO solutions can be profitably used to select a useful subset of the reasons to construct a Chain-of-Thought style prompt. We explore the acceleration of CR with specialized solvers. We also investigate the performance of simpler zero-shot strategies such as linear majority rule or random selection of reasons. Our preliminary study indicates that coupling a combinatorial solver to generative AI pipelines is an interesting avenue for AI reasoning and elucidates design principles for future CR methods.
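A toy version of the reason-selection step can illustrate the idea: choosing a subset of sampled reasons becomes a QUBO whose linear terms reward frequently sampled reasons and whose quadratic terms penalize redundant pairs. The weights, statistics, and brute-force solver below are illustrative stand-ins; the actual CR pipeline's weighting and solvers are not specified in the abstract.

```python
import itertools

def qubo_value(x, Q):
    # x is a 0/1 tuple; Q maps index pairs (i, j), i <= j, to weights
    return sum(w * x[i] * x[j] for (i, j), w in Q.items())

def brute_force_qubo(Q, n):
    # exhaustive minimization; fine for toy sizes, replaced by a
    # specialized solver in a real pipeline
    return min(itertools.product([0, 1], repeat=n),
               key=lambda x: qubo_value(x, Q))

# Hypothetical statistics for four sampled reasons
counts = [5, 4, 4, 1]                                  # sample frequencies
overlap = {(0, 1): 3, (0, 2): 1, (0, 3): 0,            # pairwise redundancy
           (1, 2): 1, (1, 3): 0, (2, 3): 0}

Q = {(i, i): -float(c) for i, c in enumerate(counts)}  # reward frequent reasons
for pair, o in overlap.items():
    Q[pair] = 2.0 * o                                  # penalize redundant pairs

x_best = brute_force_qubo(Q, len(counts))
selected = [i for i, xi in enumerate(x_best) if xi]
print("selected reasons:", selected)
```

Here the heavily overlapping pair (0, 1) is never selected together: the solver keeps the more frequent reason 0 and fills the prompt with the complementary reasons 2 and 3.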
The Bacon-Shor code is a quantum error correcting subsystem code composed of weight-2 check operators that admits a single logical qubit, and has distance $d$ on a $d \times d$ square lattice. We show that when viewed as a Floquet code, by choosing an appropriate measurement schedule of the check operators, it can additionally host several dynamical logical qubits. Specifically, we identify a period-4 measurement schedule of the check operators that preserves logical information between the instantaneous stabilizer groups. Such a schedule measures not only the usual stabilizers of the Bacon-Shor code, but also additional stabilizers that protect the dynamical logical qubits against errors. We show that the code distance of these Floquet-Bacon-Shor codes scales as $\Theta(d/\sqrt{k})$ on an $n = d \times d$ lattice with $k$ dynamical logical qubits, along with the logical qubit of the parent subsystem code. Unlike the usual Bacon-Shor code, the Floquet-Bacon-Shor code family introduced here can therefore saturate the subsystem bound $kd = O(n)$. Moreover, several errors are shown to be self-corrected purely by the measurement schedule itself. This work provides insights into the design space for dynamical codes and expands the known approaches for constructing Floquet codes.
Quantum circuits with local particle number conservation (LPNC) restrict the quantum computation to a subspace of the Hilbert space of the qubit register. In a noiseless or fault-tolerant quantum computation, such quantities are preserved. In the presence of noise, however, the evolution's symmetry can be broken and invalid states can be sampled at the end of the computation. On the other hand, the restriction to a subspace in the ideal case suggests the possibility of more resource-efficient error mitigation techniques for circuits preserving symmetries that are not possible for general circuits. Here, we analyze the probability of staying in such symmetry-preserved subspaces under noise, providing an exact formula for local depolarizing noise. We apply our findings to benchmark, under depolarizing noise, the symmetry robustness of XY-QAOA, which has local particle number conserving symmetries, and is a special case of the Quantum Alternating Operator Ansatz. We also analyze the influence of the choice of encoding the problem on the symmetry robustness of the algorithm and discuss a simple adaptation of the bit-flip code to correct for symmetry-breaking errors with reduced resources.
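The abstract's exact formula is not reproduced here, but the setting is easy to sanity-check in a simplified case: for a single computational basis state under single-qubit depolarizing noise, X and Y errors flip a site's occupation while Z errors preserve it, so the probability of remaining in the fixed-particle-number (Hamming-weight) sector reduces to a combinatorial sum. The sketch below compares that sum against direct Monte Carlo sampling.

```python
import math
import random

def p_stay_exact(n, k, p):
    # Under single-qubit depolarizing noise with error probability p, each
    # Pauli X, Y, Z occurs with probability p/3; X and Y flip the occupation
    # of a site, so each bit flips independently with probability q = 2p/3.
    # A weight-k basis state stays in the Hamming-weight-k sector iff the
    # number of 0->1 flips equals the number of 1->0 flips.
    q = 2.0 * p / 3.0
    return sum(math.comb(n - k, j) * math.comb(k, j)
               * q ** (2 * j) * (1.0 - q) ** (n - 2 * j)
               for j in range(min(k, n - k) + 1))

def p_stay_mc(n, k, p, shots, rng):
    # Monte Carlo check of the combinatorial expression above
    q = 2.0 * p / 3.0
    stay = 0
    for _ in range(shots):
        flipped = [b ^ (rng.random() < q) for b in [1] * k + [0] * (n - k)]
        stay += (sum(flipped) == k)
    return stay / shots

print("exact :", round(p_stay_exact(6, 3, 0.1), 4))
print("sample:", p_stay_mc(6, 3, 0.1, 20000, random.Random(1)))
```

This basis-state picture is a simplification: the paper treats general states and circuits, but it already shows why the survival probability decays with both system size and error rate.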
A quantum annealing solver for the renowned job-shop scheduling problem (JSP) is presented in detail. After formulating the problem as a time-indexed quadratic unconstrained binary optimization problem, several pre-processing and graph embedding strategies are employed to compile optimally parametrized families of the JSP for scheduling instances of up to six jobs and six machines on the D-Wave Systems Vesuvius processor. Problem simplifications and partitioning algorithms, including variable pruning and running strategies that consider tailored binary searches, are discussed and the results from the processor are compared against state-of-the-art global-optimum solvers.
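A minimal sketch of the time-indexed encoding: binary variables $x_{o,t}$ mark operation $o$ starting at time $t$, and quadratic penalties enforce that each operation starts exactly once, that operations within a job respect precedence, and that no machine runs two operations at once. The 2-job, 2-machine instance, unit penalty weights, and brute-force minimizer below are illustrative; the paper compiles much larger instances, with tuned parameters, onto a D-Wave processor.

```python
import itertools

# Tiny JSP instance: each job is an ordered list of (machine, duration) ops.
jobs = [[(0, 1), (1, 1)],      # job 0: machine 0, then machine 1
        [(1, 1), (0, 1)]]      # job 1: machine 1, then machine 0
T = 3                          # time horizon: start times 0 .. T-1

# ops[o] = (job, position-in-job, machine, duration)
ops = [(j, i, m, d) for j, job in enumerate(jobs) for i, (m, d) in enumerate(job)]
var = {(o, t): k for k, (o, t) in
       enumerate(itertools.product(range(len(ops)), range(T)))}

def penalty(x):
    P = 0
    # (1) each operation starts exactly once
    for o in range(len(ops)):
        P += (sum(x[var[o, t]] for t in range(T)) - 1) ** 2
    for o in range(len(ops)):
        for o2 in range(len(ops)):
            jo, io, mo, do = ops[o]
            jo2, io2, mo2, do2 = ops[o2]
            # (2) precedence: the next operation of a job cannot start
            # before the previous one has finished
            if jo == jo2 and io2 == io + 1:
                P += sum(x[var[o, t]] * x[var[o2, t2]]
                         for t in range(T) for t2 in range(T) if t2 < t + do)
            # (3) no two operations may overlap on the same machine
            if o < o2 and mo == mo2:
                P += sum(x[var[o, t]] * x[var[o2, t2]]
                         for t in range(T) for t2 in range(T)
                         if t < t2 + do2 and t2 < t + do)
    return P

best = min(itertools.product([0, 1], repeat=len(var)), key=penalty)
sched = {ops[o][:2]: t for (o, t), k in var.items() if best[k]}
print("feasible start times:", sched, "penalty:", penalty(best))
```

A zero-penalty assignment is exactly a feasible schedule; on an annealer, the same penalty Hamiltonian is minimized physically rather than by enumeration, and the pre-processing in the paper prunes variables like these before embedding.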
Quantum algorithms have been widely studied in the context of combinatorial optimization problems. While this endeavor can often achieve quadratic speedups, both analytically and in practice, theoretical and numerical studies remain limited, especially compared to the study of classical algorithms. We propose and study a new class of hybrid approaches to quantum optimization, termed Iterative Quantum Algorithms, which in particular generalizes the Recursive Quantum Approximate Optimization Algorithm. This paradigm can incorporate hard problem constraints, which we demonstrate by considering the Maximum Independent Set (MIS) problem. We show that, for QAOA with depth $p=1$, this algorithm performs exactly the same operations and selections as the classical greedy algorithm for MIS. We then turn to deeper $p>1$ circuits and other ways to modify the quantum algorithm that can no longer be easily mimicked by classical algorithms, and empirically confirm improved performance. Our work demonstrates the practical importance of incorporating proven classical techniques into more effective hybrid quantum-classical algorithms.
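The classical baseline referenced above is easy to state explicitly: a greedy heuristic repeatedly adds a vertex to the independent set and deletes it together with its neighbors. The minimum-degree selection and lowest-label tie-breaking in this sketch are assumptions made for reproducibility; the abstract does not pin down the variant.

```python
def greedy_mis(adj):
    """Minimum-degree greedy heuristic for maximum independent set:
    repeatedly pick a lowest-degree vertex, add it to the set, then delete
    it and all of its neighbors from the graph."""
    adj = {v: set(ns) for v, ns in adj.items()}       # work on a copy
    mis = []
    while adj:
        v = min(adj, key=lambda u: (len(adj[u]), u))  # ties broken by label
        mis.append(v)
        removed = adj.pop(v) | {v}
        for u in list(adj):
            if u in removed:
                del adj[u]
            else:
                adj[u] -= removed
    return sorted(mis)

# 5-cycle: any maximum independent set has size 2
cycle5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
print(greedy_mis(cycle5))
```

Per the abstract, depth-$p=1$ iterative QAOA reproduces this sequence of selections exactly, while deeper circuits modify the selection rule in ways that no such simple classical update mimics.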
The ability to perform ab initio molecular dynamics simulations using potential energies calculated on quantum computers would allow virtually exact dynamics for chemical and biochemical systems, with substantial impacts on the fields of catalysis and biophysics. However, noisy hardware, the costs of computing gradients, and the number of qubits required to simulate large systems present major challenges to realizing the potential of dynamical simulations using quantum hardware. Here, we demonstrate that some of these issues can be mitigated by recent advances in machine learning. By combining transfer learning with techniques for building machine-learned potential energy surfaces, we propose a new path forward for molecular dynamics simulations on quantum hardware. We use transfer learning to reduce the number of energy evaluations that use quantum hardware by first training models on larger, less accurate classical datasets and then refining them on smaller, more accurate quantum datasets. We demonstrate this approach by training machine learning models to predict a molecule's potential energy using Behler-Parrinello neural networks. When successfully trained, the model enables energy gradient predictions necessary for dynamics simulations that cannot be readily obtained directly from quantum hardware. To reduce the quantum resources needed, the model is initially trained with data derived from low-cost techniques, such as Density Functional Theory, and subsequently refined with a smaller dataset obtained from the optimization of the Unitary Coupled Cluster ansatz. We show that this approach significantly reduces the size of the quantum training dataset while capturing the high accuracies needed for quantum chemistry simulations.
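The two-stage training loop can be sketched with a toy one-dimensional "potential energy surface": pretrain a model on many cheap, systematically biased evaluations (standing in for DFT), then fine-tune on a few accurate evaluations (standing in for UCC results from hardware) while regularizing toward the pretrained weights. The surfaces, feature map, and ridge-style fine-tuning below are illustrative assumptions, not the Behler-Parrinello networks used in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def feats(x, deg=7):
    # monomial features 1, x, ..., x^deg
    return np.vander(x, deg + 1, increasing=True)

# Hypothetical 1D surfaces: a cheap surface with a systematic offset
# (standing in for DFT) and an accurate one (standing in for UCC energies).
cheap = lambda x: np.sin(3 * x) + 0.2
accurate = lambda x: np.sin(3 * x)

# Stage 1: pretrain on many cheap evaluations (ordinary least squares).
x_pre = rng.uniform(-1, 1, 200)
w_pre, *_ = np.linalg.lstsq(feats(x_pre), cheap(x_pre), rcond=None)

# Stage 2: fine-tune on a few accurate evaluations, with a ridge penalty
# pulling the weights toward the pretrained solution (transfer learning).
x_ft = rng.uniform(-1, 1, 8)
A = feats(x_ft)
lam = 0.1
w_ft = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]),
                       A.T @ accurate(x_ft) + lam * w_pre)

x_test = np.linspace(-1, 1, 400)
mse = lambda w: np.mean((feats(x_test) @ w - accurate(x_test)) ** 2)
print(f"pretrained MSE {mse(w_pre):.4f} -> fine-tuned MSE {mse(w_ft):.4f}")
```

The point mirrors the abstract: a handful of accurate evaluations suffices to correct the systematic bias of the cheap surface, so the expensive (quantum) dataset can stay small. A differentiable model also supplies the gradients needed for dynamics, which the hardware does not readily provide.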
There have been multiple attempts to demonstrate that quantum annealing and, in particular, quantum annealing on quantum annealing machines, has the potential to outperform current classical optimization algorithms implemented on CMOS technologies. The benchmarking of these devices has been controversial. Initially, random spin-glass problems were used; however, these were quickly shown to be not well suited to detect any quantum speedup. Subsequently, benchmarking shifted to carefully crafted synthetic problems designed to highlight the quantum nature of the hardware while (often) ensuring that classical optimization techniques do not perform well on them. Even worse, to date a true sign of improved scaling with the number of problem variables remains elusive when compared to classical optimization techniques. Here, we analyze the readiness of quantum annealing machines for real-world application problems. These are typically not random and have an underlying structure that is hard to capture in synthetic benchmarks, thus posing unexpected challenges for optimization techniques, both classical and quantum alike. We present a comprehensive computational scaling analysis of fault diagnosis in digital circuits, considering architectures beyond D-Wave quantum annealers. We find that the instances generated from real data in multiplier circuits are harder than other representative random spin-glass benchmarks with a comparable number of variables. Although our results show that transverse-field quantum annealing is outperformed by state-of-the-art classical optimization algorithms, these benchmark instances are hard and small in the size of the input, therefore representing the first industrial application ideally suited for testing near-term quantum annealers and other quantum algorithmic strategies for optimization problems.
Machine learning has been presented as one of the key applications for near-term quantum technologies, given its high commercial value and wide range of applicability. In this work, we introduce the \textit{quantum-assisted Helmholtz machine:} a hybrid quantum-classical framework with the potential of tackling high-dimensional real-world machine learning datasets on continuous variables. Instead of using quantum computers only to assist deep learning, as previous approaches have suggested, we use deep learning to extract a low-dimensional binary representation of data, suitable for processing on relatively small quantum computers. Then, the quantum hardware and deep learning architecture work together to train an unsupervised generative model. We demonstrate this concept using 1644 quantum bits of a D-Wave 2000Q quantum device to model a sub-sampled version of the MNIST handwritten digit dataset with 16x16 continuous valued pixels. Although we illustrate this concept on a quantum annealer, adaptations to other quantum platforms, such as ion-trap technologies or superconducting gate-model architectures, could be explored within this flexible framework.
Diagrammatic representations of quantum algorithms and circuits offer novel approaches to their design and analysis. In this work, we describe extensions of the ZX-calculus especially suitable for parameterized quantum circuits, in particular for computing observable expectation values as functions of or for fixed parameters, which are important algorithmic quantities in a variety of applications ranging from combinatorial optimization to quantum chemistry. We provide several new ZX-diagram rewrite rules and generalizations for this setting. In particular, we give formal rules for dealing with linear combinations of ZX-diagrams, where the relative complex-valued scale factors of each diagram must be kept track of, in contrast to most previously studied single-diagram realizations where these coefficients can be effectively ignored. This allows us to directly import a number of useful relations from operator analysis to the ZX-calculus setting, including causal-cone and quantum gate commutation rules. We demonstrate that the diagrammatic approach offers useful insights into algorithm structure and performance by considering several ansatze from the literature including realizations of hardware-efficient ansatze and QAOA. We find that by employing a diagrammatic representation, calculations across different ansatze can become more intuitive and potentially easier to approach systematically than by alternative means. Finally, we outline how diagrammatic approaches may aid in the design and study of new and more effective quantum circuit ansatze.
Variational quantum eigensolvers (VQE) are touted as a near-term algorithm capable of impacting many applications. However, this potential has not yet been realized, with few claims of quantum advantage and high resource estimates, especially due to the need for optimization in the presence of noise. Finding algorithms and methods to improve convergence is important to accelerate the capabilities of near-term hardware for VQE or broader applications of hybrid methods in which optimization is required. Toward this goal, we use modern approaches developed in circuit simulations and stochastic classical optimization, which can be combined to form a surrogate optimization approach to quantum circuits. Using an approximate (classical CPU/GPU) state vector simulator as a surrogate model, we efficiently calculate an approximate Hessian, passed as an input for a quantum processing unit or exact circuit simulator. This method will lend itself well to parallelization across quantum processing units. We demonstrate the capabilities of such an approach with and without sampling noise and a proof-of-principle demonstration on a quantum processing unit utilizing 40 qubits.
Mainstream machine-learning techniques such as deep learning and probabilistic programming rely heavily on sampling from generally intractable probability distributions. There is increasing interest in the potential advantages of using quantum computing technologies as sampling engines to speed up these tasks or to make them more effective. However, some pressing challenges in state-of-the-art quantum annealers have to be overcome before we can assess their actual performance. The sparse connectivity, resulting from the local interaction between quantum bits in physical hardware implementations, is considered the most severe limitation to the quality of constructing powerful generative unsupervised machine-learning models. Here we use embedding techniques to add redundancy to data sets, allowing us to increase the modeling capacity of quantum annealers. We illustrate our findings by training hardware-embedded graphical models on a binarized data set of handwritten digits and two synthetic data sets in experiments with up to 940 quantum bits. Our model can be trained in quantum hardware without full knowledge of the effective parameters specifying the corresponding quantum Gibbs-like distribution; therefore, this approach avoids the need to infer the effective temperature at each iteration, speeding up learning; it also mitigates the effect of noise in the control parameters, making it robust to deviations from the reference Gibbs distribution. Our approach demonstrates the feasibility of using quantum annealers for implementing generative models, and it provides a suitable framework for benchmarking these quantum technologies on machine-learning-related tasks.
Quantum optimization, both for classical and quantum functions, is one of the most well-studied applications of quantum computing, but recent trends have relied on hybrid methods that push much of the fine-tuning off onto costly classical algorithms. Feedback-based quantum algorithms, such as FALQON, avoid these fine-tuning problems but at the cost of additional circuit depth and a lack of convergence guarantees. In this work, we take the local greedy information collected by Lyapunov feedback control and develop an analytic framework to use it to perturbatively update previous control layers, similar to the global optimal control achievable using Pontryagin optimal control. This perturbative methodology, which we call Feedback Optimally Controlled Quantum States (FOCQS), can be used to improve the results of feedback-based algorithms, like FALQON. Furthermore, this perturbative method can be used to push smooth annealing-like control protocols closer to the control optimum, even providing an iterative approach, albeit with diminishing returns. In numerical testing, we show improvements in convergence and required depth due to these methods over existing quantum feedback control methods.
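For context, the baseline feedback law that FOCQS improves upon can be sketched on a single qubit: each layer applies the cost then the driver Hamiltonian, and the next driver strength is set by the Lyapunov rule $\beta_{k+1} = -\langle i[H_d, H_p]\rangle$, which (for small steps) guarantees the cost expectation does not increase. The toy Hamiltonians, step size, and layer count below are illustrative choices.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0 + 0j])

def expm_pauli(P, a):
    # exp(-i a P) for any P with P @ P = I (true for Pauli operators)
    return np.cos(a) * np.eye(len(P)) - 1j * np.sin(a) * P

Hp, Hd = Z, X                          # toy cost and driver Hamiltonians
A = 1j * (Hd @ Hp - Hp @ Hd)           # feedback observable i[Hd, Hp]
dt, layers = 0.1, 100
psi = np.array([1.0, 1.0], dtype=complex) / np.sqrt(2)  # |+>, so <Hp> = 0
beta = 0.0
costs = []
for _ in range(layers):
    # one layer: cost evolution, then driver with the current feedback beta
    psi = expm_pauli(Hd, beta * dt) @ expm_pauli(Hp, dt) @ psi
    beta = -np.real(psi.conj() @ A @ psi)   # Lyapunov feedback law
    costs.append(np.real(psi.conj() @ Hp @ psi))
print(f"<Hp> after {layers} layers: {costs[-1]:.4f} (ground-state value: -1)")
```

Each $\beta_k$ here is chosen greedily from local information and past layers are never revisited; the perturbative updates of FOCQS are precisely a correction to that limitation.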
To run quantum algorithms on emerging gate-model quantum hardware, quantum circuits must be compiled to take into account constraints on the hardware. For near-term hardware, with only limited means to mitigate decoherence, it is critical to minimize the duration of the circuit. We investigate the application of temporal planners to the problem of compiling quantum circuits to newly emerging quantum hardware. While our approach is general, we focus on compiling to superconducting hardware architectures with nearest neighbor constraints. Our initial experiments focus on compiling Quantum Alternating Operator Ansatz (QAOA) circuits whose high number of commuting gates allow great flexibility in the order in which the gates can be applied. That freedom makes it more challenging to find optimal compilations but also means there is a greater potential win from more optimized compilation than for less flexible circuits. We map this quantum circuit compilation problem to a temporal planning problem and generate a test suite of compilation problems for QAOA circuits of various sizes to a realistic hardware architecture. We report compilation results from several state-of-the-art temporal planners on this test set. This early empirical evaluation demonstrates that temporal planning is a viable approach to quantum circuit compilation.
We present a general condition to obtain subspaces that decay uniformly in a system governed by the Lindblad master equation and use them to perform error-mitigated quantum computation. The expectation values of dynamics encoded in such subspaces are unbiased estimators of noise-free expectation values. In analogy to the decoherence-free subspaces, which are left invariant by the action of Lindblad operators, we show that the uniformly decaying subspaces are left invariant (up to orthogonal terms) by the action of the dissipative part of the Lindblad equation. We apply our theory to a system of qubits and qudits undergoing relaxation with varying decay rates and show that such subspaces can be used to eliminate bias up to first-order variations in the decay rates without requiring full knowledge of the noise. Since such a bias cannot be corrected through standard symmetry verification, our method can improve error mitigation in dual-rail qubits and, given partial knowledge of the noise, can perform better than probabilistic error cancellation.
Open quantum systems are a topic of intense theoretical research. The use of master equations to model a system's evolution subject to an interaction with an external environment is one of the most successful theoretical paradigms. General experimental tools to study different open system realizations have been limited, and so it is highly desirable to develop experimental tools which emulate diverse master equation dynamics and give a way to test open systems theories. In this paper we demonstrate a systematic method for engineering specific system-environment interactions and emulating master equations of a particular form using classical stochastic noise. We also demonstrate that non-Markovian noise can be used as a resource to extend the coherence of a quantum system and counteract the adversarial effects of Markovian environments.
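A simple instance of the noise-engineering idea above: averaging unitary trajectories whose detuning is driven by classical white noise reproduces the exponential coherence decay of a dephasing (Lindblad) master equation, $\langle\sigma_x\rangle(t) = e^{-t/T_2}$. The noise strength and trajectory count below are illustrative, not parameters from the paper's experiment.

```python
import math
import random

def dephasing_coherence(n_steps, sigma, n_traj, rng):
    # Each trajectory accumulates a random phase from white-noise detuning;
    # averaging cos(phi) over trajectories emulates the dephasing channel of
    # a Lindblad master equation acting on <sigma_x>.
    coh = [0.0] * n_steps
    for _ in range(n_traj):
        phi = 0.0
        for t in range(n_steps):
            phi += rng.gauss(0.0, sigma)   # white-noise phase increment
            coh[t] += math.cos(phi)
    return [c / n_traj for c in coh]

rng = random.Random(2)
coh = dephasing_coherence(n_steps=100, sigma=0.2, n_traj=2000, rng=rng)
# analytic prediction for Gaussian phase noise: <cos phi> = exp(-t sigma^2 / 2)
print("final coherence:", round(coh[-1], 3),
      "predicted:", round(math.exp(-100 * 0.2 ** 2 / 2), 3))
```

Correlated (e.g. Ornstein-Uhlenbeck) increments in place of white noise would produce the non-Markovian behavior the abstract discusses, where noise correlations can be exploited to extend coherence rather than merely degrade it.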