We study the limitations of black-box amplification in the quantum complexity class QMA. Amplification is known to boost any inverse-polynomial gap between completeness and soundness to exponentially small error, and a recent result (Jeffery and Witteveen, 2025) shows that completeness can in fact be amplified to be doubly exponentially close to 1. We prove that this is optimal for black-box procedures: we provide a quantum oracle relative to which no QMA verification procedure using polynomial resources can achieve completeness closer to 1 than doubly exponential, or soundness that is super-exponentially small. This is proven using techniques from complex approximation theory to make the oracle separation of (Aaronson, 2008) between QMA and QMA with perfect completeness quantitative.
Projected entangled pair states (PEPS) constitute a variational family of quantum states with area-law entanglement. PEPS are particularly relevant and successful for studying ground states of spatially local Hamiltonians. However, computing local expectation values in these states is known to be PostBQP-hard. Injective PEPS, where all constituent tensors fulfil an injectivity constraint, are generally believed to be better behaved, because they are unique ground states of spatially local Hamiltonians. In this work, we therefore examine how the computational hardness of contraction depends on the injectivity. We establish that below a constant positive injectivity threshold, evaluating local observables remains PostBQP-complete, while above a different constant nontrivial threshold there exists an efficient classical algorithm for the task, resolving an open question from (Anshu et al., STOC '24). We do this by proving that noisy postselected quantum computation can be made fault-tolerant.
We present an extremely simple polynomial-space exponential-time (1−ε)-approximation algorithm for MAX-k-SAT that is (slightly) faster than the previously known polynomial-space (1−ε)-approximation algorithms by Hirsch (Discrete Applied Mathematics, 2003) and Escoffier, Paschos and Tourniaire (Theoretical Computer Science, 2014). Our algorithm repeatedly samples an assignment uniformly at random until finding an assignment that satisfies a large enough fraction of clauses. Surprisingly, we can show the efficiency of this simpler approach by proving that in any instance of MAX-k-SAT (or more generally any instance of MAXCSP), an exponential number of assignments satisfy a fraction of clauses close to the optimal value.
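As a sketch of the sampling loop described above (a minimal illustration, not the paper's code; the function name, clause encoding, and the cap on attempts are assumptions made here), the target fraction would be set to (1−ε) times the optimal satisfiable fraction, which the paper shows is reached by an exponentially large set of assignments:

```python
import random

def sample_until_good(clauses, n_vars, target_fraction, max_tries=10**6):
    """Repeatedly draw a uniformly random assignment until one satisfies
    at least `target_fraction` of the clauses (or the attempt budget runs out).

    `clauses` is a list of clauses; each clause is a list of signed
    1-indexed literals, e.g. [1, -3, 4] means (x1 OR NOT x3 OR x4)."""
    best = None
    for _ in range(max_tries):
        assignment = [random.random() < 0.5 for _ in range(n_vars)]
        satisfied = sum(
            any(assignment[abs(lit) - 1] == (lit > 0) for lit in clause)
            for clause in clauses
        )
        if satisfied >= target_fraction * len(clauses):
            return assignment, satisfied
        if best is None or satisfied > best[1]:
            best = (assignment, satisfied)
    return best

# Toy usage: 3 clauses over 3 variables, aiming to satisfy all of them.
clauses = [[1, 2], [-1, 3], [-2, -3]]
print(sample_until_good(clauses, n_vars=3, target_fraction=1.0))
```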
The standard dynamical approach to quantum thermodynamics is based on Markovian master equations describing the thermalization of a system weakly coupled to a large environment, and on tools such as entropy production relations. Here we develop a new framework overcoming the limitations that the current dynamical and information theory approaches encounter when applied to this setting. More precisely, we introduce the notion of continuous thermomajorization, and employ it to obtain necessary and sufficient conditions for the existence of a Markovian thermal process transforming between given initial and final energy distributions of the system. These lead to a complete set of generalized entropy production inequalities including the standard one as a special case. Importantly, these conditions can be reduced to a finitely verifiable set of constraints governing non-equilibrium transformations under master equations. What is more, the framework is also constructive, i.e., it returns explicit protocols realizing any allowed transformation. These protocols use as building blocks elementary thermalizations, which we prove to be universal controls. Finally, we also present an algorithm constructing the full set of energy distributions achievable from a given initial state via Markovian thermal processes and provide a Mathematica implementation solving d=6 on a laptop computer in minutes.
The quantum Schur transform has become a foundational quantum algorithm, yet even after two decades since the seminal 2005 paper by Bacon, Chuang, and Harrow (BCH), some aspects of the transform remain insufficiently understood. Moreover, an alternative approach proposed by Krovi in 2018 was recently found to contain a crucial error. In this paper, we present a corrected version of Krovi's algorithm along with a detailed treatment of the high-dimensional version of the BCH Schur transform. This high-dimensional focus makes the two versions of the transform practical for regimes where the number of qudits n is smaller than the local dimension d, with Krovi's algorithm scaling as O(n^4) and BCH as O(min(n^5, nd^4)). Our work addresses a key gap in the literature, strengthening the algorithmic foundations of a wide range of results that rely on Schur--Weyl duality in quantum information theory and quantum computation.
Gaussian states are widely regarded as one of the most relevant classes of continuous-variable (CV) quantum states, as they naturally arise in physical systems and play a key role in quantum technologies. This motivates a fundamental question: given copies of an unknown CV state, how can we efficiently test whether it is Gaussian? We address this problem from the perspective of representation theory and quantum learning theory, characterizing the sample complexity of Gaussianity testing as a function of the number of modes. For pure states, we prove that just a constant number of copies is sufficient to decide whether the state is exactly Gaussian. We then extend this to the tolerant setting, showing that a polynomial number of copies suffices to distinguish states that are close to Gaussian from those that are far. In contrast, we establish that testing Gaussianity of general mixed states necessarily requires exponentially many copies, thereby identifying a fundamental limitation in testing CV systems. Our approach relies on rotation-invariant symmetries of Gaussian states together with the recently introduced toolbox of CV trace-distance bounds.
Quantum data hiding is the existence of pairs of bipartite quantum states that are (almost) perfectly distinguishable with global measurements, yet close to indistinguishable when only measurements implementable with local operations and classical communication are allowed. Remarkably, data hiding states can also be chosen to be separable, meaning that secrets hidden using no entanglement can be almost irretrievable without entanglement -- this is sometimes called 'nonlocality without entanglement'. Essentially two families of data hiding states were known prior to this work: Werner states and random states. Hiding Werner states can be made either separable or globally perfectly orthogonal, but not both -- separability comes at the price of orthogonality being only approximate. Random states can hide many more bits, but they are typically entangled and again only approximately orthogonal. In this paper, we present an explicit construction of novel group-symmetric data hiding states that are simultaneously separable, perfectly orthogonal, and even invariant under partial transpose, thus exhibiting the phenomenon of nonlocality without entanglement to the utmost extent. Our analysis leverages novel applications of numerical analysis tools to study convex optimisation problems in quantum information theory, potentially offering technical insights that extend beyond this work.
Quantum phase estimation combined with Hamiltonian simulation is the most
promising algorithmic framework for computing ground state energies on quantum
computers. Its main computational overhead derives from the Hamiltonian
simulation subroutine. In this paper we use randomization to speed up product
formulas, one of the standard approaches to Hamiltonian simulation. We propose
new partially randomized Hamiltonian simulation methods in which some terms are
kept deterministically and others are randomly sampled. We perform a detailed
resource estimate for single-ancilla phase estimation using partially
randomized product formulas for benchmark systems in quantum chemistry and
obtain orders-of-magnitude improvements compared to other simulations based on
product formulas. When applied to the hydrogen chain, we have numerical
evidence that our methods exhibit asymptotic scaling with the system size that
is competitive with the best known qubitization approaches.
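As an illustration of the partially randomized idea (a minimal sketch under assumed conventions, not the paper's exact scheme: the magnitude-threshold split, the function name, and the angle conventions are all choices made here for concreteness), large-coefficient terms are Trotterized deterministically in every step while the remaining small terms are sampled qDRIFT-style:

```python
import numpy as np

def partially_randomized_sequence(terms, t, n_steps, threshold, rng=None):
    """Sketch of a partially randomized product formula: terms with
    |coefficient| >= threshold are kept deterministically in each step,
    while the remaining small terms are sampled with probability
    proportional to |coefficient| (qDRIFT-style).

    `terms` is a list of (coefficient, pauli_label) pairs; the output is
    a list of (pauli_label, angle) rotations to apply in order."""
    rng = rng or np.random.default_rng()
    large = [(c, p) for c, p in terms if abs(c) >= threshold]
    small = [(c, p) for c, p in terms if abs(c) < threshold]
    lam = sum(abs(c) for c, _ in small)          # total weight of the sampled part
    probs = [abs(c) / lam for c, _ in small] if lam > 0 else []

    sequence = []
    dt = t / n_steps
    for _ in range(n_steps):
        # deterministic slice: every large term contributes exp(-i c P dt)
        for c, p in large:
            sequence.append((p, c * dt))
        # randomized slice: one sampled small term with fixed angle lam * dt
        if lam > 0:
            idx = rng.choice(len(small), p=probs)
            c, p = small[idx]
            sequence.append((p, np.sign(c) * lam * dt))
    return sequence

# Toy usage with hypothetical Pauli labels.
terms = [(0.9, "ZZ"), (0.05, "XI"), (0.03, "IY")]
print(partially_randomized_sequence(terms, t=1.0, n_steps=4, threshold=0.1))
```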
QuantumBoost, a new quantum algorithm, significantly improves the runtime of boosting, achieving an overall complexity of Õ(W/(√ϵ γ^4)) by leveraging quantum computation and a novel lazy projection strategy. This method removes the explicit dependence on the dataset size `m` and improves the dependence on the error parameter to Õ(1/√ϵ) compared to prior classical and quantum approaches.
Recent advances in quantum computers are demonstrating the ability to solve
problems at a scale beyond brute force classical simulation. As such, a
widespread interest in quantum algorithms has developed in many areas, with
optimization being one of the most pronounced domains. Across computer science
and physics, there are a number of different approaches for major classes of
optimization problems, such as combinatorial optimization, convex optimization,
non-convex optimization, and stochastic extensions. This work draws on multiple
approaches to study quantum optimization. Provably exact versus heuristic
settings are first explained using computational complexity theory -
highlighting where quantum advantage is possible in each context. Then, the
core building blocks for quantum optimization algorithms are outlined to
subsequently define prominent problem classes and identify key open questions
that, if answered, will advance the field. The effects of scaling relevant
problems on noisy quantum devices are also outlined in detail, alongside
meaningful benchmarking problems. We underscore the importance of benchmarking
by proposing clear metrics to conduct appropriate comparisons with classical
optimization techniques. Lastly, we highlight two domains - finance and
sustainability - as rich sources of optimization problems that could be used to
benchmark, and eventually validate, the potential real-world impact of quantum
optimization.
A fundamental task in quantum information is to approximate a pure quantum state in terms of sparse states or, for a bipartite system, states of bounded Schmidt rank. The optimal deterministic approximation in each case is straightforward, and maximizes the fidelity: keep the largest entries or singular values. On the other hand, random mixtures of sparse states can achieve quadratically improved trace distances, and yield nontrivial bounds on other distance measures like the robustness. In this work, we give efficient algorithms for finding mixtures of sparse states that optimally approximate a given pure state in either trace distance or robustness. These algorithms also yield descriptions of efficiently samplable ensembles of sparse, or less-entangled, states that correspond to these optimal mixed approximations. This can be used for the truncation step of algorithms for matrix product states, improving their accuracy while using no extra memory, and we demonstrate this improvement numerically. Our proofs use basic facts about convex optimization and zero-sum games, as well as rigorous guarantees for computing maximum-entropy distributions.
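For concreteness, a minimal sketch of the deterministic baseline mentioned above, keeping the s largest-magnitude amplitudes (illustrative names; the paper's optimal mixed approximations in trace distance and robustness are more involved and are not shown here):

```python
import numpy as np

def best_sparse_approximation(psi, s):
    """Deterministic truncation: keep the s largest-magnitude amplitudes
    of a pure state `psi` and renormalize. This maximizes fidelity among
    s-sparse pure states."""
    psi = np.asarray(psi, dtype=complex)
    keep = np.argsort(np.abs(psi))[-s:]      # indices of the s largest entries
    truncated = np.zeros_like(psi)
    truncated[keep] = psi[keep]
    return truncated / np.linalg.norm(truncated)

# Toy usage: approximate a 4-amplitude state with a 2-sparse state.
psi = np.array([0.1, 0.7, 0.2, 0.68])
psi = psi / np.linalg.norm(psi)
phi = best_sparse_approximation(psi, s=2)
print("fidelity:", abs(np.vdot(psi, phi)) ** 2)
```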
This paper introduces the foliage partition, an easy-to-compute LC-invariant for graph states, of computational complexity O(n^3) in the number of qubits. Inspired by the foliage of a graph, our invariant has a natural graphical representation in terms of leaves, axils, and twins. It captures both the connection structure of a graph and the 2-body marginal properties of the associated graph state. We relate the foliage partition to the size of LC-orbits and use it to bound the number of LC-automorphisms of graphs. We also show the invariance of the foliage partition when generalized to weighted graphs and qudit graph states.
This research introduces optimized QUBO formulation methods, named IQP and MS, which drastically reduce the number of slack variables needed for constrained optimization problems. When applied to an NP-hard financial problem, these methods reduced the number of variables by up to 90% and improved the success rate of finding optimal solutions on D-Wave quantum annealers by factors of 7 to 184.
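For context, this is the textbook slack-variable encoding whose variable overhead such formulations aim to reduce (a minimal sketch; this is the standard baseline, not the IQP or MS method, and the function name and conventions are illustrative):

```python
import itertools
import math

def inequality_to_qubo_penalty(coeffs, bound, penalty):
    """Standard baseline encoding of the constraint sum_i coeffs[i]*x_i <= bound
    as a QUBO penalty penalty * (sum_i coeffs[i] x_i + s - bound)^2, where
    s = sum_j 2^j s_j is built from binary slack bits. The constant offset
    penalty * bound^2 is dropped. Returns (Q, num_slack) with Q a dict over
    variable-index pairs; original variables are 0..n-1, slack bits follow."""
    n = len(coeffs)
    num_slack = max(1, math.ceil(math.log2(bound + 1)))
    # linear coefficients of e = sum_i c_i x_i + sum_j 2^j s_j - bound
    lin = list(coeffs) + [2 ** j for j in range(num_slack)]
    Q = {}
    # expand penalty * (sum_k lin_k v_k - bound)^2 using v_k^2 = v_k for binary vars
    for k, l in itertools.product(range(n + num_slack), repeat=2):
        key = (min(k, l), max(k, l))
        Q[key] = Q.get(key, 0) + penalty * lin[k] * lin[l]
    for k in range(n + num_slack):
        Q[(k, k)] = Q.get((k, k), 0) - 2 * penalty * bound * lin[k]
    return Q, num_slack

# Toy usage: encode x0 + 2*x1 + 3*x2 <= 4 with penalty weight 10.
Q, num_slack = inequality_to_qubo_penalty([1, 2, 3], bound=4, penalty=10)
print(num_slack, "slack bits;", len(Q), "QUBO entries")
```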
We study the problem of learning Hamiltonians H that are s-sparse in the Pauli basis, given access to their time evolution. Although Hamiltonian learning has been extensively investigated, two issues recur in much of the existing literature: the absence of matching lower bounds and the use of mathematically convenient but physically opaque error measures.
We address both challenges by introducing two physically motivated distances between Hamiltonians and designing a nearly optimal algorithm with respect to one of these metrics. The first, time-constrained distance, quantifies distinguishability through dynamical evolution up to a bounded time. The second, temperature-constrained distance, captures distinguishability through thermal states at bounded inverse temperatures.
We show that s-sparse Hamiltonians with bounded operator norm can be learned in both distances with O(s log(1/ϵ)) experiments and O(s^2/ϵ) evolution time. For the time-constrained distance, we further establish lower bounds of Ω((s/n) log(1/ϵ) + s) experiments and Ω(s/ϵ) evolution time, demonstrating near-optimality in the number of experiments.
As an intermediate result, we obtain an algorithm that learns every Pauli coefficient of s-sparse Hamiltonians up to error ϵ in O(s log(1/ϵ)) experiments and O(s/ϵ) evolution time, improving upon several recent results.
The source of this improvement is a new isolation technique, inspired by the Valiant-Vazirani theorem (STOC'85), which shows that NP is as easy as detecting unique solutions. This isolation technique allows us to query the time evolution of a single Pauli coefficient of a sparse Hamiltonian--even when the Pauli support of the Hamiltonian is unknown--ultimately enabling us to recover the Pauli support itself.
We discuss a construction of quantum many-body scars in the context of holography. We consider two-dimensional conformal field theories and use their dynamical symmetries, naturally realized through the Virasoro algebra, to construct scarred states. By studying their Loschmidt amplitude, we evaluate the states' periodic properties. A geometrical interpretation allows us to compute the expectation value of the stress tensor and entanglement entropy of these scarred states. We show that their holographic dual is related by a diffeomorphism to empty AdS, even for energies above the black hole threshold. We also demonstrate that expectation values in the scarred states are generally non-thermal and that their entanglement entropy grows with the energy as log(E) in contrast to E for the typical (bulk) states. Furthermore, we identify fixed points on the CFT plane associated with divergent or vanishing entanglement entropy in the limit where the scarred states have infinite energy.
An inverse-free 4-query algorithm identifies n-qubit Clifford unitaries, offers tolerance to error, and outperforms prior methods by requiring fewer queries than stabilizer testing. This work also provides efficient auxiliary-free single-copy testers with established lower bounds, alongside settling a conjecture on the non-commutative Gowers Q₃-norm.
This paper surveys quantum learning theory: the theoretical aspects of machine learning using quantum computers. We describe the main results known for three models of learning: exact learning from membership queries, and Probably Approximately Correct (PAC) and agnostic learning from classical or quantum examples.
In order to achieve fault-tolerant quantum computation, we need to repeat the
following sequence of four steps: First, perform one- or two-qubit quantum gates (in
parallel if possible). Second, do a syndrome measurement on a subset of the
qubits. Third, perform a fast classical computation to establish which errors
have occurred (if any). Fourth, depending on the errors, we apply a correction
step. Then the procedure repeats with the next sequence of gates. In order for
these four steps to succeed, we need the error rate of the gates to be below a
certain threshold. Unfortunately, the error rates of current quantum hardware
are still too high. On the other hand, current quantum hardware platforms are
designed with these four steps in mind. In this work we make use of this
four-step scheme not to carry out fault-tolerant computations, but to enhance
short, constant-depth quantum circuits that perform single-qubit gates and
nearest-neighbor two-qubit gates. To explore how this can be useful, we study a
computational model which we call Local Alternating Quantum Classical
Computations (LAQCC). In this model, qubits are placed in a grid allowing
nearest neighbor interactions; the quantum circuits are of constant depth with
intermediate measurements; a classical controller can perform log-depth
computations on these intermediate measurement outcomes to control future
quantum operations. This model fits naturally between quantum algorithms in the
NISQ era and full-fledged fault-tolerant quantum computation. We show that
LAQCC circuits can create long-ranged interactions, which constant-depth
quantum circuits cannot achieve, and use them to construct a range of useful
multi-qubit gates. With these gates, we create three new state preparation
protocols for a uniform superposition over an arbitrary number of states,
W-states, Dicke states, and many-body scar states.
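To make the measurement-and-feedforward idea concrete, here is a minimal numpy sketch (not from the paper) of the textbook constant-depth GHZ preparation: ancillas record neighbour parities, and mid-circuit measurements plus prefix-parity corrections, computable by a log-depth classical controller, create the long-range entanglement that no constant-depth unitary circuit alone can produce:

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
X = np.array([[0, 1], [1, 0]])

def apply_1q(state, gate, q, n):
    """Apply a single-qubit gate to qubit q of an n-qubit statevector."""
    psi = state.reshape([2] * n)
    psi = np.moveaxis(np.tensordot(gate, psi, axes=([1], [q])), 0, q)
    return psi.reshape(-1)

def apply_cnot(state, control, target, n):
    """CNOT: swap the target=0/1 amplitudes on the control=1 slice."""
    psi = state.reshape([2] * n).copy()
    s0 = [slice(None)] * n; s0[control], s0[target] = 1, 0
    s1 = [slice(None)] * n; s1[control], s1[target] = 1, 1
    psi[tuple(s0)], psi[tuple(s1)] = psi[tuple(s1)].copy(), psi[tuple(s0)].copy()
    return psi.reshape(-1)

def measure_z(state, q, n, rng):
    """Projective Z measurement of qubit q; returns (collapsed state, outcome)."""
    psi = state.reshape([2] * n).copy()
    s0 = [slice(None)] * n; s0[q] = 0
    p0 = float(np.sum(np.abs(psi[tuple(s0)]) ** 2))
    outcome = 0 if rng.random() < p0 else 1
    kill = [slice(None)] * n; kill[q] = 1 - outcome
    psi[tuple(kill)] = 0
    psi /= np.linalg.norm(psi)
    return psi.reshape(-1), outcome

# Constant-depth GHZ preparation on 3 data qubits with 2 ancillas.
rng = np.random.default_rng(7)
data, anc, n = [0, 1, 2], [3, 4], 5
state = np.zeros(2 ** n, dtype=complex)
state[0] = 1.0                                   # |00000>
for d in data:                                   # data qubits -> |+>
    state = apply_1q(state, H, d, n)
for i, a in enumerate(anc):                      # ancilla a_i records parity of neighbours
    state = apply_cnot(state, data[i], a, n)
    state = apply_cnot(state, data[i + 1], a, n)
outcomes = []
for a in anc:                                    # mid-circuit Z measurements
    state, m = measure_z(state, a, n, rng)
    outcomes.append(m)
prefix = 0
for j in range(1, len(data)):                    # prefix parities drive feedforward X corrections
    prefix ^= outcomes[j - 1]
    if prefix:
        state = apply_1q(state, X, data[j], n)
for idx in np.flatnonzero(np.abs(state) > 1e-9): # surviving basis states: GHZ on the data qubits
    print(format(int(idx), "05b"), state[idx])
```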
We solve the generalised quantum Stein's lemma, proving that the Stein exponent associated with entanglement testing, namely, the quantum hypothesis testing task of distinguishing between n copies of an entangled state ρAB and a generic separable state σAn:Bn, equals the regularised relative entropy of entanglement. Not only does this determine the ultimate performance of entanglement testing, but it also establishes the reversibility of all quantum resource theories under asymptotically resource non-generating operations, with the regularised relative entropy of resource governing the asymptotic transformation rate between any two quantum states. As a by-product, we prove that the same Stein exponent can also be achieved when the null hypothesis is only approximately i.i.d., in the sense that it can be modelled by an 'almost power state'. To solve the problem we introduce two techniques. The first is a procedure that we call 'blurring', which, informally, transforms a permutationally symmetric state by making it more evenly spread across nearby type classes. Blurring alone suffices to prove the generalised Stein's lemma in the fully classical case, but not in the quantum case. Our second technical innovation, therefore, is to perform a second quantisation step to lift the problem to an infinite-dimensional bosonic quantum system; we then solve it there by using techniques from continuous-variable quantum information. Rather remarkably, the second-quantised action of the blurring map corresponds to a pure loss channel. A careful examination of this second quantisation step is the core of our quantum solution.