Microsoft Quantum
Microsoft Research developed Vivace, a machine learning force field (MLFF) that accurately predicts macroscopic polymer densities (0.04 g/cm³ MAE) and glass transition temperatures (43 K MAE) from first principles, outperforming traditional classical force fields. The research also introduces new polymer-specific datasets and benchmarks for validating MLFFs against experimental results.
Microsoft Research AI for Science developed Skala, a deep learning exchange-correlation functional that achieves chemical accuracy on molecular atomization energies, outperforming traditional hybrid DFT functionals while maintaining the computational efficiency of semi-local DFT. This was enabled by an unprecedentedly large and diverse high-accuracy training dataset and a novel architecture for learning non-local interactions.
We describe a concrete device roadmap towards a fault-tolerant quantum computing architecture based on noise-resilient, topologically protected Majorana-based qubits. Our roadmap encompasses four generations of devices: a single-qubit device that enables a measurement-based qubit benchmarking protocol; a two-qubit device that uses measurement-based braiding to perform single-qubit Clifford operations; an eight-qubit device that can be used to show an improvement of a two-qubit operation when performed on logical qubits rather than directly on physical qubits; and a topological qubit array supporting lattice surgery demonstrations on two logical qubits. Devices that enable this path require a superconductor-semiconductor heterostructure that supports a topological phase, quantum dots and coupling between those quantum dots that can create the appropriate loops for interferometric measurements, and a microwave readout system that can perform fast, low-error single-shot measurements. We describe the key design components of these qubit devices, along with the associated protocols for demonstrations of single-qubit benchmarking, Clifford gate execution, quantum error detection, and quantum error correction, which differ greatly from those in more conventional qubits. Finally, we comment on implications and advantages of this architecture for utility-scale quantum computation.
Researchers at Microsoft Quantum propose a family of 4D geometric codes that dramatically reduce physical qubit requirements and enable single-shot error correction for fault-tolerant quantum computation. The work identifies a [[96,6,8]] 4D Hadamard lattice code, which encodes 6 logical qubits using 96 physical qubits at distance 8, achieving a logical error rate of approximately $4\times 10^{-7}$ at a physical error rate of $10^{-3}$.
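For readers unfamiliar with the $[[n,k,d]]$ notation, the reported parameters unpack as follows (standard code arithmetic, not an additional result from the paper):

$$[[96,6,8]]: \quad \text{encoding rate } \frac{k}{n} = \frac{6}{96} = \frac{1}{16}, \qquad \text{corrects } \left\lfloor \tfrac{d-1}{2} \right\rfloor = 3 \text{ arbitrary single-qubit errors.}$$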
The paper demonstrates how Resource Estimation (RE) enables the development and evaluation of quantum computing applications for real-world problems by predicting fault-tolerant hardware requirements. It provides quantitative insights into qubit counts and runtime for complex quantum chemistry tasks, facilitating design optimization and informing hardware roadmaps.
We present a quantum error correcting code with dynamically generated logical qubits. When viewed as a subsystem code, the code has no logical qubits. Nevertheless, our measurement patterns generate logical qubits, allowing the code to act as a fault-tolerant quantum memory. Our particular code gives a model very similar to the two-dimensional toric code, but each measurement is a two-qubit Pauli measurement.
Advances in deep learning have greatly improved structure prediction of molecules. However, many macroscopic observations that are important for real-world applications are not functions of a single molecular structure, but rather determined from the equilibrium distribution of structures. Traditional methods for obtaining these distributions, such as molecular dynamics simulation, are computationally expensive and often intractable. In this paper, we introduce a novel deep learning framework, called Distributional Graphormer (DiG), in an attempt to predict the equilibrium distribution of molecular systems. Inspired by the annealing process in thermodynamics, DiG employs deep neural networks to transform a simple distribution towards the equilibrium distribution, conditioned on a descriptor of a molecular system, such as a chemical graph or a protein sequence. This framework enables efficient generation of diverse conformations and provides estimations of state densities. We demonstrate the performance of DiG on several molecular tasks, including protein conformation sampling, ligand structure sampling, catalyst-adsorbate sampling, and property-guided structure generation. DiG presents a significant advancement in methodology for statistically understanding molecular systems, opening up new research opportunities in molecular science.
Quantum processors based on neutral atoms trapped in arrays of optical tweezers have appealing properties, including relatively easy qubit number scaling and the ability to engineer arbitrary gate connectivity with atom movement. However, these platforms are inherently prone to atom loss, and the ability to replace lost atoms during a quantum computation is an important but previously elusive capability. Here, we demonstrate the ability to measure and re-initialize, and if necessary replace, a subset of atoms while maintaining coherence in other atoms. This allows us to perform logical circuits that include single and two-qubit gates as well as repeated midcircuit measurement while compensating for atom loss. We highlight this capability by performing up to 41 rounds of syndrome extraction in a repetition code, and combine midcircuit measurement and atom replacement with real-time conditional branching to demonstrate heralded state preparation of a logically encoded Bell state. Finally, we demonstrate the ability to replenish atoms in a tweezer array from an atomic beam while maintaining coherence of existing atoms -- a key step towards execution of logical computations that last longer than the lifetime of an atom in the system.
We show that Gottesman's (1998) semantics for Clifford circuits based on the Heisenberg representation gives rise to a lightweight Hoare-like logic for efficiently characterizing a common subset of quantum programs. Our applications include (i) certifying whether auxiliary qubits can be safely disposed of, (ii) determining if a system is separable across a given bipartition, (iii) checking the transversality of a gate with respect to a given stabilizer code, and (iv) computing post-measurement states for computational basis measurements. Further, this logic is extended to accommodate universal quantum computing by deriving Hoare triples for the $T$ gate, multiply-controlled unitaries such as the Toffoli gate, and some gate injection circuits that use associated magic states. A number of interesting results emerge from this logic, including a lower bound on the number of $T$ gates necessary to perform a multiply-controlled $Z$ gate.
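The Heisenberg-representation semantics at the heart of this logic tracks how Clifford gates conjugate Pauli operators. Below is a minimal sketch of that bookkeeping in the standard symplectic representation (our illustration, not the paper's logic or tooling):

```python
# Track an n-qubit Pauli (sign, x-bits, z-bits) through Clifford gates by
# conjugation, following the standard symplectic update rules.
import numpy as np

class Pauli:
    def __init__(self, n):
        self.x = np.zeros(n, dtype=bool)  # X components
        self.z = np.zeros(n, dtype=bool)  # Z components
        self.sign = 1

    def h(self, q):            # H: X <-> Z, Y -> -Y
        if self.x[q] and self.z[q]:
            self.sign = -self.sign
        self.x[q], self.z[q] = self.z[q], self.x[q]

    def s(self, q):            # S: X -> Y, Y -> -X, Z -> Z
        if self.x[q] and self.z[q]:
            self.sign = -self.sign
        self.z[q] ^= self.x[q]

    def cnot(self, c, t):      # CNOT: X_c -> X_c X_t, Z_t -> Z_c Z_t
        if self.x[c] and self.z[t] and not (self.x[t] ^ self.z[c]):
            self.sign = -self.sign
        self.x[t] ^= self.x[c]
        self.z[c] ^= self.z[t]

# Example: |00> is stabilized by Z_0; pushing Z_0 through H(0), CNOT(0,1)
# yields X_0 X_1, a stabilizer of the Bell state this circuit prepares.
p = Pauli(2)
p.z[0] = True          # start with Z on qubit 0
p.h(0); p.cnot(0, 1)
print(p.sign, p.x, p.z)  # +1, X on both qubits, no Z components
```

Propagating stabilizers this way is what makes the logic efficient: each Clifford gate updates the tracked Pauli in constant time, with no state-vector simulation.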
We provide an efficient algorithm to compile quantum circuits for fault-tolerant execution. We target surface codes, which form a 2D grid of logical qubits with nearest-neighbor logical operations. Embedding an input circuit's qubits in surface codes can result in long-range two-qubit operations across the grid. We show how to prepare many long-range Bell pairs on qubits connected by edge-disjoint paths of ancillas in constant depth that can be used to perform these long-range operations. This forms one core part of our Edge-Disjoint Paths Compilation (EDPC) algorithm, by easily performing many parallel long-range Clifford operations in constant depth. It also allows us to establish a connection between surface code compilation and several well-studied edge-disjoint paths problems. Similar techniques allow us to perform non-Clifford single-qubit rotations far from magic state distillation factories. In this case, we can easily find the maximum set of paths by a max-flow reduction, which forms the other major part of EDPC. EDPC has the best asymptotic worst-case performance guarantees on the circuit depth for compiling parallel operations when compared to related compilation methods based on swaps and network coding. EDPC also shows a quadratic depth improvement over sequential Pauli-based compilation for parallel rotations requiring magic resources. We implement EDPC and find significantly improved performance for circuits built from parallel CNOTs, and for circuits which implement the multi-controlled $X$ gate.
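The max-flow step for routing rotations to magic state factories can be illustrated with off-the-shelf graph tooling. A minimal sketch under assumed names and geometry (the paper's EDPC is more involved):

```python
# Find a maximum set of edge-disjoint ancilla paths from qubits that need a
# non-Clifford rotation to magic state factory ports on a 2D grid, via the
# textbook super-source/super-sink max-flow reduction (Menger's theorem).
import networkx as nx

def route_to_factories(grid_dims, rotation_qubits, factory_ports):
    G = nx.grid_2d_graph(*grid_dims)      # lattice of surface-code patches
    G.add_edges_from(("SRC", q) for q in rotation_qubits)
    G.add_edges_from((p, "SNK") for p in factory_ports)
    # Edge-disjoint SRC-SNK paths = max flow with unit edge capacities.
    return [path[1:-1] for path in nx.edge_disjoint_paths(G, "SRC", "SNK")]

routes = route_to_factories(
    (6, 6),
    rotation_qubits=[(1, 1), (2, 4), (4, 2)],
    factory_ports=[(5, 0), (5, 5)],
)
for r in routes:
    print(r)   # each route is an edge-disjoint chain of ancilla patches
```

Because each rotation qubit and each factory port attaches to the super-source or super-sink by a single unit-capacity edge, the resulting flow pairs at most one route per qubit and per port, which is exactly the single-commodity setting the abstract describes.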
We optimize fault-tolerant quantum error correction to reduce the number of syndrome bit measurements. Speeding up error correction will also speed up an encoded quantum computation, and should reduce its effective error rate. We give both code-specific and general methods, using a variety of techniques and in a variety of settings. We design new quantum error-correcting codes specifically for efficient error correction, e.g., allowing single-shot error correction. For codes with multiple logical qubits, we give methods for combining error correction with partial logical measurements. There are tradeoffs in choosing a code and error-correction technique. While to date most work has concentrated on optimizing the syndrome-extraction procedure, we show that there are also substantial benefits to optimizing how the measured syndromes are chosen and used. As an example, we design single-shot measurement sequences for fault-tolerant quantum error correction with the 16-qubit extended Hamming code. Our scheme uses 10 syndrome bit measurements, compared to 40 measurements with the Shor scheme. We design single-shot logical measurements as well: any logical Z measurement can be made together with fault-tolerant error correction using only 11 measurements. For comparison, using the Shor scheme a basic implementation of such a non-destructive logical measurement uses 63 measurements. We also offer ten open problems, the solutions of which could lead to substantial improvements of fault-tolerant error correction.
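Reading the 16-qubit code as the $[[16,6,4]]$ CSS code built from the classical $[16,11,4]$ extended Hamming code (our identification, not spelled out above), the key self-orthogonality property is easy to verify:

```python
# Build the parity-check matrix of the [16,11,4] extended Hamming code and
# verify the CSS condition H @ H^T = 0 (mod 2), i.e. the dual code is
# contained in the code, so CSS(C, C) yields a [[16, 6, 4]] quantum code.
import numpy as np
from itertools import product

points = list(product([0, 1], repeat=4))              # the 16 coordinates
H = np.array([[1] * 16]                               # overall parity check
             + [[pt[i] for pt in points] for i in range(4)])  # address bits

assert not ((H @ H.T) % 2).any()                      # dual ⊆ code
n, r = 16, H.shape[0]                                 # 5 independent checks
print("logical qubits:", n - 2 * r)                   # 16 - 2*5 = 6
```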
We introduce a simple construction of boundary conditions for the honeycomb code that uses only pairwise checks and allows parallelogram geometries at the cost of modifying the bulk measurement sequence. We discuss small instances of the code.
We consider process tomography for unitary quantum channels. Given access to an unknown unitary channel acting on a $d$-dimensional qudit, we aim to output a classical description of a unitary that is $\varepsilon$-close to the unknown unitary in diamond norm. We design an algorithm achieving error $\varepsilon$ using $O(d^2/\varepsilon)$ applications of the unknown channel and only one qudit. This improves over prior results, which use $O(d^3/\varepsilon^2)$ [via standard process tomography] or $O(d^{2.5}/\varepsilon)$ [Yang, Renner, and Chiribella, PRL 2020] applications. To show this result, we introduce a simple technique to "bootstrap" an algorithm that can produce constant-error estimates to one that can produce $\varepsilon$-error estimates with Heisenberg scaling. Finally, we prove a complementary lower bound showing that estimation requires $\Omega(d^2/\varepsilon)$ applications, even with access to the inverse or controlled versions of the unknown unitary. This shows that our algorithm has both optimal query complexity and optimal space complexity.
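To see why bootstrapping a constant-error estimator preserves the Heisenberg-scaled total cost, consider a schematic accounting (our illustration, not the paper's exact analysis): suppose refinement round $k$ brings the error down to $\varepsilon_k = 2^{-k}$ at a cost of $C\,d^2/\varepsilon_k$ channel applications. Summing over the $K = \lceil \log_2(1/\varepsilon) \rceil$ rounds needed to reach error $\varepsilon$,

$$\sum_{k=0}^{K} \frac{C\,d^2}{\varepsilon_k} \;=\; C\,d^2 \sum_{k=0}^{K} 2^{k} \;\le\; 2\,C\,d^2\,2^{K} \;\le\; \frac{4\,C\,d^2}{\varepsilon} \;=\; O\!\left(\frac{d^2}{\varepsilon}\right),$$

so the final round dominates and the geometric ramp-up is free up to a constant factor.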
We present measurements and simulations of semiconductor-superconductor heterostructure devices that are consistent with the observation of topological superconductivity and Majorana zero modes. The devices are fabricated from high-mobility two-dimensional electron gases in which quasi-one-dimensional wires are defined by electrostatic gates. These devices enable measurements of local and non-local transport properties and have been optimized via extensive simulations to ensure robustness against non-uniformity and disorder. Our main result is that several devices, fabricated according to the design's engineering specifications, have passed the topological gap protocol defined in Pikulin et al. [arXiv:2103.12217]. This protocol is a stringent test composed of a sequence of three-terminal local and non-local transport measurements performed while varying the magnetic field, semiconductor electron density, and junction transparencies. Passing the protocol indicates a high probability of detection of a topological phase hosting Majorana zero modes as determined by large-scale disorder simulations. Our experimental results are consistent with a quantum phase transition into a topological superconducting phase that extends over several hundred millitesla in magnetic field and several millivolts in gate voltage, corresponding to approximately one hundred micro-electron-volts in Zeeman energy and chemical potential in the semiconducting wire. These regions feature a closing and re-opening of the bulk gap, with simultaneous zero-bias conductance peaks at both ends of the devices that withstand changes in the junction transparencies. The extracted maximum topological gaps in our devices are 20–60 μeV. This demonstration is a prerequisite for experiments involving fusion and braiding of Majorana zero modes.
We propose a new model of quantum computation comprised of low-weight measurement sequences that simultaneously encode logical information, enable error correction, and apply logical gates. These measurement sequences constitute a new class of quantum error-correcting codes generalizing Floquet codes, which we call dynamic automorphism (DA) codes. We construct an explicit example, the DA color code, which is assembled from short measurement sequences that can realize all 72 automorphisms of the 2D color code. On a stack of $N$ triangular patches, the DA color code encodes $N$ logical qubits and can implement the full logical Clifford group by a sequence of two- and, more rarely, three-qubit Pauli measurements. We also make the first step towards universal quantum computation with DA codes by introducing a 3D DA color code and showing that a non-Clifford logical gate can be realized by adaptive two-qubit measurements.
The typical model for measurement noise in quantum error correction is to randomly flip the binary measurement outcome. In experiments, measurements yield much richer information - e.g., continuous current values, discrete photon counts - which is then mapped into binary outcomes by discarding some of this information. In this work, we consider methods to incorporate all of this richer information, typically called soft information, into the decoding of quantum error correction codes, and in particular the surface code. We describe how to modify both the Minimum Weight Perfect Matching and Union-Find decoders to leverage soft information, and demonstrate these soft decoders outperform the standard (hard) decoders that can only access the binary measurement outcomes. Moreover, we observe that the soft decoder achieves a threshold 25% higher than any hard decoder for phenomenological noise with Gaussian soft measurement outcomes. We also introduce a soft measurement error model with amplitude damping, in which measurement time leads to a trade-off between measurement resolution and additional disturbance of the qubits. Under this model we observe that the performance of the surface code is very sensitive to the choice of the measurement time - for a distance-19 surface code, a five-fold increase in measurement time can lead to a thousand-fold increase in logical error rate. Moreover, the measurement time that minimizes the physical error rate is distinct from the one that minimizes the logical error rate, pointing to the benefits of jointly optimizing the physical and quantum error correction layers.
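The mechanism by which a matching decoder exploits soft information can be sketched concretely. Below is a minimal illustration (our own, not the paper's implementation) that assumes the analog readout for a bit is Gaussian around ±1:

```python
# Convert an analog measurement outcome into a soft weight for a
# measurement-error edge in a matching decoder. Model assumption: the
# readout for bit b is Gaussian with mean +1 (b = 0) or -1 (b = 1) and
# standard deviation sigma; the hard outcome is the sign of x.

def soft_edge_weight(x: float, sigma: float) -> float:
    # Log-likelihood ratio between the two Gaussians:
    #   log[ N(x; +1, sigma) / N(x; -1, sigma) ] = 2x / sigma^2.
    # The posterior probability that the sign-thresholded bit is wrong is
    #   p = 1 / (1 + exp(2|x| / sigma^2)),
    # and the matching weight log((1 - p) / p) simplifies back to the LLR.
    return 2.0 * abs(x) / sigma**2

# Outcomes near the threshold are cheap to flip; confident ones are costly.
print(soft_edge_weight(0.05, 0.5))  # ~0.4: decoder may flip this bit
print(soft_edge_weight(0.95, 0.5))  # ~7.6: flipping this is heavily penalized
```

A hard decoder assigns every measurement-error edge the same weight; the soft decoder instead lets borderline analog outcomes be corrected cheaply, which is where the threshold gain comes from.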
Microsoft Quantum developed the Azure Quantum Resource Estimator (AQRE), an open-source tool that automatically evaluates the logical and physical resources required for fault-tolerant quantum algorithms. Applied to quantum integer multiplication, the tool revealed that Karatsuba's algorithm, despite its asymptotic classical advantage, requires more physical qubits and is only faster than standard multiplication for very large inputs, highlighting the importance of accounting for full fault-tolerance overheads.
We provide the first tensor network method for computing quantum weight enumerator polynomials in the most general form. If a quantum code has a known tensor network construction of its encoding map, our method is far more efficient, and in some cases exponentially faster than the existing approach. As a corollary, it produces decoders and an algorithm that computes the code distance. For non-(Pauli)-stabilizer codes, this constitutes the current best algorithm for computing the code distance. For degenerate stabilizer codes, it can be substantially faster compared to the current methods. We also introduce novel weight enumerators and their applications. In particular, we show that these enumerators can be used to compute logical error rates exactly and thus construct (optimal) decoders for any i.i.d. single qubit or qudit error channels. The enumerators also provide a more efficient method for computing non-stabilizerness in quantum many-body states. As the power of these speedups relies on a Quantum Lego decomposition of quantum codes, we further provide systematic methods for decomposing quantum codes and graph states into a modular construction for which our technique applies. As a proof of principle, we perform exact analyses of the deformed surface codes, the holographic pentagon code, and the 2D Bacon-Shor code under (biased) Pauli noise and limited instances of coherent error at sizes that are inaccessible by brute force.
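As a reminder of the mechanism in the simplest setting (our illustration, classical rather than quantum): for a classical linear code with weight distribution $A_w$ over a binary symmetric channel with flip probability $p$, the weight enumerator directly yields the undetected-error probability,

$$P_{\text{undetected}}(p) \;=\; \sum_{w=1}^{n} A_w\, p^{w} (1-p)^{\,n-w},$$

since exactly the errors equal to nonzero codewords pass all parity checks. The quantum enumerators introduced here play the analogous role for logical error rates under i.i.d. Pauli channels.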
Quantum technologies such as communications, computing, and sensing offer vast opportunities for advanced research and development. While an open-source ethos currently exists within some quantum technologies, especially in quantum computer programming, we argue that there are additional advantages in developing open quantum hardware (OQH). Open quantum hardware encompasses open-source software for the control of quantum devices in labs, blueprints and open-source toolkits for chip design and other hardware components, as well as openly-accessible testbeds and facilities that allow cloud-access to a wider scientific community. We provide an overview of current projects in the OQH ecosystem, identify gaps, and make recommendations on how to close them today. More open quantum hardware would accelerate technology transfer to and growth of the quantum industry and increase accessibility in science.
Suppose $\boldsymbol{y}$ is a real random variable, and one is given access to "the code" that generates it (for example, a randomized or quantum circuit whose output is $\boldsymbol{y}$). We give a quantum procedure that runs the code $O(n)$ times and returns an estimate $\widehat{\boldsymbol{\mu}}$ for $\mu = \mathrm{E}[\boldsymbol{y}]$ that with high probability satisfies $|\widehat{\boldsymbol{\mu}} - \mu| \leq \sigma/n$, where $\sigma = \mathrm{stddev}[\boldsymbol{y}]$. This dependence on $n$ is optimal for quantum algorithms. One may compare with classical algorithms, which can only achieve the quadratically worse $|\widehat{\boldsymbol{\mu}} - \mu| \leq \sigma/\sqrt{n}$. Our method improves upon previous works, which either made additional assumptions about $\boldsymbol{y}$, and/or assumed the algorithm knew an a priori bound on $\sigma$, and/or used additional logarithmic factors beyond $O(n)$. The central subroutine for our result is essentially Grover's algorithm but with complex phases.
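Concretely, the quadratic separation translates into run counts as follows (standard arithmetic, not an additional result): to guarantee error $|\widehat{\boldsymbol{\mu}} - \mu| \le \varepsilon$,

$$n_{\text{classical}} = \Theta\!\left(\left(\frac{\sigma}{\varepsilon}\right)^{2}\right) \quad\text{vs.}\quad n_{\text{quantum}} = \Theta\!\left(\frac{\sigma}{\varepsilon}\right),$$

so for $\sigma = 1$ and $\varepsilon = 10^{-3}$ the quantum estimator needs on the order of $10^{3}$ runs of the code where a classical sampler needs on the order of $10^{6}$.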