Los Alamos National Laboratory
Quantum computers hold promise for solving problems intractable for classical computers, especially those with high time and/or space complexity. The reduction of the power flow (PF) problem to a linear system of equations allows the formulation of quantum power flow (QPF) algorithms based on quantum linear system solvers such as the Harrow-Hassidim-Lloyd (HHL) algorithm. QPF algorithms are claimed to offer an exponential speedup over state-of-the-art classical PF solvers. We investigate the potential for practical quantum advantage (PQA) in solving QPF compared to classical methods on gate-based quantum computers. We meticulously scrutinize the end-to-end complexity of QPF, providing a nuanced evaluation of the purported quantum speedup in this problem. Our analysis establishes a best-case bound for the HHL-QPF complexity, conclusively demonstrating the absence of any PQA in the direct current power flow (DCPF) and fast decoupled load flow (FDLF) problems. Additionally, we establish that potential PQA can exist only for DCPF-type problems with a very narrow range of condition numbers and readout requirements.
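The linear-system structure the abstract refers to is easiest to see in the DC approximation, where bus voltage angles follow from a single linear solve. Below is a minimal classical DCPF sketch in Python; the 3-bus susceptance matrix and injections are illustrative values of my own, not data from the paper. The quantum formulations replace the `numpy` solve with an HHL-style routine.

```python
import numpy as np

# DC power flow: B' * theta = P, with the slack bus removed.
# B' is the reduced susceptance matrix, P the net injections (p.u.).
# Illustrative 3-bus example (slack = bus 0); values are made up.
B_full = np.array([[ 15.0, -10.0,  -5.0],
                   [-10.0,  18.0,  -8.0],
                   [ -5.0,  -8.0,  13.0]])
P = np.array([0.9, -0.4])          # injections at buses 1 and 2

B_red = B_full[1:, 1:]             # drop slack row and column
theta = np.linalg.solve(B_red, P)  # classical dense solve

# Line flow between buses i and j: f_ij = b_ij * (theta_i - theta_j)
angles = np.concatenate(([0.0], theta))   # slack angle fixed at 0
f_01 = 10.0 * (angles[0] - angles[1])
print("bus angles (rad):", angles, " flow 0->1 (p.u.):", f_01)
```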
The carbon capture, utilization, and storage (CCUS) framework is an essential component in reducing greenhouse gas emissions, with its success hinging on comprehensive knowledge of subsurface geology and geomechanics. Passive seismic event relocation and fault detection serve as indispensable tools, offering vital insights into subsurface structures and fluid migration pathways. Accurate identification and localization of seismic events, however, face significant challenges, including the necessity for high-quality seismic data and advanced computational methods. To address these challenges, we introduce a novel deep learning method, DeFault, specifically designed for passive seismic source relocation and fault delineation in passive seismic monitoring projects. By leveraging data-domain adaptation, DeFault allows us to train a neural network with labeled synthetic data and apply it directly to field data. Using DeFault, passive seismic sources are automatically clustered based on their recording time and spatial locations, and faults and fractures are subsequently delineated accordingly. We demonstrate the efficacy of DeFault in a field case study involving CO2-injection-related microseismic data from the Decatur, Illinois area. Our approach accurately and efficiently relocated passive seismic events, identified faults, and aided in the prevention of potential geological hazards. Our results highlight the potential of DeFault as a valuable tool for passive seismic monitoring, emphasizing its role in ensuring CCUS project safety. This research bolsters the understanding of subsurface characterization in CCUS, illustrating machine learning's capacity to refine these methods. Ultimately, our work bears significant implications for CCUS technology deployment, an essential strategy in combating climate change.
The article introduces the stochastic N-k interdiction problem for power grid operations and planning, which aims to identify a subset of k components (out of N) whose removal maximizes the expected damage, measured in terms of load shed. Uncertainty is modeled through a fixed set of outage scenarios, where each scenario represents a subset of components removed from the grid. We formulate the stochastic N-k interdiction problem as a bi-level optimization problem and propose two algorithmic solutions. The first approach reformulates the bi-level stochastic optimization problem into a single-level mixed-integer linear program (MILP) by dualizing the inner problem; the resulting MILP is solved directly to global optimality with a MILP solver. The second is a heuristic cutting-plane approach, which is exact under certain assumptions. We compare these approaches in terms of computation time and solution quality on the IEEE Reliability Test System and present avenues for future research.
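For concreteness, the bi-level structure described above can be written schematically as follows; this is a sketch assuming a load-shed recourse in the inner problem, and the paper's exact constraint set may differ:

\[
\max_{\substack{x \in \{0,1\}^N \\ \sum_i x_i \le k}} \;
\mathbb{E}_{s}\!\left[\, \min_{(p,\,\ell) \,\in\, \mathcal{F}(x,\,s)} \; \sum_{b} \ell_b \,\right],
\]

where x selects the k interdicted components, s ranges over the fixed outage scenarios, and the inner problem dispatches generation p to minimize the total load shed ℓ over the power-flow feasibility set F(x, s). Dualizing the inner minimization collapses the max-min into the single-level MILP; the cutting-plane alternative instead iteratively adds cuts generated from inner-problem solutions.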
Diffusion Posterior Sampling (DPS) presents a unified framework for solving noisy linear and nonlinear inverse problems using diffusion models. The method introduces a novel approximation for the likelihood term, enabling robust handling of various noise statistics and general forward operators, and achieves state-of-the-art perceptual quality on tasks such as phase retrieval and non-uniform deblurring.
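In broad strokes, DPS guides the reverse diffusion with the gradient of a data-fidelity term evaluated at the posterior-mean estimate of the clean sample (via Tweedie's formula). The PyTorch sketch below is a hedged reconstruction of one guided reverse step, not the authors' reference code; `score_model`, `A`, `ddpm_update`, and the step scale `zeta` are placeholders.

```python
import torch

def dps_step(x_t, y, t, score_model, A, alpha_bar_t, ddpm_update, zeta=1.0):
    """One DPS-guided reverse-diffusion step (illustrative sketch).

    x_t: current noisy sample; y: measurement; A: forward operator;
    score_model: predicts the score s(x_t, t) ~ grad log p_t(x_t);
    ddpm_update: the unguided ancestral-sampling update x_t -> x_{t-1}.
    """
    x_t = x_t.detach().requires_grad_(True)
    score = score_model(x_t, t)

    # Tweedie / posterior-mean estimate of the clean sample x_0
    x0_hat = (x_t + (1.0 - alpha_bar_t) * score) / alpha_bar_t ** 0.5

    # Data-fidelity term; DPS differentiates it through x0_hat w.r.t. x_t
    residual = torch.linalg.vector_norm(y - A(x0_hat))
    grad = torch.autograd.grad(residual, x_t)[0]

    # Unguided DDPM step, then likelihood-gradient correction
    x_prev = ddpm_update(x_t.detach(), score.detach(), t)
    return x_prev - zeta * grad
```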
The Open Molecules 2025 (OMol25) dataset offers a large-scale collection of over 100 million density functional theory calculations designed to accelerate machine learning model development for molecular simulations. This resource spans a diverse chemical space, covering 83 elements and complex systems up to 350 atoms with various charge and spin states.
Researchers at QuEra Computing Inc. developed a transversal STAR architecture, co-designed with neutral-atom quantum hardware, to significantly reduce resource overhead for early fault-tolerant quantum simulation. This approach projects over 100x space-time savings and achieves logical error rates approaching 10^-6 for distance-7 surface codes, enabling megaquop-scale Hamiltonian simulation with fewer physical resources.
This paper introduces "The Well," a 15 terabyte collection of 16 diverse physics simulations designed to serve as a comprehensive benchmark for machine learning-based surrogate models. Benchmarking on this dataset reveals that current models struggle to achieve high accuracy and long-term stability in autoregressive predictions, often performing worse than simple baselines.
We show that the Floquet Hamiltonian of a quantum particle driven by a general time-periodic imaginary potential is exactly equivalent, at stroboscopic times, to the Hamiltonian of a free particle constrained to a curved Riemannian manifold with fixed embedding. We illustrate the construction for a sinusoidal drive and for the torus of revolution, and outline how the framework can guide experimental design of curved-space quantum dynamics. Our results unify non-Hermitian Floquet physics with spectral geometry and provide a general recipe for engineering quantum dynamics on embedded manifolds.
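The curved-space side of this equivalence is the standard free-particle Hamiltonian on a Riemannian manifold, i.e. the Laplace-Beltrami operator, shown here in local coordinates as a reference point (standard conventions; the paper's stroboscopic map identifies the Floquet Hamiltonian with an operator of this form):

\[
H_F = -\frac{\hbar^2}{2m}\,\Delta_g, \qquad
\Delta_g f = \frac{1}{\sqrt{\det g}}\;\partial_i\!\left(\sqrt{\det g}\; g^{ij}\,\partial_j f\right),
\]

with g the metric induced by the fixed embedding, so that the stroboscopic evolution over one drive period T satisfies U(T) = e^{-i H_F T/\hbar}.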
We introduce MORPH, a modality-agnostic, autoregressive foundation model for partial differential equations (PDEs). MORPH is built on a convolutional vision transformer backbone that seamlessly handles heterogeneous spatiotemporal datasets of varying modality (1D-3D) at different resolutions, with multiple fields that mix scalar and vector components. The architecture combines (i) component-wise convolution, which jointly processes scalar and vector channels to capture local interactions, (ii) inter-field cross-attention, which models and selectively propagates information between different physical fields, and (iii) axial attention, which factorizes full spatiotemporal self-attention along individual spatial and temporal axes to reduce computational burden while retaining expressivity. We pretrain multiple model variants on a diverse collection of heterogeneous PDE datasets and evaluate transfer to a range of downstream prediction tasks. Using both full-model fine-tuning and parameter-efficient low-rank adapters (LoRA), MORPH outperforms models trained from scratch. Across extensive evaluations, MORPH matches or surpasses strong baselines and recent state-of-the-art models. Collectively, these capabilities make it a flexible and powerful backbone for learning from the heterogeneous and multimodal nature of scientific observations, charting a path toward scalable and data-efficient scientific machine learning. The source code, datasets, and models are publicly available at this https URL.
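Of the three ingredients, axial attention is the most self-contained to illustrate: full self-attention over a (T, H, W) grid costs O((T·H·W)^2), while attending along one axis at a time reduces the quadratic factor to a single axis length. The sketch below is a generic axial-attention block of my own, not MORPH's implementation.

```python
import torch
import torch.nn as nn

class AxialAttention(nn.Module):
    """Self-attention applied independently along one axis of a
    (batch, T, H, W, channels) tensor -- a generic sketch, not MORPH's code."""

    def __init__(self, dim, heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)

    def forward(self, x, axis):  # axis in {1, 2, 3} for T, H, W
        x = x.movedim(axis, -2)                    # (..., L_axis, dim)
        lead = x.shape[:-2]
        x = x.reshape(-1, x.shape[-2], x.shape[-1])
        out, _ = self.attn(x, x, x)                # attend along one axis only
        out = out.reshape(*lead, out.shape[-2], out.shape[-1])
        return out.movedim(-2, axis)

# Factorized spatiotemporal attention: one residual pass per axis.
x = torch.randn(2, 8, 16, 16, 32)                  # (B, T, H, W, C)
block = AxialAttention(dim=32)
for ax in (1, 2, 3):                               # T, then H, then W
    x = x + block(x, ax)
```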
Gamma-ray bursts are the most luminous electromagnetic events in the universe. Their prompt gamma-ray emission has typical durations between a fraction of a second and several minutes. A rare subset of these events have durations in excess of a thousand seconds, referred to as ultra-long gamma-ray bursts. Here, we report the discovery of the longest gamma-ray burst ever seen with a ~25,000 s gamma-ray duration, GRB 250702B, and characterize this event using data from four instruments in the InterPlanetary Network and the Monitor of All-sky X-ray Image. We find a hard spectrum, subsecond variability, and high total energy, which are only known to arise from ultrarelativistic jets powered by a rapidly-spinning stellar-mass central engine. These properties and the extreme duration are together incompatible with all confirmed gamma-ray burst progenitors and nearly all models in the literature. This burst is naturally explained with the helium merger model, where a field binary ends when a black hole falls into a stripped star and proceeds to consume and explode it from within. Under this paradigm, GRB 250702B adds to the growing evidence that helium stars expand and that some ultra-long GRBs have similar evolutionary pathways as collapsars, stellar-mass gravitational wave sources, and potentially rare types of supernovae.
Variational quantum computing offers a flexible computational paradigm with applications in diverse areas. However, a key obstacle to realizing its potential is the Barren Plateau (BP) phenomenon. When a model exhibits a BP, its parameter optimization landscape becomes exponentially flat and featureless as the problem size increases. Importantly, all the moving pieces of an algorithm -- choices of ansatz, initial state, observable, loss function and hardware noise -- can lead to BPs when ill-suited. Due to the significant impact of BPs on trainability, researchers have dedicated considerable effort to developing theoretical and heuristic methods to understand and mitigate their effects. As a result, the study of BPs has become a thriving area of research, influencing and cross-fertilizing other fields such as quantum optimal control, tensor networks, and learning theory. This article provides a comprehensive review of the current understanding of the BP phenomenon.
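As a reference point for the discussion above, a loss function C(θ) is commonly said to exhibit a barren plateau when the variance of its partial derivatives vanishes exponentially in the number of qubits n (a standard formalization; the review also covers broader probabilistic variants):

\[
\mathrm{Var}_{\boldsymbol{\theta}}\big[\partial_{\mu} C(\boldsymbol{\theta})\big] \;\in\; O\!\big(b^{-n}\big) \quad \text{for some } b > 1,
\]

so that, by Chebyshev's inequality, gradients concentrate exponentially around their (typically zero) mean, and exponentially many measurement shots are needed to resolve a direction of descent.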
Diffusion models are commonly interpreted as learning the score function, i.e., the gradient of the log-density of noisy data. However, this assumption implies that the target of learning is a conservative vector field, which is not enforced by the neural network architectures used in practice. We present numerical evidence that trained diffusion networks violate both integral and differential constraints required of true score functions, demonstrating that the learned vector fields are not conservative. Despite this, the models perform remarkably well as generative mechanisms. To explain this apparent paradox, we advocate a new theoretical perspective: diffusion training is better understood as flow matching to the velocity field of a Wasserstein Gradient Flow (WGF), rather than as score learning for a reverse-time stochastic differential equation. Under this view, the "probability flow" arises naturally from the WGF framework, eliminating the need to invoke reverse-time SDE theory and clarifying why generative sampling remains successful even when the neural vector field is not a true score. We further show that non-conservative errors from neural approximation do not necessarily harm density transport. Our results advocate for adopting the WGF perspective as a principled, elegant, and theoretically grounded framework for understanding diffusion generative models.
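The differential constraint mentioned above is easy to state: a vector field v is conservative (the gradient of some scalar, as a true score ∇ log p must be) only if its Jacobian is symmetric everywhere. A hedged toy sketch of such a check in PyTorch, applied here to a hand-built non-conservative field rather than the paper's trained models:

```python
import torch

def jacobian_asymmetry(v, x):
    """Relative norm of the antisymmetric part of the Jacobian dv/dx at x.
    Zero (up to numerics) is necessary for v to be a gradient field."""
    J = torch.autograd.functional.jacobian(v, x)
    return torch.linalg.matrix_norm(J - J.T) / torch.linalg.matrix_norm(J)

# Toy field: a rotational (curl-carrying) part plus a conservative part.
def v(x):
    rot = torch.stack([-x[1], x[0]])   # not a gradient of anything
    return rot - x                     # -x = grad of -|x|^2 / 2

x0 = torch.tensor([0.3, -1.2])
print(jacobian_asymmetry(v, x0))       # clearly nonzero for this field
```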
Near-Earth asteroid 2024 YR4 was discovered on 2024-12-27 and its probability of Earth impact in December 2032 peaked at about 3% on 2025-02-18. Additional observations ruled out Earth impact by 2025-02-23. However, the probability of lunar impact in December 2032 then rose, reaching about 4% by the end of the apparition in May 2025. James Webb Space Telescope (JWST) observations on 2025-03-26 estimated the asteroid's diameter at 60 +/- 7 m. Studies of 2024 YR4's potential lunar impact effects suggest lunar ejecta could increase micrometeoroid debris flux in low Earth orbit up to 1000 times above background levels over just a few days, possibly threatening astronauts and spacecraft. In this work, we present options for space missions to 2024 YR4 that could be utilized if lunar impact is confirmed. We cover flyby & rendezvous reconnaissance, deflection, and robust disruption of the asteroid. We examine both rapid-response and delayed launch options through 2032. We evaluate chemical and solar electric propulsion, various launch vehicles, optimized deep space maneuvers, and gravity assists. Re-tasking extant spacecraft and using built spacecraft not yet launched are also considered. The best reconnaissance mission options launch in late 2028, leaving only approximately three years for development at the time of this writing in August 2025. Deflection missions were assessed and appear impractical. However, kinetic robust disruption missions are available with launches between April 2030 and April 2032. Nuclear robust disruption missions are also available with launches between late 2029 and late 2031. Finally, even if lunar impact is ruled out there is significant potential utility in deploying a reconnaissance mission to characterize the asteroid.
Applications such as simulating complicated quantum systems or solving large-scale linear algebra problems are very challenging for classical computers due to their extremely high computational cost. Quantum computers promise a solution, although fault-tolerant quantum computers will likely not be available in the near future. Current quantum devices have serious constraints, including limited numbers of qubits and noise processes that limit circuit depth. Variational Quantum Algorithms (VQAs), which use a classical optimizer to train a parametrized quantum circuit, have emerged as a leading strategy to address these constraints. VQAs have now been proposed for essentially all applications that researchers have envisioned for quantum computers, and they appear to be the best hope for obtaining quantum advantage. Nevertheless, challenges remain, including the trainability, accuracy, and efficiency of VQAs. Here we overview the field of VQAs, discuss strategies to overcome their challenges, and highlight the exciting prospects for using them to obtain quantum advantage.
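To make the hybrid loop concrete, here is a deliberately tiny VQA sketch in which a classical optimizer tunes a single-qubit circuit R_y(θ)|0⟩ to minimize the energy ⟨Z⟩, simulated exactly with numpy. This is a toy illustration of the paradigm, not any specific algorithm from the review.

```python
import numpy as np
from scipy.optimize import minimize

Z = np.diag([1.0, -1.0])                 # observable / "Hamiltonian"

def ansatz(theta):
    """Parametrized circuit R_y(theta)|0>, statevector-simulated."""
    return np.array([np.cos(theta / 2), np.sin(theta / 2)])

def cost(params):
    psi = ansatz(params[0])
    return float(psi @ Z @ psi)          # expectation <psi|Z|psi> = cos(theta)

# Classical optimizer in the loop; on hardware, cost() would be
# estimated from measurement shots rather than exact linear algebra.
result = minimize(cost, x0=[0.1], method="COBYLA")
print(result.x, cost(result.x))          # theta -> pi, energy -> -1
```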
This research demonstrates that Quantum Convolutional Neural Networks (QCNNs), despite their heuristic successes, are effectively classically simulable for commonly used benchmark tasks. The paper shows that QCNNs primarily process low-bodyness information and that current benchmark datasets are "locally-easy," enabling efficient classical simulation with high accuracy and minimal quantum resources.
We present a classical algorithm based on Pauli propagation for estimating expectation values of arbitrary observables on random unstructured quantum circuits across all circuit architectures and depths, including those with all-to-all connectivity. We prove that for any architecture where each circuit layer is randomly sampled from a distribution invariant under single-qubit rotations, our algorithm achieves a small error ε on all circuits except for a small fraction δ. The computational time is polynomial in qubit count and circuit depth for any small constant ε, δ, and quasi-polynomial for inverse-polynomially small ε, δ. Our results show that estimating observables of quantum circuits exhibiting chaotic and locally scrambling behavior is classically tractable across all geometries. We further conduct numerical experiments beyond our average-case assumptions, demonstrating the potential utility of Pauli propagation methods for simulating real-time dynamics and finding low-energy states of physical Hamiltonians.
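Pauli propagation works in the Heisenberg picture: the observable is back-propagated through the circuit as a sum of Pauli strings, and each rotation gate splits any anticommuting string into two. The elementary conjugation rule (standard Pauli algebra, shown as context rather than anything specific to this paper) is

\[
R_P(\theta)^{\dagger}\, Q\, R_P(\theta) =
\begin{cases}
Q, & [P, Q] = 0,\\[4pt]
\cos(\theta)\, Q + \sin(\theta)\,(iPQ), & \{P, Q\} = 0,
\end{cases}
\qquad R_P(\theta) = e^{-i\theta P/2},
\]

e.g. R_X(θ)† Z R_X(θ) = cos(θ) Z + sin(θ) Y. Truncating the exponentially branching sum, for instance by Pauli weight or coefficient magnitude, is what yields a polynomial-time estimator.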
We introduce LOWESA as a classical algorithm for faithfully simulating quantum systems via a classically constructed surrogate expectation landscape. After an initial overhead to build the surrogate landscape, one can rapidly study entire families of Hamiltonians, initial states and target observables. As a case study, we simulate the 127-qubit transverse-field Ising quantum system on a heavy-hexagon lattice with up to 20 Trotter steps which was recently presented in Nature 618, 500-505 (2023). Specifically, we approximately reconstruct (in minutes to hours on a laptop) the entire expectation landscape spanned by the heavy-hex Ising model. The expectation of a given observable can then be evaluated at different parameter values, i.e. with different onsite magnetic fields and coupling strengths, in fractions of a second on a laptop. This highlights that LOWESA can attain state-of-the-art performance in quantum simulation tasks, with the potential to become the algorithm of choice for scanning a wide range of systems quickly.
The accelerating pace and expanding scope of materials discovery demand optimization frameworks that efficiently navigate vast, nonlinear design spaces while judiciously allocating limited evaluation resources. We present a cost-aware, batch Bayesian optimization scheme powered by deep Gaussian process (DGP) surrogates and a heterotopic querying strategy. Our DGP surrogate, formed by stacking GP layers, models complex hierarchical relationships among high-dimensional compositional features and captures correlations across multiple target properties, propagating uncertainty through successive layers. We integrate evaluation cost into an upper-confidence-bound acquisition extension, which, together with heterotopic querying, proposes small batches of candidates in parallel, balancing exploration of under-characterized regions with exploitation of high-mean, low-variance predictions across correlated properties. Applied to refractory high-entropy alloys for high-temperature applications, our framework converges to optimal formulations in fewer iterations with cost-aware queries than conventional GP-based BO, highlighting the value of deep, uncertainty-aware, cost-sensitive strategies in materials campaigns.
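One simple way to fold evaluation cost into an upper-confidence-bound acquisition, in the spirit described above, is cost-normalized UCB (a schematic form; the paper's exact extension may differ):

\[
\alpha(x) \;=\; \frac{\mu(x) + \sqrt{\beta}\,\sigma(x)}{c(x)},
\]

where μ and σ are the DGP posterior mean and standard deviation, c(x) is the (predicted) evaluation cost, and β trades off exploration against exploitation; batches are then assembled from high-α candidates across the correlated target properties.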
Self-Organising Memristive Networks (SOMNs) offer a solution to the energy inefficiency of conventional AI hardware by leveraging intrinsic brain-like dynamics, such as plasticity and criticality, for on-device, continual learning. These networks demonstrate capabilities in physical reservoir computing and associative learning, providing a path towards sustainable embedded edge intelligence.
Researchers at EPFL and Los Alamos National Laboratory developed a classical algorithm that efficiently simulates typical quantum circuits subjected to arbitrary local noise, including non-unital and dephasing channels. This method achieves inversely polynomial precision and demonstrates that such noise effectively limits circuits to a logarithmic depth.