Istituto Nazionale di Geofisica e Vulcanologia
In combinatorial optimization, probabilistic Ising machines (PIMs) have gained significant attention for their acceleration of Monte Carlo sampling, with the potential to reduce the time-to-solution in finding approximate ground states. However, to be viable in real applications, further improvements in scalability and energy efficiency are necessary. One promising path toward this objective is a co-design approach combining different technology layers, including devices, circuits, and algorithms. Here, we experimentally demonstrate a fully connected PIM architecture based on 250 spin-transfer torque magnetic tunnel junctions (STT-MTJs) interfaced with an FPGA. Our computing approach integrates STT-MTJ-based tunable true random number generators with advanced annealing techniques, enabling the solution of problems of any topology and size. For sparsely connected graphs, the massively parallel architecture of our PIM enables a cluster parallel update method that overcomes the serial limitations of Gibbs sampling, leading to a tenfold acceleration without hardware changes. Furthermore, we show experimentally that simulated quantum annealing improves solution quality twentyfold over conventional simulated annealing while also increasing robustness to MTJ variability. Short-pulse switching measurements indicate that STT-MTJ-based PIMs can potentially be ten times faster and ten times more energy-efficient than graphics processing units, paving the way for future large-scale, high-performance, and energy-efficient unconventional computing hardware.
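The specific update schedule of this PIM is not reproduced here; as a minimal software sketch under generic assumptions, the snippet below shows a standard Gibbs-style p-bit update for the Ising energy E(s) = -1/2 Σ J_ij s_i s_j - Σ h_i s_i, together with a graph-coloring ("cluster-parallel") sweep in which mutually uncoupled spins are updated simultaneously, one common way to exploit sparsity of the kind mentioned above.

```python
import numpy as np

rng = np.random.default_rng(0)

def gibbs_update(s, J, h, beta, i):
    """Sequential Gibbs (p-bit) update of spin i:
    P(s_i = +1) = sigmoid(2*beta*(J[i] @ s + h[i]))."""
    local_field = J[i] @ s + h[i]
    p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * local_field))
    s[i] = 1 if rng.random() < p_up else -1

def colored_parallel_sweep(s, J, h, beta, color_classes):
    """Chromatic (cluster-parallel) sweep: all spins of one color class are updated
    simultaneously; spins sharing a color must not be coupled (J_ij = 0), so the
    simultaneous updates remain valid Gibbs updates."""
    for cls in color_classes:
        fields = J[cls] @ s + h[cls]
        p_up = 1.0 / (1.0 + np.exp(-2.0 * beta * fields))
        s[cls] = np.where(rng.random(len(cls)) < p_up, 1, -1)

# toy sparse problem: an 8-spin ring (2-colorable), hypothetical couplings
n = 8
J = np.zeros((n, n))
for i in range(n):
    J[i, (i + 1) % n] = J[(i + 1) % n, i] = 1.0
h = np.zeros(n)
s = rng.choice([-1, 1], size=n)
colors = [np.arange(0, n, 2), np.arange(1, n, 2)]
for _ in range(100):
    colored_parallel_sweep(s, J, h, beta=2.0, color_classes=colors)
print("spins:", s, "energy:", -0.5 * s @ J @ s)
```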
The Lunar Gravitational-wave Antenna (LGWA) is a proposed array of next-generation inertial sensors to monitor the response of the Moon to gravitational waves (GWs). Given the size of the Moon and the expected noise produced by the lunar seismic background, the LGWA would be able to observe GWs from about 1 mHz to 1 Hz. This would make the LGWA the missing link between space-borne detectors like LISA with peak sensitivities around a few millihertz and proposed future terrestrial detectors like Einstein Telescope or Cosmic Explorer. In this article, we provide a first comprehensive analysis of the LGWA science case including its multi-messenger aspects and lunar science with LGWA data. We also describe the scientific analyses of the Moon required to plan the LGWA mission.
Recent demonstrations on specialized benchmarks have reignited excitement for quantum computers, yet whether they can deliver an advantage for practical real-world problems remains an open question. Here, we show that probabilistic computers (p-computers), when co-designed with hardware to implement powerful Monte Carlo algorithms, provide a compelling and scalable classical pathway for solving hard optimization problems. We focus on two key algorithms applied to 3D spin glasses: discrete-time simulated quantum annealing (DT-SQA) and adaptive parallel tempering (APT). We benchmark these methods against the performance of a leading quantum annealer on the same problem instances. For DT-SQA, we find that increasing the number of replicas improves residual-energy scaling, in line with expectations from extreme value theory. We then show that APT, when supported by non-local isoenergetic cluster moves, exhibits more favorable scaling and ultimately outperforms DT-SQA. We demonstrate that these algorithms are readily implementable in modern hardware, projecting that custom field-programmable gate arrays (FPGAs) or specialized chips can leverage massive parallelism to accelerate them by orders of magnitude while drastically improving energy efficiency. Our results establish a new, rigorous classical baseline, clarifying the landscape for assessing a practical quantum advantage and presenting p-computers as a scalable platform for real-world optimization challenges.
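Neither APT's adaptive temperature ladder nor the isoenergetic cluster moves are reproduced here; as a minimal sketch under generic assumptions, the snippet below shows the plain parallel-tempering loop they build on: Metropolis sweeps at a fixed ladder of temperatures plus the standard replica-exchange step.

```python
import numpy as np

rng = np.random.default_rng(1)

def ising_energy(s, J, h):
    return -0.5 * s @ J @ s - h @ s

def metropolis_sweep(s, J, h, beta):
    for i in rng.permutation(len(s)):
        dE = 2.0 * s[i] * (J[i] @ s + h[i])      # energy cost of flipping spin i
        if dE <= 0 or rng.random() < np.exp(-beta * dE):
            s[i] = -s[i]

def replica_exchange(replicas, energies, betas):
    """Standard parallel-tempering swap: exchange neighbouring temperatures with
    probability min(1, exp((beta_k - beta_{k+1}) * (E_k - E_{k+1})))."""
    for k in range(len(betas) - 1):
        delta = (betas[k] - betas[k + 1]) * (energies[k] - energies[k + 1])
        if delta >= 0 or rng.random() < np.exp(delta):
            replicas[k], replicas[k + 1] = replicas[k + 1], replicas[k]
            energies[k], energies[k + 1] = energies[k + 1], energies[k]

# toy spin-glass instance with hypothetical random couplings and a fixed temperature ladder
n, m = 32, 8
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T
h = np.zeros(n)
betas = np.geomspace(0.1, 3.0, m)
replicas = [rng.choice([-1, 1], size=n) for _ in range(m)]
for _ in range(200):
    for r, beta in zip(replicas, betas):
        metropolis_sweep(r, J, h, beta)
    energies = [ising_energy(r, J, h) for r in replicas]
    replica_exchange(replicas, energies, betas)
print("best energy:", min(ising_energy(r, J, h) for r in replicas))
```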
Combinatorial Optimization (CO) problems exhibit exponential complexity, making their resolution challenging. Adiabatic Simulated Bifurcation (aSB) is a quantum-inspired algorithm for obtaining approximate solutions to large-scale CO problems written in the Ising form. It explores the solution space by emulating the adiabatic evolution of a network of Kerr-nonlinear parametric oscillators (KPOs), where each oscillator represents a variable of the problem, and the optimal solution corresponds to the ground state of this system. A key advantage of this approach is the possibility of updating multiple variables simultaneously, making it particularly well suited to hardware implementation. To enhance solution quality and convergence speed, variations of the algorithm have been proposed in the literature, including ballistic (bSB), discrete (dSB), and thermal (HbSB) versions. In this work, we comprehensively analyze dSB, bSB, and HbSB using dedicated software models, evaluating the feasibility of a fixed-point representation for hardware implementation. We then present an open-source hardware architecture implementing the dSB algorithm on Field-Programmable Gate Arrays (FPGAs). The design allows users to adjust the degree of algorithmic parallelization based on their specific requirements. A proof-of-concept implementation that solves 256-variable problems was achieved on an AMD Kria KV260 SoM, a low-tier FPGA, and validated on well-known max-cut and knapsack problems.
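The paper's fixed-point FPGA design is not reproduced here; the sketch below is a floating-point software rendition of the dSB update rule as described in the simulated-bifurcation literature, with illustrative (untuned) parameters, showing the simultaneous, vectorized variable updates that make the algorithm hardware-friendly.

```python
import numpy as np

rng = np.random.default_rng(2)

def dsb_solve(J, steps=1000, dt=0.5, a0=1.0, c0=None):
    """Minimal discrete simulated bifurcation (dSB) sketch for minimizing
    E(s) = -1/2 s^T J s. The linear pump ramp, time step, and c0 heuristic are
    illustrative choices, not tuned values."""
    n = J.shape[0]
    if c0 is None:
        c0 = 0.5 / (np.sqrt(np.mean(J**2) * n) + 1e-12)   # rough coupling normalization
    x = 0.01 * rng.standard_normal(n)   # oscillator positions
    y = 0.01 * rng.standard_normal(n)   # conjugate momenta
    for step in range(steps):
        a_t = a0 * step / steps                            # linearly ramped pump amplitude
        # dSB-specific discretization: the mean-field force uses sign(x_j) instead of x_j
        y += (-(a0 - a_t) * x + c0 * (J @ np.sign(x))) * dt
        x += a0 * y * dt
        # inelastic walls (shared with bSB): clamp positions and reset momenta
        out = np.abs(x) > 1.0
        x[out] = np.sign(x[out])
        y[out] = 0.0
    return np.where(x >= 0.0, 1, -1)

# toy spin-glass instance with hypothetical random couplings
n = 64
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T
s = dsb_solve(J)
print("energy:", -0.5 * s @ J @ s)
```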
We introduce a universal theory of phase auto-oscillators driven by a biharmonic signal (with frequency components close to the free-running oscillator frequency and to twice that frequency) in the presence of noise. With it, we show how deterministic phase locking and stochastic phase slips can be continuously tuned by varying the relative amplitudes and frequencies of the driving components. Using a spin-torque nano-oscillator as an example, we numerically validate this theory by implementing a deterministic Ising machine paradigm, a probabilistic one, and dual-mode operation of the two. This demonstration introduces the concept of adaptive Ising machines (AIMs), a unified oscillator-based architecture that dynamically combines both regimes within the same hardware platform by properly tuning the amplitudes of the biharmonic driving relative to the noise strength. Benchmarking on different classes of combinatorial optimization problems, the AIM exhibits performance complementary to oscillator-based Ising machines (OIMs) and probabilistic Ising machines, with adaptability to the specific problem class. This work introduces the first OIM capable of transitioning between deterministic and probabilistic computation by properly designing the trade-off between the strength of phase locking of an auto-oscillator to a biharmonic external drive and the noise, opening a path toward scalable, CMOS-compatible hardware for hybrid optimization and inference.
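The full biharmonic theory is beyond a short snippet; as a rough illustration under generic assumptions, the sketch below simulates Kuramoto-style phase dynamics in which only the second-harmonic component of the drive is modeled (as an injection-locking term that binarizes the phases) together with phase noise. Ramping the locking strength relative to the noise moves the dynamics from a phase-slip-dominated (probabilistic) regime toward a deterministic one, loosely mirroring the adaptive behaviour described above; all parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(9)

def oim_anneal(J, steps=4000, dt=0.05, K=1.0, Ks_max=2.0, noise=0.5):
    """Kuramoto-style phase dynamics with second-harmonic injection locking:
    dphi_i = [-K*sum_j J_ij*sin(phi_i - phi_j) - Ks(t)*sin(2*phi_i)]*dt + noise*dW.
    Binarized phases (0 or pi) correspond to spins, and the noiseless dynamics
    descends a Lyapunov function whose binarized value is E(s) = -1/2 s^T J s."""
    n = J.shape[0]
    phi = rng.uniform(0.0, 2.0 * np.pi, size=n)
    for t in range(steps):
        ks = Ks_max * t / steps                                 # ramp the 2f locking strength
        coupling = -K * np.sum(J * np.sin(phi[:, None] - phi[None, :]), axis=1)
        dphi = (coupling - ks * np.sin(2.0 * phi)) * dt
        dphi += noise * np.sqrt(dt) * rng.standard_normal(n)    # phase noise (stochastic regime)
        phi = (phi + dphi) % (2.0 * np.pi)
    return np.where(np.cos(phi) > 0.0, 1, -1)                   # map phases to Ising spins

# toy Ising instance with hypothetical random couplings
n = 32
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T
s = oim_anneal(J)
print("Ising energy:", -0.5 * s @ J @ s)
```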
In a recent study (Jozinović et al., 2020) we showed that convolutional neural networks (CNNs) applied to network seismic traces can be used for rapid prediction of earthquake peak ground motion intensity measures (IMs) at distant stations using only recordings from stations near the epicenter. The predictions are made without any prior knowledge of the earthquake location and magnitude. This approach differs from the standard procedure adopted by earthquake early warning systems (EEWSs), which rely on location and magnitude information. In the previous study, we used 10 s, raw, multistation waveforms for 915 events of the 2016 earthquake sequence in central Italy (the CI dataset), which contains a large number of spatially concentrated earthquakes recorded by a dense station network. In this work, we applied the CNN model to an area around the VIRGO gravitational-wave observatory near Pisa, Italy. In our initial application of the technique, we used a dataset consisting of 266 earthquakes recorded by 39 stations. We found that the CNN model trained on this smaller dataset performed worse than in the original study by Jozinović et al. (2020). To counter the lack of data, we adopted transfer learning (TL) using two approaches: first, by using a pre-trained model built on the CI dataset and, second, by using a pre-trained model built on a different (seismological) problem with a larger training dataset. We show that the use of TL improves the results in terms of outliers, bias, and variability of the residuals between predicted and true IM values. We also demonstrate that adding knowledge of station positions as an additional layer in the neural network improves the results. The potential use for EEW is demonstrated by the warning times that would be available at station PII.
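The paper's network architecture is not reproduced here; the sketch below only illustrates the generic transfer-learning recipe described above, in PyTorch, on a toy stand-in model: the convolutional feature extractor is assumed to have been pretrained on the larger dataset and is frozen, while a freshly initialized regression head is trained on the smaller target dataset. Layer sizes, channel counts, and the checkpoint filename are hypothetical.

```python
import torch
import torch.nn as nn

class WaveformCNN(nn.Module):
    """Toy stand-in for a multistation waveform CNN (all shapes are illustrative)."""
    def __init__(self, n_channels=3 * 39, n_outputs=39):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=7, padding=3), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.head = nn.Linear(64, n_outputs)   # regression head: one IM per station

    def forward(self, x):
        return self.head(self.features(x).squeeze(-1))

model = WaveformCNN()
# In practice the feature extractor would be initialized from weights pretrained
# on the larger dataset, e.g. (hypothetical checkpoint name):
# model.features.load_state_dict(torch.load("pretrained_features.pt"))

# Transfer learning: freeze the pretrained feature extractor...
for p in model.features.parameters():
    p.requires_grad = False
# ...and train only the freshly initialized head on the smaller target dataset.
optimizer = torch.optim.Adam(model.head.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

x = torch.randn(8, 3 * 39, 1000)   # placeholder batch: 10 s of 3-component data, 39 stations
y = torch.randn(8, 39)             # placeholder target IMs
optimizer.zero_grad()
loss = loss_fn(model(x), y)
loss.backward()
optimizer.step()
print(float(loss))
```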
Geophysical systems are inherently complex and span multiple spatial and temporal scales, making their dynamics challenging to understand and predict. This challenge is especially pronounced for extreme events, which are primarily governed by their instantaneous properties rather than their average characteristics. Advances in dynamical systems theory, including the development of local dynamical indices such as local dimension and inverse persistence, have provided powerful tools for studying these short-lasting phenomena. However, existing applications of such indices often rely on predefined fixed spatial domains and scales, with limited discussion on the influence of spatial scales on the results. In this work, we present a novel spatially multiscale methodology that applies a sliding window method to compute dynamical indices, enabling the exploration of scale-dependent properties. Applying this framework to high-impact European summertime heatwaves, we reconcile previously different perspectives, thereby underscoring the importance of spatial scales in such analyses. Furthermore, we emphasize that our novel methodology has broad applicability to other atmospheric phenomena, as well as to other geophysical and spatio-temporal systems.
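The paper's sliding-window methodology is not reproduced in detail; the sketch below shows, under simple assumptions about array shapes, the standard extreme-value estimate of the local dimension (the inverse mean exceedance of -log distances above a high quantile) computed within sliding spatial windows, which is the kind of scale-dependent diagnostic discussed above. Inverse-persistence (extremal index) estimation is omitted for brevity.

```python
import numpy as np

def local_dimension(field, target, quantile=0.98):
    """Local dimension of 'target' within the trajectory 'field' (snapshots x gridpoints),
    following the extreme-value approach: d = 1 / mean exceedance of -log(distance)
    above a high threshold."""
    dist = np.linalg.norm(field - target, axis=1)
    dist[dist == 0] = np.nan                       # exclude the target snapshot itself
    g = -np.log(dist)
    thresh = np.nanquantile(g, quantile)
    exceed = g[g > thresh] - thresh
    return 1.0 / np.nanmean(exceed)

def sliding_window_dims(field_2d, target_2d, win, quantile=0.98):
    """Sliding-window variant: compute the local dimension restricted to each
    win x win spatial patch, giving a map of scale-dependent dimensions.
    Array shapes (time, ny, nx) are illustrative assumptions."""
    T, ny, nx = field_2d.shape
    out = np.full((ny - win + 1, nx - win + 1), np.nan)
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            patch = field_2d[:, i:i + win, j:j + win].reshape(T, -1)
            tgt = target_2d[i:i + win, j:j + win].ravel()
            out[i, j] = local_dimension(patch, tgt, quantile)
    return out

# toy usage with random data (real applications would use reanalysis fields)
rng = np.random.default_rng(3)
field = rng.standard_normal((2000, 20, 30))
dims = sliding_window_dims(field, field[100], win=10)
print(dims.shape, np.nanmean(dims))
```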
Nonlinear dynamical systems are ubiquitous in nature and hard to forecast. Not only may they be sensitive to small perturbations in their initial conditions, but they are often composed of processes acting at multiple scales. Classical approaches based on the Lyapunov spectrum rely on knowledge of the forward dynamical operator, or of a data-derived approximation of it. This operator is typically unknown, or the data are too noisy to derive a faithful representation. Here, we propose a new data-driven approach to analyze the local predictability of dynamical systems. This method, based on the concept of recurrence, is closely linked to the well-established framework of local dynamical indices. Applied to both idealized systems and real-world datasets, the new index gives results consistent with existing knowledge, proving its effectiveness in estimating local predictability. Additionally, we discuss its relationship with local dynamical indices, illustrating how it complements the previous framework as a more direct measure of predictability. Furthermore, we explore how it reflects the scale-dependent nature of predictability, an extension that includes a weighting strategy, and its real-time application. We believe these aspects collectively demonstrate its potential as a powerful diagnostic tool for complex systems.
Volcanism on Venus has never been directly observed, but several measurements indicate present-day activity. Volcanism could potentially play a role in climatic processes on Venus, especially in the sulfur cycle, as it does on Earth. Observation of volcanic activity is a primary objective of future Venus spacecraft. However, there are many unknowns regarding its Venusian characteristics, such as the conditions at the vent and the volatile content and composition. Past modelling efforts have only studied explosive volcanic plume propagation over a limited range of flow parameters at the vent and in an idealised Venus atmospheric configuration. We propose to use the 1D FPLUME volcanic plume model in a realistic Venusian environment. Under comparable Venusian conditions, the plume heights we obtain are consistent with past modelling. The present study shows that explosive volcanism would preferentially reach about 15 km of altitude. Under certain conditions, plumes are able to reach the VenSpec-H tropospheric altitude range of observations and even the 45 km cloud floor. For the first time, the impact of wind is quantified: the super-rotating winds have a substantial effect, bending the plumes and reducing their height. Unlike on Earth, the atmospheric heat capacity depends strongly on temperature, which disadvantages lower plumes while allowing larger plumes to propagate to higher altitudes. The high-latitude atmospheric environment, owing to its thermal profile and weaker winds, is favorable to plumes reaching higher altitudes.
Ising machines can solve combinatorial optimization problems by representing them as energy minimization problems. A common implementation is the probabilistic Ising machine (PIM), which uses probabilistic (p-) bits to represent coupled binary spins. However, many real-world problems have complex data representations that do not map naturally into a binary encoding, leading to a significant increase in hardware resources and time-to-solution. Here, we describe a generalized spin model that supports an arbitrary number of spin dimensions, each with an arbitrary real component. We define the probabilistic d-dimensional bit (p-dit) as the base unit of a p-computing implementation of this model. We further describe two restricted forms of p-dits for specific classes of common problems and implement them experimentally on an application-specific integrated circuit (ASIC): (A) isotropic p-dits, which simplify the implementation of categorical variables resulting in ~34x performance improvement compared to a p-bit implementation on an example 3-partition problem. (B) Probabilistic integers (p-ints), which simplify the representation of numeric values and provide ~5x improvement compared to a p-bit implementation of an example integer linear programming (ILP) problem. Additionally, we report a field-programmable gate array (FPGA) p-int-based integer quadratic programming (IQP) solver which shows ~64x faster time-to-solution compared to the best of a series of state-of-the-art software solvers. The generalized formulation of probabilistic variables presented here provides a path to solving large-scale optimization problems on various hardware platforms including digital CMOS.
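The hardware encoding of p-dits is not reproduced here; as a rough software analogue under illustrative assumptions, the sketch below resamples a categorical (isotropic p-dit-like) variable from a Boltzmann distribution over its d states, conditioned on the rest of a Potts-style network, which is the kind of update that replaces a cluster of one-hot p-bits for categorical problems such as partitioning.

```python
import numpy as np

rng = np.random.default_rng(4)

def pdit_gibbs_update(states, W, i, d, beta=1.0):
    """Resample categorical variable i from a Boltzmann (softmax) distribution over
    its d states, conditioned on all other variables. The Potts-style energy used
    here (cost W[i, j] paid when i and j share a state) is purely illustrative."""
    energies = np.array([np.sum(W[i][states == k]) for k in range(d)])
    logits = -beta * energies
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    states[i] = rng.choice(d, p=probs)

# toy partition flavour: assign n items to d groups, penalizing same-group pairs
n, d = 30, 3
W = rng.random((n, n))
W = (W + W.T) / 2.0
np.fill_diagonal(W, 0.0)
states = rng.integers(0, d, size=n)
for sweep in range(200):
    for i in rng.permutation(n):
        pdit_gibbs_update(states, W, i, d, beta=2.0)
print("group sizes:", np.bincount(states, minlength=d))
```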
Short-term earthquake clustering is one of the most important features of seismicity. Clusters are identified using various techniques, generally deterministic and based on spatio-temporal windowing. Conversely, the prevailing approach to short-term earthquake forecasting takes a probabilistic view of clustering, usually based on Epidemic Type Aftershock Sequence (ETAS) models. In this study we compare seismic clusters, identified by two different deterministic window-based techniques, with the ETAS probabilities associated with each event in the clusters, thus investigating the consistency between the deterministic and probabilistic approaches. The comparison is performed by considering, for each event in an identified cluster, the corresponding probability of being independent and the expected number of triggered events according to ETAS. Results show no substantial differences between the cluster identification procedures, and an overall consistency between the identified clusters and the corresponding events' ETAS probabilities.
The physical properties of galactic cirrus emission are not well characterized. BOOMERanG is a balloon-borne experiment designed to study the cosmic microwave background at high angular resolution in the millimeter range. The BOOMERanG 245 and 345 GHz channels are sensitive to interstellar signals, in a spectral range intermediate between FIR and microwave frequencies. We look for physical characteristics of cirrus structures in a region at high galactic latitudes (b ~ -40°) where BOOMERanG performed its deepest integration, combining the BOOMERanG data with other available datasets at different wavelengths. We have detected eight emission patches in the 345 GHz map, consistent with cirrus dust in the Infrared Astronomical Satellite (IRAS) maps. The analysis technique we have developed allows us to identify the location and shape of cirrus clouds, and to extract their flux from observations made with different instruments at different wavelengths and angular resolutions. We study the integrated flux emitted from these cirrus clouds using data from IRAS, DIRBE, BOOMERanG and the Wilkinson Microwave Anisotropy Probe (WMAP) in the frequency range 23-3000 GHz (wavelengths from 13 mm to 100 microns). We fit the measured spectral energy distributions with a combination of a grey body and a power-law spectrum, considering two models for the thermal emission. The temperature of the thermal dust component varies in the 7-20 K range and its emissivity spectral index lies in the 1-5 range. We identify a physical relation between temperature and spectral index, as had been proposed in previous works. This technique can be proficiently used on data from the forthcoming Planck and Herschel missions.
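The exact fitting procedure is not reproduced; as a minimal sketch, the snippet below fits a grey body (modified blackbody) plus a low-frequency power law to synthetic fluxes at frequencies loosely matching the WMAP/BOOMERanG/IRAS bands mentioned above. The normalizations, the 353 GHz reference frequency, and the synthetic data are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.constants import h, k, c

def planck(nu, T):
    """Planck function B_nu(T) (SI units)."""
    return 2.0 * h * nu**3 / c**2 / np.expm1(h * nu / (k * T))

def grey_body_plus_powerlaw(nu, A, T, beta, B, alpha):
    """Grey body normalized to amplitude A at a reference frequency, plus a
    low-frequency power law; nu0 = 353 GHz is an arbitrary choice."""
    nu0 = 353e9
    grey = A * (nu / nu0)**beta * planck(nu, T) / planck(nu0, T)
    return grey + B * (nu / nu0)**(-alpha)

# synthetic fluxes at frequencies loosely matching WMAP/BOOMERanG/IRAS bands (not real data)
nu = np.array([23, 33, 41, 61, 94, 245, 345, 1250, 3000]) * 1e9
rng = np.random.default_rng(5)
truth = grey_body_plus_powerlaw(nu, 50.0, 15.0, 1.8, 0.05, 2.5)
obs = truth * (1.0 + 0.05 * rng.standard_normal(nu.size))

p0 = [30.0, 12.0, 1.5, 0.1, 2.0]
popt, pcov = curve_fit(grey_body_plus_powerlaw, nu, obs, p0=p0,
                       sigma=0.05 * obs, maxfev=20000)
print("T = %.1f K, beta = %.2f" % (popt[1], popt[2]))
```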
Probabilistic computing with p-bits is emerging as a computational paradigm for machine learning and for tackling combinatorial optimization problems (COPs) with so-called probabilistic Ising machines (PIMs). From a hardware point of view, the key elements that characterize a PIM are the random number generation, the nonlinearity, the network of coupled p-bits, and the energy minimization algorithm. Regarding the latter, in this work we show that PIMs using the simulated quantum annealing (SQA) schedule exhibit better performance than simulated annealing and parallel tempering in solving a number of COPs, such as maximum satisfiability, the planted Ising problem, and the travelling salesman problem. Additionally, we design and simulate the architecture of a fully connected CMOS-based PIM able to run the SQA algorithm with a spin-update time of 8 ns and a power consumption of 0.22 mW. Our results also show that SQA increases the reliability and scalability of PIMs by compensating for device variability at the algorithmic level, enabling implementations that combine CMOS with other technologies such as spintronics. This work shows that the characteristics of SQA are hardware-agnostic and can be applied in the co-design of any hybrid analog-digital Ising machine implementation. Our results open a promising direction for the implementation of a new generation of reliable and scalable PIMs.
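The hardware SQA schedule itself is not reproduced; the sketch below shows, with illustrative parameters, the standard discrete-time path-integral form of SQA that such schedules build on: P Trotter replicas share the problem couplings (scaled by 1/P) and are ferromagnetically coupled along imaginary time with strength J_perp = (1/(2β)) ln coth(βΓ/P), while the transverse field Γ is annealed toward zero.

```python
import numpy as np

rng = np.random.default_rng(6)

def problem_energy(s, J, h):
    return -0.5 * s @ J @ s - h @ s

def sqa(J, h, P=16, beta=2.0, gamma0=3.0, sweeps=500):
    """Discrete-time SQA sketch: P Trotter replicas, intra-replica couplings scaled
    by 1/P, inter-replica coupling J_perp = (1/(2*beta)) * ln coth(beta*Gamma/P),
    with Gamma linearly annealed toward zero. Parameters are illustrative."""
    n = J.shape[0]
    reps = rng.choice([-1, 1], size=(P, n))
    for t in range(sweeps):
        gamma = gamma0 * (1.0 - t / sweeps) + 1e-6
        j_perp = 0.5 / beta * np.log(1.0 / np.tanh(beta * gamma / P))
        for k in range(P):
            up, down = reps[(k + 1) % P], reps[(k - 1) % P]
            for i in rng.permutation(n):
                field = (J[i] @ reps[k] + h[i]) / P + j_perp * (up[i] + down[i])
                dE = 2.0 * reps[k, i] * field            # cost of flipping spin i in slice k
                if dE <= 0 or rng.random() < np.exp(-beta * dE):
                    reps[k, i] = -reps[k, i]
    return min(problem_energy(r, J, h) for r in reps)

# toy spin-glass instance with hypothetical random couplings
n = 40
J = rng.choice([-1.0, 1.0], size=(n, n))
J = np.triu(J, 1); J = J + J.T
print("best replica energy:", sqa(J, np.zeros(n)))
```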
The sensitivity of Global Navigation Satellite System (GNSS) receivers to ionospheric disturbances, together with the constant growth of GNSS applications, is nowadays resulting in increased concern among GNSS users about the impacts of ionospheric disturbances at mid-latitudes. The geomagnetic storm of June 2015 is an example of the rare phenomenon of a spill-over of equatorial plasma bubbles well north of their habitual latitudes. We study the occurrence of small- and medium-scale irregularities in the North Atlantic and Eastern Mediterranean mid- and low-latitude zone by analysing the behaviour of the amplitude scintillation index S4 and of the rate of total electron content index (ROTI) during this storm. In addition, large-scale perturbations of the ionospheric electron density were studied using ground- and space-borne instruments, thus characterizing a complex perturbation behaviour over the region. The multi-source data allow us to characterize the impact of irregularities of different scales, to better understand the ionospheric dynamics, and to stress the importance of proper monitoring of the ionosphere in the studied region.
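For reference, ROTI is commonly computed as the standard deviation of the rate of TEC change (ROT, in TECU/min) over a sliding window of a few minutes; the sketch below applies this textbook definition to a synthetic slant-TEC series (the 30 s sampling, 5-minute window, and synthetic data are assumptions, not values from the study).

```python
import numpy as np

def roti(stec, sample_s=30.0, window_s=300.0):
    """Rate Of TEC Index: standard deviation of the rate of TEC change (ROT, in
    TECU/min) over a sliding window (commonly 5 minutes). Input 'stec' is a slant
    TEC series in TECU at a fixed sampling interval."""
    rot = np.diff(stec) / (sample_s / 60.0)        # TECU per minute
    w = int(round(window_s / sample_s))
    out = np.full(rot.size, np.nan)
    for k in range(w, rot.size + 1):
        seg = rot[k - w:k]
        out[k - 1] = np.sqrt(np.mean(seg**2) - np.mean(seg)**2)
    return out

# synthetic example: quiet TEC with a burst of irregularities in the middle
rng = np.random.default_rng(8)
t = np.arange(0, 3600, 30.0)
stec = 20 + 2 * np.sin(2 * np.pi * t / 3600) + 0.02 * rng.standard_normal(t.size)
stec[60:80] += np.cumsum(0.5 * rng.standard_normal(20))   # irregular interval
print("max ROTI:", np.nanmax(roti(stec)))
```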
The variability in the magnetic activity of the Sun is the main source of the observed changes in the plasma and electromagnetic environments within the heliosphere. The primary way in which solar activity affects the Earth's environment is via the solar wind and its transients. However, the relationship between solar activity and solar wind is not the same on Space Weather and Space Climate time scales. In this work, we investigate this relationship using data spanning five solar cycles of the Ca II K index and of solar wind parameters, taking advantage of the Hilbert-Huang Transform, which allows us to separate the contributions at different time scales. By filtering out the high-frequency components and looking at decennial time scales, we confirm the presence of a delayed response of the solar wind to Ca II K index variations, with a time lag of ~3.1 years for the speed and ~3.4 years for the dynamic pressure. To place the results on a firmer footing, we use a Transfer Entropy approach to investigate the information flow between the quantities and to test the causality of the relation. The time lags from the latter are consistent with the cross-correlation ones, pointing to a statistically significant information flow from the Ca II K index to the solar wind dynamic pressure that peaks at a time lag of ~3.6 years. Such a result could be of relevance for building a predictive model in a Space Climate context.
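As a minimal illustration of the lagged-response estimate described above (not the paper's actual pipeline), the sketch below finds the lag that maximizes the normalized cross-correlation between two series, applied to a synthetic solar-cycle-like signal and a delayed noisy copy; the sampling, delay, and noise levels are assumptions.

```python
import numpy as np

def lag_of_max_xcorr(x, y, max_lag):
    """Lag (in samples) at which the normalized cross-correlation between x and y
    is maximal; a positive lag means y follows x."""
    x = (x - x.mean()) / x.std()
    y = (y - y.mean()) / y.std()
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.mean(x[max(0, -L):len(x) - max(0, L)] * y[max(0, L):len(y) - max(0, -L)])
            for L in lags]
    return lags[int(np.argmax(corr))], np.max(corr)

# synthetic monthly series: y is a delayed, noisy copy of x (delay ~ 37 months ≈ 3.1 yr)
rng = np.random.default_rng(7)
t = np.arange(600)
x = np.sin(2 * np.pi * t / 132) + 0.2 * rng.standard_normal(t.size)   # ~11-yr cycle
y = np.roll(x, 37) + 0.2 * rng.standard_normal(t.size)
lag, c = lag_of_max_xcorr(x, y, max_lag=60)
print("best lag: %d months (%.1f yr), corr %.2f" % (lag, lag / 12, c))
```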
We present the first density model of Stromboli volcano (Aeolian Islands, Italy) obtained by simultaneously inverting land-based (543) and sea-surface (327) relative gravity data. Modern positioning technology, a 1 × 1 m digital elevation model, and a 15 × 15 m bathymetric model made it possible to obtain a detailed 3-D density model through an iteratively reweighted smoothness-constrained least-squares inversion that explained the land-based gravity data to 0.09 mGal and the sea-surface data to 5 mGal. Our inverse formulation avoids introducing any assumptions about density magnitudes. At 125 m depth below the land surface, the inferred mean density of the island is 2380 kg m⁻³, with corresponding 2.5 and 97.5 percentiles of 2200 and 2530 kg m⁻³. This density range covers the rock densities of new and previously published samples of the Paleostromboli I, Vancori, Neostromboli and San Bartolo lava flows. High-density anomalies in the central and southern parts of the island can be related to two main degassing faults crossing the island (N41 and N64) that are interpreted as preferential regions of dyke intrusion. In addition, two low-density anomalies are found in the northeastern part and in the summit area of the island. These anomalies appear to be geographically related to past paroxysmal explosive phreatomagmatic events that have played important roles in the evolution of Stromboli Island by forming the Scari caldera and the Neostromboli crater, respectively.
Probabilistic Ising machines (PIMs) provide a path to solving many computationally hard problems more efficiently than deterministic algorithms on von Neumann computers. Stochastic magnetic tunnel junctions (S-MTJs), which are engineered to be thermally unstable, show promise as entropy sources in PIMs. However, scaling up S-MTJ-PIMs is challenging, as it requires fine control of a small magnetic energy barrier across large numbers of devices. In addition, non-spintronic components of S-MTJ-PIMs to date have been primarily realized using general-purpose processors or field-programmable gate arrays. Reaching the ultimate performance of spintronic PIMs, however, requires co-designed application-specific integrated circuits (ASICs), combining CMOS with spintronic entropy sources. Here we demonstrate an ASIC in 130 nm foundry CMOS, which implements integer factorization as a representative hard optimization problem, using PIM-based invertible logic gates realized with 1143 probabilistic bits. The ASIC uses stochastic bit sequences read from an adjacent voltage-controlled (V-) MTJ chip. The V-MTJs are designed to be thermally stable in the absence of voltage, and generate random bits on-demand in response to 10 ns pulses using the voltage-controlled magnetic anisotropy effect. We experimentally demonstrate the chip's functionality and provide projections for designs in advanced nodes, illustrating a path to millions of probabilistic bits on a single CMOS+V-MTJ chip.
The decomposition of a signal is a fundamental tool in many fields of research, including signal processing, geophysics, astrophysics, engineering, and medicine. By breaking complex signals down into simpler oscillatory components, we can enhance the understanding and processing of the data, unveiling the hidden information they contain. Traditional methods such as Fourier analysis and wavelet transforms are effective for one-dimensional stationary signals but struggle with non-stationary data sets and, in the case of wavelets, require the selection of predefined basis functions. In contrast, the Empirical Mode Decomposition (EMD) method and its variants, such as Iterative Filtering (IF), have emerged as effective nonlinear approaches that adapt to the signal without any a priori assumptions. To accelerate these methods, the Fast Iterative Filtering (FIF) algorithm was developed, and further extensions, such as Multivariate FIF (MvFIF) and Multidimensional FIF (FIF2), have been proposed to handle higher-dimensional data. In this work, we introduce the Multidimensional and Multivariate Fast Iterative Filtering (MdMvFIF) technique, an innovative method that extends FIF to data that vary simultaneously in space and time. The new algorithm extracts Intrinsic Mode Functions (IMFs) from complex signals that vary in both space and time, overcoming limitations of prior methods. The potential of the proposed method is demonstrated through applications to artificial and real-life signals, highlighting its versatility and effectiveness in decomposing multidimensional and multivariate nonstationary signals. The MdMvFIF method offers a powerful tool for advanced signal analysis across many scientific and engineering disciplines.
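MdMvFIF itself is not reproduced here; to convey the core idea of iterative filtering that it generalizes, the sketch below extracts IMFs from a 1D signal by repeatedly subtracting a moving average (the real IF/FIF algorithms use data-adaptive filter lengths and FFT acceleration, and MdMvFIF extends this to multivariate, multidimensional data). Filter lengths and the test signal are illustrative.

```python
import numpy as np

def moving_average(x, half_len):
    """Simple symmetric moving average used as the low-pass operator."""
    w = np.ones(2 * half_len + 1)
    w /= w.sum()
    return np.convolve(x, w, mode="same")

def iterative_filtering(signal, half_len, n_imfs=3, inner_steps=40):
    """Toy 1D iterative-filtering sketch: each IMF is obtained by repeatedly
    subtracting a moving average from the current residual."""
    imfs = []
    residual = signal.astype(float)
    for _ in range(n_imfs):
        mode = residual.copy()
        for _ in range(inner_steps):
            mode -= moving_average(mode, half_len)
        imfs.append(mode)
        residual = residual - mode
        half_len *= 2          # coarser filter for the next, slower mode
    return imfs, residual

# synthetic two-tone signal plus trend
t = np.linspace(0, 1, 2000)
x = np.sin(2 * np.pi * 50 * t) + 0.5 * np.sin(2 * np.pi * 5 * t) + t
imfs, res = iterative_filtering(x, half_len=8, n_imfs=2)
print([float(np.std(m)) for m in imfs], float(np.std(res)))
```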
Accurate and reliable diagnosis of diseases is crucial for enabling timely medical treatment and enhancing patient survival rates. In recent years, Machine Learning has revolutionized diagnostic practices through classification models capable of identifying diseases. However, these classification problems often suffer from significant class imbalance, which can limit the effectiveness of traditional models. This has motivated interest in quantum models, driven by the promise of overcoming the limitations of their classical counterparts thanks to their ability to express complex patterns by mapping data into a higher-dimensional computational space.
Laser interferometry enables remote measurement of the microscopic length changes of deployed telecommunication cables caused by earthquakes. Its long range and compatibility with data traffic make it uniquely suited to the exploration of remote regions, as well as of highly populated areas where optical networks are pervasive, and its large-scale implementation is attractive for both Earth scientists and telecom operators. However, validation and modeling of its response and sensitivity are still at an early stage and suffer from the lack of statistically significant event catalogs and the limited availability of co-located seismometers. We implemented laser interferometry on a land-based telecommunication cable and analyzed 1.5 years of continuous acquisition, with successful detections of events over a broad range of magnitudes, including very weak ones. By comparing fiber and seismometer recordings, we determined relations between the cable's detection probability and the magnitude and distance of events, and showed that spectral analysis of the recorded data allows inferences about earthquake dynamics. Our results show that quantitative analysis is possible with this sensing technique and support the interpretation of data from the growing number of interferometric deployments. We anticipate that the ease of integration and the scalability of laser interferometry within existing telecommunication grids will be useful for daily seismicity monitoring and, in perspective, exploitable for civil protection purposes.