In a day-ahead market, energy buyers and sellers submit bids for a particular future time, specifying the amount of energy they wish to buy or sell and the price they are prepared to pay or receive. However, the dynamics forming the Market Clearing Price (MCP) through this bidding mechanism are frequently overlooked in the literature on energy market modelling: forecasting models usually focus on predicting the MCP rather than on building the optimal supply and demand curves for a given price scenario. Taking the latter perspective, this article develops a bidding strategy for a seller in a continuous action space through a single-agent Reinforcement Learning algorithm, specifically the Deep Deterministic Policy Gradient (DDPG). The algorithm controls the offering curve (action) based on past data (state) to optimize future payoffs (rewards). The participant can access historical data on production costs, capacity, and prices for various sources, including renewables and fossil fuels. Over time, the participant learns to operate in the market more efficiently so as to maximize its individual payoff.
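The deterministic-policy-gradient idea at the core of DDPG can be illustrated with a deliberately tiny toy: a linear policy maps a price feature to a continuous offer, and we ascend a finite-difference estimate of the expected payoff gradient. The market model, the parameters, and the finite-difference surrogate (standing in for DDPG's critic network) are all illustrative assumptions, not the paper's setup.

```python
import random

random.seed(0)

def payoff(price, clearing_price=50.0, capacity=10.0):
    """Toy market: the full capacity is sold iff the offer does not
    exceed the clearing price; revenue is price * quantity."""
    return price * capacity if price <= clearing_price else 0.0

def expected_payoff(theta, states):
    # Deterministic linear policy: offer = theta * state
    return sum(payoff(theta * s) for s in states) / len(states)

states = [40.0 + 10.0 * random.random() for _ in range(200)]  # past-price features
theta, eps, lr = 0.1, 1e-2, 1e-4
for _ in range(500):
    # Finite-difference surrogate for the deterministic policy gradient
    grad = (expected_payoff(theta + eps, states)
            - expected_payoff(theta - eps, states)) / (2 * eps)
    theta += lr * grad
# The learned policy offers close to (but not above) the clearing price.
```

In the actual method an actor network replaces the linear policy and a learned critic replaces the finite-difference gradient, but the ascent direction plays the same role.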
We present a hybrid stochastic-continuum model to study the sulphation of calcium carbonate and the consequent formation of gypsum, a key phenomenon driving marble deterioration. While calcium carbonate and gypsum are continuous random fields evolving according to random ordinary differential equations, the dynamics of the sulfuric acid particles follow Itô-type stochastic differential equations. The particle evolution incorporates both strong repulsion between particles via the Lennard-Jones potential and non-local interactions with the continuum environment. The particle-continuum coupling is also achieved through the chemical reaction, modeled as a Poisson counting process. We simulate the spatiotemporal evolution of this corrosion process using the Euler-Maruyama algorithm with varying initial data, combined with finite elements for the spatial discretization. Despite symmetric initial data, our simulations highlight an uneven progression of corrosion due to the stochastic influences in the model.
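As a concrete illustration of the time-stepping scheme mentioned above, here is a minimal Euler-Maruyama integrator for a scalar Itô SDE dX = a(X) dt + b(X) dW; the Ornstein-Uhlenbeck drift and diffusion below are generic placeholders, not the paper's particle model.

```python
import math
import random

random.seed(1)

def euler_maruyama(a, b, x0, dt, n_steps):
    """Integrate dX = a(X) dt + b(X) dW with the Euler-Maruyama scheme."""
    x, path = x0, [x0]
    for _ in range(n_steps):
        dw = random.gauss(0.0, math.sqrt(dt))  # Brownian increment
        x = x + a(x) * dt + b(x) * dw
        path.append(x)
    return path

# Ornstein-Uhlenbeck toy: mean reversion toward 0 with constant noise
path = euler_maruyama(a=lambda x: -x, b=lambda x: 0.3,
                      x0=2.0, dt=0.01, n_steps=1000)
```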
We review the method of differential equations for the evaluation of multi-loop Feynman integrals. In particular, we focus on the series expansion approach for solving the system of differential equations, and we discuss how to perform the analytical continuation of the result to the entire (complex) phase space. This approach allows us to consider arbitrary internal complex masses. This review is based on a lecture given by the author at the "Advanced School and Workshop on Multiloop Scattering Amplitudes" held at NISER, Bhubaneswar (India), in January 2024.
We investigate the origin of the unusually large electroweak (EW) radiative effects observed in the extraction of the spin-density matrix and related observables at colliders, focusing on leptonic Z-boson decays. We compute the Z-boson-decay spin-density matrix at next-to-leading order (NLO) and find that, while its analytic structure remains essentially unchanged with respect to leading order, the EW corrections induce a sizeable shift of about 35\% in the spin-analysing-power parameter $\eta_\ell$. This effect alone accounts for the striking size of the corrections. For boosted Z bosons, we further show that the treatment of photon radiation in lepton-dressing algorithms significantly affects the extraction of spin-density-matrix coefficients at NLO and must be carefully controlled. To address these challenges, we propose a quantum tomography procedure that is applicable to any final state with one or more on-shell Z bosons and is robust under higher-order corrections. We illustrate its validity and limitations in ${\textrm{pp}} \to {\textrm{ZZ}} \to 4\ell$ and in heavy ($M_{\textrm{H}} > 2 M_{\textrm{Z}}$) Higgs-boson decay ${\textrm{H}} \to {\textrm{ZZ}} \to 4\ell$.
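The role of the spin-analysing power can be made concrete with a toy moment analysis: for a decay angle distributed as (1 + α cosθ)/2, the coefficient α (the analogue of $\eta_\ell$) is recovered from the first moment as α = 3⟨cosθ⟩, so any shift in the analysing power rescales every polarization coefficient extracted this way. The sampler and numbers below are purely illustrative, not the paper's NLO calculation.

```python
import math
import random

random.seed(5)

def sample_costheta(alpha):
    """Inverse-CDF sampling of p(c) = (1 + alpha*c)/2 on [-1, 1]."""
    u = random.random()
    if abs(alpha) < 1e-12:
        return 2.0 * u - 1.0
    return (math.sqrt((1.0 - alpha) ** 2 + 4.0 * alpha * u) - 1.0) / alpha

alpha_true = 0.4
cs = [sample_costheta(alpha_true) for _ in range(200_000)]
alpha_hat = 3.0 * sum(cs) / len(cs)   # moment estimator of the analysing power
```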
Smartphones and wearable devices, along with Artificial Intelligence, can be a game-changer in pandemic control, by implementing low-cost and pervasive solutions that recognize the development of new diseases at their early stages and potentially avoid the rise of new outbreaks. Some recent works show promise in detecting diagnostic signals of COVID-19 from voice and coughs by using machine learning and hand-crafted acoustic features. In this paper, we investigate the capabilities of the recently proposed deep embedding model L3-Net to automatically extract meaningful features from raw respiratory audio recordings in order to improve the performance of standard machine learning classifiers in discriminating between COVID-19-positive and -negative subjects from smartphone data. We evaluated the proposed model on 3 datasets, comparing the obtained results with those of two reference works. Results show that the combination of L3-Net with hand-crafted features outperforms the other works by 28.57% in terms of AUC in a set of subject-independent experiments. This result paves the way for further investigation of different deep audio embeddings, also for the automatic detection of other diseases.
Given a reaction-diffusion system modelling the sulphation phenomenon, we derive a single regularised non-conservative and path-dependent nonlinear partial differential equation and propose a probabilistic interpretation using a non-Markovian McKean-type stochastic differential equation. We discuss the well-posedness of such a stochastic model, and we establish the propagation of chaos property for the related interacting particle system.
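The propagation-of-chaos statement refers to an interacting particle system whose drift depends on the empirical law of the particles. A minimal sketch, with a generic linear mean-field drift rather than the paper's sulphation dynamics:

```python
import math
import random

random.seed(4)

def simulate(n=200, steps=500, dt=0.01, sigma=0.2):
    """Each particle is attracted toward the empirical mean of the system,
    a toy stand-in for a McKean-type measure-dependent drift."""
    xs = [random.gauss(1.0, 0.5) for _ in range(n)]
    for _ in range(steps):
        mean = sum(xs) / n                 # empirical measure enters the drift
        xs = [x - (x - mean) * dt + sigma * random.gauss(0.0, math.sqrt(dt))
              for x in xs]
    return xs

xs = simulate()
```

As n grows, each particle's drift depends on the others only through the empirical mean, which concentrates; this decoupling is what propagation of chaos formalizes.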
With the remarkable success of deep learning, applying such techniques to electromagnetic (EM) methods has emerged as a promising research direction to overcome the limitations of conventional approaches. The effectiveness of deep learning methods depends heavily on the quality of datasets, which directly influences model performance and generalization ability. Existing application studies often construct datasets from random one-dimensional or structurally simple three-dimensional models, which fail to represent the complexity of real geological environments. Furthermore, the absence of standardized, publicly available three-dimensional geoelectric datasets continues to hinder progress in deep-learning-based EM exploration. To address these limitations, we present OpenEM, a large-scale, multi-structural, three-dimensional geoelectric dataset that encompasses a broad range of geologically plausible subsurface structures. OpenEM consists of nine categories of geoelectric models, spanning from simple configurations with anomalous bodies in a half-space to more complex structures such as flat layers, folded layers, flat faults, curved faults, and their corresponding variants with anomalous bodies. Since three-dimensional forward modeling in electromagnetics is extremely time-consuming, we further developed a deep-learning-based fast forward modeling approach for OpenEM, enabling efficient and reliable forward modeling across the entire dataset. This capability allows OpenEM to be rapidly deployed for a wide range of tasks. OpenEM provides a unified, comprehensive, and large-scale dataset for common EM exploration systems to accelerate the application of deep learning in electromagnetic methods. The complete dataset, along with the forward modeling codes and trained models, is publicly available at this https URL.
Multifragment events resulting from peripheral Au + Au collisions at 35 MeV/nucleon are analysed in terms of critical behavior. The analysis of most of the criticality signals proposed so far (conditional moments of charge distributions, the Campi scatter plot, fluctuations of the size of the largest fragment, intermittency analysis) is consistent with the occurrence of critical behavior in the system.
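For concreteness, the conditional moments underlying several of these criticality signals (Campi's analysis) are computed per event from the fragment charge distribution with the largest fragment excluded; the event below is a made-up toy, not data from the experiment.

```python
def conditional_moment(charges, k):
    """k-th conditional moment: sum of Z**k over fragments, largest excluded."""
    rest = sorted(charges)[:-1]
    return sum(z ** k for z in rest)

event = [30, 8, 5, 3, 2, 2, 1]        # toy fragment charges of one event
m0 = conditional_moment(event, 0)     # number of fragments (largest excluded)
m1 = conditional_moment(event, 1)     # bound charge
m2 = conditional_moment(event, 2)
gamma2 = m2 * m0 / m1 ** 2            # Campi's reduced variance
```

Event-by-event values of such moments (and of the size of the largest fragment) are what the scatter plots and fluctuation analyses mentioned above are built from.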
We give a bird's-eye view of the plastic deformation of crystals aimed at the statistical physics community, and a broad introduction into the statistical theories of forced rigid systems aimed at the plasticity community. Memory effects in magnets, spin glasses, charge density waves, and dilute colloidal suspensions are discussed in relation to the onset of plastic yielding in crystals. Dislocation avalanches and complex dislocation tangles are discussed via a brief introduction to the renormalization group and scaling. Analogies to emergent scale invariance in fracture, jamming, coarsening, and a variety of depinning transitions are explored. Dislocation dynamics in crystals challenges nonequilibrium statistical physics. Statistical physics provides both cautionary tales of subtle memory effects in nonequilibrium systems, and systematic tools designed to address complex scale-invariant behavior on multiple length and time scales.
We investigate the qualitative behaviour of the solutions of a stochastic boundary value problem on the half-line for a nonlinear system of parabolic reaction-diffusion equations, from a numerical point of view. The model describes the chemical aggression of calcium carbonate stones under the attack of sulphur dioxide. The dynamical boundary condition is given by a Pearson diffusion, which is original in the context of the degradation of cultural heritage. We first discuss a scheme based on the Lamperti transformation for the stochastic differential equation to preserve the boundary and a splitting strategy for the partial differential equation based on recent theoretical results. Positivity, boundedness, and stability are established. The impact of boundary noise on the solution and its qualitative behaviour both in the slow and fast regimes is discussed in several numerical experiments.
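The Lamperti idea can be sketched for a square-root (Pearson-type) diffusion dX = κ(μ − X) dt + σ√X dW: the change of variable Y = 2√X/σ produces a unit-diffusion SDE, and stepping in Y with the back-transform X = (σY/2)² preserves positivity by construction. The parameters and the explicit Euler step below are illustrative, not the paper's scheme.

```python
import math
import random

random.seed(2)
kappa, mu, sigma = 1.0, 1.0, 0.5   # Feller condition 2*kappa*mu > sigma**2 holds

def drift_y(y):
    """Ito-transformed drift of Y = 2*sqrt(X)/sigma (diffusion becomes 1)."""
    x = (sigma * y / 2.0) ** 2
    return kappa * (mu - x) / (sigma * math.sqrt(x)) - sigma / (4.0 * math.sqrt(x))

def step(x, dt):
    y = 2.0 * math.sqrt(x) / sigma
    y += drift_y(y) * dt + random.gauss(0.0, math.sqrt(dt))   # unit noise
    return (sigma * y / 2.0) ** 2   # back-transform keeps X >= 0

x = 1.0
for _ in range(2000):
    x = step(x, 1e-3)
```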
The next generation of data-intensive surveys is bound to produce a vast amount of data, which can be dealt with using machine-learning methods to explore possible correlations within the multi-dimensional parameter space. We explore the classification capabilities of convolutional neural networks (CNNs) to identify galaxy cluster members (CLMs) by using Hubble Space Telescope (HST) images of 15 galaxy clusters at redshift 0.19
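The basic operation such a CNN classifier stacks is a discrete cross-correlation of an image with a small learned kernel; a dependency-free sketch with toy values, unrelated to the HST data:

```python
def conv2d(img, ker):
    """Valid 2-D cross-correlation, the core operation of a CNN layer."""
    kh, kw = len(ker), len(ker[0])
    return [[sum(img[i + a][j + b] * ker[a][b]
                 for a in range(kh) for b in range(kw))
             for j in range(len(img[0]) - kw + 1)]
            for i in range(len(img) - kh + 1)]

img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [0, 0, 1, 1]]
edge = [[-1, 1],
        [-1, 1]]          # responds to vertical edges
out = conv2d(img, edge)   # peaks where the 0 -> 1 transition sits
```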
We present the first analytical results for the ${\cal O}(\alpha\alpha_s)$ corrections to the total cross section for the inclusive production of an on-shell $Z$ boson at hadron colliders. We include the complete set of contributions, with photonic and massive weak gauge boson effects, which have been computed in analytical form and expressed in terms of polylogarithmic and elliptic integrals. We present numerical results relevant for precision studies at the LHC. These corrections increase the accuracy of the predictions and contribute to the reduction of the QCD component of the theoretical uncertainty.
In computational physics, chemistry, and biology, the implementation of new techniques in a shared and open source software lowers barriers to entry and promotes rapid scientific progress. However, effectively training new software users presents several challenges. Common methods like direct knowledge transfer and in-person workshops are limited in reach and comprehensiveness. Furthermore, while the COVID-19 pandemic highlighted the benefits of online training, traditional online tutorials can quickly become outdated and may not cover all the software's functionalities. To address these issues, here we introduce ``PLUMED Tutorials'', a collaborative model for developing, sharing, and updating online tutorials. This initiative utilizes repository management and continuous integration to ensure compatibility with software updates. Moreover, the tutorials are interconnected to form a structured learning path and are enriched with automatic annotations to provide broader context. This paper illustrates the development, features, and advantages of PLUMED Tutorials, aiming to foster an open community for creating and sharing educational resources.
In this work we introduce a new class of gradient-free global optimization methods based on a binary interaction dynamics governed by a Boltzmann-type equation. In each interaction, the particles update their positions taking into account both the best microscopic binary position and the best macroscopic collective position. In the mean-field limit we show that the resulting Fokker-Planck partial differential equations generalize the current class of consensus-based optimization (CBO) methods. For the latter methods, convergence to the global minimizer can be shown for a large class of functions. Algorithmic implementations inspired by the well-known direct simulation Monte Carlo methods in kinetic theory are derived and discussed. Several examples on prototype test functions for global optimization are reported, including applications to machine learning.
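For reference, here is a minimal consensus-based optimization (CBO) loop, the class of mean-field methods the abstract generalizes: particles drift toward a Gibbs-weighted consensus point and diffuse proportionally to their distance from it. All parameters and the 1-D objective are illustrative.

```python
import math
import random

random.seed(3)

def f(x):
    return (x - 1.0) ** 2          # toy objective, global minimum at x = 1

def cbo(f, n=50, steps=400, dt=0.05, lam=1.0, sig=0.6, beta=30.0):
    xs = [random.uniform(-3.0, 3.0) for _ in range(n)]
    for _ in range(steps):
        ws = [math.exp(-beta * f(x)) for x in xs]    # Gibbs weights
        consensus = sum(w * x for w, x in zip(ws, xs)) / sum(ws)
        xs = [x - lam * (x - consensus) * dt
                + sig * abs(x - consensus) * random.gauss(0.0, math.sqrt(dt))
              for x in xs]
    return consensus

x_star = cbo(f)   # should land near the global minimizer
```

The noise vanishes as particles collapse onto the consensus point, which is what makes a mean-field convergence analysis possible for large β.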
We report on experiments in which the Texas Petawatt laser irradiated a mixture of deuterium or deuterated methane clusters and helium-3 gas, generating three types of nuclear fusion reactions: D(d, 3He)n, D(d, t)p and 3He(d, p)4He. We measured the yields of fusion neutrons and protons from these reactions and found them to agree with yields based on a simple cylindrical plasma model using known cross sections and measured plasma parameters. Within our measurement errors, the fusion products were isotropically distributed. Plasma temperatures, important for the cross sections, were determined by two independent methods: (1) deuterium ion time-of-flight, and (2) utilizing the ratio of neutron yield to proton yield from D(d, 3He)n and 3He(d, p)4He reactions, respectively. This experiment produced the highest ion temperature ever achieved with laser-irradiated deuterium clusters.
The Haldane model is a paradigmatic 2d lattice model exhibiting the integer quantum Hall effect. We consider an interacting version of the model, and prove that for short-range interactions, smaller than the bandwidth, the Hall conductivity is quantized, for all the values of the parameters outside two critical curves, across which the model undergoes a `topological' phase transition: the Hall coefficient remains integer and constant as long as we continuously deform the parameters without crossing the curves; when this happens, the Hall coefficient jumps abruptly to a different integer. Previous works were limited to the perturbative regime, in which the interaction is much smaller than the bare gap, so they were restricted to regions far from the critical lines. The non-renormalization of the Hall conductivity arises as a consequence of lattice conservation laws and of the regularity properties of the current-current correlations. Our method provides a full construction of the critical curves, which are modified (`dressed') by the electron-electron interaction. The shift of the transition curves manifests itself via apparent infrared divergences in the naive perturbative series, which we resolve via renormalization group methods.
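A hedged numerical companion (not the paper's non-perturbative construction): in the non-interacting Haldane model the quantized Hall coefficient can be computed as the Chern number of the lower band, here via the Fukui-Hatsugai link method on a discretized Brillouin zone. The parameters are illustrative and sit inside a topological phase.

```python
import cmath
import math

# Illustrative parameters in a topological phase: |M| < 3*sqrt(3)*t2*sin(phi)
t1, t2, phi, M = 1.0, 0.1, math.pi / 2.0, 0.2
A1, A2 = (1.5, math.sqrt(3) / 2.0), (1.5, -math.sqrt(3) / 2.0)  # Bravais vectors
G1 = (2.0 * math.pi / 3.0, 2.0 * math.pi / math.sqrt(3))        # reciprocal vectors
G2 = (2.0 * math.pi / 3.0, -2.0 * math.pi / math.sqrt(3))
NNN = [(0.0, math.sqrt(3)), (-1.5, -math.sqrt(3) / 2.0), (1.5, -math.sqrt(3) / 2.0)]

def lower_band_state(kx, ky):
    f = t1 * (1.0 + cmath.exp(1j * (kx * A1[0] + ky * A1[1]))
                  + cmath.exp(1j * (kx * A2[0] + ky * A2[1])))
    d3 = M - 2.0 * t2 * math.sin(phi) * sum(math.sin(kx * vx + ky * vy)
                                            for vx, vy in NNN)
    e = math.sqrt(abs(f) ** 2 + d3 ** 2)
    # H(k) = [[d3, f], [conj(f), -d3]]; pick a nonvanishing -e eigenvector
    v = (f, -(d3 + e)) if d3 >= 0.0 else (d3 - e, f.conjugate())
    norm = math.sqrt(abs(v[0]) ** 2 + abs(v[1]) ** 2)
    return (v[0] / norm, v[1] / norm)

N = 30
u = [[lower_band_state(i / N * G1[0] + j / N * G2[0],
                       i / N * G1[1] + j / N * G2[1]) for j in range(N)]
     for i in range(N)]

def link(a, b):
    return a[0].conjugate() * b[0] + a[1].conjugate() * b[1]

total = 0.0
for i in range(N):
    for j in range(N):
        a, b = u[i][j], u[(i + 1) % N][j]
        c, d = u[(i + 1) % N][(j + 1) % N], u[i][(j + 1) % N]
        total += cmath.phase(link(a, b) * link(b, c) * link(c, d) * link(d, a))
chern = round(total / (2.0 * math.pi))   # quantized Hall coefficient (+-1 here)
```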
This paper proposes a weakly-supervised machine learning approach aiming at a tool that alerts patients about possible respiratory diseases. Various types of pathologies may affect the respiratory system, potentially leading to severe diseases and, in certain cases, death. In general, effective prevention practices are considered key to improving a patient's health condition. The proposed method strives to provide an easily accessible tool for the automatic diagnosis of respiratory diseases. Specifically, the method leverages Variational Autoencoder architectures, permitting training pipelines of limited complexity and relatively small-sized datasets. Importantly, it offers an accuracy of 57%, which is in line with existing strongly-supervised approaches.
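The two terms such a VAE pipeline optimizes can be written down compactly: a reconstruction error plus the KL divergence keeping the Gaussian posterior N(μ, σ²) close to the standard normal prior. The closed-form KL below is the standard one; the numbers are illustrative, not from the paper.

```python
import math

def kl_gauss(mu, log_var):
    """KL( N(mu, exp(log_var)) || N(0, 1) ), summed over latent dimensions."""
    return 0.5 * sum(math.exp(lv) + m * m - 1.0 - lv
                     for m, lv in zip(mu, log_var))

def neg_elbo(x, x_rec, mu, log_var):
    recon = sum((a - b) ** 2 for a, b in zip(x, x_rec))  # Gaussian likelihood term
    return recon + kl_gauss(mu, log_var)

loss = neg_elbo(x=[0.2, 0.8], x_rec=[0.1, 0.7], mu=[0.0, 0.5], log_var=[0.0, 0.0])
```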
We present an algorithm to evaluate multiloop Feynman integrals with an arbitrary number of internal massive lines, with the masses being in general complex-valued, and its implementation in the \textsc{Mathematica} package \textsc{SeaSyde}. The implementation solves by series expansions the system of differential equations satisfied by the Master Integrals. At variance with other existing codes, the analytical continuation of the solution is performed in the complex plane associated to each kinematical invariant. We present the results of the evaluation of the Master Integrals relevant for the NNLO QCD-EW corrections to the neutral-current Drell-Yan processes.
While the standard network description of complex systems is based on quantifying links between pairs of system units, higher-order interactions (HOIs) involving three or more units play a major role in governing the collective network behavior. This work introduces an approach to quantify pairwise and HOIs for multivariate rhythmic processes interacting across multiple time scales. We define the so-called O-information rate (OIR) as a new metric to assess HOIs for multivariate time series, and propose a framework to decompose it into measures quantifying Granger-causal and instantaneous influences, as well as to expand it in the frequency domain. The framework exploits the spectral representation of vector autoregressive and state-space models to assess synergistic and redundant interactions among groups of processes, both in specific bands and in the time domain after whole-band integration. Validation on simulated networks illustrates how the spectral OIR can highlight redundant and synergistic HOIs emerging at specific frequencies but not using time-domain measures. The application to physiological networks described by heart period, arterial pressure and respiration measured in healthy subjects during paced breathing, and to brain networks described by ECoG signals acquired in an animal experiment during anesthesia, documents the capability of our approach to identify informational circuits relevant to well-defined cardiovascular oscillations and brain rhythms and related to specific physiological mechanisms of autonomic control and altered consciousness. The proposed framework allows a hierarchically-organized evaluation of time- and frequency-domain interactions in networks mapped by multivariate time series, and its high flexibility and scalability make it suitable to investigate networks beyond pairwise interactions in neuroscience, physiology and other fields.
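As a static reference point for the OIR, the plain O-information of n jointly Gaussian variables has a closed form in terms of covariance determinants, O = (n−2)H(X) + Σ_j [H(X_j) − H(X_−j)], with positive values flagging redundancy and negative ones synergy. The covariance below is an illustrative redundancy-dominated example, not data from the paper.

```python
import math

def det(m):
    """Determinant by Laplace expansion (fine for small matrices)."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j + 1:] for row in m[1:]])
               for j in range(len(m)))

def h_gauss(cov):
    """Differential entropy of a multivariate Gaussian with covariance cov."""
    n = len(cov)
    return 0.5 * math.log((2.0 * math.pi * math.e) ** n * det(cov))

def o_information(cov):
    n = len(cov)
    drop = lambda k: [[cov[i][j] for j in range(n) if j != k]
                      for i in range(n) if i != k]
    return ((n - 2) * h_gauss(cov)
            + sum(h_gauss([[cov[j][j]]]) - h_gauss(drop(j)) for j in range(n)))

redundant = [[1.0, 0.8, 0.8],
             [0.8, 1.0, 0.8],
             [0.8, 0.8, 1.0]]     # common-driver-like covariance
o_red = o_information(redundant)  # positive: redundancy-dominated
```

The OIR of the abstract extends this quantity to rates of dynamic processes and decomposes it causally and spectrally, which this static Gaussian sketch does not capture.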
The purpose of this paper is to compare different learnable frontends in medical acoustics tasks. A framework has been implemented to classify human respiratory sounds and heartbeats into two categories, i.e. healthy or affected by pathologies. After obtaining two suitable datasets, we classified the sounds using two learnable state-of-the-art frontends -- LEAF and nnAudio -- plus a non-learnable baseline frontend, i.e. Mel-filterbanks. The computed features are then fed into two different CNN models, namely VGG16 and EfficientNet. The frontends are carefully benchmarked in terms of the number of parameters, computational resources, and effectiveness. This work demonstrates how the integration of learnable frontends in neural audio classification systems may improve performance, especially in the field of medical acoustics. However, the usage of such frontends makes the required amount of data even larger. Consequently, they are useful only if the amount of data available for training is large enough to support the feature learning process.
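The non-learnable baseline can be written down explicitly: a Mel filterbank is a set of triangular filters with centres equally spaced on the mel scale, applied to spectrogram bins. Sample rate, FFT size, and filter count below are illustrative, not the paper's configuration.

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters=8, n_fft=256, sr=8000):
    """Triangular filters with centres equally spaced on the mel scale."""
    top = hz_to_mel(sr / 2.0)
    mels = [i * top / (n_filters + 1) for i in range(n_filters + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / sr) for m in mels]
    fb = [[0.0] * (n_fft // 2 + 1) for _ in range(n_filters)]
    for i in range(1, n_filters + 1):
        lo, c, hi = bins[i - 1], bins[i], bins[i + 1]
        for k in range(lo, c):
            fb[i - 1][k] = (k - lo) / max(c - lo, 1)   # rising edge
        for k in range(c, hi):
            fb[i - 1][k] = (hi - k) / max(hi - c, 1)   # falling edge
    return fb

fb = mel_filterbank()
```

A learnable frontend such as LEAF replaces these fixed triangles with parametrized filters whose centres and bandwidths are trained jointly with the classifier.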