The remarkable capability of over-parameterised neural networks to generalise effectively has been explained by invoking a "simplicity bias": neural networks prevent overfitting by initially learning simple classifiers before progressing to more complex, non-linear functions. While simplicity biases have been described theoretically and experimentally in feed-forward networks for supervised learning, the extent to which they also explain the remarkable success of transformers trained with self-supervised techniques remains unclear. In our study, we demonstrate that transformers, trained on natural language data, also display a simplicity bias. Specifically, they sequentially learn many-body interactions among input tokens, reaching a saturation point in the prediction error for low-degree interactions while continuing to learn high-degree interactions. To conduct this analysis, we develop a procedure to generate clones of a given natural language data set, which rigorously capture the interactions between tokens up to a specified order. This approach opens up the possibility of studying how interactions of different orders in the data affect learning, in natural language processing and beyond.
Researchers from UCLouvain and collaborators developed a first-principles framework showing that thermal expansion and phonon anharmonicity have antagonistic impacts on the phonon-limited electrical resistivity of elemental metals. The framework accurately reproduces experimental resistivity values for Pb and Al across broad temperature ranges, resolving previous overestimations by explicitly including both effects.
In this paper, we consider an N-oscillator complexified Kuramoto model. We first observe that there are solutions exhibiting finite-time blow-up behavior in all coupling regimes. When the coupling strength \lambda > \lambda_c, sufficient conditions for various types of synchronization are established for general N \geq 2. On the other hand, we analyze the case when the coupling strength is weak. For N = 2 with coupling below \lambda_c, our complex-analytic approach not only recovers the periodic orbits reported by Thümler--Srinivas--Schröder--Timme but also provides, for the first time, their exact period T_{\omega,\lambda} = 2\pi/\sqrt{\omega^2 - \lambda^2}, confirming full phase locking. Furthermore, for the critical case \lambda = \lambda_c, we find that the complexified Kuramoto system admits homoclinic orbits. These phenomena significantly differentiate the complexified Kuramoto model from the real Kuramoto system, as synchronization never occurs when \lambda < \lambda_c in the latter. For N = 3, we demonstrate that if the natural frequencies are in arithmetic progression, non-trivial synchronization states can be achieved for certain initial conditions even when the coupling strength is weak. In particular, we characterize the critical coupling strength (\lambda/\lambda_c = 0.85218915...) at which a semistable equilibrium point in the real Kuramoto model bifurcates into a pair of stable and unstable equilibria, marking a new phenomenon in complexified Kuramoto models.
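The exact period quoted for N = 2 can be checked numerically. For two real Kuramoto oscillators, the phase difference \phi satisfies the standard reduction \dot\phi = \omega - \lambda\sin\phi (with \omega the frequency mismatch), and below the critical coupling \lambda_c = \omega the drift period is T = \int_0^{2\pi} d\phi/(\omega - \lambda\sin\phi) = 2\pi/\sqrt{\omega^2 - \lambda^2}. A minimal sketch, with illustrative parameter values not taken from the paper:

```python
import math

def drift_period(omega, lam, n=200_000):
    """Numerically evaluate T = \int_0^{2pi} dphi / (omega - lam*sin(phi)).

    The integrand is smooth and 2pi-periodic, so an equispaced Riemann sum
    over one full period (equivalent to the trapezoidal rule for periodic
    functions) converges extremely fast."""
    assert lam < omega, "valid only below the critical coupling lambda_c = omega"
    h = 2 * math.pi / n
    return sum(h / (omega - lam * math.sin(i * h)) for i in range(n))

omega, lam = 1.0, 0.5            # illustrative frequency mismatch and coupling
T_num = drift_period(omega, lam)
T_exact = 2 * math.pi / math.sqrt(omega**2 - lam**2)
print(T_num, T_exact)            # both ~ 7.2552
```

The agreement between the quadrature and the closed form illustrates why the complexified analysis can pin down the period exactly in this regime.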
We consider the coarsening properties of a kinetic Ising model with a memory field. The probability of a spin flip depends on the persistence time of the spin in its current state: the longer a spin has remained in a given state, the lower its flip probability. We numerically study the growth and persistence properties of such a system on a two-dimensional square lattice. The memory introduces energy barriers which freeze the system at zero temperature. At finite temperature we observe an apparent arrest of coarsening for low temperatures and long memory lengths. However, since the energy barriers introduced by memory are due to local effects, there exists a timescale on which coarsening takes place as in the Ising model. Moreover, the two-point correlation functions of the Ising model with and without memory are the same, indicating that they belong to the same universality class.
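A persistence-dependent flip rule of this kind can be sketched as a Metropolis-style update whose acceptance is suppressed by the time a spin has spent in its current state. The lattice size, temperature, and the exponential form exp(-tau/ell) of the memory penalty below are illustrative assumptions, not the paper's exact rule:

```python
import math
import random

random.seed(0)
L, T, MEM = 16, 1.5, 5.0         # lattice size, temperature, memory length (assumed)
spin = [[random.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
tau = [[0] * L for _ in range(L)]  # persistence time of each spin

def d_energy(i, j):
    """Energy cost of flipping spin (i, j) on a periodic square lattice."""
    s = spin[i][j]
    nn = (spin[(i + 1) % L][j] + spin[(i - 1) % L][j]
          + spin[i][(j + 1) % L] + spin[i][(j - 1) % L])
    return 2 * s * nn

def sweep():
    for _ in range(L * L):
        i, j = random.randrange(L), random.randrange(L)
        dE = d_energy(i, j)
        # Metropolis factor times a memory penalty: the longer the spin
        # has persisted (large tau), the smaller the flip probability.
        p = min(1.0, math.exp(-dE / T)) * math.exp(-tau[i][j] / MEM)
        if random.random() < p:
            spin[i][j] *= -1
            tau[i][j] = 0        # persistence resets on a flip
    for i in range(L):           # every spin ages by one sweep
        for j in range(L):
            tau[i][j] += 1

for _ in range(50):
    sweep()
m = abs(sum(sum(row) for row in spin)) / L**2
print("magnetisation:", m)
```

Setting MEM very large mimics the zero-temperature freezing described above, since flips become exponentially rare for long-persisting spins.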
We discuss the algebra of operators in the AdS-Rindler wedge, particularly in AdS_5/CFT_4. We explicitly construct the algebra in the N = \infty limit and discuss its Type III_1 nature. We then consider 1/N corrections to the theory and, using a novel way of renormalizing the area of the Ryu-Takayanagi surface, describe how several divergences can be renormalized so that the algebra becomes Type II_\infty. This makes it possible to associate a density matrix to any state in the Hilbert space, and thus a von Neumann entropy.
Monitored quantum systems, where unitary dynamics compete with continuous measurements, exhibit dynamical transitions as the measurement rate is varied. These reflect abrupt changes in the structure of the evolving wavefunction, captured by complementary complexity diagnostics that include and go beyond entanglement aspects. Here, we investigate how monitoring affects magic state resources, the nonstabilizerness, of Gaussian fermionic systems. Using scalable Majorana sampling techniques, we track the evolution of stabilizer Rényi entropies in large systems under projective measurements. While the leading extensive (volume-law) scaling of magic remains robust across all measurement rates, we uncover a sharp transition in the subleading logarithmic corrections. This measurement-induced complexity transition, invisible to standard entanglement probes, highlights the power of magic-based diagnostics in revealing hidden features of monitored many-body dynamics.
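As a minimal illustration of the nonstabilizerness measure tracked here, the stabilizer Rényi entropy M_2 of a single qubit can be computed directly from its Pauli expectation values: with \Xi_P = \langle P \rangle^2 / 2 over P \in {I, X, Y, Z}, the standard definition gives M_2 = -log_2 \sum_P \Xi_P^2 - log_2 2. The states below are illustrative; a stabilizer state yields M_2 = 0, while the magic T state yields log_2(4/3):

```python
import math

def m2_single_qubit(x, y, z):
    """Stabilizer Renyi entropy M_2 of a single-qubit pure state,
    given its Bloch vector (Pauli expectations <X>, <Y>, <Z>).

    Xi_P = <P>^2 / d over P in {I, X, Y, Z} is a probability
    distribution; M_2 = -log2(sum_P Xi_P^2) - log2(d), with d = 2."""
    assert abs(x * x + y * y + z * z - 1.0) < 1e-12, "pure state required"
    xi = [e * e / 2 for e in (1.0, x, y, z)]  # <I> = 1 always
    return -math.log2(sum(p * p for p in xi)) - 1.0

m2_stab = m2_single_qubit(0.0, 0.0, 1.0)      # |0>, a stabilizer state
s = 1 / math.sqrt(2)
m2_T = m2_single_qubit(s, s, 0.0)             # T state, (|0> + e^{i pi/4}|1>)/sqrt(2)
print(m2_stab, m2_T)                          # 0.0 and log2(4/3) ~ 0.415
```

The many-body quantities in the abstract generalize this to n qubits, where the sum runs over all 4^n Pauli strings and scalable sampling techniques become necessary.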
We investigate the quantum resource requirements of a dataset generated from simulations of two-dimensional, periodic, incompressible shear flow, aimed at training machine learning models. By measuring entanglement and non-stabilizerness on MPS-encoded functions, we estimate the computational complexity encountered by a stabilizer or a tensor network solver applied to Computational Fluid Dynamics (CFD) simulations across different flow regimes. Our analysis reveals that, under specific initial conditions, the shear width identifies a transition between resource-efficient and resource-intensive regimes for non-trivial evolution. Furthermore, we find that the two resources qualitatively track each other in time, and that the mesh resolution along with the sign structure play a crucial role in determining the resource content of the encoded state. These findings offer useful guidelines for the development of scalable, quantum-inspired approaches to fluid dynamics.
The detection of primordial B-mode polarisation of the Cosmic Microwave Background (CMB) is a major observational goal in modern cosmology, offering a potential window into inflationary physics through the measurement of the tensor-to-scalar ratio r. However, the presence of Galactic foregrounds poses significant challenges, possibly biasing the r estimate. In this study we explore the viability of using Minkowski functionals (MFs) as a robustness test to validate a potential r detection by identifying non-Gaussian features associated with foreground contamination. To do so, we simulate sky maps as observed by a LiteBIRD-like CMB experiment, with realistic instrumental and foreground modelling. The CMB B-mode signal is recovered through blind component-separation algorithms, and the obtained (biased) value of r is used to generate Gaussian realisations of the CMB signal. Their MFs are then compared with those computed on maps contaminated by the foreground residuals left by component separation, looking for a detection of non-Gaussianity. Our results demonstrate that, with the experimental configuration considered here, MFs cannot be reliably adopted as a robustness test of an eventual r detection: in the majority of cases, MFs are not able to raise significant warnings about the non-Gaussianity induced by the presence of foreground residuals. In the most realistic and refined scenario we adopted, the test is able to flag non-Gaussianity in \sim 26\% of the simulations, meaning that there is no warning about the biased tensor-to-scalar ratio in \sim 74\% of cases. These results suggest that statistics more advanced than MFs must be considered to look for non-Gaussian signatures of foregrounds, in order to perform reliable null tests in future CMB missions.
Quantum spin glasses form a good testbed for studying the performance of various quantum annealing and optimization algorithms. In this work we show how two- and three-dimensional tensor networks can accurately and efficiently simulate the quantum annealing dynamics of Ising spin glasses on a range of lattices. Such dynamics were recently simulated using D-Wave's Advantage2 system [A. D. King et al., Science, 10.1126/science.ado6285 (2025)] and, following extensive comparisons to existing numerical methods, claimed to be beyond the reach of classical computation. Here we show that by evolving lattice-specific tensor networks with simple belief propagation to keep up with the entanglement generated during the time evolution, and then extracting expectation values with more sophisticated variants of belief propagation, state-of-the-art accuracies can be reached with modest computational resources. We exploit the scalability of our simulations to simulate a system of over 300 qubits, allowing us to verify the universal physics present and extract a value for the associated Kibble-Zurek exponent which agrees with recent values obtained in the literature. Our results demonstrate that tensor networks are a viable approach for simulating large-scale quantum dynamics in two and three dimensions on classical computers, and algorithmic advancements are expected to expand their applicability going forward.
The simulation and analysis of the thermal stability of nanoparticles, a stepping stone towards their application in technological devices, require fast and accurate force fields, in conjunction with effective characterisation methods. In this work, we develop efficient, transferable, and interpretable machine learning force fields for gold nanoparticles based on data gathered from Density Functional Theory calculations. We use them to investigate the thermodynamic stability of gold nanoparticles of different sizes (1 to 6 nm), containing up to 6266 atoms, with respect to the solid-liquid phase change, through molecular dynamics simulations. We predict nanoparticle melting temperatures in good agreement with available experimental data. Furthermore, we characterize the solid-liquid phase change mechanism by employing an unsupervised learning scheme to categorize local atomic environments. We thus provide a data-driven definition of liquid atomic arrangements in the inner and surface regions of a nanoparticle and employ it to show that melting initiates at the outer layers.
RNA function crucially depends on its structure. Thermodynamic models currently used for secondary structure prediction rely on computing the partition function of folding ensembles, and can thus estimate minimum free-energy structures and ensemble populations. These models sometimes fail to identify native structures unless complemented by auxiliary experimental data. Here, we build a set of models that combine thermodynamic parameters, chemical probing data (DMS, SHAPE), and co-evolutionary data (Direct Coupling Analysis, DCA) through a network that outputs perturbations to the ensemble free energy. Perturbations are trained to increase the ensemble populations of a representative set of known native RNA structures. In the chemical probing nodes of the network, a convolutional window combines neighboring reactivities, highlighting their structural information content and the contribution of local conformational ensembles. Regularization is used to limit overfitting and improve transferability. The most transferable model is selected through a cross-validation strategy that estimates the performance of models on systems on which they are not trained. With the selected model we obtain increased ensemble populations for native structures and more accurate predictions in an independent validation set. The flexibility of the approach allows the model to be easily retrained and adapted to incorporate arbitrary experimental information.
We perform an analysis of the full shapes of Lyman-\alpha (Ly\alpha) forest correlation functions measured from the first data release (DR1) of the Dark Energy Spectroscopic Instrument (DESI). Our analysis focuses on measuring the Alcock-Paczynski (AP) effect and the cosmic growth rate times the amplitude of matter fluctuations in spheres of 8 h^{-1} Mpc, f\sigma_8. We validate our measurements using two different sets of mocks, a series of data splits, and a large set of analysis variations, which were first performed blinded. Our analysis constrains the ratio D_M/D_H(z_\mathrm{eff}) = 4.525 \pm 0.071, where D_H = c/H(z) is the Hubble distance, D_M is the transverse comoving distance, and the effective redshift is z_\mathrm{eff} = 2.33. This is a factor of 2.4 tighter than the Baryon Acoustic Oscillation (BAO) constraint from the same data. When combining with Ly\alpha BAO constraints from DESI DR2, we obtain the ratios D_H(z_\mathrm{eff})/r_d = 8.646 \pm 0.077 and D_M(z_\mathrm{eff})/r_d = 38.90 \pm 0.38, where r_d is the sound horizon at the drag epoch. We also measure f\sigma_8(z_\mathrm{eff}) = 0.37^{+0.055}_{-0.065} (stat) \pm 0.033 (sys), but we do not use it for cosmological inference due to difficulties in its validation with mocks. In \Lambda CDM, our measurements are consistent with both cosmic microwave background (CMB) and galaxy clustering constraints. Using a nucleosynthesis prior but no CMB anisotropy information, we measure the Hubble constant to be H_0 = 68.3 \pm 1.6 km s^{-1} Mpc^{-1} within \Lambda CDM. Finally, we show that Ly\alpha forest AP measurements can help improve constraints on the dark energy equation of state, and are expected to play an important role in upcoming DESI analyses.
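The measured ratio D_M/D_H at z_eff = 2.33 can be compared against a flat \Lambda CDM prediction, since D_M/D_H(z) = E(z) \int_0^z dz'/E(z') with E(z) = H(z)/H_0 is independent of both H_0 and r_d. A sketch of this check, assuming an illustrative Planck-like matter density \Omega_m = 0.315 (not a value fitted in this paper):

```python
import math

OMEGA_M = 0.315                  # assumed Planck-like matter density
Z_EFF = 2.33

def E(z):
    """Dimensionless Hubble rate H(z)/H0 in flat LambdaCDM."""
    return math.sqrt(OMEGA_M * (1 + z)**3 + (1 - OMEGA_M))

def dm_over_dh(z, n=100_000):
    """D_M/D_H = E(z) * integral_0^z dz'/E(z'), via the trapezoidal rule.
    Both H0 and the speed of light cancel in the ratio."""
    h = z / n
    integral = h * (0.5 / E(0) + 0.5 / E(z)
                    + sum(1 / E(i * h) for i in range(1, n)))
    return E(z) * integral

ratio = dm_over_dh(Z_EFF)
print(ratio)    # ~ 4.55, consistent with the measured 4.525 +/- 0.071
```

The prediction lands within the quoted 1-sigma uncertainty, illustrating the consistency with CMB-calibrated \Lambda CDM stated in the abstract.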
We show that for general spherically symmetric configurations, contributions of general gravitational and mixed gauge-gravitational Chern-Simons terms to the equations of motion vanish identically in D > 3 dimensions. This implies that such terms in the action do not affect Birkhoff's theorem or any previously known spherically symmetric solutions. Furthermore, we investigate the thermodynamic properties using the procedure described in an accompanying paper. We find that in the D > 3 static spherically symmetric case, Chern-Simons terms do not contribute to the entropy either. Moreover, if one requires only the metric tensor to be spherically symmetric, leaving the other fields unrestricted, the results extend almost completely, with only one possible exception --- Chern-Simons Lagrangian terms in which the gravitational part is just the n = 2 irreducible gravitational Chern-Simons term.
The two-dimensional Ising model is the simplest model of statistical mechanics exhibiting a second-order phase transition. While in the absence of a magnetic field it has been known to be solvable on the lattice since Onsager's work in the forties, exact results for the magnetic case were missing until the late eighties, when A. Zamolodchikov solved the model in a field at the critical temperature, directly in the scaling limit, within the framework of integrable quantum field theory. In this article we review this field-theoretical approach to the Ising universality class, with particular attention to the results obtained starting from Zamolodchikov's scattering solution and to their comparison with numerical estimates on the lattice. The topics discussed include scattering theory, form factors, correlation functions, universal amplitude ratios and perturbations around integrable directions. Although we restrict our discussion to the Ising model, the emphasis is on the general methods of integrable quantum field theory, which can be used in the study of all universality classes of critical behaviour in two dimensions.
We investigate the robustness of a dynamical phase transition against quantum fluctuations by studying the impact of a ferromagnetic nearest-neighbour spin interaction in one spatial dimension on the non-equilibrium dynamical phase diagram of the fully-connected quantum Ising model. In particular, we focus on the transient dynamics after a quantum quench and study the pre-thermal state via a combination of analytic time-dependent spin-wave theory and numerical methods based on matrix product states. We find that, upon increasing the strength of the quantum fluctuations, the dynamical critical point fans out into a chaotic dynamical phase within which the asymptotic ordering is characterised by strong sensitivity to the parameters and initial conditions. We argue that such a phenomenon is general, as it arises from the impact of quantum fluctuations on the mean-field out of equilibrium dynamics of any system which exhibits a broken discrete symmetry.
We estimate the efficiency of mitigating the lensing B-mode polarization, the so-called delensing, for the LiteBIRD experiment with multiple external data sets of lensing-mass tracers. The current best bound on the tensor-to-scalar ratio, r, is limited by lensing rather than Galactic foregrounds, and delensing will be a critical step to improve sensitivity to r as measurements of r become more and more lensing-limited. In this paper, we extend the analysis of the recent LiteBIRD forecast paper to include multiple mass tracers, i.e., the CMB lensing maps from LiteBIRD and a CMB-S4-like experiment, the cosmic infrared background, and galaxy number density from Euclid- and LSST-like surveys. We find that multi-tracer delensing will further improve the constraint on r by about 20\%. In LiteBIRD, the residual Galactic foregrounds also contribute significantly to the uncertainties on the B-modes, and delensing becomes more important if the residual foregrounds are further reduced by an improved component-separation method.
The air flows in the proximal and distal portions of the human lungs are interconnected: the lower Reynolds number in the deeper generations causes a progressive flow regularization, while mass conservation requires flow rate oscillations to propagate through the airway bifurcations. To explain how these two competing effects shape the flow state in the deeper generations, we have performed the first high-fidelity numerical simulations of the air flow in a lung model including 23 successive bifurcations of a single planar airway. Turbulence modelling or assumptions on flow regimes are not required. The chosen flow rate is stationary (steady on average), and representative of the peak inspiratory flow reached by adult patients breathing through therapeutic inhalers. As expected, advection becomes progressively less important after each bifurcation, until a time-dependent Stokes regime governed solely by viscous diffusion is established in the smallest generations. However, fluctuations in this regime are relatively fast and large with respect to the mean flow, in contrast with the commonly accepted picture that only the breathing frequency is relevant at the scale of the alveoli. We demonstrate that the characteristic frequency and amplitude of these fluctuations are linked to the flow in the upper part of the bronchial tree, as they originate from the time-dependent flow splitting in the upper bifurcations. Even though these fluctuations are observed here in an idealized, rigid lung model, our findings suggest that the assumptions usually adopted in many of the current lung models might need to be revised.
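The progressive flow regularization can be illustrated with the symmetric Weibel-type scaling often used for idealized airway trees: if each bifurcation halves the flow rate and reduces the diameter by a factor 2^{-1/3}, the Reynolds number Re = 4Q/(\pi \nu d) drops by 2^{-2/3} \approx 0.63 per generation. The tracheal Reynolds number below is an illustrative peak-inspiration value, and the scaling itself is an assumption of the sketch, not a result of the paper:

```python
RE_TRACHEA = 2000.0          # illustrative peak-inspiratory Reynolds number
FACTOR = 2 ** (-2 / 3)       # per-generation Re reduction under Weibel scaling

# Reynolds number in generations 0 (trachea) through 23.
re = [RE_TRACHEA * FACTOR**n for n in range(24)]
first_viscous = next(n for n, r in enumerate(re) if r < 1.0)
print(f"Re < 1 from generation {first_viscous}; Re(23) = {re[23]:.2e}")
```

Under these assumptions Re falls below unity around generation 17, consistent with the picture of a viscosity-dominated regime in the smallest generations while oscillations imposed upstream still propagate there by mass conservation.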
This thesis is devoted to the study of gravitational theories which can be seen as modifications or generalisations of General Relativity. The motivation for considering such theories, stemming from Cosmology, High Energy Physics and Astrophysics, is thoroughly discussed (cosmological problems, the dark energy and dark matter problems, the absence so far of a successful formulation of Quantum Gravity). The basic principles which a gravitational theory should follow, and their geometrical interpretation, are analysed in a broad perspective which highlights the basic assumptions of General Relativity and suggests possible modifications which might be made. A number of such possible modifications are presented, focusing on certain specific classes of theories: scalar-tensor theories, metric f(R) theories, Palatini f(R) theories, metric-affine f(R) theories and Gauss--Bonnet theories. The characteristics of these theories are fully explored and attention is paid to issues of dynamical equivalence between them. Also, cosmological phenomenology within the realm of each of the theories is discussed and it is shown that they can potentially address the well-known cosmological problems. A number of viability criteria are presented: cosmological observations, Solar System tests, stability criteria, existence of exact solutions for common vacuum or matter configurations, etc. Finally, future perspectives in the field of modified gravity are discussed and the possibility of going beyond a trial-and-error approach to modified gravity is explored.
Researchers derived new physical beta functions for quadratic gravity, demonstrating that asymptotic freedom can be achieved without the presence of tachyonic states, a long-standing issue for the theory's viability. This re-evaluation offers a path for quadratic gravity to serve as a consistent quantum theory of gravity.
We explore the capability of measuring lensing signals in LiteBIRD full-sky polarization maps. With a 30 arcmin beam width and an impressively low polarization noise of 2.16 \mu K-arcmin, LiteBIRD will be able to measure the full-sky polarization of the cosmic microwave background (CMB) very precisely. This unique sensitivity also enables the reconstruction of a nearly full-sky lensing map using only polarization data, even considering its limited capability to capture small-scale CMB anisotropies. In this paper, we investigate the ability to construct a full-sky lensing measurement in the presence of Galactic foregrounds, finding that several possible biases from Galactic foregrounds should be negligible after component separation by harmonic-space internal linear combination. We find that the signal-to-noise ratio of the lensing measurement is approximately 40 using only polarization data measured over 90\% of the sky. This is comparable to Planck's recent lensing measurement with both temperature and polarization, and represents a four-fold improvement over Planck's polarization-only lensing measurement. The LiteBIRD lensing map will complement the Planck lensing map and provide several opportunities for cross-correlation science, especially in the northern hemisphere.