A comprehensive survey on automated scientific discovery critically analyzes its history, current state, and future, consolidating efforts from equation discovery to autonomous systems. It identifies a key research gap in integrating interpretable knowledge generation with autonomous experimentation, proposing a path towards AI scientists capable of producing human-interpretable scientific knowledge.
The first direct measurement of gravitational waves by the LIGO and Virgo collaborations has opened up new avenues to explore our Universe. This white paper outlines the challenges and gains expected in gravitational-wave searches at frequencies above the LIGO/Virgo band. The scarcity of possible astrophysical sources in most of this frequency range provides a unique opportunity to discover physics beyond the Standard Model operating both in the early and late Universe, and we highlight some of the most promising of these sources. We review several detector concepts that have been proposed to take up this challenge, and compare their expected sensitivity with the signal strength predicted in various models. This report is the summary of a series of workshops on the topic of high-frequency gravitational wave detection, held in 2019 (ICTP, Trieste, Italy), 2021 (online) and 2023 (CERN, Geneva, Switzerland).
We present a comprehensive analysis of electroweak, flavor, and collider bounds on the complete set of dimension-six SMEFT operators in the $U(2)^5$-symmetric limit. This operator basis provides a consistent framework to describe a wide class of new physics models and, in particular, the motivated class of models where the new degrees of freedom couple mostly to the third generation. By analyzing observables from all three sectors, and consistently including renormalization group evolution, we provide bounds on the effective scale of all 124 $U(2)^5$-invariant operators. The relation between flavor-conserving and flavor-violating observables is analyzed taking into account the leading $U(2)^5$ breaking in the Yukawa sector, which is responsible for heavy-light quark mixing. We show that under simple, motivated, and non-tuned hypotheses for the parametric size of the Wilson coefficients at the high scale, all present bounds are consistent with an effective scale as low as 1.5 TeV. We also show that a future circular $e^+e^-$ collider program such as FCC-ee would push most of these bounds by an order of magnitude. This would rule out or provide clear evidence for a wide class of compelling new physics models that are fully compatible with present data.
We study the phenomenology of physics beyond the Standard Model in long-baseline neutrino oscillation experiments using the most general parametrisation of heavy new physics in the framework of Standard Model Effective Theory (SMEFT), as well as its counterpart below the electroweak scale, Weak Effective Field Theory (WEFT). We compute neutrino production, oscillation, and detection rates in these frameworks, consistently accounting for renormalisation group running as well as SMEFT/WEFT matching. We moreover use appropriately modified neutrino--nucleus cross sections, focusing specifically on the regime of quasi-elastic scattering. Compared to the traditional formalism of non-standard neutrino interactions (NSI), our approach is theoretically more consistent, and it allows for straightforward joint analyses of data taken at different energy scales and by different experiments, including not only neutrino oscillation experiments, but also searches for charged lepton flavour violation, low-energy precision measurements, and the LHC. As a specific example, we carry out a sensitivity study for the DUNE experiment and compute projected limits on the WEFT and SMEFT Wilson coefficients. Together with this paper, we also release a public simulation package called "GLoBES-EFT" for consistently simulating long-baseline neutrino oscillation experiments in the presence of new physics parameterized either in WEFT or in SMEFT. GLoBES-EFT is available from GitHub (this https URL).
In standard genetic programming (stdGP), solutions are varied by modifying their syntax, with uncertain effects on their semantics. Geometric-semantic genetic programming (GSGP), a popular variant of GP, effectively searches the semantic solution space using variation operations based on linear combinations, although it produces significantly larger solutions. This paper presents Transformer Semantic Genetic Programming (TSGP), a novel and flexible semantic approach that uses a generative transformer model as its search operator. The transformer is trained on synthetic test problems and learns semantic similarities between solutions. Once trained, the model can create offspring solutions with high semantic similarity even for unseen and unknown problems. Experiments on several symbolic regression problems show that TSGP generates solutions with comparable or even significantly better prediction quality than stdGP, SLIM_GSGP, DSR, and DAE-GP. Like SLIM_GSGP, TSGP is able to create new solutions that are semantically similar without creating solutions of large size. An analysis of the search dynamics reveals that the solutions generated by TSGP are semantically more similar than those generated by the benchmark approaches, allowing a better exploration of the semantic solution space.
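The notion of "semantics" underlying this abstract can be made concrete with a minimal sketch: in GP, the semantics of a candidate solution is its output vector on the training inputs, and semantic similarity is a distance between such vectors. The function names below are illustrative only, not TSGP's actual implementation.

```python
# Illustrative sketch of program semantics in GP (hypothetical helper
# names, not TSGP's API): a solution's semantics is its output vector.

def semantics(program, inputs):
    """Output vector of a candidate solution on the training inputs."""
    return [program(x) for x in inputs]

def semantic_distance(prog_a, prog_b, inputs):
    """Euclidean distance between two solutions' output vectors."""
    sa, sb = semantics(prog_a, inputs), semantics(prog_b, inputs)
    return sum((a - b) ** 2 for a, b in zip(sa, sb)) ** 0.5

inputs = [0.0, 0.5, 1.0, 2.0]
parent = lambda x: x * x + 1.0
child_near = lambda x: x * x + 1.1   # syntactically and semantically close
child_far = lambda x: 3.0 * x - 2.0  # small tree, very different semantics

print(semantic_distance(parent, child_near, inputs))  # 0.2
print(semantic_distance(parent, child_far, inputs))   # 3.75
```

A semantic operator such as TSGP's aims to produce offspring like `child_near` (small semantic step from the parent) rather than `child_far`, without the solution-size growth of GSGP's linear combinations.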
Scientific discoveries often hinge on synthesizing decades of research, a task that potentially outstrips human information processing capacities. Large language models (LLMs) offer a solution. LLMs trained on the vast scientific literature could potentially integrate noisy yet interrelated findings to forecast novel results better than human experts. To evaluate this possibility, we created BrainBench, a forward-looking benchmark for predicting neuroscience results. We find that LLMs surpass experts in predicting experimental outcomes. BrainGPT, an LLM we tuned on the neuroscience literature, performed better yet. Like human experts, when LLMs were confident in their predictions, they were more likely to be correct, which presages a future where humans and LLMs team together to make discoveries. Our approach is not neuroscience-specific and is transferable to other knowledge-intensive endeavors.
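The benchmark's core mechanic can be sketched as a forced choice: the model sees the original abstract and an altered version with a changed result, and "prefers" whichever it assigns the higher average log-likelihood (i.e., lower perplexity). The unigram scorer below is a deliberately crude stand-in for an LLM, purely to make the selection rule concrete; names and corpus are invented for illustration.

```python
import math
from collections import Counter

def make_unigram_scorer(corpus):
    """Toy stand-in for an LLM: add-one-smoothed average log-probability
    per token under a unigram model fit on `corpus`."""
    counts = Counter(corpus.lower().split())
    total = sum(counts.values())
    vocab = len(counts) + 1
    def score(text):
        tokens = text.lower().split()
        return sum(math.log((counts[t] + 1) / (total + vocab))
                   for t in tokens) / len(tokens)
    return score

def forced_choice(score, candidate_a, candidate_b):
    """BrainBench-style decision: pick the passage the model finds likelier."""
    return candidate_a if score(candidate_a) >= score(candidate_b) else candidate_b

score = make_unigram_scorer(
    "stimulation of the region increased firing rates "
    "increased firing rates were observed after stimulation"
)
original = "stimulation increased firing rates"
altered = "stimulation decreased firing rates"
print(forced_choice(score, original, altered) == original)  # True
```

Confidence calibration, as reported in the abstract, would correspond to the margin between the two scores tracking accuracy.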
Based on $10.64~\mathrm{fb}^{-1}$ of $e^+e^-$ collision data taken at center-of-mass energies between 4.237 and 4.699 GeV with the BESIII detector, we study the leptonic $D^+_s$ decays using the $e^+e^-\to D^{*+}_{s} D^{*-}_{s}$ process. The branching fractions of $D_s^+\to\ell^+\nu_{\ell}\,(\ell=\mu,\tau)$ are measured to be $\mathcal{B}(D_s^+\to\mu^+\nu_\mu)=(0.547\pm0.026_{\rm stat}\pm0.016_{\rm syst})\%$ and $\mathcal{B}(D_s^+\to\tau^+\nu_\tau)=(5.60\pm0.16_{\rm stat}\pm0.20_{\rm syst})\%$, respectively. The product of the decay constant and the Cabibbo-Kobayashi-Maskawa matrix element $|V_{cs}|$ is determined to be $f_{D_s^+}|V_{cs}|=(246.5\pm5.9_{\rm stat}\pm3.6_{\rm syst}\pm0.5_{\rm input})_{\mu\nu}~\mathrm{MeV}$ and $f_{D_s^+}|V_{cs}|=(252.7\pm3.6_{\rm stat}\pm4.5_{\rm syst}\pm0.6_{\rm input})_{\tau\nu}~\mathrm{MeV}$, respectively. Taking the value of $|V_{cs}|$ from a global fit in the Standard Model, we obtain $f_{D^+_s}=(252.8\pm6.0_{\rm stat}\pm3.7_{\rm syst}\pm0.6_{\rm input})_{\mu\nu}$ MeV and $f_{D^+_s}=(259.2\pm3.6_{\rm stat}\pm4.5_{\rm syst}\pm0.6_{\rm input})_{\tau\nu}$ MeV, respectively. Conversely, taking the value for $f_{D_s^+}$ from the latest lattice quantum chromodynamics calculation, we obtain $|V_{cs}|=(0.986\pm0.023_{\rm stat}\pm0.014_{\rm syst}\pm0.003_{\rm input})_{\mu\nu}$ and $|V_{cs}|=(1.011\pm0.014_{\rm stat}\pm0.018_{\rm syst}\pm0.003_{\rm input})_{\tau\nu}$, respectively.
Recent measurements from the Atacama Cosmology Telescope (ACT), combined with Planck and DESI data, suggest a higher value for the spectral index $n_s$. This places Starobinsky inflation at the edge of the $2\sigma$ constraints for a number of e-folds $N_\star$ around $60$ when using the usual analytical approximations. We present refined predictions for Starobinsky inflation that go beyond the commonly used analytical approximations. By evaluating the model with these improved expressions, we show that for $N_\star \gtrsim 58$ it remains consistent with current observational constraints at the $2\sigma$ level. Additionally, we examine the implications of the ACT results for post-inflationary reheating parameters. Specifically, we find a lower bound on the effective equation-of-state parameter during reheating of approximately $\omega \gtrsim 0.462$; this excludes purely perturbative reheating, which leads to $\omega \simeq 0$. We also show that the reheating temperature is constrained to be $T_{\rm rh} \lesssim 2 \times 10^{10}~\text{GeV}$, assuming $\omega \leq 1$. Furthermore, we find that the predictions for the spectral index and tensor-to-scalar ratio can lie within $1\sigma$ of the recent ACT constraints if the reheating temperature satisfies $4~\text{MeV} \lesssim T_{\rm rh} \lesssim 10~\text{GeV}$ for $0.8 \lesssim \omega \leq 1$.
We study numerically the spin-1/2 XXZ model in a field on an infinite Kagome lattice. We use different algorithms based on infinite Projected Entangled Pair States (iPEPS): (i) with simplex tensors and a 9-site unit cell, and (ii) coarse-graining three spins of the Kagome lattice and mapping it to a square-lattice model with nearest-neighbor interactions, with usual PEPS tensors and 6- and 12-site unit cells. Similarly to our previous calculation at the SU(2)-symmetric point (Heisenberg Hamiltonian), for any anisotropy from the Ising limit to the XY limit, we also observe the emergence of magnetization plateaus as a function of the magnetic field, at $m_z = \frac{1}{3}$ using 6-, 9-, and 12-site PEPS unit cells, and at $m_z = \frac{1}{9}, \frac{5}{9}$, and $\frac{7}{9}$ using a 9-site PEPS unit cell, the latter setup being able to accommodate $\sqrt{3} \times \sqrt{3}$ solid order. We also find that, at $m_z = \frac{1}{3}$, (lattice) nematic and $\sqrt{3} \times \sqrt{3}$ VBC-order states are degenerate within the accuracy of the 9-site simplex method, for all anisotropies. The 6- and 12-site coarse-grained PEPS methods produce almost-degenerate nematic and $1 \times 2$ VBC-solid orders. Within our accuracy, the 6-site coarse-grained PEPS method gives slightly lower energies, which can be explained by the larger amount of entanglement this approach can handle, even when the PEPS unit cell is not commensurate with the expected ground state. Furthermore, we do not observe chiral spin liquid behavior at or close to the XY point, as has recently been proposed. Our results are the first tensor network investigations of the Kagome XXZ model in a field, and reveal the subtle competition between nearby magnetic orders in numerical simulations of frustrated quantum antiferromagnets, as well as the delicate interplay between energy optimization and symmetry in tensor networks.
We show that primordial adiabatic curvature fluctuations generate an instability of the scalar field sourcing a kination era. We demonstrate that the generated higher Fourier modes constitute a radiation-like component dominating over the kination background after about $11$ e-folds of cosmic expansion. Current constraints on the extra number of neutrino flavors $\Delta N_{\rm eff}$ thus imply an observational bound of approximately 10 e-folds, representing the most stringent bound to date on the stiffness of the equation of state of the pre-Big-Bang-Nucleosynthesis universe.
The classical Heisenberg model in two spatial dimensions constitutes one of the most paradigmatic spin models, playing an important role in statistical and condensed matter physics for the understanding of magnetism. Still, despite its paradigmatic character and the widely accepted ban on (continuous) spontaneous symmetry breaking, controversies remain as to whether the model exhibits a phase transition at finite temperature. Importantly, the model can be interpreted as a lattice discretization of the $O(3)$ non-linear sigma model in $1+1$ dimensions, one of the simplest quantum field theories encompassing crucial features of celebrated higher-dimensional ones (like quantum chromodynamics in $3+1$ dimensions), namely the phenomenon of asymptotic freedom. This should also exclude finite-temperature transitions, but lattice effects might play a significant role in correcting the mainstream picture. In this work, we make use of state-of-the-art tensor network approaches, representing the classical partition function in the thermodynamic limit over a large range of temperatures, to comprehensively explore the correlation structure of Gibbs states. By implementing an $SU(2)$ symmetry in our two-dimensional tensor network contraction scheme, we are able to handle very large effective environment bond dimensions up to $\chi_E^\text{eff} \sim 1500$, a feature that is crucial for detecting phase transitions. With decreasing temperature, we find a rapidly diverging correlation length, whose behaviour is apparently compatible with the two main contradictory hypotheses known in the literature, namely a finite-$T$ transition and asymptotic freedom, though with a slight preference for the second.
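The distinction at stake here, a correlation length that diverges exponentially as $T \to 0$ versus one that diverges at a finite $T_c$, is familiar from the exactly solvable 1D classical Ising chain, where the transfer-matrix eigenvalues give $\xi = 1/\ln(\lambda_1/\lambda_2) = 1/\ln\coth(J/T)$: exponentially large at low $T$, yet with no finite-temperature transition. The snippet below is a textbook analogy for intuition, not the paper's 2D tensor-network computation:

```python
import math

def ising_1d_xi(T, J=1.0):
    """Correlation length of the 1D classical Ising chain from its
    transfer-matrix eigenvalues: xi = 1 / ln(lambda_1 / lambda_2)."""
    lam_1 = 2.0 * math.cosh(J / T)  # largest eigenvalue
    lam_2 = 2.0 * math.sinh(J / T)  # second eigenvalue
    return 1.0 / math.log(lam_1 / lam_2)

for T in (1.0, 0.5, 0.25):
    print(f"T = {T}: xi = {ising_1d_xi(T):.2f}")  # grows ~ exp(2J/T)/2
```

Distinguishing such exponential growth from a power-law divergence at finite $T_c$ is exactly why the very large environment bond dimensions quoted in the abstract matter.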
We discuss the point that topological quantization in the strong interaction is a consequence of an infinite spacetime volume. Because of the ensuing order of limits, i.e. taking the volume to infinity prior to summing over topological sectors, CP is conserved. Here, we show that this reasoning is consistent with the construction of the path integral from steepest-descent contours. We reply to some objections that aim to support the case for CP violation in the strong interactions, based on the role of the CP-odd theta parameter in three-form effective theories, the correct sampling of all configurations in the dilute instanton gas approximation, and the volume dependence of the partition function. We also show that the chiral effective field theory derived from taking the volume to infinity first is in no contradiction with analyses based on partially conserved axial currents.
Studying the impact of new-physics models on low-energy observables necessitates matching to effective field theories at the relevant mass thresholds. We introduce the first public version of Matchete, a computer tool for matching weakly-coupled models at one-loop order. It uses functional methods to directly compute all matching contributions in a manifestly gauge-covariant manner, while simplification methods eliminate redundant operators from the output. We sketch the workings of the program and provide examples of how to match simple Standard Model extensions. The package, documentation, and example notebooks are publicly available at this https URL
We discuss the possibility of forming primordial black holes during a first-order phase transition in the early Universe. As is well known, such a phase transition proceeds through the formation of true-vacuum bubbles in a Universe that is still in a false vacuum. When there is a particle species whose mass increases significantly during the phase transition, transmission of the corresponding particles through the advancing bubble walls is suppressed. Consequently, an overdensity can build up in front of the walls and become sufficiently large to trigger primordial black hole formation. We track this process quantitatively by solving a Boltzmann equation, and we delineate the phase transition properties required for our mechanism to yield an appreciable abundance of primordial black holes.
Symbol letters are crucial for analytically calculating Feynman integrals in terms of iterated integrals. We present a novel method to construct the symbol letters for a given integral family without prior knowledge of the canonical differential equations. We provide a program package implementing our algorithm, and demonstrate its effectiveness in solving non-trivial problems with multiple loops and legs. Using our method, we successfully bootstrap the canonical differential equations for a two-loop five-point family with two external masses and for a three-loop four-point family with two external masses, which were previously unknown in the literature. We anticipate that our method can be applied to a wide range of cutting-edge calculations in the future.
Large Language Models (LLMs) are already widely used to generate content for a variety of online platforms. Because we cannot reliably distinguish LLM-generated content from human-produced content, LLM-generated content ends up in the training data of the next generation of LLMs, giving rise to a self-consuming training loop. From the image-generation domain we know that such a self-consuming training loop reduces both the quality and the diversity of generated images, ultimately ending in model collapse. However, it is unclear whether this alarming effect also occurs for LLMs. We therefore present the first study investigating the self-consuming training loop for LLMs. Further, we propose a novel method based on logic expressions that allows us to unambiguously verify the correctness of LLM-generated content, which is difficult for natural-language text. We find that the self-consuming training loop produces correct outputs; however, output diversity declines depending on the proportion of generated data used. Fresh data can slow this decline, but not stop it. Given these concerning results, we encourage researchers to study methods to counteract this process.
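The qualitative effect, diversity shrinking when each generation trains on the previous generation's output, can be illustrated with a deliberately simple toy: repeatedly fitting a Gaussian to data and resampling from the fit, with a mild cutoff standing in for quality filtering. This is a generic model-collapse sketch, not the paper's LLM setup.

```python
import random
import statistics

random.seed(42)

def next_generation(samples):
    """Fit a Gaussian to the current data, 'generate' the next training
    set from the fit, and keep only samples within 2 sigma (a crude
    stand-in for quality filtering of generated content)."""
    mu, sigma = statistics.fmean(samples), statistics.pstdev(samples)
    generated = [random.gauss(mu, sigma) for _ in range(len(samples))]
    return [x for x in generated if abs(x - mu) <= 2.0 * sigma]

data = [random.gauss(0.0, 1.0) for _ in range(2000)]
spread = [statistics.pstdev(data)]       # diversity proxy per generation
for generation in range(10):
    data = next_generation(data)
    spread.append(statistics.pstdev(data))

print(f"diversity: {spread[0]:.3f} -> {spread[-1]:.3f}")  # spread shrinks
```

Mixing a fixed fraction of the original ("fresh") data into each generation slows the contraction of `spread`, mirroring the abstract's observation that fresh data delays but does not stop the decline.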
Research from Johannes Gutenberg University demonstrates that the superior generalization of Residual Networks stems from their "variable-depth" function space, which offers an inductive bias better aligned with natural data. Controlled post-training experiments show that variable-depth architectures consistently outperform fixed-depth networks at equivalent effective nonlinear depths, with the gap increasing at lower nonlinear depths and for more complex datasets.
The detection of compact binary mergers with sub-solar masses at gravitational-wave observatories could mark the groundbreaking discovery of primordial black holes (PBHs). Concurrently, evidence for a nHz stochastic gravitational wave background observed by pulsar timing arrays (PTAs) could suggest a non-astrophysical origin, potentially arising from scalar-induced gravitational waves (SIGW). In this work, we analyze the connection between the two phenomena in the case where they share a common origin: the collapse of large primordial curvature perturbations in the early universe. We focus on sub-solar PBH populations within reach of upcoming experiments, including the current and future runs of LIGO-Virgo-KAGRA as well as third-generation observatories such as the Einstein Telescope and Cosmic Explorer. Using a Bayesian framework with physically motivated priors, we perform a consistent model comparison that incorporates existing astrophysical bounds together with the discovery potential of future detectors. Our analysis lends stronger support to the SIGW interpretation than to the astrophysical one, as the narrowed priors place greater weight on the region of highest likelihood. Ultimately, we illustrate that combining PTA data with interferometer searches can deliver correlated evidence for new physics across multiple gravitational-wave bands.
Light pseudoscalar resonances that couple to the Standard Model via non-renormalizable operators, such as axions and axion-like particles (ALPs), generate contributions to the renormalization group evolution equations of couplings of dimension-4 and higher-dimensional operators. In particular, they modify the $\beta$-function of the Higgs quartic coupling and of the SM and SMEFT parameters entering this equation, thus having an impact on the instability scale of the electroweak vacuum. We employ this fact, together with the requirement that, in the presence of axions and ALPs, the Universe remains in a meta-stable state, to deduce bounds on ALP couplings to the Standard Model fields. We also show that the modification of the $\beta$-functions of the gauge couplings by the ALP can lead to unification around the Planck scale, even in non-supersymmetric models.