Universitat de Barcelona, King's College London, Instituto de Física Teórica UAM/CSIC, Universidad de Alcalá
We formulate spacetime inequalities applicable to quantum-corrected black holes to all orders of backreaction in semiclassical gravity. Namely, we propose refined versions of the quantum Penrose and reverse isoperimetric inequalities, valid for all known three-dimensional asymptotically anti-de Sitter quantum black holes. Previous proposals of the quantum Penrose inequality apply in higher dimensions but fail when applied in three dimensions beyond the perturbative regime. Our quantum Penrose inequality, valid in three dimensions, holds at all orders of backreaction. This suggests cosmic censorship must hold in non-perturbative semiclassical gravity. Our quantum reverse isoperimetric inequality implies a maximum entropy state for quantum black holes at fixed volume.
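For orientation, a schematic of the classical statements being quantum corrected (a sketch of the general logic, not the authors' precise inequalities): the Penrose inequality bounds the mass from below by the horizon area, and existing higher-dimensional quantum proposals promote the area to the generalized entropy,
$$
M \;\ge\; \sqrt{\frac{A}{16\pi G^2}}
\quad\longrightarrow\quad
M \;\gtrsim\; \sqrt{\frac{\hbar\, S_{\rm gen}}{4\pi G}},
\qquad
S_{\rm gen} \;=\; \frac{A}{4G\hbar} + S_{\rm out},
$$
while the reverse isoperimetric inequality bounds the horizon area from above at fixed thermodynamic volume. The three-dimensional refinements proposed here are statements of this kind that remain valid beyond perturbation theory in the backreaction.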
The Variational Monte Carlo method has recently seen important advances through the use of neural network quantum states. While more and more sophisticated ansätze have been designed to tackle a wide variety of quantum many-body problems, modest progress has been made on the associated optimisation algorithms. In this work, we revisit the Kronecker-Factored Approximate Curvature, an optimiser that has been used extensively in a variety of simulations. We suggest improvements on the scaling and the direction of this optimiser, and find that they substantially increase its performance at a negligible additional cost. We also reformulate the Variational Monte Carlo approach in a game theory framework, to propose a novel optimiser based on decision geometry. We find that, on a practical test case for continuous systems, this new optimiser consistently outperforms any of the KFAC improvements in terms of stability, accuracy and speed of convergence. Beyond Variational Monte Carlo, the versatility of this approach suggests that decision geometry could provide a solid foundation for accelerating a broad class of machine learning algorithms.
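For readers unfamiliar with the optimisation step being improved, the sketch below (not the paper's code) shows a stochastic-reconfiguration / natural-gradient update for a toy VMC problem, a Gaussian trial state $\psi(x)=e^{-ax^2}$ for the 1D harmonic oscillator. KFAC approximates exactly this kind of curvature-preconditioned step; here the (1x1) quantum geometric tensor is estimated directly from samples.

```python
# Minimal sketch (not the paper's code): a natural-gradient / stochastic-reconfiguration
# step for a toy VMC problem. The exact ground state corresponds to a = 0.5.
import numpy as np

rng = np.random.default_rng(0)
a = 0.2                       # variational parameter
eta, shift = 0.1, 1e-3        # learning rate and diagonal regularisation of S

for step in range(200):
    # Sample x ~ |psi|^2 = exp(-2 a x^2), i.e. a Gaussian with variance 1/(4a)
    x = rng.normal(0.0, np.sqrt(1.0 / (4.0 * a)), size=4096)

    e_loc = a + (0.5 - 2.0 * a**2) * x**2          # local energy (H psi)/psi
    o = -x**2                                      # log-derivative d ln(psi)/d a

    grad = 2.0 * (np.mean(o * e_loc) - np.mean(o) * np.mean(e_loc))
    S = np.mean(o * o) - np.mean(o) ** 2           # metric / Fisher-like term

    a -= eta * grad / (S + shift)                  # preconditioned update

print(f"a = {a:.4f}  (exact: 0.5)")
```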
We revisit the Next-to-Leading Order (two-loop) contributions to the Anomalous Dimensions of $\Delta F = 1$ four-quark operators in QCD. We devise a test for anomalous dimensions, which we regard as of general interest and by means of which we detect a problem in the results available in the literature. Deconstructing the steps leading to the available result, we identify the source of the problem, which is related to the operator known as $Q_{11}$. We show how to fix the problem and provide the corrected anomalous dimensions. With the insight of our findings, we propose an alternative approach to the one used in the literature, which does not suffer from the identified issue and which confirms our corrected results. We assess the numerical impact of our corrections, which happens to be in the ballpark of $5\%$ in certain entries of the evolution matrix. Our results are important for the correct resummation of Next-to-Leading Logarithms in analyses of physics beyond the Standard Model in $\Delta F = 1$ processes, such as the decays of Kaons and $B$-mesons.
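For context, the objects being corrected are the one- and two-loop coefficients of the anomalous dimension matrix that drives the renormalization-group evolution of the Wilson coefficients. Schematically (normalizations differ between papers),
$$
\hat\gamma(\alpha_s) \;=\; \frac{\alpha_s}{4\pi}\,\hat\gamma^{(0)} \;+\; \left(\frac{\alpha_s}{4\pi}\right)^{\!2}\hat\gamma^{(1)} \;+\; \mathcal{O}(\alpha_s^3),
\qquad
\mu\frac{d}{d\mu}\,\vec C(\mu) \;=\; \hat\gamma^{\,T}\!(\alpha_s)\,\vec C(\mu),
$$
and the NLO evolution matrix takes the familiar form $\hat U(\mu,\mu_0) = \big(1 + \tfrac{\alpha_s(\mu)}{4\pi}\hat J\big)\,\hat U^{(0)}(\mu,\mu_0)\,\big(1 - \tfrac{\alpha_s(\mu_0)}{4\pi}\hat J\big)$, so an error in $\hat\gamma^{(1)}$ propagates directly into the entries of $\hat U$.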
We present Denario, an AI multi-agent system designed to serve as a scientific research assistant. Denario can perform many different tasks, such as generating ideas, checking the literature, developing research plans, writing and executing code, making plots, and drafting and reviewing a scientific paper. The system has a modular architecture, allowing it to handle specific tasks, such as generating an idea, or to carry out an end-to-end scientific analysis using Cmbagent as a deep-research backend. In this work, we describe Denario and its modules in detail, and illustrate its capabilities by presenting multiple papers it generated across different scientific disciplines, such as astrophysics, biology, biophysics, biomedical informatics, chemistry, materials science, mathematical physics, medicine, neuroscience, and planetary science. Denario also excels at combining ideas from different disciplines, and we illustrate this by showing a paper that applies methods from quantum physics and machine learning to astrophysical data. We report the evaluations performed on these papers by domain experts, who provided both numerical scores and review-like feedback. We then highlight the strengths, weaknesses, and limitations of the current system. Finally, we discuss the ethical implications of AI-driven research and reflect on how such technology relates to the philosophy of science. We publicly release the code at this https URL. A Denario demo can also be run directly on the web at this https URL, and the full app will be deployed on the cloud.
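As a purely illustrative sketch of the modular architecture described above (this is not Denario's API; every name below is hypothetical), such a pipeline can be thought of as a sequence of specialised agents reading and writing a shared project state, where each module can also be invoked on its own:

```python
# Hypothetical illustration of a modular multi-agent research pipeline; it does not
# use or reproduce Denario's code, only the general architecture the paper describes.
from dataclasses import dataclass, field

@dataclass
class ProjectState:
    idea: str = ""
    plan: str = ""
    results: str = ""
    draft: str = ""
    log: list = field(default_factory=list)

def make_agent(name: str, attr: str):
    """Stand-in for an LLM-backed module (idea generation, planning, analysis, writing)."""
    def agent(state: ProjectState) -> ProjectState:
        setattr(state, attr, f"{name} output")   # a real agent would call an LLM and tools here
        state.log.append(name)
        return state
    return agent

PIPELINE = [
    make_agent("idea", "idea"),
    make_agent("plan", "plan"),
    make_agent("analysis", "results"),
    make_agent("paper", "draft"),
]

state = ProjectState()
for agent in PIPELINE:        # end-to-end run; individual agents can also be run alone
    state = agent(state)
print(state.log)              # ['idea', 'plan', 'analysis', 'paper']
```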
A detailed taxonomy of hallucinations in Large Language Models is presented, formally defining them as an inherent, inevitable characteristic of computable models and categorizing their various types, underlying causes, evaluation methods, and mitigation strategies, alongside considerations for human factors and monitoring.
This paper presents FitNets, a method for training deep and thin neural networks by transferring intermediate representations from larger, shallower teacher networks. The approach enables smaller, more efficient student models to outperform their teachers, achieving better accuracy and significantly reduced parameter counts and inference times on datasets like CIFAR-10 and CIFAR-100.
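A minimal sketch of the hint-based stage of this kind of training, assuming generic teacher and student backbones whose intermediate feature maps are available (the layer names and shapes below are illustrative): the thin student's "guided" layer is regressed onto the teacher's "hint" layer through a small learned regressor.

```python
# Minimal sketch of FitNets-style "hint" training (stage 1); shapes are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HintRegressor(nn.Module):
    """1x1 convolution matching the thin student's channels to the teacher's channels."""
    def __init__(self, student_ch: int, teacher_ch: int):
        super().__init__()
        self.conv = nn.Conv2d(student_ch, teacher_ch, kernel_size=1)

    def forward(self, student_feat):
        return self.conv(student_feat)

def hint_loss(teacher_hint, student_guided, regressor):
    # L_HT = 1/2 * || u_h(x) - r(v_g(x)) ||^2, averaged over the batch
    return 0.5 * F.mse_loss(regressor(student_guided), teacher_hint)

# Toy usage with random tensors standing in for real network activations
teacher_hint = torch.randn(8, 64, 16, 16)      # teacher hint layer: 64 channels
student_guided = torch.randn(8, 32, 16, 16)    # thin student guided layer: 32 channels
reg = HintRegressor(32, 64)
loss = hint_loss(teacher_hint, student_guided, reg)
loss.backward()   # computes gradients for the regressor; in real training the
                  # student features carry gradients as well
```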
Autoregressive protein language models (pLMs) have emerged as powerful tools to efficiently design functional proteins with extraordinary diversity, as evidenced by the successful generation of diverse enzyme families, including lysozymes and carbonic anhydrases. However, a fundamental limitation of pLMs is their propensity to sample from dense regions within the training distribution, which constrains their ability to sample from rare, high-value regions of the sequence space. This limitation becomes particularly critical in applications targeting underrepresented distribution tails, such as engineering for enzymatic activity or binding affinity. To address this challenge, we implement DPO_pLM, a reinforcement learning (RL) framework for protein sequence optimization with pLMs. Drawing inspiration from the success of RL in aligning language models to human preferences, we approach protein optimization as an iterative process that fine-tunes pLM weights to maximize a reward provided by an external oracle. Our strategy demonstrates that RL can efficiently optimize for a variety of custom properties without the need for additional data, achieving significant improvements while preserving sequence diversity. We applied DPO_pLM to the design of EGFR binders, successfully identifying nanomolar binders within hours. Our code is publicly available at this https URL
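As a self-contained toy of the underlying idea (not the DPO_pLM implementation, which fine-tunes a pretrained pLM with a DPO-style objective): treat the generator as a policy, score its samples with an external oracle, and update the weights so that high-reward sequences become more likely. The "oracle" and the position-independent "model" below are deliberately trivial stand-ins.

```python
# Toy REINFORCE-style sketch of oracle-guided sequence optimization; the reward here
# (histidine content) is a hypothetical stand-in for a lab or ML oracle.
import torch

AA = "ACDEFGHIKLMNPQRSTVWY"
L, BATCH = 30, 256
logits = torch.zeros(L, len(AA), requires_grad=True)   # stand-in for pLM parameters
opt = torch.optim.Adam([logits], lr=0.05)

def oracle(seqs):
    return torch.tensor([s.count("H") / L for s in seqs])

for step in range(200):
    dist = torch.distributions.Categorical(logits=logits)  # one distribution per position
    idx = dist.sample((BATCH,))                             # (BATCH, L) sampled residues
    seqs = ["".join(AA[i] for i in row) for row in idx.tolist()]
    reward = oracle(seqs)
    advantage = reward - reward.mean()                      # baseline reduces variance
    logp = dist.log_prob(idx).sum(dim=1)                    # log-likelihood per sequence
    loss = -(advantage * logp).mean()                       # policy-gradient objective
    opt.zero_grad()
    loss.backward()
    opt.step()

print("mean oracle reward after fine-tuning:", reward.mean().item())
```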
University of Washington, CNRS, University of Toronto, University of Illinois at Urbana-Champaign, University of Pittsburgh, Monash University, University of California, Santa Barbara, Harvard University, University of Utah, Chinese Academy of Sciences, Vanderbilt University, University of Oklahoma, New York University, Tel Aviv University, Universidad de Concepción, University of Edinburgh, The University of Texas at Austin, Peking University, IEEC, KU Leuven, Columbia University, University of Florida, Space Telescope Science Institute, York University, Johns Hopkins University, Universidad Diego Portales, University of Wisconsin-Madison, The Pennsylvania State University, University of Arizona, Australian National University, Leiden University, University of Warwick, The Ohio State University, Universitat de Barcelona, Carnegie Observatories, University of Connecticut, University of St Andrews, University of Colorado Boulder, Flatiron Institute, Lomonosov Moscow State University, George Mason University, Universidade Federal do Rio de Janeiro, Northern Arizona University, Russian Academy of Sciences, Universidad de Chile, University of Kentucky, University of Texas at Arlington, New Mexico State University, Centro de Astrobiología (CAB), Southwest Research Institute, Embry-Riddle Aeronautical University, Drexel University, Universidade Federal de Sergipe, Montana State University, Towson University, Universidad de La Laguna (ULL), Instituto de Astrofísica de Canarias (IAC), Universidade de São Paulo, Universidad Andrés Bello, Universitat Politècnica de Catalunya, Pontificia Universidad Católica de Chile, European Space Agency (ESA), Konkoly Observatory, Universidad de La Serena, CONACyT, Eötvös Loránd University, Texas Christian University, Western Washington University, Universidad Nacional Autónoma de México, Université Côte d'Azur, Universidad Católica del Norte, Universität Potsdam, Université de Montpellier, Valparaiso University, University of Nebraska at Omaha, Centro de Astrofísica y Tecnologías Afines (CATA), Max-Planck-Institut für extraterrestrische Physik (MPE), Western Carolina University, Max-Planck-Institut für Sonnensystemforschung, Observatórios Nacionais, Eötvös Loránd Research Network (ELKH), Universidad Autónoma del Estado de Morelos (UAEM), Centre de Recerca en Ciències de la Terra (Geo3BCN)-CSIC, Universität Heidelberg, Max-Planck-Institut für Astronomie, Center for Astrophysics | Harvard & Smithsonian, Leibniz-Institut für Astrophysik Potsdam (AIP), Institut de Ciències del Cosmos (ICCUB)
Mapping the local and distant Universe is key to our understanding of it. For decades, the Sloan Digital Sky Survey (SDSS) has made a concerted effort to map millions of celestial objects to constrain the physical processes that govern our Universe. The most recent and fifth generation of SDSS (SDSS-V) is organized into three scientific "mappers": the Milky Way Mapper (MWM), which aims to chart the various components of the Milky Way and constrain its formation and assembly; the Black Hole Mapper (BHM), which focuses on understanding supermassive black holes in distant galaxies across the Universe; and the Local Volume Mapper (LVM), which uses integral field spectroscopy to map the ionized interstellar medium in the Local Group. This paper outlines the scope and content of the nineteenth data release (DR19) of SDSS, the most substantial SDSS-V release to date and the first to contain data from all three mappers. We also describe nine value added catalogs (VACs) that enhance the science that can be conducted with the SDSS-V data. Finally, we discuss how to access SDSS DR19 and provide illustrative examples and tutorials.
We present an updated global analysis of neutrino oscillation data as of September 2024. The parameters $\theta_{12}$, $\theta_{13}$, $\Delta m^2_{21}$, and $|\Delta m^2_{3\ell}|$ ($\ell = 1,2$) are well determined, with relative precision at $3\sigma$ of about 13\%, 8\%, 15\%, and 6\%, respectively. The third mixing angle $\theta_{23}$ still suffers from the octant ambiguity, with no clear indication of whether it is larger or smaller than $45^\circ$. The determination of the leptonic CP phase $\delta_{CP}$ depends on the neutrino mass ordering: for normal ordering the global fit is consistent with CP conservation within $1\sigma$, whereas for inverted ordering CP-violating values of $\delta_{CP}$ around $270^\circ$ are favored over CP conservation at more than $3.6\sigma$. While the present data have in principle $2.5$--$3\sigma$ sensitivity to the neutrino mass ordering, there are different tendencies in the global data that reduce the discrimination power: T2K and NOvA appearance data individually favor normal ordering, but they are more consistent with each other for inverted ordering. Conversely, the joint determination of $|\Delta m^2_{3\ell}|$ from global disappearance data prefers normal ordering. Altogether, the global fit including long-baseline, reactor, and IceCube atmospheric data results in an almost equally good fit for both orderings. Only when the $\chi^2$ table for atmospheric neutrino data from Super-Kamiokande is added to our $\chi^2$ does the global fit prefer normal ordering, with $\Delta\chi^2 = 6.1$. We also provide updated ranges and correlations for the effective parameters sensitive to the absolute neutrino mass from $\beta$-decay, neutrinoless double-beta decay, and cosmology.
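For reference, the effective mass parameters referred to in the last sentence are the standard combinations of the mass eigenvalues $m_i$ and the leptonic mixing matrix $U$,
$$
m_\beta \;=\; \Big(\textstyle\sum_i |U_{ei}|^2\, m_i^2\Big)^{1/2},
\qquad
m_{\beta\beta} \;=\; \Big|\textstyle\sum_i U_{ei}^2\, m_i\Big|,
\qquad
\Sigma \;=\; \textstyle\sum_i m_i ,
$$
probed by $\beta$-decay endpoint measurements, neutrinoless double-beta decay, and cosmology, respectively.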
Researchers from Université Libre de Bruxelles and Universitat de Barcelona investigated how different network representations of symbolic music influence structural properties and alignment with human perception. Their work, using eight distinct network models and a perceptual model, found that simpler, single-feature networks align better with human cognitive processing, suggesting modular perception, and that musical networks are structured to concentrate uncertainty in perceptually relevant regions.
Efficient and accurate 3D reconstruction is crucial for various applications, including augmented and virtual reality, medical imaging, and cinematic special effects. While traditional Multi-View Stereo (MVS) systems have been fundamental to these applications, the use of neural implicit fields for 3D scene modeling has introduced new possibilities for handling complex topologies and continuous surfaces. However, neural implicit fields often suffer from computational inefficiencies, overfitting, and heavy reliance on data quality, limiting their practical use. This paper presents an enhanced MVS framework that integrates multi-view 360-degree imagery with robust camera pose estimation via Structure from Motion (SfM) and advanced image processing for point cloud densification, mesh reconstruction, and texturing. Our approach significantly improves upon traditional MVS methods, offering superior accuracy and precision as validated using Chamfer distance metrics on the Realistic Synthetic 360 dataset. The developed MVS technique enhances the detail and clarity of 3D reconstructions and demonstrates superior computational efficiency and robustness in complex scene reconstruction, effectively handling occlusions and varying viewpoints. These improvements suggest that our MVS framework can compete with and potentially exceed current state-of-the-art neural implicit field methods, especially in scenarios requiring real-time processing and scalability.
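A minimal sketch of the Chamfer distance used for this kind of validation, assuming the reconstruction and ground truth are available as sampled point clouds (conventions differ on squaring the distances and on summing versus averaging the two directions):

```python
# Symmetric Chamfer distance between two point clouds (one common convention).
import numpy as np
from scipy.spatial import cKDTree

def chamfer_distance(points_a: np.ndarray, points_b: np.ndarray) -> float:
    """Mean squared nearest-neighbour distance, accumulated in both directions."""
    d_ab, _ = cKDTree(points_b).query(points_a)   # for each point in A, nearest in B
    d_ba, _ = cKDTree(points_a).query(points_b)   # for each point in B, nearest in A
    return float(np.mean(d_ab**2) + np.mean(d_ba**2))

# Toy usage with random clouds standing in for a reconstruction and its ground truth
rng = np.random.default_rng(0)
reconstruction = rng.uniform(size=(5000, 3))
ground_truth = reconstruction + rng.normal(scale=0.01, size=(5000, 3))
print(chamfer_distance(reconstruction, ground_truth))
```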
While representation learning and generative modeling seek to understand visual data, unifying both domains remains unexplored. Recent Unified Self-Supervised Learning (SSL) methods have started to bridge the gap between both paradigms. However, they rely solely on semantic token reconstruction, which requires an external tokenizer during training -- introducing a significant overhead. In this work, we introduce Sorcen, a novel unified SSL framework, incorporating a synergic Contrastive-Reconstruction objective. Our Contrastive objective, "Echo Contrast", leverages the generative capabilities of Sorcen, eliminating the need for additional image crops or augmentations during training. Sorcen "generates" an echo sample in the semantic token space, forming the contrastive positive pair. Sorcen operates exclusively on precomputed tokens, eliminating the need for an online token transformation during training, thereby significantly reducing computational overhead. Extensive experiments on ImageNet-1k demonstrate that Sorcen outperforms the previous Unified SSL SoTA by 0.4%, 1.48 FID, 1.76%, and 1.53% on linear probing, unconditional image generation, few-shot learning, and transfer learning, respectively, while being 60.8% more efficient. Additionally, Sorcen surpasses previous single-crop MIM SoTA in linear probing and achieves SoTA performance in unconditional image generation, highlighting significant improvements and breakthroughs in Unified SSL models.
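For concreteness, the sketch below shows an InfoNCE-style contrastive loss over paired embeddings, the kind of objective an echo-based positive pair can be plugged into (a generic sketch, not Sorcen's exact loss; all names are illustrative): each sample's generated "echo" serves as its positive, and the other samples in the batch serve as negatives.

```python
# Generic InfoNCE contrastive loss over anchor/echo embedding pairs.
import torch
import torch.nn.functional as F

def info_nce(z_anchor, z_echo, temperature: float = 0.1):
    z_anchor = F.normalize(z_anchor, dim=-1)
    z_echo = F.normalize(z_echo, dim=-1)
    logits = z_anchor @ z_echo.T / temperature   # (B, B) cosine similarities
    targets = torch.arange(z_anchor.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: embeddings of precomputed semantic tokens and of their generated echoes
z = torch.randn(64, 256)
z_echo = z + 0.1 * torch.randn(64, 256)
print(info_nce(z, z_echo).item())
```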
Mitigation of the threat from airbursting asteroids requires an understanding of the potential risk they pose for the ground. How asteroids release their kinetic energy in the atmosphere is not well understood due to the rarity of significant impacts. Ordinary chondrites, in particular L chondrites, represent a frequent type of Earth-impacting asteroids. Here, we present the first comprehensive, space-to-lab characterization of an L chondrite impact. Small asteroid 2023 CX1 was detected in space and predicted to impact over Normandy, France, on 13 February 2023. Observations from multiple independent sensors and reduction techniques revealed an unusual but potentially high-risk fragmentation behavior. The nearly spherical 650 $\pm$ 160 kg (72 $\pm$ 6 cm diameter) asteroid catastrophically fragmented around 28 km altitude, releasing 98% of its total energy in a concentrated region of the atmosphere. The resulting shockwave was spherical, not cylindrical, and released more energy closer to the ground. This type of fragmentation increases the risk of significant damage at ground level. These results warrant consideration for a planetary defense strategy for cases where a >3--4 MPa dynamic pressure is expected, including planning for evacuation of areas beneath anticipated disruption locations.
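The quoted threshold refers to the aerodynamic ram pressure on the body, which for an impactor of speed $v$ in air of density $\rho_{\rm atm}(h)$ at altitude $h$ is, to a good approximation,
$$
P_{\rm dyn} \;\simeq\; \rho_{\rm atm}(h)\, v^{2},
$$
so fragmentation is expected roughly where this pressure first exceeds the bulk strength of the body, with faster or deeper-penetrating impactors reaching higher dynamic pressures.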
Researchers conducted the first N-body/hydrodynamics simulation of a Milky Way-sized galaxy with star-by-star resolution, coupling traditional methods with a deep learning surrogate model to overcome the computational bottlenecks of supernova feedback. The approach enables simulating 300 billion particles and achieves a 113x speedup compared to conventional methods for equivalent resolution.
We formulate the statistics of peaks of non-Gaussian random fields and implement it to study the sphericity of peaks. For non-Gaussianity of the local type, we present a general formalism valid regardless of how large the deviation from Gaussian statistics is. For general types of non-Gaussianity, we provide a framework that applies to any system with a given power spectrum and the corresponding bispectrum in the regime in which contributions from higher-order correlators can be neglected. We present an explicit expression for the most probable values of the sphericity parameters, including the effect of non-Gaussianity on the shape. We show that the effects of small perturbative non-Gaussianity on the sphericity parameters are negligible, as they are even smaller than the subleading Gaussian corrections. In contrast, we find that large non-Gaussianity can significantly distort the peak configurations, making them much less spherical.
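For the local type referred to above, the field is the usual quadratic transformation of a Gaussian field, written here for the curvature perturbation in a common convention,
$$
\zeta(\mathbf{x}) \;=\; \zeta_{\rm G}(\mathbf{x}) \;+\; \tfrac{3}{5}\, f_{\rm NL}\left[\zeta_{\rm G}^{2}(\mathbf{x}) - \langle \zeta_{\rm G}^{2}\rangle\right],
$$
which is what permits a formalism valid at all orders in the non-Gaussian amplitude, whereas the bispectrum-based framework for general shapes keeps only the leading non-Gaussian correction.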
This document is meant to be a practical introduction to the analytical and numerical manipulation of Fermionic Gaussian systems. Starting from the basics, we move to relevant modern results and techniques, presenting numerical examples and studying relevant Hamiltonians, such as the transverse field Ising Hamiltonian, in detail. We finish by introducing novel algorithms connecting fermionic Gaussian states with matrix product state techniques. All the numerical examples make use of the free Julia package F_utilities.
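As a minimal Python counterpart of the kind of computation covered in the notes (the notes themselves use the Julia package F_utilities), the sketch below obtains the ground-state energy of the open transverse-field Ising chain from its quadratic fermionic (BdG) form after a Jordan-Wigner transformation. At criticality ($J = h$) the energy per site approaches $-4/\pi$, a convenient check.

```python
# Ground-state energy of the open transverse-field Ising chain via its free-fermion form,
# H = sum_ij c_i^dag A_ij c_j + 1/2 sum_ij (c_i^dag B_ij c_j^dag + h.c.) - h*N.
import numpy as np

def tfi_ground_energy(n_sites: int, J: float = 1.0, h: float = 1.0) -> float:
    A = 2.0 * h * np.eye(n_sites)                 # on-site term from the transverse field
    B = np.zeros((n_sites, n_sites))
    for i in range(n_sites - 1):                  # open boundary conditions
        A[i, i + 1] = A[i + 1, i] = -J            # hopping from sigma^x sigma^x
        B[i, i + 1] = -J                          # pairing from sigma^x sigma^x
        B[i + 1, i] = +J                          # (B is antisymmetric)
    # Single-particle energies are the singular values of (A + B)
    eps = np.linalg.svd(A + B, compute_uv=False)
    return 0.5 * (np.trace(A) - eps.sum()) - h * n_sites

for n in (16, 64, 256):
    print(n, tfi_ground_energy(n) / n)            # tends to -4/pi ~ -1.2732 at criticality
```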
Stellar flares are intense bursts of radiation caused by magnetic reconnection events on active stars. They are especially frequent on M dwarfs, where they can strongly influence planetary habitability. Flare frequency distributions (FFDs) are usually modeled as power laws, but recent studies have proposed alternatives such as lognormal distributions, implying different flare mechanisms and planetary impacts. This work investigates which statistical distribution best describes flare occurrences on M dwarfs, considering both equivalent duration (ED), the quantity directly measured from photometry, and bolometric energy, which is more relevant for habitability assessments. We analyzed 110 M dwarfs observed with TESS and CHEOPS, detecting 5,620 flares. Complex events were decomposed, detection biases corrected, and FFDs from both missions scaled to build a combined distribution spanning nearly 10 orders of magnitude in bolometric energy. ED-based FFDs follow a power law, reflecting intrinsic photometric flare occurrence. However, bolometric energy-based FFDs deviate from a pure power law, being better described by a lognormal distribution or, more accurately, by a truncated power law with a break near $10^{33}$ erg, the typical superflare threshold. This truncation suggests a change in flare-generation physics between regular flares and superflares, with implications for the cumulative impact on exoplanetary atmospheres. The apparent low-energy flattening previously attributed to lognormal behavior arises from observational biases, while the drop in flare frequency above $10^{35}$ erg remains unexplained, possibly reflecting an intrinsic cutoff or current observational limits. The upcoming PLATO mission will be well suited to probe both regimes.
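A minimal sketch of the standard maximum-likelihood fit for a differential power-law index above a completeness threshold (the paper's actual pipeline, with flare decomposition and bias corrections, is more involved):

```python
# Clauset-Shalizi-Newman maximum-likelihood estimate of alpha in dN/dE ~ E^(-alpha)
# for flare energies above an assumed completeness limit e_min.
import numpy as np

def powerlaw_mle(energies: np.ndarray, e_min: float):
    e = energies[energies >= e_min]
    alpha = 1.0 + e.size / np.sum(np.log(e / e_min))
    err = (alpha - 1.0) / np.sqrt(e.size)          # standard error on alpha
    return alpha, err

# Toy usage: synthetic flare energies drawn from a true alpha = 1.8 power law
rng = np.random.default_rng(1)
e_min = 1e31                                       # erg, assumed completeness limit
sample = e_min * (1.0 - rng.uniform(size=5000)) ** (-1.0 / 0.8)   # inverse-CDF sampling
print(powerlaw_mle(sample, e_min))                 # ~ (1.8, 0.01)
```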
The first direct measurement of gravitational waves by the LIGO and Virgo collaborations has opened up new avenues to explore our Universe. This white paper outlines the challenges and gains expected in gravitational-wave searches at frequencies above the LIGO/Virgo band. The scarcity of possible astrophysical sources in most of this frequency range provides a unique opportunity to discover physics beyond the Standard Model operating both in the early and late Universe, and we highlight some of the most promising of these sources. We review several detector concepts that have been proposed to take up this challenge, and compare their expected sensitivity with the signal strength predicted in various models. This report is the summary of a series of workshops on the topic of high-frequency gravitational wave detection, held in 2019 (ICTP, Trieste, Italy), 2021 (online) and 2023 (CERN, Geneva, Switzerland).
We present pbhstat, a publicly available Python package designed to compute the mass function and total abundance of primordial black holes (PBHs) from a given primordial power spectrum. The package offers a modular framework using multiple statistical approaches, including Press-Schechter theory, peaks theory, and formalisms based on the non-linear compaction function. Currently, the implementation is limited to scenarios with nearly Gaussian initial conditions.
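As a minimal illustration of the simplest of these statistical approaches (a generic Press-Schechter-type estimate, not the pbhstat API; the collapse threshold below is an assumed value, commonly quoted in the 0.4--0.6 range):

```python
# Press-Schechter-type PBH mass fraction at formation for a Gaussian density field:
# beta ~ integral_{delta_c}^inf P(delta) d delta = (1/2) erfc(delta_c / (sqrt(2) sigma)).
import numpy as np
from scipy.special import erfc

def beta_press_schechter(sigma: float, delta_c: float = 0.45) -> float:
    """Fraction of horizon patches collapsing to PBHs for rms amplitude sigma."""
    return 0.5 * erfc(delta_c / (np.sqrt(2.0) * sigma))

# Toy usage: beta is exponentially sensitive to the ratio delta_c / sigma
for sigma in (0.03, 0.05, 0.08):
    print(sigma, beta_press_schechter(sigma))
```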