University of Louvain
This study from UCLouvain identifies a previously uncharacterized tension between B-mode polarization data from the South Pole Telescope (SPT) and BICEP-Keck (BK) measurements, dubbed the "BaBy Cosmic Tension." While SPT data initially indicated a preference for non-vanishing primordial gravitational waves, a comprehensive Bayesian analysis incorporating Planck low-$\ell$ TT data suggests that this SPT B-mode excess is more likely attributable to an unmodeled foreground than to an inflationary signal.
GVE-Leiden introduces a highly optimized parallel implementation of the Leiden algorithm for shared-memory multicore CPUs, achieving a processing rate of 403 million edges/second and running up to 436 times faster than sequential Leiden, while consistently producing well-connected communities and outperforming GPU-based solutions on several large graphs.
Time-varying optimization problems are central to many engineering applications, where performance metrics and system constraints evolve dynamically with time. Several algorithms have been proposed to address these problems; a common characteristic among them is their implicit reliance on knowledge of the optimizers' temporal variability. In this paper, we provide a fundamental characterization of this property: we show that an algorithm can track time-varying optimizers if and only if it incorporates a model of the temporal variability of the optimization problem. We refer to this concept as the internal model principle of time-varying optimization. Our analysis relies on showing that time-varying optimization problems can be recast as output regulation problems and, by using tools from center manifold theory, we establish necessary and sufficient conditions for exact asymptotic tracking. As a result, these findings enable the design of new algorithms for time-varying optimization. We demonstrate the effectiveness of the approach through numerical experiments on both synthetic problems and the dynamic traffic assignment problem from traffic control.
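As a toy illustration of the internal model principle stated above (a minimal sketch under assumed settings, not the algorithm from the paper), consider tracking the minimizer of $f(x,t)=\tfrac{1}{2}(x-at)^2$, which drifts linearly in time: a memoryless gradient step settles at a constant tracking error, whereas augmenting the step with a model of the drift drives the error to zero.

```python
# Toy illustration of the internal-model idea, not the paper's algorithm:
# track x*(t) = a*t for f(x, t) = 0.5*(x - a*t)^2.  A plain gradient step has a
# constant steady-state tracking error, while adding a (here assumed-known)
# model of the drift removes it.
a, dt, eta, steps = 1.0, 0.1, 0.5, 200
x_gd, x_im = 0.0, 0.0
for k in range(steps):
    t = k * dt
    grad_gd = x_gd - a * t
    grad_im = x_im - a * t
    x_gd = x_gd - eta * grad_gd            # memoryless gradient descent
    x_im = x_im - eta * grad_im + a * dt   # gradient step plus internal model of the ramp
t_end = steps * dt
print("plain GD tracking error:   ", abs(x_gd - a * t_end))  # settles near a*dt/eta = 0.2
print("with drift model:          ", abs(x_im - a * t_end))  # decays to ~0
```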
We study stochastic effects in viable ultra-slow-roll inflation models that produce primordial black holes. We consider asteroid, solar, and supermassive black hole seed masses. In each case, we simulate $10^8$ patches of the universe that may collapse into PBHs. In every patch, we follow $4\times10^4$ momentum shells to construct its spherically symmetric profile from first principles, without introducing a window function. We include the effects of critical collapse and the radiation era transfer function. The resulting compaction function profiles are very spiky due to stochastic kicks. This can enhance the PBH abundance by up to 36 orders of magnitude, depending on the mass range and collapse criterion. The PBH mass function shifts to higher masses and widens significantly. These changes may have a large effect on observational constraints of PBHs and make it possible to generate PBHs with a smaller amplitude of the power spectrum. However, convergence issues for the mass function remain. The results call for redoing collapse simulations to determine the collapse criterion for spiky profiles.
University of Toronto; University of Mississippi; Academia Sinica; University of Cincinnati; University of Illinois at Urbana-Champaign; University of Pittsburgh; University of Oslo; University of Cambridge; University of Victoria; Kyungpook National University; Vanderbilt University; Université de Montréal; University of Oklahoma; DESY; University of Manchester; University of Zurich; University of Bern; Tel Aviv University; UC Berkeley; University of Oxford; Nikhef; University of Science and Technology of China; Sungkyunkwan University; University of California, Irvine; Panjab University; Kyoto University; University of Bristol; The University of Edinburgh; Fermilab; University of British Columbia; Okayama University; Northwestern University; Boston University; University of Texas at Austin; Lancaster University; University of Florida; INFN Sezione di Pisa; Kansas State University; CERN; Argonne National Laboratory; Universidad de Granada; University of Southampton; University of Minnesota; University of Maryland; Brookhaven National Laboratory; University of Wisconsin-Madison; Université Paris-Saclay; University of Helsinki; King’s College London; University of Liverpool; Sorbonne Université; University of Massachusetts Amherst; University of Rochester; Virginia Tech; Fermi National Accelerator Laboratory; University of Sheffield; Technion; University of Geneva; Bergische Universität Wuppertal; University of Belgrade; University of Glasgow; University of Siegen; Queen Mary University of London; University of Warwick; Universidade Federal do ABC; Wayne State University; Indian Institute of Technology Madras; Iowa State University; Karlsruhe Institute of Technology; Università di Genova; University of Sussex; University College Dublin; University of New Mexico; Universidade Federal do Rio de Janeiro; Università di Trieste; Sejong University; University of Southern Denmark; University of Oregon; University of Alabama; Universität Hamburg; SOKENDAI (The Graduate University for Advanced Studies); Tokyo Institute of Technology; Universitat Autònoma de Barcelona; Belarusian State University; Università di Bologna; Pontificia Universidad Católica de Chile; Universidad de Antioquia; Albert-Ludwigs-Universität Freiburg; University of Kansas; INFN, Laboratori Nazionali di Frascati; Università di Napoli Federico II; University of California, Santa Cruz; CINVESTAV; Universidad de Los Andes; University of California Riverside; Université de Paris-Saclay; University of Louvain; INFN - Sezione di Padova; AGH University of Science and Technology; Ben Gurion University; Università degli Studi di Urbino ’Carlo Bo’; University of Toyama; INFN Milano-Bicocca; Institute of High Energy Physics, CAS; SLAC; INFN Sezione di Roma; INFN Cagliari; INFN - Padova; INFN Milano; University of the Pacific; INFN-Lecce; University of Mississippi Medical Center; The American University in Cairo; INFN-Firenze; Université de Savoie Mont Blanc; Universidad Antonio Nariño; Laboratoire de Physique Nucléaire et de Hautes Énergies; LAPP, Université Savoie Mont Blanc, CNRS; CPPM, Aix-Marseille Université, CNRS/IN2P3; University of Puerto Rico - Mayagüez; IFIC (CSIC & Universitat de Valencia); INFN - Perugia; INFN - Sezione di Ferrara; Université catholique de Louvain; Université Paris Diderot; Université Libre de Bruxelles; Université de Strasbourg; RWTH Aachen University; Université de Lyon; Université Clermont Auvergne; Università degli Studi di Milano; Università di Pavia; Università di Roma Tor Vergata
This is the third of five chapters of the final report [1] of the Workshop on Physics at HL-LHC, and perspectives on HE-LHC [2]. It is devoted to the study of the potential, in the search for Beyond the Standard Model (BSM) physics, of the High Luminosity (HL) phase of the LHC, defined as $3~\mathrm{ab}^{-1}$ of data taken at a centre-of-mass energy of $14~\mathrm{TeV}$, and of a possible future upgrade, the High Energy (HE) LHC, defined as $15~\mathrm{ab}^{-1}$ of data at a centre-of-mass energy of $27~\mathrm{TeV}$. We consider a large variety of new-physics models, both in a simplified-model fashion and in a more model-dependent one. A long list of contributions from the theory and experimental (ATLAS, CMS, LHCb) communities has been collected and merged to give a complete, wide, and consistent view of future prospects for BSM physics at the considered colliders. Beyond the usual standard candles, such as supersymmetric simplified models and resonances, considered for the evaluation of future collider potential, this report contains results on dark matter and dark sectors, long-lived particles, leptoquarks, sterile neutrinos, axion-like particles, heavy scalars, vector-like quarks, and more. Particular attention is paid, especially in the study of the HL-LHC prospects, to the detector upgrades, the assessment of future systematic uncertainties, and new experimental techniques. The general conclusion is that the HL-LHC, besides extending the present LHC mass and coupling reach by 20-50% in most new-physics scenarios, will also be able to constrain, and potentially discover, new physics that is presently unconstrained. Moreover, compared to the HL-LHC, the reach in most observables will generally more than double at the HE-LHC, which may represent a good candidate future facility for a final test of TeV-scale new physics.
Cosmic inflation may exhibit stochastic periods during which quantum fluctuations dominate over the semi-classical evolution. Extracting observables in these regimes is a notoriously difficult program as quantum randomness makes them fully probabilistic. However, among all the possible quantum histories, the ones which are relevant for Cosmology are conditioned by the requirement that stochastic inflation ended. From an observational point of view, it would be more convenient to model stochastic periods as starting from the time at which they ended and evolving backwards in time. We present a time-reversed approach to stochastic inflation, based on a reverse Fokker-Planck equation, which allows us to derive non-perturbatively the probability distribution of the field values at a given time before the end of the quantum regime. As a motivated example, we solve the flat semi-infinite potential and derive a new and exact formula for the probability distribution of the quantum-generated curvature fluctuations. It is normalisable while exhibiting tails that decay slowly, as for a Lévy distribution. Our reverse-time stochastic formalism could be applied to any inflationary potential and quantum diffusion era, including those that can lead to the formation of primordial black holes.
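For orientation, the standard forward-in-time description of stochastic inflation evolves the probability density $P(\phi,N)$ of the coarse-grained field with the number of e-folds $N$ through a Fokker-Planck equation; the form below is standard background reproduced under the assumption of an approximately constant Hubble rate $H$, and for the flat potential considered here the drift term vanishes so that only quantum diffusion remains:
$$\frac{\partial P(\phi,N)}{\partial N}=\frac{\partial}{\partial \phi}\!\left[\frac{V'(\phi)}{3H^2}\,P(\phi,N)\right]+\frac{H^2}{8\pi^2}\,\frac{\partial^2 P(\phi,N)}{\partial \phi^2},\qquad V'=0 \;\Rightarrow\; \frac{\partial P}{\partial N}=\frac{H^2}{8\pi^2}\,\frac{\partial^2 P}{\partial \phi^2}.$$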
This paper introduces the Raw Natural Image Noise Dataset (RawNIND), a diverse collection of paired raw images designed to support the development of denoising models that generalize across sensors, image development workflows, and styles. Two denoising methods are proposed: one operates directly on raw Bayer data for computational efficiency, while the other processes linear RGB images for improved generalization across sensors; both preserve flexibility for subsequent development. Both methods outperform traditional approaches that rely on developed images. Additionally, integrating denoising and compression at the raw-data level significantly enhances rate-distortion performance and computational efficiency. These findings suggest a paradigm shift toward raw-data workflows for efficient and flexible image processing.
In the framework of convolutional neural networks, downsampling is often performed with an average-pooling operation, where all the activations are treated equally, or with a max-pooling operation that only retains the element with maximum activation while discarding the others. Both of these operations are restrictive and have previously been shown to be sub-optimal. To address this issue, a novel pooling scheme, named ordinal pooling, is introduced in this work. Ordinal pooling rearranges all the elements of a pooling region into a sequence and assigns a different weight to each element based upon its position in the sequence. These weights are used to compute the pooling operation as a weighted sum of the rearranged elements of the pooling region. They are learned via standard gradient-based training, allowing the network to learn a behaviour anywhere in the spectrum between average-pooling and max-pooling in a differentiable manner. Our experiments suggest that it is advantageous for networks to perform different types of pooling operations within a pooling layer and that a hybrid behaviour between average- and max-pooling is often beneficial. More importantly, they also demonstrate that ordinal pooling leads to consistent improvements in accuracy over average- or max-pooling operations, while speeding up training and alleviating the issue of choosing the pooling operations and activation functions to be used in the network. In particular, ordinal pooling mainly helps on lightweight or quantized deep-learning architectures, as typically considered, e.g., for embedded applications.
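The operation described above can be made concrete with a minimal sketch (assuming one learnable weight per rank within the pooling window, weight sharing across channels, and a softmax normalisation; these are illustrative choices, not details taken from the paper):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class OrdinalPool2d(nn.Module):
    """Minimal sketch of ordinal pooling (2D), assuming one learned weight per rank.

    Each k x k window is sorted in descending order and reduced by a weighted sum
    over the ranks.  Weights close to [1, 0, ..., 0] reproduce max-pooling, while
    uniform weights reproduce average-pooling; anything in between is learnable.
    """
    def __init__(self, kernel_size=2, stride=2):
        super().__init__()
        self.k, self.s = kernel_size, stride
        # One logit per rank; softmax keeps the weights positive and summing to 1 (assumption).
        self.logits = nn.Parameter(torch.zeros(kernel_size * kernel_size))

    def forward(self, x):
        n, c, h, w = x.shape
        # Extract all k x k windows: (N, C*k*k, L), with L the number of windows.
        patches = F.unfold(x, self.k, stride=self.s)
        patches = patches.view(n, c, self.k * self.k, -1)
        # Sort the elements of every window (descending) and take a weighted sum over ranks.
        ordered, _ = patches.sort(dim=2, descending=True)
        weights = torch.softmax(self.logits, dim=0).view(1, 1, -1, 1)
        pooled = (ordered * weights).sum(dim=2)
        out_h = (h - self.k) // self.s + 1
        out_w = (w - self.k) // self.s + 1
        return pooled.view(n, c, out_h, out_w)

# Example: pool a random feature map from 8x8 down to 4x4.
feat = torch.randn(1, 3, 8, 8)
print(OrdinalPool2d()(feat).shape)  # torch.Size([1, 3, 4, 4])
```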
ETH Zurich; University of Washington; CNRS; University of Toronto; University of Cincinnati; INFN Sezione di Napoli; Charles University; Niigata University; Université de Montréal; Imperial College London; DESY; University of Bern; KEK; University College London; University of Oxford; Chonnam National University; Sungkyunkwan University; Osaka University; The University of Tokyo; Kyoto University; University of Regina; Yonsei University; University of British Columbia; Tata Institute of Fundamental Research; Okayama University; Louisiana State University; Lancaster University; Universidad de Granada; York University; University of Tokyo; Brookhaven National Laboratory; Stockholm University; University of Alberta; Université de Genève; University of Rochester; Virginia Tech; University of Sheffield; Chiba University; University of Glasgow; Queen Mary University of London; University of Warwick; Università degli Studi di Padova; Humboldt-Universität zu Berlin; Universidade Federal do ABC; University of Sussex; Warsaw University of Technology; Sejong University; INFN, Laboratori Nazionali del Gran Sasso; TRIUMF; STFC Rutherford Appleton Laboratory; Tokyo Metropolitan University; Tokyo Institute of Technology; Kobe University; Hamburg University; University of Winnipeg; Indian Institute of Technology Hyderabad; INFN, Laboratori Nazionali di Frascati; University of Valencia; Benemérita Universidad Autónoma de Puebla; University of Science and Technology; GSSI; Moulay Ismail University; Université de Paris-Saclay; National Centre for Nuclear Research; CIEMAT; University of Louvain; University of Silesia in Katowice; Wroclaw University of Science and Technology; Université de Reims Champagne Ardenne; University of Silesia; Università degli Studi di Bari; IRFU, CEA, Université Paris-Saclay; ICRR, University of Tokyo; JINR; Université Mohammed Premier; Dongshin University; LPNHE, Sorbonne Université, CNRS/IN2P3; Universidad Nacional de Ingeniería; IFJ PAN; INFN (Sezione di Bari); Université Paris-Saclay, CEA, Irfu; Université Paris Cité, CNRS, IN2P3; Université de Lyon, Université Claude Bernard Lyon 1, CNRS/IN2P3, IP2I; SUBATECH, CNRS/IN2P3, Université de Nantes, IMT Atlantique; IFIC, CSIC-UV; IGFAE, Universidade de Santiago de Compostela; Instituto de Física Corpuscular (IFIC), CSIC – Universitat de València; Università degli Studi di Salerno; Université de Strasbourg; RWTH Aachen University; Università di Pisa; Sapienza Università di Roma; University of Minnesota Duluth
This paper describes the analysis to estimate the sensitivity of the Hyper-Kamiokande experiment to long-baseline neutrino oscillation parameters using accelerator (anti)neutrinos. Results are presented for the CPV discovery sensitivity and precision measurements of the oscillation parameters $\delta_{CP}$, $\sin^2\theta_{23}$, $\Delta m^2_{32}$ and $\sin^2\theta_{13}$. With the assumed Hyper-Kamiokande running plan, a $5\sigma$ CPV discovery is possible in less than three years in the case of maximal CPV and known MO. In the absence of external constraints on the MO, considering the MO sensitivity of the Hyper-Kamiokande measurement using atmospheric neutrinos, the time for a CPV discovery is estimated to be around six years. Using the nominal final exposure of $27 \times 10^{21}$ protons on target, corresponding to 10 years with a ratio of 1:3 in neutrino to antineutrino beam mode, we expect to select approximately 10000 charged-current, quasi-elastic-like muon neutrino events and a similar number of muon antineutrino events. In the electron (anti)neutrino appearance channels, we expect approximately 2000 charged-current, quasi-elastic-like electron neutrino events and 800 electron antineutrino events. These large event samples will allow Hyper-Kamiokande to exclude CP conservation at the $5\sigma$ significance level for over 60% of the possible true values of $\delta_{CP}$.
Gamma-ray bursts (GRBs) have long been considered a possible source of high-energy neutrinos. While no correlations have yet been detected between high-energy neutrinos and GRBs, the recent observation of GRB 221009A - the brightest GRB observed by Fermi-GBM to date and the first one to be observed above an energy of 10 TeV - provides a unique opportunity to test for hadronic emission. In this paper, we leverage the wide energy range of the IceCube Neutrino Observatory to search for neutrinos from GRB 221009A. We find no significant deviation from background expectation across event samples ranging from MeV to PeV energies, placing stringent upper limits on the neutrino emission from this source.
GRChombo is an open-source code for performing Numerical Relativity (NR) time evolutions, built on top of the publicly available Chombo software for the solution of PDEs. Whilst GRChombo uses standard techniques in NR, it focusses on applications in theoretical physics where adaptability, both in terms of grid structure and in terms of code modification, is a key driver.
A characteristic observational signature of cosmic strings is short-duration gravitational wave (GW) bursts. These have been searched for by the LIGO-Virgo-KAGRA (LVK) collaboration, and will be searched for with LISA. We point out that these burst signals are repeated, since cosmic string loops evolve quasi-periodically in time, and will always appear from essentially the same position in the sky. We estimate the number of GW repeaters for LVK and LISA, and show that the string tension that can be probed scales as the detector sensitivity to the sixth power, which raises hope for detection with future GW detectors. The observation of repeated GW bursts from the same cosmic string loop helps to disentangle the GW waveform parameters from the sky localization.
We study the impact of fragmentation on the cosmic string loop number density, using an approach inspired by the three-scale model and a Boltzmann equation. We build a new formulation designed to be more amenable to numerical solution and present two complementary numerical methods to obtain the full loop distribution, including the effects of fragmentation and gravitational radiation. We show that fragmentation generically predicts a decay of the loop number density on large scales and a deviation from a pure power law. We expect fragmentation to be crucial for the calibration of loop distribution models.
In the framework of assessing pathology severity in chronic cough diseases, the medical literature underlines the lack of tools allowing the automatic, objective and reliable detection of cough events. This paper describes a system based on two microphones which we developed for this purpose. The proposed approach relies on a large variety of audio descriptors, an efficient feature-selection algorithm based on mutual information, and artificial neural networks. First, the possible use of a contact microphone (placed on the patient's thorax or trachea) as a complement to the audio signal is investigated. This study shows that the contact microphone suffers from reliability issues and conveys little new relevant information compared to the audio modality. Secondly, the proposed audio-only approach is compared to a commercially available system using four sensors, on a database containing different sound categories often misdetected as coughs and produced in various conditions. With an average sensitivity and specificity of 94.7% and 95% respectively, the proposed method achieves better cough detection performance than the commercial system.
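As an illustration of the kind of pipeline described above (a hedged sketch with placeholder data, an assumed feature count, and an assumed network size; not the authors' implementation), mutual-information-based feature selection can be chained with a small neural network as follows:

```python
import numpy as np
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 60))    # 60 audio descriptors per frame (placeholder data)
y = rng.integers(0, 2, size=1000)  # 1 = cough frame, 0 = other sound (placeholder labels)

clf = make_pipeline(
    SelectKBest(mutual_info_classif, k=20),           # keep the 20 most informative features
    MLPClassifier(hidden_layer_sizes=(32,), max_iter=500),
)
clf.fit(X, y)
print("training accuracy:", clf.score(X, y))
```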
We propose a mechanism of electroweak baryogenesis based on the Standard Model that explains the coincidence between the baryon and Dark Matter (DM) densities. Large curvature fluctuations slightly below the threshold for Primordial Black Hole (PBH) formation locally reheat the plasma above the sphaleron barrier when they collapse gravitationally, leading to regions of maximal baryogenesis at the Quantum Chromodynamics epoch. Using numerical relativity simulations, we calculate the overdensity threshold for baryogenesis. If PBHs contribute significantly to the DM, aborted PBHs can generate a baryon density and an averaged baryon-to-photon ratio consistent with observations.
This paper investigates the fundamental performance limits of gradient-based algorithms for time-varying optimization. Leveraging the internal model principle and root locus techniques, we show that temporal variabilities impose intrinsic limits on the achievable rate of convergence. For a problem with condition ratio $\kappa$ and time variation whose model has degree $n$, we show that the worst-case convergence rate of any minimal-order gradient-based algorithm is $\rho_{\text{TV}} = \left(\frac{\kappa-1}{\kappa+1}\right)^{1/n}$. This bound reveals a fundamental tradeoff between problem conditioning, temporal complexity, and rate of convergence. We further construct explicit controllers that attain the bound for low-degree models of time variation.
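Taking the stated bound at face value, a few representative values illustrate the tradeoff (the snippet simply evaluates the formula from the abstract; the chosen $\kappa$ and $n$ values are arbitrary examples):

```python
# Evaluate the worst-case rate rho_TV = ((kappa - 1) / (kappa + 1))**(1/n) from the abstract.
def worst_case_rate(kappa: float, n: int) -> float:
    return ((kappa - 1.0) / (kappa + 1.0)) ** (1.0 / n)

for kappa in (10.0, 100.0):
    for n in (1, 2, 4):
        print(f"kappa={kappa:6.1f}  n={n}  rho_TV={worst_case_rate(kappa, n):.4f}")
# n = 1 recovers the classic static bound (kappa-1)/(kappa+1); larger n (richer temporal
# variation) pushes the achievable rate toward 1, i.e. slower worst-case tracking.
```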
The Precision IceCube Next Generation Upgrade (PINGU) is a proposed low-energy in-fill extension to the IceCube Neutrino Observatory. With detection technology modeled closely on the successful IceCube example, PINGU will provide a 6 Mton effective mass for neutrino detection with an energy threshold of a few GeV. With an unprecedented sample of over 60,000 atmospheric neutrinos per year in this energy range, PINGU will make highly competitive measurements of neutrino oscillation parameters in an energy range over an order of magnitude higher than long-baseline neutrino beam experiments. PINGU will measure the mixing parameters $\theta_{23}$ and $\Delta m^2_{32}$, including the octant of $\theta_{23}$ for a wide range of values, and determine the neutrino mass ordering at $3\sigma$ median significance within 4 years of operation. PINGU's high precision measurement of the rate of $\nu_\tau$ appearance will provide essential tests of the unitarity of the $3\times 3$ PMNS neutrino mixing matrix. PINGU will also improve the sensitivity of searches for low mass dark matter in the Sun, use neutrino tomography to directly probe the composition of the Earth's core, and improve IceCube's sensitivity to neutrinos from Galactic supernovae. Reoptimization of the PINGU design has permitted substantial reduction in both cost and logistical requirements while delivering performance nearly identical to configurations previously studied. This document summarizes the results of detailed studies described in a more comprehensive document to be released soon.
The development of a system for the automatic, objective and reliable detection of cough events is a need underlined by the medical literature for years. The benefit of such a tool is clear, as it would allow the assessment of pathology severity in chronic cough diseases. Even though some approaches have recently reported solutions achieving this task with relative success, there is still no standardization regarding the method to adopt or the sensors to use. The goal of this paper is to study objectively the performance of several sensors for cough detection: ECG, thermistor, chest belt, accelerometer, contact and audio microphones. Experiments are carried out on a database of 32 healthy subjects producing, in a confined room and in three situations, voluntary cough at various volumes as well as other event categories which can lead to detection errors: background noise, forced expiration, throat clearing, speech and laugh. The relevance of each sensor is evaluated at three stages: the mutual information conveyed by its features, the ability to discriminate cough from these other sources of ambiguity at the frame level, and the ability to detect cough events. In this latter experiment, with both an average sensitivity and specificity of about 94.5%, the proposed approach is shown to clearly outperform the commercial Karmelsonix system, which achieved a specificity of 95.3% and a sensitivity of 64.9%.
The D-Egg, an acronym for ``Dual optical sensors in an Ellipsoid Glass for Gen2,'' is one of the optical modules designed for future extensions of the IceCube experiment at the South Pole. The D-Egg has an elongated-sphere shape to maximize the photon-sensitive effective area while maintaining a narrow diameter to reduce the cost and the time needed for drilling of the deployment holes in the glacial ice for the optical modules at depths up to 2700 meters. The D-Egg design is utilized for the IceCube Upgrade, the next stage of the IceCube project also known as IceCube-Gen2 Phase 1, where nearly half of the optical sensors to be deployed are D-Eggs. With two 8-inch high-quantum-efficiency photomultiplier tubes (PMTs) per module, D-Eggs offer an increased effective area while retaining the successful design of the IceCube digital optical module (DOM). The convolution of the wavelength-dependent effective area and the Cherenkov emission spectrum provides an effective photodetection sensitivity that is 2.8 times larger than that of IceCube DOMs. The signal of each of the two PMTs is digitized using ultra-low-power 14-bit analog-to-digital converters with a sampling frequency of 240 MSPS, enabling flexible event triggering, as well as seamless and lossless event recording from single-photon signals to multi-photon signals exceeding 200 photoelectrons within 10 nanoseconds. Mass production of D-Eggs has been completed, with 277 out of the 310 D-Eggs produced to be used in the IceCube Upgrade. In this paper, we report the design of the D-Eggs, as well as the sensitivity and the single- to multi-photon detection performance of mass-produced D-Eggs measured in a laboratory using the built-in data acquisition system in each D-Egg optical sensor module.