Image deblurring aims to recover the latent sharp image from its blurry counterpart and has a wide range of applications in computer vision. Convolutional Neural Networks (CNNs) have performed well in this domain for many years, but recently an alternative architecture, the Transformer, has demonstrated even stronger performance. Its superiority can be attributed to the multi-head self-attention (MHSA) mechanism, which offers a larger receptive field and better adaptability to the input content than CNNs. However, because the computational cost of MHSA grows quadratically with the input resolution, it becomes impractical for high-resolution image deblurring tasks. In this work, we propose a unified lightweight CNN that features a large effective receptive field (ERF) and achieves comparable or even better performance than Transformers at a lower computational cost. Our key design is an efficient CNN block, dubbed LaKD, equipped with a large-kernel depth-wise convolution and a spatial-channel mixing structure, attaining a comparable or larger ERF than Transformers with a smaller parameter scale. Specifically, we achieve +0.17 dB / +0.43 dB PSNR over the state-of-the-art Restormer on defocus / motion deblurring benchmark datasets with 32% fewer parameters and 39% fewer MACs. Extensive experiments demonstrate the superior performance of our network and the effectiveness of each module. Furthermore, we propose a compact and intuitive ERFMeter metric that quantitatively characterizes the ERF and correlates strongly with network performance. We hope this work inspires the research community to further explore the pros and cons of CNN and Transformer architectures beyond image deblurring tasks.
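As a rough illustration of the kind of block described above, the sketch below pairs a large-kernel depth-wise convolution (spatial mixing) with point-wise 1x1 convolutions (channel mixing) in PyTorch. The kernel size, expansion ratio, normalization, and residual layout are illustrative assumptions; the exact LaKD design is specified in the paper.

```python
import torch
import torch.nn as nn

class LargeKernelDWBlock(nn.Module):
    """Illustrative large-kernel depth-wise block (not the exact LaKD design).

    A depth-wise convolution with a large spatial kernel enlarges the effective
    receptive field, while point-wise 1x1 convolutions mix information across
    channels. Kernel size and expansion ratio are assumptions for illustration.
    """
    def __init__(self, channels: int, kernel_size: int = 31, expansion: int = 2):
        super().__init__()
        self.norm = nn.LayerNorm(channels)
        # Depth-wise: one filter per channel (groups=channels) -> spatial mixing
        self.dw = nn.Conv2d(channels, channels, kernel_size,
                            padding=kernel_size // 2, groups=channels)
        # Point-wise 1x1 convolutions -> channel mixing
        self.pw1 = nn.Conv2d(channels, channels * expansion, 1)
        self.act = nn.GELU()
        self.pw2 = nn.Conv2d(channels * expansion, channels, 1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W)
        residual = x
        x = self.norm(x.permute(0, 2, 3, 1)).permute(0, 3, 1, 2)
        x = self.dw(x)                        # large-kernel spatial mixing
        x = self.pw2(self.act(self.pw1(x)))   # channel mixing
        return x + residual                   # residual connection
```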
OB associations are low-density groups of young stars that are dispersing from their birth environment into the Galactic field. They are important for understanding the star formation process, early stellar evolution, the properties and distribution of young stars, and the processes by which young stellar groups disperse. Recent observations, particularly from Gaia, have shown that associations are highly complex, with a high degree of spatial, kinematic and temporal substructure. The kinematics of associations have shown them to be globally unbound and expanding, with the majority of recent studies revealing evidence for clear expansion patterns in the association subgroups, suggesting the subgroups were more compact in the past. This expansion is often non-isotropic, arguing against a simple explosive expansion as predicted by some models of residual gas expulsion. The star formation histories of associations are often complex, exhibiting moderate age spreads and temporal substructure, but have so far failed to reveal simple patterns of star formation propagation (e.g., triggering). These results have challenged the historical paradigm of associations as the expanded remnants of dense star clusters and suggest instead that they originate as highly substructured systems without a linear star formation history, but with multiple clumps of stars that have since expanded and begun to overlap, producing the complex systems we observe today. This has wide-ranging consequences for the early formation environments of most stars and planetary systems, including our own Solar System.
Future black hole (BH) imaging observations are expected to resolve finer features corresponding to higher-order images of hotspots and of the horizon-scale accretion flow. In spherical spacetimes, the image order is determined by the number of half-loops executed by the photons that form it. Consecutive-order images arrive after a delay time of approximately $\pi$ times the BH shadow radius. The fractional diameters, widths, and flux densities of consecutive-order images are exponentially demagnified by the lensing Lyapunov exponent, a characteristic of the spacetime. We investigate the appearance of a simple point-sized hotspot, either located at fixed spatial locations or in motion on circular orbits. The exact time delay between the appearance of its zeroth and first-order images agrees with our analytic estimate, which accounts for the observer inclination, with $\lesssim 20\%$ error for hotspots located $\lesssim 5M$ from a Schwarzschild BH of mass $M$. Since M87$^\star$ and Sgr A$^\star$ host geometrically-thick accretion flows, we also explore the variation in the diameters and widths of their first-order images with disk scale-height. Using a simple conical torus model, for realistic morphologies, we estimate the first-order image diameter to deviate from that of the shadow by $\lesssim 30\%$ and its width to be $\lesssim 1.3M$. Finally, the error in recovering the Schwarzschild lensing exponent ($\pi$) when using the diameters or the widths of the first and second-order images is estimated to be $\lesssim 20\%$. It will soon become possible to robustly learn more about the spacetime geometry of astrophysical BHs from such measurements.
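For concreteness, the Schwarzschild numbers implied by these scalings are standard results (in units $G=c=1$), not specific to this paper:

$$
r_{\rm sh}=3\sqrt{3}\,M\approx 5.2\,M,\qquad
\Delta t_{n+1,n}\approx \pi\,r_{\rm sh}\approx 16.3\,M,\qquad
\frac{w_{n+1}}{w_n}\sim e^{-\gamma_{\rm Schw}}=e^{-\pi}\approx 0.043,
$$

where $r_{\rm sh}$ is the shadow radius, $\Delta t_{n+1,n}$ the delay between consecutive-order images, and $w_n$ a width, diameter deviation, or flux density of the $n$th-order image demagnified by the lensing Lyapunov exponent $\gamma_{\rm Schw}=\pi$.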
The KBC void is a local underdensity with the observed relative density contrast $\delta \equiv 1 - \rho/\rho_{0} = 0.46 \pm 0.06$ between 40 and 300 Mpc around the Local Group. If mass is conserved in the Universe, such a void could explain the $5.3\sigma$ Hubble tension. However, the MXXL simulation shows that the KBC void causes $6.04\sigma$ tension with standard cosmology ($\Lambda$CDM). Combined with the Hubble tension, $\Lambda$CDM is ruled out at $7.09\sigma$ confidence. Consequently, the density and velocity distribution on Gpc scales suggest a long-range modification to gravity. In this context, we consider a cosmological MOND model supplemented with $11\,\mathrm{eV}/c^{2}$ sterile neutrinos. We explain why this $\nu$HDM model has a nearly standard expansion history, primordial abundances of light elements, and cosmic microwave background (CMB) anisotropies. In MOND, structure growth is self-regulated by external fields from surrounding structures. We constrain our model parameters with the KBC void density profile, the local Hubble and deceleration parameters derived jointly from supernovae at redshifts $0.023-0.15$, time delays in strong lensing systems, and the Local Group velocity relative to the CMB. Our best-fitting model simultaneously explains these observables at the $1.14\%$ confidence level ($2.53\sigma$ tension) if the void is embedded in a time-independent external field of $0.055\,a_{0}$. Thus, we show for the first time that the KBC void can naturally resolve the Hubble tension in Milgromian dynamics. Given the many successful a priori MOND predictions on galaxy scales that are difficult to reconcile with $\Lambda$CDM, Milgromian dynamics supplemented by $11\,\mathrm{eV}/c^{2}$ sterile neutrinos may provide a more holistic explanation for astronomical observations across all scales.
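As a rough orientation (a linear-theory estimate, not the paper's nonlinear MOND calculation), an outflow from a local underdensity boosts the locally measured expansion rate by

$$
\frac{\Delta H_{0}}{H_{0}}\;\simeq\;\frac{1}{3}\,f(\Omega_{\rm m})\,\delta
\;\approx\;\frac{1}{3}\times 0.5\times 0.46\;\approx\;8\%,
$$

taking the growth rate $f(\Omega_{\rm m})\approx\Omega_{\rm m}^{0.55}\approx 0.5$ for $\Omega_{\rm m}\approx 0.3$. This is comparable in size to the Hubble tension, which is why such a void could account for it; per the abstract, a void this deep is itself in strong tension with $\Lambda$CDM but can arise with the enhanced structure growth of Milgromian dynamics.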
We present a measurement of the pairwise kinematic Sunyaev-Zel'dovich (kSZ) signal using the Dark Energy Spectroscopic Instrument (DESI) Bright Galaxy Sample (BGS) Data Release 1 (DR1) galaxy sample overlapping with the Atacama Cosmology Telescope (ACT) CMB temperature map. Our analysis makes use of $1.6$ million galaxies with stellar masses $\log M_\star/M_\odot > 10$, and we explore measurements across a range of aperture sizes ($2.1' < \theta_{\rm ap} < 3.5'$) and stellar mass selections. This statistic directly probes the velocity field of the large-scale structure, a unique observable of cosmic dynamics and modified gravity. In particular, at low redshifts, this quantity is especially interesting, as deviations from General Relativity are expected to be largest. Notably, our result represents the highest-significance low-redshift ($z \sim 0.3$) detection of the kSZ pairwise effect yet. In our most optimal configuration ($\theta_{\rm ap} = 3.3'$, $\log M_\star > 11$), we achieve a $5\sigma$ detection. Assuming that an estimate of the optical depth and galaxy bias of the sample exists via, e.g., external observables, this measurement constrains the fundamental cosmological combination $H_0 f \sigma_8^2$. A key challenge is the degeneracy with the galaxy optical depth. We address this by combining CMB lensing, which allows us to infer the halo mass and galaxy population properties, with hydrodynamical simulation estimates of the mean optical depth, $\bar\tau$. We stress that this is a proof-of-concept analysis; with BGS DR2 data we expect to improve the statistical precision by roughly a factor of two, paving the way toward robust tests of modified gravity with kSZ-informed velocity-field measurements at low redshift.
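Measurements of this type rest on the standard pairwise kSZ estimator (the paper's exact weighting, binning, and aperture-photometry details may differ): for aperture temperatures $\delta T_i$ of galaxies at positions $\hat{\mathbf r}_i$,

$$
\hat p_{\rm kSZ}(r)\;=\;-\,\frac{\sum_{i<j}\left(\delta T_i-\delta T_j\right)c_{ij}}{\sum_{i<j}c_{ij}^{2}},
\qquad
c_{ij}=\hat{\mathbf r}_{ij}\cdot\frac{\hat{\mathbf r}_i+\hat{\mathbf r}_j}{2},
$$

with the sum over pairs whose comoving separation falls in a bin around $r$. The amplitude scales as the mean optical depth $\bar\tau$ times the mean pairwise velocity of the galaxies, which in linear theory depends on $H_0$, the growth rate $f$, $\sigma_8^2$, and the galaxy bias; with $\bar\tau$ and the bias fixed externally, the detection therefore constrains $H_0 f\sigma_8^2$.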
We analyze the complexity of classically simulating continuous-time dynamics of locally interacting quantum spin systems with a constant rate of entanglement breaking noise. We prove that a polynomial time classical algorithm can be used to sample from the state of the spins when the rate of noise is higher than a threshold determined by the strength of the local interactions. Furthermore, by encoding a 1D fault tolerant quantum computation into the dynamics of spin systems arranged on two or higher dimensional grids, we show that for several noise channels, the problem of weakly simulating the output state of both purely Hamiltonian and purely dissipative dynamics is expected to be hard in the low-noise regime.
The on-going X-ray all-sky survey with the eROSITA instrument will yield large galaxy cluster samples, which will bring strong constraints on cosmological parameters. In particular, the survey holds great promise to investigate the tension between CMB and low-redshift measurements. The current bottleneck preventing the full exploitation of the survey data is the systematics associated with the relation between survey observable and halo mass. Numerous recent studies have shown that gas mass and core-excised X-ray luminosity exhibit very low scatter at fixed mass. We propose a new method to reconstruct these quantities from low photon count data and validate the method using extensive eROSITA-like simulations. We find that even near the detection threshold of ~50 counts the core-excised luminosity and the gas mass can be recovered with 20-30% precision, which is substantially less than the scatter of the full integrated X-ray luminosity at fixed mass. When combined with an accurate calibration of the absolute mass scale (e.g. through weak gravitational lensing), our technique reduces the systematics on cosmological parameters induced by the mass calibration.
It is commonly believed that area laws for entanglement entropies imply that a quantum many-body state can be faithfully represented by efficient tensor network states - a conjecture frequently stated in the context of numerical simulations and analytical considerations. In this work, we show that this is in general not the case, except in one dimension. We prove that the set of quantum many-body states that satisfy an area law for all Rényi entropies contains a subspace of exponential dimension. Establishing a novel link between quantum many-body theory and the theory of communication complexity, we then show that there are states satisfying area laws for all Rényi entropies that nevertheless cannot be approximated by states with a classical description of small Kolmogorov complexity, including projected entangled pair states (PEPS) of polynomial size or states of multi-scale entanglement renormalisation (MERA). Not even a quantum computer with post-selection can efficiently prepare all quantum states fulfilling an area law, and we show that not all area law states can be eigenstates of local Hamiltonians. We also prove translationally invariant and isotropic instances of these results, and show a variation with decaying correlations using quantum error-correcting codes.
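For reference, the quantities involved are standard: the Rényi entanglement entropies of the reduced state $\rho_A$ of a region $A$, and an area law that bounds them by the size of the boundary of $A$,

$$
S_\alpha(\rho_A)=\frac{1}{1-\alpha}\log\operatorname{Tr}\rho_A^{\alpha},
\qquad
S_\alpha(\rho_A)\le c\,|\partial A|\ \ \text{for all regions }A,
$$

with a constant $c$ independent of the system size; the limit $\alpha\to 1$ recovers the von Neumann entropy.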
The Pierre Auger Observatory presents the most comprehensive measurement of the ultra-high-energy cosmic ray (UHECR) energy spectrum, combining different detection methods to cover declinations from −90° to +44.8°. This study confirms the "instep" feature at 10 EeV with 5.5σ significance and shows that the UHECR energy spectrum is consistent across different sky regions.
As is well known in the context of topological insulators and superconductors, short-range-correlated fermionic pure Gaussian states with fundamental symmetries are systematically classified by the periodic table. We revisit this topic from a quantum-information-inspired operational perspective without referring to any Hamiltonians, and apply the formalism to bosonic Gaussian states as well as (both fermionic and bosonic) locality-preserving unitary Gaussian operations. We find that while bosonic Gaussian states are all trivial, there exist nontrivial bosonic Gaussian operations that cannot be continuously deformed into the identity under the locality and symmetry constraint. Moreover, we unveil unexpectedly complicated relations between fermionic Gaussian states and operations, pointing out in particular that some of the former can be disentangled by the latter under the same symmetry constraint, while some cannot. In turn, we find that some topological operations are genuinely dynamical, in the sense that they cannot create any topological states from a trivial one, yet they are not connected to the identity. The notions of disentanglability and genuinely dynamical topology can be unified in a general picture of unitary-to-state homomorphism, and apply equally to interacting topological phases and quantum cellular automata.
Cosmology requires new physics beyond the Standard Model of elementary particles and fields. What is the fundamental physics behind dark matter and dark energy? What generated the initial fluctuations in the early Universe? Polarised light of the cosmic microwave background (CMB) may hold the key to answers. In this article, we discuss two new developments in this research area. First, if the physics behind dark matter and dark energy violates parity symmetry, their coupling to photons rotates the plane of linear polarisation as the CMB photons travel more than 13 billion years. This effect is known as `cosmic birefringence': space filled with dark matter and dark energy behaves as if it were a birefringent material, like a crystal. A tantalising hint of such a signal has been found with a statistical significance of $3\sigma$. Next, the period of accelerated expansion in the very early Universe, called `cosmic inflation', produced a stochastic background of primordial gravitational waves (GW). What generated the GW? The leading idea is vacuum fluctuations in spacetime, but matter fields could also produce a significant amplitude of primordial GW. Finding their origin using CMB polarisation opens a new window into the physics behind inflation. These new scientific targets may influence how data from future CMB experiments are collected, calibrated, and analysed.
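The standard relations behind the birefringence search (sign conventions for the rotation direction vary in the literature) are that a frequency-independent rotation of the polarisation plane by an angle $\beta$ multiplies the spin-$\pm2$ fields by a phase and mixes $E$ and $B$ modes:

$$
(Q\pm iU)^{\rm obs}(\hat n)=e^{\mp 2i\beta}\,(Q\pm iU)(\hat n),
\qquad
C_\ell^{EB,\rm obs}=\tfrac12\sin(4\beta)\left(C_\ell^{EE}-C_\ell^{BB}\right),
$$

so a nonzero $EB$ cross-spectrum, which vanishes in the standard parity-conserving expectation, is the observable underlying the $3\sigma$ hint.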
Deep Reinforcement Learning (DRL) is employed to develop autonomously optimized and custom-designed heat-treatment processes that are both microstructure-sensitive and energy-efficient. Unlike conventional supervised machine learning, DRL does not rely on static neural network training from data alone; instead, a learning agent autonomously develops optimal solutions, based on reward and penalty elements, with reduced or no supervision. In our approach, a temperature-dependent Allen-Cahn model for phase transformation is used as the environment for the DRL agent, serving as the model world in which it gains experience and takes autonomous decisions. The DRL agent controls the temperature of the system, which acts as a model furnace for the heat treatment of alloys. Microstructure goals are defined for the agent based on the desired microstructure of the phases. After training, the agent can generate temperature-time profiles for a variety of initial microstructure states to reach the final desired microstructure state. The agent's performance and the physical meaning of the generated heat-treatment profiles are investigated in detail. In particular, the agent is capable of controlling the temperature to reach the desired microstructure starting from a variety of initial conditions. This ability to handle a variety of conditions paves the way for using such an approach for recycling-oriented heat-treatment process design, where the initial composition can vary from batch to batch due to impurity intrusion, as well as for the design of energy-efficient heat treatments. To test this hypothesis, an agent without a penalty on the total consumed energy is compared with one that considers energy costs; the energy-cost penalty is imposed as an additional criterion on the agent for finding the optimal temperature-time profile.
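A minimal sketch of how such an environment might be wired up is given below, assuming an explicit-Euler Allen-Cahn update on a periodic grid and a reward that trades off distance to a target phase fraction against an energy-use penalty. The coupling of temperature to the free energy, the reward weights, and all numerical values are illustrative assumptions, not the paper's model.

```python
import numpy as np

class AllenCahnHeatTreatEnv:
    """Illustrative RL environment in the spirit of the abstract: the agent
    sets the furnace temperature, an Allen-Cahn phase-field model evolves the
    microstructure, and the reward balances closeness to a target phase
    fraction against the energy spent. All numbers are assumptions."""

    def __init__(self, n=64, dt=0.05, target_fraction=0.6, energy_weight=0.01):
        self.n, self.dt = n, dt
        self.target = target_fraction
        self.energy_weight = energy_weight
        self.reset()

    def reset(self, seed=None):
        rng = np.random.default_rng(seed)
        # Random initial microstructure; order parameter phi in [-1, 1]
        self.phi = 0.1 * rng.standard_normal((self.n, self.n))
        return self.phi.copy()

    def _laplacian(self, f):
        # Periodic five-point Laplacian (grid spacing = 1)
        return (np.roll(f, 1, 0) + np.roll(f, -1, 0) +
                np.roll(f, 1, 1) + np.roll(f, -1, 1) - 4.0 * f)

    def step(self, temperature):
        mobility, kappa = 1.0, 0.5
        # Hypothetical linear coupling of temperature to the double-well tilt,
        # standing in for the paper's temperature-dependent free energy.
        tilt = 0.5 * (temperature - 1.0)
        dfdphi = self.phi ** 3 - self.phi - tilt   # tilted double-well derivative
        self.phi += self.dt * mobility * (kappa * self._laplacian(self.phi) - dfdphi)
        self.phi = np.clip(self.phi, -1.0, 1.0)

        phase_fraction = float((self.phi > 0).mean())
        # Reward: approach the target microstructure, penalise energy use
        reward = (-abs(phase_fraction - self.target)
                  - self.energy_weight * max(float(temperature), 0.0))
        done = abs(phase_fraction - self.target) < 0.01
        return self.phi.copy(), reward, done

# Usage sketch: a trained DRL policy would replace this random temperature schedule.
env = AllenCahnHeatTreatEnv()
state = env.reset(seed=0)
for _ in range(200):
    action = np.random.uniform(0.5, 1.5)  # placeholder for the agent's action
    state, reward, done = env.step(action)
    if done:
        break
```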
We present a comparison of 14 galaxy formation models: 12 different semi-analytical models and 2 halo-occupation distribution models for galaxy formation, all based upon the same cosmological simulation and the merger tree information derived from it. The participating codes have proven to be very successful in their own right, but they have all been calibrated independently using various observational data sets, stellar models, and merger trees. In this paper we apply them without recalibration, and this leads to a wide variety of predictions for the stellar mass function, specific star formation rates, stellar-to-halo mass ratios, and the abundance of orphan galaxies. The scatter is much larger than seen in previous comparison studies, primarily because the codes have been used outside of their native environment within which they are well tested and calibrated. The purpose of the `nIFTy comparison of galaxy formation models' is to bring together as many different galaxy formation modellers as possible and to investigate a common approach to model calibration. This paper provides a unified description of all participating models and presents the initial, uncalibrated comparison as a baseline for our future studies, where we will develop a common calibration framework and address the extent to which that reduces the scatter in the model predictions seen here.
The last decade has seen unprecedented effort in dark matter model building at all mass scales, coupled with the design of numerous new detection strategies. Transformative advances in quantum technologies have led to a plethora of new high-precision quantum sensors and dark matter detection strategies for ultralight ($<10$ eV) bosonic dark matter that can be described by an oscillating classical, largely coherent field. This white paper focuses on searches for wavelike scalar and vector dark matter candidates.
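The "oscillating classical field" description rests on standard estimates (natural units, $\hbar=c=1$): a boson of mass $m_\phi$ making up the local dark-matter density $\rho_{\rm DM}$ behaves as

$$
\phi(t)\simeq\frac{\sqrt{2\rho_{\rm DM}}}{m_\phi}\,\cos\!\left(m_\phi t+\varphi\right),
\qquad
\tau_{\rm coh}\sim\frac{2\pi}{m_\phi v^{2}}\sim 10^{6}\times\frac{2\pi}{m_\phi}\ \ (v\sim10^{-3}),
$$

i.e. the field oscillates at its Compton frequency and remains coherent for roughly a million cycles, which is what makes narrow-band, resonant search strategies with quantum sensors possible.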
We introduce a numerical algorithm to simulate the time evolution of a matrix product state under a long-ranged Hamiltonian. In the effectively one-dimensional representation of a system by matrix product states, long-ranged interactions are necessary to simulate not just many physical interactions but also higher-dimensional problems with short-ranged interactions. Since our method overcomes the restriction to short-ranged Hamiltonians of most existing methods, it proves particularly useful for studying the dynamics of both power-law interacting one-dimensional systems, such as Coulombic and dipolar systems, and quasi two-dimensional systems, such as strips or cylinders. First, we benchmark the method by verifying a long-standing theoretical prediction for the dynamical correlation functions of the Haldane-Shastry model. Second, we simulate the time evolution of an expanding cloud of particles in the two-dimensional Bose-Hubbard model, a subject of several recent experiments.
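The key technical ingredient for long-ranged Hamiltonians in the matrix product state framework (a standard construction, not necessarily the exact one used in the paper) is that exponentially decaying couplings admit an exact matrix product operator (MPO) of small bond dimension; for example,

$$
\hat W^{[i]}=\begin{pmatrix}\hat 1 & \lambda\hat X & 0\\ 0 & \lambda\hat 1 & J\hat X\\ 0&0&\hat 1\end{pmatrix}
\qquad\Longrightarrow\qquad
\hat H=J\sum_{i<j}\lambda^{\,j-i}\,\hat X_i\hat X_j ,
$$

and power-law tails such as $1/r^{\alpha}$ (Coulombic or dipolar) can be approximated by a short sum of exponentials, $r^{-\alpha}\approx\sum_k c_k\,e^{-r/\xi_k}$, keeping the MPO bond dimension small.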
While the simplest quantum Hall plateaus, such as the $\nu = 1/3$ state in GaAs, can be conveniently analyzed by assuming only a single active Landau level participates, for many phases the spin, valley, bilayer, subband, or higher Landau level indices play an important role. These `multi-component' problems are difficult to study using exact diagonalization because each component increases the difficulty exponentially. An important example is the plateau at $\nu = 5/2$, where scattering into higher Landau levels chooses between the competing non-Abelian Pfaffian and anti-Pfaffian states. We address the methodological issues required to apply the infinite density matrix renormalization group to quantum Hall systems with multiple components and long-range Coulomb interactions, greatly extending accessible system sizes. As an initial application we study the problem of Landau level mixing in the $\nu = 5/2$ state. Within the approach to Landau level mixing used here, we find that at the Coulomb point the anti-Pfaffian is preferred over the Pfaffian state over a range of Landau level mixing up to the experimentally relevant values.
We show that several quantum circuit families can be simulated efficiently classically if it is promised that their output distribution is approximately sparse, i.e., the distribution is close to one where only a polynomially small, a priori unknown subset of the measurement probabilities are nonzero. Classical simulations are thereby obtained for quantum circuits which, without the additional sparsity promise, are considered hard to simulate. Our results apply in particular to a family of Fourier sampling circuits (which have structural similarities to Shor's factoring algorithm) but also to several other circuit families, such as IQP circuits. Our results provide examples of quantum circuits that cannot achieve exponential speed-ups due to the presence of too much destructive interference, i.e., too many cancellations of amplitudes. The crux of our classical simulation is an efficient algorithm for approximating the significant Fourier coefficients of a class of states called computationally tractable states. The latter result may have applications beyond the scope of this work. In the proof we employ and extend sparse approximation techniques, in particular the Kushilevitz-Mansour algorithm, in combination with probabilistic simulation methods for quantum circuits.
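The sparsity promise can be stated precisely (this is a formalisation of the abstract's wording, not necessarily the paper's exact definition): the output distribution $p$ over $n$-bit measurement outcomes is promised to be $\epsilon$-close in total variation distance to some $s$-sparse distribution,

$$
\exists\,q:\quad |\mathrm{supp}(q)|\le s=\mathrm{poly}(n),
\qquad
\tfrac12\sum_{x\in\{0,1\}^n}\bigl|p(x)-q(x)\bigr|\le\epsilon ,
$$

with the support of $q$ not known in advance; the simulation task is then to identify the few significant outcomes and sample from them.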
Fast radio bursts (FRBs) are a new class of transient radio phenomena currently detected as millisecond radio pulses with very high dispersion measures. As new radio surveys begin searching for FRBs, a large population is expected to be detected in real time, triggering a range of multi-wavelength and multi-messenger telescopes to search for repeating bursts and/or associated emission. Here we propose a method for disseminating FRB triggers using Virtual Observatory Events (VOEvents). This format was developed and is used successfully for transient alerts across the electromagnetic spectrum and for multi-messenger signals such as gravitational waves. In this paper we outline a proposed VOEvent standard for FRBs that includes the essential parameters of the event and specifies where these parameters should be placed within the structure of the event. An additional advantage of using VOEvents for FRBs is that the events can automatically be ingested into the FRB Catalogue (FRBCAT), enabling real-time updates for public use. We welcome feedback from the community on the proposed standard outlined below and encourage those interested to join the nascent working group forming around this topic.
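To make the idea concrete, the snippet below assembles a heavily simplified VOEvent-like XML packet carrying the kind of parameters an FRB trigger needs (arrival time, position, dispersion measure, pulse width, signal-to-noise). The element names, grouping, and the omitted blocks (Who, WhereWhen, How, Why, namespaces, schema validation) are illustrative placeholders; the actual placement of parameters is what the proposed standard specifies.

```python
import xml.etree.ElementTree as ET

def make_frb_voevent(ivorn, utc, ra_deg, dec_deg, dm, width_ms, snr):
    """Build a simplified VOEvent-like XML packet for an FRB detection.

    Illustrative sketch only: parameter names and structure are hypothetical,
    not the schema-validated VOEvent 2.0 packet defined by the standard.
    """
    root = ET.Element("VOEvent", attrib={
        "ivorn": ivorn, "role": "observation", "version": "2.0"})

    what = ET.SubElement(root, "What")
    group = ET.SubElement(what, "Group", attrib={"name": "event parameters"})
    for name, value, unit in [
        ("event_utc", utc, "UTC"),       # topocentric arrival time
        ("ra", ra_deg, "deg"),           # right ascension
        ("dec", dec_deg, "deg"),         # declination
        ("dm", dm, "pc/cm^3"),           # dispersion measure
        ("width", width_ms, "ms"),       # pulse width
        ("snr", snr, ""),                # signal-to-noise ratio
    ]:
        ET.SubElement(group, "Param",
                      attrib={"name": name, "value": str(value), "unit": unit})

    return ET.tostring(root, encoding="unicode")

print(make_frb_voevent("ivo://example.org/frb#candidate-001",
                       "2017-01-01T00:00:00", 83.6, 22.0, 557.0, 3.0, 12.5))
```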
We use the IllustrisTNG simulations to show how the fractions of quenched galaxies vary across different environments and cosmic time, and to quantify the roles that AGN feedback and preprocessing play in quenching group and cluster satellites. At $z=0$, we select galaxies with $M_* = 10^{9-12}\,M_{\odot}$ residing within ($\leq R_{200c}$) groups and clusters of total host mass $M_{200c}=10^{13-15.2}\,M_{\odot}$. TNG predicts a quenched fraction of $\sim 70-90\%$ (on average) for centrals and satellites $\gtrsim 10^{10.5}\,M_{\odot}$, regardless of host mass, cosmic time ($0\leq z\leq 0.5$), clustercentric distance, and time since infall into the $z=0$ host. Low-mass centrals ($\lesssim 10^{10}\,M_{\odot}$), instead, are rarely quenched unless they become members of groups ($10^{13-14}\,M_{\odot}$) or clusters ($\geq 10^{14}\,M_{\odot}$), where the quenched fraction rises to $\sim 80\%$. The fraction of low-mass passive galaxies is higher closer to the host center and for more massive hosts. The population of low-mass satellites accreted $\gtrsim 4-6$ Gyr ago in massive hosts is almost entirely passive, suggesting an upper limit for the time needed for environmental quenching to occur. In fact, $\sim 30\%$ of group and cluster satellites that are quenched at $z=0$ were already quenched before falling into their current host, and the bulk of them quenched as early as 4 to 10 billion years ago. For low-mass galaxies ($\lesssim 10^{10-10.5}\,M_{\odot}$), this is due to preprocessing, whereby current satellites may have been members of other hosts, and hence have undergone environmental processes, before falling into their final host; this mechanism is more common and more effective at quenching for satellites found today in more massive hosts. On the other hand, massive galaxies quench on their own, due to AGN feedback, regardless of whether they are centrals or satellites.
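As a schematic of the bookkeeping behind such quenched-fraction statements, the snippet below computes quenched fractions in stellar-mass bins from a toy catalogue using a fixed specific-star-formation-rate cut. The threshold value and the toy data are assumptions; the TNG analysis may define quenching differently (e.g. relative to the star-forming main sequence).

```python
import numpy as np

def quenched_fraction(stellar_mass, sfr, mass_bins, ssfr_cut=1e-11):
    """Quenched fraction per stellar-mass bin from a galaxy catalogue.

    Illustrative only: the fixed sSFR threshold (1e-11 / yr) is a common
    convention, not necessarily the definition used in the abstract's analysis.
    """
    ssfr = sfr / stellar_mass                      # specific SFR [1/yr]
    quenched = ssfr < ssfr_cut
    fractions = []
    for lo, hi in zip(mass_bins[:-1], mass_bins[1:]):
        in_bin = (stellar_mass >= lo) & (stellar_mass < hi)
        fractions.append(quenched[in_bin].mean() if in_bin.any() else np.nan)
    return np.array(fractions)

# Usage with a toy catalogue (masses in Msun, SFR in Msun/yr)
rng = np.random.default_rng(1)
mstar = 10 ** rng.uniform(9, 12, 10_000)
sfr = 10 ** (np.log10(mstar) - 10 + rng.normal(0, 0.8, mstar.size))  # toy scatter
bins = 10 ** np.arange(9.0, 12.5, 0.5)
print(quenched_fraction(mstar, sfr, bins))
```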
Strong gravitational lenses with measured time delays between the multiple images allow a direct measurement of the time-delay distance to the lens, and thus a measure of cosmological parameters, particularly the Hubble constant, $H_{0}$. We present a blind lens model analysis of the quadruply-imaged quasar lens HE 0435-1223 using deep Hubble Space Telescope imaging, updated time-delay measurements from the COSmological MOnitoring of GRAvItational Lenses (COSMOGRAIL), a measurement of the velocity dispersion of the lens galaxy based on Keck data, and a characterization of the mass distribution along the line of sight. HE 0435-1223 is the third lens analyzed as a part of the $H_{0}$ Lenses in COSMOGRAIL's Wellspring (H0LiCOW) project. We account for various sources of systematic uncertainty, including the detailed treatment of nearby perturbers, the parameterization of the galaxy light and mass profile, and the regions used for lens modeling. We constrain the effective time-delay distance to be $D_{\Delta t} = 2612_{-191}^{+208}~\mathrm{Mpc}$, a precision of 7.6%. From HE 0435-1223 alone, we infer a Hubble constant of $H_{0} = 73.1_{-6.0}^{+5.7}~\mathrm{km~s^{-1}~Mpc^{-1}}$ assuming a flat $\Lambda$CDM cosmology. The cosmographic inference based on the three lenses analyzed by H0LiCOW to date is presented in a companion paper (H0LiCOW Paper V).
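The underlying cosmographic relations are standard: the measured delay between images $i$ and $j$ is proportional to the Fermat-potential difference $\Delta\phi_{ij}$ through the time-delay distance, which scales inversely with $H_0$,

$$
\Delta t_{ij}=\frac{D_{\Delta t}}{c}\,\Delta\phi_{ij},
\qquad
D_{\Delta t}\equiv(1+z_{\rm d})\,\frac{D_{\rm d}D_{\rm s}}{D_{\rm ds}}\ \propto\ \frac{1}{H_0},
$$

where $z_{\rm d}$ is the lens redshift and $D_{\rm d}$, $D_{\rm s}$, $D_{\rm ds}$ are angular diameter distances to the lens, to the source, and between them; this is why the 7.6% constraint on $D_{\Delta t}$ translates almost directly into a constraint on $H_0$.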