Predictions from statistical physics postulate that recovery of the communities in the Stochastic Block Model (SBM) is possible in polynomial time above, and only above, the Kesten-Stigum (KS) threshold. This conjecture has given rise to a rich literature, proving that non-trivial community recovery is indeed possible in the SBM above the KS threshold, as long as the number K of communities remains smaller than n, the number of nodes in the observed graph. Failure of low-degree polynomials below the KS threshold was also proven when K=o(n).
When K≥n, Chin et al. (2025) recently proved that, in a sparse regime, community recovery in polynomial time is possible below the KS threshold by counting non-backtracking paths. This breakthrough result led them to postulate a new threshold for the many-communities regime K≥n. In this work, we provide evidence confirming their conjecture for K≥n:
1- We prove that, for any density of the graph, low-degree polynomials fail to recover communities below the threshold postulated by Chin et al. (2025);
2- We prove that community recovery is possible in polynomial time above the postulated threshold, not only in the sparse regime of Chin et al., but also in some (but not all) moderately sparse regimes, essentially by counting occurrences of cliques or self-avoiding paths of suitable size in the observed graph.
In addition, we propose a detailed conjecture regarding the structure of the motifs that are optimal in sparsity regimes not covered by clique or self-avoiding-path counting. In particular, counting self-avoiding paths of length log(n)--which is closely related to spectral algorithms based on the Non-Backtracking operator--is optimal only in the sparse regime. Other motif counts--unrelated to spectral properties--should be considered in denser regimes.
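As a toy illustration of the motif counts mentioned above, the following minimal Python sketch counts self-avoiding paths of a given length by brute-force depth-first search. The graph model, the path length, and the brute-force enumeration are illustrative assumptions, not the paper's estimator, which would rely on suitably centered or weighted counts.

```python
# Minimal sketch: count self-avoiding paths with a given number of edges
# by depth-first search. Illustrative only; not the paper's statistic.
import networkx as nx

def count_self_avoiding_paths(G, length):
    """Count paths with `length` edges visiting no vertex twice.
    Each undirected path is counted once per orientation."""
    def extend(path):
        if len(path) == length + 1:
            return 1
        total = 0
        for nbr in G.neighbors(path[-1]):
            if nbr not in path:  # self-avoidance constraint
                total += extend(path + [nbr])
        return total

    return sum(extend([v]) for v in G.nodes)

# Example: an Erdos-Renyi graph standing in for an SBM sample.
G = nx.erdos_renyi_graph(n=50, p=0.1, seed=0)
print(count_self_avoiding_paths(G, length=3))
```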
Mapping the local and distant Universe is key to our understanding of it. For decades, the Sloan Digital Sky Survey (SDSS) has made a concerted effort to map millions of celestial objects to constrain the physical processes that govern our Universe. The most recent and fifth generation of SDSS (SDSS-V) is organized into three scientific "mappers": Milky Way Mapper (MWM), which aims to chart the various components of the Milky Way and constrain its formation and assembly; Black Hole Mapper (BHM), which focuses on understanding supermassive black holes in distant galaxies across the Universe; and Local Volume Mapper (LVM), which uses integral field spectroscopy to map the ionized interstellar medium in the Local Group. This paper describes and outlines the scope and content of the nineteenth data release (DR19) of SDSS, the most substantial data release of SDSS-V to date. DR19 is the first to contain data from all three mappers. We also describe nine value-added catalogs (VACs) that enhance the science that can be conducted with the SDSS-V data. Finally, we discuss how to access SDSS DR19 and provide illustrative examples and tutorials.
Diffusion generative models unlock new possibilities for inverse problems, as they allow for the incorporation of strong empirical priors in scientific inference. Recently, diffusion models have been repurposed for solving inverse problems using Gaussian approximations to the conditional densities of the reverse process, with Tweedie's formula used to parameterise the mean, complemented by various heuristics. To address the challenges arising from these approximations, we leverage higher-order information using Tweedie's formula and obtain a statistically principled approximation. We further provide a theoretical guarantee specifically for posterior sampling, which can lead to a better theoretical understanding of diffusion-based conditional sampling. Finally, we illustrate the empirical effectiveness of our approach for general linear inverse problems on toy synthetic examples as well as image restoration. We show that our method (i) removes any time-dependent step-size hyperparameters required by earlier methods, (ii) brings stability and better sample quality across multiple noise levels, and (iii) is the only method that works stably with variance-exploding (VE) forward processes, in contrast to earlier works.
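For reference, and assuming a variance-exploding forward process x_t = x_0 + σ_t ε with ε ~ N(0, I) (one of the settings the abstract mentions), the first-order Tweedie identity and its second-order extension read:

```latex
% First- and second-order Tweedie identities, with p_t the marginal of x_t:
\mathbb{E}[x_0 \mid x_t] = x_t + \sigma_t^2 \,\nabla_{x_t} \log p_t(x_t),
\qquad
\operatorname{Cov}[x_0 \mid x_t]
  = \sigma_t^2 \Big( I + \sigma_t^2 \,\nabla_{x_t}^2 \log p_t(x_t) \Big).
```

The covariance identity is the kind of higher-order information a Gaussian approximation to the conditional density can exploit beyond the mean; how exactly the paper deploys it is not specified in the abstract.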
Pulsar Timing Array experiments probe the presence of possible scalar or pseudoscalar ultralight dark matter particles through decade-long timing of an ensemble of Galactic millisecond radio pulsars. With the second data release of the European Pulsar Timing Array, we focus on the most robust scenario, in which dark matter interacts only gravitationally with ordinary baryonic matter. Our results show that ultralight particles with masses 10^-24.0 eV ≲ m ≲ 10^-23.3 eV cannot constitute 100% of the measured local dark matter density, but can contribute at most a local density of ρ ≲ 0.3 GeV/cm^3.
Compressible turbulence governs energy transfer across scales in space and astrophysical systems. Capturing both the turbulence cascade and its damping is therefore crucial for models of energy conversion, plasma heating, and particle transport in diverse plasma environments, but remains challenging. Progress is constrained by two unresolved fundamental questions: the persistence of the turbulence cascade in the presence of shocks and discontinuities, and the validity of classical wave theories under strong nonlinearity. In particular, it remains unclear whether meaningful cascade dynamics can be defined in compressible turbulence with phase steepening, and whether frameworks developed for monochromatic waves remain applicable to complex, broadband fluctuations. Using large-scale, high-resolution kinetic simulations, we analyze turbulence-particle interactions that are beyond the capability of standard magnetohydrodynamic (MHD) simulations. We show that compressible turbulence damps at MHD scales in quantitative agreement with transit-time damping theory, even in fully developed nonlinear states. Moreover, the cascade persists despite the generation of shocks and discontinuities by phase steepening, revealing a surprising robustness of cross-scale energy transfer under extreme conditions. We further provide a spectral expression for compressible turbulence. These results close a long-standing gap in the physics of compressible turbulence and establish a robust foundation for turbulence modeling from the heliosphere to galaxies.
High-dimensional partial differential equations (PDEs) are ubiquitous in economics, science, and engineering. However, their numerical treatment poses formidable challenges, since traditional grid-based methods tend to be frustrated by the curse of dimensionality. In this paper, we argue that tensor trains provide an appealing approximation framework for parabolic PDEs: combining reformulations in terms of backward stochastic differential equations with regression-type methods in the tensor format holds the promise of leveraging latent low-rank structures, enabling both compression and efficient computation. Following this paradigm, we develop novel iterative schemes, with updates that are either explicit and fast or implicit and accurate. We demonstrate in a number of examples that our methods achieve a favorable trade-off between accuracy and computational efficiency compared with state-of-the-art neural-network-based approaches.
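For context, the backward-stochastic-differential-equation reformulation the abstract alludes to is the standard nonlinear Feynman-Kac correspondence; the statement below is the generic textbook form, not the paper's specific scheme.

```latex
% Nonlinear Feynman-Kac: for the semilinear parabolic PDE
\partial_t u + \tfrac{1}{2}\operatorname{Tr}\!\big(\sigma\sigma^{\top}\nabla^2 u\big)
  + b\cdot\nabla u + f\big(t,x,u,\sigma^{\top}\nabla u\big) = 0,
\qquad u(T,\cdot) = g,
% the processes Y_t = u(t, X_t) and Z_t = \sigma^{\top}\nabla u(t, X_t),
% with dX_t = b\,dt + \sigma\,dW_t, solve the BSDE
dY_t = -f(t, X_t, Y_t, Z_t)\,dt + Z_t^{\top}\,dW_t, \qquad Y_T = g(X_T).
```

Regression-type methods approximate u (and hence Y, Z) backwards in time from the terminal condition; representing the regressors in tensor-train format is what exposes the low-rank structure.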
We present three-dimensional hydrodynamical simulations of mergers between low-mass hybrid HeCO white dwarfs (WDs), offering new insights into the diversity of thermonuclear transients. Unlike previously studied mergers involving higher-mass HeCO WDs and CO WDs, where the helium detonation often triggers core ignition, our simulations reveal incomplete helium-shell detonations in comparable-mass, lower-mass WD pairs. The result is a faint, rapidly evolving transient driven by the ejection of intermediate-mass elements and radioactive isotopes such as ^48Cr and ^52Fe, without significant ^56Ni production. These transients may be detectable in upcoming wide-field surveys and could account for a subset of faint thermonuclear supernovae. Long-term evolution of the merger remnant shows that high-velocity PG 1159-type stars might be formed through this scenario, similar to normal CO-CO white dwarf mergers. This work expands our understanding of white dwarf mergers and their implications for nucleosynthesis and stellar evolution.
In this paper, the problem of finding optimal success probabilities of static linear-optics quantum gates is linked to the theory of convex optimization. It is shown that, by exploiting this link, upper bounds for the success probability of networks realizing single-mode gates can be derived which hold in full generality for linear optical networks followed by postselection, i.e., for networks of arbitrary size, any number of auxiliary modes, and arbitrary photon numbers. As a corollary, the previously formulated conjecture is proven that the optimal success probability of a postselected nonlinear sign shift without feed-forward is 1/4; this gate plays a central role in the Knill-Laflamme-Milburn scheme for quantum computation with linear optics. The concept of Lagrange duality is shown to be applicable for providing rigorous proofs of such bounds for elementary gates, even though the original problem is a difficult non-convex problem in infinitely many objective variables. The versatility of this approach for identifying other optimal linear optical schemes is demonstrated.
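For context, the nonlinear sign shift (NS) gate referenced above is the standard single-mode map that flips the sign of the two-photon amplitude:

```latex
\mathrm{NS}:\;
\alpha\,|0\rangle + \beta\,|1\rangle + \gamma\,|2\rangle
\;\longmapsto\;
\alpha\,|0\rangle + \beta\,|1\rangle - \gamma\,|2\rangle .
```

The corollary above states that no postselected linear-optics network without feed-forward implements this map with success probability exceeding 1/4.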
Milky Way Cepheid variables with accurate Hubble Space Telescope photometry have been established as standards for the primary calibration of the cosmic distance ladder, enabling a percent-level determination of the Hubble constant (H0). These 75 Cepheid standards are the fundamental sample for investigating possible residual systematics in the local H0 determination due to metallicity effects on their period-luminosity relations. We obtained new high-resolution (R ∼ 81,000), high signal-to-noise (S/N ∼ 50-150) multi-epoch spectra of 42 of the 75 Cepheid standards using the ESPaDOnS instrument at the 3.6-m Canada-France-Hawaii Telescope. Our spectroscopic metallicity measurements are in good agreement with literature values, with systematic differences of up to 0.1 dex due to different metallicity scales. We homogenized and updated the spectroscopic metallicities of all 75 Milky Way Cepheid standards and derived their multiwavelength (GVIJHKs) period-luminosity-metallicity and period-Wesenheit-metallicity relations using the latest Gaia parallaxes. The metallicity coefficients of these empirically calibrated relations exhibit large uncertainties due to low statistics and a narrow metallicity range (Δ[Fe/H] = 0.6 dex). These coefficients are up to three times better constrained if we include Cepheids in the Large Magellanic Cloud, and range between −0.21±0.07 and −0.43±0.06 mag/dex. The updated spectroscopic metallicities of these Milky Way Cepheid standards were used in the Cepheid-supernovae distance ladder formalism to determine H0 = 72.9 ± 1.0 km s^-1 Mpc^-1, suggesting little variation (∼0.1 km s^-1 Mpc^-1) in the local H0 measurements due to different Cepheid metallicity scales.
We present new calculations of the mass inflow and outflow rates around the Milky Way, derived from a catalog of ultraviolet metal-line high-velocity clouds (HVCs). These calculations are conducted by transforming the HVC velocities into the Galactic Standard of Rest (GSR) reference frame, identifying inflowing (v_GSR < 0 km/s) and outflowing (v_GSR > 0 km/s) populations, and using observational constraints on the distance, metallicity, dust content, covering fractions, and total hydrogen column density of each population. After removing HVCs associated with the Magellanic Stream and the Fermi Bubbles, we find inflow and outflow rates in cool (T~10^4 K) ionized gas of dM_in/dt >~ 0.53+/-0.17 (d/12 kpc) (Z/0.2 Z_sun)^-1 M_sun/yr and dM_out/dt >~ 0.16+/-0.06 (d/12 kpc) (Z/0.5 Z_sun)^-1 M_sun/yr. The excess of inflowing over outflowing gas suggests that the Milky Way is currently in an inflow-dominated phase, but the presence of substantial mass flux in both directions supports a Galactic fountain model, in which gas is constantly recycled between the disk and the halo. We also find that the metal fluxes in the two directions (in and out) are indistinguishable. By comparing the outflow rate to the Galactic star formation rate, we present the first estimate of the mass loading factor (eta_HVC) of the disk-wide Milky Way wind, finding eta_HVC >~ 0.10+/-0.06 (d/12 kpc) (Z/0.5 Z_sun)^-1. Including the contributions from low- and intermediate-velocity clouds and from hot gas would increase these inflow and outflow estimates.
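The scalings quoted above (rate proportional to the assumed distance d and inversely proportional to the assumed metallicity Z, through the inferred hydrogen column) can be made concrete with a back-of-the-envelope flux estimate. The formula dM/dt ≈ 4π μ m_H N_H f_c d v and every input number below are illustrative assumptions on our part, not the paper's cloud-by-cloud calculation.

```python
# Back-of-the-envelope HVC mass-flow rate; illustrative assumptions only.
import numpy as np

M_SUN_G = 1.989e33       # solar mass in grams
KPC_CM = 3.086e21        # kiloparsec in centimetres
KM_CM = 1.0e5            # kilometre in centimetres
YR_S = 3.156e7           # year in seconds
MU_MH = 1.4 * 1.673e-24  # mean mass per H atom in grams (~1.4 m_H with He)

def hvc_mass_flow_rate(N_H_cm2, f_c, d_kpc, v_kms):
    """Order-of-magnitude rate dM/dt ~ 4*pi*mu*m_H*N_H*f_c*d*v in M_sun/yr."""
    rate_g_per_s = (4.0 * np.pi * MU_MH * N_H_cm2 * f_c
                    * d_kpc * KPC_CM * v_kms * KM_CM)
    return rate_g_per_s / M_SUN_G * YR_S

# Hypothetical inputs, not the paper's measured values:
print(hvc_mass_flow_rate(N_H_cm2=1e19, f_c=0.3, d_kpc=12.0, v_kms=100.0))
```

With these made-up inputs the estimate lands near the ~0.5 M_sun/yr scale of the quoted inflow rate, and the linear dependence on d is explicit in the formula.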
Gamma-ray line signatures can be expected in the very-high-energy (VHE; E_γ > 100 GeV) domain due to self-annihilation or decay of dark matter (DM) particles in space. Such a signal would be readily distinguishable from astrophysical γ-ray sources, which in most cases produce continuous spectra spanning several orders of magnitude in energy. Using data collected with the H.E.S.S. γ-ray instrument, upper limits on line-like emission are obtained in the energy range between ~500 GeV and ~25 TeV for the central part of the Milky Way halo and for extragalactic observations, complementing recent limits obtained with the Fermi-LAT instrument at lower energies. No statistically significant signal was found. For monochromatic γ-ray line emission, flux limits of (2x10^-7 - 2x10^-5) m^-2 s^-1 sr^-1 and (1x10^-8 - 2x10^-6) m^-2 s^-1 sr^-1 are obtained for the central part of the Milky Way halo and for extragalactic observations, respectively. For a DM particle mass of 1 TeV, limits on the velocity-averaged DM annihilation cross section <σv>(χχ -> γγ) reach ~10^-27 cm^3 s^-1, based on the Einasto parametrization of the Galactic DM halo density profile.
We propose an improved estimator for the multi-task averaging problem, whose goal is the joint estimation of the means of multiple distributions using separate, independent data sets. The naive approach is to take the empirical mean of each data set individually; the proposed method instead exploits similarities between tasks, without any related information being known in advance. First, for each data set, similar or neighboring means are identified from the data by multiple testing. Then, each naive estimator is shrunk towards the local average of its neighbors. We prove theoretically that this approach provides a reduction in mean squared error. The improvement can be significant when the dimension of the input space is large, demonstrating a "blessing of dimensionality" phenomenon. An application of this approach is the estimation of multiple kernel mean embeddings, which play an important role in many modern applications. The theoretical results are verified on artificial and real-world data.
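A minimal sketch of the two-step idea described above: select neighbors by testing, then shrink each naive mean toward the local neighbor average. The Welch t-test and the equal-weight shrinkage below are illustrative placeholders for the paper's multiple-testing procedure and weights, and the sketch works in one dimension for simplicity.

```python
# Sketch of test-then-shrink multi-task averaging; illustrative only.
import numpy as np
from scipy import stats

def shrink_means(samples, alpha=0.05):
    """samples: list of 1-D arrays, one per task; returns shrunken means."""
    naive = np.array([x.mean() for x in samples])
    improved = np.empty_like(naive)
    for i, x in enumerate(samples):
        # Step 1: neighbors = tasks whose means are not significantly different.
        neighbors = [j for j, y in enumerate(samples)
                     if stats.ttest_ind(x, y, equal_var=False).pvalue > alpha]
        # Step 2: shrink the naive estimate toward the local neighbor average.
        local_avg = naive[neighbors].mean()  # always includes task i itself
        lam = 1.0 / len(neighbors)           # illustrative shrinkage weight
        improved[i] = lam * naive[i] + (1.0 - lam) * local_avg
    return improved
```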
Gravitational lensing is a powerful tool for detecting compact matter on very different mass scales. Of particular importance is the fact that lensing is sensitive to luminous and dark matter alike. Depending on the mass scale, all lensing effects are used in the search for matter: offsets in position, image distortion, magnification, and multiple images. Gravitational lens detections cover three main mass ranges: roughly stellar-mass, galaxy-mass, and galaxy-cluster-mass scales, i.e., well-known classes of objects. Various searches based on different techniques have explored the frequency of compact objects over more than 15 orders of magnitude, so far mostly yielding null results in mass ranges other than those just mentioned. Combined, the lensing results offer interesting limits on the cosmological frequency of compact objects in the mass interval 10^-3 <= M/M_sun <= 10^15, unfortunately still with some gaps in between. In the near future, further studies along these lines promise to fill the gaps and push the limits further down, or they might even detect new object classes.
Gravitational systems in astrophysics often comprise a body -- the primary -- that far outweighs the others, and which is taken as the centre of the reference frame. A fictitious acceleration, also known as the indirect term, must therefore be applied to all other bodies in the system to compensate for the absence of motion of the primary. In this paper, we first stress that there is not one indirect term but as many indirect terms as there are bodies in the system that exert a gravitational pull on the primary. For instance, in the case of a protoplanetary disc with two planets, there are three indirect terms: one arising from the whole disc, and one per planet. We also highlight that the direct and indirect gravitational accelerations should be treated in a balanced way: the indirect term from one body should be applied to the other bodies in the system that feel its direct gravitational acceleration, and only to them. We point to situations where one of these terms is nonetheless usually neglected, which may lead to spurious results. These ideas are developed here for star-disc-planet interactions, for which we propose a recipe for the force to be applied to a migrating planet, but they can easily be generalized to other astrophysical systems.
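For reference, in a frame centred on the primary, each body k of mass m_k at position r_k (measured from the primary) generates its own indirect term, namely minus the acceleration it imparts to the primary:

```latex
% Indirect (fictitious) acceleration due to body k, to be applied to every
% body that also feels body k's direct gravitational pull:
\mathbf{a}_{\mathrm{ind},\,k} = -\,\frac{G m_k}{|\mathbf{r}_k|^{3}}\,\mathbf{r}_k .
```

In the star-disc-two-planet example above, k runs over the disc (as an integral over its mass elements) and each of the two planets, giving the three indirect terms mentioned.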
Ensemble Kalman inversion (EKI) is an ensemble-based method for solving inverse problems. Its gradient-free formulation makes it an attractive tool for problems with involved formulations. However, EKI suffers from the "subspace property": the EKI solutions are confined to the subspace spanned by the initial ensemble. This implies that the ensemble size should be larger than the problem dimension to ensure EKI's convergence to the correct solution, a scaling that is impractical and prevents the use of EKI in high-dimensional problems. To address this issue, we propose a novel approach that uses dropout regularization to mitigate the subspace problem. We prove that dropout-EKI converges in the small-ensemble setting, and that the computational cost of the algorithm scales linearly with dimension. We also show that dropout-EKI reaches the optimal query complexity, up to a constant factor. Numerical examples demonstrate the effectiveness of our approach.
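As a rough sketch of how dropout could enter a basic EKI update, the code below zeroes random coordinates of the ensemble deviations before forming the covariances, so the update direction is no longer confined to the span of the initial ensemble. This placement of the dropout mask is our assumption for illustration; the paper's exact algorithm may differ.

```python
# One EKI step with an illustrative dropout mask on ensemble deviations.
import numpy as np

def dropout_eki_step(U, G, y, Gamma, rate=0.5, rng=None):
    """U: (J, d) ensemble; G: forward map R^d -> R^m; y: (m,) data."""
    rng = rng or np.random.default_rng()
    J = U.shape[0]
    GU = np.stack([G(u) for u in U])           # (J, m) forward evaluations
    dU = U - U.mean(axis=0)                    # parameter deviations
    dU = dU * (rng.random(dU.shape) > rate)    # dropout: zero random coords
    dG = GU - GU.mean(axis=0)                  # prediction deviations
    Cug = dU.T @ dG / J                        # (d, m) cross-covariance
    Cgg = dG.T @ dG / J                        # (m, m) prediction covariance
    noise = rng.multivariate_normal(np.zeros(len(y)), Gamma, size=J)
    innovation = y + noise - GU                # (J, m) perturbed residuals
    return U + innovation @ np.linalg.solve(Cgg + Gamma, Cug.T)
```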
Wolf-Rayet (WR) stars are the evolved descendants of the most massive stars and show emission-line-dominated spectra formed in their powerful stellar winds. Marking the final evolutionary stage before core collapse, the standard picture has been that WR stars evolve through three well-defined spectral subtypes known as WN, WC, and WO. Here, we present a detailed analysis of five objects that defy this scheme, demonstrating that WR stars can also evolve directly from the WN to the WO stage. Our study reveals that this direct transition is connected to low metallicity and weaker winds. The WN/WO stars and their immediate WN precursors are hot and emit a high flux of photons capable of fully ionizing helium. The existence of these stages reveals that massive stars which manage to shed their outer hydrogen layers in a low-metallicity environment can spend a considerable fraction of their lifetime in a stage that is difficult to detect in integrated stellar populations but at the same time yields hard ionizing flux. The identification of the WN-to-WO evolutionary path for massive stars has significant implications for understanding chemical enrichment and ionizing feedback in star-forming galaxies, in particular at earlier cosmic times.
We perform a quantitative analysis of extensive chess databases and show that the frequencies of opening moves are distributed according to a power law with an exponent that increases linearly with the game depth, whereas the pooled distribution of all opening weights follows Zipf's law with a universal exponent. We propose a simple stochastic process that captures the observed playing statistics and show that the Zipf law arises from the self-similar nature of the game tree of chess. Thus, in the case of hierarchical fragmentation, the scaling is truly universal and independent of the particular generating mechanism. Our findings are relevant to general processes involving composite decisions.
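A minimal sketch of the kind of rank-frequency fit behind such claims: sort the move counts, regress log frequency on log rank, and read off the exponent. The ordinary-least-squares fit and the toy counts are illustrative; careful work on real databases would favor maximum-likelihood estimators.

```python
# Illustrative Zipf-exponent fit on a rank-frequency plot.
import numpy as np

def zipf_exponent(counts):
    """Fit freq ~ rank^(-exponent) by least squares in log-log space."""
    freq = np.sort(np.asarray(counts, dtype=float))[::-1]
    rank = np.arange(1, len(freq) + 1)
    slope, _ = np.polyfit(np.log(rank), np.log(freq), 1)
    return -slope

# Toy opening-move counts (hypothetical, not from a real database):
print(zipf_exponent([5000, 2400, 1100, 600, 300, 150, 80, 40, 20, 10]))
```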
Motivated by the observation of non-exponential run-time distributions of bacterial swimmers, we propose a minimal phenomenological model for taxis of active particles whose motion is controlled by an internal clock. The ticking of the clock depends on an external concentration field, e.g. of a chemical substance. We demonstrate that these particles can detect concentration gradients and respond to them by moving up- or down-gradient depending on the clock design, even though their measurements of these fields are purely local in space and instantaneous in time. Altogether, our results open a new route in the study of directional navigation by showing that using a clock to control motility represents a generic and versatile toolbox for engineering behavioral responses to external cues, such as light, chemical, or temperature gradients.
GW230529_181500 was the first gravitational-wave detection in which the mass of one of the component objects was inferred to lie in the previously hypothesized mass gap between the heaviest neutron stars and the lightest observed black holes. Given the expected maximum mass of neutron stars, this object was identified as a black hole, and, with the secondary component being a neutron star, the detection was classified as a neutron star-black hole merger. However, due to the low signal-to-noise ratio and the known waveform degeneracy between spin and mass ratio in the employed gravitational-wave models, GW230529_181500 could also be interpreted as a merger of two heavy (≳2M⊙) neutron stars with high spins. We investigate the distinguishability of these scenarios by performing parameter estimation on simulated signals obtained from numerical-relativity waveforms for both neutron star-black hole and binary neutron star systems, with parameters consistent with GW230529_181500, and comparing them to the analysis of the real event data. We find that GW230529_181500 is more likely to have originated from a neutron star-black hole merger, though a binary neutron star origin cannot be ruled out. Moreover, we use the simulation data to estimate the signatures of potential electromagnetic counterparts emitted by the systems. We find them to be too dim to be located by current wide-field surveys if only the dynamical ejecta is considered, but detectable by the Vera C. Rubin Observatory during the first two days after merger if additional disk-wind ejecta is accounted for.
Data assimilation is a method that combines observations (that is, real-world data) of the state of a system with model output for that system in order to improve the estimate of the state and thereby the model output. The model is usually represented by a discretised partial differential equation. The data assimilation problem can be formulated as a large-scale Bayesian inverse problem. Based on this interpretation, we derive the most important variational and sequential data assimilation approaches, in particular three-dimensional and four-dimensional variational data assimilation (3D-Var and 4D-Var) and the Kalman filter. We then consider more advanced methods which extend the Kalman filter and variational data assimilation, paying particular attention to their advantages and disadvantages. The data assimilation problem usually results in a very large optimisation problem and/or a very large linear system to solve (due to the inclusion of the time and space dimensions). Therefore, the second part of this article reviews advances and challenges within the various data assimilation approaches, in particular from the numerical linear algebra perspective.
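For reference, a standard statement of the 3D-Var objective mentioned above, with background state x_b, background-error covariance B, observations y, observation operator H, and observation-error covariance R:

```latex
J(x) = \tfrac{1}{2}\,(x - x_b)^{\top} B^{-1} (x - x_b)
     + \tfrac{1}{2}\,\big(y - H(x)\big)^{\top} R^{-1} \big(y - H(x)\big),
\qquad
x_a = \arg\min_x J(x).
```

4D-Var extends the observation term to a sum over a time window with the model dynamics as a constraint, while the Kalman filter updates the analysis x_a and its error covariance sequentially in time; both lead to the large optimisation problems and linear systems discussed above.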