Université Montpellier
We employ the renormalization group optimized perturbation theory (RGOPT) resummation method to evaluate the equation of state (EoS) for strange ($N_f=2+1$) and non-strange ($N_f=2$) cold quark matter at NLO. This allows us to obtain the mass-radius relation for pure quark stars and compare the results with the predictions from perturbative QCD (pQCD) at NNLO. Choosing the renormalization scale to generate maximum star masses of order $M = 2$-$2.6\,M_\odot$, we show that the RGOPT can produce mass-radius curves compatible with the masses and radii of some recently observed pulsars, regardless of their strangeness content. The scale values required to produce the desired maximum masses are higher in the strange scenario since the EoS is softer in this case. The possible reasons for such behavior are discussed. Our results also show that, as expected, the RGOPT predictions for the relevant observables are less sensitive to scale variations than those furnished by pQCD.
We revisit primordial black hole (PBH) production in the axion-curvaton model, in light of recent developments in the computation of their abundance that account for non-Gaussianities (NGs) in the curvature perturbation to all orders. We find that the NGs intrinsically generated in such scenarios have a relevant impact on the phenomenology associated with PBHs and, in particular, on the relation between the abundance and the signal of second-order gravitational waves. We show that this model could explain both the totality of dark matter in the asteroid mass range and the tentative signal reported by the NANOGrav and IPTA collaborations in the nano-Hz frequency range. En route, we provide a new, explicit computation of the power spectrum of curvature perturbations going beyond the sudden-decay approximation.
The Large Area Telescope (LAT), the primary instrument for the Fermi Gamma-ray Space Telescope (Fermi) mission, is an imaging, wide field-of-view, high-energy gamma-ray telescope, covering the energy range from 30 MeV to more than 300 GeV. We describe the performance of the instrument at the 10-year milestone. LAT performance remains well within the specifications defined during the planning phase, validating the design choices and supporting the compelling case to extend the duration of the Fermi mission. The details provided here will be useful when designing the next generation of high-energy gamma-ray observatories.
Prototypical part learning is emerging as a promising approach for making semantic segmentation interpretable. The model selects real patches seen during training as prototypes and constructs the dense prediction map based on the similarity between parts of the test image and the prototypes. This improves interpretability since the user can inspect the link between the predicted output and the patterns learned by the model in terms of prototypical information. In this paper, we propose a method for interpretable semantic segmentation that leverages multi-scale image representation for prototypical part learning. First, we introduce a prototype layer that explicitly learns diverse prototypical parts at several scales, leading to multi-scale representations in the prototype activation output. Then, we propose a sparse grouping mechanism that produces multi-scale sparse groups of these scale-specific prototypical parts. This provides a deeper understanding of the interactions between multi-scale object representations while enhancing the interpretability of the segmentation model. The experiments conducted on Pascal VOC, Cityscapes, and ADE20K demonstrate that the proposed method increases model sparsity, improves interpretability over existing prototype-based methods, and narrows the performance gap with the non-interpretable counterpart models. Code is available at github.com/eceo-epfl/ScaleProtoSeg.
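As a rough illustration of the generic prototypical-part scoring idea that this line of work builds on (not the paper's exact layer; the function name, the choice of cosine similarity, and the tensor shapes are assumptions made here for concreteness), the dense activation map between per-pixel image features and learned prototype vectors can be sketched as:

```python
import numpy as np

def prototype_activation(features: np.ndarray, prototypes: np.ndarray) -> np.ndarray:
    """Cosine-similarity activation maps between dense features and prototypes.

    features:   (H, W, D) dense embedding of the image
    prototypes: (P, D) learned prototype vectors
    returns:    (P, H, W) activation maps, one per prototype
    """
    # Normalize both tensors so the dot product becomes a cosine similarity
    f = features / (np.linalg.norm(features, axis=-1, keepdims=True) + 1e-8)
    p = prototypes / (np.linalg.norm(prototypes, axis=-1, keepdims=True) + 1e-8)
    # Contract the shared feature dimension D
    return np.einsum("hwd,pd->phw", f, p)
```

A pixel whose embedding matches a prototype scores close to 1 in the corresponding map; the dense prediction is then assembled from these per-prototype maps, which is what makes the link between output and learned patterns inspectable.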
We present a comprehensive review of the physical behavior of yield stress materials in soft condensed matter, which encompass a broad range of materials from colloidal assemblies and gels to emulsions and non-Brownian suspensions. All these disordered materials display a nonlinear flow behavior in response to external mechanical forces, due to the existence of a finite force threshold for flow to occur: the yield stress. We discuss both the physical origin and rheological consequences associated with this nonlinear behavior, and give an overview of experimental techniques available to measure the yield stress. We discuss recent progress concerning a microscopic theoretical description of the flow dynamics of yield stress materials, emphasizing in particular the role played by relaxation time scales, the interplay between shear flow and aging behavior, the existence of inhomogeneous shear flows and shear bands, wall slip, and non-local effects in confined geometries.
The second Gaia data release (DR2) contains very precise astrometric and photometric properties for more than one billion sources, astrophysical parameters for dozens of millions, radial velocities for millions, variability information for half a million stellar sources, and orbits for thousands of solar system objects. Before the Catalogue publication, these data underwent dedicated validation processes. The goal of this paper is to describe the validation results in terms of completeness, accuracy and precision of the various Gaia DR2 data. The validation processes include a systematic analysis of the Catalogue content to detect anomalies, either individual errors or statistical properties, using statistical analysis and comparisons to external data or to models. Although the astrometric, photometric and spectroscopic data are of unprecedented quality and quantity, it is shown that the data cannot be used without dedicated attention to the limitations described here, in the Catalogue documentation and in accompanying papers. A particular emphasis is put on the caveats for the statistical use of the data in scientific exploitation.
The Fermi Large Area Telescope (LAT) Collaboration has recently released a catalog of 360 sources detected above 50 GeV (2FHL). This catalog was obtained using 80 months of data re-processed with Pass 8, the newest event-level analysis, which significantly improves the acceptance and angular resolution of the instrument. Most of the 2FHL sources at high Galactic latitude are blazars. Using detailed Monte Carlo simulations, we measure, for the first time, the source count distribution, $dN/dS$, of extragalactic $\gamma$-ray sources at $E>50$ GeV and find that it is compatible with a Euclidean distribution down to the lowest measured source flux in the 2FHL ($\sim 8\times 10^{-12}$ ph cm$^{-2}$ s$^{-1}$). We employ a one-point photon fluctuation analysis to constrain the behavior of $dN/dS$ below the source detection threshold. Overall, the source count distribution is constrained over three decades in flux and found compatible with a broken power law with a break flux, $S_b$, in the range $[8 \times 10^{-12}, 1.5 \times 10^{-11}]$ ph cm$^{-2}$ s$^{-1}$ and power-law indices below and above the break of $\alpha_2 \in [1.60, 1.75]$ and $\alpha_1 = 2.49 \pm 0.12$, respectively. Integration of $dN/dS$ shows that point sources account for at least $86^{+16}_{-14}\%$ of the total extragalactic $\gamma$-ray background. The simple form of the derived source count distribution is consistent with a single population (i.e. blazars) dominating the source counts down to the minimum flux explored by this analysis. We estimate the density of sources detectable in blind surveys that will be performed in the coming years by the Cherenkov Telescope Array.
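To make the broken power-law parametrization of the source counts concrete, the sketch below evaluates $dN/dS$ and numerically integrates $S\,dN/dS$ (the quantity whose integral gives the total point-source flux). The normalization and the specific index and break values are illustrative assumptions picked from within the ranges quoted above, not the fitted catalog values:

```python
import numpy as np

# Illustrative parameters only, chosen within the quoted ranges
S_B = 1e-11        # break flux [ph cm^-2 s^-1]
ALPHA_1 = 2.49     # power-law index above the break
ALPHA_2 = 1.65     # power-law index below the break
A = 1.0            # normalization at the break [arbitrary units]

def dn_ds(s):
    """Broken power-law differential source counts dN/dS."""
    s = np.asarray(s, dtype=float)
    return np.where(s >= S_B,
                    A * (s / S_B) ** -ALPHA_1,
                    A * (s / S_B) ** -ALPHA_2)

def flux_density(s):
    """Integrand S * dN/dS; its integral over flux is the total flux."""
    return s * dn_ds(s)

# Trapezoidal integration of S*dN/dS over roughly three decades in flux
s_grid = np.logspace(-12.1, -9.0, 2000)
f = flux_density(s_grid)
total_flux = float(np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(s_grid)))
```

The break is what makes the integral well behaved: with an index below 2 under the break the flux integral converges at faint fluxes, while an index above 2 over the break keeps it finite at the bright end.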
We employ all-atom well-tempered metadynamics simulations to study the mechanistic details of both the early stages of nucleation and crystal decomposition for the benchmark metal-organic framework ZIF-8. To do so, we developed and validated a force field that reliably models the modes of coordination bonds via a Morse potential functional form and employs cationic and anionic dummy atoms to capture coordination symmetry. We also explored a set of physically relevant collective variables and carefully selected an appropriate subset for the problem at hand. After a rapid increase of the Zn-N connectivity, we observe the evaporation of small clusters in favor of a few large clusters that lead to the formation of an amorphous, highly connected aggregate. Zn(MIm)$_4^{2-}$ and Zn(MIm)$_3^{-}$ complexes are observed, with lifetimes on the order of a few picoseconds, while larger structures, such as 4-, 5- and 6-membered rings, have substantially longer lifetimes of a few nanoseconds. The free ligands act as ``templating agents'' for the formation of the sodalite cages. ZIF-8 crystal decomposition results in the formation of a vitreous phase. Our findings contribute to a fundamental understanding of MOF synthesis that paves the way to controlling synthesis products. Furthermore, our force field and methodology can be applied to model solution processes that require coordination bond reactivity in other ZIFs besides ZIF-8.
New long read sequencing technologies, like PacBio SMRT and Oxford NanoPore, can produce sequencing reads up to 50,000 bp long but with an error rate of at least 15%. Reducing the error rate is necessary for subsequent utilisation of the reads in, e.g., de novo genome assembly. The error correction problem has been tackled either by aligning the long reads against each other or by a hybrid approach that uses the more accurate short reads produced by second generation sequencing technologies to correct the long reads. We present an error correction method that uses long reads only. The method consists of two phases: first we use an iterative alignment-free correction method based on de Bruijn graphs with increasing length of k-mers, and second, the corrected reads are further polished using long-distance dependencies that are found using multiple alignments. According to our experiments the proposed method is the most accurate one relying on long reads only for read sets with high coverage. Furthermore, when the coverage of the read set is at least 75x, the throughput of the new method is at least 20% higher. LoRMA is freely available at this http URL
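The paper's actual pipeline (iterative de Bruijn graph correction with increasing $k$, then multiple-alignment polishing) is considerably more elaborate; the toy sketch below only illustrates the underlying alignment-free idea shared by such methods: classify k-mers as solid or weak by their counts in the read set, then replace bases that would turn weak k-mers into solid ones. All names and the threshold value are assumptions made for this sketch:

```python
from collections import Counter

def kmer_counts(reads, k):
    """k-mer spectrum of the read set."""
    counts = Counter()
    for r in reads:
        for i in range(len(r) - k + 1):
            counts[r[i:i + k]] += 1
    return counts

def correct_read(read, counts, k, threshold=2):
    """Greedy left-to-right correction: when a k-mer is weak
    (count < threshold), substitute its last base with the one
    that yields the most frequent k-mer, if any is better."""
    read = list(read)
    for i in range(len(read) - k + 1):
        kmer = "".join(read[i:i + k])
        if counts[kmer] >= threshold:
            continue  # already solid, nothing to do
        best, best_count = None, counts[kmer]
        for base in "ACGT":
            cand = kmer[:-1] + base
            if counts[cand] > best_count:
                best, best_count = cand, counts[cand]
        if best is not None:
            read[i + k - 1] = best[-1]
    return "".join(read)
```

Running correction for several increasing values of $k$, as the abstract describes, lets small $k$ fix dense errors first and larger $k$ resolve repeats afterwards.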
Ultrasound-modulated optical tomography (UOT) is a technique that images optical contrast deep inside scattering media. Heterodyne holography is a promising tool able to detect the UOT tagged photons with high efficiency. In this work, we describe theoretically the detection of the tagged photons in heterodyne-holography-based UOT, show how to filter out the untagged photons, and discuss the effect of speckle decorrelation. We show that optimal detection sensitivity can be obtained if the frame exposure time is of the order of the decorrelation time.
Overlaps between words are crucial in many areas of computer science, such as code design, stringology, and bioinformatics. A self-overlapping word is characterized by its periods and borders. A period of a word $u$ is the starting position of a suffix of $u$ that is also a prefix of $u$, and such a suffix is called a border. Each word of length, say $n>0$, has a set of periods, but not all combinations of integers are sets of periods. Computing the period set of a word $u$ takes linear time in the length of $u$. We address the question of computing the set, denoted $\Gamma_n$, of all period sets of words of length $n$. Although period sets have been characterized, there is no formula to compute the cardinality of $\Gamma_n$ (which is exponential in $n$), and the known dynamic programming algorithm to enumerate $\Gamma_n$ suffers from its space complexity. We present an incremental approach to compute $\Gamma_n$ from $\Gamma_{n-1}$, which reduces the space complexity, and then a constructive certification algorithm useful for verification purposes. The incremental approach defines a parental relation between sets in $\Gamma_{n-1}$ and $\Gamma_n$, enabling one to investigate the dynamics of period sets and their intriguing statistical properties. Moreover, the period set of a word $u$ is the key for computing the absence probability of $u$ in random texts. Thus, knowing $\Gamma_n$ is useful to assess the significance of word statistics, such as the number of missing words in a random text.
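The linear-time computation of a single word's period set mentioned above is the classic border-chain construction: compute the KMP failure function, then walk the chain of borders of the full word, each border of length $b$ witnessing the period $n-b$. A minimal sketch (excluding the trivial period $n$ associated with the empty border):

```python
def period_set(u: str) -> set[int]:
    """Return the set of (non-trivial) periods of u in O(len(u)) time.

    A period p corresponds to a border of length len(u) - p;
    all borders are found by following the failure-function chain."""
    n = len(u)
    # f[i] = length of the longest border of the prefix u[:i]
    f = [0] * (n + 1)
    k = 0
    for i in range(1, n):
        while k > 0 and u[i] != u[k]:
            k = f[k]
        if u[i] == u[k]:
            k += 1
        f[i + 1] = k
    # Walk the border chain of the full word
    periods = set()
    b = f[n]
    while b > 0:
        periods.add(n - b)
        b = f[b]
    return periods
```

For example, "abaab" has the single border "ab", hence the single period 3, while "aaaa" has periods {1, 2, 3}.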
There is little doubt that biotic interactions shape community assembly and ultimately the spatial co-variations between species. There is hope that the signal of these biotic interactions can be observed and retrieved by investigating the spatial associations between species while accounting for the direct effects of the environment. By definition, biotic interactions can be both symmetric and asymmetric. Yet, most models that attempt to retrieve species associations from co-occurrence or co-abundance data internally assume symmetric relationships between species. Here, we propose and validate a machine-learning framework able to retrieve bidirectional associations by analyzing species community and environmental data. Our framework (1) models pairwise species associations as directed influences from a source to a target species, parameterized with two species-specific latent embeddings: the effect of the source species on the community, and the response of the target species to the community; and (2) jointly fits these associations within a multi-species conditional generative model with different modes of interactions between environmental drivers and biotic associations. Using both simulated and empirical data, we demonstrate the ability of our framework to recover known asymmetric and symmetric associations and highlight the properties of the learned association networks. By comparing our approach to existing models such as joint species distribution models and probabilistic graphical models, we show its superior capacity to retrieve symmetric and asymmetric interactions. The framework is intuitive, modular and broadly applicable across various taxonomic groups.
Our approach is part of the close link between continuous dissipative dynamical systems and optimization algorithms. We aim to solve convex minimization problems by means of stochastic inertial differential equations driven by the gradient of the objective function. This provides a general mathematical framework for analyzing fast optimization algorithms with stochastic gradient input. Our study is a natural extension of our previous work devoted to the first-order-in-time stochastic steepest descent. Our goal is to develop these results further by considering second-order-in-time stochastic differential equations, incorporating a viscous time-dependent damping and a Hessian-driven damping. To carry out this program, we rely on stochastic Lyapunov analysis. Assuming a square-integrability condition on the diffusion term times a function dependent on the viscous damping, and assuming that the Hessian-driven damping is a positive constant, our first main result shows that the values converge almost surely and establishes fast convergence of the values in expectation. Moreover, in the case where the Hessian-driven damping is zero, we obtain fast convergence of the values both in expectation and almost surely, and we also prove almost sure weak convergence of the trajectory. We provide a comprehensive complexity analysis by establishing several new pointwise and ergodic convergence rates in expectation for the convex and strongly convex cases.
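Schematically (with notation assumed here rather than taken verbatim from the paper), a second-order-in-time stochastic dynamic with viscous damping $\gamma(t)$ and a constant Hessian-driven damping $\beta \ge 0$ can be written as the first-order system

```latex
\begin{aligned}
dX(t) &= V(t)\,dt,\\
dV(t) &= -\Big(\gamma(t)\,V(t) + \beta\,\nabla^2 f(X(t))\,V(t) + \nabla f(X(t))\Big)\,dt + \sigma(t)\,dW(t),
\end{aligned}
```

where $f$ is the convex objective, $\sigma$ the diffusion term, and $W$ a Brownian motion; the square-integrability condition mentioned above then constrains $\sigma$ through a weight depending on the viscous damping $\gamma$.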
The dwarf spheroidal satellite galaxies of the Milky Way are some of the most dark-matter-dominated objects known. Due to their proximity, high dark matter content, and lack of astrophysical backgrounds, dwarf spheroidal galaxies are widely considered to be among the most promising targets for the indirect detection of dark matter via gamma rays. Here we report on gamma-ray observations of Milky Way dwarf spheroidal satellite galaxies based on 6 years of Fermi Large Area Telescope data processed with the new Pass 8 reconstruction and event-level analysis. None of the dwarf galaxies are significantly detected in gamma rays, and we present upper limits on the dark matter annihilation cross section from a combined analysis of the 15 most promising dwarf galaxies. The constraints derived are among the strongest to date using gamma rays and lie below the canonical thermal relic cross section for WIMPs of mass $\lesssim 100$ GeV annihilating via the $b\bar{b}$ and $\tau^{+}\tau^{-}$ channels.
The gamma-ray sky has been observed with unprecedented accuracy in the last decade by the Fermi Large Area Telescope (LAT), allowing us to resolve and understand the high-energy Universe. The nature of the remaining unresolved emission (the unresolved gamma-ray background, UGRB) below the LAT source detection threshold can be uncovered by characterizing the amplitude and angular scale of the UGRB fluctuation field. This work presents a measurement of the UGRB autocorrelation angular power spectrum based on eight years of Fermi LAT Pass 8 data products. The analysis is designed to be robust against contamination from resolved sources and noise systematics. The sensitivity to subthreshold sources is greatly enhanced with respect to previous measurements. We find evidence (with $\sim 3.7\sigma$ significance) that the scenario in which two classes of sources contribute to the UGRB signal is favored over a single class. A double power law with exponential cutoff can explain the anisotropy energy spectrum well, with photon indices of the two populations being $2.55 \pm 0.23$ and $1.86 \pm 0.15$.
The Hubble diagram of type-Ia supernovae (SNe-Ia) provides cosmological constraints on the nature of dark energy with an accuracy limited by the flux calibration of currently available spectrophotometric standards. The StarDICE experiment aims at establishing a 5-stage metrology chain from NIST photodiodes to stars, with a targeted accuracy of 1 mmag in $griz$ colors. We present the first two stages, resulting in the calibration transfer from NIST photodiodes to a demonstration 150 Mpixel CMOS sensor (Sony IMX411ALR as implemented in the QHY411M camera by QHYCCD). As a side product, we provide a full characterization of this camera. A fully automated spectrophotometric bench was built to perform the calibration transfer. The sensor readout electronics is studied using thousands of flat-field images, from which we derive stability, high-resolution photon transfer curves and estimates of the individual pixel gain. The sensor quantum efficiency is then measured relative to a NIST-calibrated photodiode. Flat-field scans at 16 different wavelengths are used to build maps of the sensor response. We demonstrate statistical uncertainty on quantum efficiency below 0.001 e$^-$/$\gamma$ between 387 nm and 950 nm. Systematic uncertainties in the bench optics are controlled at the level of $10^{-3}$ e$^-$/$\gamma$. Uncertainty in the overall normalization of the QE curve is 1%. Regarding the camera, we demonstrate stability in steady-state conditions at the level of 32.5 ppm. Homogeneity in the response is below 1% RMS across the entire sensor area. Quantum efficiency stays above 50% in most of the visible range, peaking well above 80% between 440 nm and 570 nm. Differential non-linearities at the level of 1% are detected. A simple 2-parameter model is proposed to mitigate the effect.
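The photon transfer curve mentioned above rests on a standard relation: for shot-noise-limited flat fields, the variance in ADU grows linearly with the mean, with slope equal to the inverse gain, $\mathrm{Var} \approx \bar{m}/g + \sigma_{\mathrm{read}}^2$. The following self-contained simulation (with assumed gain and read-noise values, not the IMX411ALR's actual figures) sketches how gain is recovered from such a fit:

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed "true" sensor parameters for the simulation (illustrative only)
GAIN = 1.6          # e-/ADU
READ_NOISE = 3.0    # ADU rms

means, variances = [], []
for n_electrons in np.linspace(1e3, 5e4, 20):
    # Poisson photo-electrons converted to ADU, plus Gaussian read noise
    electrons = rng.poisson(n_electrons, size=100_000)
    adu = electrons / GAIN + rng.normal(0.0, READ_NOISE, size=100_000)
    means.append(adu.mean())
    variances.append(adu.var())

# Photon transfer curve: Var(ADU) ~ mean(ADU)/g + sigma_read^2,
# so the slope of the variance-vs-mean line is 1/gain
slope, intercept = np.polyfit(means, variances, 1)
gain_estimate = 1.0 / slope
```

In practice, pairs of flat fields are differenced to suppress fixed-pattern noise before computing the variance; the sketch skips that step for brevity.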
We revisit the optimal control problem of maximizing biogas production in continuous bio-processes in two directions: 1. over an infinite horizon, 2. with sub-optimal controllers independent of the time horizon. For the first point, we identify a set of optimal controls for the problems with an averaged reward and with a discounted reward when the discount factor goes to 0 and we show that the value functions of both problems are equal. For the finite horizon problem, our approach relies on a framing of the value function by considering a different reward for which the optimal solution has an explicit optimal feedback that is time-independent. In particular, we show that this technique allows us to provide explicit bounds on the sub-optimality of the proposed controllers. The various strategies are finally illustrated on Haldane and Contois growth functions.
Antares is the closest red supergiant (RSG) to Earth. The discovery of linear polarization in the atomic lines of the star opens the path to producing direct images of the photosphere, hence probing the dynamics at the surface. We analyze these linear polarization signals following the same scheme as was previously applied to the RSG Betelgeuse, and find that they are comparable in all their details. This allows us to use the same models for the analysis of these polarization signals in both stars. We find that, as in Betelgeuse, the linear polarization signal of Antares is due to the depolarization of the continuum combined with brightness inhomogeneities. This allows us to produce images of the photosphere of the star. We show that in Antares, convective cells can last several months, and the largest ones occupy roughly 30% of the stellar radius.
The Helfrich-Hurault (HH) elastic instability is a well-known mechanism behind patterns that form as a result of strain upon liquid crystal systems with periodic ground states. In the HH model, layered structures undulate and buckle in response to local, geometric incompatibilities, in order to maintain the preferred layer spacing. Classic HH systems include cholesteric liquid crystals under electromagnetic field distortions and smectic liquid crystals under mechanical strains, where both materials are confined between rigid substrates. However, richer phenomena are observed when undulation instabilities occur in the presence of deformable interfaces and variable boundary conditions. Understanding how the HH instability is affected by deformable surfaces is imperative for applying the instability to a broader range of materials. In this review, we re-examine the HH instability and give special focus to how the boundary conditions influence the mechanical response of lamellar systems to geometrical frustration. We use lamellar liquid crystals confined within a spherical shell geometry as our model system. Made possible by the relatively recent advances in microfluidics within the past 15 years, liquid crystal shells are composed entirely of fluid interfaces and have boundary conditions that can be dynamically controlled at will. We examine past and recent work that exemplifies how topological constraints, molecular anchoring conditions, and boundary curvature can trigger the HH instability in liquid crystals with periodic ground states. We then end by identifying similar phenomena across a wide variety of materials, both biological and synthetic. With this review, we aim to highlight that the HH instability is a generic and often overlooked response of periodic materials to geometrical frustration.
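For orientation, in the textbook rigid-plate case that the review generalizes to deformable boundaries, a smectic slab of thickness $d$ undulates above a critical dilative strain set by the competition between layer bending ($K$) and layer compression ($B$):

```latex
\lambda = \sqrt{K/B}, \qquad \alpha_c \simeq \frac{2\pi\lambda}{d}, \qquad q_c \simeq \sqrt{\frac{\pi}{\lambda d}},
```

where $\lambda$ is the smectic penetration length, $\alpha_c$ the threshold strain, and $q_c$ the undulation wavevector at onset. These are the classic order-of-magnitude results; prefactors vary with anchoring and boundary conditions, which is precisely the dependence this review examines.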
M.I. Dyakonov's paper provides a comprehensive, unified survey of surface waves across diverse physical systems, from hydrodynamics to electromagnetism, emphasizing their shared underlying principles and mathematical similarities. It details various types of surface waves, including the author's own contribution of Dyakonov surface waves, and explains observable phenomena through these common frameworks.