Consejo Nacional de Investigaciones Científicas y Técnicas (CONICET)
The DREAMS Project, based on IllustrisTNG simulations, quantifies the relative impact of baryonic feedback and intrinsic halo-to-halo variance on Milky Way dark matter density profiles, revealing that halo-to-halo variation is the dominant source of scatter. The study also demonstrates that IllustrisTNG halos generally contract adiabatically, in contrast with bursty feedback models such as FIRE-2.
Motivated by the recent baryon acoustic oscillation measurements of the DESI DR2 collaboration, this work presents an extended analysis of a cosmological model based on holographic dark energy within the framework of Unimodular Gravity. We probe the model with an extensive set of observations: cosmic chronometers, Pantheon+ and SH0ES Type Ia supernovae, DESI DR2 BAO distances, quasar X-ray/UV fluxes (two independent calibrations), and Planck 2018 CMB data. The results are analyzed to assess the model's ability to alleviate the Hubble tension and, in comparison with the standard ΛCDM framework, to determine which of the two scenarios is preferred according to Bayesian evidence. We conclude that the present implementation of holographic dark energy in Unimodular Gravity, while theoretically appealing, does not alleviate the Hubble tension and is not statistically preferred by Bayesian criteria when compared with the standard ΛCDM model. Nevertheless, in neither case does the evidence against it become very strong or conclusive.
The development of novel instrumentation requires an iterative cycle with three stages: design, prototyping, and testing. Recent advancements in simulation and nanofabrication techniques have significantly accelerated the design and prototyping phases. Nonetheless, detector characterization continues to be a major bottleneck in device development. During the testing phase, a significant time investment is required to characterize the device in different operating conditions and find optimal operating parameters. The total effort spent on characterization and parameter optimization can occupy a year or more of an expert's time. In this work, we present a novel technique for automated sensor calibration that aims to accelerate the testing stage of the development cycle. This technique leverages closed-loop Bayesian optimization (BO), using real-time measurements to guide parameter selection and identify optimal operating states. We demonstrate the method with a novel low-noise CCD, showing that the machine learning-driven tool can efficiently characterize and optimize operation of the sensor in a couple of days without supervision of a device expert.
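For readers unfamiliar with closed-loop Bayesian optimization, the sketch below illustrates the ask/measure/tell cycle the abstract describes, using scikit-optimize; the parameter bounds, the synthetic `measure_noise` objective, and the measurement budget are placeholders for a real sensor readout, not the authors' implementation.

```python
# Hypothetical closed-loop Bayesian-optimization calibration loop (ask/tell style).
import numpy as np
from skopt import Optimizer

search_space = [(0.0, 5.0), (1.0, 20.0)]  # illustrative operating-parameter bounds

def measure_noise(params):
    # Placeholder for a real-time sensor measurement: here a noisy synthetic
    # objective with a minimum near (2.5, 8.0), purely for demonstration.
    x, y = params
    return (x - 2.5) ** 2 + 0.1 * (y - 8.0) ** 2 + np.random.normal(0, 0.05)

opt = Optimizer(dimensions=search_space, base_estimator="GP", acq_func="EI")

observations = []
for _ in range(30):                       # measurement budget
    candidate = opt.ask()                 # BO proposes the next operating point
    result = measure_noise(candidate)     # "measurement" closes the loop
    opt.tell(candidate, result)           # update the surrogate model
    observations.append((result, candidate))

best_value, best_params = min(observations, key=lambda t: t[0])
print("best operating point:", best_params, "figure of merit:", best_value)
```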
Mobile genetic elements (e.g., endogenous viruses, LINEs, SINEs) can transfer between genomes, even between species, triggering dramatic genetic changes. Endogenous viral elements (EVEs) arise when infectious viruses integrate into the host germline. EVEs integrate at specific sites; their genes or regulatory regions can be exapted, and they could induce chromosomal rearrangements. We propose that EVEs participate in adaptive radiations and that their parent viruses, through interspecific transfer, could initiate new species formation. By synchronously affecting multiple individuals, viral outbreaks generate shared genomic changes that both facilitate reproductive isolation and provide the simultaneous modifications needed for groups to emerge as founders of new species. We suggest that horizontal viral transfer at the K-Pg boundary accelerated the mammalian radiation, linking viral epidemics to macroevolutionary diversification. This theoretical work proposes endogenous viruses as catalysts for explosive speciation.
Localization microscopy enables imaging with resolutions that surpass the conventional optical diffraction limit. Notably, the MINFLUX method achieves super-resolution by shaping the excitation point-spread function (PSF) to minimize the required photon flux for a given precision. Various beam shapes have recently been proposed to improve localization efficiency, yet their optimality remains an open question. In this work, we deploy a numerical and theoretical framework to determine optimal excitation patterns for MINFLUX. Such a computational approach allows us to search for new beam patterns in a fast and low-cost fashion, and to avoid time-consuming and expensive experimental explorations. We show that the conventional donut beam is a robust optimum when the excitation beams are all constrained to the same shape. Further, our PSF engineering framework yields two pairs of half-moon beams (orthogonal to each other) which can improve the theoretical localization precision by a factor of about two.
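The photon-limited localization precision that such beam optimization targets can be illustrated with the standard Cramér-Rao bound for multinomial photon counts; the 1D quadratic-minimum beam model and the numbers below are illustrative assumptions, not the paper's vectorial PSF-engineering framework.

```python
# Minimal 1D sketch of the estimation-theory calculation behind MINFLUX-type
# localization: counts over K exposures are multinomial with probabilities
# p_k(x) set by the excitation intensities I_k(x); the CRB follows from the
# Fisher information.
import numpy as np

L = 50.0                      # beam-separation scale [nm], assumed
N = 500                       # total detected photons, assumed
beam_centers = np.array([-L / 2, 0.0, L / 2])   # 1D probing pattern

def intensities(x, offset=1e-3):
    # Near an ideal zero a donut-like beam is ~quadratic in the displacement;
    # `offset` mimics a small non-zero minimum (imperfect zero / background).
    return (x - beam_centers) ** 2 + offset * L ** 2

def probabilities(x):
    I = intensities(x)
    return I / I.sum()

def crb_sigma(x, dx=1e-3):
    # Numerical derivative of p_k(x) and the multinomial Fisher information.
    p = probabilities(x)
    dp = (probabilities(x + dx) - probabilities(x - dx)) / (2 * dx)
    fisher = N * np.sum(dp ** 2 / p)
    return 1.0 / np.sqrt(fisher)

print(f"CRB at the pattern center: {crb_sigma(0.0):.2f} nm")
```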
Stars and planets both form by accreting material from a surrounding disk. Because they grow from the same material, theory predicts that there should be a relationship between their compositions. In this study, we search for a compositional link between rocky exoplanets and their host stars. We estimate the iron-mass fraction of rocky exoplanets from their masses and radii and compare it with the compositions of their host stars, which we assume reflect the compositions of the protoplanetary disks. We find a correlation (though not a 1:1 relationship) between these two quantities, with a slope greater than 4, which we interpret as attributable to planet formation processes. Super-Earths and super-Mercuries appear to be distinct populations with differing compositions, implying differences in their formation processes.
Context: The VISTA Variables in the Via Lactea (VVV) and its extension (VVVX) are near-infrared surveys mapping the Galactic bulge and adjacent disk. These data have enabled the discovery of numerous star clusters obscured by high and spatially variable extinction. Most previous searches relied on visual inspection of individual tiles, which is inefficient and biased against faint or low-density systems. Aims: We aim to develop an automated, homogeneous algorithm for systematic cluster detection across different surveys. Here, we apply our method to VVVX data covering low-latitude regions of the Galactic bulge and disk, affected by extinction and crowding. Methods: We introduce the Consensus-based Algorithm for Nonparametric Detection of Star Clusters (CANDiSC), which integrates kernel density estimation, the Density-Based Spatial Clustering of Applications with Noise (DBSCAN), and nearest-neighbour density estimation within a consensus framework. A stellar overdensity is classified as a candidate if identified by at least two of these methods. We apply CANDiSC to 680 tiles in the VVVX PSF photometric catalogue, covering approximately 1100 square degrees. Results: We detect 163 stellar overdensities, of which 118 are known clusters. Cross-matching with recent catalogues yields five additional matches, leaving 40 likely new candidates absent from existing compilations. The estimated false-positive rate is below 5 percent. Conclusions: CANDiSC offers a robust and scalable approach for detecting stellar clusters in deep near-infrared surveys, successfully recovering known systems and revealing new candidates in the obscured and crowded regions of the Galactic plane.
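The consensus logic can be sketched in a few lines with standard scikit-learn components; the synthetic star field, bandwidths, and thresholds below are arbitrary stand-ins and do not reproduce the actual CANDiSC pipeline or its VVVX settings.

```python
# Toy illustration of the CANDiSC consensus idea: a star belongs to a candidate
# overdensity only if at least two of three density-based detectors flag it.
import numpy as np
from sklearn.neighbors import KernelDensity, NearestNeighbors
from sklearn.cluster import DBSCAN

rng = np.random.default_rng(0)
field = rng.uniform(0, 10, size=(2000, 2))                    # uniform background
cluster = rng.normal(loc=[5, 5], scale=0.15, size=(150, 2))   # an embedded cluster
xy = np.vstack([field, cluster])

# 1) Kernel density estimation: flag high-density stars.
log_dens = KernelDensity(bandwidth=0.3).fit(xy).score_samples(xy)
flag_kde = log_dens > np.percentile(log_dens, 95)

# 2) DBSCAN: flag stars assigned to any cluster (label != -1).
labels = DBSCAN(eps=0.2, min_samples=10).fit_predict(xy)
flag_dbscan = labels != -1

# 3) Nearest-neighbour density: flag stars with a small 10th-neighbour distance.
dist, _ = NearestNeighbors(n_neighbors=11).fit(xy).kneighbors(xy)
knn_radius = dist[:, -1]
flag_knn = knn_radius < np.percentile(knn_radius, 5)

# Consensus: flagged by at least two of the three methods.
votes = flag_kde.astype(int) + flag_dbscan.astype(int) + flag_knn.astype(int)
candidate_members = votes >= 2
print(f"{candidate_members.sum()} stars belong to consensus overdensities")
```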
We introduce MINFLUX, a concept for localizing photon emitters in space. By probing the emitter with a local intensity minimum of excitation light, MINFLUX minimizes the fluorescence photons needed for high localization precision. A 22-fold reduction in photon detections over that required by popular centroid localization is demonstrated. In super-resolution microscopy, MINFLUX attained ~1 nm precision, resolving molecules only 6 nm apart. Tracking single fluorescent proteins by MINFLUX increased the temporal resolution and the number of localizations per trace by 100-fold, as demonstrated with diffusing 30S ribosomal subunits in living E. coli. Since conceptual limits have not been reached, we expect this localization modality to break new ground for observing the dynamics, distribution, and structure of macromolecules in living cells and beyond.
In the context of an extended General Relativity theory with boundary terms included, we introduce a new nonlinear quantum algebra involving a quantum differential operator, with the aim of calculating quantum geometric alterations when a particle is created in the vicinity of a Schwarzschild black hole by the Hawking radiation mechanism. The boundary terms in the varied action give rise to modifications of the geometric background, which are investigated by analyzing the metric tensor and the Ricci curvature within the framework of a renormalized quantum theory of gravity.
This study presents the development and optimization of a deep learning model based on Long Short-Term Memory (LSTM) networks to predict short-term hourly electricity demand in Córdoba, Argentina. Integrating historical consumption data with exogenous variables (climatic factors, temporal cycles, and demographic statistics), the model achieved high predictive precision, with a mean absolute percentage error of 3.20% and a coefficient of determination of 0.95. The inclusion of periodic temporal encodings and weather variables proved crucial for capturing seasonal patterns and extreme consumption events, enhancing the robustness and generalizability of the model. In addition to the design and hyperparameter optimization of the LSTM architecture, two complementary analyses were carried out: (i) an interpretability study using Random Forest regression to quantify the relative importance of exogenous drivers, and (ii) an evaluation of model performance in predicting the timing of daily demand maxima and minima, achieving exact-hour accuracy on more than two-thirds of the test days and accuracy within ±1 hour in over 90% of cases. Together, these results highlight both the predictive accuracy and the operational relevance of the proposed framework, providing valuable insights for grid operators seeking optimized planning and control strategies under diverse demand scenarios.
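As a rough illustration of the kind of architecture described, the Keras sketch below builds an LSTM forecaster over 24-hour windows with exogenous features and cyclical hour encodings; the layer sizes, window length, feature set, and synthetic data are assumptions, not the study's optimized configuration.

```python
# Minimal LSTM demand-forecasting sketch (illustrative, not the tuned model).
import numpy as np
import tensorflow as tf

WINDOW, N_FEATURES = 24, 5   # e.g. demand, temperature, sin/cos of hour, weekday flag

def cyclical_hour_encoding(hour):
    # Periodic encoding used when building the feature matrix, so that
    # hour 23 and hour 0 end up close in feature space.
    return np.sin(2 * np.pi * hour / 24), np.cos(2 * np.pi * hour / 24)

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(WINDOW, N_FEATURES)),
    tf.keras.layers.LSTM(64, return_sequences=False),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1),          # next-hour demand
])
model.compile(optimizer="adam", loss="mse", metrics=["mape"])

# Synthetic stand-in data purely to show the expected tensor shapes.
X = np.random.rand(1000, WINDOW, N_FEATURES).astype("float32")
y = np.random.rand(1000, 1).astype("float32")
model.fit(X, y, epochs=2, batch_size=32, validation_split=0.2, verbose=0)
```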
A problem of great interest in disciplines such as occupational health, ergonomics, and sports is the measurement of the biomechanical variables involved in human movement (such as internal muscle forces and joint torques). Currently, this problem is solved in a two-step process: first, data are captured with impractical, intrusive, and expensive devices; then, these data are used as input to complex models that output the biomechanical variables. The present work represents an automated, non-intrusive, and inexpensive alternative to the first step, proposing to capture these data from images. In future work, the idea is to automate the entire process of computing those variables. Here we address a particular case of biomechanical variable measurement: estimating the discrete level of load being exerted by the muscles of an arm. To estimate the load level from static images of the arm holding the load, we carry out a classification process. Our approach uses Support Vector Machines for classification, combined with a pre-processing stage that extracts visual features using several techniques (Bag of Keypoints, Local Binary Patterns, Color Histograms, Contour Moments). In the best cases (Local Binary Patterns and Contour Moments) we obtain classification performance measures (Precision, Recall, F-Measure, and Accuracy) above 90%.
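A hedged sketch of the best-performing combination reported (Local Binary Patterns features with an SVM classifier) is given below; the synthetic textures and all parameter choices are illustrative stand-ins for the real arm images and the study's tuned settings.

```python
# Illustrative LBP + SVM pipeline for discrete load-level classification.
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import local_binary_pattern
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

P, R = 8, 1                         # LBP neighbours and radius
N_BINS = P + 2                      # 'uniform' LBP yields P + 2 distinct codes

def lbp_histogram(image):
    # Normalize to 8-bit before computing LBP codes, then histogram the codes.
    img8 = (255 * (image - image.min()) / (np.ptp(image) + 1e-9)).astype(np.uint8)
    codes = local_binary_pattern(img8, P, R, method="uniform")
    hist, _ = np.histogram(codes, bins=N_BINS, range=(0, N_BINS), density=True)
    return hist

# Synthetic stand-ins for arm images at 3 discrete load levels (texture varies).
rng = np.random.default_rng(1)
features, labels = [], []
for level in range(3):
    for _ in range(60):
        img = gaussian_filter(rng.normal(size=(64, 64)), sigma=0.5 + level)
        features.append(lbp_histogram(img))
        labels.append(level)

X_train, X_test, y_train, y_test = train_test_split(
    np.array(features), np.array(labels), test_size=0.3, random_state=0)
clf = SVC(kernel="rbf", C=10).fit(X_train, y_train)
print(classification_report(y_test, clf.predict(X_test)))
```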
Motivated by the new BAO data and the significant results recently published by the DESI DR2 Collaboration, this study presents a Markov Chain Monte Carlo (MCMC) analysis of all currently viable f(R) models using this new dataset, and compares its constraining power with that of previous BAO compilations. A corresponding Bayesian model comparison is then carried out. The results reveal, for the first time, very strong statistical evidence in favor of f(R) models over the standard ΛCDM scenario. The analysis also incorporates data from cosmic chronometers and the latest Pantheon+ and SH0ES supernovae compilation.
The Neumann Equation of State (EQS) allows obtaining the value of the surface free energy of a solid, γ_SV, from the contact angle θ of a probe liquid with known surface tension γ_LV. The value of γ_SV is obtained by numerically solving the corresponding EQS. In this work, we analyzed the discrepancies between the values of γ_SV obtained using the three versions of the EQS reported in the literature. The condition number of the different EQS was used to analyze their sensitivity to the uncertainty in the θ values. Polynomials fit to one of these versions of the EQS are proposed to obtain values of γ_SV directly from contact angles, γ_SV(θ), for particular probe liquids. Finally, a general adjusted polynomial is presented to obtain the values of γ_SV without being restricted to a particular probe liquid, γ_SV(θ, γ_LV). Results showed that the three versions of the EQS present non-negligible discrepancies, especially at high values of θ. The sensitivity of the EQS to the uncertainty in the values of θ is very similar in the three versions and depends on the probe liquid used (greater sensitivity at higher γ_LV) and on the value of γ_SV of the solid (greater sensitivity at lower γ_SV). The discrepancy of the values obtained by numerical resolution of both the fifth-order fit polynomials and the general fit polynomial was low, no larger than ±0.40 mJ/m². The polynomials obtained allow the analysis and propagation of the uncertainty of the input variables in the determination of γ_SV in a simple and fast way.
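For concreteness, the sketch below numerically solves one commonly cited version of the Neumann EQS, cos θ = -1 + 2√(γ_SV/γ_LV) exp(-β(γ_LV - γ_SV)²), for γ_SV; the β value and the water example are typical literature numbers, and this root-finding sketch is not the paper's fitted polynomial.

```python
# Numerically solving one common form of the Neumann equation of state for gamma_SV.
import numpy as np
from scipy.optimize import brentq

BETA = 0.0001247   # (m^2/mJ)^2, commonly quoted literature value

def neumann_residual(gamma_sv, theta_deg, gamma_lv):
    lhs = np.cos(np.radians(theta_deg))
    rhs = -1 + 2 * np.sqrt(gamma_sv / gamma_lv) * np.exp(-BETA * (gamma_lv - gamma_sv) ** 2)
    return lhs - rhs

def solve_gamma_sv(theta_deg, gamma_lv):
    # For theta in (0, 180) deg the root lies between ~0 and gamma_lv,
    # so a bracketing root-finder suffices.
    return brentq(neumann_residual, 1e-6, gamma_lv, args=(theta_deg, gamma_lv))

# Example: water (gamma_LV ~ 72.8 mJ/m^2) with a 70-degree contact angle.
print(f"gamma_SV ~ {solve_gamma_sv(70.0, 72.8):.2f} mJ/m^2")
```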
We review the current status of the leptogenesis scenario originally proposed by Akhmedov, Rubakov and Smirnov (ARS). It takes place in the parametric regime where the right-handed neutrinos are at the electroweak scale or below and the CP-violating effects are induced by the coherent superposition of different right-handed mass eigenstates. Two main theoretical approaches to deriving quantum kinetic equations, the Hamiltonian time evolution and the Closed-Time-Path technique, are presented, and we discuss their relations. For scenarios with two right-handed neutrinos, we chart the viable parameter space. We discuss both a Bayesian analysis, which determines the most likely configurations for viable leptogenesis given different variants of flat priors, and a determination of the maximally allowed mixing between the light, mostly left-handed, and the heavy, mostly right-handed, neutrino states. Rephasing invariants are shown to be a useful tool to classify and understand the various distinct contributions to ARS leptogenesis that can dominate in different parametric regimes. While these analyses are carried out for the parametric regime where initial asymmetries are generated predominantly from lepton-number-conserving but flavor-violating effects, we also review the contributions from lepton-number-violating operators and identify the regions of parameter space where these are relevant.
We present a simple and broadly applicable extension of the Casas-Ibarra parametrisation that captures the structure of all Majorana neutrino mass models. Building directly on the original formulation, our approach naturally accommodates additional degrees of freedom and provides a unified, minimal framework for parametrising the Yukawa sector. It significantly simplifies both analytical treatments and numerical scans, and can be universally applied to any Majorana neutrino mass model, regardless of the underlying dynamics. The approach also offers a unified framework for classifying neutrino mass models according to the structure of the neutrino mass matrix, which naturally motivates the proposal of an extended version of the Scotogenic Model. This classification scheme yields tree-level (loop-level) representative models: the seesaw (Scotogenic Model), the linear seesaw (the Generalised Scotogenic Model), and the linear plus inverse seesaw (the Extended Scotogenic Model). We provide ready-to-use explicit expressions for several well-known scenarios, including the Zee model where one of the Yukawa matrices is antisymmetric.
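As a reference point, the original Casas-Ibarra parametrisation for the type-I seesaw, on which such extensions build, can be written as below; conventions for signs, factors of i, and the placement of the PMNS matrix vary between references, so this is one common form rather than necessarily the one adopted in the paper.

```latex
m_\nu \;=\; -\,\frac{v^{2}}{2}\, Y^{T}\hat{M}^{-1} Y \;=\; U^{*}\,\hat{m}_\nu\,U^{\dagger},
\qquad
Y \;=\; \frac{i\sqrt{2}}{v}\,\sqrt{\hat{M}}\; R\,\sqrt{\hat{m}_\nu}\; U^{\dagger},
\qquad R^{T}R=\mathbb{1},
```

where m̂_ν and M̂ are the diagonal light and heavy neutrino mass matrices, U is the PMNS matrix, and the complex orthogonal matrix R carries the extra degrees of freedom of the Yukawa sector.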
We put forth an approach to obtain a quantum master equation for the propagation of light in nonlinear fiber optics by relying on simple quantum pictures of the processes (linear and nonlinear) occurring along propagation in an optical fiber. This equation is shown to be in excellent agreement with the classical Generalized Nonlinear Schrödinger Equation and predicts the effects of self-steepening and spontaneous Raman scattering. Lastly, we apply these results to the analysis of two cases of relevance in quantum technologies: single-photon frequency translation and spontaneous four-wave mixing.
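The classical benchmark mentioned, the Generalized Nonlinear Schrödinger Equation, is commonly integrated with a split-step Fourier scheme; the minimal sketch below propagates a fundamental soliton of the plain scalar NLSE in normalized units (omitting the self-steepening and Raman terms) and only illustrates that classical reference, not the authors' quantum master equation.

```python
# Split-step Fourier integration of the scalar NLSE in normalized units.
import numpy as np

beta2, gamma_nl = -1.0, 1.0        # anomalous dispersion, instantaneous Kerr effect
z_total, n_steps = 5.0, 2000
dz = z_total / n_steps

n_t = 1024
T = np.linspace(-20, 20, n_t, endpoint=False)
dt = T[1] - T[0]
omega = 2 * np.pi * np.fft.fftfreq(n_t, dt)
A = 1.0 / np.cosh(T)               # fundamental-soliton input, A(0, T) = sech(T)

dispersion = np.exp(1j * (beta2 / 2) * omega**2 * dz)    # linear step (frequency domain)
for _ in range(n_steps):
    A = np.fft.ifft(np.fft.fft(A) * dispersion)           # dispersion step
    A = A * np.exp(1j * gamma_nl * np.abs(A)**2 * dz)     # Kerr nonlinearity step

print(f"peak power after z = {z_total}: {np.max(np.abs(A))**2:.3f} (soliton ~ 1)")
```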
Focused optical fields are key to a multitude of applications involving light-matter interactions, such as optical microscopy, single-molecule spectroscopy, optical tweezers, lithography, and quantum coherent control. A detailed vectorial characterization of focused optical fields, including a description beyond the paraxial approximation, is key to optimizing technological performance as well as to designing meaningful experiments and properly interpreting their results. Here, we present PyFocus, an open-source Python software package to perform fully vectorial calculations of focused electromagnetic fields after modulation by an arbitrary phase mask and in the presence of a multilayer system. We provide a graphical user interface and high-level functions to easily integrate PyFocus into custom scripts. Furthermore, to demonstrate the potential of PyFocus, we apply it to extensively characterize the generation of toroidal foci with a high-numerical-aperture objective, as is commonly done in super-resolution fluorescence microscopy methods such as STED or MINFLUX. We provide examples of the effects of different experimental factors such as polarization, aberrations, and misalignments of key optical elements. Finally, we present calculations of toroidal foci through an interface between different media and, to our knowledge, the first calculations of toroidal foci generated under total internal reflection conditions.
We study the bolometric evolution of the exceptional Type Ic Supernova (SN) 2022jli, aiming to understand the underlying mechanisms responsible for its distinctive double-peaked light curve morphology, extended timescales, and the rapid, steep decline in luminosity observed at around 270 days after the SN discovery. We present a quantitative assessment of two leading models through hydrodynamic radiative simulations: two shells enriched with nickel, and a combination of nickel and magnetar power. We explore the parameter space of a model in which the SN is powered by radioactive decay assuming a bimodal nickel distribution. While this setup can reproduce the early light curve properties, it struggles to explain the prominent second peak. We therefore consider a hybrid scenario with a rapidly rotating magnetar as an additional energy source. We find that the observed light curve morphology can be well reproduced by a model combining a magnetar engine and a double-layer ⁵⁶Ni distribution. The best-fitting case consists of a magnetar with a spin period of P ≃ 22 ms and a bipolar magnetic field strength of B ≃ 5×10¹⁴ G, together with a radioactive content of total nickel mass 0.15 M⊙ distributed across two distinct shells within a pre-SN structure of 11 M⊙. To reproduce the abrupt drop in luminosity at ~270 d, the energy deposition from the magnetar must be rapidly and effectively switched off.
This paper presents the first comprehensive interpretability analysis of a Transformer-based Sign Language Translation (SLT) model, focusing on the translation from video-based Greek Sign Language to glosses and text. Leveraging the Greek Sign Language Dataset, we examine the attention mechanisms within the model to understand how it processes and aligns visual input with sequential glosses. Our analysis reveals that the model pays attention to clusters of frames rather than individual ones, with a diagonal alignment pattern emerging between poses and glosses, which becomes less distinct as the number of glosses increases. We also explore the relative contributions of cross-attention and self-attention at each decoding step, finding that the model initially relies on video frames but shifts its focus to previously predicted tokens as the translation progresses. This work contributes to a deeper understanding of SLT models, paving the way for the development of more transparent and reliable translation systems essential for real-world applications.
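One simple way to quantify the diagonal alignment pattern described is to measure how much cross-attention mass lies near the rescaled main diagonal; in the sketch below the attention matrix is a random placeholder for weights extracted from the SLT model's decoder, and the metric itself is an illustrative diagnostic rather than the paper's exact analysis.

```python
# Diagonality score for a decoder cross-attention map (glosses x frames).
import numpy as np

def diagonality_score(attn, bandwidth=0.1):
    """attn: (n_glosses, n_frames) array with rows summing to 1.
    Returns the fraction of attention mass within `bandwidth` (as a fraction of
    the normalized axes) of the rescaled main diagonal."""
    n_out, n_in = attn.shape
    rows = np.arange(n_out)[:, None] / max(n_out - 1, 1)   # output position in [0, 1]
    cols = np.arange(n_in)[None, :] / max(n_in - 1, 1)     # input position in [0, 1]
    near_diag = np.abs(rows - cols) <= bandwidth
    return float((attn * near_diag).sum() / attn.sum())

# Placeholder attention map (e.g. averaged over heads of one decoder layer).
rng = np.random.default_rng(0)
attn = rng.random((12, 80))
attn /= attn.sum(axis=1, keepdims=True)
print(f"diagonality score: {diagonality_score(attn):.2f}")
```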
Optical antennas have been extensively employed to manipulate the photophysical properties of single-photon emitters. Coupling between an emitter and a given resonant mode of an optical antenna depends mainly on three parameters: the spectral overlap, the relative distance, and the relative orientation between the emitter's transition dipole moment and the antenna. While control over the first two has already been extensively demonstrated, achieving full coupling control remains unexplored due to the challenges of simultaneously manipulating both the position and the orientation of single molecules. Here, we use the DNA origami technique to assemble a dimer optical antenna and position a single fluorescent molecule at the antenna gap with controlled orientation, predominantly parallel or perpendicular to the antenna's main axis. We study the coupling for both conditions through fluorescence measurements correlated with scanning electron microscopy images, revealing a 5-fold higher average fluorescence intensity when the emitter is aligned with the antenna's main axis and a maximum fluorescence enhancement of ~1400-fold. A comparison to realistic numerical simulations suggests that the observed distribution of fluorescence enhancement arises from small variations in emitter orientation and gap size. This work establishes DNA origami as a versatile platform to fully control the coupling between emitters and optical antennas, paving the way for self-assembled nanophotonic devices with optimized and more homogeneous performance.