We develop a framework for learning properties of quantum states beyond the assumption of independent and identically distributed (i.i.d.) input states. We prove that, given any learning problem (under reasonable assumptions), an algorithm designed for i.i.d. input states can be adapted to handle input states of any nature, albeit at the expense of a polynomial increase in training data size (aka sample complexity). Importantly, this polynomial increase in sample complexity can be substantially improved to polylogarithmic if the learning algorithm in question only requires non-adaptive, single-copy measurements. Among other applications, this allows us to generalize the classical shadow framework to the non-i.i.d. setting while only incurring a comparatively small loss in sample efficiency. We use rigorous quantum information theory to prove our main results. In particular, we leverage permutation invariance and randomized single-copy measurements to derive a new quantum de Finetti theorem that mainly addresses measurement outcome statistics and, in turn, scales much more favorably in Hilbert space dimension.
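For reference, here is a minimal sketch of the standard i.i.d. classical-shadow protocol with non-adaptive, random single-qubit Pauli measurements, i.e., the single-copy setting whose sample complexity the result above degrades only polylogarithmically. This is illustrative NumPy code, not the paper's non-i.i.d. construction; the two-qubit state and observable are arbitrary choices.

```python
# Minimal sketch of the i.i.d. classical-shadow protocol with random
# single-qubit Pauli measurements (illustrative only).
import numpy as np

rng = np.random.default_rng(0)

# Single-qubit basis-change unitaries: measure in the X, Y, or Z basis.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
S = np.array([[1, 0], [0, 1j]])
I2 = np.eye(2)
BASIS = {"X": H, "Y": H @ S.conj().T, "Z": I2}

def snapshot(rho, n_qubits):
    """One round: random Pauli basis per qubit, then the standard
    inverse-channel reconstruction of a classical shadow."""
    labels = rng.choice(list("XYZ"), size=n_qubits)
    U = BASIS[labels[0]]
    for lab in labels[1:]:
        U = np.kron(U, BASIS[lab])
    probs = np.clip(np.real(np.diag(U @ rho @ U.conj().T)), 0, None)
    outcome = rng.choice(2**n_qubits, p=probs / probs.sum())
    bits = [(outcome >> (n_qubits - 1 - j)) & 1 for j in range(n_qubits)]
    shadow = np.array([[1.0]])
    for lab, b in zip(labels, bits):
        ket = BASIS[lab].conj().T[:, b].reshape(2, 1)  # post-measurement state
        shadow = np.kron(shadow, 3 * ket @ ket.conj().T - I2)
    return shadow

# Example: estimate <Z_0> on the 2-qubit state |00><00| (exact value: 1).
rho = np.zeros((4, 4)); rho[0, 0] = 1.0
Z0 = np.kron(np.array([[1, 0], [0, -1]]), I2)
est = np.mean([np.trace(Z0 @ snapshot(rho, 2)).real for _ in range(2000)])
print(f"estimated <Z_0> = {est:.2f}")
```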
The CGM around unobscured AGN has received much attention in recent years. Comparatively, nebulae associated with obscured AGN are less studied. Here, we simulate the Lyα, Hα, and HeII nebulae around the two types of AGN at z = 2-3 with ten massive systems from the FIRE simulations, based on the unified model, to show their differences and to test whether they can be used to constrain the AGN model. We post-process the data with CLOUDY and the Lyα radiative transfer code RASCAS. Overall, we find that the Lyα nebulae around the unobscured AGN (type-I nebulae) and obscured AGN (type-II nebulae) do not exhibit significant differences in luminosity, area, and HeII/Lyα when the simulated cutout is set to the halo virial radius. In contrast, the type-II nebulae exhibit less symmetric morphologies, flatter surface brightness profiles, and larger emission line widths (at R ≥ 10 kpc) than the type-I nebulae. These nebula properties exhibit complicated correlations with the AGN, indicating that nebula observations can be applied to constrain the AGN engine. However, independent observations of nebulae in these emission lines are insufficient to test the unified model, since the direction and opening angle of the ionization cone cannot be known a priori in observations. We propose that joint observations of Lyα nebulae and radio jets can help reveal the ionization cone and thus probe the unified model. Our calculations suggest that this method requires ≥ 75 type-II Lyα nebulae with current instruments to reach a confidence level of ≥ 95%.
JWST has revealed an abundance of supermassive black holes (BHs) in the early Universe, and yet the lowest-mass seed black holes that gave rise to these populations remain elusive. Here we present a systematic search for broad-line Active Galactic Nuclei (AGNs) in some of the faintest high-z galaxies surveyed yet, by combining ultra-deep JWST/NIRSpec G395M spectroscopy with the aid of strong lensing in Abell S1063. By employing the profile of the [OIII]λ5007 emission lines as a template for narrow-line components and carefully cross-validating with mock observations, we identify a sample of ten broad-line AGNs at $4.5
DeepInverse is an open-source PyTorch-based library for solving imaging inverse problems. The library covers all crucial steps in image reconstruction from the efficient implementation of forward operators (e.g., optics, MRI, tomography), to the definition and resolution of variational problems and the design and training of advanced neural network architectures. In this paper, we describe the main functionality of the library and discuss the main design choices.
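To illustrate the kind of variational problem such a library targets, here is a plain-PyTorch sketch that deliberately does not use DeepInverse's own API; the operator, data, and regularization weight are toy choices. It recovers x from y = A x + noise by minimizing a data-fidelity term plus an L1 prior.

```python
# Illustrative sketch (plain PyTorch, not the DeepInverse API) of a variational
# inverse problem: recover x from y = A x + noise via data fidelity + prior.
import torch

torch.manual_seed(0)
n = 64
A = torch.randn(32, n) / n**0.5           # toy compressive forward operator
x_true = torch.zeros(n); x_true[::8] = 1.0  # sparse ground truth
y = A @ x_true + 0.01 * torch.randn(32)   # noisy measurements

x = torch.zeros(n, requires_grad=True)
opt = torch.optim.Adam([x], lr=0.05)
lam = 0.01                                 # regularization weight
for _ in range(500):
    opt.zero_grad()
    # Least-squares data fidelity plus an L1 sparsity prior.
    loss = 0.5 * (A @ x - y).pow(2).sum() + lam * x.abs().sum()
    loss.backward()
    opt.step()

print("recovery error:", (x.detach() - x_true).norm().item())
```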
Multivariate longitudinal data of mixed type are increasingly collected in many science domains. However, algorithms to cluster this kind of data remain scarce, due to the challenge of simultaneously modelling the within-time and between-time dependence structures for multivariate data of mixed type. We introduce the Mixture of Mixed-Matrices (MMM) model: reorganizing the data in a three-way structure and assuming that the non-continuous variables are observations of underlying latent continuous variables, the model relies on a mixture of matrix-variate normal distributions to perform clustering in the latent dimension. The MMM model is thus able to handle continuous, ordinal, binary, nominal, and count data and to concurrently model the heterogeneity, the association among the responses, and the temporal dependence structure in a parsimonious way and without assuming conditional independence. Inference is carried out through an MCMC-EM algorithm, which is described in detail. An evaluation of the model on synthetic data demonstrates its inferential abilities. A real-world application on financial data is presented.
Dwarf galaxies provide powerful laboratories for studying galaxy formation physics. Their early assembly, shallow gravitational potentials, and bursty, clustered star formation histories make them especially sensitive to the processes that regulate baryons through multi-phase outflows. Using high-resolution, cosmological zoom-in simulations of a dwarf galaxy from the Pandora suite, we explore the impact of stellar radiation, magnetic fields, and cosmic ray feedback on star formation, outflows, and metal retention. We find that our purely hydrodynamical model without non-thermal physics - in which supernova feedback is boosted to reproduce realistic stellar mass assembly - drives violent, overly enriched outflows that suppress the metal content of the host galaxy. Including radiation reduces the clustering of star formation and weakens feedback. However, the additional incorporation of cosmic rays produces fast, mass-loaded, multi-phase outflows consisting of both ionized and neutral gas components, in better agreement with observations. These outflows, which entrain a denser, more temperate ISM, exhibit broad metallicity distributions while preserving metals within the galaxy. Furthermore, the star formation history becomes more bursty, in agreement with recent JWST findings. These results highlight the essential role of non-thermal physics in galaxy evolution and the need to incorporate it in future galaxy formation models.
We present an overview of the JWST GLIMPSE program, highlighting its survey design, primary science goals, gravitational lensing models, and first results. GLIMPSE provides ultra-deep JWST/NIRCam imaging across seven broadband filters (F090W, F115W, F200W, F277W, F356W, F444W) and two medium-band filters (F410M, F480M), with exposure times ranging from 20 to 40 hours per filter. This yields a 5σ limiting magnitude of 30.9 AB (measured in a 0.2 arcsec diameter aperture). The field is supported by extensive ancillary data, including deep HST imaging from the Hubble Frontier Fields program, VLT/MUSE spectroscopy, and deep JWST/NIRSpec medium-resolution multi-object spectroscopy. Exploiting the strong gravitational lensing of the galaxy cluster Abell S1063, GLIMPSE probes intrinsic depths beyond 33 AB magnitudes and covers an effective source-plane area of approximately 4.4 arcmin² at z ∼ 6. The program's central aim is to constrain the abundance of the faintest galaxies from z ∼ 6 up to the highest redshifts, providing crucial benchmarks for galaxy formation models, which have so far been tested primarily on relatively bright systems. We present an initial sample of ∼540 galaxy candidates identified at 6 < z < 16, with intrinsic UV magnitudes spanning M_UV = -20 to -12. This enables unprecedented constraints on the extreme faint end of the UV luminosity function at these epochs. In addition, GLIMPSE opens new windows for spatially resolved studies of star clusters in early galaxies and the detection and characterization of faint high-z active galactic nuclei. This paper accompanies the first public data release, which includes reduced JWST and HST mosaics, photometric catalogs, and gravitational lensing models.
Enshrouded in several well-known controversies, dwarf galaxies have been extensively studied to learn about the underlying cosmology, notwithstanding that the physical processes regulating their properties are poorly understood. To shed light on these processes, we introduce the Pandora suite of 17 high-resolution (3.5 parsec half-cell side) dwarf galaxy formation cosmological simulations. Commencing with thermo-turbulent star formation and mechanical supernova feedback, we gradually increase the complexity of the physics incorporated, leading to full-physics models combining magnetism, on-the-fly radiative transfer and the corresponding stellar photoheating, and SN-accelerated cosmic rays. We investigate combinations of these processes, comparing them with observations to constrain the main mechanisms that determine dwarf galaxy properties. We find that hydrodynamical `SN feedback-only' simulations struggle to produce realistic dwarf galaxies, leading to systems that are either overquenched or too centrally concentrated and dispersion-dominated when compared to observed field dwarfs. Accounting for radiation together with cosmic rays results in extended and rotationally supported systems. Spatially `distributed' feedback leads to realistic stellar and HI masses as well as kinematics. Furthermore, resolved kinematic maps of our full-physics models predict kinematically distinct clumps and kinematic misalignments of stars, HI, and HII after star formation events. Episodic star formation combined with its associated feedback induces more core-like dark matter central profiles, which our `SN feedback-only' models struggle to achieve. Our results demonstrate the complexity of the physical processes required to capture realistic dwarf galaxy properties, making tangible predictions for integral field unit surveys, radio synchrotron emission, and for galaxy and multi-phase interstellar medium properties that JWST will probe.
This paper tackles two key challenges: detecting small, dense, and overlapping objects (a major hurdle in computer vision) and improving the quality of noisy images, especially those encountered in industrial environments [1, 2]. Our focus is on evaluating methods built on supervised deep learning. We perform an analysis of these methods, using a newly developed dataset comprising over 10k images and 120k instances. By evaluating their performance, accuracy, and computational efficiency, we identify the most reliable detection systems and highlight the specific challenges they address in industrial applications. This paper also examines the use of deep learning models to improve image quality in noisy industrial environments. We introduce a lightweight model based on a fully connected convolutional network. Additionally, we suggest potential future directions for further enhancing the effectiveness of the model. The repository of the dataset and proposed model can be found at: this https URL, this https URL
We leverage JWST's superb resolution to derive strong lensing mass maps of 14 clusters, spanning a redshift range of z ∼ 0.25-1.06 and a mass range of M_500 ∼ 2-12 × 10^14 M_⊙, from the Strong LensIng and Cluster Evolution (SLICE) JWST program. These clusters represent a small subsample of the first clusters observed in the SLICE program that are chosen based on the detection of new multiple image constraints in the SLICE-JWST NIRCam/F150W2 and F322W2 imaging. These constraints include new lensed dusty galaxies and new substructures in previously identified lensed background galaxies. Four clusters have never been modeled before. For the remaining 10 clusters, we present updated models based on JWST and HST imaging and, where available, ground-based spectroscopy. We model the global mass profile for each cluster and report the mass enclosed within 200 and 500 kpc. We report the number of new systems identified in the JWST imaging, which in one cluster is as high as 19 new systems. The addition of new lensing systems and constraints from substructure clumps in lensed galaxies improves the ability of strong lensing models to accurately reproduce the interior mass distribution of each cluster. We also report the discovery of a candidate transient in a lensed image of the galaxy cluster SPT-CL J0516-5755. All lens models and their associated products are available for download at the Strong Lensing Cluster Atlas Data Base, which is hosted at Laboratoire d'Astrophysique de Marseille.
We present a Sugawara-type construction for boundary charges in 4d BF theory and in a general family of related TQFTs. Starting from the underlying current Lie algebra of boundary symmetries, this gives rise to well-defined quadratic charges forming an algebra of vector fields. In the case of 3d BF theory (i.e. 3d gravity), it was shown in [PRD 106 (2022), arXiv:2012.05263 [hep-th]] that this construction leads to a two-dimensional family of diffeomorphism charges which satisfy a certain modular duality. Here we show that adapting this construction to 4d BF theory first requires splitting the underlying gauge algebra. Surprisingly, the space of well-defined quadratic generators can then be shown to be once again two-dimensional. In the case of tangential vector fields, this canonically endows 4d BF theory with a diff(S²) × diff(S²) or diff(S²) ⋉ vect(S²)_ab algebra of boundary symmetries, depending on the gauge algebra. The prospect is to then understand how this can be reduced to a gravitational symmetry algebra by imposing Plebański simplicity constraints.
Numerous high-z galaxies have recently been observed with the James Webb Space Telescope (JWST), providing new insights into early galaxy evolution. Their physical properties are typically derived through spectral energy distribution (SED) fitting, but the reliability of this approach for such early systems remains uncertain. Applying Bagpipes to simulated SEDs at z = 6 from the SPHINX^20 cosmological simulation, we examine uncertainties in the recovery of stellar masses, star formation rates (SFR_10), and stellar metallicities from mock JWST/Near-Infrared Camera photometry. Even without dust or emission lines, fitting the intrinsic stellar continuum overestimates the stellar mass by about 60% on average (and by up to a factor of five for low-mass galaxies with recent starbursts) and underestimates SFR_10 by a factor of two, owing to inaccurate star formation histories and age-metallicity degeneracies. The addition of dust and nebular emission further amplifies these biases, yielding offsets of approximately +0.3 and -0.4 dex in stellar mass and SFR_10, respectively, while leaving stellar metallicities largely unconstrained. Incorporating bands free of strong emission lines, such as F410M, helps mitigate stellar mass overestimation by disentangling line emission from older stellar populations. We also find that best-fit or likelihood-weighted estimates are generally more accurate than median posterior values. Although stellar mass functions are reproduced reasonably well, the slope of the star formation main sequence depends sensitively on the adopted fitting model. Overall, these results underscore the importance of careful modelling when interpreting high-z photometry, particularly for galaxies with recent star formation bursts and/or strong emission lines, to minimise systematic biases in derived physical properties.
The 4MOST Cosmology Redshift Survey (CRS) will obtain nearly 5.4 million spectroscopic redshifts over ∼5700 deg² to map large-scale structure and enable measurements of baryon acoustic oscillations (BAOs), growth rates via redshift-space distortions, and cross-correlations with weak-lensing surveys. We validate the target selections, photometry, masking, systematics, and redshift distributions of the CRS Bright Galaxy (BG) and Luminous Red Galaxy (LRG) target catalogues selected from DESI Legacy Surveys DR10.1 imaging. We measure the angular two-point correlation function, test masking strategies, and recover redshift distributions via cross-correlation with DESI DR1 spectroscopy. For BG, we adopt Legacy Survey MASKBITS that veto bright stars, SGA large galaxies, and globular clusters; for LRG, we pair these with an unWISE W1 artefact mask. These choices suppress small-scale excess power without imprinting large-scale modes. A Limber-scaling test across BG r-band magnitude slices shows that, after applying the scaling, the w(θ) curves collapse to a near-common power law over the fitted angular range, demonstrating photometric uniformity with depth and consistency between the North (NGC) and South (SGC) Galactic Caps. Cross-correlations with DESI spectroscopy recover the expected N(z), with higher shot noise at the brightest magnitudes. For LRGs, angular clustering in photo-z slices (0.4 ≤ z < 1.0) is mutually consistent between the DECaLS and DES footprints at fixed z and is well described by an approximate power law once photo-z smearing is accounted for; halo-occupation fits yield results consistent with recent LRG studies. Together, these tests indicate that the masks and target selections yield uniform clustering statistics, supporting precision large-scale structure analyses with 4MOST CRS.
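For context, the angular two-point correlation function referenced above is commonly estimated with the Landy-Szalay estimator, w(θ) = (DD - 2DR + RR)/RR. The following brute-force toy (uniform random points on a small patch, not the CRS pipeline or its masks) sketches that computation.

```python
# Schematic Landy-Szalay estimator for the angular two-point correlation
# function w(theta); a brute-force toy, not the 4MOST CRS pipeline.
import numpy as np

rng = np.random.default_rng(1)

def pair_counts(a, b, bins):
    """Histogram angular separations (degrees) between two RA/Dec sets."""
    ra1, dec1 = np.radians(a).T
    ra2, dec2 = np.radians(b).T
    cossep = (np.sin(dec1)[:, None] * np.sin(dec2)[None, :]
              + np.cos(dec1)[:, None] * np.cos(dec2)[None, :]
              * np.cos(ra1[:, None] - ra2[None, :]))
    sep = np.degrees(np.arccos(np.clip(cossep, -1.0, 1.0)))
    return np.histogram(sep.ravel(), bins=bins)[0]

# Toy data: "galaxies" and an unclustered random catalogue on a small patch.
data = rng.uniform([0, -2], [4, 2], size=(500, 2))    # RA, Dec in degrees
rand = rng.uniform([0, -2], [4, 2], size=(2000, 2))
bins = np.logspace(-2, 0, 8)

# Normalized (ordered) pair counts; zero-separation self-pairs fall below the
# first bin edge and are excluded automatically.
dd = pair_counts(data, data, bins) / (len(data) * (len(data) - 1))
rr = pair_counts(rand, rand, bins) / (len(rand) * (len(rand) - 1))
dr = pair_counts(data, rand, bins) / (len(data) * len(rand))
w_theta = (dd - 2 * dr + rr) / rr    # Landy-Szalay: (DD - 2DR + RR) / RR
print(w_theta)                       # ~0 here, since the toy data are unclustered
```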
Adversarial Robustness Toolbox (ART) is a Python library supporting developers and researchers in defending Machine Learning models (Deep Neural Networks, Gradient Boosted Decision Trees, Support Vector Machines, Random Forests, Logistic Regression, Gaussian Processes, Decision Trees, Scikit-learn Pipelines, etc.) against adversarial threats, helping make AI systems more secure and trustworthy. Machine Learning models are vulnerable to adversarial examples, which are inputs (images, texts, tabular data, etc.) deliberately modified to elicit a desired response from the Machine Learning model. ART provides the tools to build and deploy defences and test them with adversarial attacks. Defending Machine Learning models involves certifying and verifying model robustness and model hardening with approaches such as pre-processing inputs, augmenting training data with adversarial samples, and leveraging runtime detection methods to flag any inputs that might have been modified by an adversary. The attacks implemented in ART allow adversarial attacks to be crafted against Machine Learning models, which is required to test defences under state-of-the-art threat models. Supported Machine Learning libraries include TensorFlow (v1 and v2), Keras, PyTorch, MXNet, Scikit-learn, XGBoost, LightGBM, CatBoost, and GPy. The source code of ART is released under the MIT license at this https URL. The release includes code examples, notebooks with tutorials, and documentation (this http URL).
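A typical ART workflow follows the pattern sketched below: wrap a trained model in an ART estimator, then craft adversarial examples with an evasion attack to stress-test a defence. The toy model, data, and parameters are placeholders, and the module paths reflect recent ART releases; check the documentation of the installed version.

```python
# Sketch of a typical ART workflow with a PyTorch model and an FGSM attack.
import numpy as np
import torch.nn as nn
from art.estimators.classification import PyTorchClassifier
from art.attacks.evasion import FastGradientMethod

# Toy stand-in model; in practice this would be a trained network.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
classifier = PyTorchClassifier(
    model=model,
    loss=nn.CrossEntropyLoss(),
    input_shape=(1, 28, 28),
    nb_classes=10,
    clip_values=(0.0, 1.0),
)

x = np.random.rand(8, 1, 28, 28).astype(np.float32)  # toy inputs
attack = FastGradientMethod(estimator=classifier, eps=0.1)
x_adv = attack.generate(x=x)  # perturbations bounded by eps in the L-inf norm

# Compare predictions on clean vs. adversarial inputs.
print(classifier.predict(x).argmax(axis=1))
print(classifier.predict(x_adv).argmax(axis=1))
```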
We present the first results from SPHINX-MHD, a suite of cosmological radiation-magnetohydrodynamics simulations designed to study the impact of primordial magnetic fields (PMFs) on galaxy formation and the evolution of the intergalactic medium during the epoch of reionization. The simulations are among the first to employ multi-frequency, on-the-fly radiation transfer and constrained-transport ideal MHD in a cosmological context to simultaneously model the inhomogeneous process of reionization as well as the growth of PMFs. We run a series of (5 cMpc)³ cosmological volumes, varying both the strength of the seed magnetic field (B_0) and its spectral index (n_B). We find that PMFs with n_B > -0.562 log10(B_0 / 1 nG) - 3.35 produce electron optical depths (τ_e) that are inconsistent with CMB constraints due to the unrealistically early collapse of low-mass dwarf galaxies. For n_B ≥ -2.9, our constraints are considerably tighter than the ∼nG constraints from Planck. PMFs that do not satisfy our constraints have little impact on the reionization history or the shape of the UV luminosity function. Likewise, detecting changes in the Lyα forest due to PMFs will be challenging because photoionisation and photoheating efficiently smooth the density field. However, we find that the first absorption feature in the global 21cm signal is a sensitive indicator of the properties of the PMFs, even for those that satisfy our τ_e constraint. Furthermore, strong PMFs can marginally increase the escape of LyC photons by up to 25% and shrink the effective radii of galaxies by ∼44%, which could increase the completeness fraction of galaxy surveys. Finally, our simulations show that surveys with a magnitude limit of M_UV,1500 = -13 can probe the sources that provide the majority of photons for reionization out to z = 12.
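To make the quoted bound concrete, evaluating it for an illustrative seed-field strength of B_0 = 0.1 nG gives

```latex
n_B > -0.562\,\log_{10}\!\left(\frac{0.1\,\mathrm{nG}}{1\,\mathrm{nG}}\right) - 3.35
    = 0.562 - 3.35 \approx -2.79 ,
```

so, at that field strength, spectral indices above roughly -2.79 would be excluded by the τ_e constraint.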
A key challenge in maximizing the benefits of Magnetic Resonance Imaging (MRI) in clinical settings is to accelerate acquisition times without significantly degrading image quality. This objective requires a balance between under-sampling the raw k-space measurements for faster acquisitions and gathering sufficient raw information for high-fidelity image reconstruction and analysis tasks. To achieve this balance, we propose to use sequential Bayesian experimental design (BED) to provide an adaptive and task-dependent selection of the most informative measurements. Measurements are sequentially augmented with new samples selected to maximize information gain on a posterior distribution over target images. Selection is performed via a gradient-based optimization of a design parameter that defines a subsampling pattern. In this work, we introduce a new active BED procedure that leverages diffusion-based generative models to handle the high dimensionality of the images and employs stochastic optimization to select among a variety of patterns while meeting the acquisition process constraints and budget. In doing so, we show how our setting can optimize not only standard image reconstruction but also any associated image analysis task. The versatility and performance of our approach are demonstrated on several MRI acquisitions.
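As a didactic stand-in for the information-gain criterion described above (not the paper's diffusion-based, gradient-optimized procedure), the following toy shows sequential design in a linear-Gaussian setting, where the gain of a candidate measurement has a closed form and the posterior covariance admits a rank-one update; the dimensions, noise level, and candidate set are arbitrary.

```python
# Toy sequential Bayesian experimental design in a linear-Gaussian setting:
# greedily pick the next measurement row that maximizes information gain.
import numpy as np

rng = np.random.default_rng(0)
d, n_candidates, budget = 32, 128, 10
A = rng.standard_normal((n_candidates, d)) / np.sqrt(d)  # candidate measurements
sigma2 = 0.01                                            # noise variance
cov = np.eye(d)                                          # Gaussian prior covariance

chosen = []
for _ in range(budget):
    gains = []
    for i in range(n_candidates):
        if i in chosen:
            gains.append(-np.inf)
            continue
        a = A[i:i + 1]
        # Information gain of adding row a: 0.5 * log(1 + a Σ aᵀ / σ²).
        gains.append(0.5 * np.log(1.0 + (a @ cov @ a.T).item() / sigma2))
    best = int(np.argmax(gains))
    chosen.append(best)
    a = A[best:best + 1]
    # Rank-one posterior covariance update for the selected measurement.
    cov = cov - (cov @ a.T @ a @ cov) / (sigma2 + (a @ cov @ a.T).item())

print("selected measurement indices:", chosen)
```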
Primordial black holes have been under intense scrutiny since the detection of gravitational waves from mergers of solar-mass black holes in 2015. More recently, the development of numerical tools and precision observational data have rekindled the effort to constrain the black hole abundance in the lower mass range, that is, M < 10^23 g. In particular, primordial black holes of asteroid mass, M ∼ 10^17-10^23 g, may represent 100% of dark matter. While the microlensing and stellar disruption constraints on their abundance have been relieved, Hawking radiation of these black holes seems to be the only means of detecting (and constraining) them. Hawking radiation constraints on primordial black holes date back to the first papers by Hawking. Black holes evaporating in the early universe may have generated the baryon asymmetry, modified big bang nucleosynthesis, distorted the cosmic microwave background, or produced cosmological backgrounds of stable particles such as photons and neutrinos. At the end of their lifetime, exploding primordial black holes would produce high-energy cosmic rays that would provide invaluable access to the physics at energies up to the Planck scale. In this review, we describe the main principles of Hawking radiation, which lie at the border of general relativity, quantum mechanics, and statistical physics. We then present an up-to-date status of the different constraints on primordial black holes that rely on the evaporation phenomenon, and give, where relevant, prospects for future work. In particular, non-standard black holes and the emission of beyond-the-Standard-Model degrees of freedom are currently hot subjects.
Kronecker-sparse (KS) matrices -- whose supports are Kronecker products of identity and all-ones blocks -- underpin the structure of Butterfly and Monarch matrices and offer the promise of more efficient models. However, existing GPU kernels for KS matrix multiplication suffer from high data movement costs, with up to 50% of time spent on memory-bound tensor permutations. We propose a fused, output-stationary GPU kernel that eliminates these overheads, reducing global memory traffic threefold. Across 600 KS patterns, our kernel achieves a median speedup of 1.4× in FP32 and lowers energy consumption by 15%. A simple heuristic based on KS pattern parameters predicts when our method outperforms existing ones. We release all code at github.com/PascalCarrivain/ksmm, including a PyTorch-compatible KSLinear layer, and demonstrate FP32 end-to-end latency reductions of up to 22% in ViT-S/16 and 16% in GPT-2 medium.
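The support structure named above can be written in one line; the sketch below (PyTorch, with arbitrary block sizes) only illustrates the sparsity pattern, while the optimized kernel and the KSLinear layer live in the released repository.

```python
# Minimal illustration of a Kronecker-sparse support: identity ⊗ all-ones.
import torch

a, b, c = 3, 4, 2   # example block sizes (arbitrary)
support = torch.kron(torch.eye(a), torch.ones(b, c))   # shape (a*b, a*c)
print(support.int())
# A KS matrix keeps free parameters only where `support` is nonzero, so a
# matrix-vector product touches a*b*c values instead of (a*b)*(a*c).
```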
We present the Flagship galaxy mock, a simulated catalogue of billions of galaxies designed to support the scientific exploitation of the Euclid mission. Euclid is a medium-class mission of the European Space Agency optimised to determine the properties of dark matter and dark energy on the largest scales of the Universe. It probes structure formation over more than 10 billion years, primarily through the combination of weak gravitational lensing and galaxy clustering data. The breadth of Euclid's data will also foster a wide variety of scientific analyses. The Flagship simulation was developed to provide a realistic approximation to the galaxies that will be observed by Euclid and used in its scientific analyses. We ran a state-of-the-art N-body simulation with four trillion particles, producing a lightcone on the fly. From the dark matter particles, we produced a catalogue of 16 billion haloes in one octant of the sky in the lightcone up to redshift z = 3. We then populated these haloes with mock galaxies using a halo occupation distribution and abundance matching approach, calibrating the free parameters of the galaxy mock against observed correlations and other basic galaxy properties. Modelled galaxy properties include luminosity and flux in several bands, redshifts, positions and velocities, spectral energy distributions, shapes and sizes, stellar masses, star formation rates, metallicities, emission line fluxes, and lensing properties. We selected a final sample of 3.4 billion galaxies with a magnitude cut of H_E < 26, where we are complete. We have performed a comprehensive set of validation tests to check the similarity to observational data and theoretical models. In particular, our catalogue is able to closely reproduce the main characteristics of the weak lensing and galaxy clustering samples to be used in the mission's main cosmological analysis. (abridged)
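As an illustration of the halo occupation distribution approach mentioned above, the following sketch implements a standard five-parameter HOD form (Zheng et al. 2005) for central and satellite galaxies; the parameter values and the mean-occupation convention are arbitrary placeholders, not the Flagship calibration.

```python
# Standard five-parameter HOD (Zheng et al. 2005): mean central and satellite
# occupation as a function of halo mass. Illustrative parameters only.
import numpy as np
from scipy.special import erf

def n_central(log_m, log_m_min=12.0, sigma=0.3):
    """Mean number of central galaxies in a halo of mass 10**log_m."""
    return 0.5 * (1.0 + erf((log_m - log_m_min) / sigma))

def n_satellite(log_m, log_m0=12.2, log_m1=13.3, alpha=1.0):
    """Mean number of satellites, a power law above a cutoff mass M0."""
    m = 10.0 ** log_m
    return np.where(m > 10.0 ** log_m0,
                    (m - 10.0 ** log_m0) / 10.0 ** log_m1, 0.0) ** alpha

log_m = np.linspace(11.5, 14.5, 7)
# Mean galaxies per halo under one common convention: N_cen * (1 + N_sat).
print(n_central(log_m) * (1.0 + n_satellite(log_m)))
```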
Markarian 231 (Mrk 231) is one of the brightest ultraluminous infrared galaxies (ULIRGs) known to date. It displays a unique optical-UV spectrum, characterized by a strong and perplexing attenuation in the near-UV, associated with a sudden polarization peak. Building on previous spectro-photometric modeling, we investigated the hypothesis that the core of Mrk 231 may host a binary SMBH system. In this scenario, the accretion disk of the primary, more massive SMBH is responsible for the optical-UV spectrum. The disk of the secondary, less massive SMBH would be expected to emit essentially in the far-UV. We applied this model to archival photometric and polarimetric data of Mrk 231 to obtain the best possible fit. To support our findings, we performed radiative transfer calculations to determine the spatial disposition of each main component constituting Mrk 231. We find that a binary SMBH model can reproduce both the observed flux and polarization of Mrk 231 remarkably well. We infer that the core potentially hosts a binary SMBH system, with a primary SMBH of about 1.6×10^8 solar masses and a secondary of about 1.1×10^7 solar masses, separated by a semimajor axis of 146 AU. The secondary SMBH drives a degree of polarization of 3% between 0.1 and 0.2 μm, with a corresponding polarization position angle of about 134°, which is consistent with scattering within an accretion disk. The primary SMBH and the structure around it are responsible for a degree of polarization of 23% between 0.3 and 0.4 μm, with a corresponding polarization position angle of about 96°, possibly attributed to scattering within the quasar's wind. Finally, our model predicts the existence of a second peak in polarized flux in the far-ultraviolet, a telltale signature that could definitively prove the presence of a binary SMBH.