With the rise of emerging risks, model uncertainty poses a fundamental challenge in the insurance industry, making robust pricing a first-order question. This paper investigates how insurers' robustness preferences shape competitive equilibrium in a dynamic insurance market. Insurers optimize their underwriting and liquidity management strategies to maximize shareholder value, leading to equilibrium outcomes that can be analytically derived and numerically solved. Compared to a benchmark without model uncertainty, robust insurance pricing results in significantly higher premiums and equity valuations. Notably, our model yields three novel insights: (1) The minimum, maximum, and admissible range of aggregate capacity all expand, indicating that insurers' liquidity management becomes more conservative. (2) The expected length of the underwriting cycle increases substantially, far exceeding the range commonly reported in earlier empirical studies. (3) While the capacity process remains ergodic in the long run, the stationary density becomes more concentrated in low-capacity states, implying that liquidity-constrained insurers require longer to recover. Together, these findings provide a potential explanation for recent skepticism regarding the empirical evidence of underwriting cycles, suggesting that such cycles may indeed exist but are considerably longer than previously assumed.
The Gaia Galactic survey mission is designed and optimized to obtain astrometry, photometry, and spectroscopy of nearly two billion stars in our Galaxy. Yet as an all-sky multi-epoch survey, Gaia also observes several million extragalactic objects down to a magnitude of G~21 mag. Due to the nature of the Gaia onboard selection algorithms, these are mostly point-source-like objects. Using data provided by the satellite, we have identified quasar and galaxy candidates via supervised machine learning methods, and estimate their redshifts using the low-resolution BP/RP spectra. We further characterise the surface brightness profiles of host galaxies of quasars and of galaxies from pre-defined input lists. Here we give an overview of the processing of extragalactic objects, describe the data products in Gaia DR3, and analyse their properties. Two integrated tables contain the main results for a high-completeness but low-purity (50-70%) set of 6.6 million candidate quasars and 4.8 million candidate galaxies. We provide queries that select purer sub-samples of these containing 1.9 million probable quasars and 2.9 million probable galaxies (both 95% purity). We also use high-quality BP/RP spectra of 43 thousand high-probability quasars over the redshift range 0.05-4.36 to construct a composite quasar spectrum spanning rest-frame wavelengths of 72-100 nm.
This paper presents an automated approach for designing processors that support a subset of the RISC-V instruction set architecture (ISA) for a new class of applications at the extreme edge. The electronics used in extreme edge applications must be area- and power-efficient, but must also provide additional qualities, such as low cost, conformability, comfort and sustainability. Flexible electronics, rather than silicon-based electronics, will be able to meet the above qualities. For this purpose, we propose a methodology for generating RISC-V instruction subset processors (RISSPs) tailored to these applications and implementing them as flexible integrated circuits (FlexICs). The methodology makes verification an integral part of the processor design by treating each instruction in the ISA as a discrete, fully functional, pre-verified hardware block. It automatically builds a custom processor by stitching together the instruction hardware blocks required by an application or a set of applications in a specific domain. We generate RISSPs using the proposed methodology for three extreme edge applications, and for embedded applications from the Embench benchmark suite. When synthesized, RISSPs can achieve 8-to-43% reduction in area and 3-to-30% reduction in power compared to a processor supporting the full RISC-V ISA, and are also on average ~40 times more energy efficient than Serv, the world's smallest 32-bit RISC-V processor. When physically implemented as FlexICs, the three extreme edge RISSPs achieve up to 42% area and 21% power savings with respect to the full RISC-V processor.
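The stitching idea above can be sketched in a few lines. This is a hedged toy model, not the authors' toolchain: the opcode names, per-block area figures, and the application listing are all invented for illustration; the point is only that a subset processor pays for the blocks an application actually uses.

```python
# Toy sketch of instruction-subset stitching (illustrative names and
# area numbers, not values from the paper).

# Hypothetical per-instruction hardware blocks with made-up area costs (kGE).
BLOCK_AREA = {"add": 1.0, "sub": 1.0, "and": 0.6, "or": 0.6,
              "lw": 2.5, "sw": 2.0, "beq": 1.8, "jal": 1.5, "mul": 6.0}

def required_blocks(assembly):
    """Opcodes appearing in the application's assembly listing."""
    return {line.split()[0] for line in assembly if line.strip()}

# A tiny hypothetical application that uses only three opcodes.
app = ["add x1, x2, x3", "lw x4, 0(x1)", "beq x1, x4, done", "add x5, x5, x1"]

used = required_blocks(app)
area_rissp = sum(BLOCK_AREA[op] for op in used)   # only the stitched blocks
area_full = sum(BLOCK_AREA.values())              # full-ISA baseline
print(used, area_rissp, area_full)
```

In this toy example the subset processor instantiates 3 of 9 blocks, so its area is well below the full-ISA total; the paper's reported 8-to-43% savings come from real synthesis, not from this kind of back-of-the-envelope sum.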
This work presents Homomorphic Encryption Intermediate Representation (HEIR), a unified approach to building homomorphic encryption (HE) compilers. HEIR aims to support all mainstream techniques in homomorphic encryption, integrate with all major software libraries and hardware accelerators, and advance the field by providing a platform for research and benchmarking. Built on the MLIR compiler framework, HEIR introduces HE-specific abstraction layers at which existing optimizations and new research ideas may be easily implemented. Although many HE optimization techniques have been proposed, it remains difficult to combine or compare them effectively. HEIR provides a means to effectively explore the space of HE optimizations. HEIR addresses the entire HE stack and includes support for various frontends, including Python. The contributions of this work include: (1) We introduce HEIR as a framework for building HE compilers. (2) We validate HEIR's design by porting a large fraction of the HE literature to HEIR, and we argue that HEIR can tackle more complicated and diverse programs than prior literature. (3) We provide evidence that HEIR is emerging as the de facto HE compiler for academic research and industry development.
We describe stabilizer states and Clifford group operations using linear operations and quadratic forms over binary vector spaces. We show how the n-qubit Clifford group is isomorphic to a group with an operation that is defined in terms of a (2n+1)x(2n+1) binary matrix product and binary quadratic forms. As an application we give two schemes to efficiently decompose Clifford group operations into one- and two-qubit operations. We also show how the coefficients of stabilizer states and Clifford group operations in a standard basis expansion can be described by binary quadratic forms. Our results are useful for quantum error correction, entanglement distillation and possibly quantum computing.
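The binary picture referred to above can be made concrete with the standard symplectic representation: an n-qubit Pauli operator (up to phase) is a length-2n binary vector (x | z), two Paulis commute iff their symplectic inner product vanishes mod 2, and Clifford operations act as binary symplectic matrices that compose by matrix multiplication mod 2. This is an illustrative sketch of that well-known representation, not the paper's (2n+1)x(2n+1) construction, which additionally tracks phases via quadratic forms.

```python
import numpy as np

n = 2  # number of qubits

def symplectic_form(n):
    """Lambda = [[0, I], [I, 0]] over GF(2)."""
    I = np.eye(n, dtype=int)
    Z = np.zeros((n, n), dtype=int)
    return np.block([[Z, I], [I, Z]])

def commute(p, q, n):
    """Two Paulis commute iff their symplectic inner product is 0 mod 2."""
    return int(p @ symplectic_form(n) @ q) % 2 == 0

def is_symplectic(M, n):
    """Check M Lambda M^T = Lambda mod 2, i.e. M preserves commutation."""
    L = symplectic_form(n)
    return np.array_equal((M @ L @ M.T) % 2, L)

# CNOT (control qubit 0, target qubit 1) in the (x | z) convention:
CNOT = np.array([[1, 0, 0, 0],
                 [1, 1, 0, 0],
                 [0, 0, 1, 1],
                 [0, 0, 0, 1]])

# Hadamard on qubit 0 swaps x_0 and z_0:
H0 = np.eye(2 * n, dtype=int)
H0[[0, 2]] = H0[[2, 0]]

# X on qubit 0 anticommutes with Z on qubit 0 but commutes with Z on qubit 1.
X0 = np.array([1, 0, 0, 0]); Z0 = np.array([0, 0, 1, 0]); Z1 = np.array([0, 0, 0, 1])
assert not commute(X0, Z0, n) and commute(X0, Z1, n)

# Composing Cliffords is a binary matrix product, and the result stays symplectic.
assert is_symplectic(CNOT, n) and is_symplectic((CNOT @ H0) % 2, n)
```

The decomposition schemes in the paper operate on exactly this kind of matrix data, reducing a symplectic matrix to a product of matrices representing one- and two-qubit gates.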
We describe generalizations of the Pauli group, the Clifford group and stabilizer states for qudits in a Hilbert space of arbitrary dimension d. We examine a link with modular arithmetic, which yields an efficient way of representing the Pauli group and the Clifford group with matrices over the integers modulo d. We further show how a Clifford operation can be efficiently decomposed into one- and two-qudit operations. We also focus in detail on standard basis expansions of stabilizer states.
We show that all half-BPS Wilson loop operators in N=4 SYM -- which are labeled by Young tableaux -- have a gravitational dual description in terms of D5-branes or alternatively in terms of D3-branes in AdS_5 x S^5. We prove that the insertion of a half-BPS Wilson loop operator in the N=4 SYM path integral is achieved by integrating out the degrees of freedom on a configuration of bulk D5-branes or alternatively on a configuration of bulk D3-branes. The bulk D5-brane and D3-brane descriptions are related by bosonization.
The newly accessible mid-infrared (MIR) window offered by the James Webb Space Telescope (JWST) for exoplanet imaging is expected to provide valuable information to characterize their atmospheres. In particular, coronagraphs on board the JWST Mid-Infrared Instrument (MIRI) are capable of imaging the coldest directly imaged giant planets at the wavelengths where they emit most of their flux. The MIRI coronagraphs have been specially designed to detect the NH3 absorption around 10.5 microns, which has been predicted by atmospheric models. We aim to assess the presence of NH3 while refining the atmospheric parameters of GJ 504 b, one of the coldest directly imaged companions. Its mass is still a matter of debate: depending on the host star age estimate, the companion could be placed either in the brown dwarf regime or in the young Jovian planet regime. We present an analysis of MIRI coronagraphic observations of the GJ 504 system. We took advantage of previous observations of reference stars to build a library of images and to perform a more efficient subtraction of the stellar diffraction pattern. We detected NH3 at 12.5 sigma in the atmosphere, in line with atmospheric model expectations for a planetary-mass object and with what is observed in brown dwarfs within a similar temperature range. The best-fit model with Exo-REM provides updated values of its atmospheric parameters, yielding a temperature of Teff = 512 K and a radius of R = 1.08 RJup. These observations demonstrate the capability of MIRI coronagraphs to detect NH3 and to provide the first MIR observations of one of the coldest directly imaged companions. Overall, NH3 is a key molecule for characterizing the atmospheres of cold planets, offering valuable insights into their surface gravity. These observations provide valuable information for spectroscopic observations planned with JWST.
This paper compares the sensing performance of a narrowband near-field system across several practical antenna array geometries and SIMO/MISO and MIMO configurations. For identical transmit and receive apertures, MIMO processing is equivalent to squaring the near-field array factor, resulting in improved beamdepth and sidelobe level. Analytical derivations, supported by simulations, show that the MIMO processing improves the maximum near-field sensing range and resolution by approximately a factor of 1.4 compared to a single-aperture system. Using a quadratic approximation of the mainlobe of the array factor, an analytical improvement factor of \sqrt{2} is derived, validating the numerical results. Finally, MIMO is shown to improve the poor sidelobe performance observed in the near-field by a factor of two, due to squaring of the array factor.
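The effect of squaring the array factor can be checked numerically. This is a hedged sketch, not the paper's derivation: a generic sinc-shaped array factor stands in for the true near-field pattern, but the mechanism is the same, since the MIMO two-way power pattern is the pointwise square of the one-way pattern. Squaring narrows the half-power mainlobe by roughly \sqrt{2} and doubles the sidelobe attenuation in dB.

```python
import numpy as np

u = np.linspace(1e-6, 3.0, 300001)   # normalized offset from the focus
af = np.abs(np.sinc(u))              # one-way array factor (amplitude)

p_simo = af**2                       # one-way (SIMO) power pattern
p_mimo = af**4                       # two-way (MIMO) power pattern = square

def half_power_width(p, u):
    """First point where the power pattern drops below one half."""
    return u[np.argmax(p < 0.5)]

# Mainlobe narrowing factor: close to sqrt(2) ~ 1.41 for this pattern.
ratio = half_power_width(p_simo, u) / half_power_width(p_mimo, u)
print(f"mainlobe narrowing factor ~ {ratio:.2f}")

# Peak sidelobe between the first and second nulls: squaring the pattern
# exactly doubles the attenuation in dB.
side = (u > 1.0) & (u < 2.0)
sll_simo = 10 * np.log10(p_simo[side].max())
sll_mimo = 10 * np.log10(p_mimo[side].max())
print(f"sidelobe level: {sll_simo:.1f} dB (SIMO) vs {sll_mimo:.1f} dB (MIMO)")
```

The exact narrowing factor depends on the pattern shape: a Gaussian mainlobe gives \sqrt{2} exactly, while this sinc pattern gives a slightly smaller value, consistent with the ~1.4 figure quoted above.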
Using the blackfold approach, we study new classes of higher-dimensional rotating black holes with electric charges and string dipoles, in theories of gravity coupled to a 2-form or 3-form field strength and to a dilaton with arbitrary coupling. The method allows us to describe not only black holes with large angular momenta, but also other regimes that include charged black holes near extremality with slow rotation. We construct explicit examples of electric rotating black holes of dilatonic and non-dilatonic Einstein-Maxwell theory, with horizons of spherical and non-spherical topology. We also find new families of solutions with string dipoles, including a new class of prolate black rings. Whenever there are exact solutions that we can compare to, their properties in the appropriate regime are reproduced precisely by our solutions. The analysis of blackfolds with string charges requires the formulation of the dynamics of anisotropic fluids with conserved string-number currents, which is new, and is carried out in detail for perfect fluids. Finally, our results indicate new instabilities of near-extremal, slowly rotating charged black holes, and motivate conjectures about topological constraints on dipole hair.
This paper proposes a generalized passivity sensitivity analysis for power system stability studies. The method uncovers the most effective instability mitigation actions for both device-level and system-level investigations. The particular structure of the admittance and nodal models is exploited in the detailed derivation of the passivity sensitivity expressions. The proposed sensitivities are validated for different parameters at the device and system levels. Compared to previous stability and sensitivity methods, the proposed approach does not require detailed system information, such as exact system eigenvalues, while providing valuable information for a less conservative stable system design. In addition, we demonstrate how to utilize the proposed method through case studies with different converter controls and system-wide insights, showing its general applicability.
Recently, we have witnessed a rapid increase in the use of machine learning in self-adaptive systems. Machine learning has been used for a variety of reasons, ranging from learning a model of the environment of a system during operation to filtering large sets of possible configurations before analysing them. While a body of work on the use of machine learning in self-adaptive systems exists, there is currently no systematic overview of this area. Such an overview is important for researchers to understand the state of the art and direct future research efforts. This paper reports the results of a systematic literature review that aims at providing such an overview. We focus on self-adaptive systems that are based on a traditional Monitor-Analyze-Plan-Execute feedback loop (MAPE). The research questions are centred on the problems that motivate the use of machine learning in self-adaptive systems, the key engineering aspects of learning in self-adaptation, and open challenges. The search resulted in 6709 papers, of which 109 were retained for data collection. Analysis of the collected data shows that machine learning is mostly used for updating adaptation rules and policies to improve system qualities, and for managing resources to better balance qualities and resources. These problems are primarily solved using supervised and interactive learning, with classification, regression and reinforcement learning as the dominant methods. Surprisingly, unsupervised learning, which naturally fits automation, is only applied in a small number of studies. Key open challenges in this area include the performance of learning, managing the effects of learning, and dealing with more complex types of goals. From the insights derived from this systematic literature review, we outline an initial design process for applying machine learning in self-adaptive systems that are based on MAPE feedback loops.
Very low-mass stars (those <0.3 solar masses) host orbiting terrestrial planets more frequently than other types of stars, but the compositions of those planets are largely unknown. We use mid-infrared spectroscopy with the James Webb Space Telescope to investigate the chemical composition of the planet-forming disk around ISO-ChaI 147, a 0.11 solar-mass star. The inner disk has a carbon-rich chemistry: we identify emission from 13 carbon-bearing molecules including ethane and benzene. We derive large column densities of hydrocarbons indicating that we probe deep into the disk. The high carbon to oxygen ratio we infer indicates radial transport of material within the disk, which we predict would affect the bulk composition of any planets forming in the disk.
Many natural processes exhibit power-law behavior. The power-law exponent is linked to the underlying physical process and therefore its precise value is of interest. With respect to the energy content of nanoflares, for example, a power-law exponent steeper than 2 is believed to be a necessary condition to solve the enigmatic coronal heating problem. Studying power-law distributions over several orders of magnitudes requires sufficient data and appropriate methodology. In this paper we demonstrate the shortcomings of some popular methods in solar physics that are applied to data of typical sample sizes. We use synthetic data to study the effect of the sample size on the performance of different estimation methods and show that vast amounts of data are needed to obtain a reliable result with graphical methods (where the power-law exponent is estimated by a linear fit on a log-transformed histogram of the data). We revisit published results on power laws for the angular width of solar coronal mass ejections and the radiative losses of nanoflares. We demonstrate the benefits of the maximum likelihood estimator and advocate its use.
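The contrast between the two estimation approaches can be reproduced on synthetic data. This is a hedged illustration, not the paper's experiment: the generator, exponent, and sample size below are chosen for demonstration. The maximum likelihood (Hill-type) estimator works directly on the raw samples, while the graphical method fits a line to a log-transformed, logarithmically binned histogram and is far more sensitive to binning and sample size.

```python
import numpy as np

rng = np.random.default_rng(0)
alpha_true, xmin, n = 2.0, 1.0, 5000

# Inverse-transform sampling from p(x) ~ x^(-alpha) for x >= xmin.
x = xmin * (1.0 - rng.random(n)) ** (-1.0 / (alpha_true - 1.0))

# Maximum likelihood estimator for a continuous power law:
# alpha_hat = 1 + n / sum(log(x / xmin)).
alpha_mle = 1.0 + n / np.sum(np.log(x / xmin))

# "Graphical" estimator: slope of a linear fit to the log-transformed,
# logarithmically binned density; its slope estimates -alpha.
edges = np.logspace(0.0, np.log10(x.max()), 30)
counts, _ = np.histogram(x, bins=edges)
density = counts / np.diff(edges)          # correct for unequal bin widths
centers = np.sqrt(edges[:-1] * edges[1:])
mask = counts >= 5                         # drop noisy, sparsely filled bins
slope, _ = np.polyfit(np.log10(centers[mask]), np.log10(density[mask]), 1)
alpha_graph = -slope

print(f"MLE: {alpha_mle:.3f}, graphical: {alpha_graph:.3f}")
```

The MLE lands within a few percent of the true exponent at this sample size, while the graphical estimate fluctuates with the choice of bins and the cut on sparsely filled bins, which is the shortcoming the paper quantifies.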
We compute two-point functions of chiral operators Tr(\Phi^k) for any k, in N=4 supersymmetric SU(N) Yang-Mills theory. We find that up to order g^4 the perturbative corrections to the correlators vanish for all N. The cancellation occurs in a highly non-trivial way, due to a complicated interplay between planar and non-planar diagrams. In complete generality, we show that this same result is valid for any simple gauge group. Contact-term contributions signal the presence of ultraviolet divergences. They are arbitrary at tree level, but the absence of perturbative renormalization in the non-singular part of the correlators allows us to compute them unambiguously at higher orders. In the spirit of the AdS/CFT correspondence, we comment on their relation to infrared singularities in the supergravity sector.
Robot world model representations are a vital part of robotic applications. However, there is no support for such representations in model-driven engineering tool chains. This work proposes a novel Domain Specific Language (DSL) for robotic world models that are based on the Robot Scene Graph (RSG) approach. The RSG-DSL can express (a) application specific scene configurations, (b) semantic scene structures and (c) inputs and outputs for the computational entities that are loaded into an instance of a world model.
We developed and validated TRisk, a Transformer-based AI model predicting 36-month mortality in heart failure patients by analysing temporal patient journeys from UK electronic health records (EHR). Our study included 403,534 heart failure patients (ages 40-90) from 1,418 English general practices, with 1,063 practices for model derivation and 355 for external validation. TRisk was compared against the MAGGIC-EHR model across various patient subgroups. With a median follow-up of 9 months, TRisk achieved a concordance index of 0.845 (95% confidence interval: [0.841, 0.849]), significantly outperforming MAGGIC-EHR's 0.728 (0.723, 0.733) for predicting 36-month all-cause mortality. TRisk showed more consistent performance across sex, age, and baseline characteristics, suggesting less bias. We successfully adapted TRisk to US hospital data through transfer learning, achieving a C-index of 0.802 (0.789, 0.816) with 21,767 patients. Explainability analyses revealed that TRisk captured established risk factors while identifying underappreciated predictors, such as cancers and hepatic failure, that were important across both cohorts. Notably, cancers maintained strong prognostic value even a decade after diagnosis. TRisk demonstrated well-calibrated mortality prediction across both healthcare systems. Our findings highlight the value of tracking longitudinal health profiles and reveal risk factors not included in previous expert-driven models.
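The concordance index quoted above measures how often a model ranks patient risks in the right order under right-censoring. The following is a minimal sketch of Harrell's C-index for illustration, not the authors' implementation (production code would handle tied times and use an efficient pairwise algorithm):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Fraction of comparable pairs whose predicted risks order correctly.

    A pair (i, j) is comparable if i's time is earlier and i's event was
    observed (event[i] == 1). Higher predicted risk for the earlier event
    counts as concordant; tied risks count one half.
    """
    time, event, risk = map(np.asarray, (time, event, risk))
    num = den = 0.0
    for i in range(len(time)):
        if not event[i]:
            continue                        # censored: not usable as the earlier case
        for j in range(len(time)):
            if time[i] < time[j]:           # i failed first, j still at risk
                den += 1
                if risk[i] > risk[j]:
                    num += 1
                elif risk[i] == risk[j]:
                    num += 0.5
    return num / den

# Toy example: risks that perfectly reverse the survival times give C = 1,
# while constant risks give the chance level of 0.5.
t, e = [2, 4, 6, 8], [1, 1, 0, 1]
print(concordance_index(t, e, [4, 3, 2, 1]))
print(concordance_index(t, e, [1, 1, 1, 1]))
```

A C-index of 0.845, as reported for TRisk, therefore means roughly 84.5% of comparable patient pairs were ranked correctly by predicted risk.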
We investigate a family of radially symmetric Coulomb gas systems at inverse temperature \beta = 2. The family is characterised by the property that the density of the equilibrium measure vanishes on a ring at radius r_*, which lies strictly inside the droplet. The large-n expansion of the logarithm of the partition function is obtained up to a novel n^{1/4} term. We perform a double scaling limit of the correlation kernel at the n^{1/4} scale and obtain a new limiting kernel in the bulk, which differs from the well-known Ginibre kernel.
Classification is a well-studied machine learning task which concerns the assignment of instances to a set of outcomes. Classification models support the optimization of managerial decision-making across a variety of operational business processes. For instance, customer churn prediction models are adopted to increase the efficiency of retention campaigns by optimizing the selection of customers that are to be targeted. Cost-sensitive and causal classification methods have independently been proposed to improve the performance of classification models. The former considers the benefits and costs of correct and incorrect classifications, such as the benefit of a retained customer, whereas the latter estimates the causal effect of an action, such as a retention campaign, on the outcome of interest. This study integrates cost-sensitive and causal classification by elaborating a unifying evaluation framework. The framework encompasses a range of existing and novel performance measures for evaluating both causal and conventional classification models in a cost-sensitive as well as a cost-insensitive manner. We prove that conventional classification is a specific case of causal classification in terms of a range of performance measures when the number of actions is equal to one. The framework is shown to instantiate to application-specific cost-sensitive performance measures that have recently been proposed for evaluating customer retention and response uplift models, and allows one to maximize profitability when adopting a causal classification model for optimizing decision-making. The proposed framework paves the way toward the development of cost-sensitive causal learning methods and opens a range of opportunities for improving data-driven business decision-making.
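The retention-campaign setting above combines the two ingredients in a single decision rule: the causal quantity is the uplift (the reduction in churn probability caused by targeting), and the cost-sensitive quantity is the value of that reduction net of the contact cost. The sketch below is a hedged illustration with invented numbers, not a measure from the paper's framework:

```python
import numpy as np

def expected_profit(p_churn_if_targeted, p_churn_if_not, clv, contact_cost):
    """Expected profit of targeting one customer in a retention campaign.

    uplift = reduction in churn probability caused by the action, valued
    at the customer lifetime value (CLV), minus the cost of the action.
    """
    uplift = p_churn_if_not - p_churn_if_targeted
    return uplift * clv - contact_cost

# Illustrative churn probabilities for three customers, with and without
# the retention action (hypothetical model outputs).
p_not = np.array([0.60, 0.30, 0.10])
p_yes = np.array([0.40, 0.25, 0.09])

profit = expected_profit(p_yes, p_not, clv=200.0, contact_cost=10.0)
target = profit > 0   # target only customers with positive expected profit
print(profit, target)
```

With one action and equal treatment and control probabilities, the uplift term collapses and the rule reduces to conventional cost-sensitive classification, which mirrors the paper's result that conventional classification is the single-action special case.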
The embedded phase is a crucial period in the development of a young star. Mid-IR observations, now possible with JWST with unprecedented sensitivity, spectral resolution and sharpness, are key for probing many physical and chemical processes on sub-arcsecond scales. JOYS addresses a wide variety of questions, from protostellar accretion and the nature of primeval jets, winds and outflows, to the chemistry of gas and ice, and the characteristics of embedded disks. We introduce the program and show representative results. MIRI-MRS data of 17 low-mass and 6 high-mass protostars show a wide variety of features. Atomic line maps differ among refractory (e.g., Fe), semi-refractory (e.g., S) and volatile elements (e.g., Ne), linked to their different levels of depletion and local (shock) conditions. Nested, stratified jet structures consisting of an inner ionized core seen in [Fe II] with an outer H2 layer are commonly seen. Wide-angle winds are found in low-J H2 lines. [S I] follows the jet in the youngest protostars, but is concentrated on source when more evolved. [Ne II] reveals a mix of jet shock and photoionized emission. H I lines measure accretion, but are also associated with jets. Molecular emission (CO2, C2H2, HCN, H2O, ...) is cool compared with disks, and likely associated with hot cores. Deep ice absorption features reveal not just the major ice components but also ions (as part of salts) and complex organic molecules, with comparable abundances from low- to high-mass sources. A second detection of HDO ice in a solar-mass source is presented with HDO/H2O ~ 0.4%, providing a link with disks and comets. A deep search for solid O2 suggests it is not a significant oxygen reservoir. Only few embedded Class I disks show the same forest of water lines as Class II disks do, perhaps due to significant dust extinction of the upper layers [abridged].