Institute of Astrophysics, Foundation for Research and Technology-Hellas (FORTH)
We propose an O(100)m Atom Interferometer (AI) experiment -- AICE -- to be installed against a wall of the PX46 access shaft to the LHC. This experiment would probe unexplored ranges of the possible couplings of bosonic ultralight dark matter (ULDM) to atomic constituents and undertake a pioneering search for gravitational waves (GWs) at frequencies intermediate between those to which existing and planned experiments are sensitive, among other fundamental physics studies. A conceptual feasibility study showed that this AI experiment could be isolated from the LHC by installing a shielding wall in the TX46 gallery, and surveyed issues related to the proximity of the LHC machine, finding no technical obstacles. A detailed technical implementation study has shown that the preparatory civil-engineering work, installation of bespoke radiation shielding, deployment of access-control systems and safety alarms, and installation of an elevator platform could be carried out during LS3, allowing installation and operation of the AICE detector to proceed during Run 4 without impacting HL-LHC operation. These studies have established that PX46 is a uniquely promising location for an AI experiment. We foresee that, if the CERN management encourages this Letter of Intent, a significant fraction of the Terrestrial Very Long Baseline Atom Interferometer (TVLBAI) Proto-Collaboration may wish to contribute to AICE.
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI towards clinical practice.
Researchers evaluated how different weak-lensing mass-mapping algorithms affect cosmological parameter inference when using peak counts, finding that the advanced MCALens method significantly improves constraining power by up to 296% compared to linear methods, particularly with multi-scale analysis.
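For context, here is a minimal sketch of the peak-count statistic that such comparisons feed into: local maxima of a convergence map are histogrammed in signal-to-noise bins. The map, noise level, and binning below are illustrative placeholders, not the study's configuration.

```python
import numpy as np
from scipy.ndimage import maximum_filter

def peak_counts(kappa, sigma_noise, bins):
    """Histogram of local maxima of a convergence map in S/N units.

    kappa: 2D convergence (mass) map; sigma_noise: noise std used to
    express peak heights as S/N; bins: S/N bin edges.
    """
    snr = kappa / sigma_noise
    # A pixel is a peak if it equals the maximum of its 3x3 neighbourhood.
    is_peak = (snr == maximum_filter(snr, size=3))
    counts, _ = np.histogram(snr[is_peak], bins=bins)
    return counts

# Toy example: Gaussian random map, peaks between S/N = 1 and 5.
rng = np.random.default_rng(0)
kappa = rng.normal(0.0, 0.02, size=(256, 256))
print(peak_counts(kappa, 0.02, np.linspace(1, 5, 9)))
```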
We present a large spectroscopic survey with \textit{JWST}'s Mid-Infrared Instrument (MIRI) Low Resolution Spectrometer (LRS) targeting 37 infrared-bright galaxies between $z=0.65-2.46$ with infrared luminosities $\log L_{\rm IR}/L_\odot>11.5$ and $\log M_*/M_\odot=10-11.5$. Targets were taken from a \textit{Spitzer} $24\,\mu$m-selected sample with archival spectroscopy from the Infrared Spectrograph (IRS) and include a mix of star-forming galaxies and dust-obscured AGN. By combining IRS with the increased sensitivity of LRS, we expand the range of spectral features observed between $5-30\,\mu$m for every galaxy in our sample. In this paper, we outline the sample selection, \textit{JWST} data reduction, 1D spectral extraction, and polycyclic aromatic hydrocarbon (PAH) feature measurements from $\lambda_{\rm rest}=3.3-11.2\,\mu$m. In the \textit{JWST} spectra, we detect PAH emission features at $3.3-5.3\,\mu$m, as well as Paschen and Brackett lines. The $3.3\,\mu$m feature can be as bright as $1\%$ of the $8-1000\,\mu$m infrared luminosity and exhibits a tight correlation with the dust-obscured star-formation rate. We detect absorption features from CO gas, CO$_2$ ice, H$_2$O ice, and aliphatic dust. From the joint \textit{JWST} and \textit{Spitzer} analysis we find that the $11.3/3.3\,\mu$m PAH ratios are on average three times higher than those of local luminous infrared galaxies. This is interpreted as evidence that the PAH grains are larger at $z\sim1-2$. The size distribution may be affected by coagulation of grains due to high gas densities and low temperatures. These conditions are supported by the observation of strong water ice absorption at $3.05\,\mu$m, and can lower stellar radiative feedback as large PAHs transmit less energy per photon into the interstellar medium.
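As a hedged illustration of the kind of feature measurement outlined above: PAH strengths are typically measured by fitting a local continuum and integrating the excess. Real pipelines fit Drude profiles plus a multi-component continuum; the sketch below shows only the simpler linear-continuum idea, and the window boundaries are made-up values, not the paper's.

```python
import numpy as np

def feature_flux(wave, flux, band=(3.25, 3.36),
                 anchors=((3.10, 3.20), (3.42, 3.52))):
    """Integrate a spectral feature above a linear local continuum.

    wave: micron; flux: any per-wavelength unit. The continuum is a
    straight line fit through two assumed feature-free anchor windows.
    """
    in_anchor = ((wave > anchors[0][0]) & (wave < anchors[0][1])) | \
                ((wave > anchors[1][0]) & (wave < anchors[1][1]))
    slope, intercept = np.polyfit(wave[in_anchor], flux[in_anchor], 1)
    sel = (wave > band[0]) & (wave < band[1])
    excess = flux[sel] - (slope * wave[sel] + intercept)
    # Trapezoidal integration of the continuum-subtracted feature.
    return np.sum(0.5 * (excess[1:] + excess[:-1]) * np.diff(wave[sel]))
```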
Over the past decade, the IceCube Neutrino Observatory has detected a few hundred high-energy (HE) neutrinos from cosmic sources. Despite numerous studies searching for their origin, it is still not known which source populations emit them. A few confident individual associations exist with active galactic nuclei (AGN), mostly with blazars, which are jetted AGN whose jet points in our direction. Nonetheless, on a population level, blazar-neutrino correlation strengths are rather weak. This could mean that blazars as a population do not emit HE neutrinos, or that the detection power of the tests is insufficient due to the strong atmospheric neutrino background. By assuming an increase in HE neutrino emission during major blazar flares, in our previous studies we leveraged the arrival time of the neutrinos to boost the detection power. In this paper we utilize the same principle while substantially increasing the number of blazars. We search for the spatio-temporal correlation of 356 IceCube HE neutrinos with major optical flares of 3225 radio- and 3814 $\gamma$-ray-selected blazars. We find that, despite the increase in data size, the number of confident spatio-temporal associations remains low and the overall correlation strengths weak. Two individual associations drive our strongest, and the only $>2\sigma$ post-trial, spatio-temporal correlation, occurring with the BL Lac objects of the radio-selected blazar sample. We estimate that $\lesssim 8\%$ of the detected cosmic neutrinos were emitted by blazars during major optical flares. As a complementary analysis, we compare the synchrotron peak frequency, redshift, Doppler factor, X-ray brightness, and optical variability of spatially neutrino-associated blazars to those of the general blazar population. We find that spatially neutrino-associated blazars of the tested samples have higher than average Doppler factors and X-ray brightness.
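As a rough illustration of the counting behind such a spatio-temporal test (a toy version, not the paper's likelihood analysis; the variable names and the time-scrambling significance recipe in the comment are illustrative):

```python
import numpy as np

def ang_sep(ra1, dec1, ra2, dec2):
    """Great-circle separation (radians); all inputs in radians."""
    return np.arccos(np.clip(np.sin(dec1) * np.sin(dec2) +
                             np.cos(dec1) * np.cos(dec2) * np.cos(ra1 - ra2),
                             -1.0, 1.0))

def n_associations(nu_ra, nu_dec, nu_t, nu_err, src_ra, src_dec, flares):
    """Count neutrinos that are both spatially and temporally coincident.

    nu_err: per-event angular uncertainty radius (radians);
    flares: per-source list of (t_start, t_end) major-flare windows.
    """
    n = 0
    for ra, dec, t, err in zip(nu_ra, nu_dec, nu_t, nu_err):
        for j in range(len(src_ra)):
            if ang_sep(ra, dec, src_ra[j], src_dec[j]) < err and \
               any(t0 <= t <= t1 for t0, t1 in flares[j]):
                n += 1
                break
    return n

# Significance would come from comparing to counts with time-scrambled
# neutrinos, which preserves spatial matches but breaks the flare timing.
```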
Since the celebrated PPAD-completeness result for Nash equilibria in bimatrix games, a long line of research has focused on polynomial-time algorithms that compute $\varepsilon$-approximate Nash equilibria. Finding the best approximation guarantee achievable in polynomial time has been a fundamental and non-trivial pursuit toward settling the complexity of approximate equilibria. Despite a significant amount of effort, the algorithm of Tsaknakis and Spirakis, with an approximation guarantee of $(0.3393+\delta)$, has remained the state of the art for the last 15 years. In this paper, we propose a new refinement of the Tsaknakis-Spirakis algorithm, resulting in a polynomial-time algorithm that computes a $(\frac{1}{3}+\delta)$-Nash equilibrium, for any constant $\delta>0$. The main idea of our approach is to go beyond the use of convex combinations of primal and dual strategies, as defined in the optimization framework of Tsaknakis and Spirakis, and to enrich the pool of strategies from which we build the strategy profiles that we output in certain bottleneck cases of the algorithm.
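As a hedged illustration of the objective such algorithms drive down (this is the standard approximation measure, not the Tsaknakis-Spirakis descent itself), here is how one checks the guarantee of a candidate profile:

```python
import numpy as np

def eps_of(R, C, x, y):
    """Approximation quality of a mixed profile (x, y) in a bimatrix game
    with row payoffs R and column payoffs C: the larger of the two players'
    regrets. An exact Nash equilibrium has eps_of(R, C, x, y) == 0."""
    reg_row = np.max(R @ y) - x @ R @ y   # row player's incentive to deviate
    reg_col = np.max(x @ C) - x @ C @ y   # column player's incentive to deviate
    return max(reg_row, reg_col)

# Matching pennies: the uniform profile is an exact equilibrium (eps = 0).
R = np.array([[1.0, -1.0], [-1.0, 1.0]])
C = -R
print(eps_of(R, C, np.array([0.5, 0.5]), np.array([0.5, 0.5])))
```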
This paper presents new and alarming trends in the field of child sexual abuse through imagery, as part of SafeLine's research activities in the fields of cybercrime, child sexual abuse material (CSAM) and the protection of children's rights to safe online experiences. It focuses primarily on the phenomenon of AI-generated CSAM, the sophisticated methods for its production discussed in dark web forums, and the crucial role that open-source AI models play in the evolution of this overwhelming phenomenon. The paper's main contribution is a correlation analysis between the hotline's reports and domain names identified in dark web forums, where users' discussions focus on exchanging information specifically related to the generation of AI-CSAM. The objective was to reveal the close connection between clearnet and dark web content, which was accomplished through the use of the ATLAS dataset of the Voyager system. Furthermore, through the analysis of a set of posts extracted from the above dataset, valuable conclusions are drawn on the techniques forum members employ to produce AI-generated CSAM, along with users' views on this type of content and the routes they follow to overcome technological barriers set up to prevent malicious use. As the ultimate contribution of this research, an overview is presented of the current legislative developments in all member countries of the INHOPE organization and of the issues arising in the process of regulating AI-CSAM, shedding light on the legal challenges of regulating and limiting the phenomenon.
Today, using multiple heterogeneous accelerators efficiently from applications and high-level frameworks, such as TensorFlow and Caffe, poses significant challenges in three respects: (a) sharing accelerators, (b) allocating available resources elastically during application execution, and (c) reducing the required programming effort. In this paper, we present Arax, a runtime system that decouples applications from heterogeneous accelerators within a server. First, Arax maps application tasks dynamically to available resources, managing all required task state, memory allocations, and task dependencies. As a result, Arax can share accelerators across applications in a server and adjust the resources used by each application as load fluctuates over time. Additionally, Arax offers a simple API and includes Autotalk, a stub generator that automatically generates stub libraries for applications already written for specific accelerator types, such as NVIDIA GPUs. Consequently, Arax applications are written once without considering physical details, including the number and type of accelerators. Our results show that applications, such as Caffe, TensorFlow, and Rodinia, can run using Arax with minimal effort and low overhead compared to native execution (about 12%, geometric mean). Arax supports efficient accelerator sharing by offering up to 20% improved execution times compared to NVIDIA MPS, which supports NVIDIA GPUs only. Arax can transparently provide elasticity, decreasing total application turn-around time by up to 2x compared to native execution without elasticity support.
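As a toy illustration of the decoupling idea (all names are hypothetical; this is not Arax's API): a runtime that owns placement can greedily map incoming tasks to the least-loaded device, so applications never name physical accelerators.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Accelerator:
    load: float                      # queued work; the heap's ordering key
    name: str = field(compare=False)

def assign(tasks, accelerators):
    """Greedy dynamic mapping of (task, cost) pairs to the least-loaded
    accelerator. A stand-in for a runtime that owns task state and
    placement, so applications stay device-agnostic."""
    heap = list(accelerators)
    heapq.heapify(heap)
    placement = {}
    for task, cost in tasks:
        acc = heapq.heappop(heap)    # least-loaded device right now
        placement[task] = acc.name
        acc.load += cost             # elastic: load shifts over time
        heapq.heappush(heap, acc)
    return placement

print(assign([("conv1", 3.0), ("fc", 1.0), ("conv2", 3.0)],
             [Accelerator(0.0, "gpu0"), Accelerator(0.0, "gpu1")]))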
This paper presents a light-emitting reconfigurable intelligent surface (LeRIS) architecture that integrates vertical cavity surface emitting lasers (VCSELs) to jointly support user localization, obstacle-aware mapping, and millimeter-wave (mmWave) communication in programmable wireless environments (PWEs). Unlike prior light-emitting diode (LED)-based LeRIS designs with diffuse emission or LiDAR-assisted schemes requiring bulky sensing modules, the proposed VCSEL-based approach exploits narrow Gaussian beams and multimode diversity to enable compact, low-power, and analytically tractable integration. We derive closed-form expressions to jointly recover user position and orientation from received signal strength using only five VCSELs, and reduce this requirement to three under specific geometric conditions by leveraging dual-mode operation. In parallel, we introduce a VCSEL-based mapping method that uses reflected signal time-of-arrival measurements to detect obstructions and guide blockage-resilient RIS beam routing. Simulation results demonstrate millimeter-level localization accuracy, robust obstacle detection, high spectral efficiency, and substantial gains in minimum user rate. These findings establish VCSEL-based LeRIS as a scalable and practically integrable enabler for resilient 6G wireless systems with multi-functional PWEs.
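A hedged numerical sketch of the RSS-based recovery idea: the paper derives closed-form expressions, whereas below an illustrative Gaussian-beam power model is simply inverted by nonlinear least squares from five readings. The beam parameters and geometry are toy values chosen for numerical convenience, not a physical VCSEL design.

```python
import numpy as np
from scipy.optimize import least_squares

W0, ZR, A = 0.01, 0.05, 1.0   # waist (m), Rayleigh range (m), power scale (toy)

def rss(p_rx, p_tx):
    """Received power from a downward-pointing Gaussian beam (toy model)."""
    dx, dy, dz = p_rx - p_tx
    z = abs(dz)
    w2 = W0**2 * (1 + (z / ZR) ** 2)      # beam radius squared at range z
    r2 = dx**2 + dy**2                    # lateral offset squared
    return A / w2 * np.exp(-2 * r2 / w2)

# Five ceiling-mounted emitters and a receiver on the floor plane.
tx = np.array([[0, 0, 3], [1, 0, 3], [0, 1, 3], [1, 1, 3], [0.5, 0.5, 3]], float)
true_rx = np.array([0.4, 0.7, 0.0])
meas = np.array([rss(true_rx, t) for t in tx])

fit = least_squares(lambda p: np.array([rss(p, t) for t in tx]) - meas,
                    x0=np.array([0.0, 0.0, 0.5]))
print(fit.x)   # should land near (0.4, 0.7, 0.0) for this toy setup
```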
Networked Music Performance (NMP) systems involve musicians located in different places who perform music while staying synchronized via the Internet. The maximum end-to-end delay in NMP applications, called the Ensemble Performance Threshold (EPT), should be less than 25 milliseconds. Due to this constraint, NMPs require ultra-low-delay solutions for audio coding, network transmission, relaying, and decoding, each a challenging task on its own. Related work on NMP systems follows two directions of study. From the audio perspective, researchers experiment with low-delay encoders and transmission patterns, aiming to reduce the processing delay of the audio transmission, but they ignore network performance. On the other hand, network-oriented researchers try to reduce the network delay, which contributes to reduced end-to-end delay. In our proposed approach, we introduce an integration of dynamic audio and network configuration to satisfy the EPT constraint. The basic idea is that the major components participating in an NMP system, the application and the network, interact during the live music performance. As the network delay increases, the network tries to counteract it by modifying the routing behavior using Software Defined Networking principles. If the network delay exceeds a maximum affordable threshold, the network reacts by informing the application to change its audio processing pattern to overcome the delay increase, resulting in an end-to-end delay below the EPT. A full prototype of the proposed system was implemented and extensively evaluated in an emulated environment.
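A minimal sketch of the two-tier adaptation logic described above. The thresholds and the two callbacks are placeholders, not the prototype's actual interfaces: the network first tries to absorb a delay increase by rerouting, and only hands the problem to the application when its own budget is exhausted.

```python
EPT_MS = 25.0        # ensemble performance threshold from the abstract
NET_MAX_MS = 18.0    # illustrative: largest delay the network may absorb

def control_step(net_delay_ms, proc_delay_ms, reroute, shrink_audio):
    """One step of the joint network/application adaptation loop.

    reroute(): ask the SDN controller for a lower-latency path.
    shrink_audio(): switch the codec to a lower-delay processing pattern.
    Both callbacks stand in for the mechanisms in the paper.
    """
    total = net_delay_ms + proc_delay_ms
    if total <= EPT_MS:
        return "ok"
    if net_delay_ms <= NET_MAX_MS:
        reroute()            # network tries to absorb the increase first
        return "rerouted"
    shrink_audio()           # network out of budget: notify the application
    return "audio-reconfigured"

print(control_step(20.0, 8.0, lambda: None, lambda: None))  # audio-reconfigured
```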
We experimentally generate optical tornado waves using spatial multiplexing on a single phase-modulation device. In their focal region, the intensity pattern outlines a spiral of decreasing radius and pitch. We examine the propagation dynamics of these novel waves and reveal the key factors that lead to angular acceleration. Moreover, we propose a two-color scheme that makes it possible to generate dynamically twisting light, an optical analog of a drill, that can rotate at THz frequencies.
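As a hedged illustration of why a two-color scheme can rotate at THz rates (this shows only the generic petal-rotation principle for two superposed vortices, not the tornado-wave generation itself; charges and detuning are arbitrary): two fields with topological charges $\ell_1, \ell_2$ detuned by $\Delta\omega$ give an intensity pattern of $|\ell_2-\ell_1|$ lobes rotating at $\Omega = \Delta\omega/(\ell_2-\ell_1)$.

```python
import numpy as np

l1, l2 = 1, 3                 # topological charges of the two colors
dw = 2 * np.pi * 1e12         # 1 THz frequency offset between them

def intensity(phi, t):
    """Two equal-amplitude co-propagating vortex fields: the azimuthal beat
    gives |l2 - l1| petals rotating at Omega = dw / (l2 - l1)."""
    field = np.exp(1j * l1 * phi) + np.exp(1j * (l2 * phi - dw * t))
    return np.abs(field) ** 2

phi = np.linspace(0, 2 * np.pi, 360, endpoint=False)
t = 0.3e-12
shift = dw * t / (l2 - l1)    # angle the pattern has rotated by time t
# The pattern at time t equals the t=0 pattern rotated by `shift`:
print(np.allclose(intensity(phi, t), intensity((phi - shift) % (2 * np.pi), 0)))
```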
Over the preceding decades, human gait analysis has been a center of attention for the scientific community, and the association between gait analysis and overall health monitoring has been extensively reported. Technological advances have further assisted this alignment, providing access to inexpensive and remote healthcare services. Various assessment tools that employ sensors to monitor human gait, such as software platforms and mobile applications, have been proposed by the scientific community and the market, for purposes ranging from biomechanics to the progression of functional recovery. The framework presented herein offers a valuable digital biomarker for diagnosing and monitoring Parkinson's disease that can help clinical experts in the decision-making process leading to corrective planning or patient-specific treatment. More accurate and reliable decisions can be supported through a wide variety of integrated Artificial Intelligence algorithms and straightforward visualization techniques, including, but not limited to, heatmaps and bar plots. The framework consists of three core components: the insole pair, the mobile application, and the cloud-based platform. The insole pair deploys 16 plantar pressure sensors, an accelerometer, and a gyroscope to acquire gait data. The mobile application prepares the data for the cloud platform, which orchestrates the component interaction through the web application. Utilizing open communication protocols enables the straightforward replacement of one of the core components with an equivalent one (e.g., a different model of insoles), transparently to the end user and without affecting the overall architecture, resulting in a framework with the flexibility to adjust its modularity.
Enhanced emission from the dense gas tracer HCN (relative to HCO$^+$) has been proposed as a signature of active galactic nuclei (AGN). In a previous single-dish millimeter line survey we identified galaxies with HCN/HCO$^+$ (1-0) intensity ratios consistent with those of many AGN but whose mid-infrared spectral diagnostics are consistent with little to no ($\lesssim 15\%$) contribution of an AGN to the bolometric luminosity. To search for putative heavily obscured AGN, we present and analyze \textit{NuSTAR} hard X-ray (3-79 keV) observations of four such galaxies from the Great Observatories All-sky LIRG Survey. We find no X-ray evidence for AGN in three of the systems and place strong upper limits on the energetic contribution of any heavily obscured ($N_{\rm H}>10^{24}$ cm$^{-2}$) AGN to their bolometric luminosity. The X-ray flux upper limits are presently an order of magnitude below what XDR-driven chemistry models predict is necessary to drive HCN enhancements. In a fourth system we find a hard X-ray excess consistent with the presence of an AGN, but contributing only $\sim3\%$ of the bolometric luminosity. It is also unclear whether the AGN is spatially associated with the HCN enhancement. We further explore the relationship between HCN/HCO$^+$ (for several $\mathrm{J}_\mathrm{upper}$ levels) and $L_\mathrm{AGN}/L_\mathrm{IR}$ for a larger sample of systems in the literature. We find no evidence for correlations between the line ratios and the AGN fraction derived from X-rays, indicating that HCN/HCO$^+$ intensity ratios are not driven by the energetic dominance of AGN, nor are they reliable indicators of whether SMBH accretion is ongoing.
Context: Weak gravitational lensing is a key cosmological probe for current and future large-scale surveys. While power spectra are commonly used for analyses, they fail to capture non-Gaussian information from nonlinear structure formation, necessitating higher-order statistics and methods for efficient map generation. Aims: To develop an emulator that generates accurate convergence maps directly from an input power spectrum and wavelet $\ell_1$-norm without relying on computationally intensive simulations. Methods: We use either numerical or theoretical predictions to construct convergence maps by iteratively adjusting wavelet coefficients to match target marginal distributions and their inter-scale dependencies, incorporating higher-order statistical information. Results: The resulting $\kappa$ maps accurately reproduce the input power spectrum and exhibit higher-order statistical properties consistent with the input predictions, providing an efficient tool for weak lensing analyses.
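A simplified sketch of the iterative adjustment loop, under stated assumptions: the inputs are a target Fourier amplitude map built from the power spectrum and per-sub-band samples of the desired coefficient marginals, and the actual method additionally matches inter-scale dependencies, which this sketch omits.

```python
import numpy as np
import pywt

def match(values, target):
    """Rank-order remap `values` onto the empirical distribution of `target`."""
    order = np.argsort(values.ravel())
    out = np.empty(values.size)
    out[order] = np.sort(np.random.choice(target, values.size))
    return out.reshape(values.shape)

def emulate(shape, pk_amplitude, coeff_targets, n_iter=20,
            wavelet="db3", levels=4):
    """Alternate between enforcing a target Fourier amplitude map and target
    marginals for each wavelet sub-band. pk_amplitude: |FT| built from the
    input power spectrum (same shape as the map); coeff_targets[(lev, band)]:
    samples of the desired marginal for each detail sub-band."""
    kappa = np.random.normal(size=shape)
    for _ in range(n_iter):
        # 1) impose the power spectrum in Fourier space, keeping phases
        ft = np.fft.fft2(kappa)
        ft = pk_amplitude * np.exp(1j * np.angle(ft))
        kappa = np.fft.ifft2(ft).real
        # 2) impose the marginal distribution of every wavelet sub-band
        coeffs = pywt.wavedec2(kappa, wavelet, level=levels)
        new = [coeffs[0]]
        for lev, bands in enumerate(coeffs[1:]):
            new.append(tuple(match(b, coeff_targets[(lev, k)])
                             for k, b in enumerate(bands)))
        kappa = pywt.waverec2(new, wavelet)[:shape[0], :shape[1]]
    return kappa
```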
Key-value (KV) separation is a technique that introduces randomness in the I/O access patterns to reduce I/O amplification in LSM-based key-value stores for fast storage devices (NVMe). KV separation has a significant drawback that makes it less attractive: delete and especially update operations, which are important in modern workloads, result in frequent and expensive garbage collection (GC) in the value log. In this paper, we design and implement Parallax, which proposes hybrid KV placement that reduces GC overhead significantly and maximizes the benefits of using a log. We first model the benefits of KV separation for different KV pair sizes. We use this model to classify KV pairs into three categories: small, medium, and large. Then, Parallax uses a different approach for each KV category: it always places large values in a log and small values in place. For medium values it uses a mixed strategy that combines the benefits of using a log and eliminates GC overhead as follows: it places medium values in a log for all but the last few (typically one or two) levels in the LSM structure, where it performs a full compaction, merges values in place, and reclaims log space without the need for GC. We evaluate Parallax against RocksDB, which places all values in place, and BlobDB, which always performs KV separation. We find that Parallax increases throughput by up to 12.4x and 17.83x, decreases I/O amplification by up to 27.1x and 26x, and increases CPU efficiency by up to 18.7x and 28x, respectively, for all but scan-based YCSB workloads.
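A toy placement rule capturing the hybrid strategy just described (the size thresholds and level indices below are illustrative placeholders, not Parallax's tuning):

```python
SMALL_MAX = 100    # bytes; illustrative threshold, not Parallax's
LARGE_MIN = 1024   # bytes; illustrative threshold, not Parallax's

def placement(value_size, level, last_levels=(6, 7)):
    """Hybrid KV placement: small values stay in place inside the LSM nodes,
    large values always go to the value log, and medium values go to the log
    except at the last levels, where a full compaction merges them in place
    and reclaims log space without garbage collection."""
    if value_size <= SMALL_MAX:
        return "in-place"
    if value_size >= LARGE_MIN:
        return "log"
    return "in-place" if level in last_levels else "log"

for size, lvl in [(64, 3), (512, 3), (512, 7), (4096, 7)]:
    print(size, lvl, "->", placement(size, lvl))
```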
Since the seminal paper of Claude Shannon \cite{Shannon48}, the computation of the capacity of a discrete memoryless channel has been considered one of the most important and fundamental problems in Information Theory. Nearly 50 years ago, Arimoto and Blahut independently proposed identical algorithms to solve this problem in their seminal papers \cite{Arimoto1972AnAF, Blahut1972ComputationOC}. The Arimoto-Blahut algorithm was proven to converge to the capacity of the channel as $t \to \infty$ with the convergence rate upper bounded by $O\left(\log(m)/t\right)$, where $m$ is the size of the input distribution, and to be inverse exponential when there is a unique solution in the interior of the input probability simplex \cite{Arimoto1972AnAF}. Recently it was proved, in \cite{Nakagawa2020AnalysisOT}, that the convergence rate is at worst inverse linear $O(1/t)$ in some specific cases. In this paper, we revisit this fundamental algorithm, looking at the rate of convergence to the capacity and the time complexity, given $m, n$, where $n$ is the size of the output of the channel, focusing on the approximation of the capacity. We prove that the rate of convergence to an $\varepsilon$-optimal solution, for any sufficiently small constant $\varepsilon > 0$, is inverse exponential $O\left(\log(m)/c^t\right)$, for a constant $c > 1$, requiring at most $O\left(\log\left(\log(m)/\varepsilon\right)\right)$ iterations, implying $O\left(mn \log\left(\log(m)/\varepsilon\right)\right)$ total complexity of the algorithm.
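For readers who want the baseline whose convergence is being analyzed, here is a compact sketch of the classical Arimoto-Blahut iteration (the textbook algorithm, not the paper's accelerated analysis):

```python
import numpy as np

def blahut_arimoto(W, n_iter=300):
    """Capacity (in nats) of a discrete memoryless channel with transition
    matrix W[x, y] = P(y | x), via the classical alternating maximization."""
    m = W.shape[0]
    p = np.full(m, 1.0 / m)          # start from the uniform input distribution
    for _ in range(n_iter):
        q = p @ W                    # induced output distribution
        with np.errstate(divide="ignore", invalid="ignore"):
            # d[x] = D( W(.|x) || q ), the per-input relative entropy
            d = np.where(W > 0, W * np.log(W / q), 0.0).sum(axis=1)
        p = p * np.exp(d)            # multiplicative update toward capacity
        p /= p.sum()
    return float(p @ d)              # ~ I(p; W) near convergence

# Binary symmetric channel, crossover 0.11: capacity = 1 - H(0.11) ~ 0.5 bit.
W = np.array([[0.89, 0.11], [0.11, 0.89]])
print(blahut_arimoto(W) / np.log(2))
```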
Over the past years, there has been an increasing number of key-value (KV) store designs, each optimizing for a different set of requirements. Furthermore, with the advancements of storage technology, the design space of KV stores has become even more complex. More recent KV-store designs target fast storage devices, such as SSDs and NVM. Most of these designs aim to reduce amplification during data reorganization by taking advantage of device characteristics. However, to date most analysis of KV-store designs is experimental and limited to specific design points. This makes it difficult to compare tradeoffs across different designs, find optimal configurations, and guide future KV-store design. In this paper, we introduce the Variable Amplification-Throughput (VAT) analysis to calculate insert-path amplification and its impact on multi-level KV-store performance. We use VAT to express the behavior of several existing design points and to explore tradeoffs that are not possible or easy to measure experimentally. VAT indicates that by inserting randomness in the insert path, KV stores can reduce amplification by more than 10x for fast storage devices. Techniques such as key-value separation and tiering compaction reduce amplification by 10x and 5x, respectively. Additionally, VAT predicts that the advancement of device technology towards NVM reduces the benefits of both key-value separation and tiering.
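A back-of-the-envelope illustration of the kind of tradeoff VAT quantifies. These are generic LSM estimates under loud assumptions (leveling rewrites roughly growth-factor bytes per merged byte per level, tiering about one, and keys are ~10% of a KV pair), not the VAT formulas themselves:

```python
def write_amplification(levels, growth, tiering=False, kv_separation=False):
    """Rough insert-path amplification for a multi-level KV store.

    leveling: each level rewrites ~`growth` bytes per incoming byte;
    tiering:  each byte is rewritten about once per level;
    KV separation: values are paid once in the log, levels carry only keys
    (assumed here to be ~10% of a KV pair -- an illustrative figure).
    """
    per_level = 1.0 if tiering else float(growth)
    amp = per_level * levels
    if kv_separation:
        amp = 1.0 + 0.1 * amp
    return amp

for t in (False, True):
    for s in (False, True):
        print(f"tiering={t}, kv_sep={s}: "
              f"{write_amplification(levels=7, growth=8, tiering=t, kv_separation=s):.1f}x")
```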
We present a spatially-resolved (~3 pc pix$^{-1}$) analysis of the distribution, kinematics, and excitation of warm H$_2$ gas in the nuclear starburst region of M83. Our JWST/MIRI IFU spectroscopy reveals a clumpy reservoir of warm H$_2$ (> 200 K) with a mass of $\sim 2.3 \times 10^{5}\,M_\odot$ in the area covered by all four MRS channels. We additionally use the [Ne II] 12.8 $\mu$m and [Ne III] 15.5 $\mu$m lines as tracers of the star formation rate, ionizing radiation hardness, and kinematics of the ionized ISM, finding tantalizing connections to the H$_2$ properties and to the ages of the underlying stellar populations. Finally, qualitative comparisons to the trove of public, high-spatial-resolution multiwavelength data available on M83 show that our MRS spectroscopy potentially traces all stages of the process of creating massive star clusters, from the embedded proto-cluster phase through the dispersion of ISM by stellar feedback.
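For readers unfamiliar with how warm-H$_2$ temperatures are inferred from such rotational lines, here is a hedged sketch of the standard excitation-diagram fit, $\ln(N_u/g_u) = \mathrm{const} - E_u/(kT)$. The level energies and degeneracies are the standard H$_2$ values, but the column densities are made-up numbers for illustration, not measurements from the paper.

```python
import numpy as np

# H2 rotational diagram: ln(N_u / g_u) is linear in E_u with slope -1/T.
E_u = np.array([1015, 1682, 2504, 3474])    # S(1)..S(4) upper-level energies, K
g_u = np.array([21, 9, 33, 13])             # (2J+1) x ortho/para spin weights
N_u = np.array([3e19, 4e18, 9e17, 1.2e17])  # column densities, cm^-2 (made up)

slope, intercept = np.polyfit(E_u, np.log(N_u / g_u), 1)
T_ex = -1.0 / slope
print(f"excitation temperature ~ {T_ex:.0f} K")  # ~500 K for these toy values
```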