Nowadays, machine learning (ML) teams run multiple concurrent ML workflows for different applications. Each workflow typically involves many experiments, iterations, and collaborative activities, and commonly takes months, sometimes years, from initial data wrangling to model deployment. Organizationally, this produces a large amount of intermediate data that must be stored, processed, and maintained. \emph{Data virtualization} therefore becomes a critical technology in any infrastructure that serves ML workflows. In this paper, we present the design and implementation of a data virtualization service, focusing on its service architecture and service operations. The infrastructure currently supports six ML applications, each with more than one ML workflow, and the data virtualization service allows the number of applications and workflows to grow in the coming years.
Control system middle layers act as a co-ordination and communication bridge between end users, including operators, system experts, scientists, and experimental users, and the low-level control system interface. This article describes a Python package -- Controls Abstraction Towards Accelerator Physics (CATAP) -- which aims to build on previous experience and provide a modern Python-based middle layer with explicit abstraction, YAML-based configuration, and procedural code generation. CATAP provides a structured and coherent interface to a control system, allowing researchers and operators to centralize higher-level control logic and device information. This greatly reduces the amount of code that a user must write to perform a task, and codifies system knowledge that is usually anecdotal. The CATAP design has been deployed at two accelerator facilities, and has been developed to produce a procedurally generated facility-specific middle layer package from configuration files, enabling its wider dissemination across other machines.
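As an illustration of the YAML-configuration-to-middle-layer pattern described above, here is a minimal sketch. The configuration schema, class names, and placeholder get/set behaviour are assumptions for illustration, not CATAP's actual API:

```python
# Minimal sketch of procedural middle-layer generation from a YAML device
# config, in the spirit of CATAP. Schema and names are illustrative only.
import yaml

CONFIG = """
magnets:
  QUAD-01: {pv_root: "ACC:QUAD:01", settable: [current], readable: [current, field]}
  QUAD-02: {pv_root: "ACC:QUAD:02", settable: [current], readable: [current, field]}
"""

class Device:
    """Generic hardware proxy; a real middle layer would route get/set
    calls through the control-system client (e.g. EPICS Channel Access)."""
    def __init__(self, name, spec):
        self.name = name
        self.pv_root = spec["pv_root"]
        self._readable = set(spec.get("readable", []))
        self._settable = set(spec.get("settable", []))

    def get(self, prop):
        if prop not in self._readable:
            raise AttributeError(f"{self.name}: '{prop}' is not readable")
        return f"caget {self.pv_root}:{prop.upper()}"  # placeholder for a real read

    def set(self, prop, value):
        if prop not in self._settable:
            raise AttributeError(f"{self.name}: '{prop}' is not settable")
        return f"caput {self.pv_root}:{prop.upper()} {value}"  # placeholder

def build_middle_layer(config_text):
    """Generate one proxy object per configured device."""
    config = yaml.safe_load(config_text)
    return {group: {name: Device(name, spec) for name, spec in devices.items()}
            for group, devices in config.items()}

ml = build_middle_layer(CONFIG)
print(ml["magnets"]["QUAD-01"].get("field"))
```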
The Large Particle Physics Laboratory Directors Group (LDG) established the Working Group (WG) on the Sustainability Assessment of Future Accelerators in 2024, with the mandate to develop guidelines and a list of key parameters for assessing the sustainability of future accelerators in particle physics. While focused on accelerator projects, much of the work is also relevant to other present and upcoming research infrastructures. The development and continuous updating of such a framework aim to enable coherent communication amongst scientists and to convey the information adequately to a broader set of stakeholders. This document outlines the key findings and recommendations of the LDG Sustainability WG and summarizes current best practices aimed at enabling sustainable accelerator-based research infrastructures. Not all sustainability topics are addressed at the same level of detail: the assessment process is complex and still largely under development, and a homogeneous evaluation of all aspects will require a strategy that is developed and implemented over time.
Feature foundation models - usually vision transformers - offer rich semantic descriptors of images, useful for downstream tasks such as (interactive) segmentation and object detection. For computational efficiency these descriptors are often patch-based, and so struggle to represent the fine features often present in micrographs; they also struggle with the large image sizes common in materials and biological image analysis. In this work, we train a convolutional neural network to upsample low-resolution (i.e., large patch size) foundation model features with reference to the input image. We apply this upsampler network (without any further training) to efficiently featurise and then segment a variety of microscopy images, including plant cells, a lithium-ion battery cathode, and organic crystals. The richness of these upsampled features admits separation of hard-to-segment phases, like hairline cracks. We demonstrate that interactive segmentation with these deep features produces high-quality segmentations far faster and with far fewer labels than training or finetuning a more traditional convolutional network.
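A minimal PyTorch sketch of the idea: bilinearly upsample patch-level foundation-model features to pixel resolution, then let a small CNN refine them with reference to the input image. The feature dimension and layer sizes are illustrative assumptions, not the authors' exact architecture:

```python
# Sketch: image-guided upsampling of low-resolution ViT patch features.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureUpsampler(nn.Module):
    def __init__(self, feat_dim=384, img_channels=3, hidden=64):
        super().__init__()
        self.refine = nn.Sequential(
            nn.Conv2d(feat_dim + img_channels, hidden, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(hidden, feat_dim, 3, padding=1),
        )

    def forward(self, patch_feats, image):
        # patch_feats: (B, C, h, w) from a foundation model; image: (B, 3, H, W)
        up = F.interpolate(patch_feats, size=image.shape[-2:],
                           mode="bilinear", align_corners=False)
        return self.refine(torch.cat([up, image], dim=1))

feats = torch.randn(1, 384, 16, 16)   # low-resolution patch features
img = torch.randn(1, 3, 224, 224)     # full-resolution micrograph
pixel_feats = FeatureUpsampler()(feats, img)
print(pixel_feats.shape)              # torch.Size([1, 384, 224, 224])
```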
Physics-informed neural networks (PINNs) have emerged as a promising approach for solving complex fluid dynamics problems, yet their application to fluid-structure interaction (FSI) problems with moving boundaries remains largely unexplored. This work addresses the critical challenge of modeling FSI systems with deformable interfaces, where traditional unified PINN architectures struggle to capture the distinct physics governing fluid and structural domains simultaneously. We present an innovative Eulerian-Lagrangian PINN architecture that integrates immersed boundary method (IBM) principles to solve FSI problems with moving boundary conditions. Our approach fundamentally departs from conventional unified architectures by introducing domain-specific neural networks: an Eulerian network for fluid dynamics and a Lagrangian network for structural interfaces, coupled through physics-based constraints. Additionally, we incorporate learnable B-spline activation functions with SiLU to capture both localized high-gradient features near interfaces and global flow patterns. Empirical studies on a 2D cavity flow problem involving a moving solid structure show that while baseline unified PINNs achieve reasonable velocity predictions, they suffer from substantial pressure errors (12.9%) in structural regions. Our Eulerian-Lagrangian architecture with learnable activations (EL-L) achieves better performance across all metrics, improving accuracy by 24.1-91.4% and particularly reducing pressure errors from 12.9% to 2.39%. These results demonstrate that domain decomposition aligned with physical principles, combined with locality-aware activation functions, is essential for accurate FSI modeling within the PINN framework.
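The domain-decomposed coupling can be sketched as follows: separate networks for the Eulerian fluid field and the Lagrangian interface, tied together by a no-slip residual in the spirit of the immersed boundary method. Layer sizes, the plain Tanh activations (standing in for the paper's learnable B-spline/SiLU activations), and the loss form are illustrative assumptions:

```python
# Sketch of an Eulerian-Lagrangian coupling loss for a PINN.
import torch
import torch.nn as nn

eulerian = nn.Sequential(nn.Linear(3, 64), nn.Tanh(),
                         nn.Linear(64, 3))    # (x, y, t) -> (u, v, p)
lagrangian = nn.Sequential(nn.Linear(2, 64), nn.Tanh(),
                           nn.Linear(64, 2))  # (s, t) -> interface position (X, Y)

def interface_coupling_loss(s, t):
    """No-slip residual: the fluid velocity evaluated at the Lagrangian
    markers must match the markers' own velocity dX/dt."""
    st = torch.stack([s, t], dim=1)
    X = lagrangian(st)                          # (N, 2) marker positions
    dXdt = torch.stack([torch.autograd.grad(X[:, i].sum(), t,
                                            create_graph=True)[0]
                        for i in range(2)], dim=1)
    uvp = eulerian(torch.cat([X, t.unsqueeze(1)], dim=1))
    return ((uvp[:, :2] - dXdt) ** 2).mean()

s = torch.rand(64)                              # arclength along the interface
t = torch.rand(64, requires_grad=True)          # collocation times
loss = interface_coupling_loss(s, t)            # added to the PDE/BC losses
loss.backward()
print(loss.item())
```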
We propose an O(100)m Atom Interferometer (AI) experiment -- AICE -- to be installed against a wall of the PX46 access shaft to the LHC. This experiment would probe unexplored ranges of the possible couplings of bosonic ultralight dark matter (ULDM) to atomic constituents and undertake a pioneering search for gravitational waves (GWs) at frequencies intermediate between those to which existing and planned experiments are sensitive, among other fundamental physics studies. A conceptual feasibility study showed that this AI experiment could be isolated from the LHC by installing a shielding wall in the TX46 gallery, and surveyed issues related to the proximity of the LHC machine, finding no technical obstacles. A detailed technical implementation study has shown that the preparatory civil-engineering work, installation of bespoke radiation shielding, deployment of access-control systems and safety alarms, and installation of an elevator platform could be carried out during LS3, allowing installation and operation of the AICE detector to proceed during Run 4 without impacting HL-LHC operation. These studies have established that PX46 is a uniquely promising location for an AI experiment. We foresee that, if the CERN management encourages this Letter of Intent, a significant fraction of the Terrestrial Very Long Baseline Atom Interferometer (TVLBAI) Proto-Collaboration may wish to contribute to AICE.
Recent advances in quantum computers are demonstrating the ability to solve problems at a scale beyond brute force classical simulation. As such, a widespread interest in quantum algorithms has developed in many areas, with optimization being one of the most pronounced domains. Across computer science and physics, there are a number of different approaches for major classes of optimization problems, such as combinatorial optimization, convex optimization, non-convex optimization, and stochastic extensions. This work draws on multiple approaches to study quantum optimization. Provably exact versus heuristic settings are first explained using computational complexity theory - highlighting where quantum advantage is possible in each context. Then, the core building blocks for quantum optimization algorithms are outlined to subsequently define prominent problem classes and identify key open questions that, if answered, will advance the field. The effects of scaling relevant problems on noisy quantum devices are also outlined in detail, alongside meaningful benchmarking problems. We underscore the importance of benchmarking by proposing clear metrics to conduct appropriate comparisons with classical optimization techniques. Lastly, we highlight two domains - finance and sustainability - as rich sources of optimization problems that could be used to benchmark, and eventually validate, the potential real-world impact of quantum optimization.
The ever-increasing number of detections of gravitational waves (GWs) from compact binaries by the Advanced LIGO and Advanced Virgo detectors allows us to perform ever-more sensitive tests of general relativity (GR) in the dynamical and strong-field regime of gravity. We perform a suite of tests of GR using the compact binary signals observed during the second half of the third observing run of those detectors. We restrict our analysis to the 15 confident signals that have false-alarm rates $\leq 10^{-3}\,\mathrm{yr}^{-1}$. In addition to signals consistent with binary black hole (BH) mergers, the new events include GW200115_042309, a signal consistent with a neutron star--BH merger. We find the residual power, after subtracting the best-fit waveform from the data for each event, to be consistent with the detector noise. Additionally, we find all the post-Newtonian deformation coefficients to be consistent with the predictions from GR, with an improvement by a factor of $\sim 2$ in the $-1$PN parameter. We also find that the spin-induced quadrupole moments of the binary BH constituents are consistent with those of Kerr BHs in GR. We find no evidence for dispersion of GWs, non-GR modes of polarization, or post-merger echoes in the events that were analyzed. We update the bound on the mass of the graviton, at 90% credibility, to $m_g \leq 2.42 \times 10^{-23}\,\mathrm{eV}/c^2$. The final mass and final spin as inferred from the pre-merger and post-merger parts of the waveform are consistent with each other. The studies of the properties of the remnant BHs, including deviations of the quasi-normal-mode frequencies and damping times, show consistency with the predictions of GR. In addition to considering signals individually, we also combine results from the catalog of GW signals to calculate more precise population constraints. We find no evidence in support of physics beyond GR.
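For context (a standard conversion, not taken from the paper), a graviton-mass bound maps to a lower bound on the graviton Compton wavelength:

$$\lambda_g = \frac{h}{m_g c} = \frac{hc}{m_g c^2} \;\geq\; \frac{1.24\times 10^{-6}\,\mathrm{eV\,m}}{2.42\times 10^{-23}\,\mathrm{eV}} \;\approx\; 5.1\times 10^{16}\,\mathrm{m} \;\approx\; 1.7\,\mathrm{pc}.$$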
Typical schemes to encode classical data in variational quantum machine learning (QML) lead to quantum Fourier models with $\mathcal{O}(\exp(n))$ Fourier basis functions in the number of qubits. Despite this, in order for the model to be efficiently trainable, the number of parameters must scale as $\mathcal{O}(\mathrm{poly}(n))$. This imbalance implies the existence of correlations between the Fourier modes, which depend on the structure of the circuit. In this work, we demonstrate that this phenomenon exists and show cases where these correlations can be used to predict ansatz performance. For several popular ansatzes, we numerically compute the Fourier coefficient correlations (FCCs) and construct the Fourier fingerprint, a visual representation of the correlation structure. We subsequently show how, for the problem of learning random Fourier series, the FCC correctly predicts relative performance of ansatzes whilst the widely-used expressibility metric does not. Finally, we demonstrate how our framework applies to the more challenging problem of jet reconstruction in high-energy physics. Overall, our results demonstrate how the Fourier fingerprint is a powerful new tool in the problem of optimal ansatz choice for QML.
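The correlation structure can be estimated numerically along the following lines. Here a cheap classical surrogate with shared parameters stands in for the quantum circuit, and the grid and sample sizes are arbitrary choices:

```python
# Sketch of a Fourier-coefficient-correlation (FCC) estimate: draw random
# parameters, extract Fourier coefficients via FFT, correlate across draws.
import numpy as np

rng = np.random.default_rng(0)
n_draws, n_grid, n_freq = 2000, 64, 5
x = np.linspace(0, 2 * np.pi, n_grid, endpoint=False)

def model(theta, x):
    # Surrogate for a data re-uploading circuit: the Fourier coefficients
    # are correlated, nonlinear functions of the shared parameters theta.
    return sum(np.cos(k * theta[k % len(theta)]) * np.cos(k * x + theta[0])
               for k in range(n_freq))

coeffs = np.empty((n_draws, n_freq), dtype=complex)
for i in range(n_draws):
    theta = rng.uniform(0, 2 * np.pi, size=4)
    coeffs[i] = np.fft.rfft(model(theta, x))[:n_freq] / n_grid

# "Fourier fingerprint": correlation matrix of |c_k| across parameter draws.
fingerprint = np.corrcoef(np.abs(coeffs), rowvar=False)
print(np.round(fingerprint, 2))
```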
A key open problem in physics is the correct way to combine gravity (described by general relativity) with everything else (described by quantum mechanics). This problem suggests that general relativity, and possibly also quantum mechanics, needs fundamental corrections. Most physicists expect that gravity should be quantum in character, but gravity is fundamentally different to the other forces because it alone is described by spacetime geometry. Experiments are needed to test whether gravity, and hence spacetime, is quantum or classical. We propose an experiment to test the quantum nature of gravity by checking whether gravity can entangle two micron-sized crystals. A pathway to this is first to create macroscopic quantum superpositions of each crystal using embedded spins and Stern-Gerlach forces. These crystals could be nanodiamonds containing nitrogen-vacancy (NV) centres. The spins can subsequently be measured to witness the gravitationally generated entanglement. This proposal builds on extensive theoretical feasibility studies and experimental progress in quantum technology. The eventual experiment will require a medium-sized consortium with excellent suppression of decoherence, including vibrations and gravitational noise. In this white paper, we review the progress and plans towards realizing this. While implementing these plans, we will further explore the most macroscopic superpositions that are possible, testing theories that predict a limit to this.
The ESA Euclid mission will measure the photometric redshifts of billions of galaxies in order to provide an accurate 3D view of the Universe at optical and near-infrared wavelengths. Photometric redshifts are determined by the PHZ processing function on the basis of the multi-wavelength photometry of Euclid and ground-based observations. In this paper, we describe the PHZ processing used for the Euclid Quick Data Release, the output products, and their validation. The PHZ pipeline is responsible for the following main tasks: source classification into star, galaxy, and QSO classes based on photometric colours; and determination of photometric redshifts and of physical properties of galaxies. The classification is able to provide a star sample with a high level of purity, a highly complete galaxy sample, and reliable probabilities of belonging to those classes. The identification of QSOs is more problematic: photometric information seems to be insufficient to accurately separate QSOs from galaxies. The performance of the pipeline in the determination of photometric redshifts has been tested using the COSMOS2020 catalogue and a large sample of spectroscopic redshifts. The results are in line with expectations: the precision of the estimates is compatible with Euclid requirements, while, as expected, a bias correction is needed to achieve the accuracy level required for the cosmological probes. Finally, the pipeline provides reliable estimates of the physical properties of galaxies, in good agreement with findings from the COSMOS2020 catalogue, except for an unrealistically large fraction of very young galaxies with very high specific star-formation rates. The application of appropriate priors is, however, sufficient to obtain reliable physical properties for those problematic objects. We present several areas for improvement for future Euclid data releases.
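The final point, applying priors to recover sensible estimates, amounts to the generic Bayesian reweighting sketched below; the Gaussian likelihood and the prior shape are toy placeholders, not the PHZ pipeline's actual forms:

```python
# Toy sketch of prior re-weighting for a per-source redshift estimate:
# multiply a photometric likelihood by a prior and renormalise.
import numpy as np

z = np.linspace(0.0, 3.0, 301)
likelihood = np.exp(-0.5 * ((z - 0.8) / 0.15) ** 2)   # toy photo-z likelihood
prior = z**2 * np.exp(-(z / 0.9) ** 1.5)              # toy redshift prior
posterior = likelihood * prior
posterior /= posterior.sum() * (z[1] - z[0])          # normalise on the grid
print(z[np.argmax(posterior)])                        # posterior point estimate
```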
Sample-based quantum diagonalization (SQD) is a recently proposed algorithm to approximate the ground-state wave function of many-body quantum systems on near-term and early-fault-tolerant quantum devices. In SQD, the quantum computer acts as a sampling engine that generates the subspace in which the Hamiltonian is classically diagonalized. A recently proposed SQD variant, Sample-based Krylov Quantum Diagonalization (SKQD), uses quantum Krylov states as circuits from which samples are collected. Convergence guarantees can be derived for SKQD under similar assumptions to those of quantum phase estimation, provided that the ground-state wave function is concentrated, i.e., has support on a small subset of the full Hilbert space. Implementations of SKQD on current utility-scale quantum computers are limited by the depth of time-evolution circuits needed to generate Krylov vectors. For many complex many-body Hamiltonians of interest, such as the molecular electronic-structure Hamiltonian, this depth exceeds the capability of state-of-the-art quantum processors. In this work, we introduce a new SQD variant that combines SKQD with the qDRIFT randomized compilation of the Hamiltonian propagator. The resulting algorithm, termed SqDRIFT, enables SQD calculations at the utility scale on chemical Hamiltonians while preserving the convergence guarantees of SKQD. We apply SqDRIFT to calculate the electronic ground-state energy of several polycyclic aromatic hydrocarbons, up to system sizes beyond the reach of exact diagonalization.
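The qDRIFT ingredient can be sketched in a few lines: sample Hamiltonian terms with probability proportional to their coefficient magnitudes and apply a fixed-angle exponential per sample. The 2-qubit Hamiltonian below is a toy stand-in for the molecular electronic-structure Hamiltonians treated in the paper:

```python
# Minimal sketch of a qDRIFT-compiled propagator (toy 2-qubit Hamiltonian).
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2); X = np.array([[0., 1.], [1., 0.]]); Z = np.diag([1., -1.])
terms = [np.kron(Z, Z), np.kron(X, I2), np.kron(I2, X)]  # Pauli terms P_j
coeffs = np.array([0.5, 0.3, 0.2])                       # coefficients h_j

def qdrift_unitary(t, n_steps, rng):
    """One random product of exponentials; its average approximates exp(-iHt)."""
    lam = np.abs(coeffs).sum()
    tau = lam * t / n_steps                  # fixed rotation angle per step
    U = np.eye(4, dtype=complex)
    for j in rng.choice(len(terms), size=n_steps, p=np.abs(coeffs) / lam):
        P = np.sign(coeffs[j]) * terms[j]
        # exp(-i*tau*P) in closed form, valid because P^2 = identity
        U = (np.cos(tau) * np.eye(4) - 1j * np.sin(tau) * P) @ U
    return U

rng = np.random.default_rng(1)
H = sum(h * P for h, P in zip(coeffs, terms))
exact = expm(-1j * H * 0.5)
approx = np.mean([qdrift_unitary(0.5, 200, rng) for _ in range(100)], axis=0)
print(np.linalg.norm(exact - approx))  # shrinks as steps and samples grow
```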
This report outlines the scientific potential and technical feasibility of the Future Circular Collider (FCC) at CERN, proposing a staged approach with an electron-positron collider followed by a hadron collider in the same tunnel, to enable unprecedented precision measurements of fundamental particles and extend the direct discovery reach for new physics. The study details accelerator designs, detector concepts, and physics programs, demonstrating the capability to improve electroweak precision by orders of magnitude and to probe new physics up to 40-50 TeV.
We investigate the nature of the topological phase transition of the antiferromagnetic Kitaev model on the honeycomb lattice in the presence of a magnetic field along the [111] direction. The field opens a topological gap in the Majorana fermion spectrum and leads to a sequence of topological phase transitions before the field-polarised state is reached. At mean-field level the gap first closes at the three $M$ points in the Brillouin zone, where the Majorana fermions form Dirac cones, resulting in a change of Chern number by three. An odd number of Dirac fermions in the infrared is unusual and requires Berry curvature compensation in the UV, which occurs via topological, ring-like hybridisation gaps with higher-energy bands. We perform a renormalisation-group analysis of the topological phase transition at the three $M$ points within the Yukawa theory, allowing for intra- and inter-valley fluctuations of the spin-liquid bond operators. We find that the latter lead to a breaking of Lorentz invariance and hence a different universality compared to the standard Ising Gross-Neveu-Yukawa class.
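For orientation (a standard counting argument, not quoted from the paper), the change of Chern number by three follows from Dirac-cone counting: each cone with mass $m$ contributes $\operatorname{sgn}(m)/2$ to the Chern number, and $m$ changes sign across the gap-closing transition, so the three symmetry-related $M$ points give

$$\Delta C \;=\; \sum_{i=1}^{3}\frac{\operatorname{sgn}(m_i^{+}) - \operatorname{sgn}(m_i^{-})}{2} \;=\; \pm 3.$$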
The Large Hadron electron Collider (LHeC) is designed to move the field of deep inelastic scattering (DIS) to the energy and intensity frontier of particle physics. Exploiting energy recovery technology, it collides a novel, intense electron beam with a proton or ion beam from the High Luminosity--Large Hadron Collider (HL-LHC). The accelerator and interaction region are designed for concurrent electron-proton and proton-proton operation. This report represents an update of the Conceptual Design Report (CDR) of the LHeC, published in 2012. It comprises new results on parton structure of the proton and heavier nuclei, QCD dynamics, electroweak and top-quark physics. It is shown how the LHeC will open a new chapter of nuclear particle physics by extending the accessible kinematic range in lepton-nucleus scattering by several orders of magnitude. Due to enhanced luminosity, large energy and the cleanliness of the hadronic final states, the LHeC has a strong Higgs physics programme and its own discovery potential for new physics. Building on the 2012 CDR, the report represents a detailed updated design of the energy recovery electron linac (ERL) including new lattice, magnet, superconducting radio frequency technology and further components. Challenges of energy recovery are described and the lower energy, high current, 3-turn ERL facility, PERLE at Orsay, is presented which uses the LHeC characteristics serving as a development facility for the design and operation of the LHeC. An updated detector design is presented corresponding to the acceptance, resolution and calibration goals which arise from the Higgs and parton density function physics programmes. The paper also presents novel results on the Future Circular Collider in electron-hadron mode, FCC-eh, which utilises the same ERL technology to further extend the reach of DIS to even higher centre-of-mass energies.
A new algorithm has been developed at LHCb which is able to reconstruct and select very displaced vertices in real time at the first level of the trigger (HLT1). It makes use of the Upstream Tracker (UT) and the Scintillating Fibre tracker (SciFi) of LHCb, and is executed on GPUs inside the Allen framework. In addition to an optimized strategy, it utilizes a neural network (NN) implementation to increase the track efficiency and reduce the ghost rate, with very high throughput and a limited time budget. Besides serving to reconstruct $K_s^0$ and $\Lambda$ particles from the Standard Model, the Downstream algorithm and the associated two-track vertexing could largely increase the LHCb physics potential for detecting long-lived particles during Run 3.
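The kind of lightweight ghost-rejection network that fits a GPU trigger's time budget can be sketched as a small MLP scoring candidate downstream tracks from a handful of fit-quality features. The feature list and network size here are illustrative assumptions, not the exact Allen implementation:

```python
# Sketch of a trigger-friendly ghost-rejection classifier for track candidates.
import torch
import torch.nn as nn

ghost_killer = nn.Sequential(
    nn.Linear(4, 16), nn.ReLU(),   # inputs: e.g. fit chi2/ndf, p, pT, #UT hits
    nn.Linear(16, 1), nn.Sigmoid() # output: probability the candidate is genuine
)

candidates = torch.tensor([[1.2, 5000.0, 800.0, 4.0],
                           [9.7,  900.0, 120.0, 3.0]])
keep = ghost_killer(candidates).squeeze(1) > 0.5  # untrained weights: illustrative
print(keep)
```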
Knowing the redshift of galaxies is one of the first requirements of many cosmological experiments, and as it is impossible to perform spectroscopy for every galaxy being observed, photometric redshift (photo-z) estimation remains of particular interest. Here, we investigate different deep learning methods for obtaining photo-z estimates directly from images, comparing these with traditional machine learning algorithms which make use of magnitudes retrieved through photometry. As well as testing a convolutional neural network (CNN) and an inception-module CNN, we introduce a novel mixed-input model which allows both images and magnitude data to be used in the same model as a way of further improving the estimated redshifts. We also perform benchmarking to demonstrate the performance and scalability of the different algorithms. The data used in the study come entirely from the Sloan Digital Sky Survey (SDSS), from which 1 million galaxies were used, each having 5-filter (ugriz) images with complete photometry and a spectroscopic redshift taken as the ground truth. The mixed-input inception CNN achieved a mean squared error (MSE) of 0.009, a significant improvement (30%) over the traditional random forest (RF), and the model performed even better at lower redshifts, achieving an MSE of 0.0007 (a 50% improvement over the RF) in the range z<0.3. This method could be hugely beneficial to upcoming surveys such as the Vera C. Rubin Observatory's Legacy Survey of Space and Time (LSST), which will require vast numbers of photo-z estimates produced as quickly and accurately as possible.
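A mixed-input architecture of the kind described can be sketched as a CNN branch for the 5-band (ugriz) image and an MLP branch for the photometric magnitudes, fused before a redshift regression head. Layer sizes and image dimensions are illustrative assumptions, not the paper's exact model:

```python
# Sketch of a mixed-input (image + magnitudes) photo-z regressor.
import torch
import torch.nn as nn

class MixedInputPhotoZ(nn.Module):
    def __init__(self):
        super().__init__()
        self.cnn = nn.Sequential(                 # image branch (5 bands)
            nn.Conv2d(5, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.mlp = nn.Sequential(nn.Linear(5, 32), nn.ReLU())  # magnitude branch
        self.head = nn.Sequential(nn.Linear(32 + 32, 32), nn.ReLU(),
                                  nn.Linear(32, 1))            # redshift output

    def forward(self, image, mags):
        return self.head(torch.cat([self.cnn(image), self.mlp(mags)], dim=1))

model = MixedInputPhotoZ()
z_hat = model(torch.randn(8, 5, 64, 64), torch.randn(8, 5))
print(z_hat.shape)  # torch.Size([8, 1])
```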
Relativistic electron-positron plasmas are ubiquitous in extreme astrophysical environments such as black hole and neutron star magnetospheres, where accretion-powered jets and pulsar winds are expected to be enriched with such pair plasmas. Their behaviour is quite different from that of typical electron-ion plasmas due to the matter-antimatter symmetry of the charged components, and their role in the dynamics of such compact objects is believed to be fundamental. So far, our experimental inability to produce large yields of positrons in quasi-neutral beams has restricted the understanding of electron-positron pair plasmas to rather limited numerical and analytical studies. We present first experimental results confirming the generation of high-density, quasi-neutral, relativistic electron-positron pair beams using the 440 GeV/c beam at CERN's Super Proton Synchrotron (SPS) accelerator. The produced pair beams have a volume that fills multiple Debye spheres and are thus able to sustain collective plasma oscillations. Our work opens up the possibility of directly probing the microphysics of pair plasmas beyond quasi-linear evolution, into regimes that are challenging to simulate or measure via astronomical observations.
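For orientation (a textbook criterion, not a formula from the paper), a beam can sustain collective behaviour when its volume contains many Debye spheres; in the non-relativistic form,

$$\lambda_D = \sqrt{\frac{\varepsilon_0 k_B T_e}{n_e e^2}}, \qquad N_D = \frac{4}{3}\pi n_e \lambda_D^3 \gg 1,$$

with relativistic pair beams requiring a suitably generalised, temperature-dependent $\lambda_D$.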
Volume 3 of the FCC Feasibility Report presents studies related to civil engineering, the development of a project implementation scenario, and environmental and sustainability aspects. The report details the iterative improvements made to the civil engineering concepts since 2018, taking into account subsurface conditions, accelerator and experiment requirements, and territorial considerations. It outlines a technically feasible and economically viable civil engineering configuration that serves as the baseline for detailed subsurface investigations, construction design, cost estimation, and project implementation planning. Additionally, the report highlights ongoing subsurface investigations in key areas to support the development of an improved 3D subsurface model of the region. The report describes the development of the project scenario based on the 'avoid-reduce-compensate' iterative optimisation approach. The reference scenario balances optimal physics performance with territorial compatibility, implementation risks, and costs. Environmental field investigations covering almost 600 hectares of terrain - including numerous urban, economic, social, and technical aspects - confirmed the project's technical feasibility and contributed to the preparation of essential input documents for the formal project authorisation phase. The summary also highlights the initiation of public dialogue as part of the authorisation process. The results of a comprehensive socio-economic impact assessment, which included significant environmental effects, are presented. Even under the most conservative and stringent conditions, a positive benefit-cost ratio for the FCC-ee is obtained. Finally, the report provides a concise summary of the studies conducted to document the current state of the environment.
The knowledge of scintillation quenching of $\alpha$-particles plays a paramount role in understanding $\alpha$-induced backgrounds and improving the sensitivity of liquid-argon-based direct detection of dark matter experiments. We performed a relative measurement of scintillation quenching in the MeV energy region using radioactive isotopes ($^{222}$Rn, $^{218}$Po, and $^{214}$Po) present in trace amounts in the DEAP-3600 detector, and quantified the uncertainty of extrapolating the quenching factor to the low-energy region.
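For reference, the quenching factor referred to here is conventionally defined as the scintillation light yield of an $\alpha$-particle relative to that of an electron depositing the same energy (standard definition, not quoted from the paper):

$$\mathrm{QF}(E) \;=\; \frac{L_\alpha(E)}{L_{e}(E)}.$$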