high-energy-physics-experiment
We derive a complete expression for the neutrino-mediated quantum force beyond the four-Fermi approximation within the Standard Model. Using this new result, we study the effect of atomic parity violation caused by neutrinos. We find that the neutrino effect is sizable compared to the current experimental sensitivity and can also significantly affect the value of the Weinberg angle measured in atomic systems. This offers a promising method for detecting the neutrino force in the future and facilitates the application of precision atomic physics as a probe for neutrino physics and the electroweak sector of the Standard Model.
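As a point of reference (not from the paper): the four-Fermi benchmark that the full Standard-Model calculation goes beyond is the classic Feinberg-Sucher two-neutrino-exchange potential; a schematic form, assuming massless neutrinos and effective vector couplings $a_{1,2}$ for the two fermions, is:

```latex
% Schematic four-Fermi (Feinberg-Sucher) neutrino-pair-exchange potential;
% a_1, a_2 are effective vector couplings and G_F is the Fermi constant.
V(r) \simeq \frac{G_F^2\, a_1 a_2}{4\pi^3\, r^5}
```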
We present searches for light dark matter (DM) with masses 3-9 GeV/$c^2$ in the presence of coherent elastic neutrino-nucleus scattering (CE$\nu$NS) from $^{8}$B solar neutrinos with the LUX-ZEPLIN experiment. This analysis uses a 5.7 tonne-year exposure with data collected between March 2023 and April 2025. In an energy range spanning 1-6 keV, we report no significant excess of events attributable to dark matter nuclear recoils, but we observe a significant signal from $^{8}$B CE$\nu$NS interactions that is consistent with expectation. We set world-leading limits on spin-independent and spin-dependent-neutron DM-nucleon interactions for masses down to 5 GeV/$c^2$. In the no-dark-matter scenario, we observe a signal consistent with $^{8}$B CE$\nu$NS events, corresponding to a $4.5\sigma$ statistical significance. This is the most significant evidence of $^{8}$B CE$\nu$NS interactions and is enabled by robust background modeling and mitigation techniques. This demonstrates LZ's ability to detect rare signals at keV-scale energies.
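For reference, the expected $^{8}$B signal is governed by the standard CE$\nu$NS cross section (not restated in the abstract); to leading order in the Standard Model,

```latex
% Standard CEvNS differential cross section for a nucleus of mass m_N,
% weak charge Q_W = N - (1 - 4 sin^2 theta_W) Z, and form factor F(E_R):
\frac{d\sigma}{dE_R} = \frac{G_F^2\, m_N}{4\pi}\, Q_W^2
\left(1 - \frac{m_N E_R}{2 E_\nu^2}\right) F^2(E_R)
```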
We present a proof-of-principle study demonstrating the use of large language model (LLM) agents to automate a representative high energy physics (HEP) analysis. Using the Higgs boson diphoton cross-section measurement as a case study with ATLAS Open Data, we design a hybrid system that combines an LLM-based supervisor-coder agent with the Snakemake workflow manager. In this architecture, the workflow manager enforces reproducibility and determinism, while the agent autonomously generates, executes, and iteratively corrects analysis code in response to user instructions. We define quantitative evaluation metrics, including success rate, error distribution, cost per task, and average number of API calls, to assess agent performance across multi-stage workflows. To characterize variability across architectures, we benchmark a representative selection of state-of-the-art LLMs spanning the Gemini and GPT-5 series, the Claude family, and leading open-weight models. While the workflow manager ensures deterministic execution of all analysis steps, the final outputs still show stochastic variation: although we set the temperature to zero, other sampling parameters (e.g., top-p, top-k) remained at their defaults, and some reasoning-oriented models internally adjust these settings, so the models do not produce fully deterministic results. This study establishes the first LLM-agent-driven automated data-analysis framework in HEP, enabling systematic benchmarking of model capabilities, stability, and limitations in real-world scientific computing environments. The baseline code used in this work is available at this https URL. This work was accepted as a poster at the Machine Learning and the Physical Sciences (ML4PS) workshop at NeurIPS 2025. The initial submission was made on August 30, 2025.
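A minimal sketch of the supervisor-coder loop described above, assuming a hypothetical `llm_generate` wrapper around whichever chat-completion API is used (illustrative only, not the paper's released code):

```python
# Sketch: an LLM writes an analysis step, Snakemake executes it
# deterministically, and stderr is fed back for iterative correction.
import subprocess

def llm_generate(prompt: str) -> str:
    # Hypothetical placeholder: call your LLM provider here.
    raise NotImplementedError

def run_step(rule: str, instruction: str, max_retries: int = 3) -> bool:
    code = llm_generate(f"Write the script for Snakemake rule '{rule}': {instruction}")
    for attempt in range(max_retries):
        with open(f"scripts/{rule}.py", "w") as f:
            f.write(code)
        result = subprocess.run(
            ["snakemake", "--cores", "1", rule],
            capture_output=True, text=True,
        )
        if result.returncode == 0:
            return True  # rule and its outputs are now reproducible
        # Supervisor feeds the traceback back to the coder agent.
        code = llm_generate(
            f"The script for '{rule}' failed with:\n{result.stderr}\nFix it."
        )
    return False
```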
A sample of 365 fb$^{-1}$ of $e^+e^- \to \Upsilon(4S) \to B\bar{B}$ data collected by the Belle II experiment is used to measure the partial branching fractions of charmless semileptonic $B$ meson decays and determine the magnitude of the Cabibbo-Kobayashi-Maskawa matrix element $V_{ub}$. Events containing a signal electron or muon $\ell$ and a fully reconstructed hadronic $B$ decay that constrains the signal kinematics are selected, while the rest of the event defines the hadronic system $X_u$ associated with the signal. To discriminate the signal from the 50-times larger background originating from CKM-favored semileptonic $B$ decays, a template fit is performed in both signal and control regions after applying an optimized selection. The partial branching fraction measured for lepton energies greater than 1 GeV in the signal $B$ meson rest frame is $\Delta\mathcal{B}(B \to X_u \ell \nu) = (1.54 \pm 0.08\,\mathrm{(stat.)} \pm 0.12\,\mathrm{(syst.)}) \times 10^{-3}$. From this measurement, using the Gambino, Giordano, Ossola, and Uraltsev theoretical framework, $|V_{ub}| = (4.01 \pm 0.19\,^{+0.07}_{-0.08}) \times 10^{-3}$ is determined, where the uncertainties are experimental and theoretical, respectively. This value is consistent with the world average obtained from previous inclusive measurements. Different theoretical predictions and partial branching fractions measured in other phase-space regions, defined by additional selections on the $X_u$ and leptonic system masses, are also used to determine $|V_{ub}|$.
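Schematically, the $|V_{ub}|$ extraction quoted above compares the measured partial branching fraction with a theoretical prediction for the partial rate with $|V_{ub}|^2$ factored out:

```latex
% Schematic extraction: Delta-Gamma_th is the theory prediction for the
% partial rate divided by |V_ub|^2, and tau_B is the B-meson lifetime.
|V_{ub}| = \sqrt{\frac{\Delta\mathcal{B}(B \to X_u \ell \nu)}
{\tau_B\, \Delta\Gamma_{\mathrm{th}}}}
```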
We present results on the production of $\pi^{\pm}$, $K^{\pm}$, $p$, and $\bar{p}$ in Au+Au collisions at $\sqrt{s_\mathrm{NN}} = 54.4$ GeV using the STAR detector at RHIC, at mid-rapidity ($|y| < 0.1$). Invariant yields of these particles as a function of transverse momentum are shown. We determine bulk properties such as integrated particle yields ($dN/dy$), mean transverse momentum ($\langle p_{T} \rangle$), and particle ratios, which provide insight into the particle production mechanisms. Additionally, the kinetic freeze-out parameters ($T_\mathrm{kin}$ and $\langle \beta_{T} \rangle$), which provide information about the dynamics of the system at the time of freeze-out, are obtained. The Bjorken energy density ($\epsilon_{BJ}$), which gives an estimate of the energy density in the central rapidity region of the collision zone at the formation time $\tau$, is calculated and presented as a function of multiplicity for various energies. The results are compared with those from models such as A Multi-Phase Transport (AMPT) and the Heavy Ion Jet INteraction Generator (HIJING) for further insight.
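For reference, the Bjorken estimate used above relates the energy density to the transverse-energy rapidity density at formation time $\tau$ over the transverse overlap area $A_T$:

```latex
% Bjorken energy density estimate; A_T is the transverse overlap area
% and dE_T/dy the transverse-energy density at mid-rapidity.
\epsilon_{BJ} = \frac{1}{\tau\, A_T} \frac{dE_T}{dy}
```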
In this work, we present a first-principles lattice-QCD calculation of the unpolarized quark PDF for the pion and the kaon. The lattice data rely on matrix elements calculated for boosted mesons coupled to non-local operators containing a Wilson line. The calculations on this lattice ensemble correspond to two degenerate light quarks, a strange quark, and a charm quark ($N_f = 2+1+1$), using maximally twisted mass fermions with a clover term. The lattice volume is $32^3 \times 64$, with a lattice spacing of 0.0934 fm and a pion mass of 260 MeV. Matrix elements are calculated for hadron boosts of $|P_3| = 0,\ 0.41,\ 0.83,\ 1.25,\ 1.66$, and $2.07$ GeV. To match lattice QCD results to their light-cone counterparts, we employ two complementary frameworks: large-momentum effective theory (LaMET) and short-distance factorization (SDF). Using these approaches in parallel, we also test the lattice data to identify methodology-driven systematics. Results are presented for the standard quark PDFs, as well as the valence sector. Beyond obtaining the PDFs, we also explore the possibility of extracting information on SU(3) flavor-symmetry-breaking effects. For LaMET, we also parametrize the momentum dependence to obtain the infinite-momentum PDFs.
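Schematically (up to normalization and sign conventions), the boosted-meson matrix elements enter through the quasi-distribution that LaMET matches onto the light-cone PDF:

```latex
% Quasi-PDF built from the equal-time, Wilson-line-connected correlator;
% W(z,0) is a straight Wilson line along the boost direction and
% Gamma a suitable Dirac structure (e.g. gamma^0).
\tilde{q}(x, P_3) = \int \frac{dz}{4\pi}\, e^{-i x P_3 z}\,
\langle H(P_3)|\, \bar{\psi}(z)\, \Gamma\, W(z,0)\, \psi(0)\, |H(P_3)\rangle
```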
The quality of recent SRC/CT Collaboration $J/\psi$ photoproduction data off a $^4$He target from Hall D at Jefferson Laboratory, combined with the feasibility of measuring the reaction close to the free-nucleon energy threshold, opens the door to using incoherent $J/\psi$ photoproduction to access a variety of interesting physics aspects. An example is an estimate of the $J/\psi\,p$ scattering length $|\alpha_{J/\psi\,p}|$ on the bound proton obtained using the Vector Meson Dominance model. This value can be compared with that of the free proton from the GlueX Collaboration. One may then project what would be expected from the SRC/CT Collaboration Experiment E12-25-002, which was recently approved by the JLab PAC. Using a plane-wave theoretical model to generate quasi-data, we find the experiment could achieve a result of $|\alpha_{J/\psi\,p}| = 3.08 \pm 0.45$ mfm, an uncertainty competitive with that of the free-proton measurement. A comparison between the two would allow an evaluation of the effects of medium modification in the case of light nuclei.
We investigate direct CP violation in neutral meson decays by reconstructing the associated density matrices and measuring their difference using the trace distance. Our results cover neutral kaon decays into two scalar triplets of isospin space, specifically the pions, and decays of $B$ and $D$ mesons into two scalar octets of SU(3) flavor space. We briefly discuss the quantum properties of these states, including entanglement, contextuality, and nonlocality. Additionally, we demonstrate a comparable approach for spin-1 final states by employing a density matrix describing states in the space of helicities. The significance of CP violation obtained through this method is consistently comparable to, and often surpasses, that obtained using single asymmetries or combinations thereof.
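A minimal numerical sketch of the trace-distance measure used above (assuming NumPy; not the authors' code):

```python
# Trace distance D(rho, sigma) = (1/2) Tr |rho - sigma|, used here as the
# measure of difference between meson and anti-meson decay density matrices.
import numpy as np

def trace_distance(rho: np.ndarray, sigma: np.ndarray) -> float:
    diff = rho - sigma                     # Hermitian difference
    eigvals = np.linalg.eigvalsh(diff)     # real eigenvalues
    return 0.5 * np.sum(np.abs(eigvals))   # half the trace norm

# Toy check: orthogonal pure states are at maximal distance 1.
rho = np.diag([1.0, 0.0])
sigma = np.diag([0.0, 1.0])
assert abs(trace_distance(rho, sigma) - 1.0) < 1e-12
```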
A unique feature of gas xenon electroluminescent time projection chambers (GXeEL TPCs) in $0\nu\beta\beta$ searches is their ability to reconstruct event topology, in particular to distinguish "single-electron" from "double-electron" tracks, the latter being the signature of a $0\nu\beta\beta$ decay near the decay endpoint $Q_{\beta\beta}$. Together with excellent energy resolution and the $t_0$ provided by primary scintillation, this topological information is key to suppressing backgrounds. Preserving EL, however, requires operating in pure xenon (with helium as the only benign additive), and in pure xenon the diffusion of drifting electrons is large. As a result, the fidelity of reconstructed tracks is limited both by diffusion and by the intrinsic blurring of EL amplification. We propose augmenting the detector with the ability to image not only the electron track but also the corresponding mirror ion track. Introducing trace amounts of NH$_3$ ($\sim$100 ppb) converts Xe$^+$ ions into NH$_4^+$ while leaving EL unaffected. For events in the region of interest, an ion sensor positioned near the cathode at the projected barycenter captures the NH$_4^+$ ions. Electrons drift rapidly to the anode, producing the standard EL image, whereas the NH$_4^+$ ions drift slowly toward the cathode. Their slow drift provides time to determine the event energy and barycenter. Laser interrogation of the sensor's molecular layer then reveals an ion-track image with sub-millimeter diffusion and no EL-induced smearing. The combined electron-ion imaging substantially strengthens topological discrimination, improving background rejection by about an order of magnitude and significantly extending the discovery potential of GXeEL TPCs for very long $0\nu\beta\beta$ lifetimes.
Pulse shape discrimination (PSD) is a critical component in background rejection for neutrinoless double-beta decay and dark matter searches using Broad Energy Germanium (BEGe) detectors. To date, advanced discrimination has relied on deep learning approaches employing, e.g., Denoising Autoencoders (DAEs) and Convolutional Neural Networks (CNNs). While effective, these models require tens of thousands of parameters and heavy pre-processing. In this work, we present, to the best of our knowledge, the first application of Quantum Machine Learning (QML) to real, experimental pulse waveforms from a germanium detector. We propose a quantum-classical hybrid approach using Variational Quantum Circuits (VQCs) with amplitude encoding. By mapping the 1024-sample waveforms directly into a 10-qubit Hilbert space, we demonstrate that a VQC with only 302 trainable parameters achieves a receiver operating characteristic (ROC) area under the curve (AUC) of 0.98 and a global accuracy of 97.1%. This result demonstrates that even in the current Noisy Intermediate-Scale Quantum (NISQ) era, quantum models can match the performance of state-of-the-art classical baselines while reducing model complexity by over two orders of magnitude. Furthermore, we envision a scenario where future quantum sensors transmit quantum states directly to such processing units, exploiting the exponentially large Hilbert space in a natively quantum pipeline.
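A minimal sketch of the amplitude-encoded VQC idea, written here with PennyLane as an assumed stack (the paper's exact ansatz and 302-parameter circuit are not reproduced):

```python
# A 1024-sample waveform is normalized into the amplitudes of a
# 2^10-dimensional state on 10 qubits; a shallow variational circuit
# then acts as a binary pulse-shape classifier.
import pennylane as qml
from pennylane import numpy as np

n_qubits = 10  # 2**10 = 1024 amplitudes, one per waveform sample
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def classifier(weights, waveform):
    qml.AmplitudeEmbedding(waveform, wires=range(n_qubits), normalize=True)
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return qml.expval(qml.PauliZ(0))  # sign gives the predicted class

shape = qml.StronglyEntanglingLayers.shape(n_layers=2, n_wires=n_qubits)
weights = np.random.uniform(0, 2 * np.pi, size=shape, requires_grad=True)
waveform = np.random.randn(2 ** n_qubits)  # stand-in for a detector pulse
print(classifier(weights, waveform))
```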
Inverse beta decay (IBD), $\overline{\nu}_e p \to e^+ n (\gamma)$, is the main detection channel for reactor antineutrinos in water- and hydrocarbon-based detectors. As reactor antineutrino experiments now target sub-percent-level sensitivity to oscillation parameters, a precise theoretical description of IBD, including recoil, weak magnetism, nucleon structure, and radiative corrections, becomes essential. In this work, we give a detailed and precise calculation of the total and differential cross sections for radiative IBD, $\overline{\nu}_e p \to e^+ n \gamma$. We use a heavy baryon chiral perturbation theory framework, systematically incorporating electroweak, electromagnetic, and strong-interaction corrections. We derive new analytic cross-section expressions, clarify the collinear structure of radiative corrections, and provide a systematic uncertainty analysis. We also discuss phenomenological applications for reactor antineutrino experiments, e.g., JUNO, and neutron decay. Our results enable sub-permille theoretical precision, supporting current and future experiments.
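For orientation, the leading-order IBD cross section that such corrections refine can be written in the familiar Vogel-Beacom form, with $f = 1$ and $g \approx 1.27$ the vector and axial couplings:

```latex
% Leading-order IBD total cross section; E_e and p_e are the positron
% energy and momentum, Delta_R the inner radiative correction.
\sigma^{(0)}(E_\nu) \simeq \frac{G_F^2 \cos^2\theta_C}{\pi}
\left(1 + \Delta_R\right) \left(f^2 + 3 g^2\right) E_e\, p_e
```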
We derive the differential distribution of semileptonic decays with respect to the perpendicular momentum component of the final-state hadron. The benefits and shortfalls arising from measurements of these distributions are discussed. Our approach is illustrated with the LHCb measurement of the $\bar{B}_s^0 \to D_s^+ \mu^- \bar{\nu}$ decay distribution, for which the publicly available LHCb data are used for the first time in an independent phenomenological analysis. We extract the CKM element $|V_{cb}|$ and information on the shape of the relevant hadronic form factors from the measurement of the rate binned in the perpendicular momentum component of the hadron.
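As a definitional note (a schematic form, consistent with the description above rather than quoted from the paper), the observable is the component of the hadron momentum transverse to the $B$ flight direction:

```latex
% Perpendicular momentum of the final-state hadron: theta is the angle
% between the D_s momentum and the B_s flight direction.
p_\perp = |\vec{p}_{D_s}|\, \sin\theta
```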
Inference of standard CNNs on FPGAs often incurs high latency and a long initiation interval due to the deep nested loops required to densely convolve every input pixel regardless of its feature value, especially when the image size is large. However, in some image data, input features can be spatially sparse, and semantic information may occupy only a small fraction of the input pixels. In this case, most computation is wasted on empty regions. In this work, we introduce SparsePixels, a framework for efficient convolution on spatially sparse image data on FPGAs, targeting fast inference applications in constrained environments with latency requirements of microseconds or below. Our approach implements a special class of CNNs that selectively retain and compute on a small subset of pixels that are active while ignoring the rest. We show, for example, that in a neutrino physics dataset for identifying neutrino interactions in LArTPC images, which have around 4k input pixels but are naturally very sparse, a standard CNN with a compact size of 4k parameters incurs an inference latency of 48.665 $\mu$s on an FPGA, whereas a sparse CNN of the same base architecture computing on less than 1% of the input pixels achieves a $\times 73$ inference speedup to 0.665 $\mu$s, with resource utilization well within on-chip budgets, trading only a small, percent-level performance loss. At least one-order-of-magnitude speedups with comparable performance are also demonstrated on similar datasets with sparse image patterns. This work aims to benefit future algorithm developments for fast and efficient data readout in modern experiments such as the trigger and data acquisition systems at the CERN Large Hadron Collider. For easy adoption, we have developed a library to support building sparse CNNs with quantization-aware training, as well as an HLS implementation for FPGA deployment.
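A minimal software sketch of the sparse-convolution idea (illustrative NumPy, not the SparsePixels library or its HLS implementation):

```python
# Convolve only at active pixels, looking up neighbors in a hash map of
# active coordinates, so cost scales with the number of active pixels
# rather than the image area.
import numpy as np

def sparse_conv2d(coords, feats, weights):
    """coords: (N, 2) int array of active pixels; feats: (N, C_in);
    weights: (3, 3, C_in, C_out) kernel. Returns (N, C_out) outputs on
    the same active set (submanifold-style: no new pixels activated)."""
    index = {tuple(c): i for i, c in enumerate(coords)}
    out = np.zeros((len(coords), weights.shape[-1]))
    for i, (y, x) in enumerate(coords):
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                j = index.get((y + dy, x + dx))
                if j is not None:  # skip empty regions entirely
                    out[i] += feats[j] @ weights[dy + 1, dx + 1]
    return out
```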
Accurate gluon densities at small $x$ are essential for reducing theoretical uncertainties in collider predictions, yet remain one of the least constrained ingredients in global analyses. We report recent advances in the resummation of small-$x$ logarithms in the gluon sector, focusing on collinear distributions and their interplay with transverse-momentum-dependent formulations. Particular attention is paid to the impact of gluon-proton spin correlations and to the emergence of unintegrated gluon densities derived from resummed dynamics. These developments aim to fill the current precision gap in the small-$x$ region and enable robust applications to LHC and future-collider observables.
Reconstructing the trajectories of charged particles in high-energy collisions requires high precision to ensure reliable event reconstruction and accurate downstream physics analyses. In particular, both precise hit selection and transverse momentum estimation are essential to improve the overall resolution of reconstructed physics observables. Enhanced momentum resolution also enables more efficient trigger threshold settings, leading to more effective data selection within given data acquisition constraints. In this paper, we introduce a novel end-to-end tracking approach that employs the differentiable programming paradigm to incorporate physics priors directly into a machine learning model. This results in an optimized pipeline capable of simultaneously reconstructing tracks and accurately determining their transverse momenta. The model combines a graph attention network with differentiable clustering and fitting routines, and is trained using a composite loss that, owing to its differentiable design, allows physical constraints to be back-propagated effectively through both the neural network and the fitting procedures. This proof of concept shows that introducing differentiable connections within the reconstruction process improves overall performance compared to an equivalent factorized, more conventional approach, highlighting the potential of integrating physics information through differentiable programming.
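A minimal sketch of the differentiable-fit principle, using a weighted straight-line fit in PyTorch as a stand-in for the paper's track fit (illustrative assumptions only):

```python
# A least-squares fit written with differentiable tensor ops, so gradients
# of a fit-quality loss flow back through the fit into the upstream network
# that produced the per-hit weights.
import torch

def weighted_line_fit(x, y, w):
    """Fit y = a*x + b with per-hit weights w in closed form; every op is
    differentiable, so d(loss)/d(w) reaches the network producing w."""
    W = w.sum()
    xm, ym = (w * x).sum() / W, (w * y).sum() / W
    a = (w * (x - xm) * (y - ym)).sum() / (w * (x - xm) ** 2).sum()
    b = ym - a * xm
    return a, b

x = torch.linspace(0, 1, 8)
y = 2.0 * x + 0.1 + 0.01 * torch.randn(8)
w = torch.ones(8, requires_grad=True)      # stand-in for network outputs
a, b = weighted_line_fit(x, y, w)
loss = ((y - (a * x + b)) ** 2).sum()      # fit residual enters the loss
loss.backward()                            # gradients flow through the fit
print(a.item(), b.item(), w.grad.norm().item())
```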
A search is presented for the two-body charmed baryonic decays $\overline{B}_{(s)}^{0} \to \Lambda_{c}^{+} \overline{\Lambda}_{c}^{-}$, using a data sample collected by the LHCb experiment during 2011-2012 and 2015-2018 corresponding to an integrated luminosity of 9 fb$^{-1}$. The first observation of the $\overline{B}_{s}^{0} \to \Lambda_{c}^{+} \overline{\Lambda}_{c}^{-}$ decay is reported with $6.2\sigma$ significance, along with $4.3\sigma$ evidence for the $\overline{B}^{0} \to \Lambda_{c}^{+} \overline{\Lambda}_{c}^{-}$ decay. The branching fractions are measured to be $\mathcal{B}(\overline{B}^{0} \to \Lambda_{c}^{+} \overline{\Lambda}_{c}^{-}) = (1.01^{+0.27}_{-0.28} \pm 0.08 \pm 0.15) \times 10^{-5}$ and $\mathcal{B}(\overline{B}_{s}^{0} \to \Lambda_{c}^{+} \overline{\Lambda}_{c}^{-}) = (5.0 \pm 1.3 \pm 0.5 \pm 0.8) \times 10^{-5}$, where the first uncertainty is statistical, the second systematic, and the third due to external inputs. These results provide novel experimental inputs for the theoretical framework describing two-body baryonic decays of $B$ mesons via $W$-emission and $W$-exchange mechanisms.
The RELICS (REactor neutrino LIquid xenon Coherent elastic Scattering) experiment aims to detect coherent elastic neutrino-nucleus scattering from reactor antineutrinos using a dual-phase xenon time projection chamber. To validate the detector concept and ensure technical reliability for the full-scale experiment, a dedicated prototype was designed, constructed, and operated. This work presents an overview of the design, construction, and operational performance of the prototype, with emphasis on its major subsystems, including the TPC, cryogenic and xenon purification systems, slow control, and data acquisition. During operation, the detector demonstrated the capability to achieve a sub-keV energy threshold required for the RELICS physics program, as reflected by a measured single-electron gain of 34.30 $\pm$ 0.01 (stat.) PE/e$^-$ and the successful detection of 0.27 keV L-shell decay events from $^{37}$Ar. In addition, essential data analysis techniques and simulation frameworks were developed and validated, establishing the methodological foundation for future RELICS operations. The successful construction and operation of this prototype confirm the feasibility of the core technologies and provide a crucial experimental basis for the final RELICS detector.
The excellent sensitivities of quantum sensors are a double-edged sword: minuscule quantities can be observed, but any undesired signal acts as noise. This is challenging when detecting quantities that are obscured by such noise. Decoupling sequences improve coherence times and hence sensitivities, though only AC signals in narrow frequency bands are distinguishable. Alternatively, comagnetometers operate gaseous spin mixtures at high temperatures in the self-compensating regime to counteract slowly varying noise, and have been applied with great success in various exotic spin-interaction searches. Here, we propose a method that decouples specific DC fields from DC and AC magnetic noise. It requires a spin cluster in which the target field and local magnetic fields affect each individual spin differently, allowing an approach distinct from that of comagnetometers. The presented method has several key advantages, including an orders-of-magnitude increase in the noise frequencies to which it is resistant. We explore electron-spin nuclear-spin pairs in nitrogen-vacancy centres in diamond, with a focus on their merit for light dark-matter searches. Other applications include gradient sensing, quantum memory, and gyroscopes.
A hot and dense state of nuclear matter, known as the quark-gluon plasma, is created in collisions of ultrarelativistic heavy nuclei. Highly energetic quarks and gluons, collectively referred to as partons, lose energy as they travel through this matter, leading to suppressed production of particles with large transverse momenta ($p_\mathrm{T}$). Conversely, high-$p_\mathrm{T}$ particle suppression has not been seen in proton-lead collisions, raising questions regarding the minimum system size required to observe parton energy loss. Oxygen-oxygen (OO) collisions examine a region of effective system size that lies between these two extreme cases. The CMS detector at the CERN LHC has been used to quantify charged-particle production in inclusive OO collisions for the first time via measurements of the nuclear modification factor ($R_\mathrm{AA}$). The $R_\mathrm{AA}$ is derived by comparing particle production to expectations based on proton-proton (pp) data and has a value of unity in the absence of nuclear effects. The data for OO and pp collisions at a nucleon-nucleon center-of-mass energy $\sqrt{s_\mathrm{NN}} = 5.36$ TeV correspond to integrated luminosities of 6.1 nb$^{-1}$ and 1.02 pb$^{-1}$, respectively. The $R_\mathrm{AA}$ is below unity with a minimum of 0.69 $\pm$ 0.04 around $p_\mathrm{T}$ = 6 GeV. The data exhibit better agreement with theoretical models incorporating parton energy loss as compared to baseline models without energy loss.
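For reference, the nuclear modification factor described above is conventionally defined as the per-collision OO yield over the pp cross section, normalized by the average nuclear overlap function $\langle T_\mathrm{AA} \rangle$:

```latex
% Nuclear modification factor; <T_AA> is the average nuclear overlap
% function from a Glauber calculation.
R_\mathrm{AA}(p_\mathrm{T}) = \frac{1}{\langle T_\mathrm{AA} \rangle}\,
\frac{dN_\mathrm{AA}/dp_\mathrm{T}}{d\sigma_{pp}/dp_\mathrm{T}}
```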
Charged particle track reconstruction is a foundational task in collider experiments and the main computational bottleneck in particle reconstruction. Graph neural networks (GNNs) have shown strong performance for this problem, but costly graph construction, irregular computations, and random memory access patterns substantially limit their throughput. The recently proposed Hashing-based Efficient Point Transformer (HEPT) offers theoretically guaranteed near-linear complexity for large point cloud processing via locality-sensitive hashing (LSH) in attention computations; however, its evaluations have largely focused on embedding quality, and the object condensation pipeline on which HEPT relies requires a post-hoc clustering step (e.g., DBSCAN) that can dominate runtime. In this work, we make two contributions. First, we present a unified, fair evaluation of physics tracking performance for HEPT and a representative GNN-based pipeline under the same dataset and metrics. Second, we introduce HEPTv2 by extending HEPT with a lightweight decoder that eliminates the clustering stage and directly predicts track assignments. This modification preserves HEPT's regular, hardware-friendly computations while enabling ultra-fast end-to-end inference. On the TrackML dataset, optimized HEPTv2 achieves approximately 28 ms per event on an A100 GPU while maintaining competitive tracking efficiency. These results position HEPTv2 as a practical, scalable alternative to GNN-based pipelines for fast tracking.
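A minimal sketch of the LSH-bucketed attention at the core of the HEPT idea (illustrative NumPy; HEPT's actual hashing and kernel are more elaborate):

```python
# Random projections hash nearby points into the same bucket, and full
# softmax attention is computed only within each bucket, giving roughly
# near-linear cost in the number of points N.
import numpy as np

def lsh_attention(q, k, v, n_planes=8):
    """q, k, v: (N, d). Points are bucketed by the sign pattern of random
    projections; attention runs per bucket instead of over all N."""
    rng = np.random.default_rng(0)
    planes = rng.standard_normal((q.shape[1], n_planes))
    codes = ((q @ planes) > 0).astype(int) @ (1 << np.arange(n_planes))
    out = np.zeros_like(v)
    for b in np.unique(codes):
        idx = np.where(codes == b)[0]
        scores = q[idx] @ k[idx].T / np.sqrt(q.shape[1])
        attn = np.exp(scores - scores.max(axis=1, keepdims=True))
        attn /= attn.sum(axis=1, keepdims=True)
        out[idx] = attn @ v[idx]
    return out
```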