Instituto de Física Corpuscular
We investigate the physics potential of SHIFT@LHC, a proposed gaseous fixed target installed in the LHC tunnel, as a novel source of detectable neutrinos. Using simulations of proton-gas collisions, hadron propagation, and neutrino interactions, we estimate that $O(10^4)$ muon-neutrino and $O(10^3)$ electron-neutrino interactions, spanning energies from 20 GeV to 1 TeV, would occur in the CMS and ATLAS detectors with 1% of the LHC Run-4 integrated luminosity. This unique configuration provides access to hadron production in the pseudorapidity range $5<\eta<8$, complementary to existing LHC detectors. If realized, this would mark the first detection of neutrinos in a general-purpose LHC detector, opening a new avenue to study neutrino production and interactions in a regime directly relevant to atmospheric neutrino experiments.
Two-dimensional dilaton gravity provides a valuable framework to study the dynamics of quantum black holes. These models are often coupled to conformal scalar fields, which capture essential quantum effects such as the trace anomaly while remaining analytically tractable. From the viewpoint of two-dimensional quantum field theory, unitary theories require a positive central charge. However, theories with a negative total central charge naturally arise from the contribution of the Faddeev-Popov ghosts to the effective action. Recent analyses of the Callan-Giddings-Harvey-Strominger (CGHS) model with a Russo-Susskind-Thorlacius (RST) counterterm have shown that a negative central charge can remove curvature singularities in the backreacted geometry. In this work, we argue that singularity resolution arises from the negative central charge itself, rather than from the particular dynamics of a given model. To support this, we present analogous results in spherically reduced Einstein gravity.
The increasing volume of gamma-ray data demands new analysis approaches that can handle large-scale datasets while providing robustness for source detection. We present a Deep Learning (DL) based pipeline for detection, localization, and characterization of gamma-ray sources. We extend our AutoSourceID (ASID) method, initially tested with \textit{Fermi}-LAT simulated data and optical data (MeerLICHT), to Cherenkov Telescope Array Observatory (CTAO) simulated data. This end-to-end pipeline demonstrates a versatile framework for future application to other surveys and potentially serves as a building block for a foundational model for astrophysical source detection.
We present an extension of the Casas-Ibarra parametrization that applies to all possible Majorana neutrino mass models. This framework allows us to systematically identify minimal models, defined as those with the smallest number of free parameters. We further analyze the phenomenologically relevant combination of the Yukawa matrix, $y^\dagger y$, and show that in certain scenarios it exhibits an unexpected reduction in the number of free parameters, depending on just one real degree of freedom. Finally, the application of our results is illustrated in specific models, which can be tested or falsified due to their definite experimental predictions in heavy neutrino and charged lepton flavor violating decays.
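For orientation, the following is the standard Casas-Ibarra parametrization for the plain type-I seesaw, the special case that the generalized framework above extends; conventions (signs and factors of the Higgs vacuum expectation value $v$) vary between references. With $U$ the PMNS matrix, $\hat m_\nu$ and $\hat M$ the diagonal light and heavy neutrino mass matrices, and $R$ a complex orthogonal matrix,

\begin{align}
  y &= \frac{\sqrt{2}}{v}\, U^{*}\, \sqrt{\hat m_\nu}\; R^{T}\, \sqrt{\hat M},
  \qquad R\,R^{T} = \mathbb{1}, \\
  y^{\dagger} y &= \frac{2}{v^{2}}\, \sqrt{\hat M}\, R^{*}\, \hat m_\nu\, R^{T}\, \sqrt{\hat M}.
\end{align}

Note that in this standard case the PMNS matrix cancels from $y^\dagger y$ by unitarity, illustrating how this combination can carry fewer free parameters than the Yukawa matrix itself.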
We review the main components that have to be considered, within Resonance Chiral Theory, in the study of processes whose dynamics is dominated by hadron resonances. We show its application to the study of the $\tau \to (3\pi)\,\nu_\tau$ decay.
Many present-day axion searches attempt to probe the mixing of axions and photons, which occurs in the presence of an external magnetic field. While this process is well understood in a number of simple and idealized contexts, a strongly varying or highly inhomogeneous background can impact the efficiency and evolution of the mixing in a non-trivial manner. In an effort to develop a generalized framework for analyzing axion-photon mixing in arbitrary systems, we focus in this work on directly solving the axion-modified form of Maxwell's equations across a simulation domain with a spatially varying background. We concentrate specifically on understanding resonantly enhanced axion-photon mixing in a highly magnetized plasma, which is a key ingredient for developing precision predictions of radio signals emanating from the magnetospheres of neutron stars. After illustrating the success and accuracy of our approach for simplified limiting cases, we compare our results with a number of analytic solutions recently derived to describe mixing in these systems. We find that our numerical method demonstrates a high level of agreement with one, but only one, of the published results. Interestingly, our method also recovers the mixing between the axion and magnetosonic-t and Alfvén modes; these modes cannot escape from the regions of dense plasma, but could non-trivially alter the dynamics in certain environments. Future work will focus on extending our calculations to study resonant mixing in strongly variable backgrounds, mixing in generalized media (beyond the strong magnetic field limit), and the mixing of photons with other light bosonic fields, such as dark photons.
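As a toy illustration of this direct-solution strategy (a sketch only, not the solver used in this work), the Python snippet below evolves linearized 1D axion and photon wave equations with a leapfrog scheme through a region where the plasma frequency sweeps past the axion mass, which is where resonant conversion occurs. The equations are schematic, and all couplings, profiles, and parameter values are assumed for illustration.

import numpy as np

# Schematic linearized 1D axion-photon mixing in a magnetized plasma
# (units c = 1; signs and normalizations are convention-dependent):
#   d2a/dt2 - d2a/dz2 + m_a^2 a     = -g*B * dA/dt
#   d2A/dt2 - d2A/dz2 + w_p(z)^2 A  = +g*B * da/dt
# Resonant conversion occurs near w_p(z) ~ m_a.

nz, dz = 2000, 0.05
dt = 0.4 * dz                        # comfortably below the CFL limit
z = np.arange(nz) * dz
m_a, gB = 1.0, 1e-3                  # axion mass and coupling*field (assumed)
w_p2 = (0.2 + 1.8 * z / z[-1])**2    # plasma frequency rising through m_a

a = np.exp(-0.5 * ((z - 40.0) / 2.0)**2) * np.cos(m_a * z)  # axion packet
a_old = a.copy()                     # zero initial velocity: packet splits
A = np.zeros(nz); A_old = np.zeros(nz)

def lap(f):
    """Second-order centered Laplacian; endpoints are left un-updated."""
    out = np.zeros_like(f)
    out[1:-1] = (f[2:] - 2.0 * f[1:-1] + f[:-2]) / dz**2
    return out

for step in range(2000):
    # Backward differences for the first-order coupling terms (first-order
    # accurate, adequate for this illustrative sketch).
    da_dt = (a - a_old) / dt
    dA_dt = (A - A_old) / dt
    a_new = 2 * a - a_old + dt**2 * (lap(a) - m_a**2 * a - gB * dA_dt)
    A_new = 2 * A - A_old + dt**2 * (lap(A) - w_p2 * A + gB * da_dt)
    a_old, a = a, a_new
    A_old, A = A, A_new

print("photon energy proxy:", np.sum(A**2) * dz)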
At the n_TOF experiment at CERN, a dedicated single-crystal chemical vapor deposition (sCVD) Diamond Mosaic-Detector has been developed for (n,$\alpha$) cross-section measurements. The detector, characterized by excellent time and energy resolution, consists of an array of 9 sCVD diamond diodes. It has been fully characterized, and a cross-section measurement of the $^{59}$Ni(n,$\alpha$)$^{56}$Fe reaction was performed in 2012. The characteristics of the detector, its performance, and the promising preliminary results of the experiment are presented.
So-called trivializing flows were proposed to speed up Hybrid Monte Carlo (HMC) simulations, using the Wilson flow as an approximation of a trivializing map, i.e. a transformation of the gauge fields that trivializes the theory. It was shown that the scaling of the computational costs towards the continuum did not change with respect to HMC. The introduction of machine learning techniques, especially normalizing flows, for the sampling of lattice gauge theories has raised hopes of solving topology freezing in lattice QCD simulations. In this talk I will present our work in a $\phi^{4}$ theory using normalizing flows as trivializing flows (given their similarity to the idea of a trivializing map), training both from a trivial distribution and from coarser lattices, and study the scaling towards the continuum, comparing it with standard HMC.
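As a minimal illustration of the idea (a sketch under assumed hyperparameters, for a scalar lattice theory in one dimension rather than a gauge theory, and not the code behind this talk), a normalizing flow built from affine coupling layers can be trained in PyTorch to map a trivial Gaussian distribution towards $e^{-S}$ by minimizing the reverse Kullback-Leibler divergence:

import torch
import torch.nn as nn

L = 8                                   # lattice sites (assumed)
m2, lam = -1.0, 1.0                     # phi^4 couplings (assumed)

def action(phi):
    """Euclidean action of 1D phi^4 with periodic boundary conditions."""
    kin = 0.5 * (torch.roll(phi, -1, dims=1) - phi) ** 2
    pot = 0.5 * m2 * phi**2 + lam * phi**4
    return (kin + pot).sum(dim=1)

class AffineCoupling(nn.Module):
    """Real-NVP-style layer: transform half the sites conditioned on the rest."""
    def __init__(self, parity):
        super().__init__()
        self.register_buffer("mask", (torch.arange(L) % 2 == parity).float())
        self.net = nn.Sequential(nn.Linear(L, 64), nn.Tanh(), nn.Linear(64, 2 * L))

    def forward(self, phi):
        frozen = phi * self.mask
        s, t = self.net(frozen).chunk(2, dim=1)
        s = torch.tanh(s) * (1 - self.mask)          # scale active sites only
        t = t * (1 - self.mask)
        phi = frozen + (1 - self.mask) * (phi * torch.exp(s) + t)
        return phi, s.sum(dim=1)                     # log|det J| of the layer

layers = nn.ModuleList([AffineCoupling(k % 2) for k in range(6)])
opt = torch.optim.Adam(layers.parameters(), lr=1e-3)

for step in range(2000):
    z = torch.randn(256, L)                   # samples from the trivial theory
    logq = -0.5 * (z**2).sum(dim=1)           # prior log-density, up to a constant
    phi = z
    for layer in layers:
        phi, logdet = layer(phi)
        logq = logq - logdet                  # density of pushed-forward samples
    loss = (logq + action(phi)).mean()        # reverse KL up to a constant
    opt.zero_grad(); loss.backward(); opt.step()

Trained this way, the flow plays the role of an approximate trivializing map, and its output can be used either as an independence-sampler proposal or to precondition HMC.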
Leptoquark-Higgs interactions induce mixing between leptoquark states with different chiralities once the electroweak symmetry is broken. In such LQ models, Majorana neutrino masses are generated at 1-loop order. Here we calculate the neutrino mass matrix and explore the constraints on the parameter space enforced by the assumption that LQ loops explain current neutrino oscillation data. LQs will be produced at the LHC if their masses are at or below the TeV scale. Since the fermionic decays of LQs are governed by the same Yukawa couplings that are responsible for the non-trivial neutrino mass matrix, several decay branching ratios of LQ states can be predicted from measured neutrino data. Especially interesting is that large lepton flavour violating rates in muon and tau final states are expected. In addition, the model predicts that, if kinematically possible, heavier LQs decay into lighter ones plus either a Standard Model Higgs boson or a $Z^0/W^{\pm}$ gauge boson. Thus, experiments at the LHC might be able to exclude the LQ mechanism as the explanation of neutrino data.
We introduce a quantum algorithm that performs Quantum Adaptive Importance Sampling (QAIS) for Monte Carlo integration of multidimensional functions, targeting in particular the computational challenges of high-energy physics. In this domain, the fundamental ingredients for theoretical predictions, such as multiloop Feynman diagrams and the phase space, require evaluating high-dimensional integrals that are computationally demanding due to divergences and complex mathematical structures. The established method of Adaptive Importance Sampling, as implemented in tools like VEGAS, uses a grid-based approach that is iteratively refined in a separable way, per dimension. This separable approach efficiently suppresses the exponentially growing grid-handling computational cost, but it also introduces performance drawbacks whenever strong inter-variable correlations are present. To utilize sampling resources more efficiently, QAIS exploits the exponentially large Hilbert space of a Parameterised Quantum Circuit (PQC) to manipulate a non-separable Probability Density Function (PDF) defined on a multidimensional grid. In this setting, entanglement within the PQC captures the correlations and intricacies of the target integrand's structure. Performing measurements on the PQC determines the sample allocation across the multidimensional grid. This focuses samples in the small subspace where the important structures of the target integrand lie, and thus yields very precise integral estimates. As an application, we look at a very sharply peaked loop Feynman integral and at multi-modal benchmark integrals.
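For contrast with the quantum approach, the following Python sketch caricatures the classical, separable grid refinement described above; it is a deliberately crude stand-in for VEGAS (not the real algorithm or this paper's code), and the test integrand and all settings are assumed. Each dimension keeps its own bin edges, samples are weighted by the reciprocal of the separable sampling density, and the edges are redistributed so each bin accumulates roughly equal weight.

import numpy as np

rng = np.random.default_rng(0)
nbins, nsamp, niter = 50, 20_000, 10
edges = [np.linspace(0.0, 1.0, nbins + 1) for _ in range(2)]  # per-dimension grids

def f(x, y):
    """Sharply peaked 2D test integrand on the unit square (assumed)."""
    return np.exp(-((x - 0.5) ** 2 + (y - 0.5) ** 2) / 2e-3)

for _ in range(niter):
    # Sample: pick a bin uniformly, then a uniform point inside it.
    idx = [rng.integers(0, nbins, nsamp) for _ in range(2)]
    u = [rng.random(nsamp) for _ in range(2)]
    pts, widths = [], []
    for e, i, uu in zip(edges, idx, u):
        w = e[i + 1] - e[i]
        pts.append(e[i] + uu * w)
        widths.append(w)
    pdf = 1.0 / (nbins * widths[0]) / (nbins * widths[1])     # separable density
    wgt = f(pts[0], pts[1]) / pdf                             # importance weights
    estimate = wgt.mean()
    # Refine each dimension independently: move bin edges so that each bin
    # carries roughly equal accumulated weight (no smoothing, unlike VEGAS).
    for d in range(2):
        acc = np.bincount(idx[d], weights=np.abs(wgt), minlength=nbins) + 1e-12
        cdf = np.concatenate([[0.0], np.cumsum(acc) / acc.sum()])
        edges[d] = np.interp(np.linspace(0.0, 1.0, nbins + 1), cdf, edges[d])

print(f"estimate = {estimate:.6f}   (exact ~ {2e-3 * np.pi:.6f})")

Because each dimension adapts on its own, this scheme cannot represent correlated structure in the integrand; a non-separable density on the full multidimensional grid, which is what the PQC provides, is exactly what this limitation motivates.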
The spontaneous breaking of a $U(1)$ symmetry via an intermediate discrete symmetry may yield a hybrid topological defect of \emph{domain walls bounded by cosmic strings}. The decay of this defect network leads to a unique gravitational wave signal, spanning many orders of magnitude in observable frequencies, that can be distinguished from signals generated by other sources. We investigate the production of gravitational waves from this mechanism in the context of the type-I two-Higgs-doublet model extended by a $U(1)_R$ symmetry, which simultaneously accommodates the seesaw mechanism, ensures anomaly cancellation, and eliminates flavour-changing neutral currents. The gravitational wave spectrum produced by the string-bounded-wall network can be detected for a $U(1)_R$ breaking scale from $10^{12}$ to $10^{15}$ GeV in forthcoming interferometers, including LISA and the Einstein Telescope, with a distinctive $f^{3}$ slope and an inflexion in the frequency range between microhertz and hertz.
The two dark sectors of the universe, dark matter and dark energy, may interact with each other. Background and linear density perturbation evolution equations are developed for a generic coupling. We then establish the general conditions necessary to obtain models free from early-time non-adiabatic instabilities. As an application, we consider a viable universe in which the interaction strength is proportional to the dark energy density. The scenario does not exhibit "phantom crossing" and is free from instabilities, including early ones. A sizeable interaction strength is compatible with combined WMAP, HST, SN, LSS and H(z) data. The neutrino mass and/or cosmic curvature are allowed to be larger than in non-interacting models. Our analysis also sheds light on previously proposed unstable scenarios.
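To make the coupled background evolution concrete, here is a minimal Python sketch of one common convention for an interaction proportional to the dark energy density, $Q = \xi H \rho_{de}$; the convention, the parameter values, and the initial conditions are all assumed for illustration, and the paper's full system also evolves the linear perturbations.

import numpy as np
from scipy.integrate import solve_ivp

# In e-folds N = ln(a) the Hubble rate drops out of the background system:
#   d(rho_dm)/dN = -3 rho_dm           + xi * rho_de   (energy flows DE -> DM)
#   d(rho_de)/dN = -3 (1 + w) rho_de   - xi * rho_de
w, xi = -0.98, 0.1                      # assumed equation of state and coupling

def rhs(N, rho):
    rho_dm, rho_de = rho
    return [-3.0 * rho_dm + xi * rho_de,
            -(3.0 * (1.0 + w) + xi) * rho_de]

# Initial conditions at a = 1e-3, scaled back from today's Omega_dm = 0.26,
# Omega_de = 0.70 as if there were no coupling (units: rho_crit today = 1).
N0 = np.log(1e-3)
rho0 = [0.26 * np.exp(-3.0 * N0), 0.70 * np.exp(-3.0 * (1.0 + w) * N0)]
sol = solve_ivp(rhs, (N0, 0.0), rho0, rtol=1e-8)

rho_dm, rho_de = sol.y[:, -1]
print(f"today with coupling: rho_dm = {rho_dm:.3f}, rho_de = {rho_de:.3f}")
print("uncoupled values would be 0.26 and 0.70")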
A toy model where a massless, real scalar field $\Phi$ in a compact space-time ${\cal M}_4 \times {\cal S}_1$ is coupled to a brane (parametrized as a $\delta$-function) through the unique relevant operator $\delta(y)\,\Phi^2(x,y)$ is considered. The exact Kaluza-Klein spectrum of the model is computed for any value of the coupling between field and brane, using the Burniston-Siewert method to solve the transcendental equations analytically. The exact KK spectrum of a model with a Brane-Localized Kinetic Term is also computed. Weak- and strong-coupling limits are derived, matching or extending mathematically equivalent existing results. For a negative coupling, the would-be zero mode $\psi_{0^-}^e$ is found to localize on the brane, behaving as an effective four-dimensional field. The four-dimensional KK decomposition of the model, once a renormalizable cubic self-interaction $\Phi^3(x,y)$ is added to the action, is derived by computing the overlaps between the KK modes. It is found that the localized would-be zero mode $\psi_{0^-}^e$ decouples from the massive KK spectrum in the limit of large brane-to-bulk coupling.
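To give a feel for the kind of transcendental spectrum involved, here is a short Python sketch that solves a standard quantization condition numerically by bracketing, rather than analytically as in the paper. For a bulk scalar on a circle of radius $R$ with a brane term $\frac{\lambda}{2}\,\delta(y)\,\Phi^2$, the even KK modes commonly satisfy $m \tan(\pi m R) = \lambda/2$; this form, and all parameter values, are assumed for illustration and may differ from the paper's conventions.

import numpy as np
from scipy.optimize import brentq

R, lam = 1.0, 3.0                        # compactification radius and coupling

def quantization(m):
    """Even-mode KK condition m * tan(pi m R) - lam/2 (assumed form)."""
    return m * np.tan(np.pi * m * R) - lam / 2.0

# For lam > 0 exactly one root lies on each branch of the tangent,
# between its zero at m = n/R and its pole at m = (n + 1/2)/R.
masses = []
for n in range(6):
    lo = n / R + 1e-9
    hi = (n + 0.5) / R - 1e-9
    masses.append(brentq(quantization, lo, hi))
print("lowest even KK masses:", np.round(masses, 4))

For $\lambda < 0$ the continued condition $\mu \tanh(\pi \mu R) = |\lambda|/2$, obtained by taking $m \to i\mu$, acquires a solution describing a mode bound to the brane, echoing the localized would-be zero mode discussed in the abstract.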
In recent years, theoretical and phenomenological studies with effective field theories have become a thriving and prolific line of research in high-energy physics. In order to discuss present and future prospects for automated tools in this field, the SMEFT-Tools 2022 workshop was held at the University of Zurich on 14-16 September 2022. The present document collects and summarizes the content of this workshop.
We analyze helioseismic waves near the solar equator in the presence of magnetic fields deep within the solar radiative zone. We find that reasonable magnetic fields can significantly alter the shapes of the wave profiles for helioseismic g-modes. They can do so because the existence of density gradients allows g-modes to resonantly excite Alfvén waves, causing mode energy to be funnelled along magnetic field lines, away from the solar equatorial plane. The resulting wave forms show comparatively sharp spikes in the density profile at radii where these resonances take place. We estimate how big these waves might be in the Sun and perform a first search for observable consequences. We find the density excursions at the resonances to be too narrow to be ruled out by present-day analyses of p-wave helioseismic spectra, even if their amplitudes were to be larger than a few percent. (In contrast, it has been shown by Burgess et al. (2002) that such density excursions could affect solar neutrino fluxes in an important way.) Because solar p-waves are not strongly influenced by radiative-zone magnetic fields, standard analyses of helioseismic data should not be significantly altered. The influence of the magnetic field on the g-mode frequency spectrum could be used to probe sufficiently large radiative-zone magnetic fields, should solar g-modes ever be definitively observed. Our results would have stronger implications if overstable solar g-modes should prove to have very large amplitudes, as has sometimes been argued.
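Schematically, in our notation rather than the paper's, the resonance occurs at radii where the g-mode frequency matches the local Alfvén frequency of the field line,

\[
  \omega^2 \simeq \omega_A^2 = \left(k_\parallel v_A\right)^2,
  \qquad v_A = \frac{B}{\sqrt{4\pi\rho}},
\]

so the resonant radius, and hence the location of the density spikes, traces the run of $B/\sqrt{\rho}$ through the radiative zone.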
We present a new, comprehensive global analysis of parton-to-pion fragmentation functions at next-to-leading order accuracy in QCD. The obtained results are based on the latest experimental information on single-inclusive pion production in electron-positron annihilation, lepton-nucleon deep-inelastic scattering, and proton-proton collisions. An excellent description of all data sets is achieved, and the remaining uncertainties in parton-to-pion fragmentation functions are estimated based on the Hessian method. Extensive comparisons to the results from our previous global analysis are performed.
We study the capabilities of a muon collider experiment to detect disappearing tracks originating when a heavy and electrically charged long-lived particle decays via $X^+ \to Y^+ Z^0$, where $X^+$ and $Z^0$ are two almost mass-degenerate new states and $Y^+$ is a charged Standard Model particle. The backgrounds induced by the in-flight decays of the muon beams (BIB) can create detector hit combinations that mimic long-lived particle signatures, making the search a daunting task. We design a simple strategy to tame the BIB, based on a detector-hit-level selection exploiting timing information and hit-to-hit correlations, followed by simple requirements on the quality of reconstructed tracks. Our strategy allows us to reduce the number of tracks from BIB to an average of 0.08 per event, hence enabling a cut-and-count analysis which shows that it is possible to cover weak doublets and triplets with masses close to $\sqrt{s}/2$ in the 0.1-10 ns range. In particular, this implies that a 10 TeV muon collider is able to probe thermal MSSM higgsinos and thermal MSSM winos, thus rivaling the FCC-hh in that respect, and further enlarging the physics program of the muon collider into the territory of WIMP dark matter and long-lived signatures. We also provide parton-to-reconstructed-level efficiency maps, allowing an estimation of the coverage of disappearing tracks at muon colliders for arbitrary models.
The ANTARES detector, completed in 2008, is the largest neutrino telescope in the Northern Hemisphere. Located at a depth of 2.5 km in the Mediterranean Sea, 40 km off the Toulon shore, its main goal is the search for astrophysical high-energy neutrinos. In this paper we collect the 14 contributions of the ANTARES collaboration to the 33rd International Cosmic Ray Conference (ICRC 2013). The scientific output is very rich, and the contributions included in these proceedings cover the main physics results, ranging from steady point sources to exotic physics and multi-messenger analyses.
We present the Conceptual Design Report (CDR) for the MATHUSLA (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) long-lived particle detector at the HL-LHC, covering the design, fabrication and installation at CERN Point 5. MATHUSLA is a 40 m-scale detector with an air-filled decay volume that is instrumented with scintillator tracking detectors, to be located near CMS. Its large size, close proximity to the CMS interaction point, and about 100 m of rock shielding from HL-LHC backgrounds allow it to detect LLP production rates and lifetimes that are one to two orders of magnitude beyond the ultimate sensitivity of the HL-LHC main detectors for many highly motivated LLP signals. Data taking is projected to commence with the start of HL-LHC operations. We present a new 40 m design for the detector: its individual scintillator bars and wavelength-shifting fibers, their organization into tracking layers, tracking modules, tower modules and the veto detector; define a high-level design for the supporting electronics, DAQ and trigger system, including supplying a hardware trigger signal to CMS to record the LLP production event; outline computing systems, civil engineering and safety considerations; and present preliminary cost estimates and timelines for the project. We also conduct detailed simulation studies of the important cosmic ray and HL-LHC muon backgrounds, implementing full track/vertex reconstruction and background rejection, to ultimately demonstrate high signal efficiency and $\ll 1$ background event in realistic LLP searches for the main physics targets at MATHUSLA. This sensitivity is robust with respect to detector design or background simulation details. Appendices provide various supplemental information.
We present the current status of the MATHUSLA (MAssive Timing Hodoscope for Ultra-Stable neutraL pArticles) long-lived particle (LLP) detector at the HL-LHC, covering the design, fabrication and installation at CERN Point 5. MATHUSLA40 is a 40 m-scale detector with an air-filled decay volume that is instrumented with scintillator tracking detectors, to be located near CMS. Its large size, close proximity to the CMS interaction point, and about 100 m of rock shielding from LHC backgrounds allow it to detect LLP production rates and lifetimes that are one to two orders of magnitude beyond the ultimate reach of the LHC main detectors. This provides unique sensitivity to many LLP signals that are highly theoretically motivated, due to their connection to the hierarchy problem, the nature of dark matter, and baryogenesis. Data taking is projected to commence with the start of HL-LHC operations. We summarize the new 40 m design for the detector that was recently presented in the MATHUSLA Conceptual Design Report, alongside new realistic background and signal simulations that demonstrate high efficiency for the main target LLP signals in a background-free HL-LHC search. We argue that MATHUSLA's uniquely robust expansion of the HL-LHC physics reach is a crucial ingredient in CERN's mission to search for new physics and characterize the Higgs boson with precision.