Researchers developed CMBFSCNN, a convolutional neural network, to accurately subtract astrophysical foregrounds from Cosmic Microwave Background polarization data. The method successfully recovers the CMB lensing B-mode power spectrum from simulations for next-generation experiments like CMB-S4 and LiteBIRD, demonstrating high fidelity in map-level recovery and power spectrum reconstruction.
Many natural language processing systems operate over tokenizations of text to address the open-vocabulary problem. In this paper, we give and analyze an algorithm for the efficient construction of deterministic finite automata designed to operate directly on tokenizations produced by the popular byte pair encoding technique. This makes it possible to apply many existing techniques and algorithms to the tokenized case, such as pattern matching, equivalence checking of tokenization dictionaries, and composing tokenized languages in various ways.
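As a toy illustration of the setting (not the paper's automaton construction), the sketch below greedily applies a hypothetical BPE merge table. Note how the same surface substring can land on different token boundaries, which is why pattern matching and related algorithms must be lifted to the token level.

```python
# Toy byte pair encoding (BPE), to illustrate the tokenization setting.
# The merge table below is hypothetical; real systems learn it from data.

MERGES = [("l", "o"), ("lo", "w"), ("e", "r")]  # applied in priority order

def bpe_encode(text: str) -> list[str]:
    """Greedily apply BPE merges to a sequence of characters."""
    tokens = list(text)
    for left, right in MERGES:
        merged = []
        i = 0
        while i < len(tokens):
            if i + 1 < len(tokens) and tokens[i] == left and tokens[i + 1] == right:
                merged.append(left + right)  # merge the pair into one token
                i += 2
            else:
                merged.append(tokens[i])
                i += 1
        tokens = merged
    return tokens

print(bpe_encode("lower"))  # ['low', 'er']
print(bpe_encode("rower"))  # ['r', 'o', 'w', 'er'] -- same substring 'ower',
                            # different token boundaries
```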
We perform a frequentist analysis using the standard profile likelihood method for clustering measurements from Data Release 1 of the Dark Energy Spectroscopic Instrument (DESI). While Bayesian inferences for Effective Field Theory models of galaxy clustering can be highly sensitive to the choice of priors for extended cosmological models, frequentist inferences are not susceptible to such effects. We compare Bayesian and frequentist constraints for the parameter set $\{\sigma_8, H_0, \Omega_{\rm m}, w_0, w_a\}$ when fitting the full shape of the power spectrum multipoles and the post-reconstruction Baryon Acoustic Oscillation (BAO) measurements, as well as external datasets from CMB and Type Ia supernova measurements. Bayesian prior effects are very significant for the $w_0w_a$CDM model: while the $1\sigma$ frequentist confidence intervals encompass the maximum a posteriori (MAP), the Bayesian credible intervals almost always exclude both the maximum likelihood estimate (MLE) and the MAP, indicating strong prior volume projection effects, unless supernova data are included. We observe limited prior effects for the $\Lambda$CDM model, due to the reduced number of parameters. When DESI full-shape and BAO data are jointly fit, we obtain the following $1\sigma$ frequentist confidence intervals for $\Lambda$CDM ($w_0w_a$CDM): $\sigma_8 = 0.867^{+0.048}_{-0.041}$, $H_0 = 68.91^{+0.80}_{-0.79}\ {\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{\rm m} = 0.3038 \pm 0.0110$ ($\sigma_8 = 0.793^{+0.069}_{-0.048}$, $H_0 = 64.9^{+4.8}_{-2.8}\ {\rm km\,s^{-1}\,Mpc^{-1}}$, $\Omega_{\rm m} = 0.369^{+0.029}_{-0.059}$, $w_0 = -0.24^{+0.17}_{-0.64}$, $w_a = -2.5^{+1.9}$), corresponding to $0.7\sigma$, $0.3\sigma$, $0.7\sigma$ ($1.9\sigma$, $3.4\sigma$, $5.6\sigma$, $5.5\sigma$, $5.6\sigma$) shifts of the MLE relative to the Bayesian posterior mean for $\Lambda$CDM ($w_0w_a$CDM), respectively.
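For readers unfamiliar with the frequentist machinery, here is a minimal sketch of the standard profile likelihood method on a toy two-parameter $\chi^2$; the model and numbers are illustrative and stand in for, rather than reproduce, the DESI likelihood.

```python
# Minimal profile-likelihood sketch on a toy two-parameter model.
# Illustrative only: the toy chi^2 below stands in for a full DESI likelihood.
import numpy as np
from scipy.optimize import minimize

def chi2(theta):
    """Toy chi^2 with a parameter of interest mu and a nuisance nu."""
    mu, nu = theta
    return (mu - 1.0) ** 2 / 0.1 + (nu - 2.0 * mu) ** 2 / 0.5  # correlated

def profile_chi2(mu):
    """Minimize chi^2 over the nuisance parameter at fixed mu."""
    res = minimize(lambda nu: chi2([mu, nu[0]]), x0=[0.0])
    return res.fun

mu_grid = np.linspace(0.0, 2.0, 101)
prof = np.array([profile_chi2(m) for m in mu_grid])
delta = prof - prof.min()
# 1-sigma confidence interval: Delta chi^2 <= 1 for one parameter
inside = mu_grid[delta <= 1.0]
print(f"MLE mu = {mu_grid[np.argmin(prof)]:.2f}, "
      f"1-sigma interval = [{inside.min():.2f}, {inside.max():.2f}]")
```

Unlike marginalizing a posterior, profiling never integrates over nuisance directions, so no prior volume effects enter.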
The lifetime behaviour of loans is notoriously difficult to model, and a poor model can compromise a bank's financial reserves against future losses. We therefore present a data-driven comparative study of three techniques for modelling a series of default risk estimates over the lifetime of each loan, i.e., its term-structure. The behaviour of loans can be described by a nonstationary and time-dependent semi-Markov model, though we model its elements using a multistate regression-based approach. As such, the transition probabilities are explicitly modelled as a function of a rich set of input variables, including macroeconomic and loan-level inputs. Our modelling techniques are deliberately chosen in ascending order of complexity: 1) a Markov chain; 2) beta regression; and 3) multinomial logistic regression. Using residential mortgage data, our results show that each successive model outperforms the previous one, likely as a result of its greater sophistication. This finding required devising a novel suite of simple model diagnostics, which can itself be reused in assessing sampling representativeness and the performance of other modelling techniques. These contributions advance current practice within banking when conducting multistate modelling. Consequently, we believe that the estimation of loss reserves will be more timeous and accurate under IFRS 9.
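To make the baseline concrete, here is a minimal sketch of technique 1), a discrete-time Markov chain estimated from loan-state histories; the state set and example data are hypothetical. Techniques 2) and 3) then let these transition probabilities vary with macroeconomic and loan-level covariates rather than holding them fixed.

```python
# Sketch of technique 1): estimate a loan-state Markov chain from data.
# The states and the example histories are hypothetical.
import numpy as np

STATES = ["performing", "default", "settled", "written-off"]
IDX = {s: i for i, s in enumerate(STATES)}

def estimate_transition_matrix(histories):
    """Maximum-likelihood estimate: row-normalised transition counts."""
    counts = np.zeros((len(STATES), len(STATES)))
    for history in histories:
        for src, dst in zip(history, history[1:]):
            counts[IDX[src], IDX[dst]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

histories = [
    ["performing", "performing", "default", "performing", "settled"],
    ["performing", "default", "default", "written-off"],
]
P = estimate_transition_matrix(histories)
print(P[IDX["performing"]])  # one-period transition probabilities from 'performing'
```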
Machine learning with hierarchical quantum circuits, usually referred to as Quantum Convolutional Neural Networks (QCNNs), is a promising prospect for near-term quantum computing. The QCNN is a circuit model inspired by the architecture of Convolutional Neural Networks (CNNs). CNNs are successful because they do not need manual feature design and can learn high-level features from raw data. Neural Architecture Search (NAS) builds on this success by learning network architecture and achieves state-of-the-art performance. However, applying NAS to QCNNs presents unique challenges due to the lack of a well-defined search space. In this work, we propose a novel framework for representing QCNN architectures using techniques from NAS, which enables search space design and architecture search. Using this framework, we generate a family of popular QCNNs, those resembling reverse binary trees. We then evaluate this family of models on a music genre classification dataset, GTZAN, to justify the importance of circuit architecture. Furthermore, we employ a genetic algorithm to perform Quantum Phase Recognition (QPR) as an example of architecture search with our representation. This work provides a way to improve model performance without increasing complexity and to jump around the cost landscape to avoid barren plateaus. Finally, we implement the framework as an open-source Python package to enable dynamic QCNN creation and facilitate QCNN search space design for NAS.
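As a structural sketch of what a "reverse binary tree" QCNN layout means (one plausible pairing scheme, not necessarily the paper's exact representation): convolutional unitaries act on neighbouring active qubits, then each pooling step halves the active set until a single readout qubit remains.

```python
# Sketch of a reverse-binary-tree QCNN layout: convolution on neighbouring
# active qubits, then pooling that halves the active set, until one remains.
# The pairing scheme is one plausible choice, not the paper's exact encoding.

def reverse_binary_tree_layers(n_qubits: int):
    """Yield (conv_pairs, pool_pairs) per layer for n_qubits a power of two."""
    active = list(range(n_qubits))
    layers = []
    while len(active) > 1:
        conv = [(active[i], active[i + 1]) for i in range(len(active) - 1)]
        # pool: each odd-position qubit is measured and controls its neighbour
        pool = [(active[i + 1], active[i]) for i in range(0, len(active) - 1, 2)]
        layers.append((conv, pool))
        active = active[::2]  # keep every second qubit for the next layer
    return layers

for depth, (conv, pool) in enumerate(reverse_binary_tree_layers(8)):
    print(f"layer {depth}: conv {conv} pool {pool}")
```

A search space is then obtained by varying the pairings and the gate contents of each conv/pool slot, which is what a genetic algorithm can mutate and recombine.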
We demonstrate that $n$-way junctions in three-dimensional gravity correspond to $n-1$ coupled strings, each satisfying the Nambu-Goto equation in the smoothened background, with sources consisting of Monge-Ampère-like terms which couple the strings. For $n \geq 3$, these $n-1$ degrees of freedom survive the tensionless limit, implying that matter-like behavior can arise out of pure gravity. We interpret these stringy degrees of freedom of gravitational junctions holographically in terms of wavepackets which collectively undergo perfect reflection at the multi-interface in the dual conformal field theory.
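For reference, the Nambu-Goto dynamics invoked here is governed by the standard area action (textbook form, not quoted from the paper):

```latex
% Nambu-Goto action for a string of tension T, with worldsheet coordinates
% \sigma^a and embedding X^\mu(\sigma) into a target spacetime with metric g:
S_{\rm NG} = -T \int d^2\sigma\,
    \sqrt{-\det\big(\partial_a X^\mu\,\partial_b X^\nu\, g_{\mu\nu}\big)}
```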
We investigate the applicability of complex Langevin dynamics to the three-dimensional XY model at finite chemical potential. To assess correctness, we introduce a new diagnostic based on the configurational temperature (or configurational coupling) estimator, recently proposed as a thermodynamic consistency check. We compare this criterion with the established Nagata-Nishimura-Shimasaki drift-decay test across a range of couplings and chemical potentials. Our results show that complex Langevin dynamics yields reliable results in the ordered phase (large β\beta), but fails in the disordered phase (small β\beta), even when the sign problem is mild. The configurational estimator provides a clear and physics-driven reliability test that complements drift-based diagnostics. These findings establish the estimator as a practical tool for identifying incorrect convergence, and highlight its potential for broader applications in lattice field theories with complex actions.
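As a minimal sketch of complex Langevin dynamics itself (a solvable Gaussian toy, not the 3d XY model of the paper): the complexified drift pushes a real variable into the complex plane, and noise averages converge to the analytically continued result.

```python
# Toy complex Langevin evolution for a one-site Gaussian model with
# complex action S(z) = a z^2 / 2, a = 1 + 1j; exact result: <z^2> = 1/a.
# Illustrative only -- the paper studies the 3d XY model, not this toy.
import numpy as np

rng = np.random.default_rng(0)
a = 1.0 + 1.0j
dt, n_steps, n_walkers = 1e-3, 20_000, 10_000

z = np.zeros(n_walkers, dtype=complex)
for _ in range(n_steps):
    drift = -a * z  # -dS/dz, complexified
    z += drift * dt + np.sqrt(2 * dt) * rng.standard_normal(n_walkers)  # real noise

print("CL    <z^2> =", (z * z).mean())
print("exact <z^2> =", 1.0 / a)  # = 0.5 - 0.5j
```

Diagnostics such as the drift-decay test monitor the tail of the distribution of $|{\rm drift}|$; the configurational-estimator criterion discussed above adds an independent, physics-driven consistency check.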
We use a loop truncated Jevicki-Sakita effective collective field Hamiltonian to obtain, over a very large range of values of 't Hooft's coupling, and directly in the large N limit, the large N (planar) ground state energy, the planar ground state expectation values of invariant correlators, and the 1/N spectrum of the quantum mechanical system of three massless Yang-Mills coupled matrices. This captures the dynamics of the (residual) gauge invariant sector of the spatially reduced 3+1 dimensional pure Yang-Mills theory, in the large N limit. The large N loop space constraints are handled by the use of master variables. As is the case for two matrices, the method is highly efficient directly in the massless limit, and it reproduces to a very high precision the scaling dependence of physical quantities, determined by their dimensions, on the dimensionful 't Hooft coupling. We obtain the bound state masses of "glueballs", their quantum numbers and ensuing degeneracies.
We explore the relationship between complexity and duality in quantum systems, focusing on how local and non-local operators evolve under time evolution. We find that non-local operators, which are dual to local operators under specific mappings, exhibit behavior that mimics the growth of their local counterparts, particularly when considering state complexity. For the open transverse Ising model this leads to a neat organisation of the operator dynamics on either side of the duality, both consistent with the growth expected in a quadratic fermion model like the Kitaev chain. When examining periodic chains, however, the mapping of boundary terms provides access to multiple branches of highly complex operators. These give rise to much larger saturation values of complexity for parity-mixing operators, in contrast to what one would expect for a quadratic Hamiltonian. Our results shed light on the intricate relationship between non-locality, complexity growth, and duality in quantum systems.
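The duality at play is, in one common convention, the Jordan-Wigner mapping between the transverse-field Ising chain and a quadratic (Kitaev-like) fermion chain; the non-local string attached to each fermion is what makes boundary terms map non-trivially on periodic chains. A standard form (overall signs are convention-dependent):

```latex
% Jordan-Wigner mapping, one common convention:
c_j = \Big(\prod_{k<j} \sigma^z_k\Big)\,\sigma^-_j ,
\qquad
H = -J\sum_j \sigma^x_j \sigma^x_{j+1} - h\sum_j \sigma^z_j
\;\longrightarrow\;
-J\sum_j \big(c^\dagger_j c_{j+1} + c^\dagger_j c^\dagger_{j+1} + {\rm h.c.}\big)
  - h\sum_j \big(2 c^\dagger_j c_j - 1\big)
% A quadratic fermion chain with equal hopping and pairing amplitudes,
% i.e. a Kitaev-chain form; periodic boundary terms pick up the string.
```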
We investigate cosmology-driven modifications to Schwarzschild-like black hole spacetimes and analyze their impact on photon propagation, gravitational lensing, and shadow observation. The gravitational deflection angle is computed using the Rindler-Ishak method, which incorporates finite-distance corrections and provides a consistent framework for non-asymptotically flat spacetimes. The effective potential for null geodesics exhibits a single unstable maximum corresponding to the photon sphere, and we study photon orbits classified according to the critical impact parameter into capture, escape, and unstable circular trajectories. Our analysis shows that the deflection angle decreases with increasing model parameter $\alpha$, resulting in weaker light bending compared to the Schwarzschild case. In addition, we examine the angular diameter of the black hole shadow as measured by a static observer, highlighting its dependence on the cosmological modification parameters. These results suggest that high-precision astrometric and lensing observations can place meaningful constraints on cosmology-inspired modifications to gravity, thereby linking astrophysical black holes with cosmic expansion and offering a novel probe of gravitational physics in strong-field regimes.
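For orientation, the standard photon-sphere relations for a static, spherically symmetric metric (textbook results, against which the $\alpha$-dependent weakening is measured) read:

```latex
% For ds^2 = -f(r)\,dt^2 + f(r)^{-1}dr^2 + r^2 d\Omega^2, null geodesics
% with angular momentum L feel the effective potential
V_{\rm eff}(r) = \frac{L^2 f(r)}{r^2},
\qquad
\text{photon sphere:}\quad
\frac{d}{dr}\!\left(\frac{f(r)}{r^2}\right)\bigg|_{r_{\rm ph}} = 0,
\qquad
b_c = \frac{r_{\rm ph}}{\sqrt{f(r_{\rm ph})}}
% Schwarzschild (f = 1 - 2M/r): r_ph = 3M, b_c = 3\sqrt{3}\,M -- the baseline
% case; orbits with b < b_c are captured, b > b_c escape, b = b_c are critical.
```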
In July 2025 the Large Hadron Collider (LHC) collided $^{16}$O and $^{20}$Ne isotopes in a quest to understand the physics of ultrarelativistic light-ion collisions. One of the key motivations for this run is to discover partonic energy loss in systems with a small quark-gluon plasma (QGP). In this letter we combine a BDMPS-Z based model, a path-length based energy loss prescription, and JEWEL together with two realistic geometries of the $^{16}$O and $^{20}$Ne isotopes. The different sizes of the ions affect the energy loss in characteristically different ways depending on the model.
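The characteristic BDMPS-Z scaling (a standard result, not the letter's full model) shows why the ion geometry matters:

```latex
% Mean radiative energy loss in the BDMPS-Z framework:
\langle \Delta E \rangle \;\propto\; \alpha_s\, C_R\, \hat{q}\, L^2
% with \hat{q} the jet quenching parameter and L the in-medium path length;
% the quadratic L dependence amplifies the small differences in size and
% shape between the ^{16}O and ^{20}Ne geometries.
```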
We report on an educational pilot program for low-cost physics experimentation run in Ecuador, South Africa, and the United States. The program was developed after needs-based discussions with African educators, researchers, and leaders, which determined that the need and desire for low-cost, skills-building, and active-learning tools is very high. From this, we developed a 3D-printable, Raspberry Pi-based multispectral camera (15 to 25 spectral channels in the visible and near-IR) for as little as $100. The program allows students to learn 3D modeling, 3D printing, feedback and control, image analysis, Python programming, systems integration, and artificial intelligence, as well as spectroscopy. After completing their cameras, the students in the program studied plant health, plant stress, post-harvest fruit ripeness, and polarization and spectral analysis of nanostructured insect wings, the latter of which won the "best-applied research" award at a conference poster session and will be highlighted in this paper. Importantly, these cameras can be an integral part of any developing country's agricultural, recycling, medical, and pharmaceutical infrastructure. Thus, we believe this experiment can play an important role at the intersection of student training and developing countries' capacity building.
We define strict and lax orthogonal factorization systems on double categories. These consist of an orthogonal factorization system on arrows and one on double cells that are compatible with each other. Our definitions are motivated by several explicit examples, including factorization systems on double categories of spans, relations and bimodules. We then prove monadicity results for orthogonal factorization systems on double categories in order to justify our definitions. For fibrant double categories we discuss the structure of the double orthogonal factorization systems that have a given orthogonal factorization system on the arrows in common. Finally, we study the interaction of orthogonal factorization systems on double categories with double fibrations.
We consider two-dimensional ${\cal N} = (2, 2)$ Yang--Mills theory with gauge group SU($N$) in Euclidean signature, compactified on a torus with thermal fermion boundary conditions imposed on one cycle. We perform non-perturbative lattice analyses of this theory for large $N$ in the range $12 \leq N \leq 20$. Although no holographic dual of this theory is yet known, we conduct numerical investigations to check for features similar to the two-dimensional ${\cal N} = (8, 8)$ Yang--Mills theory, which has a well-defined gravity dual. We perform lattice field theory calculations to determine the phase diagram, observing a spatial deconfinement transition similar to the maximally supersymmetric case. However, the transition does not continue to low temperature, implying the absence of a topology-changing transition between black hole geometries in any holographic dual for this four-supercharge theory.
It has recently been shown that the Nambu-Goto equation for a string emerges from the junction conditions in three-dimensional gravity. Holographically, gravitational junctions are dual to interfaces in conformal field theory. We demonstrate that each stringy mode of the junction corresponds to a universal $\mathcal{H}_{\rm in} \rightarrow \mathcal{H}_{\rm out}$ quantum map between in and out Hilbert spaces of excitations scattered at the interface, and also a universal $\mathcal{H}_{L} \rightarrow \mathcal{H}_{R}$ quantum map relating the excitations on both sides. These quantum maps generalize those realized by defect operators and preserve the conformal boundary condition at the interface.
The observation of collectivity in collisions of small systems has constituted a challenge for the heavy-ion community for over a decade now. The absence of jet quenching in those systems presents an apparent contradiction to the presence of an azimuthal anisotropy of high-$p_\perp$ particles. In the present work, we investigate the role of colour coherence in this puzzle. For that, we use the JEWEL Monte Carlo model in its latest version, which includes effects of colour coherence in the jet-medium interactions. We then compare the two scenarios, with and without colour coherence, and quantify the effect on hadron and jet $R_{AA}$ as well as on high-$p_\perp$ $v_2$. The results indicate that, although coherence effects do account for an increase in $R_{AA}$, they do not affect $v_2$ to the same extent. Using hydrodynamic profiles generated with Trajectum, we compare O+O and Pb+Pb collisions at the same charged-particle multiplicity and find that the nuclear modification factors are the same in both systems despite their different shapes.
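For reference, the two observables being compared are (standard definitions):

```latex
% Nuclear modification factor and second-order azimuthal anisotropy:
R_{AA}(p_\perp) = \frac{dN^{AA}/dp_\perp}
                       {\langle N_{\rm coll}\rangle\, dN^{pp}/dp_\perp},
\qquad
v_2(p_\perp) = \big\langle \cos\!\big[\,2\,(\phi - \Psi_2)\,\big] \big\rangle
% with \langle N_{\rm coll}\rangle the mean number of binary nucleon-nucleon
% collisions and \Psi_2 the second-order event-plane angle.
```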
Vehicular communication systems face significant challenges due to high mobility and rapidly changing environments, which affect the channel over which the signals travel. To address these challenges, neural network (NN)-based channel estimation methods have been suggested. These methods are primarily trained on high signal-to-noise ratio (SNR) data, on the assumption that training a NN in less noisy conditions results in good generalisation. This study examines the effectiveness of training NN-based channel estimators on mixed-SNR datasets compared to training solely on high-SNR datasets, as seen in several related works. Estimators evaluated in this work include an architecture that uses convolutional layers and self-attention mechanisms; a method that employs temporal convolutional networks and data-pilot-aided estimation; two methods that combine classical methods with multilayer perceptrons; and the current state-of-the-art model that combines Long Short-Term Memory networks with data-pilot-aided and temporal-averaging methods as post-processing. Our results indicate that using only high-SNR data for training is not always optimal, and that the SNR range of the training dataset should be treated as a hyperparameter to be tuned for better performance. This is illustrated by the better performance of some models in low-SNR conditions when trained on the mixed-SNR dataset, as opposed to when trained exclusively on high-SNR data.
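A minimal sketch of what a mixed-SNR training set means in practice (shapes and the SNR range are illustrative, not taken from the cited estimators):

```python
# Sketch of building a mixed-SNR training set for a channel estimator,
# treating the training SNR range as a hyperparameter, as argued above.
# Shapes and the 0-30 dB range are illustrative, not from the cited models.
import numpy as np

rng = np.random.default_rng(0)

def add_noise(clean, snr_db):
    """Add complex Gaussian noise at a target SNR (in dB)."""
    power = np.mean(np.abs(clean) ** 2)
    noise_power = power / (10 ** (snr_db / 10))
    noise = np.sqrt(noise_power / 2) * (
        rng.standard_normal(clean.shape) + 1j * rng.standard_normal(clean.shape)
    )
    return clean + noise

clean_symbols = rng.standard_normal((1000, 64)) + 1j * rng.standard_normal((1000, 64))

# High-SNR-only baseline vs. a mixed-SNR training set
high_snr_train = add_noise(clean_symbols, snr_db=30)
mixed_snrs = rng.uniform(0, 30, size=clean_symbols.shape[0])  # per-example SNR
mixed_train = np.stack([add_noise(x, s) for x, s in zip(clean_symbols, mixed_snrs)])
```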
A comprehensive multiwavelength study of five very-high-energy (VHE) gamma-ray bursts establishes that synchrotron self-Compton emission from the forward shock is the primary mechanism for VHE photon production. The research reveals these events often occur in constant-density interstellar media and possess distinct physical characteristics, including lower magnetic field strengths and higher intrinsic energies, compared to typical GRBs.
Path entanglement is an essential resource for photonic quantum information processing, including in quantum computing, quantum communication and quantum sensing. In this work, we experimentally study the generation and verification of bipartite path-entangled states using single photons produced by a nitrogen-vacancy center within a nanodiamond. We perform a range of measurements to characterize the photons being generated and verify the presence of path entanglement. The experiment is performed using continuous-wave laser excitation and a novel state generation 'time-window' method. This approach to path entanglement verification is different to previous work as it does not make use of a pulsed laser excitation source.
Recent measurements of the 4-point correlation function in large-scale galaxy surveys have found apparent evidence of parity violation in the distribution of galaxies. This cannot happen via dynamical gravitational effects in general relativity. If such a violation arose from physics in the early Universe, it could indicate important new physics beyond the standard model, and would be at odds with most models of inflation. It is therefore now timely to consider the galaxy trispectrum in more detail. While the intrinsic 4-point correlation function, or equivalently its Fourier counterpart, the trispectrum, is parity invariant, the observed trispectrum must take redshift-space distortions into account. Although the standard Newtonian correction also respects parity invariance, we show that sub-leading relativistic corrections do not. We demonstrate that these can be significant at intermediate linear scales and are dominant over the Newtonian parity-invariant part around the equality scale and above. Therefore, when observing the galaxy 4-point correlation function, we should expect to detect parity violation on large scales.
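Concretely, with the 4-point correlation function $\zeta(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)$ defined on tetrahedra of galaxy positions relative to a primary galaxy, the parity-odd signal is extracted by comparing each tetrahedron with its mirror image (standard decomposition):

```latex
% Point reflection maps a tetrahedron to its mirror image, so the
% parity-odd part of the 4-point correlation function is
\zeta_{\rm odd}(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)
  = \tfrac{1}{2}\big[\zeta(\mathbf{r}_1,\mathbf{r}_2,\mathbf{r}_3)
    - \zeta(-\mathbf{r}_1,-\mathbf{r}_2,-\mathbf{r}_3)\big]
% which vanishes for a parity-invariant trispectrum.
```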