University of Salamanca
We show that, in the thin-wall regime, $Q$-ball--anti-$Q$-ball collisions reveal chaotic behaviour. This is explained by the resonant energy transfer mechanism triggered by the internal modes hosted by the $Q$-balls and by the existence of {\it ephemeral} states, that is, unstable, sometimes even short-lived, field configurations that appear as intermediate states. The most important examples of such states are the {\it bubbles} of the false broken vacuum, which, as intermediate states, govern the $QQ^*$ annihilation, and the {\it charged oscillons}. The usually short-lived bubbles can be dynamically, if temporarily, stabilized, which explains their importance in the dynamics of $Q$-balls. This happens due to the excitation of massless Goldstone modes, which, exerting pressure on the bubble boundaries or being trapped as bound modes, prevent the bubbles from collapsing.
Quantum computing is the process of performing calculations using quantum mechanics. This field studies the quantum behavior of certain subatomic particles for subsequent use in performing calculations, as well as for large-scale information processing. These capabilities can give quantum computers an advantage in terms of computational time and cost over classical computers. Nowadays, there are scientific challenges that are impossible to address with classical computation due to their computational complexity or the time the calculation would take, and quantum computing is one of the possible answers. However, current quantum devices do not yet have the necessary number of qubits and are not sufficiently fault-tolerant to achieve these goals. Nonetheless, there are other fields, like machine learning or chemistry, where quantum computation could be useful with current quantum devices. This manuscript presents a Systematic Literature Review of the papers published between 2017 and 2023 to identify, analyze and classify the different algorithms used in quantum machine learning and their applications. This study identified 94 articles that used quantum machine learning techniques and algorithms. The main types of algorithms found are quantum implementations of classical machine learning algorithms, such as support vector machines or the k-nearest neighbors model, and of classical deep learning algorithms, such as quantum neural networks. Many articles address problems currently solved by classical machine learning, but using quantum devices and algorithms. Even though the results are promising, quantum machine learning is still far from achieving its full potential. An improvement in quantum hardware is required, since existing quantum computers lack the quality, speed, and scale needed for quantum computing to achieve its full potential.
In this letter we uncover a new parametric resonance of axionic cosmic strings. This process is triggered by the presence of internal mode excitations on the string, which resonantly amplify the amplitude of its transverse displacements. We study this process by running numerical simulations that demonstrate the existence of this phenomenon in a $(3+1)$-dimensional lattice field theory and compare the results with the analytic expectations for the effective Lagrangian of the amplitude of these modes and their interactions. Finally, we also analyze the massless and massive radiation produced by these excited strings and comment on its relevance for the interpretation of the results of current numerical simulations of axionic cosmic string networks.
The cosmic infrared background (CIB) contains emissions accumulated over the entire history of the Universe, including from objects inaccessible to individual telescopic studies. The near-IR (~1-10 μm) part of the CIB, and its fluctuations, reflects emissions from nucleosynthetic sources and gravitationally accreting black holes (BHs). If known galaxies are removed to sufficient depths, the source-subtracted CIB fluctuations in the near-IR can reveal sources present in the first-stars era and possibly new stellar populations at more recent times. This review discusses the recent progress in this newly emerging field, which has identified, with new data and methodology, significant source-subtracted CIB fluctuations substantially in excess of what can be produced by the remaining known galaxies. The CIB fluctuations further appear coherent with the unresolved cosmic X-ray background (CXB), indicating a very high fraction of BHs among the new sources producing the CIB fluctuations. These observations have led to intensive theoretical efforts to explain the measurements and their properties. While current experimental configurations have limitations in decisively probing these theories, their potentially remarkable implications will be tested in upcoming CIB measurements with ESA's Euclid dark energy mission. We describe the goals and methodologies of LIBRAE (Looking at Infrared Background Radiation with Euclid), a NASA-selected project for CIB science with Euclid, which has the potential to transform the field into a new area of precision cosmology.
Online streaming services have become the most popular way of listening to music. The majority of these services are endowed with recommendation mechanisms that help users discover songs and artists that may interest them among the vast amount of music available. However, many are not reliable, as they may not take into account contextual aspects or ever-evolving user behavior. It is therefore necessary to develop systems that consider these aspects. In the field of music, time is one of the most important factors influencing user preferences, and managing its effects is the motivation behind the work presented in this paper. Here, the temporal information regarding when songs are played is examined. The purpose is to model both the evolution of user preferences, in the form of evolving implicit ratings, and user listening behavior. In the collaborative filtering method proposed in this work, daily listening habits are captured in order to characterize users and provide them with more reliable recommendations. The results of the validation show that this approach outperforms other methods in generating both context-aware and context-free recommendations.
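To make the notion of evolving implicit ratings concrete, below is a minimal, purely illustrative Python sketch: play events are turned into time-decayed implicit ratings and fed to a simple user-based collaborative filter. The exponential decay, the half-life parameter and the cosine-similarity neighbourhood are assumptions made for illustration, not the formulation used in the paper.

```python
import numpy as np

def evolving_implicit_ratings(play_events, n_users, n_items, half_life_days=90.0):
    """Build a user-item implicit-rating matrix in which older plays weigh less.

    play_events: iterable of (user_id, item_id, days_ago) tuples.
    half_life_days: a play this old counts half as much as one today (assumed scheme).
    """
    R = np.zeros((n_users, n_items))
    decay = np.log(2.0) / half_life_days
    for user, item, days_ago in play_events:
        R[user, item] += np.exp(-decay * days_ago)  # exponentially time-decayed play weight
    return R

def recommend(R, user, k=10):
    """User-based collaborative filtering with cosine similarity (illustrative only)."""
    norms = np.linalg.norm(R, axis=1, keepdims=True) + 1e-12
    sims = (R / norms) @ (R[user] / norms[user])  # cosine similarity to every user
    sims[user] = 0.0                              # exclude the target user
    scores = sims @ R                             # neighbours' ratings, similarity-weighted
    scores[R[user] > 0] = -np.inf                 # do not re-recommend already-played items
    return np.argsort(scores)[::-1][:k]

# Usage sketch with toy data: (user, item, days_ago)
events = [(0, 1, 2.0), (0, 2, 40.0), (1, 1, 1.0), (1, 3, 5.0), (2, 2, 0.5)]
R = evolving_implicit_ratings(events, n_users=3, n_items=4)
print(recommend(R, user=0, k=2))
```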
Recommender Systems (RSs) are used to provide users with personalized item recommendations and help them overcome the problem of information overload. Currently, recommendation methods based on deep learning are gaining ground over traditional methods such as matrix factorization due to their ability to represent the complex relationships between users and items and to incorporate additional information. The fact that these data have a graph structure, and the greater capability of Graph Neural Networks (GNNs) to learn from such structures, has led to their successful incorporation into recommender systems. However, the bias amplification issue needs to be investigated when using these algorithms. Bias results in unfair decisions, which can negatively affect a company's reputation and financial status due to societal disappointment and environmental harm. In this paper, we aim to study this problem comprehensively through a literature review and an analysis of the behavior of different GNN-based algorithms with respect to bias, compared to state-of-the-art methods. We also intend to explore appropriate solutions to tackle this issue with the least possible impact on model performance.
We calculate in a 1D General Relativistic (GR) hydrodynamic simulation the neutrino luminosity in an astrophysical scenario where a neutron star (NS) undergoes a hadron-quark phase transition (HQPT) into a Quark Star (QS). Deconfinement is triggered once the central density exceeds a critical threshold above $\sim 3n_0$, with $n_0$ the saturation density. We use descriptions based on the DD2 and MIT Bag model equations of state (EOSs). We account for neutrinos using a microphysics forward emission model including $e^-e^+$ annihilation, plasmon decay, nucleon ($N$) modified (or direct) Urca processes, and $NN$ bremsstrahlung, and, for the post-transition phase, the quark direct Urca process and an opacity-based leakage scheme with GR redshift. We find that the neutrino light curve generically develops a short ($\simeq 10$-$50$ ms), spectrally harder feature near deconfinement, appearing as either a prompt shoulder or a distinct secondary peak. Heavy-lepton neutrinos show a peak delayed with respect to this feature. We identify three diagnostics that are only mildly degenerate with hadronic uncertainties: (i) an enhanced peak-to-plateau ratio $R_{\rm pp}$ sourced by latent-heat release, (ii) a characteristic lag $\Delta t$ between the collapse rise and the HQPT feature that tracks the central density trajectory, and (iii) a flavor hardening $\Delta\langle E_\nu\rangle$ driven by quark-matter phase space. After MSW flavor conversion, these signatures remain detectable with current experiments. For a Galactic event ($d\sim 10$ kpc), IceCube and Hyper-K should resolve the HQPT feature and distinguish it from both a no-transition NS collapse and canonical core-collapse supernova (CCSN) templates.
This paper investigates the ontological characterization of Large Language Models (LLMs) like ChatGPT. Between inflationary and deflationary accounts, we pay special attention to their status as agents. This requires explaining in detail the architecture, processing, and training procedures that enable LLMs to display their capacities, and the extensions used to turn LLMs into agent-like systems. After a systematic analysis we conclude that an LLM fails to meet the necessary and sufficient conditions for autonomous agency in the light of embodied theories of mind: the individuality condition (it is not the product of its own activity, it is not even directly affected by it), the normativity condition (it does not generate its own norms or goals), and, partially, the interactional asymmetry condition (it is not the origin and sustained source of its interaction with the environment). If not agents, then ... what are LLMs? We argue that ChatGPT should be characterized as an interlocutor or linguistic automaton, a library-that-talks, devoid of (autonomous) agency, but capable of engaging performatively in non-purposeful yet purpose-structured and purpose-bounded tasks. When interacting with humans, a "ghostly" component of the human-machine interaction makes it possible to enact genuine conversational experiences with LLMs. Despite their lack of sensorimotor and biological embodiment, LLMs' textual embodiment (the training corpus) and resource-hungry computational embodiment significantly transform existing forms of human agency. Beyond assisted and extended agency, the LLM-human coupling can produce midtended forms of agency, closer to the production of intentional agency than to the extended instrumentality of any previous technologies.
We introduce a numerical method for the approximation of functions which are analytic on compact intervals, except at the endpoints. This method is based on variable transforms using particular parametrized exponential and double-exponential mappings, in combination with Fourier-like approximation in a truncated domain. We show theoretically that this method is superior to variable transform techniques based on the standard exponential and double-exponential mappings. In particular, it can resolve oscillatory behaviour using near-optimal degrees of freedom, whereas the standard mappings require degrees of freedom that grow superlinearly with the frequency of oscillation. We highlight these results with several numerical experiments, in which it is observed that near-machine-epsilon accuracy is achieved using a number of degrees of freedom between four and ten times smaller than that required by existing techniques.
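For context, the standard (non-parametrized) mappings referred to here take, in one common normalization, the following form; the paper's parametrized exponential and double-exponential transforms generalize these:

% Standard variable transforms from the real line onto (-1,1); composing f with
% either map turns endpoint singularities into rapid decay as t -> ±infinity,
% which the truncated Fourier-like approximation then exploits.
\begin{align}
  \text{exponential (tanh) map:}\qquad
    x &= \psi_{\mathrm{E}}(t) = \tanh(t),\\
  \text{double-exponential map:}\qquad
    x &= \psi_{\mathrm{DE}}(t) = \tanh\!\left(\tfrac{\pi}{2}\sinh t\right).
\end{align}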
In this work we investigate the possible condensation of tetraneutron resonant states in the lower-density neutron-rich gas regions inside Neutron Stars (NSs). Using a relativistic density functional approach we characterize a system containing different hadronic species including, besides tetraneutrons, nucleons and a set of light clusters ($^3$He, $\alpha$ particles, deuterium and tritium). The $\sigma$, $\omega$ and $\rho$ mesonic fields provide the interaction in the nuclear system. We study how the presence of tetraneutrons could significantly impact the nucleon pairing fractions and the distribution of baryonic charge among species. For this we assume that they can be thermodynamically produced in an equilibrated medium and scan a range of coupling strengths to the mesonic fields from prescriptions based on isospin symmetry arguments. We find that tetraneutrons may appear over a range of densities belonging to the outer NS crust, carrying a sizable amount of baryonic charge and thus depleting the nucleon pairing fractions.
A Lie system is a system of differential equations describing the integral curves of a $t$-dependent vector field taking values in a finite-dimensional real Lie algebra of vector fields, a Vessiot-Guldberg Lie algebra. We define and analyze Lie systems possessing a Vessiot-Guldberg Lie algebra of Hamiltonian vector fields relative to a Jacobi manifold, hereafter called Jacobi-Lie systems. We classify Jacobi-Lie systems on $\mathbb{R}$ and $\mathbb{R}^2$. Our results shall be illustrated through examples of physical and mathematical interest.
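For context, a Lie system as described here takes the following standard form (the textbook definition, not a result specific to this paper):

\begin{equation}
  \frac{\mathrm{d}x}{\mathrm{d}t}
    = X(t,x)
    = \sum_{\alpha=1}^{r} b_\alpha(t)\, X_\alpha(x),
  \qquad
  [X_\alpha, X_\beta] = \sum_{\gamma=1}^{r} c_{\alpha\beta}^{\;\gamma}\, X_\gamma,
\end{equation}
% where the b_alpha are arbitrary t-dependent coefficients, the vector fields
% X_alpha span the Vessiot-Guldberg Lie algebra with structure constants c, and
% a Jacobi-Lie system additionally demands that each X_alpha be Hamiltonian with
% respect to a Jacobi manifold structure.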
Focusing control of ultrashort pulsed beams is an important research topic due to its impact on subsequent interaction with matter. In this work, we study the propagation near the focus of ultrashort laser pulses of ~25 fs duration under diffractive focusing. We perform spatio-spectral and spatio-temporal measurements of their amplitude and phase, complemented by the corresponding simulations. With them, we demonstrate that pulse shaping allows modifying in a controlled way not only the spatio-temporal distribution of the light irradiance in the focal region, but also the way it propagates, as well as the frequency distribution within the pulse (temporal chirp). To gain further intuitive insight, the role of various added spectral phase components is analyzed, showing the symmetries that arise in each case. In particular, we compare the effects, similarities and differences of the second- and third-order dispersion cases.
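For reference (a standard convention rather than anything specific to this work), the added spectral phase components compared here are the terms of the Taylor expansion of the spectral phase $\varphi(\omega)$ around the central frequency $\omega_0$:

\begin{equation}
  \varphi(\omega) = \varphi_0
    + \varphi_1\,(\omega-\omega_0)
    + \frac{\varphi_2}{2}\,(\omega-\omega_0)^2
    + \frac{\varphi_3}{6}\,(\omega-\omega_0)^3 + \cdots,
\end{equation}
% \varphi_2 is the group delay dispersion (second-order dispersion) and
% \varphi_3 the third-order dispersion discussed in the text; \varphi_0 and
% \varphi_1 only shift the carrier-envelope phase and the pulse arrival time.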
Perturbation theory in geometric theories of gravitation is a gauge theory of symmetric tensors defined on a Lorentzian manifold (the background spacetime). The gauge freedom makes uniqueness problems in perturbation theory particularly hard, as one needs to understand in depth the process of gauge fixing before attempting any uniqueness proof. This is the first paper of a series of two aimed at deriving an existence and uniqueness result for rigidly rotating stars to second order in perturbation theory in General Relativity. A necessary step is to show the existence of a suitable choice of gauge and to understand the differentiability and regularity properties of the resulting gauge tensors in some "canonical form", particularly at the centre of the star. With a wider range of applications in mind, in this paper we analyse the gauge-fixing and regularity problem in a more general setting. In particular we tackle the problem of the Hodge-type decomposition into scalar, vector and tensor components on spheres of symmetric and axially symmetric tensors with finite differentiability down to the origin, exploiting a strategy in which the loss of differentiability is as low as possible. Our primary interest, and main result, is to show that stationary and axially symmetric second order perturbations around static and spherically symmetric background configurations can indeed be rendered in the usual "canonical form" used in the literature while losing only one degree of differentiability and keeping all relevant quantities bounded near the origin.
We study Lie-Hamilton systems on the plane, i.e. systems of first-order differential equations describing the integral curves of a $t$-dependent vector field taking values in a finite-dimensional real Lie algebra of planar Hamiltonian vector fields with respect to a Poisson structure. We start with the local classification of finite-dimensional real Lie algebras of vector fields on the plane obtained in [A. González-López, N. Kamran and P.J. Olver, Proc. London Math. Soc. 64, 339 (1992)] and we interpret their results as a local classification of Lie systems. Moreover, by determining which of these real Lie algebras consist of Hamiltonian vector fields with respect to a Poisson structure, we provide the complete local classification of Lie-Hamilton systems on the plane. We present and study through our results new Lie-Hamilton systems of interest which are used to investigate relevant non-autonomous differential equations, e.g. we get explicit local diffeomorphisms between such systems. In particular, the Milne-Pinney, second-order Kummer-Schwarz, complex Riccati and Buchdahl equations as well as some Lotka-Volterra and nonlinear biomathematical models are analysed from this Lie-Hamilton approach.
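As a point of reference (a textbook example of a Lie system, not a result of this paper), the scalar Riccati equation already displays the structure that underlies this classification; the complex Riccati equations analysed here are its planar counterparts:

\begin{equation}
  \frac{\mathrm{d}x}{\mathrm{d}t}
    = b_1(t) + b_2(t)\,x + b_3(t)\,x^2,
  \qquad
  X_1 = \partial_x,\quad X_2 = x\,\partial_x,\quad X_3 = x^2\,\partial_x,
\end{equation}
% with [X_1,X_2] = X_1, [X_1,X_3] = 2 X_2 and [X_2,X_3] = X_3, so that the
% Vessiot-Guldberg Lie algebra spanned by X_1, X_2, X_3 is isomorphic to sl(2,R).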
A Lie-Hamilton system is a nonautonomous system of first-order ordinary differential equations describing the integral curves of a $t$-dependent vector field taking values in a finite-dimensional real Lie algebra of Hamiltonian vector fields with respect to a Poisson structure. We provide new algebraic/geometric techniques to easily determine the properties of such Lie algebras on the plane, e.g., their associated Poisson bivectors. We study new and known Lie-Hamilton systems on $\mathbb{R}^2$ with physical, biological and mathematical applications. New results cover Cayley-Klein Riccati equations, the here defined planar diffusion Riccati systems, complex Bernoulli differential equations and projective Schrödinger equations. Constants of motion for planar Lie-Hamilton systems are explicitly obtained which, in turn, allow us to derive superposition rules through a coalgebra approach.
We measured the dipole of the diffuse $\gamma$-ray background (DGB), identifying a highly significant time-independent signal coincident with that of the Pierre Auger UHECRs. The DGB dipole is determined from flux maps in narrow energy bands constructed from 13 years of observations by the Large Area Telescope (LAT) of the {\it Fermi} satellite. The $\gamma$-ray maps were clipped iteratively of sources and foregrounds, similarly to what is done for the cosmic infrared background. The clipped narrow-energy-band maps were then assembled into one broad-energy map out to the given energy, starting at $E = 2.74$ GeV, where the LAT beam falls below the sky map's pixel resolution. Next we consider cuts in Galactic latitude and longitude to probe residual foreground contamination from the Galactic Plane and Center. In the broad energy range $2.74 < E \leq 115.5$ GeV the measured dipoles are stable with respect to the various Galactic cuts, consistent with an extragalactic origin. The $\gamma$-ray sky's dipole/monopole ratio is much greater than that expected from the DGB clustering component or from a Compton-Getting effect origin with reasonable velocities. At $\simeq (6.5$-$7)\%$ it is similar to that of the Pierre Auger UHECRs with $E_{\rm UHECR}\ge 8$ EeV, pointing to a common origin of the two dipoles. However, the DGB flux associated with the found DGB dipole reaches parity with that of the UHECRs around $E_{\rm UHECR}\le 1$ EeV, perhaps arguing for a non-cascading mechanism if the DGB dipole were to come from the higher-energy UHECRs. The signal/noise of the DGB dipole is largest in the $5$-$30$ GeV range, possibly suggesting that the $\gamma$-ray photons at these energies are the ones related to cosmic rays.
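For reference, the Compton-Getting dipole ruled out here has the standard amplitude (a textbook expression, not a formula from this paper): for a differential intensity $I(E)\propto E^{-\Gamma}$ seen by an observer moving with velocity $v$,

\begin{equation}
  d_{\mathrm{CG}} = (\Gamma + 2)\,\frac{v}{c},
\end{equation}
% so for typical DGB photon indices and "reasonable" peculiar velocities
% (v/c of order 10^{-3}) the expected dipole is at the sub-percent level, well
% below the measured ~(6.5-7)% dipole/monopole ratio.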
Global information about dynamical systems can be extracted by analysing associated infinite-dimensional transfer operators, such as Perron-Frobenius and Koopman operators as well as their infinitesimal generators. In practice, these operators typically need to be approximated from data. Popular approximation methods are extended dynamic mode decomposition (EDMD) and generator extended dynamic mode decomposition (gEDMD). We propose a unified framework that leverages Monte Carlo sampling to approximate the operator of interest on a finite-dimensional space spanned by a set of basis functions. Our framework contains EDMD and gEDMD as special cases, but can also be used to approximate more general operators. Our key contributions are proofs of the convergence of the approximating operator and its spectrum under non-restrictive conditions. Moreover, we derive explicit convergence rates and account for the presence of noise in the observations. Whilst all these results are broadly applicable, they also refine previous analyses of EDMD and gEDMD. We verify the analytical results with the aid of several numerical experiments.
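For concreteness, the following is a minimal Python sketch of plain EDMD, the textbook special case of the unified framework described above; the toy dynamics and monomial dictionary are purely illustrative assumptions.

```python
import numpy as np

def edmd(X, Y, dictionary):
    """Minimal EDMD: approximate the Koopman operator on span(dictionary).

    X, Y : arrays of shape (m, d) holding snapshot pairs y_i = F(x_i).
    dictionary : callable mapping an (m, d) array to an (m, n) array of
                 basis-function evaluations.
    Returns the (n, n) matrix K such that psi(F(x)) ~= K^T psi(x) in the
    least-squares / Monte Carlo sense.
    """
    PsiX = dictionary(X)                      # (m, n)
    PsiY = dictionary(Y)                      # (m, n)
    m = X.shape[0]
    G = PsiX.T @ PsiX / m                     # empirical Gram matrix
    A = PsiX.T @ PsiY / m                     # empirical cross matrix
    return np.linalg.pinv(G) @ A              # empirical Koopman matrix

# Usage sketch: monomial dictionary on the 1D toy map x -> 0.9 x
dictionary = lambda X: np.hstack([X**k for k in range(4)])   # 1, x, x^2, x^3
X = np.random.uniform(-1, 1, size=(1000, 1))
Y = 0.9 * X
K = edmd(X, Y, dictionary)
print(np.linalg.eigvals(K))   # should be close to 1, 0.9, 0.81, 0.729
```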
In most applications of ultrashort pulse lasers, temporal compressors are used to achieve a desired pulse duration on a target or sample, and precise temporal characterization is important. The dispersion-scan (d-scan) pulse characterization technique usually involves using glass wedges to impart variable, well-defined amounts of dispersion to the pulses while measuring the spectrum of a nonlinear signal produced by those pulses. This works very well for broadband few-cycle pulses, but longer, narrower-bandwidth pulses are much more difficult to measure this way. Here we demonstrate the concept of self-calibrating d-scan, which extends the applicability of the d-scan technique to pulses of arbitrary duration, enabling their complete measurement without prior knowledge of the introduced dispersion. In particular, we show that the pulse compressors already employed in chirped pulse amplification (CPA) systems can be used to simultaneously compress and measure the temporal profile of the output pulses on-target in a simple way, without the need for additional diagnostics or calibrations, while at the same time calibrating the often unknown differential dispersion of the compressor itself. We demonstrate the technique through simulations and experiments under known conditions. Finally, we apply it to the measurement and compression of 27.5 fs pulses from a CPA laser.
This paper analyzes the direction of the causality between crude oil, gold and stock markets for the largest economy in the world with respect to such markets, the US. To do so, we apply non-linear Granger causality tests. We find a nonlinear causal relationship among the three markets considered, with the causality going in all directions, when the full sample and different subsamples are considered. However, we find a unidirectional nonlinear causal relationship between the crude oil and gold markets (with the causality only going from oil price changes to gold price changes) when the subsample runs from the first date of any year between the mid-1990s and 2001 to the last available date (February 5, 2015). The latter result may explain the lack of consensus in the literature about the direction of the causal link between the crude oil and gold markets.
An application of the Soil Moisture Agricultural Drought Index (SMADI) at the global scale is presented. The index integrates surface soil moisture from the SMOS mission with land surface temperature (LST) and the Normalized Difference Vegetation Index (NDVI) from MODIS, and allows for global drought monitoring at medium spatial scales (0.05 deg). Biweekly maps of SMADI were obtained from 2010 to 2015 over all agricultural areas on Earth. The SMADI time series were compared with state-of-the-art drought indices over the Iberian Peninsula. Results show a good agreement between SMADI and the Crop Moisture Index (CMI) retrieved at five weather stations (correlation coefficient R from -0.64 to -0.79) and the Soil Water Deficit Index (SWDI) at the Soil Moisture Measurement Stations Network of the University of Salamanca (REMEDHUS) (R = -0.83). Some preliminary tests were also made over the continental United States using the Vegetation Drought Response Index (VegDRI), with very encouraging results regarding the spatial occurrence of droughts during the summer seasons. Additionally, SMADI made it possible to identify distinctive patterns of regional drought over the Indian Peninsula in the spring of 2012. Overall, the results support the use of SMADI for monitoring agricultural drought events worldwide.