Universidade do Estado de Santa Catarina
In contrast to Newtonian physics, there is no absolute time in relativistic (Lorentzian) spacetimes. This immediately implies that two twins may, in general, age at different rates. For this to happen, there must be, of course, some asymmetry between their worldlines, along which the elapsed proper times are evaluated; such asymmetry might not be, however, so intuitively apparent. Our primary objective is to present, in a concise and didactic manner, some lesser-known results and derive novel ones from a modern, geometrical, and covariant standpoint, with the aim of clarifying the issue and dispelling related misconceptions. First, we recall that: (i) the original ``twin paradox'' may be perfectly dealt with in special relativity (physics in a flat spacetime) and does not necessarily involve an accelerated twin. We then explore the issue of differential aging in general relativity (physics in a curved background) in the prototypical case of the vacuum Schwarzschild spacetime, considering several pairs of twins. In this context, we show that: (ii) it is not true that a twin that gets closer to the Schwarzschild horizon, by being subject to a stronger gravitational field, where time sort of slows down, must always end up younger than a twin that stays farther away, in a region of weaker gravitational field, and (iii) it is also false that an accelerated twin always returns younger than a geodesic one. Finally, we argue that (iv) in a generic spacetime, there is no universal correlation between the phenomena of differential aging and the Doppler effect. Two particularly pedagogical resources provided are a glossary of relevant terms and supplementary Python notebooks in a GitHub repository.
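As a minimal illustration of differential aging in the Schwarzschild geometry (a sketch in the spirit of the supplementary notebooks, not the authors' actual code), the snippet below compares the proper time accumulated per revolution by a static twin and by a twin on a circular geodesic at the same Schwarzschild radius, using the standard rates $d\tau/dt=\sqrt{1-2M/r}$ and $d\tau/dt=\sqrt{1-3M/r}$ in geometric units $G=c=1$; the radius $r=10M$ is an arbitrary choice.

```python
import numpy as np

# Geometric units G = c = 1; all lengths in units of the mass M.
M = 1.0

def dtau_dt_static(r, M=M):
    """Proper-time rate of a static observer at Schwarzschild radius r."""
    return np.sqrt(1.0 - 2.0 * M / r)

def dtau_dt_circular_geodesic(r, M=M):
    """Proper-time rate along a circular geodesic at radius r (valid for r > 3M)."""
    return np.sqrt(1.0 - 3.0 * M / r)

# Compare the two twins at the same radius over one coordinate-time revolution
r = 10.0 * M                                  # orbit well outside the photon sphere
Omega = np.sqrt(M / r**3)                     # Keplerian coordinate angular velocity
T_coord = 2.0 * np.pi / Omega                 # coordinate time for one revolution

tau_static = dtau_dt_static(r) * T_coord
tau_orbit = dtau_dt_circular_geodesic(r) * T_coord

print(f"Static twin ages   : {tau_static:.3f} M per revolution")
print(f"Orbiting twin ages : {tau_orbit:.3f} M per revolution")
print(f"Ratio orbit/static : {tau_orbit / tau_static:.4f}")
```

At the same radius, the orbiting (geodesic) twin ages less than the static (accelerated) one, which already gives a concrete instance of point (iii): an accelerated twin need not end up younger than a geodesic one.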
We propose a new observable derived from a centrality-dependent scaling of transverse particle spectra. By removing the global scales of total particle number and mean transverse momentum, we isolate the shape of the spectrum. In hydrodynamic simulations, while the multiplicity and mean transverse momentum fluctuate significantly, the scaled spectrum is found to be almost constant even at an event-by-event level and after resonance decays. This universality survives when averaging over events in each centrality bin before scaling. We then investigate the presence of this scaling in experimental data from the ALICE collaboration in Pb-Pb, Xe-Xe, and p-Pb collisions. We find a remarkable universality in the experimentally observed scaled spectra at low transverse momentum, compatible with hydrodynamic predictions. The data show a minor breaking of universality at large transverse momentum and hints of evolution with the system size that are not seen in simulations. Our results motivate further theoretical and experimental investigations of this new observable to bring to light the collective and non-collective behavior encoded in the transverse particle spectrum of different collision systems.
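A sketch of the scaling operation described in the abstract, under the assumption that the spectrum is available as a binned $dN/dp_T$: the global scales (total yield $N$ and mean transverse momentum $\langle p_T\rangle$) are divided out and the spectrum is re-expressed as a function of $p_T/\langle p_T\rangle$. The toy exponential spectra, binning, and normalization convention are illustrative only.

```python
import numpy as np

def scaled_spectrum(pt, dN_dpt):
    """Remove the global scales N and <pT> from a binned spectrum dN/dpT.

    Returns x = pT/<pT> and the dimensionless shape (<pT>/N) * dN/dpT,
    whose integral over x is 1 by construction.
    """
    dpt = pt[1] - pt[0]                           # uniform bin width assumed
    N = np.sum(dN_dpt) * dpt                      # total yield
    mean_pt = np.sum(pt * dN_dpt) * dpt / N       # mean transverse momentum
    x = pt / mean_pt
    shape = (mean_pt / N) * dN_dpt
    return x, shape, N, mean_pt

# Toy example: two exponential-like spectra with different yields and slopes
pt = np.linspace(0.05, 3.0, 60)                   # GeV/c
for norm, slope in [(1000.0, 0.50), (400.0, 0.55)]:
    dN_dpt = norm * pt * np.exp(-pt / slope)
    x, shape, N, mean_pt = scaled_spectrum(pt, dN_dpt)
    print(f"N = {N:7.1f}, <pT> = {mean_pt:.3f} GeV/c, shape at x=1: {np.interp(1.0, x, shape):.3f}")
```

With this toy family the two scaled spectra coincide exactly, which is the kind of shape universality the proposed observable is designed to expose.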
Despite not yet having been identified by the IceCube detector, events generated by $\nu_{\tau}$ deep inelastic neutrino scattering in ice with varied topologies, such as double cascades (often called \textit{double bangs}), \textit{lollipops} and \textit{sugardaddies}, constitute a potential laboratory for low-$x$ parton studies. Here we investigate these events, analyzing the effect of next-to-next-to-leading order (NNLO) Parton Distribution Functions (PDFs) on the total neutrino--nucleon cross section, as compared with the color dipole formalism, where saturation effects play a major role. Energy deposit profiles in the `bangs' are also analyzed in terms of virtual $W$-boson and tauon energy distributions and are found to be crucial in establishing a clear signal for gluon distribution determination at very small $x$. By folding the average (all-flavor) neutrino flux ($\Phi_{\nu}\sim E_{\nu}^{-2.3}$) into the differential cross sections as a function of $\tau$ and $W$ energies, we find significant deviations from pure DGLAP parton interactions for neutrino energies already at a few PeV. With these findings we aim to provide not only possible observables to be measured in large-volume neutrino detectors in the near future, but also theoretical ways of unraveling QCD dynamics using unintegrated neutrino--nucleon cross sections at the ultrahigh-energy frontier.
The exclusive production of a vector-toponium $\psi_t$ state by photon-hadron interactions is investigated considering proton-proton and proton-nucleus collisions at the Large Hadron Collider (LHC) and Future Circular Collider (FCC) energies. The scattering amplitude is calculated using the $k_T$-factorization formalism, assuming that the vector-toponium state can be described by a Gaussian light-cone wave function and considering different models for the unintegrated gluon distribution. Predictions for the rapidity distributions and total cross sections are presented.
This work investigates the possibility of accessing the initial geometric shape of the proton in proton-nucleus collisions at the LHC, in particular the configuration in which the proton is made of three quarks linked by a Y-shaped gluon string, called a baryon junction. This initial-state spatial configuration has been used in the past to describe data on baryon rapidity distributions, diffractive $J/\psi$ production and multiplicity distributions in pp collisions. In spite of its success in explaining the data, the evidence for the baryon junction still needs confirmation. Further studies will be undertaken at the Electron-Ion Collider. In this work we study multiplicity distributions measured in pPb collisions. Different initial-state geometries are used as input in a Monte Carlo event generator which implements the $k_T$-factorization formalism of the CGC with KLN unintegrated gluon distributions. The results indicate that the hard-sphere and Gaussian proton configurations are incompatible with the data. In contrast, the baryon junction configuration can describe the data provided that intrinsic fluctuations of the saturation scale are included.
Rigid (uniform) rotation is usually assumed when investigating the properties of mature neutron stars (NSs). Although it simplifies their description, it is an assumption because we cannot observe the NS's innermost parts. Here, we analyze the structure of NSs in the simple case of ``almost rigidity'', where the innermost and outermost parts rotate with different angular velocities. This is motivated by the possibility of NSs having superfluid interiors, phase transitions, and angular momentum transfer during accretion processes. We show that, in general relativity, the relative difference in angular velocity between different parts of an NS induces a change in the moment of inertia compared to that of rigid rotation. The relative change depends nonlinearly on where the angular velocity jump occurs inside the NS. For the same observed angular velocity in both configurations, if the jump location is close to the star's surface, which is possible in central compact objects (CCOs) and accreting stars, the relative change in the moment of inertia is close to that of the angular velocity (as expected from total angular momentum considerations). If the jump occurs deep within the NS, for instance due to phase transitions or superfluidity, smaller relative changes in the moment of inertia are observed; we found that if it occurs at a radial distance smaller than approximately $40\%$ of the star's radius, the relative changes are negligible. Additionally, we outline the relevance of the systematic uncertainties that non-rigidity could introduce in some NS observables, such as radius, ellipticity, and the rotational energy budget of pulsars, which could explain the X-ray luminosity of some sources. Finally, we also show that non-rigidity weakens the universal $I$-Love-$Q$ relations.
We investigate chaos in mixed-phase-space Hamiltonian systems using time series of the finite-time Lyapunov exponents. The methodology we propose uses the number of Lyapunov exponents close to zero to define regimes of ordered (stickiness), semi-ordered (or semi-chaotic), and strongly chaotic motion. The dynamics is then investigated by looking at the consecutive time spent in each regime, the transitions between different regimes, and the regions in phase space associated with them. Applying our methodology to a chain of coupled standard maps we obtain: (i) that it allows for an improved numerical characterization of stickiness in high-dimensional Hamiltonian systems when compared to previous analyses based on the distribution of recurrence times; (ii) that the transition probabilities between different regimes are determined by the phase-space volume associated with the corresponding regions; and (iii) the dependence of the Lyapunov exponents on the coupling strength.
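A sketch of the finite-time Lyapunov exponent (FTLE) time series for a chain of coupled standard maps, computed with the standard Benettin/QR tangent-map method. The nearest-neighbour coupling used here, the parameter values, and the window length are assumptions for illustration; the paper's specific coupling scheme and thresholds may differ.

```python
import numpy as np

def coupled_standard_maps_ftle(N=4, K=1.5, xi=0.05, n_windows=200, window=50, seed=0):
    """Time series of finite-time Lyapunov exponents for a chain of N coupled
    standard maps, obtained by QR re-orthonormalization of the tangent frame."""
    rng = np.random.default_rng(seed)
    x = rng.random(N)
    p = rng.random(N) - 0.5
    dim = 2 * N
    Q = np.eye(dim)                              # orthonormal tangent frame
    ftle = np.zeros((n_windows, dim))

    for w in range(n_windows):
        log_r = np.zeros(dim)
        for _ in range(window):
            right = np.roll(x, -1) - x           # x_{i+1} - x_i (periodic chain)
            left = np.roll(x, 1) - x             # x_{i-1} - x_i
            kick = (K * np.sin(2*np.pi*x)
                    + xi * (np.sin(2*np.pi*right) + np.sin(2*np.pi*left))) / (2*np.pi)
            # A = d(kick)/dx, the only nontrivial block of the tangent map
            A = np.diag(K*np.cos(2*np.pi*x) - xi*(np.cos(2*np.pi*right) + np.cos(2*np.pi*left)))
            for i in range(N):
                A[i, (i+1) % N] += xi * np.cos(2*np.pi*right[i])
                A[i, (i-1) % N] += xi * np.cos(2*np.pi*left[i])
            # map update: p' = p + kick(x), x' = x + p' (mod 1)
            p = p + kick
            x = (x + p) % 1.0
            # tangent map J = [[I+A, I], [A, I]] acting on (dx, dp)
            J = np.block([[np.eye(N) + A, np.eye(N)], [A, np.eye(N)]])
            Q, R = np.linalg.qr(J @ Q)
            log_r += np.log(np.abs(np.diag(R)))
        ftle[w] = log_r / window                 # finite-time exponents for this window
    return ftle

ftle = coupled_standard_maps_ftle()
print("mean exponents over all windows:", np.sort(ftle.mean(axis=0))[::-1].round(3))
print("fraction of exponents near zero (|lambda| < 0.05):", (np.abs(ftle) < 0.05).mean())
```

Counting, window by window, how many exponents fall close to zero is the kind of diagnostic the abstract uses to label each stretch of the trajectory as ordered, semi-ordered, or strongly chaotic.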
(Abridged) Neutron stars (NSs), the densest known objects composed of matter, provide a unique laboratory to probe whether strange quark matter is the true ground state of matter. We investigate the parameter space of the equation of state of strange stars using a quantum chromodynamics (QCD)-informed model. The parameters - related to the energy density difference between quark matter and the QCD vacuum, the strength of strong interactions, and the gap parameter for color superconductivity - are sampled via quasi-random Latin hypercube sampling to ensure uniform coverage. To constrain them, we incorporate observational data on the maximum mass of NSs (from binary and merger systems), the radii of $1.4\,M_{\odot}$ NSs (from gravitational wave and electromagnetic observations), and tidal deformabilities (from GW170817). Our results show that quark strong interactions play a key role, requiring at least a $20\%$ deviation from the free-quark limit. We also find that color superconductivity is relevant, with the gap parameter reaching up to $\sim 230$ MeV for a strange quark mass of $100$ MeV. The surface-to-vacuum energy density jump lies in the range $(1.1-2.2)\,\rho_{\rm sat}$, where $\rho_{\rm sat} \simeq 2.7 \times 10^{14}$ g cm$^{-3}$. Observational constraints also imply that a $1.4\,M_{\odot}$ quark star has a radius of $(10.0-12.3)$ km and tidal deformability between $270$ and $970$. These are consistent with the low mass and radius inferred for the compact object XMMU J173203.3-344518. Our results provide useful inputs for future studies on quark and hybrid stars, including their tidal properties, thermal evolution, quasi-normal modes, and ellipticities.
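A sketch of the quasi-random Latin hypercube sampling step using scipy.stats.qmc. The parameter names and ranges below (an effective bag constant, a strong-interaction parameter, and the color-superconducting gap) are placeholders chosen for illustration, not the bounds actually used in the paper.

```python
import numpy as np
from scipy.stats import qmc

# Illustrative parameter ranges (placeholders, not the paper's actual bounds):
# B^(1/4): effective bag constant [MeV], a4: strong-interaction parameter,
# Delta: color-superconducting gap [MeV].
lower = np.array([130.0, 0.5, 0.0])
upper = np.array([170.0, 1.0, 250.0])

sampler = qmc.LatinHypercube(d=3, seed=42)
unit_samples = sampler.random(n=10000)            # points in the unit cube
samples = qmc.scale(unit_samples, lower, upper)   # map to physical ranges

B14, a4, Delta = samples.T
print(f"B^(1/4) in [{B14.min():.1f}, {B14.max():.1f}] MeV, "
      f"a4 in [{a4.min():.2f}, {a4.max():.2f}], "
      f"Delta in [{Delta.min():.1f}, {Delta.max():.1f}] MeV")
```

Each sample would then be fed to the equation-of-state solver and kept or rejected according to the mass, radius, and tidal-deformability constraints listed in the abstract.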
Accurate star formation histories (SFHs) of galaxies are fundamental for understanding the build-up of their stellar content. However, the most accurate SFHs - those obtained from colour-magnitude diagrams (CMDs) of resolved stars reaching the oldest main sequence turnoffs (oMSTO) - are presently limited to a few systems in the Local Group. It is therefore crucial to determine the reliability and range of applicability of SFHs derived from integrated light spectroscopy, as this affects our understanding of unresolved galaxies from low to high redshift. Our aim is to evaluate the reliability of current full spectral fitting techniques in deriving SFHs from integrated light spectroscopy, by comparing SFHs from integrated spectra to those obtained from deep CMDs of resolved stars. We have obtained a high signal-to-noise (S/N $\sim 36.3$ per \AA) integrated spectrum of a field in the bar of the Large Magellanic Cloud (LMC) using EFOSC2 at the 3.6 meter telescope at La Silla Observatory. For this same field, resolved stellar data reaching the oMSTO are available. We have compared the star formation rate (SFR) as a function of time and the age-metallicity relation (AMR) obtained from the integrated spectrum using {\tt STECKMAP}, and from the CMD using the IAC-star/MinnIAC/IAC-pop set of routines. For the sake of completeness we also use and discuss other synthesis codes ({\tt STARLIGHT} and {\tt ULySS}) to derive the SFR and AMR from the integrated LMC spectrum. We find very good agreement (average differences $\sim 4.1\%$) between the SFR(t) and AMR obtained using {\tt STECKMAP} on the integrated light spectrum and those from the CMD analysis. {\tt STECKMAP} minimizes the impact of the age-metallicity degeneracy and has the advantage of preferring smooth solutions to recover complex SFHs by means of a penalized $\chi^2$. [abridged]
In this work we analyse the growth of the cumulative number of confirmed COVID-19 cases until March 27th, 2020, for countries of Asia, Europe, and North and South America. Our results show (i) that power-law growth is observed for all countries; (ii) using the distance correlation, that the power-law curves of different countries are statistically highly correlated, suggesting the universality of such curves around the world; and (iii) that soft quarantine strategies are inefficient to flatten the growth curves. Furthermore, we present a model and strategies which allow the government to reach the flattening of the power-law curves. We find that, besides the social distancing of individuals, of well-known relevance, the strategy of identifying and isolating infected individuals at a large daily rate can help to flatten the power laws. These are essentially the strategies used in the Republic of Korea. The high correlation between the power-law curves of different countries strongly indicates that government containment measures can be applied with success around the whole world. These measures must be stringent and applied as soon as possible.
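A sketch of the two ingredients of the analysis: fitting a power law $C(t)\sim A\,t^{\mu}$ to a cumulative case curve by linear regression in log-log space, and computing the distance correlation between two such curves. The synthetic curves are placeholders; in practice the arrays would be filled with reported case counts.

```python
import numpy as np

def fit_power_law(days, cumulative_cases):
    """Fit C(t) ~ A * t^mu by linear regression in log-log space."""
    logt, logc = np.log(days), np.log(cumulative_cases)
    mu, logA = np.polyfit(logt, logc, 1)
    return mu, np.exp(logA)

def distance_correlation(x, y):
    """Szekely-Rizzo distance correlation between two 1-D samples."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    a = np.abs(x[:, None] - x[None, :])
    b = np.abs(y[:, None] - y[None, :])
    A = a - a.mean(0) - a.mean(1)[:, None] + a.mean()   # doubly centered distances
    B = b - b.mean(0) - b.mean(1)[:, None] + b.mean()
    dcov2 = (A * B).mean()
    dvar_x, dvar_y = (A * A).mean(), (B * B).mean()
    return np.sqrt(dcov2 / np.sqrt(dvar_x * dvar_y))

# Toy example with synthetic curves (real case counts would be loaded from data)
days = np.arange(1, 31)
country_1 = 5.0 * days**2.2 * (1 + 0.05 * np.random.default_rng(1).standard_normal(30))
country_2 = 3.0 * days**2.4 * (1 + 0.05 * np.random.default_rng(2).standard_normal(30))
print("fitted exponents:", fit_power_law(days, country_1)[0], fit_power_law(days, country_2)[0])
print("distance correlation:", distance_correlation(country_1, country_2))
```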
We investigate how the introduction of different types of disorder affects the generation of entanglement between the internal (spin) and external (position) degrees of freedom in one-dimensional quantum random walks (QRW). Disorder is modeled by adding another random feature to the QRW, i.e., the quantum coin that drives the system's evolution is randomly chosen at each position and/or at each time step, giving rise to either dynamic, fluctuating, or static disorder. The first one is position-independent, with every lattice site having the same coin at a given time; the second has time- and position-dependent randomness; while the third one is time-independent. We show for several levels of disorder that dynamic disorder is the most powerful entanglement generator, followed closely by fluctuating disorder. Static disorder is the least efficient entangler, being almost always less efficient than the ordered case. Also, dynamic and fluctuating disorder lead to maximally entangled states asymptotically in time for any initial condition, while static disorder has no asymptotic limit and, similarly to the ordered case, has a long-time behavior highly sensitive to the initial conditions.
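A sketch of the three disorder types and of the coin-position entanglement measure. The one-parameter real coin family, the uniform range of coin angles, and the initial spin state are illustrative assumptions; the entanglement is quantified by the von Neumann entropy of the coin's reduced density matrix, as is standard.

```python
import numpy as np

def coin(theta):
    """Real coin parametrized by theta (Hadamard at theta = pi/4); illustrative family."""
    return np.array([[np.cos(theta), np.sin(theta)],
                     [np.sin(theta), -np.cos(theta)]])

def entanglement_entropy(psi):
    """von Neumann entropy (in bits) of the coin reduced density matrix."""
    rho = np.einsum('xi,xj->ij', psi, psi.conj())
    evals = np.linalg.eigvalsh(rho).clip(1e-15, 1.0)
    return float(-(evals * np.log2(evals)).sum())

def walk(disorder, steps=200, seed=0):
    rng = np.random.default_rng(seed)
    L = 2 * steps + 1
    psi = np.zeros((L, 2), complex)
    psi[steps] = [1/np.sqrt(2), 1j/np.sqrt(2)]            # initial coin state at the origin
    static_thetas = rng.uniform(0, np.pi/2, L)            # used only for 'static'
    for _ in range(steps):
        if disorder == 'ordered':
            thetas = np.full(L, np.pi/4)                  # same (Hadamard) coin everywhere
        elif disorder == 'dynamic':
            thetas = np.full(L, rng.uniform(0, np.pi/2))  # same coin at all sites, new each step
        elif disorder == 'fluctuating':
            thetas = rng.uniform(0, np.pi/2, L)           # new coin at each site and step
        elif disorder == 'static':
            thetas = static_thetas                        # site-dependent, fixed in time
        # apply the (possibly site-dependent) coin, then the conditional shift
        psi = np.einsum('xij,xj->xi', np.stack([coin(th) for th in thetas]), psi)
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                          # spin-up component moves right
        new[:-1, 1] = psi[1:, 1]                          # spin-down component moves left
        psi = new
    return entanglement_entropy(psi)

for kind in ['ordered', 'dynamic', 'fluctuating', 'static']:
    print(f"{kind:12s} entropy after 200 steps: {walk(kind):.4f}")
```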
We present the detailed process of converting the classical Fourier Transform algorithm into the quantum one by using QR decomposition. This provides an example of a technique for building quantum algorithms from classical ones. The Quantum Fourier Transform is one of the most important quantum subroutines known at present, used in most algorithms that have an exponential speed-up compared to the classical ones. We briefly review the Fast Fourier Transform and then make explicit all the steps that led to the quantum formulation of the algorithm, generalizing Coppersmith's work.
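A small numerical illustration (not the paper's construction, which works at the level of the FFT factorization) of how the unitary QFT matrix relates to the classical DFT and of how a QR decomposition exposes the unitary content of the classical transform: because the columns of the unnormalized DFT matrix are mutually orthogonal with norm $\sqrt{N}$, its QR factors are a unitary Fourier matrix (up to column phases) and an essentially trivial $R$.

```python
import numpy as np

def qft_matrix(n_qubits):
    """Unitary matrix of the quantum Fourier transform on n_qubits qubits."""
    N = 2 ** n_qubits
    j, k = np.meshgrid(np.arange(N), np.arange(N), indexing='ij')
    return np.exp(2j * np.pi * j * k / N) / np.sqrt(N)

n = 3
N = 2 ** n
F = qft_matrix(n)

# Unitarity: F is a valid quantum gate
print("F unitary:", np.allclose(F.conj().T @ F, np.eye(N)))

# Same linear map as the classical DFT, up to normalization and sign convention
x = np.random.default_rng(0).standard_normal(N)
print("matches numpy FFT:", np.allclose(F @ x, np.sqrt(N) * np.fft.ifft(x)))

# QR decomposition of the unnormalized DFT matrix M = sqrt(N) * F: Q is unitary
# (the QFT up to column phases) and |R| reduces to sqrt(N) times the identity.
M = np.sqrt(N) * F
Q, R = np.linalg.qr(M)
print("Q unitary:", np.allclose(Q.conj().T @ Q, np.eye(N)))
print("|R| is sqrt(N)*I:", np.allclose(np.abs(R), np.sqrt(N) * np.eye(N)))
```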
We investigate state estimation in discrete-time quantum walks with a single absorbing boundary. Using a spectral approach, we obtain closed expressions for the escape probability as a function of the coin state and the boundary position, and for their corresponding classical Fisher information under a simple absorption readout. Comparing with the single-copy quantum Fisher information shows a clear complementarity: nearby boundaries carry broad information about the population angle of the coin, whereas moderate or distant boundaries reveal phase-sensitive regions. Because a single boundary probes only one information direction, combining two boundary placements yields a full-rank Fisher matrix and tight joint Cramér--Rao bounds, while retaining a binary, tomography-free measurement. We outline an integrated-photonics implementation in which an on-chip sink realizes the absorber and estimate a substantial reduction in configuration count compared to mode-resolved qubit tomography. These results identify absorption in quantum walks as a simple and scalable primitive for coin-state metrology.
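A sketch of the binary absorption readout and of its classical Fisher information for the coin's population angle $\theta$, using a Hadamard walk, a finite time horizon as a proxy for the asymptotic escape probability, and numerical differentiation; the paper itself obtains closed-form spectral expressions, so everything below is an illustrative approximation.

```python
import numpy as np

H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)               # Hadamard coin

def escape_probability(theta, phi=0.0, boundary=5, steps=600):
    """Probability of not being absorbed at the boundary within 'steps' steps
    (a finite-time proxy for the asymptotic escape probability)."""
    span = steps                                            # lattice large enough for 'steps' steps
    L = 2 * span + 1
    origin = span
    psi = np.zeros((L, 2), complex)
    psi[origin] = [np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)]
    for _ in range(steps):
        psi = psi @ H.T                                     # coin on every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                            # spin-up moves right
        new[:-1, 1] = psi[1:, 1]                            # spin-down moves left
        psi = new
        psi[origin + boundary] = 0.0                        # absorb (project out) at the boundary
    return float((np.abs(psi) ** 2).sum())                  # surviving norm = escape probability

def fisher_information(theta, boundary, dtheta=1e-4):
    """Classical Fisher information of the binary absorbed/escaped readout."""
    p = escape_probability(theta, boundary=boundary)
    dp = (escape_probability(theta + dtheta, boundary=boundary)
          - escape_probability(theta - dtheta, boundary=boundary)) / (2 * dtheta)
    return dp ** 2 / (p * (1 - p))

theta = 1.0
for b in (1, 8):
    print(f"boundary at +{b}: escape prob = {escape_probability(theta, boundary=b):.4f}, "
          f"F(theta) = {fisher_information(theta, boundary=b):.4f}")
```

Repeating the same estimate for two boundary placements gives the two diagonal entries of the joint Fisher matrix discussed in the abstract.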
We examine, through a Boltzmann equation approach, the generating action of hard thermal loops in the background of gravitational fields. Using the gauge and Weyl invariance of the theory at high temperature, we derive an explicit closed-form expression for the effective action.
We investigate the ballistic spreading behavior of one-dimensional discrete-time quantum walks whose time evolution is driven by any balanced quantum coin. We obtain closed-form expressions for the long-time variance of the position of quantum walks starting from any initial qubit (spin-$1/2$ particle) and position states following delta-like (local), Gaussian, and uniform probability distributions. By averaging over all spin states, we find that the average variance of a quantum walk starting from a local state is independent of the quantum coin, while for Gaussian and uniform states it depends on the sum of relative phases between spin states given by the quantum coin, being non-dispersive for a Fourier walk and large initial dispersion. We also perform numerical simulations of the average probability distribution and variance over time to compare them with our analytical results.
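A sketch that numerically probes the claim for a local initial state: it averages the long-time position variance over random initial qubits for two different balanced coins and reports $\sigma^2/t^2$. The specific balanced-coin parametrization and the number of sampled spin states are arbitrary choices.

```python
import numpy as np

def balanced_coin(alpha, beta):
    """General balanced 2x2 coin (all entries have modulus 1/sqrt(2))."""
    return np.array([[1.0, np.exp(1j * alpha)],
                     [np.exp(1j * beta), -np.exp(1j * (alpha + beta))]]) / np.sqrt(2)

def variance_after(coin, spin, steps=300):
    """Position variance of a walk started at the origin with the given spin state."""
    L = 2 * steps + 1
    psi = np.zeros((L, 2), complex)
    psi[steps] = spin / np.linalg.norm(spin)
    for _ in range(steps):
        psi = psi @ coin.T                       # apply the coin on every site
        new = np.zeros_like(psi)
        new[1:, 0] = psi[:-1, 0]                 # spin-up component moves right
        new[:-1, 1] = psi[1:, 1]                 # spin-down component moves left
        psi = new
    x = np.arange(L) - steps
    prob = (np.abs(psi) ** 2).sum(axis=1)
    mean = (x * prob).sum()
    return ((x - mean) ** 2 * prob).sum()

rng = np.random.default_rng(7)
spins = rng.standard_normal((50, 2)) + 1j * rng.standard_normal((50, 2))  # ~Haar-random qubits
steps = 300
for name, (a, b) in [("Hadamard", (0.0, 0.0)), ("Kempe", (np.pi/2, np.pi/2))]:
    C = balanced_coin(a, b)
    sigma2 = np.mean([variance_after(C, s, steps) for s in spins])
    print(f"{name:9s}: spin-averaged sigma^2 / t^2 = {sigma2 / steps**2:.4f}")
```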
The agent-based modeling and simulation (ABMS) paradigm has been used to analyze, reproduce, and predict phenomena related to many application areas. Although there are many agent-based platforms that support simulation development, they rely on programming languages that require extensive programming knowledge. Model-driven development (MDD) has been explored to facilitate simulation modeling by means of high-level modeling languages that provide reusable building blocks that hide computational complexity, and by code generation. However, there is still limited knowledge of how MDD approaches to ABMS contribute to increasing development productivity and quality. In this paper, we thus present an empirical study that quantitatively compares the use of MDD and ABMS platforms, mainly in terms of effort and developer mistakes. Our evaluation was performed using MDD4ABMS - an MDD approach with a core and extensions to two application areas, one of which was developed for this study - and NetLogo, a widely used platform. The obtained results show that MDD4ABMS requires less effort to develop simulations with similar (sometimes better) design quality than NetLogo, giving evidence of the benefits that MDD can provide to ABMS.
Requirements Engineering (RE) is the Software Engineering (SE) process of defining, documenting, and maintaining the requirements of a problem. It is one of the most complex processes of SE because it addresses the relation between customer and developer. Learning RE may be abstract and complex for most students because many of them cannot visualize the subject directly applied. With the advancement of technology, Virtual Reality (VR) hardware is becoming increasingly more accessible, and it is not rare to use it in education. Little research and few systematic studies explain the integration between SE and VR, and even fewer between RE and VR. Hence, this systematic review proposes to select and present studies that relate the use of VR applications to teach SE and RE concepts. We selected nine studies to include in this review. Despite the lack of articles addressing the topic, the results from this study show that the use of VR technologies for learning SE is still very seminal. The projects are based essentially on visualization. There is a lack of tasks for building modeling artifacts, as well as of interaction with stakeholders and other software engineers. Learning tasks and the monitoring of students' progress by teachers also need to be considered.
3XMM J185246.6+003317 is a transient magnetar located in the vicinity of the supernova remnant Kes\,79. So far, observations have only set upper limits to its surface magnetic field and spindown, and there is no estimate for its mass and radius. Using ray-tracing modelling and Bayesian inference for the analysis of several light curves spanning a period of around three weeks, we have found that it may be one of the most massive neutron stars to date. In addition, our analysis suggests a multipolar magnetic field structure with a subcritical field strength and a carbon atmosphere composition. Due to the time-resolution limitation of the available light curves, we estimate the surface magnetic field and the mass to be $\log_{10} (B/{\rm G}) = 11.89^{+0.19}_{-0.93}$ and $M = 2.09^{+0.16}_{-0.09}~M_{\odot}$ at the $1\sigma$ confidence level, while the radius is estimated to be $R = 12.02^{+1.44}_{-1.42}$ km at the $2\sigma$ confidence level. These estimates were verified by simulations, i.e., data injections with known model parameters and their subsequent recovery. The best-fitting model has three small hot spots, two of them in the southern hemisphere. These are, however, just first estimates and conclusions, based on a simple ray-tracing model with anisotropic emission; we also estimate the impact of the modelling on the parameter uncertainties and the relevant phenomena on which to focus in more precise analyses. We interpret the above best-fitting results as due to accretion of supernova layers/interstellar medium onto 3XMM J185246.6+003317, leading to burying and a subsequent re-emergence of the magnetic field, and a carbon atmosphere being formed possibly due to hydrogen/helium diffusive nuclear burning. Finally, we briefly discuss some consequences of our findings for superdense matter constraints.
We study the probability flux on the central vertex in continuous-time quantum walks on weighted tree graphs. In a weighted graph, each edge has a weight we call hopping. This hopping sets the jump rate of the particle between the vertices connected by the edge. Here, the edges of the central vertex (root) have a hopping parameter $J$ larger than those of the other edges. For star graphs, this hopping only sets how often the walker visits the central vertex over time. However, for weighted spider graphs $S_{n,2}$ and $S_{n,3}$, the probability on the central vertex drops with $J^2$ for walks starting from any superposition of leaf vertices. We map the Cayley trees $C_{3,2}$ and $C_{3,3}$ into these spider graphs and observe the same dependency. Our results suggest this is a general feature of such walks on weighted trees and a way of probing decoherence effects in an open quantum system context.
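A sketch of the continuous-time quantum walk on a weighted spider graph $S_{n,2}$: the Hamiltonian is taken to be the weighted adjacency matrix, the walk starts in the uniform superposition of leaf vertices, and the probability at the root is monitored as the root hopping $J$ is increased. The vertex labelling, the unit hopping on the outer edges, and the time grid are illustrative choices.

```python
import numpy as np
from scipy.linalg import expm

def spider_hamiltonian(n, J, inner_hop=1.0):
    """Weighted spider graph S_{n,2}: root 0 -- n branch vertices (hopping J)
    -- n leaves (hopping inner_hop). H is the weighted adjacency matrix."""
    dim = 2 * n + 1
    H = np.zeros((dim, dim))
    for i in range(1, n + 1):
        H[0, i] = H[i, 0] = J                   # root -- branch vertex
        H[i, n + i] = H[n + i, i] = inner_hop   # branch vertex -- leaf
    return H

def root_probability(n, J, times):
    """Probability at the root for a walk starting in the uniform leaf superposition."""
    H = spider_hamiltonian(n, J)
    psi0 = np.zeros(2 * n + 1, complex)
    psi0[n + 1:] = 1 / np.sqrt(n)               # equal superposition of the n leaves
    return np.array([np.abs(expm(-1j * H * t) @ psi0)[0] ** 2 for t in times])

times = np.linspace(0.0, 20.0, 400)
for J in (1.0, 2.0, 4.0, 8.0):
    p_root = root_probability(n=3, J=J, times=times)
    print(f"J = {J:4.1f}: max root probability = {p_root.max():.4f}, "
          f"time average = {p_root.mean():.4f}")
```

Increasing $J$ and inspecting how the root probability is suppressed gives a direct numerical handle on the dependence reported in the abstract.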