Universidad Austral de Chile
We investigate FLRW cosmology in the framework of symmetric teleparallel $f(Q)$ gravity with a nonminimal coupling between dark matter and the gravitational field. In the noncoincidence gauge, the field equations admit an equivalent multi-scalar-field representation, whose phase space we investigate using the Hubble-normalization approach. We classify all stationary points for an arbitrary function $f(Q)$ and discuss the physical properties of the asymptotic solutions. For the power-law theory, we perform a detailed stability analysis and show that the de Sitter solution is the unique future attractor, while the matter-dominated point appears as a saddle point. Moreover, there exists a family of scaling solutions that can be related to inflationary dynamics. In contrast with uncoupled $f(Q)$ models, the presence of the coupling introduces a viable matter-dominated era alongside late-time accelerated expansion. Our study shows that the coupling function plays a crucial role in the cosmological dynamics of $f(Q)$ gravity.
We derive the odd-parity perturbation equation in scalar-tensor theories with a nonminimal kinetic coupling, a sector of the general Horndeski theory in which the kinetic term is coupled to the metric and the Einstein tensor. We derive the potential of the perturbation by identifying a master function and switching to tortoise coordinates. We then prove the mode stability under linear odd-parity perturbations of hairy black holes in this sector of Horndeski theory when a cosmological constant term is included in the action. Finally, we comment on the existence of slowly rotating black hole solutions in this setup and discuss their implications for the physics of compact-object configurations, such as neutron stars.
Vision Transformers (ViTs) have been applied successfully to image classification problems where large annotated datasets are available. When fewer annotations are available, as in biomedical applications, image augmentation techniques that introduce image variations or combinations have been proposed. However, for ViT patch sampling, little has been explored beyond grid-based strategies. In this work, we propose Random Vision Transformer Tokens (RaViTT), a random patch sampling strategy that can be incorporated into existing ViTs. We experimentally evaluated RaViTT for image classification, comparing it with a baseline ViT and state-of-the-art (SOTA) augmentation techniques on 4 datasets, including ImageNet-1k and CIFAR-100. Results show that RaViTT increases the accuracy of the baseline ViT on all datasets and outperforms the SOTA augmentation techniques on 3 out of 4 datasets by a significant margin (+1.23% to +4.32%). Interestingly, RaViTT accuracy improvements can be achieved even with fewer tokens, thus reducing the computational load of any ViT model for a given accuracy value.
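A minimal sketch of the core idea, patches drawn at uniformly random positions instead of the usual non-overlapping grid (function name, shapes, and token count are illustrative, not the paper's API):

```python
import numpy as np

def random_patch_tokens(img, patch=16, n_tokens=196, rng=None):
    """Sample n_tokens patches at uniformly random positions (RaViTT-style).

    A standard ViT would instead tile the image with a fixed grid of
    non-overlapping patches; here patch positions are random and may overlap.
    """
    rng = np.random.default_rng(rng)
    H, W, C = img.shape
    ys = rng.integers(0, H - patch + 1, size=n_tokens)
    xs = rng.integers(0, W - patch + 1, size=n_tokens)
    # Each token is a flattened patch, ready for the ViT linear projection.
    return np.stack([img[y:y + patch, x:x + patch].reshape(-1)
                     for y, x in zip(ys, xs)])

img = np.random.default_rng(0).random((224, 224, 3))
tokens = random_patch_tokens(img, patch=16, n_tokens=196, rng=0)
print(tokens.shape)  # (196, 768)
```

Because the number of sampled positions is a free parameter, the same mechanism allows using fewer tokens than the grid would produce, which is how the computational savings mentioned above arise.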
Real-time visualization of computational simulations running on graphics processing units (GPUs) is a valuable feature in modern science and technological research, as it allows researchers to visually assess the quality and correctness of their computational models while the simulation runs. Due to the high throughput involved in GPU-based simulations, classical visualization approaches based on copying data to RAM or storage are no longer feasible, as they imply large memory transfers between GPU and CPU at each step, reducing both computational performance and interactivity. Implementing real-time visualizers for GPU simulation codes is a challenging task, as it involves i) low-level integration of graphics APIs (e.g., OpenGL and Vulkan) into the general-purpose GPU code, ii) careful and efficient handling of memory spaces, and iii) finding a balance between rendering and computing, as both need the GPU resources. In this work we present Mìmir, a CUDA/Vulkan interoperability C++ library that allows users to add real-time 2D/3D visualization to CUDA codes with low programming effort. With Mìmir, researchers can leverage state-of-the-art CUDA/Vulkan interoperability features without needing to invest time in learning the complex low-level technical aspects involved. Internally, Mìmir streamlines the interoperability mapping between CUDA device memory containing simulation data and Vulkan graphics resources, so that changes in the data are instantly reflected in the visualization. This abstraction scheme allows generating visualizations with minimal alteration of the original source code: only the GPU memory allocation lines for the data to be visualized need to be replaced by the API calls provided by Mìmir, along with a few other optional changes.
The sky-averaged cosmological 21 cm signal can improve our understanding of the evolution of the early Universe from the Dark Ages to the end of the Epoch of Reionization. Although the EDGES experiment reported an absorption profile of this signal, there have been concerns about the plausibility of these results, motivating independent validation experiments. One of these initiatives is the Mapper of the IGM Spin Temperature (MIST), which is planned to be deployed at different remote locations around the world. One of its key features is that it seeks to comprehensively compensate for systematic uncertainties through detailed modeling and characterization of its different instrumental subsystems, particularly its antenna. Here we propose a novel optimization scheme that can be used to design an antenna for MIST, improving bandwidth, return loss, and beam chromaticity. This new procedure combines the Particle Swarm Optimization (PSO) algorithm with commercial electromagnetic simulation software (HFSS). We improved the performance of two antenna models: a rectangular blade antenna, similar to the one used in the EDGES experiment, and a trapezoidal bow-tie antenna. Although the performance of both antennas improved after applying our optimization method, we found that our bow-tie model outperforms the blade antenna by achieving lower reflection losses and beam chromaticity across the entire band of interest. To further validate the optimization process, we also built and characterized 1:20 scale models of both antenna types, showing excellent agreement with our simulations.
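The optimization loop described above can be sketched with a minimal PSO implementation; the quadratic cost below is a toy stand-in for the real figure of merit, which the paper evaluates by running a full HFSS electromagnetic simulation per candidate antenna geometry (all parameter values here are illustrative):

```python
import numpy as np

def pso(cost, bounds, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal Particle Swarm Optimization over a box-bounded design space."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, lo.size))   # particle positions
    v = np.zeros_like(x)                              # particle velocities
    pbest = x.copy()                                  # personal bests
    pcost = np.array([cost(p) for p in x])
    g = pbest[pcost.argmin()].copy()                  # global best
    for _ in range(iters):
        r1, r2 = rng.random((2,) + x.shape)
        # Inertia + attraction toward personal and global bests.
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        c = np.array([cost(p) for p in x])
        better = c < pcost
        pbest[better], pcost[better] = x[better], c[better]
        g = pbest[pcost.argmin()].copy()
    return g, pcost.min()

# Toy stand-in for the antenna figure of merit (the real cost combines
# return loss and beam chromaticity computed by HFSS for a geometry p).
cost = lambda p: np.sum((p - 0.3) ** 2)
lo, hi = np.zeros(4), np.ones(4)
best, val = pso(cost, (lo, hi))
```

In the actual procedure, each component of `p` would parametrize the antenna geometry (e.g., panel dimensions), and the expensive simulator call replaces the toy `cost`.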
In a number of data-driven applications, such as detection of arrhythmia, interferometry, or audio compression, observations are acquired interchangeably in the time or frequency domain: temporal observations allow us to study the spectral content of signals (e.g., audio), while frequency-domain observations are used to reconstruct temporal/spatial data (e.g., MRI). Classical approaches for spectral analysis rely either on i) a discretisation of the time and frequency domains, where the fast Fourier transform stands out as the \textit{de facto} off-the-shelf resource, or ii) stringent parametric models with closed-form spectra. However, the general literature fails to cater for missing observations and noise-corrupted data. Our aim is to address the lack of a principled treatment of data acquired interchangeably in the temporal and frequency domains in a way that is robust to missing or noisy observations, and that at the same time models uncertainty effectively. To achieve this aim, we first define a joint probabilistic model for the temporal and spectral representations of signals, and then perform a Bayesian model update in the light of observations, thus jointly reconstructing the complete (latent) time and frequency representations. The proposed model is analysed from a classical spectral analysis perspective, and its implementation is illustrated through intuitive examples. Lastly, we show that the proposed model is able to perform joint time and frequency reconstruction of real-world audio, healthcare, and astronomy signals, successfully dealing with missing data and handling uncertainty (noise) naturally, compared against both classical and modern approaches for spectral estimation.
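The Bayesian update described above can be illustrated with a toy linear-Gaussian version of such a model (the smooth prior, lengthscale, and test signal below are illustrative choices, not the paper's exact construction): a Gaussian prior over the latent time series is conditioned on a noisy subset of time samples, and since the DFT is linear, a spectral posterior follows by pushing it through the time-domain posterior.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 64
t = np.arange(n)
x_true = np.sin(2 * np.pi * 3 * t / n) + 0.5 * np.sin(2 * np.pi * 7 * t / n)

# Observe a random subset of time samples, corrupted by noise.
obs = rng.choice(n, size=32, replace=False)
sigma = 0.1
y = x_true[obs] + sigma * rng.standard_normal(obs.size)

# Gaussian prior on the latent time series, x ~ N(0, P); a smooth (RBF)
# covariance couples the missing samples to the observed ones.
d = np.abs(t[:, None] - t[None, :])
P = np.exp(-0.5 * (d / 3.0) ** 2)

# Posterior mean under the linear-Gaussian model y = S x + eps.
S = np.eye(n)[obs]                      # selection matrix for observed indices
K = P @ S.T @ np.linalg.inv(S @ P @ S.T + sigma**2 * np.eye(obs.size))
x_post = K @ y                          # reconstructed (latent) time series

# The spectral representation is a linear map of x, so its posterior mean
# is obtained by applying the DFT to the time-domain posterior mean.
X_post = np.fft.rfft(x_post)
```

The same conditioning works when observations live in the frequency domain (rows of `S` become DFT rows), which is what makes a joint treatment of both acquisition modes natural in this framework.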
We investigate different types of complex soliton solutions with regard to their stability against linear perturbations. In the Bullough-Dodd scalar field theory we find linearly stable complex $\mathcal{PT}$-symmetric solutions and linearly unstable solutions for which the $\mathcal{PT}$-symmetry is broken. Both types of solutions have real energies. The auxiliary Sturm-Liouville eigenvalue equation in the stability analysis for the $\mathcal{PT}$-symmetric solutions can be solved exactly by supersymmetrically mapping it to an isospectral partner system involving a shifted and scaled inverse $\cosh$-squared potential. We identify exactly one shape mode, in the form of a bound-state solution, together with scattering states which, when used as linear perturbations, leave the solutions stable. The auxiliary problem for the solutions with broken $\mathcal{PT}$-symmetry involves a complex shifted and scaled inverse $\sin$-squared potential. The corresponding bound- and scattering-state solutions have complex eigenvalues, such that, when used as linear perturbations of the corresponding soliton solutions, they lead to their decay or blow-up as time evolves.
We examine the cosmological dynamics of Einstein-Gauss-Bonnet gravity models in a four-dimensional spatially flat FLRW metric. These models are described by an $f(R,\mathcal{G})=f(R+\mu\mathcal{G})$ theory of gravity. They are equivalent to models linear in the Ricci scalar $R$ and in the Gauss-Bonnet scalar $\mathcal{G}$ with one nonminimally coupled scalar field without a kinetic term. We analyze the stability of the de Sitter solutions and construct the phase space of the field equations to investigate the cosmological evolution. We show that $f(R+\mu\mathcal{G})$ theory provides a double inflationary epoch, which can be used to unify the early-time and late-time acceleration phases of the universe. Moreover, we discuss the initial value problem for the theory to be cosmologically viable. Finally, the effects of cold dark matter on the cosmic evolution are discussed.
It is shown that the Ablowitz-Kaup-Newell-Segur (AKNS) integrable hierarchy can be obtained as the dynamical equations of three-dimensional General Relativity with a negative cosmological constant. This geometrization of the AKNS system is possible through the construction of novel boundary conditions for the gravitational field. These are invariant under an asymptotic symmetry group characterized by an infinite set of AKNS commuting conserved charges. Gravitational configurations are studied by means of $SL(2,\mathbb{R})$ conjugacy classes. Conical singularities and black hole solutions are included in the boundary conditions.
We introduce the Automatic Learning for the Rapid Classification of Events (ALeRCE) broker, an astronomical alert broker designed to provide rapid and self-consistent classification of large-étendue telescope alert streams, such as that provided by the Zwicky Transient Facility (ZTF) and, in the future, the Vera C. Rubin Observatory Legacy Survey of Space and Time (LSST). ALeRCE is a Chilean-led broker run by an interdisciplinary team of astronomers and engineers, working to become intermediaries between survey and follow-up facilities. ALeRCE uses a pipeline which includes the real-time ingestion, aggregation, cross-matching, machine learning (ML) classification, and visualization of the ZTF alert stream. We use two classifiers: a stamp-based classifier, designed for rapid classification, and a light-curve-based classifier, which uses the multi-band flux evolution to achieve a more refined classification. We describe in detail our pipeline, data products, tools, and services, which are made public for the community (see \url{this https URL}). Since we began operating our real-time ML classification of the ZTF alert stream in early 2019, we have grown a large community of active users around the globe. We describe our results to date, including the real-time processing of $9.7\times10^7$ alerts, the stamp classification of $1.9\times10^7$ objects, the light-curve classification of $8.5\times10^5$ objects, the report of 3088 supernova candidates, and different experiments using LSST-like alert streams. Finally, we discuss the challenges ahead in going from a single stream of alerts such as ZTF to a multi-stream ecosystem dominated by LSST.
The Nvidia GPU architecture has introduced new computing elements such as the \textit{tensor cores}, which are special processing units dedicated to performing fast matrix-multiply-accumulate (MMA) operations to accelerate \textit{Deep Learning} applications. In this work we present the idea of using tensor cores for a different purpose: the parallel arithmetic reduction problem. We propose a new GPU tensor-core-based algorithm and analyze its potential performance benefits in comparison to a traditional GPU-based one. The proposed method encodes the reduction of $n$ numbers as a set of $m\times m$ MMA tensor-core operations (for Nvidia's Volta architecture, $m=16$) and takes advantage of the fact that each MMA operation takes just one GPU cycle. When analyzing the cost under a simplified GPU computing model, the result is that the new algorithm manages to reduce a problem of $n$ numbers in $T(n) = 5\log_{m^2}(n)$ steps, with a speedup of $S = \frac{4}{5}\log_2(m^2)$.
We develop a generic spacetime model in General Relativity which can be used to build any gravitational model within General Relativity. The generic model uses two types of assumptions: (a) geometric assumptions additional to the inherent geometric identities of the Riemannian geometry of spacetime, and (b) assumptions defining a class of observers by means of their 4-velocity $u^{a}$, which is a unit timelike vector field. The geometric assumptions as a rule concern symmetry assumptions (the so-called collineations). The latter assumption introduces the 1+3 decomposition of tensor fields in spacetime, which yields two major results. The 1+3 decomposition of $u_{a;b}$ defines the kinematic variables of the model (expansion, rotation, shear, and 4-acceleration) and thus the kinematics of the gravitational model. The 1+3 decomposition of the energy-momentum tensor representing all gravitating matter introduces the dynamic variables of the model (the energy density, the isotropic pressure, the momentum transfer or heat flux vector, and the traceless tensor of the anisotropic pressure) as measured by the defined observers, and thus defines the dynamics of the model. The symmetries assumed by the model act as constraints on both the kinematic and the dynamical variables of the model. As a second further development of the generic model, we assume that in addition to the 4-velocity of the observers $u_{a}$ there exists a second universal vector field $n_{a}$ in spacetime, so that one has a so-called double congruence $(u_{a},n_{a})$ which can be used to define the 1+1+2 decomposition of tensor fields. The 1+1+2 decomposition leads to an extended kinematics concerning both fields building the double congruence, and to a finer dynamics involving more physical variables.
We establish sufficient conditions to guarantee the oscillatory and non-oscillatory behavior of solutions to nonautonomous advanced and delayed linear differential equations with piecewise constant arguments: $x'(t) = a(t)x(t) + b(t)x([t \pm k])$, where $k \in \mathbb{N}$, $k \geq 1$, in both the non-impulsive and impulsive cases (DEPCA and IDEPCA). Due to the hybrid nature of these systems, our approach draws on results from the theory of advanced and delayed linear difference equations. The analysis encompasses various types of differential equations with deviating arguments, many of which have been previously studied as special cases.
This work proposes a GPU tensor core approach that encodes the arithmetic reduction of $n$ numbers as a set of chained $m \times m$ matrix-multiply-accumulate (MMA) operations executed in parallel by GPU tensor cores. The asymptotic running time of the proposed chained tensor core approach is $T(n)=5\log_{m^2}{n}$ and its speedup is $S=\dfrac{4}{5}\log_{2}{m^2}$ over the classic $O(n \log n)$ parallel reduction algorithm. Experimental performance results show that the proposed reduction method is $\sim 3.2\times$ faster than a conventional GPU reduction implementation, and preserves numerical precision because the sub-results of each chain of $R$ MMAs are kept as 32-bit floating point values before all being reduced into a final 32-bit result. The chained MMA design allows a flexible configuration of thread-blocks; small thread-blocks of 32 or 128 threads can still achieve maximum performance using a chain of $R=4,5$ MMAs per block, while large thread-blocks work best with $R=1$. The results obtained in this work show that tensor cores can indeed provide a significant performance improvement to non-Machine-Learning applications such as the arithmetic reduction, which is an integration tool for studying many scientific phenomena.
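The encoding can be illustrated in plain NumPy (a functional emulation of the MMA chain, not actual tensor-core code): one MMA with an all-ones left operand collapses an $m\times m$ block of values into replicated column sums, the MMA's additive $C$ operand accumulates across the chain, and a second multiplication by ones yields the total.

```python
import numpy as np

m = 16
ones = np.ones((m, m), dtype=np.float32)

def mma_reduce(values):
    """Emulate the tensor-core reduction: each MMA computes ones @ V + C.

    Row i of (ones @ V) holds the column sums of V, so chaining MMAs through
    the accumulator C sums column-wise across all m*m blocks; one final
    multiplication by ones collapses the column sums into the grand total.
    """
    blocks = values.reshape(-1, m, m)
    C = np.zeros((m, m), dtype=np.float32)
    for V in blocks:              # the chain of R MMAs accumulating into C
        C = ones @ V + C
    total = C @ ones              # second stage: sum the column sums
    return total[0, 0]

x = np.arange(1024, dtype=np.float32)   # n = 1024 -> 4 blocks of 16*16
print(mma_reduce(x), x.sum())           # both give 523776.0
```

On real hardware each `ones @ V + C` would be a single warp-level MMA instruction, which is where the speedup over a shared-memory tree reduction comes from.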
We consider an exact Einstein-Maxwell solution constructed by Alekseev and Garcia which describes a Schwarzschild black hole immersed in the magnetic universe of Levi-Civita, Bertotti and Robinson (LCBR). After reviewing the basic properties of this spacetime, we study the ultrarelativistic limit in which the black hole is boosted to the speed of light, while sending its mass to zero. This results in a non-expanding impulsive wave traveling in the LCBR universe. The wave front is a 2-sphere carrying two null point particles at its poles -- a remnant of the structure of the original static spacetime. It is also shown that the obtained line-element belongs to the Kundt class of spacetimes, and the relation with a known family of exact gravitational waves of finite duration propagating in the LCBR background is clarified. In the limit of a vanishing electromagnetic field, one point particle is pushed away to infinity and the single-particle Aichelburg-Sexl pp-wave propagating in Minkowski space is recovered.
We revisit the static spherically symmetric solutions of Einstein's General Relativity with a conformally coupled scalar field in arbitrary dimensions. Using a rank-four tensor introduced earlier, we recast the field equations in a manifestly symmetric form to elucidate a somewhat less-known feature of dual mapping between solutions. We also show that there is a two-parameter subfamily of solutions which enjoys a duality symmetry, and in four dimensions both the BBMB black hole and the Barcelo-Visser wormhole belong to this subfamily. Along the way, we rederive the full three-parameter family of solutions by direct integration of the field equations and a natural choice of ansatz, which arguably has several advantages over other previously known methods.
In this paper, we analyze the static solutions of the $U(1)^{4}$ consistent truncation of the maximally supersymmetric gauged supergravity in four dimensions. Using a new parametrization of the known solutions, it is shown that for fixed charges there exist three possible black hole configurations according to the pattern of symmetry breaking of (the scalar sector of) the Lagrangian: a black hole without scalar fields, a black hole with primary hair, and a black hole with secondary hair, respectively. This is the first exact example of a black hole with primary scalar hair where both the black hole and the scalar fields are regular on and outside the horizon. The configurations with secondary and primary hair can be interpreted as a spontaneous symmetry breaking of discrete permutation and reflection symmetries of the action. It is shown that there exists a triple point in the thermodynamic phase space where the three solutions coexist. The corresponding phase transitions are discussed, and the free energies are written explicitly as functions of the thermodynamic coordinates in the uncharged case. In the charged case, the free energies of the primary-hair and hairless black holes are also given as functions of the thermodynamic coordinates.
The advent of next-generation survey instruments, such as the Vera C. Rubin Observatory and its Legacy Survey of Space and Time (LSST), is opening a window for new research in time-domain astronomy. The Extended LSST Astronomical Time-Series Classification Challenge (ELAsTiCC) was created to test the capacity of brokers to deal with a simulated LSST stream. We describe ATAT, the Astronomical Transformer for time series And Tabular data, a classification model conceived by the ALeRCE alert broker to classify light curves from next-generation alert streams. ATAT was tested in production during the first round of the ELAsTiCC campaigns. ATAT consists of two Transformer models that encode light curves and features using novel time modulation and quantile feature tokenizer mechanisms, respectively. ATAT was trained on different combinations of light curves, metadata, and features calculated over the light curves. We compare ATAT against the current ALeRCE classifier, a Balanced Hierarchical Random Forest (BHRF) trained on human-engineered features derived from light curves and metadata. When trained on light curves and metadata, ATAT achieves a macro F1-score of 82.9 ± 0.4 over 20 classes, outperforming the BHRF model trained on 429 features, which achieves a macro F1-score of 79.4 ± 0.1. The use of Transformer multimodal architectures, combining light curves and tabular data, opens new possibilities for classifying alerts from a new generation of large-étendue telescopes, such as the Vera C. Rubin Observatory, in real-world brokering scenarios.
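The quantile-based tokenization of tabular inputs mentioned above can be sketched as follows; this is an illustrative reading of the idea (bin each scalar feature by its empirical quantile and let the bin index select a per-feature embedding), not ATAT's exact implementation, and all names and dimensions are assumptions:

```python
import numpy as np

def quantile_tokenize(features, n_bins=10, d=8, rng=None):
    """Map each scalar feature to an embedding chosen by its quantile bin.

    features : (n_samples, n_features) array of tabular inputs.
    Returns one d-dimensional token per feature per sample. The embedding
    table is randomly initialised here; in a real model it would be learned.
    """
    rng = np.random.default_rng(rng)
    n, f = features.shape
    emb = rng.standard_normal((f, n_bins, d))
    tokens = np.empty((n, f, d))
    for j in range(f):
        # Interior empirical quantiles define the bin edges for feature j.
        edges = np.quantile(features[:, j], np.linspace(0, 1, n_bins + 1)[1:-1])
        bins = np.searchsorted(edges, features[:, j])   # bin index in 0..n_bins-1
        tokens[:, j] = emb[j, bins]
    return tokens

X = np.random.default_rng(0).standard_normal((100, 5))
tok = quantile_tokenize(X, rng=0)
print(tok.shape)  # (100, 5, 8)
```

The appeal of quantile binning for heterogeneous astronomical features is that it is invariant to monotone rescalings, so features with very different dynamic ranges produce comparable token sequences.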
Understanding the quantitative patterns behind scientific disciplines is fundamental for informed research policy. While many fields have been studied from this perspective, Urban Science (USc) and its subfields remain underexplored. Like organisms, urban systems rely on material and energy inputs and their transformation (i.e., metabolism) to sustain essential dynamics. This concept has been adopted by various disciplines, including architecture and sociology, and by those focused on metabolic processes, such as ecology and industrial ecology. This study addresses the structure and evolution of Urban Metabolism (UM) and Sustainability research, analyzing articles by discipline, study subject (e.g., cities, regions), methodology, and author diversity (nationality and gender). Our review suggests that UM is an emerging field that grew until 2019, addressed primarily by environmental science and ecology. Common methods include Ecological Network Analysis, Life Cycle Assessment, and Material Flow Analysis, focusing on flows over stocks, ecosystem dynamics, and evolutionary perspectives of the urban system. Authors are predominantly from China and the USA, and gender gaps are smaller than in general science research. Our analysis identifies relevant challenges that have become evident in the statistical properties of this scientific field and which might be helpful for the design of improved research policies.
We present a generalization of the standard Inönü-Wigner contraction by rescaling not only the generators of a Lie superalgebra but also the arbitrary constants appearing in the components of the invariant tensor. The procedure presented here allows one to obtain explicitly the Chern-Simons supergravity action of a contracted superalgebra. In particular, we show that the Poincaré limit can be performed on a $D=2+1$ $(p,q)$ AdS Chern-Simons supergravity in the presence of the exotic form. We also construct a new three-dimensional $(2,0)$ Maxwell Chern-Simons supergravity theory as a particular limit of the $(2,0)$ AdS-Lorentz supergravity theory. The generalization to $\mathcal{N}=p+q$ gravitini is also considered.