Universidade de Brasília
Promoting participation on digital platforms such as Brasil Participativo has emerged as a top priority for governments worldwide. However, due to the sheer volume of contributions, much of this engagement remains underutilized, as organizing it presents significant challenges: (1) manual classification is infeasible at scale; (2) expert involvement is required; and (3) alignment with official taxonomies is necessary. In this paper, we introduce an approach that combines BERTopic with seed words and automatic validation by large language models. Initial results indicate that the generated topics are coherent and institutionally aligned, with minimal human effort. This methodology enables governments to transform large volumes of citizen input into actionable data for public policy.
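A minimal sketch of how seed words can steer BERTopic's guided topic modeling, assuming seed lists derived from an official taxonomy (the seed words, categories, and variable names below are illustrative, not the paper's):

```python
from bertopic import BERTopic

# Illustrative seed lists, one per taxonomy category (hypothetical examples;
# real seeds would come from the official government taxonomy).
seed_topic_list = [
    ["saude", "hospital", "sus", "medico"],           # health
    ["educacao", "escola", "professor", "creche"],    # education
    ["transporte", "onibus", "mobilidade", "metro"],  # urban mobility
]

# Guided topic modeling: BERTopic nudges topic formation toward the seeds.
topic_model = BERTopic(seed_topic_list=seed_topic_list, language="multilingual")
topics, probs = topic_model.fit_transform(docs)  # docs: list of contribution texts
```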
Researchers evaluated Large Language Models' (LLMs) capacity to assess source code quality, particularly focusing on human-centric attributes like readability, and compared their evaluations with those from the traditional static analysis tool SonarQube. The study found that while LLMs can discern general code quality and offer fine-grained analysis of local readability, their criteria differ from static analyzers, providing a complementary perspective.
Panoptic segmentation combines instance and semantic predictions, allowing the detection of "things" and "stuff" simultaneously. Effectively approaching panoptic segmentation in remotely sensed data is promising for many challenging problems since it allows continuous mapping and specific target counting. Several difficulties have prevented the growth of this task in remote sensing: (a) most algorithms are designed for traditional images, (b) image labelling must encompass "things" and "stuff" classes, and (c) the annotation format is complex. Thus, aiming to address these difficulties and increase the operability of panoptic segmentation in remote sensing, this study has five objectives: (1) create a novel data preparation pipeline for panoptic segmentation; (2) propose an annotation conversion software to generate panoptic annotations; (3) propose a novel dataset on urban areas; (4) modify Detectron2 for the task; and (5) evaluate the difficulties of this task in the urban setting. We used an aerial image with a 0.24-meter spatial resolution considering 14 classes. Our pipeline considers three image inputs, and the proposed software uses point shapefiles for creating samples in the COCO format. Our study generated 3,400 samples with 512x512 pixel dimensions. We used the Panoptic-FPN with two backbones (ResNet-50 and ResNet-101), and the model evaluation considered semantic, instance, and panoptic metrics. We obtained 93.9, 47.7, and 64.9 for the mean IoU, box AP, and PQ, respectively. Our study presents the first effective pipeline for panoptic segmentation and an extensive database for other researchers to use and deal with other data or related problems requiring a thorough scene understanding.
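For reference, the COCO panoptic format that such a conversion software targets stores, per image, one JSON record plus a PNG label map; a minimal sketch (all field values below are illustrative):

```python
import json

# One record in the COCO panoptic annotation format (values illustrative).
annotation = {
    "image_id": 17,
    "file_name": "tile_0017.png",        # PNG label map encoding segment ids
    "segments_info": [
        {
            "id": 2634,                  # segment id encoded in the PNG pixels
            "category_id": 3,            # a "thing" (e.g. building) or "stuff" (e.g. street) class
            "iscrowd": 0,
            "bbox": [120, 44, 63, 58],   # [x, y, width, height] in pixels
            "area": 2840,
        },
    ],
}
print(json.dumps(annotation, indent=2))
```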
Galaxy clusters are important cosmological probes, since their abundance and spatial distribution are directly linked to structure formation on large scales. The principal source of uncertainty in the cosmological parameter constraints is the estimation of cluster masses from mass proxies. In addition, future surveys will provide a large amount of data, requiring an improvement in the accuracy of other elements used in the construction of cluster likelihoods. Therefore, accurate modeling of the mass-observable relations and reducing the effect of different systematic errors are fundamental steps for the success of cluster cosmology. In this work, we briefly review the abundance of galaxy clusters and discuss several sources of uncertainty.
In this study, we use well-localized fast radio bursts (FRBs) to constrain cosmological parameters through two model-independent approaches: the reconstruction of the Hubble parameter $H(z)$ with an artificial neural network and cosmography. By integrating FRB data with supernovae (SNe), BAO from DESI DR2, and cosmic chronometers (CC), we derive constraints on the Hubble constant ($H_0$), the deceleration parameter ($q_0$), and the jerk parameter ($j_0$). For the reconstruction method, our MCMC analysis with FRBs alone provides $H_0 = 69.9 \pm 5.8 \,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, corresponding to a precision of $\sim 8\%$. A joint analysis with FRB+SNe+(BAO+BBN+CMB) gives $H_0 = 68.85_{-0.48}^{+0.47} \,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, reaching a precision below $1\%$. The cosmographic approach with FRBs alone provides $H_0 = 65.83_{-4.87}^{+3.77} \,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $q_0 = -0.45_{-0.31}^{+0.26}$, and $j_0 = 1.17_{-0.58}^{+0.70}$, with a precision for the Hubble constant of $\sim 6\%$. In addition, the BAO+BBN+CMB dataset yields $H_0 = 65.20_{-1.28}^{+1.29} \,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $q_0 = -0.29 \pm 0.07$, and $j_0 = 0.58_{-0.04}^{+0.03}$, indicating a precision of $\sim 2\%$ for the Hubble constant. Combining the FRB, SNe, BAO+BBN+CMB, and CC data sets provides tighter constraints, for example, $H_0 = 67.88_{-0.53}^{+0.52} \,\mathrm{km\,s^{-1}\,Mpc^{-1}}$, $q_0 = -0.42_{-0.03}^{+0.02}$, and $j_0 = 0.56 \pm 0.02$. In particular, these findings provide a statistically significant indication of deviation from the $\Lambda$CDM prediction of $j_0 = 1$, suggesting possible departures from standard cosmology at a confidence level of $1\sigma$. They also confirm a model-independent accelerated expansion ($q_0 < 0$), demonstrating the potential of FRBs as valuable cosmological probes.
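For context, a sketch of the standard cosmographic expansion underlying the second approach (conventions may differ slightly from the paper's):

```latex
% Standard cosmographic (Taylor) expansion of the Hubble parameter about z = 0,
% with q_0 and j_0 defined from derivatives of the scale factor a(t) at t_0:
%   q_0 = -\ddot{a}\,a / \dot{a}^2 ,   j_0 = \dddot{a}\,a^2 / \dot{a}^3
H(z) = H_0 \left[ 1 + (1 + q_0)\, z
       + \tfrac{1}{2}\left( j_0 - q_0^2 \right) z^2 + \mathcal{O}(z^3) \right]
```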
We present a novel application of cosmological rescaling, or "remapping," to generate 21 cm intensity mapping mocks for different cosmologies. The remapping method allows for computationally efficient generation of N-body catalogs by rescaling existing simulations. In this work, we employ the remapping method to construct dark matter halo catalogs, starting from the Horizon Run 4 simulation with WMAP5 cosmology, and apply it to different target cosmologies, including WMAP7, Planck18 and Chevallier-Polarski-Linder (CPL) models. These catalogs are then used to simulate 21 cm intensity maps. We use the halo occupation distribution (HOD) method to populate halos with neutral hydrogen (HI) and derive 21 cm brightness temperature maps. Our results demonstrate the effectiveness of the remapping approach in generating cosmological simulations for large-scale structure studies, offering an alternative for testing observational data pipelines and performing cosmological parameter forecasts without the need for computationally expensive full N-body simulations. We also analyze the precision and limitations of the remapping, in light of the rescaling parameters $s$ and $s_m$, as well as the effects of the halo mass and box size thresholds.
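As an illustration of the HI-assignment step, a common choice in the literature (not necessarily the exact form used here) is a power-law HI-halo mass relation with mass cutoffs; a minimal sketch with purely illustrative parameter values:

```python
import numpy as np

def hi_mass(m_halo, a=220.0, alpha=0.6, m_min=1e10, m_max=1e13):
    """Toy power-law HI-halo mass relation M_HI = A * M^alpha, applied only
    within [m_min, m_max] (masses in Msun/h); all parameters illustrative."""
    m_hi = a * m_halo**alpha
    m_hi[(m_halo < m_min) | (m_halo > m_max)] = 0.0
    return m_hi

halo_masses = np.array([5e9, 3e11, 2e12, 5e13])
print(hi_mass(halo_masses))  # halos outside the cutoffs receive no HI
```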
This paper discusses the extension of the Prototype Verification System (PVS) sub-theory for rings, part of the PVS algebra theory, with theorems related to the division algorithm for Euclidean rings and to unique factorization domains, general structures in which an analog of the Fundamental Theorem of Arithmetic holds. First, we formalize the general abstract notions of divisibility, prime, and irreducible elements in commutative rings, essential to deal with unique factorization domains. Then, we formalize the landmark theorem establishing that every principal ideal domain is a unique factorization domain. Finally, we specify the theory of Euclidean domains and formally verify that the rings of integers, the Gaussian integers, and arbitrary fields are Euclidean domains. To highlight the benefits of such a general abstract discipline of formalization, we specify a Euclidean gcd algorithm for Euclidean domains and formalize its correctness. Also, we show how this correctness is inherited under adequate parameterizations for the structures of integers and Gaussian integers.
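To illustrate the kind of parameterization the formalization exploits, here is a minimal executable sketch, in Python rather than PVS, of a generic Euclidean gcd instantiated for the integers and the Gaussian integers (names are illustrative):

```python
def euclid_gcd(a, b, divmod_fn):
    """Generic Euclidean algorithm: repeatedly replace (a, b) with (b, a mod b),
    where division with remainder is supplied by the structure."""
    while b != 0:
        _, r = divmod_fn(a, b)
        a, b = b, r
    return a

def gauss_divmod(a, b):
    """Division with remainder in the Gaussian integers Z[i], represented as
    Python complex numbers with integer parts: rounding the exact quotient to
    the nearest Gaussian integer keeps the remainder's norm below that of b."""
    q = a / b
    q = complex(round(q.real), round(q.imag))
    return q, a - b * q

print(euclid_gcd(1071, 462, divmod))              # 21 (ordinary integers)
print(euclid_gcd(4 + 7j, 2 + 1j, gauss_divmod))   # (2+1j), up to units
```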
In this work we explore the structure of Clifford algebras and the representations of algebraic spinors in quantum information theory. Initially, we present a general formulation through elements of left minimal ideals in tensor products of the Clifford algebra $Cl^{+}_{1,3}$. Subsequently, we develop some applications in quantum computation: qubits, entangled states, quantum gates, representations of the braid group, quantum teleportation, Majorana operators and supersymmetry. Finally, we discuss advantages relative to the standard Hilbert space formulation.
The first results on the Einstein-Maxwell equations were established by Rainich in 1925 and are therefore called the Rainich conditions. The result was later rediscovered by Misner and Wheeler in 1957 and made the basis of their geometrodynamics. The present survey considers, in a didactic fashion, the curvature of spacetime attributed to an electromagnetic field, with conceptual and calculational details.
In this work, we generalize the non-geometrical construction of gauge theories, due to S. Deser, to a noncommutative setting. We show that in a free theory, along with the usual local Noether current, there is another conserved current, which is non-local. Using the latter as a source for self-interaction, after a well-defined consistency procedure, we arrive at noncommutative gauge theories. In the non-abelian case, the standard restriction, namely that the theory should be $U(N)$ in the fundamental representation, emerges as a consequence of the requirement that the non-local current be Lie algebra valued.
Granular flow down an inclined plane is ubiquitous in geophysical and industrial applications. On rough inclines, the flow exhibits Bagnold's velocity profile and follows the so-called $\mu(I)$ local rheology. On insufficiently rough or smooth inclines, however, velocity slip occurs at the bottom and a basal layer with strong agitation emerges below the bulk, which is not predicted by the local rheology. Here, we use discrete element method simulations to study the detailed dynamics of the basal layer in granular flows down both smooth and rough inclines. We control the roughness via a dimensionless parameter, $R_a$, varied systematically from 0 (flat, frictional plane) to near 1 (very rough plane). Three flow regimes are identified: a slip regime ($R_a \lesssim 0.45$) where a dilated basal layer appears, a no-slip regime ($R_a \gtrsim 0.6$), and an intermediate transition regime. In the slip regime, the kinematic profiles (velocity, shear rate and granular temperature) of the basal layer strongly deviate from Bagnold's profiles. General basal slip laws are developed which express the slip velocity as a function of the local shear rate (or granular temperature), base roughness and slope angle. Moreover, the basal layer thickness is insensitive to flow conditions but depends somewhat on the inter-particle coefficient of restitution. Finally, we show that the rheological properties of the basal layer do not follow the $\mu(I)$ rheology, but are captured by Bagnold's stress scaling and an extended kinetic theory for granular flows. Our findings can help develop more predictive granular flow models in the future.
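For reference, a sketch of the standard $\mu(I)$ law and the resulting Bagnold profile on an incline at angle $\theta$ (prefactors depend on the packing fraction and on conventions; illustrative only):

```latex
% Local mu(I) rheology: effective friction mu as a function of the inertial
% number I, built from shear rate gamma-dot, grain diameter d, pressure P:
\mu(I) = \mu_s + \frac{\mu_2 - \mu_s}{I_0/I + 1},
\qquad
I = \frac{\dot{\gamma}\, d}{\sqrt{P/\rho_p}}
% On an incline, mu = tan(theta) fixes I = I_theta, which yields the Bagnold
% velocity profile for a layer of thickness h (up to O(1) packing factors):
u(y) \propto \frac{I_\theta \sqrt{g \cos\theta}}{d}
     \left[ h^{3/2} - (h - y)^{3/2} \right]
```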
We consider the background cosmological solutions in a $6D$ (six-dimensional) model with one time and five space coordinates. The theory of interest has an action composed of the Einstein term, a cosmological constant, and two conformal terms constructed from third powers of the Weyl tensor. It is shown how the highest-derivative terms in the equations of motion can be isolated, which opens the way for their numerical integration. There are flat anisotropic solutions which render one of the flat isotropic subspaces static. Depending on the value of the bare cosmological constant, either the two-dimensional or the three-dimensional subspace can be static. In particular, there is a physically favorable solution with three "large" space coordinates and two extra inner dimensions stabilized. This solution is stable for a wide range of coupling constants, but this requires a special value of the bare cosmological constant.
A review of the teleparallel equivalent of general relativity is presented. It is emphasized that general relativity may be formulated in terms of the tetrad fields and of the torsion tensor, and that this geometrical formulation leads to alternative insights into the theory. The equivalence with the standard formulation in terms of the metric and curvature tensors takes place at the level of field equations. The review starts with a brief account of the history of teleparallel theories of gravity. Then the ordinary interpretation of the tetrad fields as reference frames adapted to arbitrary observers in space-time is discussed, and the tensor of inertial accelerations on frames is obtained. It is shown that the Lagrangian and Hamiltonian field equations allow one to define the energy, momentum and angular momentum of the gravitational field as surface integrals of the field quantities. In the phase space of the theory, these quantities satisfy the algebra of the Poincaré group.
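For orientation, a sketch of the TEGR Lagrangian density in one common convention (e.g., as in Maluf's formulation; signs and factors vary across the literature):

```latex
% Teleparallel equivalent of general relativity (TEGR), with k = 1/(16 pi G),
% e = det(e^a_mu), T_{abc} the torsion tensor of the Weitzenbock connection,
% and L_M the matter Lagrangian:
L = -k\, e\, \Sigma^{abc} T_{abc} - L_M ,
\qquad
\Sigma^{abc} = \tfrac{1}{4}\left( T^{abc} + T^{bac} - T^{cab} \right)
             + \tfrac{1}{2}\left( \eta^{ac}\, T^{b} - \eta^{ab}\, T^{c} \right)
```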
The interaction between atomic systems and electromagnetic fields is central to modern physics and emerging quantum technologies. The Rabi models, in their semiclassical and quantum versions, provide the simplest and most fundamental description of this interaction. In this work, we present a concise derivation of both models and show how one- and multiphoton resonances arise in the semiclassical regime. We then analyze how these resonances manifest in the quantum Rabi model, discussing similarities and differences in relation to the classical description. Special attention is given to the three-photon resonance, a phenomenon usually neglected in textbooks due to its relative weakness, but which is intrinsic to the radiation-matter interaction. Our goal is to offer an accessible pedagogical reference for students and researchers interested in Quantum Optics and Quantum Information, with an emphasis on the fundamentals of the Rabi models.
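As a pointer to the two models discussed, a sketch of their standard Hamiltonians in a common convention (notation and factors vary across textbooks):

```latex
% Semiclassical Rabi model: two-level atom (frequency omega_0) driven by a
% classical field of frequency omega with Rabi frequency Omega:
H_{\mathrm{sc}}(t) = \frac{\hbar\omega_0}{2}\,\sigma_z
                   + \hbar\Omega \cos(\omega t)\,\sigma_x
% Quantum Rabi model: the field is a quantized mode with coupling g:
H_{\mathrm{q}} = \hbar\omega\, a^{\dagger} a + \frac{\hbar\omega_0}{2}\,\sigma_z
               + \hbar g\,\sigma_x \left( a + a^{\dagger} \right)
% For this sigma_x coupling, parity allows multiphoton resonances near
% omega_0 \approx n * omega with n odd, e.g. the three-photon resonance.
```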
Near the singularity, gravity should be modified to an effective theory, in the same sense as with Euler-Heisenberg electrodynamics. This effective gravity amounts to a higher-derivative theory and, as is well known, one with a much richer solution space. On the other hand, since the theory is highly nonlinear, the understanding of this solution space must go beyond the linearized approach. In this talk we present some results previously published by collaborators and myself, concerning solutions for vacuum spatially homogeneous cases of Bianchi types II and $VII_A$. These are the anisotropic generalizations of the spatially "flat" and "open" cosmological models, respectively. The solutions present isotropisation in a weak sense, depending on the initial conditions. Also, depending on the initial conditions, singular solutions are obtained.
Code readability is one of the main aspects of code quality, influenced by various properties like identifier names, comments, code structure, and adherence to standards. However, measuring this attribute poses challenges in both industry and academia. While static analysis tools assess attributes such as code smells and comment percentage, code reviews introduce an element of subjectivity. This paper explores using Large Language Models (LLMs) to evaluate code quality attributes related to readability in a standardized, reproducible, and consistent manner. We conducted a quasi-experiment study to measure the effects of code changes on LLMs' interpretation of the readability quality attribute. Nine LLMs were tested, undergoing three interventions: removing comments, replacing identifier names with obscure names, and refactoring to remove code smells. Each intervention involved 10 batch analyses per LLM, collecting data on response variability. We compared the results with a known reference model and tool. The results showed that all LLMs were sensitive to the interventions, with agreement with the reference classifier being high for the original and refactored code scenarios. The LLMs demonstrated a strong semantic sensitivity that the reference model did not fully capture. A thematic analysis of the LLMs' reasoning confirmed that their evaluations directly reflected the nature of each intervention. The models also exhibited response variability, with 9.37% to 14.58% of executions showing a standard deviation greater than zero, indicating response oscillation, though this did not always compromise the statistical significance of the results. LLMs demonstrated potential for evaluating semantic quality aspects, such as the coherence of identifier names, comments, and documentation with the code's purpose.
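As an illustration of how the identifier-obfuscation intervention could be automated (the paper does not specify its tooling; this is a hypothetical sketch using Python's ast module):

```python
import ast
import builtins

class ObscureNames(ast.NodeTransformer):
    """Replace every non-builtin identifier with an opaque name v0, v1, ..."""
    def __init__(self):
        self.mapping = {}

    def visit_Name(self, node):
        if node.id in vars(builtins):   # keep print, len, range, ...
            return node
        node.id = self.mapping.setdefault(node.id, f"v{len(self.mapping)}")
        return node

src = "total_price = unit_price * quantity\nprint(total_price)"
tree = ObscureNames().visit(ast.parse(src))
print(ast.unparse(tree))  # -> "v0 = v1 * v2\nprint(v0)"
```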
Granular materials have significant implications for industrial and geophysical processes. A long-standing challenge, however, is finding a unified rheology for their solid- and liquid-like behaviors under quasi-static, inertial, and even unsteady shear conditions. Here, we present a data-driven framework to discover the hidden governing equation of sheared granular materials. The framework, PINNSR-DA, addresses noisy discrete particle data via physics-informed neural networks with sparse regression (PINNSR) and ensures dimensional consistency via machine learning-based dimensional analysis (DA). Applying PINNSR-DA to our discrete element method simulations of oscillatory shear flow, a general differential equation is found to govern the effective friction across steady and transient states. The equation consists of three interpretable terms, accounting respectively for the linear response, nonlinear response and energy dissipation of the granular system, and the coefficients depend primarily on a dimensionless relaxation time, which is shorter for stiffer particles and thicker flow layers. This work pioneers a pathway for discovering physically interpretable governing laws in granular systems and can be readily extended to more complex scenarios involving jamming, segregation, and fluid-particle interactions.
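To give a flavor of the sparse-regression half of such a framework (a generic SINDy-style sketch, not the paper's PINNSR implementation), one can regress a time derivative onto a library of candidate terms and iteratively threshold small coefficients:

```python
import numpy as np

def sparse_regress(theta, dxdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: fit dxdt ~ theta @ xi,
    then repeatedly zero out coefficients below the threshold."""
    xi = np.linalg.lstsq(theta, dxdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(xi) < threshold
        xi[small] = 0.0
        big = ~small
        if big.any():
            xi[big] = np.linalg.lstsq(theta[:, big], dxdt, rcond=None)[0]
    return xi

# Toy data obeying dmu/dt = -2*mu + 0.5*gamma (illustrative, not the paper's law)
t = np.linspace(0, 5, 500)
gamma = np.sin(t)
mu = (2*np.sin(t) - np.cos(t) + np.exp(-2*t)) / 10  # exact solution of the ODE
dmu = np.gradient(mu, t)
theta = np.column_stack([mu, gamma, mu**2, gamma**2, mu*gamma])  # candidate terms
print(sparse_regress(theta, dmu))  # expect approximately [-2, 0.5, 0, 0, 0]
```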
In this work, we perform ab initio calculations, based on density functional theory, of the effects on the graphene bilayer when carbon atoms are intercalated between the layers. We use the unit cell of the bilayer to construct larger unit cells (supercells), positioning a single carbon atom in the hollow position between the monolayers and periodically replicating the supercell. By increasing the size of the unit cell, and consequently the periodicity of the inserted atoms, we are able to minimize the carbon-carbon interaction and therefore infer the changes in the electronic, vibrational and thermal behavior of the bilayer when the intercalated atoms do not interact with each other. The main result, concerning the electronic properties, is the appearance of a doubly degenerate flat band at the Fermi level. These states are interpreted as arising from the periodic deformation of the bilayer due to the inserted atoms, which acts as a non-Abelian flux network creating zero-energy flat bands, as predicted by San-Jose, González and Guinea in 2012. Since the periodic strain field associated with the defect array has such a strong influence on the electronic properties of the bilayer, it may be useful for practical applications. For instance, it can act as frozen-in magnetic-like field flux tubes. All-carbon nanostructures can then be designed to have electronic behavior at different regions tailored by the chosen defect pattern.
We introduce a framework for dynamic evaluation of finger movements: flexion, extension, abduction and adduction. This framework estimates angle measurements from joints computed by a hand pose estimation algorithm using a depth sensor (RealSense SR300). Given depth maps as input, our framework uses Pose-REN, a state-of-the-art hand pose estimation method that estimates 3D hand joint positions using a deep convolutional neural network. The pose estimation algorithm runs in real-time, allowing users to visualise 3D skeleton tracking results at the same time as the depth images are acquired. Once 3D joint poses are obtained, our framework estimates a plane containing the wrist and metacarpophalangeal (MCP) joints and measures flexion/extension and abduction/adduction angles by applying computational geometry operations with respect to this plane. We analysed flexion and abduction movement patterns using real data, extracting the movement trajectories. Our preliminary results show that this method allows automatic discrimination between hands of patients with Rheumatoid Arthritis (RA) and those of healthy subjects. The angle between joints can be used as an indicator of current movement capabilities and function. Although the measurements can be noisy and less accurate than those obtained statically through goniometry, the acquisition is much easier, non-invasive and patient-friendly, which shows the potential of our approach. The system can be used with and without an orthosis. Our framework allows the acquisition of measurements with minimal intervention and significantly reduces the evaluation time.
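A minimal sketch of the plane-based angle measurement (function and variable names are illustrative, not the paper's code):

```python
import numpy as np

def flexion_angle(mcp, pip, plane_normal):
    """Flexion/extension angle (degrees) of the proximal phalanx, taken as the
    angle between the MCP->PIP bone vector and the palm plane, i.e. the
    complement of its angle with the plane's unit normal."""
    bone = pip - mcp
    cos_n = np.dot(bone, plane_normal) / (
        np.linalg.norm(bone) * np.linalg.norm(plane_normal))
    return 90.0 - np.degrees(np.arccos(np.clip(cos_n, -1.0, 1.0)))

# The palm plane fitted through the wrist + MCP joints would supply the normal;
# here, a toy example with the palm in the xy-plane (normal along z):
normal = np.array([0.0, 0.0, 1.0])
mcp = np.array([0.0, 0.0, 0.0])
pip = np.array([0.0, 1.0, -1.0])        # finger bent 45 degrees below the palm
print(flexion_angle(mcp, pip, normal))  # -> -45.0
```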
We calculate the closed analytic form of the solution of the heat kernel equation for anisotropic generalizations of the flat Laplacian. We consider UV as well as UV/IR interpolating generalizations. In all cases, the result can be expressed in terms of Fox-Wright psi-functions. We perform different consistency checks, analytically reproducing some of the previous numerical or qualitative results, such as the spectral dimension flow. Our study should be considered as a first step towards the construction of a heat kernel for curved Hořava-Lifshitz geometries, which is an essential ingredient in the spectral action approach to the construction of Hořava-Lifshitz gravity.
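For reference, the spectral dimension mentioned among the consistency checks is conventionally defined from the return probability of the heat kernel (a standard definition; notation may differ from the paper's):

```latex
% Return probability P(sigma) from the diagonal of the heat kernel K,
% and the scale-dependent spectral dimension d_s(sigma):
P(\sigma) = \frac{\int d^d x \, \sqrt{g}\, K(x, x; \sigma)}{\int d^d x \, \sqrt{g}},
\qquad
d_s(\sigma) = -2\, \frac{\partial \ln P(\sigma)}{\partial \ln \sigma}
```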