Researchers developed and validated a Large Language Model agent social simulation, replicating a Voat technology forum within a digital-twin environment. The simulation demonstrated operational validity by reproducing many aggregate distributions and meso-level social structures observed in real-world data, matching activity patterns and topical alignment, while exhibiting differences in network core-periphery structure and overall toxicity levels.
Multi-object tracking (MOT) is a core task in computer vision that involves detecting objects in video frames and associating them across time. The rise of deep learning has significantly advanced MOT, particularly within the tracking-by-detection paradigm, which remains the dominant approach. Advancements in modern deep learning-based methods accelerated in 2022 with the introduction of ByteTrack for tracking-by-detection and MOTR for end-to-end tracking. Our survey provides an in-depth analysis of deep learning-based MOT methods, systematically categorizing tracking-by-detection approaches into five groups: joint detection and embedding, heuristic-based, motion-based, affinity learning, and offline methods. In addition, we examine end-to-end tracking methods and compare them with existing alternative approaches. We evaluate the performance of recent trackers across multiple benchmarks and specifically assess their generality by comparing results across different domains. Our findings indicate that heuristic-based methods achieve state-of-the-art results on densely populated datasets with linear object motion, while deep learning-based association methods, in both tracking-by-detection and end-to-end approaches, excel in scenarios with complex motion patterns.
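To make the tracking-by-detection pipeline concrete, here is a minimal sketch of the association step shared by many heuristic trackers: match existing tracks to new detections by maximizing total bounding-box IoU with the Hungarian algorithm. This is an illustrative baseline in Python using scipy, not the implementation of ByteTrack or any specific tracker surveyed; the box format and the `iou_min` gate are assumptions.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def iou(a, b):
    """IoU of two boxes in (x1, y1, x2, y2) format."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    return inter / (area(a) + area(b) - inter + 1e-9)

def associate(tracks, detections, iou_min=0.3):
    """Match tracks to detections by maximizing total IoU (Hungarian)."""
    if not tracks or not detections:
        return [], list(range(len(tracks))), list(range(len(detections)))
    cost = np.array([[1.0 - iou(t, d) for d in detections] for t in tracks])
    rows, cols = linear_sum_assignment(cost)
    matches = [(r, c) for r, c in zip(rows, cols) if 1.0 - cost[r, c] >= iou_min]
    matched_t = {r for r, _ in matches}
    matched_d = {c for _, c in matches}
    unmatched_tracks = [i for i in range(len(tracks)) if i not in matched_t]
    unmatched_dets = [j for j in range(len(detections)) if j not in matched_d]
    return matches, unmatched_tracks, unmatched_dets
```

Unmatched detections typically spawn new tracks and unmatched tracks are aged out; motion- and embedding-based trackers replace the IoU cost with predicted-box or appearance distances.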
We discuss chaos and ensemble averaging in 1/2 BPS bubbling AdS spaces of Lin, Lunin and Maldacena (LLM) by studying trapped and escaping null geodesics and estimating their decay rates. We find typical chaotic scattering behavior and confirm the Pesin relation between escape rates, Lyapunov exponents and Kolmogorov-Sinai entropy. On the other hand, for geodesics in coarse-grained (grayscale) LLM geometries (which exhibit a naked singularity) chaos is strongly suppressed, which is consistent with orbits and escape rates averaged over microscopic backgrounds. Also the singularities in these grayscale geometries produce an attractive potential and have some similarities to black hole throats trapping geodesics for a long time. Overall, averaging over the ensembles of LLM geometries brings us closer toward the typical behavior of geodesics in black hole backgrounds, but some important differences remain, in particular the existence of a threshold timescale when the averaging fails.
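The Pesin-type relation between escape rate, Lyapunov exponent, and Kolmogorov-Sinai entropy ($\kappa = \lambda - h_{\rm KS}$) that the authors confirm can be illustrated on a much simpler open system. The sketch below uses the open tent map with slope 3, a standard toy model of chaotic scattering for which all three quantities are known exactly ($\lambda = \ln 3$, $h_{\rm KS} = \ln 2$, $\kappa = \ln(3/2)$); it is not the LLM geodesic computation itself.

```python
import numpy as np

# Open tent map with slope 3: orbits leaving [0, 1] count as escaped.
# Exact values for this map: kappa = ln(3/2), lam = ln 3, h_KS = ln 2,
# so the Pesin-type relation kappa = lam - h_KS holds exactly.
f = lambda x: np.where(x < 0.5, 3 * x, 3 * (1 - x))

rng = np.random.default_rng(0)
x = rng.uniform(0, 1, 4_000_000)
survivors = []
for n in range(12):
    survivors.append(x.size)
    x = f(x)
    x = x[(x >= 0) & (x <= 1)]  # drop escaped orbits

# Fit the exponential decay N(n) ~ exp(-kappa * n) of the survivor count.
kappa = -np.polyfit(range(len(survivors)), np.log(survivors), 1)[0]
print(f"measured kappa = {kappa:.4f}, ln(3/2) = {np.log(1.5):.4f}")
print(f"lam - h_KS     = {np.log(3) - np.log(2):.4f}")
```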
We report initial observations aimed at the characterization of a third interstellar object. This object, 3I/ATLAS or C/2025 N1 (ATLAS), was discovered on 2025 July 1 UT and has an orbital eccentricity of $e\sim6.1$, perihelion of $q\sim1.36$ au, inclination of $\sim175^\circ$, and hyperbolic velocity of $V_\infty\sim58$ km s$^{-1}$. We report deep stacked images obtained using the Canada-France-Hawaii Telescope and the Very Large Telescope that resolve a compact coma. Using images obtained from several smaller ground-based telescopes, we find minimal light curve variation for the object over a $\sim4$ day time span. The visible/near-infrared spectral slope of the object is 17.1$\pm$0.2 %/100 nm, comparable to other interstellar objects and primitive solar system small bodies (comets and D-type asteroids). 3I/ATLAS will be observable through early September 2025, then unobservable by Earth-based observatories near perihelion due to low solar elongation. It will be observable again from the ground in late November 2025. Although this limitation unfortunately prohibits detailed Earth-based observations at perihelion when the activity of 3I/ATLAS is likely to peak, spacecraft at Mars could be used to make valuable observations at this time.
The Gaia Galactic survey mission is designed and optimized to obtain astrometry, photometry, and spectroscopy of nearly two billion stars in our Galaxy. Yet as an all-sky multi-epoch survey, Gaia also observes several million extragalactic objects down to a magnitude of G~21 mag. Due to the nature of the Gaia onboard selection algorithms, these are mostly point-source-like objects. Using data provided by the satellite, we have identified quasar and galaxy candidates via supervised machine learning methods, and estimate their redshifts using the low resolution BP/RP spectra. We further characterise the surface brightness profiles of host galaxies of quasars and of galaxies from pre-defined input lists. Here we give an overview of the processing of extragalactic objects, describe the data products in Gaia DR3, and analyse their properties. Two integrated tables contain the main results for a high completeness, but low purity (50-70%), set of 6.6 million candidate quasars and 4.8 million candidate galaxies. We provide queries that select purer sub-samples of these containing 1.9 million probable quasars and 2.9 million probable galaxies (both 95% purity). We also use high quality BP/RP spectra of 43 thousand high probability quasars over the redshift range 0.05-4.36 to construct a composite quasar spectrum spanning restframe wavelengths from 72-100 nm.
We present the serendipitous radio-continuum discovery of a likely Galactic supernova remnant (SNR) G305.4-2.2. This object displays a remarkable circular symmetry in shape, making it one of the most circular Galactic SNRs known. Nicknamed Teleios due to its symmetry, it was detected in the new Australian Square Kilometre Array Pathfinder (ASKAP) Evolutionary Map of the Universe (EMU) radio-continuum images with an angular size of 1320"x1260" and PA = 0 deg. While there is a hint of possible H$\alpha$ and gamma-ray emission, Teleios is exclusively seen at radio-continuum frequencies. Interestingly, Teleios is not only almost perfectly symmetric, but it also has one of the lowest surface brightnesses discovered among Galactic SNRs and a steep spectral index of $\alpha=-0.6\pm0.3$. Our estimates from HI studies and the Sigma-D relation place Teleios as a type Ia SNR at a distance of either ~2.2 kpc or ~7.7 kpc. This indicates two possible scenarios, either a young (under 1000 yr) or an older SNR (over 10000 yr). With a corresponding diameter of 14/48 pc, our evolutionary studies place Teleios at either the early or the late Sedov phase, depending on the distance estimate. However, our modelling also predicts X-ray emission, which we do not see in the present generation of eROSITA images. We also explored a type Iax explosion scenario that points to a much closer distance of <1 kpc and a Teleios size of only ~3.3 pc, which would be similar to the only known type Iax remnant, SN 1181. Unfortunately, all examined scenarios have their challenges, and no definitive supernova (SN) origin type can be established at this stage. Teleios's symmetrical shape suggests expansion into a rarefied and isotropic ambient medium. The low radio surface brightness and the lack of pronounced polarisation can be explained by a high level of ambient rotation measure (RM), with the largest RM being observed at the centre.
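The quoted 14/48 pc diameters follow directly from the angular size and the two distance estimates via the small-angle relation $D = \theta d$. A quick back-of-the-envelope check (using the mean of the two angular axes, an assumption on our part):

```python
# Back-of-the-envelope check of the quoted 14/48 pc diameters, using the
# mean angular size of Teleios and the small-angle approximation.
ARCSEC_PER_RAD = 206_265
theta = (1320 + 1260) / 2            # mean angular diameter in arcsec
for d_pc in (2_200, 7_700):          # the two HI / Sigma-D distance estimates
    D = theta / ARCSEC_PER_RAD * d_pc
    print(f"d = {d_pc / 1000:.1f} kpc  ->  D = {D:.0f} pc")
# d = 2.2 kpc  ->  D = 14 pc
# d = 7.7 kpc  ->  D = 48 pc
```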
This paper addresses the Longest Filled Common Subsequence (LFCS) problem, a challenging NP-hard problem with applications in bioinformatics, including gene mutation prediction and genomic data reconstruction. Existing approaches, including exact, metaheuristic, and approximation algorithms, have primarily been evaluated on small-sized instances, which offer limited insights into their scalability. In this work, we introduce a new benchmark dataset with significantly larger instances and demonstrate that existing datasets lack the discriminative power needed to meaningfully assess algorithm performance at scale. To solve large instances efficiently, we utilize an adaptive Construct, Merge, Solve, Adapt (CMSA) framework that iteratively generates promising subproblems via component-based construction and refines them using feedback from prior iterations. Subproblems are solved using an external black-box solver. Extensive experiments on both standard and newly introduced benchmarks show that the proposed adaptive CMSA achieves state-of-the-art performance, outperforming five leading methods. Notably, on 1,510 problem instances with known optimal solutions, our approach solves 1,486 of them -- achieving over 99.9% optimal solution quality and demonstrating exceptional scalability. We additionally propose a novel application of LFCS for song identification from degraded audio excerpts as an engineering contribution, using real-world energy-profile instances from popular music. Finally, we conducted an empirical explainability analysis to identify the critical feature combinations that influence algorithm performance, revealing the key problem features that contribute to the success or failure of the approaches across different instance types.
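For orientation, the generic CMSA control loop that such work builds on (due to Blum and co-workers) can be sketched as follows. The callback names (`construct`, `solve_subproblem`, `evaluate`) are hypothetical stand-ins for the problem-specific LFCS components and the external black-box solver; this is a schematic of the template, not the authors' code.

```python
def cmsa(instance, construct, solve_subproblem, evaluate,
         n_constructions=10, age_max=3, n_iters=100):
    """Schematic Construct-Merge-Solve-Adapt loop.

    construct(instance)     -> iterable of solution components (randomized)
    solve_subproblem(comps) -> solution (a set of components) restricted to comps
    evaluate(solution)      -> objective value (higher is better)
    """
    age = {}                            # component -> iterations since last use
    best, best_val = None, float("-inf")
    for _ in range(n_iters):
        # Construct: probabilistically generate solutions, merge their components.
        for _ in range(n_constructions):
            for comp in construct(instance):
                age.setdefault(comp, 0)  # new components start at age 0
        # Solve: hand the merged subproblem to the exact/black-box solver.
        solution = solve_subproblem(set(age))
        val = evaluate(solution)
        if val > best_val:
            best, best_val = solution, val
        # Adapt: reset ages of used components, age the rest, drop stale ones.
        for comp in list(age):
            age[comp] = 0 if comp in solution else age[comp] + 1
            if age[comp] > age_max:
                del age[comp]
    return best
```

The adaptive element described in the abstract corresponds to biasing `construct` with feedback from earlier iterations.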
Solving the many-electron problem, even approximately, is one of the most challenging and simultaneously most important problems in contemporary condensed matter physics with various connections to other fields. The standard approach is to follow a divide and conquer strategy that combines various numerical and analytical techniques. A crucial step in this strategy is the derivation of an effective model for a subset of degrees of freedom by a procedure called downfolding, which often corresponds to integrating out energy scales far away from the Fermi level. In this work we present a rigorous formulation of this downfolding procedure, which complements the renormalization group picture put forward by Honerkamp [PRB 85, 195129 (2012)]. We derive an exact effective model in an arbitrarily chosen target space (e.g. low-energy degrees of freedom) by explicitly integrating out the rest space (e.g. high-energy degrees of freedom). Within this formalism we state conditions that justify a perturbative truncation of the downfolded effective interactions to just a few low-order terms. Furthermore, we utilize the exact formalism to formally derive the widely used constrained random phase approximation (cRPA), uncovering underlying approximations and highlighting relevant corrections in the process. Lastly, we detail different contributions in the material examples of fcc Nickel and the infinite-layer cuprate SrCuO$_2$. Our results open up a new pathway to obtain effective models in a controlled fashion and to judge whether a chosen target space is suitable.
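For readers unfamiliar with the cRPA referenced above, its defining step can be stated compactly. In standard notation (schematically, suppressing spatial and orbital indices), one splits the RPA polarization $P$ into a target-space part $P^{t}$ and a rest-space part $P^{r}$, screens the bare Coulomb interaction $v$ with the rest space only, and uses the result as the effective interaction of the low-energy model:
\[
P = P^{t} + P^{r}, \qquad
U(\omega) \equiv W^{r}(\omega) = \bigl[\,1 - v\,P^{r}(\omega)\,\bigr]^{-1} v ,
\]
so that the screening by $P^{t}$ is generated again when the effective model is solved. The paper's exact formalism makes precise which terms this partial screening neglects.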
We prove three structural impossibility results demonstrating that fuzzy metric spaces cannot capture essential features of quantum state geometry. First, we show they cannot model destructive interference between concepts due to phase insensitivity. Second, we prove there is no distance-preserving embedding from quantum state space into any fuzzy metric space. Third, we establish that fuzzy logic cannot distinguish symmetric from antisymmetric concept combinations -- a fundamental limitation for modeling structured knowledge. These theorems collectively show that fuzzy frameworks are structurally incapable of representing intrinsic uncertainty, for which quantum mechanics provides a superior, geometrically coherent alternative.
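A minimal worked example of the first limitation (an illustration consistent with the abstract, not the paper's own proof): the two superpositions
\[
|\psi_{\pm}\rangle = \tfrac{1}{\sqrt{2}}\bigl(|a\rangle \pm |b\rangle\bigr)
\]
have identical amplitude moduli, $|\langle a|\psi_{+}\rangle| = |\langle a|\psi_{-}\rangle| = |\langle b|\psi_{\pm}\rangle| = 1/\sqrt{2}$, so any phase-insensitive description built from membership degrees assigns them the same representation. Yet $\langle\psi_{+}|\psi_{-}\rangle = 0$: the two states are orthogonal and hence perfectly distinguishable, precisely because of the relative phase that fuzzy membership cannot encode.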
We present the result of an experiment to measure the electric dipole moment (EDM) of the neutron at the Paul Scherrer Institute using Ramsey's method of separated oscillating magnetic fields with ultracold neutrons (UCN). Our measurement stands in the long history of EDM experiments probing physics violating time reversal invariance. The salient features of this experiment were the use of a Hg-199 co-magnetometer and an array of optically pumped cesium vapor magnetometers to cancel and correct for magnetic field changes. The statistical analysis was performed on blinded datasets by two separate groups while the estimation of systematic effects profited from an unprecedented knowledge of the magnetic field. The measured value of the neutron EDM is $d_{\rm n} = (0.0\pm1.1_{\rm stat}\pm0.2_{\rm sys})\times10^{-26}\,e\,{\rm cm}$.
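For context, the textbook relation behind a Ramsey EDM measurement (up to sign conventions for the negative neutron magnetic moment): with the electric field $E$ applied parallel or antiparallel to the magnetic field $B$, the spin precession frequencies are
\[
h\nu_{\uparrow\uparrow} = \bigl|2\mu_{\rm n}B + 2d_{\rm n}E\bigr|, \qquad
h\nu_{\uparrow\downarrow} = \bigl|2\mu_{\rm n}B - 2d_{\rm n}E\bigr|
\quad\Longrightarrow\quad
d_{\rm n} \simeq \frac{h\,(\nu_{\uparrow\uparrow} - \nu_{\uparrow\downarrow})}{4E},
\]
which is why the co-magnetometer's cancellation of magnetic-field drifts, which would otherwise mimic a frequency shift, is the experiment's salient feature.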
Personality Computing is a field at the intersection of Personality Psychology and Computer Science. Since its beginnings in 2005, research in the field has used computational methods to understand and predict human personality traits. The field has expanded very rapidly: by analyzing digital footprints (text, images, social media, etc.), it has helped to develop systems that recognize and even replicate human personality. While offering promising applications in talent recruiting, marketing, and healthcare, the ethical implications of Personality Computing are significant. Concerns include data privacy, algorithmic bias, and the potential for manipulation by personality-aware Artificial Intelligence. This paper provides an overview of the field, explores key methodologies, discusses the challenges and threats, and outlines potential future directions for the responsible development and deployment of Personality Computing technologies.
The production of multiple Higgs bosons at the CERN LHC provides a direct way to measure the trilinear and quartic Higgs self-interaction strengths as well as potential access to beyond the standard model effects that can enhance production at large transverse momentum $p_{\mathrm{T}}$. The largest event fraction arises from the fully hadronic final state in which every Higgs boson decays to a bottom quark-antiquark pair ($b\bar{b}$). This introduces a combinatorial challenge known as the \emph{jet assignment problem}: assigning jets to sets representing Higgs boson candidates. Symmetry-preserving attention networks (SPA-Nets) have been developed to address this challenge. However, the complexity of jet assignment increases when simultaneously considering both $H\rightarrow b\bar{b}$ reconstruction possibilities, i.e., two "resolved" small-radius jets each containing a shower initiated by a $b$-quark or one "boosted" large-radius jet containing a merged shower initiated by a $b\bar{b}$ pair. The latter improves the reconstruction efficiency at high $p_{\mathrm{T}}$. In this work, we introduce a generalization to the SPA-Net approach to simultaneously consider both boosted and resolved reconstruction possibilities and unambiguously interpret an event as "fully resolved", "fully boosted", or in between. We report the performance of baseline methods, the original SPA-Net approach, and our generalized version on nonresonant $HH$ and $HHH$ production at the LHC. Considering both boosted and resolved topologies, our SPA-Net approach increases the Higgs boson reconstruction purity by 57--62\% and the efficiency by 23--38\% compared to the baseline method, depending on the final state.
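To see why jet assignment is combinatorial even in the simplest fully resolved case, consider $HH \rightarrow b\bar{b}b\bar{b}$ with exactly four small-radius jets: there are only three distinct ways to split them into two unordered Higgs-candidate pairs. The sketch below enumerates them and scores each pairing by the closeness of its dijet masses to $m_H$, a simple $\chi^2$-style baseline illustrating the combinatorics, not SPA-Net; jets are assumed given as $(E, p_x, p_y, p_z)$ four-vectors in GeV.

```python
import itertools
import math

M_H = 125.0  # Higgs boson mass in GeV

def inv_mass(*jets):
    """Invariant mass of summed four-vectors (E, px, py, pz)."""
    E, px, py, pz = (sum(j[i] for j in jets) for i in range(4))
    return math.sqrt(max(E**2 - px**2 - py**2 - pz**2, 0.0))

def best_pairing(jets):
    """Enumerate the 3 distinct splits of 4 jets into 2 unordered pairs and
    pick the one whose dijet masses are jointly closest to m_H."""
    assert len(jets) == 4
    pairings = []
    for pair in itertools.combinations(range(4), 2):
        if 0 in pair:  # fixing jet 0 in the first pair avoids double counting
            rest = tuple(i for i in range(4) if i not in pair)
            pairings.append((pair, rest))
    def chi2(p):
        return sum((inv_mass(jets[i], jets[j]) - M_H) ** 2 for i, j in p)
    return min(pairings, key=chi2)
```

With boosted candidates added, each Higgs may instead be one large-radius jet, so the hypothesis space grows to mixed resolved/boosted interpretations, which is the ambiguity the generalized SPA-Net resolves.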
From neuroscience and genomics to systems biology and ecology, researchers rely on clustering similarity data to uncover modular structure. Yet widely used clustering methods, such as hierarchical clustering, k-means, and WGCNA, lack principled model selection, leaving them susceptible to noise. A common workaround sparsifies a correlation matrix representation to remove noise before clustering, but this extra step introduces arbitrary thresholds that can distort the structure and lead to unreliable results. To detect reliable clusters, we capitalize on recent advances in network science to unite sparsification and clustering with principled model selection. We test two Bayesian community detection methods, the Degree-Corrected Stochastic Block Model and the Regularized Map Equation, both grounded in the Minimum Description Length principle for model selection. In synthetic data, they outperform traditional approaches, detecting planted clusters under high-noise conditions and with fewer samples. Compared to WGCNA on gene co-expression data, the Regularized Map Equation identifies more robust and functionally coherent gene modules. Our results establish Bayesian community detection as a principled and noise-resistant framework for uncovering modular structure in high-dimensional data across fields.
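As a pointer to practice, MDL-based fitting of the degree-corrected SBM is available off the shelf, for example in the graph-tool library. The sketch below fits a DCSBM to a bundled example network; how similarity or co-expression data are turned into a graph follows the respective papers' pipelines and is not shown here, and the library choice is ours, not necessarily the authors'.

```python
# Minimal sketch of MDL-based model selection with the degree-corrected SBM,
# using the graph-tool library (https://graph-tool.skewed.de).
import graph_tool.all as gt

g = gt.collection.data["football"]           # bundled example network
state = gt.minimize_blockmodel_dl(           # minimizes the description length
    g, state_args=dict(deg_corr=True))       # degree-corrected SBM
print("inferred number of groups:", state.get_B())
print("description length (nats):", state.entropy())
```

Because the description length penalizes model complexity, the number of groups is selected automatically rather than supplied by the user, which is the "principled model selection" the abstract contrasts with k-means or WGCNA.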
The topological Kuramoto model generalizes classical synchronization models by including higher-order interactions, with oscillator dynamics defined on cells of arbitrary dimension within simplicial or cell complexes. In this article, we demonstrate multistability in the topological Kuramoto model and develop the topological nonlinear Kirchhoff conditions algorithm to identify all phase-locked states on arbitrary cell complexes. The algorithm is based on a generalization of Kirchhoff's laws to cell complexes of arbitrary dimension and nonlinear interactions between cells. By applying this framework to rings, Platonic solids, and simplexes, as minimal representative motifs of larger networks, we derive explicit bounds (based on winding number constraints) that determine the number of coexisting stable states. We uncover structural cascades of multistability, inherited from both lower and higher dimensions, and demonstrate that cell complexes can generate richer multistability patterns than simplicial complexes of the same dimension. Moreover, we find that multistability patterns in cell complexes appear to be determined by the number of boundary cells, hinting at a possible universal pattern.
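For concreteness, in one common convention the topological Kuramoto dynamics for phases $\theta$ attached to the $k$-cells of a complex with boundary matrices $B_k$ (mapping $k$-cells to $(k{-}1)$-cells) reads
\[
\dot{\theta} = \omega - \sigma\, B_{k}^{\top} \sin\!\left(B_{k}\,\theta\right) - \sigma\, B_{k+1} \sin\!\left(B_{k+1}^{\top}\,\theta\right),
\]
where the first coupling term acts through shared lower-dimensional faces and the second through shared higher-dimensional cofaces. Phase-locked states are the fixed points of this flow, and the winding-number bounds mentioned above count how many such fixed points can be simultaneously stable.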
The AGN Space Telescope and Optical Reverberation Mapping 2 (STORM 2) campaign targeted Mrk 817 with intensive multi-wavelength monitoring and found its soft X-ray emission to be strongly absorbed. We present results from 157 near-IR spectra with an average cadence of a few days. Whereas the hot dust reverberation signal as tracked by the continuum flux does not have a clear response, we recover a dust reverberation radius of $\sim90$ light-days from the blackbody dust temperature light-curve. This radius is consistent with previous photometric reverberation mapping results when Mrk 817 was in an unobscured state. The heating/cooling process we observe indicates that the inner limit of the dusty torus is set by a process other than sublimation, rendering it a luminosity-invariant `dusty wall' of a carbonaceous composition. Assuming thermal equilibrium for dust optically thick to the incident radiation, we derive a luminosity of $\sim6\times10^{44}$ erg s$^{-1}$ for the source heating it. This luminosity is similar to that of the obscured spectral energy distribution, assuming a disk with an Eddington accretion rate of $\dot{m}\sim0.2$. Alternatively, the dust is illuminated by an unobscured lower luminosity disk with $\dot{m}\sim0.1$, which permits the UV/optical continuum lags in the high-obscuration state to be dominated by diffuse emission from the broad-line region. Finally, we find hot dust extended on scales $>140$-$350$ pc, associated with the rotating disk of ionised gas we observe in spatially-resolved [SIII] $\lambda9531$ images. Its likely origin is in the compact bulge of the barred spiral host galaxy, where it is heated by a nuclear starburst.
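For orientation, the quoted luminosity follows from the stated thermal-equilibrium assumption: a blackbody grain of radius $a$ at distance $R$ from a source of luminosity $L$ absorbs $\bigl(L/4\pi R^{2}\bigr)\,\pi a^{2}$ and emits $4\pi a^{2}\sigma T^{4}$, so
\[
L = 16\pi\,\sigma\,R^{2}\,T^{4},
\]
which for $R \approx 90$ light-days indeed returns $L \approx 6\times10^{44}$ erg s$^{-1}$ at a hot-dust temperature of $T \approx 1400$ K (the temperature value is our indicative assumption; the abstract does not quote one).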
The Closest String Problem is an NP-hard problem that aims to find a string with minimum distance to every sequence in a given set of strings. Its applications can be found in coding theory, computational biology, and the design of degenerate primers, among others. There are efficient exact algorithms that have reached high-quality solutions for binary sequences. However, there is still room for improvement concerning the quality of solutions over DNA and protein sequences. In this paper, we introduce a three-stage algorithm that comprises the following process: first, we apply a novel alphabet pruning method to reduce the search space and effectively find promising search regions. Second, we employ a variant of beam search to find a heuristic solution, guided by a newly developed function based on an expected-distance heuristic score of partial solutions. Last, we introduce a local search to improve the quality of the solution obtained from the beam search. Furthermore, due to the lack of real-world benchmarks, two real-world datasets are introduced to verify the robustness of the method. The extensive experimental results show that the proposed method outperforms the previous approaches from the literature.
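A schematic of the beam-search stage in Python: partial solutions are extended one position at a time, and each candidate is ranked by its worst current Hamming distance plus a simple expected number of future mismatches. The guiding function here is a deliberately simplified stand-in for the paper's expected-distance heuristic, and `beam_width` is an arbitrary choice.

```python
def beam_search_closest_string(strings, alphabet, beam_width=100):
    """Build a target string position by position, keeping the best
    `beam_width` partial solutions under an expected-distance score."""
    n, length = len(strings), len(strings[0])
    p_mismatch = 1.0 - 1.0 / len(alphabet)   # expected per-position mismatch
    beam = [((0,) * n, "")]                  # (per-string distances, prefix)
    for pos in range(length):
        candidates = []
        for dists, prefix in beam:
            for ch in alphabet:
                new = tuple(d + (s[pos] != ch) for d, s in zip(dists, strings))
                # Score: worst current distance plus expected future mismatches
                # (a crude proxy for the paper's guiding function).
                score = max(new) + p_mismatch * (length - pos - 1)
                candidates.append((score, new, prefix + ch))
        candidates.sort(key=lambda c: c[0])
        beam = [(d, p) for _, d, p in candidates[:beam_width]]
    return min(beam, key=lambda b: max(b[0]))[1]

print(beam_search_closest_string(["ACGT", "AGGT", "ACGA"], "ACGT"))
```

The pruning stage would shrink `alphabet` per position before this search runs, and the local search would then perturb the returned string to reduce its maximum distance further.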
The rapid advancements in artificial intelligence (AI) have presented new opportunities for enhancing efficiency and economic competitiveness across various industries, especially in banking. Machine learning (ML), as a subset of artificial intelligence, enables systems to adapt and learn from vast datasets, revolutionizing decision-making processes, fraud detection, and customer service automation. However, these innovations also introduce new challenges, particularly in the realm of cybersecurity. Adversarial attacks, such as data poisoning and evasion attacks, represent critical threats to machine learning models, exploiting vulnerabilities to manipulate outcomes or compromise sensitive information. Furthermore, this study highlights the dual-use nature of AI tools, which can also be exploited by malicious actors. To address these challenges, the paper emphasizes the importance of developing machine learning models with key characteristics such as security, trust, resilience, and robustness. These features are essential to mitigating risks and ensuring the secure deployment of AI technologies in the banking sector, where the protection of financial data is paramount. The findings underscore the urgent need for enhanced cybersecurity frameworks and continuous improvements in defensive mechanisms. By exploring both opportunities and risks, this paper aims to guide the responsible integration of AI in the banking sector, paving the way for innovation while safeguarding against emerging threats.
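To make the evasion-attack threat concrete, here is a minimal FGSM-style sketch against a stand-in logistic-regression fraud score: a small perturbation aligned against the sign of the input gradient lowers the model's score while barely changing the transaction features. All weights and features are synthetic; this is a generic illustration of the attack class, not an attack on any real banking model.

```python
import numpy as np

rng = np.random.default_rng(0)
w, b = rng.normal(size=8), 0.1         # stand-in for a trained model's weights
x = np.sign(w) * 0.8                   # a transaction the model flags as fraud

sigmoid = lambda z: 1 / (1 + np.exp(-z))
score = sigmoid(w @ x + b)             # probability the model flags fraud

# Gradient of the score w.r.t. the input points toward "more fraudulent";
# stepping against its sign lowers the score with a tiny feature change.
grad = score * (1 - score) * w
eps = 0.5                              # perturbation budget per feature
x_adv = x - eps * np.sign(grad)

print(f"score before: {score:.3f}, after: {sigmoid(w @ x_adv + b):.3f}")
```

Robustness measures such as adversarial training and input sanitization, of the kind the paper advocates, aim to shrink exactly this gap.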
The paper comprehensively reviews the phase space foundations of quantum theory, detailing the interrelations of Wigner, Husimi, and Glauber-Sudarshan quasi-probability distributions. It then applies this framework to analytically determine the Husimi quasi-probability function for the output state of a linear quantum amplifier, precisely accounting for operator ordering.
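For reference, in the standard convention the Husimi function of a state $\hat{\rho}$ is its coherent-state expectation,
\[
Q(\alpha) = \frac{1}{\pi}\,\langle\alpha|\,\hat{\rho}\,|\alpha\rangle ,
\]
which is everywhere non-negative and generates anti-normally ordered operator averages. That operator-ordering bookkeeping is precisely what distinguishes it from the Wigner (symmetric ordering) and Glauber-Sudarshan (normal ordering) distributions, and what the amplifier calculation above must track through the gain process.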
Given the exponential advancement in AI technologies and the potential escalation of harmful effects from recommendation systems, it is crucial to simulate and evaluate these effects early on. Doing so can help prevent possible damage to both societies and technology companies. This paper introduces the Recommender Systems LLMs Playground (RecSysLLMsP), a novel simulation framework leveraging Large Language Models (LLMs) to explore the impacts of different content recommendation setups on user engagement and polarization in social networks. By creating diverse AI agents (AgentPrompts) with descriptive, static, and dynamic attributes, we assess their autonomous behaviour across three scenarios: Plurality, Balanced, and Similarity. Our findings reveal that the Similarity Scenario, which aligns content with user preferences, maximizes engagement while potentially fostering echo chambers. Conversely, the Plurality Scenario promotes diverse interactions but produces mixed engagement results. Our study emphasizes the need for a careful balance in recommender system designs to enhance user satisfaction while mitigating societal polarization. It underscores the unique value and challenges of incorporating LLMs into simulation environments. The benefits of RecSysLLMsP lie in its potential to calculate polarization effects, which is crucial for assessing societal impacts and determining user engagement levels with diverse recommender system setups. This advantage is essential for developing and maintaining a successful business model for social media companies. However, the study's limitations revolve around accurately emulating reality. Future efforts should validate the similarity in behaviour between real humans and AgentPrompts and establish metrics for measuring polarization scores.