University of Montenegro
The GUE framework introduces an unsupervised, multi-task approach for Synthetic Aperture Radar (SAR) image processing by manipulating the latent space of a Generative Adversarial Network. This method achieves competitive performance in despeckling and segmentation tasks while generating semantically consistent rotated target data for improved Automatic Target Recognition.
We report on the observation and measurement of astrometry, photometry, morphology, and activity of the interstellar object 3I/ATLAS, also designated C/2025 N1 (ATLAS), with the NSF-DOE Vera C. Rubin Observatory. The third interstellar object, comet 3I/ATLAS, was first discovered on UT 2025 July 1. Serendipitously, the Rubin Observatory collected imaging in the area of the sky inhabited by the object during regular commissioning activities. We successfully recovered object detections from Rubin visits spanning UT 2025 June 21 (10 days before discovery) to UT 2025 July 7. Facilitated by Rubin's high resolution and large aperture, we detect cometary activity as early as UT 2025 June 21 and observe it throughout the imaging sequence. We measure the location and magnitude of the object on 37 Rubin images in r, i, and z bands, with typical precision of about 20 mas (100 mas, systematic) and about 10 mmag, respectively. We use these to derive improved orbit solutions, and to show there is no detectable photometric variability on hourly timescales. We derive a V-band absolute magnitude of H_V = (13.7 +/- 0.2) mag, and an equivalent effective nucleus radius of around (5.6 +/- 0.7) km. These data represent the earliest observations of this object by a large (8-meter class) telescope reported to date, and illustrate the type of measurements (and discoveries) Rubin's Legacy Survey of Space and Time (LSST) will begin to provide once operational later this year.
We present a new computational framework combining coarse-graining techniques with bootstrap methods to study quantum many-body systems. The method efficiently computes rigorous upper and lower bounds on both zero- and finite-temperature expectation values of any local observables of infinite quantum spin chains. This is achieved by using tensor networks to coarse-grain bootstrap constraints, including positivity, translation invariance, equations of motion, and energy-entropy balance inequalities. Coarse-graining allows access to constraints from significantly larger subsystems than previously possible, yielding tighter bounds compared to those obtained without coarse-graining.
Deep Neural Networks (DNN) and especially Convolutional Neural Networks (CNN) are a de-facto standard for the analysis of large volumes of signals and images. Yet, their development and underlying principles have largely been treated in an ad-hoc, black-box fashion. To help demystify CNNs, we revisit their operation from first principles and a matched filtering perspective. We establish that the convolution operation within CNNs, their very backbone, represents a matched filter which examines the input signal/image for the presence of pre-defined features. This perspective is shown to be physically meaningful, and serves as a basis for a step-by-step tutorial on the operation of CNNs, including pooling, zero padding, and various means of dimensionality reduction. Starting from first principles, both the feed-forward pass and the learning stage (via back-propagation) are illuminated in detail through a worked-out numerical example and the corresponding visualizations. It is our hope that this tutorial will help shed new light on, and bring physical intuition to, the understanding and further development of deep neural networks.
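A minimal sketch of the matched-filtering view of convolution described in this abstract: a convolutional kernel responds most strongly where the input locally matches its (flipped) template. The signal, feature pattern, and noise level below are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

# Toy 1-D illustration: a CNN convolution is, up to kernel flipping, a
# cross-correlation of the input with the learned kernel, i.e. a matched
# filter for the feature encoded in that kernel.
feature = np.array([1.0, 2.0, 1.0])            # pattern the "layer" looks for
signal = np.zeros(32)
signal[10:13] = feature                        # embed the pattern at position 10
signal += 0.1 * np.random.default_rng(0).normal(size=signal.size)

kernel = feature / np.linalg.norm(feature)     # matched-filter template
response = np.correlate(signal, kernel, mode="valid")

print("strongest response at index:", int(np.argmax(response)))  # ~10
```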
The paper addresses acoustic vehicle detection and speed estimation from single-sensor measurements. In a supervised learning approach, we predict the vehicle's pass-by instant as the minimum of the clipped vehicle-to-microphone distance, which is itself predicted from the mel-spectrogram of the input audio. In addition, mel-spectrogram-based features are used directly for vehicle speed estimation, without introducing any intermediate features. The results show that the proposed features can be used for accurate vehicle detection and speed estimation, with an average error of 7.87 km/h. If we formulate speed estimation as a classification problem, with a 10 km/h discretization interval, the proposed method attains an average accuracy of 48.7% for correct class prediction and 91.0% when an offset of one class is allowed. The proposed method is evaluated on a dataset of 304 urban-environment on-field recordings of ten different vehicles.
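A small sketch of the pass-by detection idea from this abstract: given per-frame vehicle-to-microphone distances predicted by some regressor from mel-spectrogram frames (the regressor itself is not shown), the pass-by instant is taken as the minimiser of the clipped distance. The frame rate, clipping threshold, and the synthetic distance profile are illustrative assumptions.

```python
import numpy as np

# Hypothetical regressor output for a 10 s recording at 50 frames/s:
# a vehicle approaching, passing at ~4.2 s, then receding.
frame_rate_hz = 50.0
clip_at_m = 20.0                       # distances beyond this carry little acoustic information
t = np.arange(500) / frame_rate_hz
predicted_distance = np.sqrt((15.0 * (t - 4.2)) ** 2 + 7.0 ** 2)

# Clip the distance and take its minimiser as the pass-by instant.
clipped = np.minimum(predicted_distance, clip_at_m)
pass_by_instant_s = t[np.argmin(clipped)]
print(f"estimated pass-by instant: {pass_by_instant_s:.2f} s")
```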
Accurate speed estimation of road vehicles is important for several reasons. One is speed limit enforcement, which represents a crucial tool in decreasing traffic accidents and fatalities. Compared with other research areas and domains, the number of available datasets for vehicle speed estimation is still very limited. We present a dataset of on-road audio-video recordings of single vehicles passing by a camera at known speeds, held constant by the on-board cruise control. The dataset contains thirteen vehicles, selected to be as diverse as possible in terms of manufacturer, production year, engine type, power and transmission, resulting in a total of 400 annotated audio-video recordings. The dataset is fully available and intended as a public benchmark to facilitate research in audio-video vehicle speed estimation. In addition to the dataset, we propose a cross-validation strategy which can be used when evaluating machine learning models for vehicle speed estimation. Two approaches to the training-validation split of the dataset are proposed.
In deep-inelastic positron-proton scattering, the lepton-jet azimuthal angular asymmetry is measured using data collected with the H1 detector at HERA. When the average transverse momentum of the lepton-jet system, $\lvert \vec{P}_\perp \rvert$, is much larger than the total transverse momentum of the system, $\lvert \vec{q}_\perp \rvert$, the asymmetry between parallel and antiparallel configurations of $\vec{P}_\perp$ and $\vec{q}_\perp$ is expected to be generated by initial and final state soft gluon radiation and can be predicted using perturbation theory. Quantifying the angular properties of the asymmetry therefore provides an additional test of the strong force. Studying the asymmetry is important for future measurements of intrinsic asymmetries generated by the proton's constituents through Transverse Momentum Dependent (TMD) Parton Distribution Functions (PDFs), where this asymmetry constitutes a dominant background. Moments of the azimuthal asymmetries are measured using a machine learning method for unfolding that does not require binning.
We propose the idea of using Kuramoto models (including their higher-dimensional generalizations) for machine learning over non-Euclidean data sets. These models are systems of matrix ODEs describing collective motions (swarming dynamics) of abstract particles (generalized oscillators) on spheres, homogeneous spaces and Lie groups. Such models have been extensively studied since the beginning of the twenty-first century, both in statistical physics and in control theory. They provide a suitable framework for encoding maps between various manifolds and are capable of learning over spherical and hyperbolic geometries. In addition, they can learn coupled actions of transformation groups (such as special orthogonal, unitary and Lorentz groups). Furthermore, we give an overview of families of probability distributions that provide appropriate statistical models for probabilistic modeling and inference in Geometric Deep Learning. We argue in favor of using statistical models which arise from Kuramoto models in the continuum limit of particles. The most convenient families of probability distributions are those which are invariant with respect to actions of certain symmetry groups.
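For concreteness, a minimal sketch of the classical Kuramoto model on the circle, the simplest instance of the swarming dynamics mentioned above; the sphere and Lie-group generalizations follow the same pattern with matrix-valued states. All parameter values below are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n, coupling, dt, steps = 50, 1.5, 0.01, 2000
omega = rng.normal(0.0, 0.5, n)         # natural frequencies of the oscillators
theta = rng.uniform(0.0, 2 * np.pi, n)  # initial phases

# Forward-Euler integration of d(theta_i)/dt = omega_i + (K/n) * sum_j sin(theta_j - theta_i).
for _ in range(steps):
    dtheta = omega + (coupling / n) * np.sin(theta[None, :] - theta[:, None]).sum(axis=1)
    theta = (theta + dt * dtheta) % (2 * np.pi)

# Order parameter r in [0, 1]: r -> 1 indicates phase synchronisation.
r = np.abs(np.exp(1j * theta).mean())
print(f"order parameter after integration: {r:.3f}")
```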
Letter of intent describing SiD (Silicon Detector) for consideration by the International Linear Collider IDAG panel. This detector concept is founded on the use of silicon detectors for vertexing, tracking, and electromagnetic calorimetry. The detector has been cost-optimized as a general-purpose detector for a 500 GeV electron-positron linear collider.
In this review, the physics of Pfaffian paired states, in the context of the fractional quantum Hall effect, is discussed using field-theoretical approaches. The Pfaffian states are prime examples of topological (p-wave) Cooper pairing and are characterized by non-Abelian statistics of their quasiparticles. Here we focus on conditions for their realization and competition among them at half-integer filling factors. Using the Dirac composite fermion description, in the presence of a mass term, we study the influence of Landau level mixing in selecting a particular Pfaffian state. While the Pfaffian and anti-Pfaffian are selected when Landau level mixing is not strong and can be taken into account perturbatively, the PH Pfaffian state requires non-perturbative inclusion of at least two Landau levels. Our findings, for small Landau level mixing, are in accordance with numerical investigations in the literature, and call for a non-perturbative approach in the search for PH Pfaffian correlations. We demonstrate that a method based on the Chern-Simons field-theoretical approach can be used to generate characteristic interaction pseudo-potentials for Pfaffian paired states.
Diffractive electroproduction of rho and phi mesons is measured at HERA with the H1 detector in the elastic and proton dissociative channels. The data correspond to an integrated luminosity of 51 pb^-1. About 10500 rho and 2000 phi events are analysed in the kinematic range of squared photon virtuality 2.5 < Q^2 < 60 GeV^2, photon-proton centre of mass energy 35 < W < 180 GeV and squared four-momentum transfer to the proton |t| < 3 GeV^2. The total, longitudinal and transverse cross sections are measured as a function of Q^2, W and |t|. The measurements show a transition to a dominantly "hard" behaviour, typical of high gluon densities and small q\bar{q} dipoles, for Q^2 larger than 10 to 20 GeV^2. They support flavour independence of the diffractive exchange, expressed in terms of the scaling variable (Q^2 + M_V^2)/4, and proton vertex factorisation. The spin density matrix elements are measured as a function of kinematic variables. The ratio of the longitudinal to transverse cross sections, the ratio of the helicity amplitudes and their relative phases are extracted. Several of these measurements have not been performed before and bring new information on the dynamics of diffraction in a QCD framework. The measurements are discussed in the context of models using generalised parton distributions or universal dipole cross sections.
Many modern data analytics applications on graphs operate on domains where graph topology is not known a priori, and hence its determination becomes part of the problem definition, rather than serving as prior knowledge which aids the problem solution. Part III of this monograph starts by addressing ways to learn graph topology, from the case where the physics of the problem already suggests a possible topology, through to the most general cases where the graph topology is learned from the data. A particular emphasis is on graph topology definition based on the correlation and precision matrices of the observed data, combined with additional prior knowledge and structural conditions, such as the smoothness or sparsity of graph connections. For learning sparse graphs (with a small number of edges), the least absolute shrinkage and selection operator, known as LASSO, is employed, along with its graph-specific variant, the graphical LASSO. For completeness, both variants of LASSO are derived in an intuitive way and explained. An in-depth elaboration of the graph topology learning paradigm is provided through several examples on physically well-defined graphs, such as electric circuits, linear heat transfer, social and computer networks, and spring-mass systems. As many graph neural networks (GNN) and graph convolutional networks (GCN) are emerging, we also review the main trends in GNNs and GCNs from the perspective of graph signal filtering. The tensor representation of lattice-structured graphs is considered next, and it is shown that tensors (multidimensional data arrays) are a special class of graph signals, whereby the graph vertices reside on a high-dimensional regular lattice structure. This part of the monograph concludes with two emerging applications: financial data processing and the modeling of underground transportation networks.
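A minimal sketch of the precision-matrix route to graph topology learning mentioned above, using the graphical LASSO: nonzero off-diagonal entries of the estimated precision matrix are read off as graph edges. The synthetic data, the regularisation strength alpha, and the edge threshold are illustrative assumptions, not values from the monograph.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
n_samples, n_vertices = 500, 6
X = rng.normal(size=(n_samples, n_vertices))
X[:, 1] += 0.8 * X[:, 0]              # induce a dependency -> expected edge (0, 1)
X[:, 4] += 0.6 * X[:, 3]              # expected edge (3, 4)

# Sparse inverse-covariance (precision) estimate via graphical LASSO.
model = GraphicalLasso(alpha=0.05).fit(X)
precision = model.precision_

# The sparsity pattern of the precision matrix defines the learned graph.
edges = [(i, j) for i in range(n_vertices) for j in range(i + 1, n_vertices)
         if abs(precision[i, j]) > 0.01]
print("estimated edges:", edges)
```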
The idea of representing data in negatively curved manifolds has recently attracted a lot of attention and gave rise to the new research direction named hyperbolic machine learning (ML). In order to unveil the full potential of this new paradigm, efficient techniques for data analysis and statistical modeling in hyperbolic spaces are necessary. In the present paper, a rigorous mathematical framework for clustering in hyperbolic spaces is established. First, we introduce k-means clustering in hyperbolic balls, based on a novel definition of the barycenter. Second, we present an expectation-maximization (EM) algorithm for learning mixtures of novel probability distributions in hyperbolic balls. In this way, we lay the foundations of unsupervised learning in hyperbolic spaces.
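A rough sketch of k-means-style clustering in the Poincare ball, to make the setting concrete. The distance below is the standard hyperbolic distance on the ball; the centroid update, however, is a crude projected Euclidean mean used only as a stand-in for the barycenter defined in the paper, and the two synthetic clusters are illustrative.

```python
import numpy as np

def poincare_dist(u, v):
    # Hyperbolic distance between two points strictly inside the unit ball.
    uu, vv, duv = np.sum(u * u), np.sum(v * v), np.sum((u - v) ** 2)
    return np.arccosh(1.0 + 2.0 * duv / ((1.0 - uu) * (1.0 - vv)))

def hyperbolic_kmeans(points, k, iters=20, seed=0):
    rng = np.random.default_rng(seed)
    centers = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        labels = np.array([np.argmin([poincare_dist(p, c) for c in centers])
                           for p in points])
        for j in range(k):
            members = points[labels == j]
            if len(members):
                mean = members.mean(axis=0)              # Euclidean mean ...
                norm = np.linalg.norm(mean)
                centers[j] = mean if norm < 0.99 else 0.99 * mean / norm  # ... kept inside the ball
    return labels, centers

rng = np.random.default_rng(1)
cluster_a = 0.3 + 0.05 * rng.normal(size=(30, 2))
cluster_b = -0.3 + 0.05 * rng.normal(size=(30, 2))
labels, _ = hyperbolic_kmeans(np.vstack([cluster_a, cluster_b]), k=2)
print(labels)
```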
A recent study on the interpretability of real-valued convolutional neural networks (CNNs) (Stankovic and Mandic, 2023) has revealed a direct and physically meaningful link with the task of finding features in data through matched filters. However, applying this paradigm to illuminate the interpretability of complex-valued CNNs meets a formidable obstacle: the extension of matched filtering to a general class of noncircular complex-valued data, referred to here as the widely linear matched filter (WLMF), has been only implicit in the literature. To this end, to establish the interpretability of the operation of complex-valued CNNs, we introduce a general WLMF paradigm, provide its solution, and undertake an analysis of its performance. For rigor, our WLMF solution is derived without imposing any assumption on the probability density of the noise. The theoretical advantages of the WLMF over its standard strictly linear counterpart (SLMF) are provided in terms of their output signal-to-noise ratios (SNRs), with the WLMF consistently exhibiting an enhanced SNR. Moreover, the lower bound on the SNR gain of the WLMF is derived, together with the condition to attain this bound. This serves to revisit the convolution-activation-pooling chain in complex-valued CNNs through the lens of matched filtering, which reveals the potential of WLMFs to provide physical interpretability and enhance explainability of general complex-valued CNNs. Simulations demonstrate the agreement between the theoretical and numerical results.
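This is not the paper's WLMF derivation, but a small illustration of the statistic that separates widely linear from strictly linear processing: for noncircular complex data the pseudo-covariance E[z z^T] is nonzero, and it is exactly this quantity, ignored by a strictly linear matched filter, that a widely linear filter can exploit. The noise models below are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples, dim = 100_000, 3

# Circular noise: real and imaginary parts i.i.d. with equal power.
circular = (rng.normal(size=(n_samples, dim)) + 1j * rng.normal(size=(n_samples, dim))) / np.sqrt(2)
# Noncircular noise: real and imaginary parts with unequal power.
noncircular = 1.3 * rng.normal(size=(n_samples, dim)) + 0.3j * rng.normal(size=(n_samples, dim))

def pseudo_covariance(z):
    # Sample estimate of the complementary covariance E[z z^T] (no conjugation).
    return (z.T @ z) / len(z)

print("||pseudo-cov|| circular    :", np.linalg.norm(pseudo_covariance(circular)))
print("||pseudo-cov|| noncircular :", np.linalg.norm(pseudo_covariance(noncircular)))
```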
In the era of rapid digital communication, vast amounts of textual data are generated daily, demanding efficient methods for latent content analysis to extract meaningful insights. Large Language Models (LLMs) offer potential for automating this process, yet comprehensive assessments comparing their performance to human annotators across multiple dimensions are lacking. This study evaluates the reliability, consistency, and quality of seven state-of-the-art LLMs, including variants of OpenAI's GPT-4, Gemini, Llama, and Mixtral, relative to human annotators in analyzing sentiment, political leaning, emotional intensity, and sarcasm detection. A total of 33 human annotators and eight LLM variants assessed 100 curated textual items, generating 3,300 human and 19,200 LLM annotations, with LLMs evaluated across three time points to examine temporal consistency. Inter-rater reliability was measured using Krippendorff's alpha, and intra-class correlation coefficients assessed consistency over time. The results reveal that both humans and LLMs exhibit high reliability in sentiment analysis and political leaning assessments, with LLMs demonstrating higher internal consistency than humans. In emotional intensity, LLMs displayed higher agreement compared to humans, though humans rated emotional intensity significantly higher. Both groups struggled with sarcasm detection, evidenced by low agreement. LLMs showed excellent temporal consistency across all dimensions, indicating stable performance over time. This research concludes that LLMs, especially GPT-4, can effectively replicate human analysis in sentiment and political leaning, although human expertise remains essential for emotional intensity interpretation. The findings demonstrate the potential of LLMs for consistent and high-quality performance in certain areas of latent content analysis.
In this paper we propose an approach to implementing a specific relationship set between two entity sets, called the combinatorial relationship set. For the combinatorial relationship set B between entity sets G and I, the mapping cardinality is many-to-many. Additionally, entities from G can be uniquely encoded with a pair of values (h, k) generated by the procedure for numbering combinations of entities from I. The encoding procedure is based on the combinatorial number system, which provides a representation of all possible k-combinations of a set of n elements by a single number. In general, many-to-many relationship sets are represented by a relation or table, while the combinatorial relationship set is not physically stored as a separate table. Instead, all information is encapsulated in a single column added to G. The new column is a candidate key in G. An additional operation named Rank-Join, extending the fundamental relational algebra, is presented to combine information from G and I associated with a combinatorial relationship set. The motivation for the combinatorial relationship originates from challenges in designing and implementing multivalued dimensions and bridge tables in data-warehouse models.
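A minimal sketch of the combinatorial number system underpinning this encoding: a k-combination of entity indices is mapped to a single integer (its rank) and decoded back, which is how a many-to-many combinatorial relationship can be stored as one extra integer column instead of a separate bridge table. The example combination (1, 4, 6) is purely illustrative.

```python
from math import comb

def rank_combination(combination):
    """Map a strictly increasing k-combination, e.g. (1, 4, 6), to its rank."""
    return sum(comb(c, i + 1) for i, c in enumerate(sorted(combination)))

def unrank_combination(rank, k):
    """Recover the k-combination with the given rank."""
    result = []
    for i in range(k, 0, -1):
        # Greedily find the largest c with comb(c, i) <= remaining rank.
        c = i - 1
        while comb(c + 1, i) <= rank:
            c += 1
        result.append(c)
        rank -= comb(c, i)
    return tuple(reversed(result))

combo = (1, 4, 6)                      # indices of entities from I
code = rank_combination(combo)
print(code, unrank_combination(code, k=len(combo)))   # 27 (1, 4, 6)
```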
Recovery of arbitrarily positioned samples that are missing in sparse signals has recently attracted significant research interest. Sparse signals with heavily corrupted, arbitrarily positioned samples can be analyzed in the same way as compressively sensed signals, by omitting the corrupted samples and considering them unavailable during the recovery process. The reconstruction of missing samples is done using one of the well-known reconstruction algorithms. In this paper we propose a very simple and efficient adaptive variable-step algorithm, applied directly to the concentration measures, without reformulating the reconstruction problem in the standard linear programming form. Direct application of the gradient approach to the nondifferentiable forms of the measures leads us to introduce a variable step-size algorithm. A criterion for changing the adaptive algorithm parameters is presented. The results are illustrated on examples with sparse signals, including approximately sparse signals and noisy sparse signals.
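A simplified sketch of gradient-style reconstruction operating directly on a concentration measure (here the l1 norm of the DFT): the gradient with respect to the missing samples is estimated by finite differences, and the step is halved whenever it fails to decrease the measure. This step-adaptation rule is a plain stand-in for the criterion proposed in the paper, and the signal and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 64
t = np.arange(n)
x_true = np.cos(2 * np.pi * 5 * t / n) + 0.5 * np.sin(2 * np.pi * 12 * t / n)  # sparse in DFT

missing = rng.choice(n, size=20, replace=False)    # corrupted / unavailable positions
x = x_true.copy()
x[missing] = 0.0                                   # initialise missing samples to zero

measure = lambda y: np.sum(np.abs(np.fft.fft(y)))  # concentration measure
delta, prev = 1.0, measure(x)

for _ in range(300):
    grad = np.zeros(len(missing))
    for idx, m in enumerate(missing):              # finite-difference gradient estimate
        xp, xm = x.copy(), x.copy()
        xp[m] += delta
        xm[m] -= delta
        grad[idx] = (measure(xp) - measure(xm)) / (2 * delta)
    candidate = x.copy()
    candidate[missing] -= delta * grad
    cur = measure(candidate)
    if cur < prev:                                 # accept only improving steps
        x, prev = candidate, cur
    else:
        delta /= 2.0                               # variable step: shrink when no improvement

print("max reconstruction error:", np.max(np.abs(x - x_true)))
```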
The application of the compressive sensing approach to speech and musical signals is considered in this paper. Compressive sensing (CS) is a new approach to signal sampling that allows signal reconstruction from a small set of randomly acquired samples. The method is developed for signals that exhibit sparsity in a certain domain. Here we consider two sparsity domains: the discrete Fourier and the discrete cosine transform domains. Furthermore, two different types of audio signals are analyzed in terms of sparsity and CS performance - musical and speech signals. A comparative analysis of CS reconstruction using different numbers of signal samples is performed in the two sparsity domains. It is shown that CS can be successfully applied to both musical and speech signals, but that speech signals are more demanding in terms of the number of observations. Also, our results show that the discrete cosine transform domain allows better reconstruction with a lower number of observations than the Fourier transform domain, for both types of signals.
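A minimal sketch of CS acquisition and reconstruction in the DCT sparsity domain: a signal that is sparse in the DCT is observed at a small random subset of time samples and recovered with orthogonal matching pursuit. OMP is one standard recovery algorithm chosen here for illustration, not necessarily the solver used in the paper, and the synthetic signal stands in for a speech or music frame.

```python
import numpy as np
from scipy.fft import idct
from sklearn.linear_model import OrthogonalMatchingPursuit

rng = np.random.default_rng(0)
n, n_observed, sparsity = 256, 80, 8

# Build a signal that is exactly sparse in the DCT domain.
coeffs = np.zeros(n)
coeffs[rng.choice(n, sparsity, replace=False)] = rng.normal(0, 5, sparsity)
signal = idct(coeffs, norm="ortho")

# Observe a small random subset of time samples.
observed = np.sort(rng.choice(n, n_observed, replace=False))
dictionary = idct(np.eye(n), axis=0, norm="ortho")[observed, :]   # partial inverse-DCT matrix
measurements = signal[observed]

# Recover the sparse DCT coefficients, then the full signal.
omp = OrthogonalMatchingPursuit(n_nonzero_coefs=sparsity, fit_intercept=False)
omp.fit(dictionary, measurements)
reconstructed = idct(omp.coef_, norm="ortho")

print("max reconstruction error:", np.max(np.abs(reconstructed - signal)))
```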
We examine five setups where an agent (or two agents) seeks to explore an unknown environment without any prior information. Although seemingly very different, all of them can be formalized as Reinforcement Learning (RL) problems in hyperbolic spaces. More precisely, it is natural to endow the action spaces with the hyperbolic metric. We introduce statistical and dynamical models necessary for addressing problems of this kind and implement algorithms based on this framework. Throughout the paper we view RL through the lens of black-box optimization.
Stock market returns are typically analyzed using standard regression, yet they reside on irregular domains, a natural setting for graph signal processing. To this end, we consider a market graph as an intuitive way to represent the relationships between financial assets. Traditional methods for estimating asset-return covariance operate under the assumption of statistical time-invariance, and are thus unable to appropriately infer the underlying true structure of the market graph. This work introduces a class of graph spectral estimators which cater for the nonstationarity inherent to asset price movements, and serve as a basis to represent the time-varying interactions between assets through a dynamic spectral market graph. Such an account of the time-varying nature of the asset-return covariance allows us to introduce the notion of dynamic spectral portfolio cuts, whereby the graph is partitioned into time-evolving clusters, allowing for online and robust asset allocation. The advantages of the proposed framework over traditional methods are demonstrated through numerical case studies using real-world price data.
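A minimal sketch of a spectral "portfolio cut": build a market graph from an asset-return correlation estimate, form the graph Laplacian, and split the assets into two clusters by the sign of the Fiedler vector. The returns below are synthetic with two built-in sectors, and the static correlation estimate is a stand-in for the nonstationary spectral estimators advocated in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n_days, n_assets = 500, 8
sector = rng.normal(size=(n_days, 2))
returns = np.hstack([sector[:, :1] + 0.3 * rng.normal(size=(n_days, 4)),
                     sector[:, 1:] + 0.3 * rng.normal(size=(n_days, 4))])

# Market graph: adjacency from correlation strength, no self-loops.
corr = np.corrcoef(returns, rowvar=False)
weights = np.abs(corr) - np.eye(n_assets)
laplacian = np.diag(weights.sum(axis=1)) - weights

# Fiedler vector = eigenvector of the second-smallest Laplacian eigenvalue.
eigvals, eigvecs = np.linalg.eigh(laplacian)
fiedler = eigvecs[:, 1]
clusters = (fiedler > 0).astype(int)
print("asset cluster assignment:", clusters)     # expected: first 4 vs last 4 assets
```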