Semantic segmentation of structural defects in civil infrastructure remains challenging due to variable defect appearances, harsh imaging conditions, and significant class imbalance. Current deep learning methods, despite their effectiveness, typically require millions of parameters, rendering them impractical for real-time inspection systems. We introduce KARMA (Kolmogorov-Arnold Representation Mapping Architecture), a highly efficient semantic segmentation framework that models complex defect patterns through compositions of one-dimensional functions rather than conventional convolutions. KARMA features three technical innovations: (1) a parameter-efficient Tiny Kolmogorov-Arnold Network (TiKAN) module leveraging low-rank factorization for KAN-based feature transformation; (2) an optimized feature pyramid structure with separable convolutions for multi-scale defect analysis; and (3) a static-dynamic prototype mechanism that enhances feature representation for imbalanced classes. Extensive experiments on benchmark infrastructure inspection datasets demonstrate that KARMA achieves competitive or superior mean IoU performance compared to state-of-the-art approaches, while using significantly fewer parameters (0.959M vs. 31.04M, a 97% reduction). Operating at 0.264 GFLOPS, KARMA maintains inference speeds suitable for real-time deployment, enabling practical automated infrastructure inspection systems without compromising accuracy. The source code can be accessed at the following URL: this https URL.
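To make the KAN idea concrete, here is a minimal numerical sketch of a low-rank KAN-style feature transformation in the spirit of the TiKAN module described above: each scalar input is expanded over a small bank of univariate basis functions, and the resulting coefficients are mixed through a factorized (low-rank) weight instead of a full matrix. The Gaussian basis, the shapes, and all names here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def tikan_layer(x, U, V, centers, width=0.5):
    """Hypothetical low-rank KAN-style layer (illustrative sketch).

    x:       (batch, d_in) input features
    centers: (K,) centers of K univariate Gaussian basis functions
    U:       (d_in * K, r) and V: (r, d_out) -- low-rank factors replacing
             a full (d_in * K, d_out) coefficient matrix
    """
    batch, d_in = x.shape
    # evaluate the K univariate basis functions on every scalar input
    phi = np.exp(-((x[..., None] - centers) ** 2) / (2 * width ** 2))
    phi = phi.reshape(batch, d_in * len(centers))
    # low-rank mixing: far fewer parameters than a dense coefficient matrix
    return phi @ U @ V

# parameter savings of the factorization, for illustration
d_in, d_out, K, r = 64, 64, 8, 4
full_params = d_in * K * d_out            # dense coefficient tensor
lowrank_params = d_in * K * r + r * d_out # factorized replacement
```

With these toy sizes the factorization stores 2,304 coefficients instead of 32,768, which is the kind of low-rank saving that makes a sub-million-parameter segmentation model plausible.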
Observations from the Chandra X-ray Observatory provide extensive evidence that active galactic nuclei (AGN) heating effectively offsets radiative cooling in the centers of galaxy clusters, resolving the long-standing cooling flow problem. The research quantifies how AGN feedback, through mechanisms like buoyant X-ray cavities, shocks, and sound waves, injects sufficient mechanical energy to maintain the thermal balance of the intracluster medium.
The Fermi Large Area Telescope (LAT) has revealed a mysterious extended excess of GeV gamma-ray emission around the Galactic Center, which can potentially be explained by unresolved emission from a population of pulsars, particularly millisecond pulsars (MSPs), in the Galactic bulge. We used the distributed volunteer computing system Einstein@Home to search the Fermi-LAT data for gamma-ray pulsations from sources in the inner Galaxy, to try to identify the brightest members of this putative population. We discovered four new pulsars, including one new MSP and one young pulsar whose angular separation of 0.93° from the Galactic Center is the smallest of any known gamma-ray pulsar. We demonstrate a phase-resolved difference imaging technique that allows the flux from this pulsar to be disentangled from the diffuse Galactic Center emission. No radio pulsations were detected from the four new pulsars in archival radio observations or during the MPIfR-MeerKAT Galactic Plane Survey. While the distances to these pulsars remain uncertain, we find that it is more likely that they are all foreground sources from the Galactic disk, rather than pulsars originating from the predicted bulge population. Nevertheless, our results are not incompatible with an MSP explanation for the GC excess, as only one or two members of this population would have been detectable in our searches.
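The phase-resolved difference imaging idea can be sketched in a few lines: steady (unpulsed) emission accumulates in proportion to the width of the phase window, so scaling the off-pulse map by the ratio of window widths and subtracting it cancels everything but the pulsed flux. This toy version with one-dimensional count maps illustrates the principle only; it is not the authors' pipeline.

```python
def pulsed_map(counts_on, counts_off, frac_on, frac_off):
    # Steady emission contributes counts in proportion to the phase-window
    # width, so the scaled subtraction cancels it and leaves pulsed flux.
    scale = frac_on / frac_off
    return [on - off * scale for on, off in zip(counts_on, counts_off)]
```

For a pixel with a steady background of 10 counts per unit phase plus 5 pulsed counts in a 0.2-wide on-pulse window, the subtraction recovers exactly the 5 pulsed counts, while a background-only pixel maps to zero.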
Accurate pulsar astrometric estimates play an essential role in almost all high-precision pulsar timing experiments. Traditional pulsar timing techniques refine these estimates by including them as free parameters when fitting a model to observed pulse time-of-arrival measurements. However, reliable sub-milliarcsecond astrometric estimates require years of observations and, even then, power from red noise can be inadvertently absorbed into astrometric parameter fits, biasing the resulting estimates and reducing our sensitivity to red noise processes, including gravitational waves (GWs). In this work, we seek to mitigate these shortcomings by using pulsar astrometric estimates derived from Very Long Baseline Interferometry (VLBI) as priors for the timing fit. First, we calibrated a frame tie to account for the offsets between the reference frames used in VLBI and timing. Then, we used the VLBI-informed priors and timing-based likelihoods of several astrometric solutions consistent with both techniques to obtain a maximum-posterior astrometric solution. We found offsets between our results and the timing-based astrometric solutions, which, if real, would lead to absorption of spectral power at frequencies of interest for single-source GW searches. However, we do not find significant power absorption due to astrometric fitting at the low-frequency domain of the GW background.
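In one dimension, combining a VLBI-informed prior with a timing-based likelihood reduces to multiplying two Gaussians, whose maximum-posterior solution is the inverse-variance weighted mean. A minimal sketch of that combination follows; real astrometric fits are multidimensional, correlated, and include the frame tie, so this is purely illustrative.

```python
def gaussian_posterior(mu_prior, var_prior, mu_like, var_like):
    # Product of two Gaussians: the posterior mean is the inverse-variance
    # weighted average of the prior and likelihood means, and the posterior
    # variance is always smaller than either input variance.
    w_p, w_l = 1.0 / var_prior, 1.0 / var_like
    mu_post = (w_p * mu_prior + w_l * mu_like) / (w_p + w_l)
    var_post = 1.0 / (w_p + w_l)
    return mu_post, var_post
```

A tighter (smaller-variance) VLBI prior pulls the maximum-posterior solution away from the timing-only value and toward the VLBI position, which is exactly the kind of offset the analysis above tests for.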
The rapid progress in Large Language Models (LLMs) could transform many fields, but their fast development creates significant challenges for oversight, ethical creation, and building user trust. This comprehensive review looks at key trust issues in LLMs, such as unintended harms, lack of transparency, vulnerability to attacks, alignment with human values, and environmental impact. Many obstacles can undermine user trust, including societal biases, opaque decision-making, potential for misuse, and the challenges of rapidly evolving technology. Addressing these trust gaps is critical as LLMs become more common in sensitive areas like finance, healthcare, education, and policy. To tackle these issues, we suggest combining ethical oversight, industry accountability, regulation, and public involvement. AI development norms should be reshaped, incentives aligned, and ethics integrated throughout the machine learning process, which requires close collaboration across technology, ethics, law, policy, and other fields. Our review contributes a robust framework to assess trust in LLMs and analyzes the complex trust dynamics in depth. We provide contextualized guidelines and standards for responsibly developing and deploying these powerful AI systems. This review identifies key limitations and challenges in creating trustworthy AI. By addressing these issues, we aim to build a transparent, accountable AI ecosystem that benefits society while minimizing risks. Our findings provide valuable guidance for researchers, policymakers, and industry leaders striving to establish trust in LLMs and ensure they are used responsibly across various applications for the good of society.
Neurodynamic behavior of artificial neuron circuits made of Mott memristors provides versatile opportunities to utilize them for artificial sensing. Their small size and energy efficiency of generating spiking electrical signals enable usage in fully implantable cochlear implants. Here, we propose an auditory sensing unit realized by a piezo-MEMS (micro-electromechanical systems) cantilever connected to a VO2 nanogap Mott memristor-based oscillator circuit. This auditory sensing unit is capable of frequency-selective detection of vibrations and subsequent emission of a neural spiking waveform. The auditory sensing unit is tested under biologically realistic vibration amplitudes, and spike rate-encoding of the incoming stimulus is demonstrated, similarly to natural hearing processes. The tunability of the output spiking frequency and the shape of the spiking waveform are also demonstrated to provide suitable voltage spikes for the nervous system.
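An idealized integrate-and-fire stand-in for the oscillator illustrates the rate coding described above: a larger vibration amplitude charges the node faster, so the spike rate grows monotonically with the stimulus. This is a conceptual sketch only, not a model of the VO2 device physics.

```python
def spike_rate(amplitude, gain=1000.0, v_th=1.0, dt=1e-4, T=0.1):
    # Integrate the stimulus until the threshold is crossed, emit a spike,
    # reset, and repeat; report spikes per second over a window of length T.
    v, spikes = 0.0, 0
    for _ in range(round(T / dt)):
        v += dt * gain * amplitude
        if v >= v_th:
            spikes += 1
            v = 0.0
    return spikes / T
```

Doubling the stimulus amplitude roughly doubles the firing rate in this sketch, which is the monotonic rate code that natural auditory nerves also use.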
The recent surge in machine learning (ML) methods for geophysical modeling has raised the question of how these methods might be applied to data assimilation (DA). We focus on diffusion modeling (a form of generative artificial intelligence) for systems that can perform the entire DA process, rather than on ML-based tools used within a conventional DA system. We identify at least three distinct types of diffusion-based DA systems and show that they differ in the posterior distribution they target for sampling. These posterior distributions correspond to different priors and/or likelihoods, which in turn result in unique training datasets, computational requirements, and state estimate qualities. Our analysis further shows that a diffusion DA system designed to target the same posterior distribution as current ensemble DA algorithms requires re-training at each DA cycle, which is computationally costly. We discuss the implications of these findings for the use of diffusion modeling in DA.
Both IrV and RhV crystallize in the alpha-IrV structure, with a transition to the higher-symmetry L1_0 structure at high temperature, or with the addition of excess Ir or Rh. Here we present evidence that this transition is driven by the lowering of the electronic density of states at the Fermi level of the alpha-IrV structure. The transition has long been thought to be second order, with a simple doubling of the L1_0 unit cell due to an unstable phonon at the R point (0 1/2 1/2). We use first-principles calculations to show that all phonons at the R point are, in fact, stable, but do find a region of reciprocal space where the L1_0 structure has unstable (imaginary frequency) phonons. We use the frozen phonon method to examine two of these modes, relaxing the structures associated with the unstable phonon modes to obtain new structures which are lower in energy than L1_0 but still above alpha-IrV. Examining the phonon spectra of these relaxed structures in turn reveals further instabilities, and relaxing along those modes yields still more structures, all of which have energies above the alpha-IrV phase. In addition, we find that all of the relaxed structures, stable and unstable, have a density comparable to the L1_0 phase (and less than the alpha-IrV phase), so that any transition from one of these structures to the ground state will have a volume change as well as an energy discontinuity. We conclude that the transition from L1_0 to alpha-IrV is probably weakly first order.
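The frozen-phonon procedure can be caricatured with a toy double-well energy surface: freeze in a displacement along a candidate eigenvector, scan the amplitude, and look for energies below the undistorted reference, which signal an unstable (imaginary-frequency) mode and point toward a lower-energy relaxed structure. The quartic energy function here is purely illustrative.

```python
def frozen_phonon_scan(energy, amplitudes):
    # Evaluate the total energy for structures displaced by each frozen
    # amplitude along a phonon eigenvector; an energy below the undistorted
    # value signals an unstable mode.
    return [energy(a) for a in amplitudes]

def toy_energy(u):
    # double well: E(u) = -u^2 + u^4, with negative curvature at u = 0
    return -u**2 + u**4

amps = [i / 100.0 for i in range(-120, 121)]
curve = frozen_phonon_scan(toy_energy, amps)
```

The scan finds energies below `toy_energy(0.0)`, with minima near u = ±1/√2 at E = -0.25, mimicking how a frozen unstable mode reveals the lower-energy structure to relax into.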
We report an analysis of the interstellar gamma-ray emission in the third Galactic quadrant measured by the Fermi Large Area Telescope. The window encompassing the Galactic plane from longitude 210° to 250° has kinematically well-defined segments of the Local and the Perseus arms, suitable to study the cosmic-ray densities across the outer Galaxy. We measure no large gradient with Galactocentric distance of the gamma-ray emissivities per interstellar H atom over the regions sampled in this study. The gradient depends, however, on the optical depth correction applied to derive the H I column densities. No significant variations are found in the interstellar spectra in the outer Galaxy, indicating similar shapes of the cosmic-ray spectrum up to the Perseus arm for particles with GeV to tens of GeV energies. The emissivity as a function of Galactocentric radius does not show a large enhancement in the spiral arms with respect to the interarm region. The measured emissivity gradient is flatter than expectations based on a cosmic-ray propagation model using the radial distribution of supernova remnants and uniform diffusion properties. In this context, observations require a larger halo size and/or a flatter CR source distribution than usually assumed. The molecular mass calibrating ratio, X_CO = N(H2)/W_CO, is found to be (2.08 ± 0.11) × 10^20 cm^-2 (K km s^-1)^-1 in the Local-arm clouds and is not significantly sensitive to the choice of H I spin temperature. No significant variations are found for clouds in the interarm region.
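As a quick illustration of how the calibration above is applied, the molecular hydrogen column density follows directly from the integrated CO line intensity; the W_CO value below is hypothetical, chosen only to show the arithmetic.

```python
X_CO = 2.08e20      # cm^-2 (K km/s)^-1, the Local-arm value quoted above
W_CO = 5.0          # K km/s, hypothetical integrated CO intensity of a cloud
N_H2 = X_CO * W_CO  # molecular hydrogen column density in cm^-2
```

For this example cloud the inferred column density is about 1.04 × 10^21 cm^-2.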
Few-shot learning (FSL) enables object detection models to recognize novel classes given only a few annotated examples, thereby reducing expensive manual data labeling. This survey examines recent FSL advances for video and 3D object detection. For video, FSL is especially valuable since annotating objects across frames is more laborious than for static images. By propagating information across frames, techniques like tube proposals and temporal matching networks can detect new classes from a couple of examples, efficiently leveraging spatiotemporal structure. FSL for 3D detection from LiDAR or depth data faces challenges like sparsity and lack of texture. Solutions integrate FSL with specialized point cloud networks and losses tailored for class imbalance. Few-shot 3D detection enables practical autonomous driving deployment by minimizing costly 3D annotation needs. Core issues in both domains include balancing generalization and overfitting, integrating prototype matching, and handling data modality properties. By comprehensively surveying recent advances, this paper shows how FSL can minimize supervision needs by efficiently leveraging information across feature, temporal, and data modalities, enabling deployment in video, 3D, and other real-world applications.
We have simulated the formation of an X-ray cluster in a cold dark matter universe using 12 different codes. The codes span the range of numerical techniques and implementations currently in use, including SPH and grid methods with fixed, deformable or multilevel meshes. The goal of this comparison is to assess the reliability of cosmological gas dynamical simulations of clusters in the simplest astrophysically relevant case, that in which the gas is assumed to be non-radiative. We compare images of the cluster at different epochs, global properties such as mass, temperature and X-ray luminosity, and radial profiles of various dynamical and thermodynamical quantities. On the whole, the agreement among the various simulations is gratifying although a number of discrepancies exist. Agreement is best for properties of the dark matter and worst for the total X-ray luminosity. Even in this case, simulations that adequately resolve the core radius of the gas distribution predict total X-ray luminosities that agree to within a factor of two. Other quantities are reproduced to much higher accuracy. For example, the temperature and gas mass fraction within the virial radius agree to about 10%, and the ratio of specific kinetic to thermal energies of the gas agree to about 5%. Various factors contribute to the spread in calculated cluster properties, including differences in the internal timing of the simulations. Based on the overall consistency of results, we discuss a number of general properties of the cluster we have modelled.
Researchers analyzed Joint Probabilistic Data Association (JPDA), Multiple Hypothesis Tracking (MHT), and Belief Propagation (BP) methods in multitarget tracking to understand track coalescence and repulsion. They demonstrated that BP-based methods effectively mitigate both track coalescence and repulsion, achieving superior performance and linear scalability in dense target environments compared to classical approaches.
We present a two-level decomposition strategy for solving the Vehicle Routing Problem (VRP) using the Quantum Approximate Optimization Algorithm. A Problem-Level Decomposition partitions a 13-node (156-qubit) VRP into smaller Traveling Salesman Problem (TSP) instances. Each TSP is then further cut via Circuit-Level Decomposition, enabling execution on near-term quantum devices. Our approach achieves up to a 95% reduction in circuit depth, a 96% reduction in the number of qubits, and a 99.5% reduction in the number of 2-qubit gates. We demonstrate this hybrid algorithm on the standard edge encoding of the VRP as well as a novel amplitude encoding. These results demonstrate the feasibility of solving VRPs previously too complex for quantum simulators and provide early evidence of potential quantum utility.
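For context, the kind of objective that QAOA layers minimize for a TSP subproblem can be written down classically as a QUBO. Below is a toy position-based encoding (one standard formulation, not necessarily the edge or amplitude encoding used in the paper); the penalty weight A and the 3-city instance are illustrative.

```python
import itertools

def tsp_qubo_energy(x, dist, A=50.0):
    # x[i][t] = 1 if city i is visited at step t (position encoding).
    n = len(dist)
    # penalties: exactly one city per step, each city visited exactly once
    one_per_step = sum((1 - sum(x[i][t] for i in range(n))) ** 2 for t in range(n))
    once_each = sum((1 - sum(x[i][t] for t in range(n))) ** 2 for i in range(n))
    # tour length accumulated over consecutive steps (cyclic)
    tour = sum(dist[i][j] * x[i][t] * x[j][(t + 1) % n]
               for i in range(n) for j in range(n) for t in range(n))
    return A * (one_per_step + once_each) + tour

def brute_force_minimum(dist, A=50.0):
    # exhaustive check over all 2^(n^2) bitstrings -- feasible only for toys
    n = len(dist)
    best = float("inf")
    for bits in itertools.product((0, 1), repeat=n * n):
        x = [list(bits[i * n:(i + 1) * n]) for i in range(n)]
        best = min(best, tsp_qubo_energy(x, dist, A))
    return best
```

With A large enough, infeasible bitstrings cost at least A more than any tour, so the QUBO minimum equals the optimal tour length; an n-city TSP needs n² such binary variables, which is why decomposition matters at 13 nodes.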
The past years have witnessed tremendous progress in understanding the properties of neutron stars and of the dense matter in their cores, made possible by electromagnetic observations of neutron stars and the detection of gravitational waves from their mergers. These observations provided novel constraints on neutron-star structure, which is intimately related to the properties of dense neutron-rich matter described by the nuclear equation of state. Nevertheless, constraining the equation of state over the wide range of densities probed by astrophysical observations is still challenging, as the physics involved is very broad and the system spans many orders of magnitude in density. Here, we review theoretical approaches to calculate and model the neutron-star equation of state in various regimes of densities, and discuss the related consequent properties of neutron stars. We describe how the equation of state can be calculated from nuclear interactions that are constrained and benchmarked by nuclear experiments. We review neutron-star observations, with particular emphasis on information provided by gravitational-wave signals and electromagnetic observations. Finally, we discuss future challenges and opportunities in the field.
A 2.1-year periodic oscillation of the gamma-ray flux from the blazar PG 1553+113 had previously been tentatively identified in almost 7 years of data from the Fermi Large Area Telescope. After 15 years of Fermi sky-survey observations, doubling the total time range, we report a gamma-ray modulation spanning more than 7 cycles, with an estimated significance of 4 sigma against stochastic red noise. Independent determinations of the oscillation period and phase in the earlier and the newer data are in close agreement (chance probability < 0.01). Pulse timing over the full light curve is also consistent with a coherent periodicity. New multiwavelength data from the Swift X-Ray Telescope, Burst Alert Telescope, and UVOT, and from the KAIT, Catalina Sky Survey, All-Sky Automated Survey for Supernovae, and Owens Valley Radio Observatory ground-based facilities, as well as archival Rossi X-Ray Timing Explorer All-Sky Monitor data, published Tuorla optical data, and historical Harvard optical plates, are included in our work. Optical and radio light curves show clear correlations with the gamma-ray modulation, possibly with a nonconstant time lag for the radio flux. We interpret the gamma-ray periodicity as possibly arising from a pulsational accretion flow in a sub-parsec binary supermassive black hole system of elevated mass ratio, with orbital modulation of the supplied material and energy in the jet. Other astrophysical scenarios considered include instabilities, disk and jet precession, rotation or nutation, and perturbations by massive stars or intermediate-mass black holes in polar orbit.
We introduce KANICE (Kolmogorov-Arnold Networks with Interactive Convolutional Elements), a novel neural architecture that combines Convolutional Neural Networks (CNNs) with Kolmogorov-Arnold Network (KAN) principles. KANICE integrates Interactive Convolutional Blocks (ICBs) and KAN linear layers into a CNN framework. This leverages KANs' universal approximation capabilities and ICBs' adaptive feature learning. KANICE captures complex, non-linear data relationships while enabling dynamic, context-dependent feature extraction based on the Kolmogorov-Arnold representation theorem. We evaluated KANICE on four datasets: MNIST, Fashion-MNIST, EMNIST, and SVHN, comparing it against standard CNNs, CNN-KAN hybrids, and ICB variants. KANICE consistently outperformed baseline models, achieving 99.35% accuracy on MNIST and 90.05% on the SVHN dataset. Furthermore, we introduce KANICE-mini, a compact variant designed for efficiency. A comprehensive ablation study demonstrates that KANICE-mini achieves comparable performance to KANICE with significantly fewer parameters. KANICE-mini reached 90.00% accuracy on SVHN with 2,337,828 parameters, compared to KANICE's 25,432,000. This study highlights the potential of KAN-based architectures in balancing performance and computational efficiency in image classification tasks. Our work contributes to research in adaptive neural networks, integrates mathematical theorems into deep learning architectures, and explores the trade-offs between model complexity and performance, advancing computer vision and pattern recognition. The source code for this paper is publicly accessible through our GitHub repository (this https URL).
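The efficiency claim for the compact variant can be checked directly from the parameter counts quoted above:

```python
kanice_params = 25_432_000  # full KANICE parameter count quoted above
mini_params = 2_337_828     # KANICE-mini parameter count quoted above
reduction = 1 - mini_params / kanice_params  # fraction of parameters removed
```

The compact variant removes roughly 91% of the parameters while, per the ablation study, giving up only a fraction of a percentage point of SVHN accuracy (90.00% vs. 90.05%).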
Dark matter in the Milky Way may annihilate directly into gamma rays, producing a monoenergetic spectral line. Therefore, detecting such a signature would be strong evidence for dark matter annihilation or decay. We search for spectral lines in the Fermi Large Area Telescope observations of the Milky Way halo in the energy range 200 MeV to 500 GeV using analysis methods from our most recent line searches. The main improvements relative to previous works are our use of 5.8 years of data reprocessed with the Pass 8 event-level analysis and the additional data resulting from the modified observing strategy designed to increase exposure of the Galactic center region. We searched in five sky regions selected to optimize sensitivity to different theoretically-motivated dark matter scenarios and find no significant detections. In addition to presenting the results from our search for lines, we also investigate the previously reported tentative detection of a line at 133 GeV using the new Pass 8 data.
We present high-sensitivity follow-up observations of the giant fossil radio lobe in the Ophiuchus galaxy cluster with the upgraded Giant Metrewave Radio Telescope (uGMRT) in the 125-250 MHz and 300-500 MHz frequency bands. The new data have sufficient angular resolution to exclude compact sources and enable us to trace the faint extended emission from the relic lobe to a remarkable distance of 820 kpc from the cluster center. The new images reveal intricate spatial structure within the fossil lobe, including narrow (5-10 kpc), long (70-100 kpc) radio filaments embedded within the diffuse emission at the bottom of the lobe. The filaments exhibit a very steep spectrum (S_ν ∝ ν^(-α) with α ≈ 3), significantly steeper than the ambient synchrotron emission from the lobe (α ≈ 1.5-2); they mostly disappear in recently published MeerKAT images at 1.28 GHz. Their origin is unclear; similar features observed in some other radio lobes typically have a spectrum flatter than that of their ambient medium. These radio filaments may trace regions where the magnetic field has been stretched and amplified by gas circulation within the rising bubble. The spectrum of the brightest region of the radio lobe exhibits a spectral break corresponding to a radiative cooling age of approximately 174 Myr, dating this most powerful known AGN outburst.
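For reference, a radiative cooling age of the kind quoted above is conventionally obtained from the break frequency through the standard synchrotron aging relation (the textbook expression, not necessarily the exact formula the authors used):

```latex
t_{\rm age} \simeq 3.2 \times 10^{4}\,
\frac{B^{1/2}}{B^{2} + B_{\rm CMB}^{2}}\,
\bigl[\nu_{\rm br}\,(1+z)\bigr]^{-1/2}\ \mathrm{Myr},
```

where the magnetic field B and the equivalent inverse-Compton field B_CMB = 3.25 (1+z)^2 are in microgauss and the break frequency ν_br is in GHz; a lower break frequency thus implies an older plasma.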
Robots, particularly in service and companionship roles, must develop positive relationships with people they interact with regularly to be successful. These positive human-robot relationships can be characterized as establishing "rapport," which indicates mutual understanding and interpersonal connection that form the groundwork for successful long-term human-robot interaction. However, the human-robot interaction research literature lacks scale instruments to assess human-robot rapport in a variety of situations. In this work, we developed the 18-item Connection-Coordination Rapport (CCR) Scale to measure human-robot rapport. We first ran Study 1 (N = 288) where online participants rated videos of human-robot interactions using a set of candidate items. Our Study 1 results showed the discovery of two factors in our scale, which we named "Connection" and "Coordination." We then evaluated this scale by running Study 2 (N = 201) where online participants rated a new set of human-robot interaction videos with our scale and an existing rapport scale from virtual agents research for comparison. We also validated our scale by replicating a prior in-person human-robot interaction study, Study 3 (N = 44), and found that rapport is rated significantly greater when participants interacted with a responsive robot (responsive condition) as opposed to an unresponsive robot (unresponsive condition). Results from these studies demonstrate high reliability and validity for the CCR scale, which can be used to measure rapport in both first-person and third-person perspectives. We encourage the adoption of this scale in future studies to measure rapport in a variety of human-robot interactions.
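Scale reliability of the kind reported above is commonly summarized with Cronbach's alpha; a minimal implementation follows. The abstract does not specify which reliability statistic was used, so this is an illustrative sketch of the standard approach, and the toy ratings are invented.

```python
def cronbach_alpha(ratings):
    # ratings: list of per-respondent lists, one rating per scale item
    n, k = len(ratings), len(ratings[0])

    def var(xs):
        # sample variance with Bessel's correction
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = sum(var([r[j] for r in ratings]) for j in range(k))
    total_var = var([sum(r) for r in ratings])
    return k / (k - 1) * (1 - item_vars / total_var)
```

Alpha approaches 1 when respondents rate all items consistently (the items covary strongly) and falls as items measure unrelated things, which is why it accompanies factor analyses like the two-factor Connection/Coordination structure reported above.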