The University of Alabama in Huntsville
This technical report presents Prithvi-EO-2.0, a new geospatial foundation model that offers significant improvements over its predecessor, Prithvi-EO-1.0. Trained on 4.2M global time series samples from NASA's Harmonized Landsat and Sentinel-2 data archive at 30 m resolution, the new 300M and 600M parameter models incorporate temporal and location embeddings for enhanced performance across various geospatial tasks. Through extensive benchmarking with GEO-Bench, the 600M version outperforms the previous Prithvi-EO model by 8% across a range of tasks. It also outperforms six other geospatial foundation models when benchmarked on remote sensing tasks from different domains and resolutions (i.e., from 0.1 m to 15 m). The results demonstrate the versatility of the model in both classical earth observation and high-resolution applications. Early involvement of end-users and subject matter experts (SMEs) was among the key factors that contributed to the project's success. In particular, SME involvement allowed for constant feedback on model and dataset design, as well as successful customization for diverse SME-led applications in disaster response, land use and crop mapping, and ecosystem dynamics monitoring. Prithvi-EO-2.0 is available on Hugging Face and IBM terratorch, with additional resources on GitHub. The project exemplifies the Trusted Open Science approach embraced by all involved organizations.
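The temporal and location embeddings mentioned above are, conceptually, learned projections of acquisition metadata (observation date, latitude/longitude) added to the patch tokens of a ViT-style encoder. The sketch below illustrates only that idea; the module and variable names are hypothetical and do not reproduce the released Prithvi-EO-2.0 implementation.

```python
# Conceptual sketch: injecting temporal and location embeddings into patch tokens
# of a ViT-style encoder for an HLS time series (hypothetical, not Prithvi-EO-2.0 code).
import torch
import torch.nn as nn

class MetadataEmbeddings(nn.Module):
    """Maps (day-of-year, lat/lon) metadata to token-sized embeddings."""
    def __init__(self, embed_dim: int = 768):
        super().__init__()
        self.temporal = nn.Linear(2, embed_dim)   # sin/cos of day-of-year
        self.location = nn.Linear(2, embed_dim)   # normalized latitude/longitude

    def forward(self, doy: torch.Tensor, latlon: torch.Tensor):
        # doy: (B, T) day-of-year per frame, latlon: (B, 2) per sample
        angle = 2 * torch.pi * doy / 365.25
        t_emb = self.temporal(torch.stack([angle.sin(), angle.cos()], dim=-1))  # (B, T, D)
        l_emb = self.location(latlon / torch.tensor([90.0, 180.0]))             # (B, D)
        return t_emb, l_emb

# Broadcast the metadata embeddings onto the patch tokens before the transformer blocks.
B, T, N, D = 2, 4, 196, 768                      # batch, frames, patches per frame, embed dim
tokens = torch.randn(B, T, N, D)
t_emb, l_emb = MetadataEmbeddings(D)(torch.randint(1, 366, (B, T)).float(), torch.rand(B, 2))
tokens = tokens + t_emb[:, :, None, :] + l_emb[:, None, None, :]
```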
Significant progress in the development of highly adaptable and reusable Artificial Intelligence (AI) models is expected to have a substantial impact on Earth science and remote sensing. Foundation models are pre-trained on large unlabeled datasets through self-supervision and then fine-tuned for various downstream tasks with small labeled datasets. This paper introduces a first-of-a-kind framework for the efficient pre-training and fine-tuning of foundation models on extensive geospatial data. We have utilized this framework to create Prithvi, a transformer-based geospatial foundation model pre-trained on more than 1TB of multispectral satellite imagery from the Harmonized Landsat-Sentinel 2 (HLS) dataset. Our study demonstrates the efficacy of our framework in successfully fine-tuning Prithvi to a range of Earth observation tasks that have not been tackled by previous work on foundation models: multi-temporal cloud gap imputation, flood mapping, wildfire scar segmentation, and multi-temporal crop segmentation. Our experiments show that the pre-trained model accelerates the fine-tuning process compared to leveraging randomly initialized weights. In addition, pre-trained Prithvi compares well against the state of the art, e.g., outperforming a conditional GAN model in multi-temporal cloud imputation by up to 5pp (or 5.7%) in the structural similarity index. Finally, because labeled data are scarce in the field of Earth observation, we gradually reduce the quantity of labeled data available for fine-tuning in order to evaluate data efficiency, and demonstrate that the amount of labeled data can be reduced significantly without affecting the model's accuracy. The pre-trained 100 million parameter model and corresponding fine-tuning workflows have been released publicly as open source contributions to the global Earth sciences community through Hugging Face.
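Because the pre-training is self-supervised in the masked-autoencoder style, the core data operation is random masking of space-time patch tokens so the encoder only sees a small visible subset. The following is a minimal, generic sketch of that masking step, not the released Prithvi training code.

```python
# Generic MAE-style random masking of spatiotemporal patch tokens
# (illustrative sketch; not the released Prithvi pre-training code).
import torch

def random_masking(tokens: torch.Tensor, mask_ratio: float = 0.75):
    """tokens: (B, L, D) flattened space-time patch tokens."""
    B, L, D = tokens.shape
    len_keep = int(L * (1 - mask_ratio))
    noise = torch.rand(B, L)                          # one random score per token
    ids_shuffle = torch.argsort(noise, dim=1)         # random permutation of token indices
    ids_keep = ids_shuffle[:, :len_keep]              # indices of visible tokens
    visible = torch.gather(tokens, 1, ids_keep.unsqueeze(-1).expand(-1, -1, D))
    mask = torch.ones(B, L)                           # 1 = masked, 0 = visible
    mask.scatter_(1, ids_keep, 0.0)
    return visible, mask                              # encoder sees only `visible`

visible, mask = random_masking(torch.randn(2, 588, 768))   # e.g., 3 frames x 196 patches
```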
Prithvi WxC, a foundation model for weather and climate applications developed through a collaboration between IBM Research and NASA, demonstrates versatility across multiple tasks including forecasting, downscaling, and parameterization. The model exhibits strong zero-shot capabilities and achieves superior hurricane track prediction for Hurricane Ida (2021) compared to specialized models, with a track error of 63.972 km.
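Track error is conventionally the great-circle distance between the forecast and observed storm-center positions. A quick haversine illustration follows; the coordinates are invented for the example and are not Hurricane Ida data.

```python
# Great-circle (haversine) distance between forecast and observed storm centers,
# the usual definition of hurricane track error (example coordinates are made up).
from math import asin, cos, radians, sin, sqrt

def haversine_km(lat1, lon1, lat2, lon2, r_earth_km=6371.0):
    phi1, phi2 = radians(lat1), radians(lat2)
    dphi, dlam = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dphi / 2) ** 2 + cos(phi1) * cos(phi2) * sin(dlam / 2) ** 2
    return 2 * r_earth_km * asin(sqrt(a))

print(f"{haversine_km(29.2, -90.1, 29.7, -90.4):.1f} km")   # ~63 km for these example points
```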
Quantum federated learning (QFL) is a combination of distributed quantum computing and federated machine learning, integrating the strengths of both to enable privacy-preserving decentralized learning with quantum-enhanced capabilities. It appears as a promising approach for addressing challenges in efficient and secure model training across distributed quantum systems. This paper presents a comprehensive survey on QFL, exploring its key concepts, fundamentals, applications, and emerging challenges in this rapidly developing field. Specifically, we begin with an introduction to the recent advancements of QFL, followed by discussion on its market opportunity and background knowledge. We then discuss the motivation behind the integration of quantum computing and federated learning, highlighting its working principle. Moreover, we review the fundamentals of QFL and its taxonomy. Particularly, we explore federation architecture, networking topology, communication schemes, optimization techniques, and security mechanisms within QFL frameworks. Furthermore, we investigate applications of QFL across several domains which include vehicular networks, healthcare networks, satellite networks, metaverse, and network security. Additionally, we analyze frameworks and platforms related to QFL, delving into its prototype implementations, and provide a detailed case study. Key insights and lessons learned from this review of QFL are also highlighted. We complete the survey by identifying current challenges and outlining potential avenues for future research in this rapidly advancing field.
The prediction of surrounding vehicle trajectories is crucial for collision-free path planning. In this study, we focus on a scenario where a connected and autonomous vehicle (CAV) serves as the central agent, utilizing both sensors and communication technologies to perceive the surrounding traffic, which consists of autonomous vehicles (AVs), connected vehicles (CVs), and human-driven vehicles (HDVs). Our trajectory prediction task targets all detected surrounding vehicles. To effectively integrate the multi-source data from both sensor and communication technologies, we propose a deep learning framework called MSMA that uses a cross-attention module for multi-source data fusion. Vector map data is used to provide contextual information. The trajectory dataset is collected in the CARLA simulator with synthesized data errors introduced. Numerical experiments demonstrate that in a mixed traffic flow scenario, the integration of data from different sources enhances our understanding of the environment. This notably improves trajectory prediction accuracy, particularly in situations with a high CV market penetration rate. The code is available at: this https URL
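The cross-attention fusion can be pictured as sensor-derived track tokens querying tokens built from the communicated (V2X) data. The block below is an illustrative sketch with hypothetical shapes and names; it is not the released MSMA implementation.

```python
# Illustrative cross-attention fusion of sensor tokens (queries) with communication
# tokens (keys/values); hypothetical shapes and names, not the released MSMA code.
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    def __init__(self, d_model: int = 128, n_heads: int = 4):
        super().__init__()
        self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(d_model)

    def forward(self, sensor_tokens, comm_tokens):
        # sensor_tokens: (B, N_s, D), comm_tokens: (B, N_c, D)
        fused, _ = self.attn(sensor_tokens, comm_tokens, comm_tokens)
        return self.norm(sensor_tokens + fused)       # residual connection + layer norm

fusion = CrossAttentionFusion()
out = fusion(torch.randn(8, 10, 128), torch.randn(8, 6, 128))   # -> (8, 10, 128)
```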
One of the primary challenges in medical diagnostics is the accurate and efficient use of magnetic resonance imaging (MRI) for the detection of brain tumors. However, current machine learning (ML) approaches have two major limitations: data privacy and high latency. To address these limitations, in this work we propose a federated learning architecture for more accurate brain tumor detection based on the YOLOv11 algorithm. In contrast to earlier centralized learning methods, our federated learning approach protects the underlying medical data while supporting cooperative deep learning model training across multiple institutions. To allow the YOLOv11 model to locate and identify tumor areas, we adapt it to handle MRI data. To ensure robustness and generalizability, the model is trained and tested on a wide range of MRI data collected from several anonymous medical facilities. The results indicate that our method maintains significantly higher accuracy than conventional approaches.
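At its core, the federated setup exchanges only model weights, which a central server aggregates each round. A simplified FedAvg-style aggregation is sketched below; detector-specific details (e.g., the YOLOv11 architecture) are omitted and the helper is our own illustration, not the paper's code.

```python
# Simplified FedAvg aggregation: clients train locally on private MRI data and send
# weights, not images; the server averages them weighted by local dataset size.
from typing import Dict, List
import torch

def fedavg(client_states: List[Dict[str, torch.Tensor]],
           client_sizes: List[int]) -> Dict[str, torch.Tensor]:
    total = sum(client_sizes)
    global_state = {}
    for key in client_states[0]:
        global_state[key] = sum(state[key].float() * (n / total)
                                for state, n in zip(client_states, client_sizes))
    return global_state
```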
AI-native 6G networks are envisioned to tightly embed artificial intelligence (AI) into the wireless ecosystem, enabling real-time, personalized, and privacy-preserving intelligence at the edge. A foundational pillar of this vision is federated learning (FL), which allows distributed model training across devices without sharing raw data. However, implementing classical FL methods faces several bottlenecks in heterogeneous dynamic wireless networks, including limited device compute capacity, unreliable connectivity, intermittent communications, and vulnerability to model security and data privacy breaches. This article investigates the integration of quantum federated learning (QFL) into AI-native 6G networks, forming a transformative paradigm capable of overcoming these challenges. By leveraging quantum techniques across computing, communication, and cryptography within FL workflows, QFL offers new capabilities along three key dimensions: (i) edge intelligence, (ii) network optimization, and (iii) security and privacy, which are studied in this work. We further present a case study demonstrating that a QFL framework employing the quantum approximate optimization algorithm outperforms classical methods in model convergence. We conclude the paper by identifying practical challenges facing QFL deployment, such as quantum state fragility, incompatibility with classical protocols, and hardware constraints, and then outline key research directions toward its scalable real-world adoption.
Quantum federated learning (QFL) has been recently introduced to enable distributed, privacy-preserving quantum machine learning (QML) model training across quantum processors (clients). Despite recent research efforts, existing QFL frameworks predominantly focus on unimodal systems, limiting their applicability to real-world tasks that often naturally involve multiple modalities. To fill this significant gap, we present for the first time a novel multimodal approach specifically tailored for the QFL setting, with intermediate fusion via quantum entanglement. Furthermore, to address a major bottleneck in multimodal QFL, where the absence of certain modalities during training can degrade model performance, we introduce a Missing Modality Agnostic (MMA) mechanism that isolates untrained quantum circuits, ensuring stable training without corrupted states. Simulation results demonstrate that the proposed multimodal QFL method with MMA yields an improvement in accuracy of 6.84% under independent and identically distributed (IID) and 7.25% under non-IID data distributions compared to state-of-the-art methods.
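Intermediate fusion via entanglement can be pictured as each modality being encoded on its own qubits, with entangling gates coupling the registers before measurement. The PennyLane toy circuit below is a hypothetical sketch of that idea, not the paper's exact circuit or MMA mechanism.

```python
# Toy PennyLane circuit for "intermediate fusion" of two modalities: each modality
# is angle-encoded on its own qubits, then CNOTs entangle the two registers before a
# shared variational layer (a hypothetical sketch, not the paper's exact circuit).
import pennylane as qml
from pennylane import numpy as np

n_qubits = 4                                   # two qubits per modality
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev)
def multimodal_circuit(x_a, x_b, weights):
    qml.AngleEmbedding(x_a, wires=[0, 1])      # modality A (e.g., image features)
    qml.AngleEmbedding(x_b, wires=[2, 3])      # modality B (e.g., tabular features)
    qml.CNOT(wires=[1, 2])                     # intermediate fusion via entanglement
    qml.CNOT(wires=[0, 3])
    qml.BasicEntanglerLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

weights = np.array(np.random.uniform(0, np.pi, size=(2, n_qubits)), requires_grad=True)
print(multimodal_circuit(np.array([0.1, 0.2]), np.array([0.3, 0.4]), weights))
```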
High-quality machine learning (ML)-ready datasets play a foundational role in developing new artificial intelligence (AI) models or fine-tuning existing models for scientific applications such as weather and climate analysis. Unfortunately, despite the growing development of new deep learning models for weather and climate, there is a scarcity of curated, pre-processed ML-ready datasets. Curating such high-quality datasets for developing new models is challenging, particularly because the modality of the input data varies significantly across downstream tasks addressing different atmospheric scales (spatial and temporal). Here we introduce WxC-Bench (Weather and Climate Bench), a multi-modal dataset designed to support the development of generalizable AI models for downstream use-cases in weather and climate research. WxC-Bench is designed as a dataset of datasets for developing ML models for a complex weather and climate system, treating selected downstream tasks as machine learning phenomena. WxC-Bench encompasses several atmospheric processes from the meso-β scale (20-200 km) to synoptic scales (2500 km), such as aviation turbulence, hurricane intensity and track monitoring, weather analog search, gravity wave parameterization, and natural language report generation. We provide a comprehensive description of the dataset and also present a technical validation for baseline analysis. The dataset and code to prepare the ML-ready data have been made publicly available on Hugging Face -- this https URL
Switchbacks are rapid magnetic field reversals that last from seconds to hours. Current Parker Solar Probe (PSP) observations pose many open questions in regard to the nature of switchbacks. For example, are they stable as they propagate through the inner heliosphere, and how are they formed? In this work, we aim to investigate the structure and origin of switchbacks. In order to study the stability of switchbacks, we suppose the small-scale current sheets therein are generated by magnetic braiding, and they should work to stabilize the switchbacks. With more than one thousand switchbacks identified with PSP observations in seven encounters, we find many more current sheets inside than outside switchbacks, indicating that these microstructures should work to stabilize the S-shaped structures of switchbacks. Additionally, we study the helium variations to trace the switchbacks to their origins. We find both helium-rich and helium-poor populations in switchbacks, implying that the switchbacks could originate from both closed and open magnetic field regions in the Sun. Moreover, we observe that the alpha-proton differential speeds also show complex variations as compared to the local Alfvén speed. The joint distributions of both parameters show that low helium abundance together with low differential speed is the dominant state in switchbacks. The presence of small-scale current sheets in switchbacks along with the helium features are in line with the hypothesis that switchbacks could originate from the Sun via interchange reconnection process. However, other formation mechanisms are not excluded.
Quantum federated learning (QFL) is an emerging field that has the potential to revolutionize computation by taking advantage of quantum physics concepts in a distributed machine learning (ML) environment. However, the majority of available quantum simulators are primarily built for general quantum circuit simulation and do not include integrated support for machine learning tasks such as training, evaluation, and iterative optimization. Moreover, designing and assessing quantum learning algorithms is still a difficult and resource-intensive task. Real-time updates are essential for observing model convergence, debugging quantum circuits, and making informed choices during training with limited resources. In addition, most current simulators fail to support the integration of user-specific data for training purposes, undermining the main purpose of using a simulator. In this study, we introduce SimQFL, a customized simulator that simplifies and accelerates QFL experiments in quantum network applications. SimQFL supports real-time, epoch-wise output development and visualization, allowing researchers to monitor the process of learning across each training round. Furthermore, SimQFL offers an intuitive and visually appealing interface that facilitates ease of use and seamless execution. Users can customize key variables such as the number of epochs, learning rates, number of clients, and quantum hyperparameters such as qubits and quantum layers, making the simulator suitable for various QFL applications. The system gives immediate feedback following each epoch by showing intermediate outcomes and dynamically illustrating learning curves. SimQFL is a practical and interactive platform enabling academics and developers to prototype, analyze, and tune quantum neural networks with greater transparency and control in distributed quantum networks.
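The tunable knobs listed above map naturally onto a single configuration object. The dataclass below is a hypothetical sketch of such a configuration; it does not reflect the actual SimQFL API.

```python
# Hypothetical configuration object for a QFL simulation run (illustrative only;
# not the actual SimQFL interface).
from dataclasses import dataclass
from typing import Optional

@dataclass
class QFLSimConfig:
    num_clients: int = 5                  # participating quantum clients
    epochs: int = 20                      # global training rounds
    learning_rate: float = 0.01
    num_qubits: int = 4                   # qubits per client circuit
    quantum_layers: int = 2               # variational layers per circuit
    dataset_path: Optional[str] = None    # user-supplied training data

config = QFLSimConfig(num_clients=8, epochs=50, num_qubits=6)
```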
Erupting flux ropes play a crucial role in powering a wide range of solar transients, including flares, jets, and coronal mass ejections. These events are driven by the release of stored magnetic energy, facilitated by the shear in complex magnetic topologies. However, the mechanisms governing the formation and eruption of flux ropes, particularly the role of the magnetic shear distribution in coronal arcades, are not fully understood. We employ magnetohydrodynamic simulations incorporating the nonadiabatic effects of optically thin radiative losses, magnetic field-aligned thermal conduction, and spatially varying (steady) background heating to realistically model the coronal environment. A stratified solar atmosphere under gravity is initialized with a non-force-free field comprising sheared arcades. We study two cases with different initial shear to analyze the resulting dynamics and the possibility of flux rope formation and eruption. Our results show that strong initial magnetic shear leads to spontaneous flux rope formation and eruption via magnetic reconnection, driven by the Lorentz force. The shear distribution reflects the non-potentiality distributed along the arcades and demonstrates its relevance in identifying sites prone to eruptive activity. The evolution of the mean shear and the relative strength of the guide field compared to the reconnection field during the pre- and post-eruption phases are explored, with implications of bulk heating for the "hot onset" phenomenon in flares and for particle acceleration. In contrast, the weaker shear case does not lead to the formation of any flux ropes. Our findings highlight the limitations of relying solely on footpoint shear and underscore the need for coronal-scale diagnostics. These results are relevant for understanding eruptive onset conditions and can promote a better interpretation of coronal observations from current and future missions.
(Author affiliations: institutions of the LIGO-Virgo collaboration, including The University of Alabama in Huntsville, among many others.)
Despite the growing number of confident binary black hole coalescences observed through gravitational waves so far, the astrophysical origin of these binaries remains uncertain. Orbital eccentricity is one of the clearest tracers of binary formation channels. Identifying binary eccentricity, however, remains challenging due to the limited availability of gravitational waveforms that include effects of eccentricity. Here, we present observational results for a waveform-independent search sensitive to eccentric black hole coalescences, covering the third observing run (O3) of the LIGO and Virgo detectors. We identified no new high-significance candidates beyond those that were already identified with searches focusing on quasi-circular binaries. We determine the sensitivity of our search to high-mass (total mass $M > 70\,M_\odot$) binaries covering eccentricities up to 0.3 at 15 Hz orbital frequency, and use this to compare model predictions to search results. Assuming all detections are indeed quasi-circular, for our fiducial population model, we place an upper limit on the merger rate density of high-mass binaries with eccentricities $0 < e \leq 0.3$ of $0.33\ \mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$ at 90% confidence level.
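For orientation (the collaboration's analysis may use a more detailed treatment), a rate upper limit of this kind follows from the surveyed sensitive time-volume $\langle VT\rangle$: with zero detections attributed to the eccentric population, the 90% Poisson upper limit is

$$ R_{90} \;=\; \frac{-\ln(1-0.9)}{\langle VT \rangle} \;\approx\; \frac{2.30}{\langle VT \rangle}, $$

so, under this convention, the quoted $0.33\ \mathrm{Gpc}^{-3}\,\mathrm{yr}^{-1}$ corresponds to $\langle VT\rangle \approx 7\ \mathrm{Gpc}^{3}\,\mathrm{yr}$ for the fiducial population.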
The dissipation of stop-and-go waves has recently attracted attention as a traffic management problem that can be efficiently addressed by automated driving. As part of the 100-automated-vehicle experiment named the MegaVanderTest, feedback controls were used to induce strong dissipation via velocity smoothing. More precisely, a single vehicle driving differently in one of the four lanes of I-24 in the Nashville area was able to regularize the velocity profile by reducing oscillations in time and velocity differences among vehicles. Quantitative measures of this effect were possible thanks to the innovative I-24 MOTION system, which is capable of monitoring the traffic conditions of all vehicles on the roadway. This paper presents the control design, the technological aspects involved in its deployment, and, finally, the results achieved by the experiment.
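As a toy illustration of the velocity-smoothing idea (and only that; the controller actually deployed in the MegaVanderTest is more sophisticated), the commanded speed can track a slow running average of the lead vehicle's speed, capped by a simple gap-dependent safe speed:

```python
# Toy velocity-smoothing controller: command a slow running average of the lead
# vehicle's speed, capped by a crude constant-time-headway safe speed. Illustrative
# only; not the feedback controller deployed on I-24 during the MegaVanderTest.
def smoothed_speed_command(v_lead_history, gap_m, dt=0.1, tau=30.0, t_headway=1.4):
    """Return a commanded speed (m/s) that damps lead-vehicle oscillations."""
    alpha = dt / tau                       # exponential moving average coefficient
    v_avg = v_lead_history[0]
    for v in v_lead_history[1:]:
        v_avg = (1 - alpha) * v_avg + alpha * v
    v_safe = gap_m / t_headway             # simple gap-dependent speed cap
    return min(v_avg, v_safe)

# Example: a lead vehicle oscillating between 24 and 30 m/s with a 45 m gap
lead = [27.0 + 3.0 * (-1) ** (i // 50) for i in range(600)]
print(round(smoothed_speed_command(lead, gap_m=45.0), 1))   # a smooth command near 27 m/s
```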
Maintaining wavefront stability while directly imaging exoplanets over long exposure times is an ongoing problem in the field of high-contrast imaging. Robust and efficient high-order wavefront sensing and control systems are required for maintaining wavefront stability to counteract mechanical and thermal instabilities. Dark zone maintenance (DZM) has been proposed to address quasi-static optical aberrations and maintain high levels of contrast for coronagraphic space telescopes. To further experimentally test this approach for future missions, such as the Habitable Worlds Observatory, this paper quantifies the differences between the theoretical closed-loop contrast bounds and DZM performance on the High-contrast Imager for Complex Aperture Telescopes (HiCAT) testbed. The quantification of DZM is achieved by traversing important parameters of the system, specifically the total direct photon rate entering the aperture of the instrument, ranging from $1.85 \times 10^6$ to $1.85 \times 10^8$ photons per second, and the wavefront error drift rate, ranging from $\sigma_{\mathrm{drift}} = 0.3$ to $3\ \mathrm{nm}/\sqrt{\mathrm{iteration}}$, injected via the deformable mirror actuators. This is tested on the HiCAT testbed by injecting random walk drifts using two Boston Micromachines kilo deformable mirrors (DMs). The parameter scan is run on the HiCAT simulator and the HiCAT testbed, and the corresponding results are compared to the model-based theoretical contrast bounds to analyze discrepancies. The results indicate a difference of approximately one and a half orders of magnitude between the theoretical bounds and the testbed results.
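The drift unit $\mathrm{nm}/\sqrt{\mathrm{iteration}}$ means the injected wavefront error performs a random walk whose RMS grows as $\sigma_{\mathrm{drift}}\sqrt{k}$ after $k$ iterations. A small NumPy sketch of generating such actuator drifts is shown below; the actuator count is an arbitrary example value, and this is not the HiCAT control code.

```python
# Random-walk actuator drifts with RMS growing as sigma_drift * sqrt(iteration)
# (illustrative sketch; not the HiCAT testbed or simulator code).
import numpy as np

rng = np.random.default_rng(0)
n_actuators, n_iters, sigma_drift = 1000, 500, 1.0    # sigma_drift in nm / sqrt(iteration)
steps = rng.normal(0.0, sigma_drift, size=(n_iters, n_actuators))
drift = np.cumsum(steps, axis=0)                      # cumulative drift per actuator, in nm

rms = np.sqrt((drift ** 2).mean(axis=1))              # RMS over actuators at each iteration
print(rms[99] / np.sqrt(100))                         # ~= sigma_drift, confirming the scaling
```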
The field of warp research has been dominated by analytical methods to investigate potential solutions. However, these approaches often favor simple metric forms that facilitate analysis but ultimately limit the range of exploration of novel solutions. So far, the proposed solutions have been unphysical, requiring violations of the energy conditions and very large amounts of energy. To overcome the analytical limitations in warp research, we introduce Warp Factory: a numerical toolkit designed for modeling warp drive spacetimes. By leveraging numerical analysis, Warp Factory enables the examination of general warp drive geometries by evaluating the Einstein field equations and computing energy conditions. Furthermore, this comprehensive toolkit provides the determination of metric scalars and insightful visualizations in both 2D and 3D, offering a deeper understanding of metrics and their corresponding stress-energy tensors. The paper delves into the methodology employed by Warp Factory in evaluating the physicality of warp drive spacetimes and highlights its application in assessing commonly modeled warp drive metrics. By leveraging the capabilities of Warp Factory, we aim to further warp drive research and hopefully bring us closer to realizing physically achievable warp drives.
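For reference, the pointwise energy conditions that such a toolkit evaluates are conventionally stated as follows (standard textbook definitions, not notation specific to Warp Factory):

$$
\begin{aligned}
\text{NEC:}&\quad T_{\mu\nu}k^{\mu}k^{\nu} \geq 0 \ \ \text{for all null } k^{\mu},\\
\text{WEC:}&\quad T_{\mu\nu}u^{\mu}u^{\nu} \geq 0 \ \ \text{for all timelike } u^{\mu},\\
\text{SEC:}&\quad \left(T_{\mu\nu}-\tfrac{1}{2}T g_{\mu\nu}\right)u^{\mu}u^{\nu} \geq 0 \ \ \text{for all timelike } u^{\mu},\\
\text{DEC:}&\quad \text{WEC holds and } -T^{\mu}{}_{\nu}u^{\nu} \text{ is a future-directed causal vector for all timelike } u^{\mu}.
\end{aligned}
$$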
Developing an analytical theory for atomic coherence driven by ultrashort laser pulses has proved to be challenging due to the breakdown of the rotating wave approximation (RWA). In this paper, we present an approximate, closed-form solution to the Schrödinger equation that describes a two-level atom under the excitation of a far-off-resonance, few-cycle pulse of arbitrary shape without invoking the RWA. As an example of its applicability, an analytical solution for Gaussian pulses is explicitly given. Comparisons with numerical solutions validate the accuracy of our solution within the scope of the approximation. Finally, we outline an alternative approach that can lead to a more accurate solution by capturing the nonlinear behaviors of the system.
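For context, the starting point of such treatments is the driven two-level Hamiltonian kept with its counter-rotating terms (a standard form, written here up to sign and phase conventions rather than in the paper's specific notation):

$$ H(t) \;=\; \frac{\hbar\omega_{0}}{2}\,\sigma_{z} \;-\; \hbar\,\Omega(t)\cos(\omega_{L}t+\varphi)\,\sigma_{x}, $$

where $\omega_0$ is the transition frequency, $\Omega(t)$ the envelope (Rabi) frequency of the pulse, and $\omega_L$ the carrier frequency. The RWA keeps only the slowly rotating part of $\cos(\omega_L t+\varphi)\,\sigma_x$, which is precisely what breaks down for far-off-resonance, few-cycle pulses.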
In this work, we present a first-principles lattice-QCD calculation of the unpolarized quark PDF for the pion and the kaon. The lattice data rely on matrix elements calculated for boosted mesons coupled to non-local operators containing a Wilson line. The calculations on this lattice ensemble correspond to two degenerate light, a strange, and a charm quark ($N_f=2+1+1$), using maximally twisted mass fermions with a clover term. The lattice volume is $32^3\times 64$, with a lattice spacing of 0.0934 fm, and a pion mass of 260 MeV. Matrix elements are calculated for hadron boosts of $|P_3| = 0,\ 0.41,\ 0.83,\ 1.25,\ 1.66$, and $2.07$ GeV. To match lattice QCD results to their light-cone counterparts, we employ two complementary frameworks: the large-momentum effective theory (LaMET) and the short-distance factorization (SDF). Using these approaches in parallel, we also test the lattice data to identify methodology-driven systematics. Results are presented for the standard quark PDFs, as well as the valence sector. Beyond obtaining the PDFs, we also explore the possibility of extracting information on SU(3) flavor-symmetry-breaking effects. For LaMET, we also parametrize the momentum dependence to obtain the infinite-momentum PDFs.
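For orientation, the light-cone quark PDF that these lattice matrix elements ultimately target is defined (in one standard convention; normalizations differ across the literature) as

$$ q(x,\mu) \;=\; \int_{-\infty}^{\infty}\frac{d\xi^{-}}{4\pi}\, e^{-i x P^{+}\xi^{-}}\, \langle P|\,\bar{\psi}(\xi^{-})\,\gamma^{+}\,W(\xi^{-},0)\,\psi(0)\,|P\rangle, $$

with $W$ a light-like Wilson line. The lattice instead computes equal-time matrix elements with a space-like Wilson line at finite boost $P_3$ and relates them to $q(x,\mu)$ through the LaMET matching or the SDF framework.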
Researchers from The University of Alabama in Huntsville and The University of Arizona developed the first comprehensive, logically rigorous theoretical framework for Verification and Validation (V&V) in systems engineering. Their work uses Dynamic Epistemic Modal Logic to provide formal definitions for V&V concepts and derive 50 theorems that clarify the relationships between verification and validation activities.
We present a joint Suzaku and XMM-Newton analysis of the outskirts of the nearby galaxy cluster Abell 2199, the only nearby galaxy cluster to be observed with near complete azimuthal coverage with Suzaku. Using the XMM-Newton observations to correct for the effects of gas clumping, we find that the azimuthally averaged entropy profile in the outskirts follows a power law with a slope of $1.20 \pm 0.23$, statistically consistent with the slope of 1.1 predicted by non-radiative simulations for purely gravitational hierarchical structure formation. However, when divided into 10 sectors, the entropy shows significant azimuthal variation, with some sectors lying below the baseline level. The azimuthally averaged gas mass fraction is found to agree with the cosmic mean baryon fraction. The metal abundance in the outskirts is found to be consistent with being uniform in all directions and has an average value of $0.29_{-0.03}^{+0.03}\,Z_{\odot}$, consistent with the gas accreting onto clusters being pre-enriched with metals.
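For reference, the entropy discussed here is the usual X-ray pseudo-entropy, and the non-radiative baseline it is compared against is a power law in radius (standard definitions, with the baseline normalization omitted):

$$ K \;=\; \frac{k_{B}T}{n_{e}^{2/3}}, \qquad K(r)\ \propto\ r^{1.1}. $$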