Old Dominion University
Large Language Models (LLMs) remain vulnerable to multi-turn jailbreak attacks. We introduce HarmNet, a modular framework comprising ThoughtNet, a hierarchical semantic network; a feedback-driven Simulator for iterative query refinement; and a Network Traverser for real-time adaptive attack execution. HarmNet systematically explores and refines the adversarial space to uncover stealthy, high-success attack paths. Experiments across closed-source and open-source LLMs show that HarmNet outperforms state-of-the-art methods, achieving higher attack success rates. For example, on Mistral-7B, HarmNet achieves a 99.4% attack success rate, 13.9% higher than the best baseline.
Index terms: jailbreak attacks; large language models; adversarial framework; query refinement.
Federated Learning (FL) enables collaborative model training across institutions without sharing raw data. However, gradient sharing still risks privacy leakage, for example through gradient inversion attacks. Homomorphic Encryption (HE) can secure aggregation but often incurs prohibitive computational and communication overhead. Existing HE-based FL methods sit at two extremes: encrypting all gradients for full privacy at high cost, or partially encrypting gradients to save resources while leaving vulnerabilities exposed. We present DictPFL, a practical framework that achieves full gradient protection with minimal overhead. DictPFL encrypts every transmitted gradient while keeping non-transmitted parameters local, preserving privacy without heavy computation. It introduces two key modules: Decompose-for-Partial-Encrypt (DePE), which decomposes model weights into a static dictionary and an updatable lookup table, where only the lookup table is encrypted and aggregated while the static dictionary remains local and requires neither sharing nor encryption; and Prune-for-Minimum-Encrypt (PrME), which applies encryption-aware pruning to minimize the number of encrypted parameters via consistent, history-guided masks. Experiments show that DictPFL reduces communication cost by 402-748× and accelerates training by 28-65× compared to fully encrypted FL, while outperforming state-of-the-art selective encryption methods by 51-155× in overhead and 4-19× in speed. Remarkably, DictPFL's runtime is within 2× of plaintext FL, demonstrating for the first time that HE-based private federated learning is practical for real-world deployment. The code is publicly available at this https URL.
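The DePE idea can be sketched with a toy linear factorization: a weight matrix is approximated as a static local dictionary times a small, updatable lookup table, and only the table would ever be transmitted (and encrypted). The shapes, the random dictionary, and the least-squares fit below are illustrative assumptions, not the paper's exact construction; encryption itself is omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer weight W (out x in), factored as W ~= D @ T:
# D is the static dictionary (stays local, never shared or encrypted),
# T is the updatable lookup table (the only part transmitted/encrypted).
out_dim, in_dim, atoms = 32, 64, 8
W = rng.standard_normal((out_dim, in_dim))

D = rng.standard_normal((out_dim, atoms))      # static dictionary
T, *_ = np.linalg.lstsq(D, W, rcond=None)      # best-fit lookup table
W_hat = D @ T                                  # reconstructed weights

# Communication saving: transmit T (atoms x in_dim) instead of W.
sent, full = T.size, W.size
```

Here the transmitted payload is `atoms/out_dim` of the full weight matrix, which is where the claimed communication reduction would come from before pruning is even applied.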
Large Language Models (LLMs) have revolutionized natural language processing but remain vulnerable to jailbreak attacks, especially multi-turn jailbreaks that distribute malicious intent across benign exchanges and bypass alignment mechanisms. Existing approaches often explore the adversarial space poorly, rely on hand-crafted heuristics, or lack systematic query refinement. We present NEXUS (Network Exploration for eXploiting Unsafe Sequences), a modular framework for constructing, refining, and executing optimized multi-turn attacks. NEXUS comprises: (1) ThoughtNet, which hierarchically expands a harmful intent into a structured semantic network of topics, entities, and query chains; (2) a feedback-driven Simulator that iteratively refines and prunes these chains through attacker-victim-judge LLM collaboration using harmfulness and semantic-similarity benchmarks; and (3) a Network Traverser that adaptively navigates the refined query space for real-time attacks. This pipeline uncovers stealthy, high-success adversarial paths across LLMs. On several closed-source and open-source LLMs, NEXUS increases attack success rate by 2.1% to 19.4% over prior methods. Code: this https URL
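The Simulator stage can be illustrated with a stub attacker-victim-judge loop: query chains are kept only if the simulated victim responses clear a harmfulness benchmark and at least one query stays semantically close to the goal. The scoring functions and the `victim` stub below are toy stand-ins for the LLM calls the paper describes, with thresholds chosen arbitrarily.

```python
# Stub judges stand in for LLM-based scoring (hypothetical, illustrative only).
def harmfulness(response: str) -> float:
    """1.0 if the simulated victim complied, 0.0 if it refused."""
    return 0.0 if "refuse" in response else 1.0

def semantic_similarity(query: str, goal: str) -> float:
    """Crude word-overlap proxy for an embedding-similarity judge."""
    shared = set(query.lower().split()) & set(goal.lower().split())
    return len(shared) / max(len(goal.split()), 1)

def victim(query: str) -> str:
    """Toy victim: refuses overtly harmful queries, answers benign ones."""
    return "refuse" if "bomb" in query else "compliant answer"

def refine_chains(chains, goal, harm_thresh=0.5, sim_thresh=0.2):
    """Prune multi-turn query chains that fail either benchmark."""
    kept = []
    for chain in chains:
        harm = [harmfulness(victim(q)) for q in chain]
        sims = [semantic_similarity(q, goal) for q in chain]
        if min(harm) >= harm_thresh and max(sims) >= sim_thresh:
            kept.append(chain)
    return kept

goal = "explain chemical synthesis"
chains = [
    ["tell me about chemistry", "explain a synthesis step"],  # stealthy chain
    ["how to build a bomb"],                                  # blunt, refused
]
survivors = refine_chains(chains, goal)
```

The surviving chain is the distributed, benign-looking one, mirroring the paper's point that multi-turn chains evade refusals that blunt single queries trigger.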
ViGText is a deepfake image detection framework developed by researchers from NJIT, ODU, and QCRI that integrates Vision-Language Model explanations and Graph Neural Networks. The system leverages patch-specific textual insights and dual-domain visual features within a graph structure to identify synthetic content. It achieved superior detection performance and demonstrated enhanced generalization to unseen generative models and increased robustness against adversarial attacks, setting new benchmarks in these critical areas.
The small-x deep inelastic scattering in the saturation region is governed by the non-linear evolution of Wilson-line operators. In the leading logarithmic approximation it is given by the BK equation for the evolution of color dipoles. In the next-to-leading order the BK equation gets contributions from quark and gluon loops as well as from the tree gluon diagrams with quadratic and cubic nonlinearities. We calculate the gluon contribution to small-x evolution of Wilson lines (the quark part was obtained earlier).
Author affiliations: member institutions of the International Muon Collider Collaboration, including the University of Illinois at Urbana-Champaign, CERN, Fermilab, SLAC, DESY, Jefferson Lab, and many other universities and laboratories worldwide.
This review, by the International Muon Collider Collaboration (IMCC), outlines the scientific case and technological feasibility of a multi-TeV muon collider, demonstrating its potential for unprecedented energy reach and precision measurements in particle physics. It presents a comprehensive conceptual design and R&D roadmap for a collider capable of reaching 10+ TeV center-of-mass energy.
Embodied intelligence (EI) enables manufacturing systems to flexibly perceive, reason, adapt, and operate within dynamic shop floor environments. In smart manufacturing, a representative EI scenario is robotic visual inspection, where industrial robots must accurately inspect components on rapidly changing, heterogeneous production lines. This task requires both high inference accuracy, especially for uncommon defects, and low latency to match production speeds, despite evolving lighting, part geometries, and surface conditions. To meet these needs, we propose LAECIPS, a large vision model-assisted adaptive edge-cloud collaboration framework for IoT-based embodied intelligence systems. LAECIPS decouples large vision models in the cloud from lightweight models on the edge, enabling plug-and-play model adaptation and continual learning. Through a hard input mining-based inference strategy, LAECIPS routes complex and uncertain inspection cases to the cloud while handling routine tasks at the edge, achieving both high accuracy and low latency. Experiments conducted on a real-world robotic semantic segmentation system for visual inspection demonstrate significant improvements in accuracy, processing latency, and communication overhead compared to state-of-the-art methods. LAECIPS provides a practical and scalable foundation for embodied intelligence in smart manufacturing, especially in adaptive robotic inspection and quality control scenarios.
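The hard-input-mining routing decision reduces to a confidence test on the edge model's output: confident predictions are served locally, ambiguous ones are escalated to the cloud model. A minimal sketch, with the threshold value and function name as our own illustrative assumptions:

```python
import numpy as np

def route(edge_probs: np.ndarray, conf_thresh: float = 0.8) -> str:
    """Hard input mining: send low-confidence inputs to the cloud model,
    serve confident ones from the lightweight edge model."""
    confidence = float(edge_probs.max())
    return "cloud" if confidence < conf_thresh else "edge"

easy = np.array([0.05, 0.92, 0.03])  # edge model is confident -> stay local
hard = np.array([0.40, 0.35, 0.25])  # ambiguous -> escalate to the cloud
```

In a segmentation setting the same test would typically be applied per image (e.g. on mean pixel confidence) rather than per class vector, but the routing logic is the same.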
Accelerating cavities are an integral part of the Continuous Electron Beam Accelerator Facility (CEBAF) at Jefferson Laboratory. When any of the over 400 cavities in CEBAF experiences a fault, it disrupts beam delivery to experimental user halls. In this study, we propose the use of a deep learning model to predict slowly developing cavity faults. By utilizing pre-fault signals, we train an LSTM-CNN binary classifier to distinguish between radio-frequency (RF) signals during normal operation and RF signals indicative of impending faults. We optimize the model by adjusting the fault confidence threshold and implementing a multiple consecutive window criterion to identify fault events, ensuring a low false positive rate. Results from a real dataset collected from the accelerating cavities, analyzed to simulate a deployed scenario, demonstrate the model's ability to identify normal signals with 99.99% accuracy and correctly predict 80% of slowly developing faults. Notably, these results were achieved on a highly imbalanced dataset, with fault predictions made several hundred milliseconds before the onset of the fault. Anticipating faults enables preemptive measures to improve operational efficiency by preventing or mitigating their occurrence.
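The consecutive-window alarm criterion can be sketched independently of the classifier itself: an alarm fires only after several windows in a row exceed the confidence threshold, which is what suppresses isolated false positives. Threshold and window count below are illustrative, not the tuned values from the study.

```python
def fault_alarm(window_probs, threshold=0.9, consecutive=3):
    """Raise a fault alarm only after `consecutive` windows in a row
    exceed the confidence threshold; isolated spikes are ignored."""
    run = 0
    for t, p in enumerate(window_probs):
        run = run + 1 if p >= threshold else 0
        if run >= consecutive:
            return t  # index of the window that triggered the alarm
    return None       # no alarm raised

# One isolated spike (t=1) is ignored; three consecutive high-confidence
# windows (t=3,4,5) trigger the alarm at t=5.
probs = [0.2, 0.95, 0.1, 0.93, 0.97, 0.99, 0.3]
trigger = fault_alarm(probs)
```

Raising `consecutive` trades detection latency for a lower false-positive rate, which matters when each false alarm would interrupt beam delivery.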
Handling objects with unknown or changing masses is a common challenge in robotics, often leading to errors or instability if the control system cannot adapt in real-time. In this paper, we present a novel approach that enables a six-degrees-of-freedom robotic manipulator to reliably follow waypoints while automatically estimating and compensating for unknown payload weight. Our method integrates an admittance control framework with a mass estimator, allowing the robot to dynamically update an excitation force to compensate for the payload mass. This strategy mitigates end-effector sagging and preserves stability when handling objects of unknown weights. We experimentally validated our approach in a challenging pick-and-place task on a shelf with a crossbar, demonstrating improved accuracy in reaching waypoints and more compliant motion compared to a baseline admittance-control scheme. By safely accommodating unknown payloads, our work enhances flexibility in robotic automation and represents a significant step forward in adaptive control for uncertain environments.
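The sag-and-compensate mechanism can be sketched in one dimension: a payload of mass m adds a constant downward force m*g, which a plain admittance law M*a + D*v + K*x = f turns into a steady-state offset of m*g/K; estimating m from the vertical force residual and adding m_hat*g back cancels it. Everything below (1-DoF model, gains, the simple residual-based estimator) is an illustrative assumption, not the paper's exact controller.

```python
G = 9.81  # m/s^2

def estimate_mass(measured_force_z, expected_force_z):
    """Attribute the vertical force residual to an unknown payload mass (kg)."""
    return max((expected_force_z - measured_force_z) / G, 0.0)

def admittance_step(x, v, f_ext, f_comp, M=2.0, D=20.0, K=100.0, dt=0.01):
    """One step of a 1-DoF vertical admittance law M*a + D*v + K*x = f_ext + f_comp."""
    a = (f_ext + f_comp - D * v - K * x) / M
    v = v + a * dt
    x = x + v * dt
    return x, v

payload = 1.5  # kg, unknown to the controller a priori
m_hat = estimate_mass(measured_force_z=-payload * G, expected_force_z=0.0)
f_comp = m_hat * G  # compensating excitation force cancels the payload weight

# With compensation, the end-effector holds its waypoint (x stays at 0)...
x, v = 0.0, 0.0
for _ in range(2000):
    x, v = admittance_step(x, v, f_ext=-payload * G, f_comp=f_comp)

# ...without it, the admittance law sags to the offset -m*g/K.
x2, v2 = 0.0, 0.0
for _ in range(2000):
    x2, v2 = admittance_step(x2, v2, f_ext=-payload * G, f_comp=0.0)
```

The uncompensated run settles near x2 = -1.5*9.81/100 ≈ -0.147 m, which is exactly the end-effector sag the paper's estimator is designed to remove.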
Most existing large-scale academic search engines are built to retrieve text-based information. However, there are no large-scale retrieval services for scientific figures and tables. One challenge for such services is understanding scientific figures' semantics, such as their types and purposes. A key obstacle is the need for datasets containing annotated scientific figures and tables, which can then be used for classification, question-answering, and auto-captioning. Here, we develop a pipeline that extracts figures and tables from the scientific literature and a deep-learning-based framework that classifies scientific figures using visual features. Using this pipeline, we built the first large-scale automatically annotated corpus, ACL-Fig, consisting of 112,052 scientific figures extracted from ~56K research papers in the ACL Anthology. The ACL-Fig-Pilot dataset contains 1,671 manually labeled scientific figures belonging to 19 categories. The dataset is accessible at this https URL under a CC BY-NC license.
We present a two-level decomposition strategy for solving the Vehicle Routing Problem (VRP) using the Quantum Approximate Optimization Algorithm (QAOA). A Problem-Level Decomposition partitions a 13-node (156-qubit) VRP into smaller Traveling Salesman Problem (TSP) instances. Each TSP is then further cut via Circuit-Level Decomposition, enabling execution on near-term quantum devices. Our approach achieves up to a 95% reduction in circuit depth, a 96% reduction in the number of qubits, and a 99.5% reduction in the number of two-qubit gates. We demonstrate this hybrid algorithm on the standard edge encoding of the VRP as well as on a novel amplitude encoding. These results demonstrate the feasibility of solving VRPs previously too complex for quantum simulators and provide early evidence of potential quantum utility.
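The problem-level decomposition can be sketched classically: customers are partitioned into per-vehicle clusters, and each cluster becomes a small TSP solved independently. The brute-force TSP solver below stands in for the circuit-cut QAOA subroutine, and the hand-chosen clusters are illustrative (the paper's partitioning is part of its method, not shown here).

```python
from itertools import permutations

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def tsp_bruteforce(depot, stops):
    """Exact solve of a small TSP; stands in for the QAOA subroutine."""
    best_tour, best_len = None, float("inf")
    for perm in permutations(stops):
        tour = [depot, *perm, depot]
        length = sum(dist(tour[i], tour[i + 1]) for i in range(len(tour) - 1))
        if length < best_len:
            best_tour, best_len = tour, length
    return best_tour, best_len

depot = (0.0, 0.0)
# Problem-level decomposition: one small TSP per vehicle cluster.
clusters = [[(1, 0), (2, 0), (2, 1)], [(-1, 0), (-2, 1)]]
routes = [tsp_bruteforce(depot, c) for c in clusters]
total = sum(length for _, length in routes)
```

Decomposing an n-node VRP into k clusters of roughly n/k nodes is what shrinks the per-instance qubit count before any circuit cutting is applied.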
Extracting accurate results from neutrino oscillation and cross section experiments requires accurate simulation of the neutrino-nucleus interaction. The rescattering of outgoing hadrons (final state interactions) by the rest of the nucleus is an important component of these interactions. We present a new measurement of proton transparency (defined as the fraction of outgoing protons that emerge without significant rescattering) using electron-nucleus scattering data recorded by the CLAS detector at Jefferson Laboratory on helium, carbon, and iron targets. This analysis by the Electrons for Neutrinos (e4ν) collaboration uses a new data-driven method to extract the transparency. It defines transparency as the ratio of electron-scattering events with a detected proton to quasi-elastic electron-scattering events where a proton should have been knocked out. Our results are consistent with previous measurements that determined the transparency from the ratio of measured events to theoretically predicted events. We find that the GENIE event generator, which is widely used by oscillation experiments to simulate neutrino-nucleus interactions, needs to better describe both the nuclear ground state and proton rescattering in order to reproduce our measured transparency ratios, especially at lower proton momenta.
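The data-driven observable itself is a simple per-bin ratio; a minimal sketch with purely illustrative event counts (not measured values):

```python
def transparency(n_detected_proton: int, n_expected_qe: int) -> float:
    """Data-driven proton transparency: events with a detected proton
    divided by quasi-elastic events where a proton should have emerged."""
    if n_expected_qe == 0:
        raise ValueError("no quasi-elastic events in this bin")
    return n_detected_proton / n_expected_qe

# Hypothetical counts for one momentum bin on one target.
t_bin = transparency(612, 1000)
```

The point of the method is that both numerator and denominator come from the same dataset, so detector effects largely cancel, unlike the earlier measured-over-predicted definition.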
Performance of superconducting resonators, particularly cavities for particle accelerators as well as microcavities and thin-film resonators for quantum computation and photon detection, has been improved substantially by recent materials treatments and technological advances. As a result, niobium cavities have reached quality factors Q ~ 10^11 at 1-2 GHz and 1.5 K, with breakdown radio-frequency (rf) fields H close to the dc superheating field of the Meissner state. These advances raise the questions of whether the state-of-the-art cavities are close to the fundamental limits, what these limits actually are, and to what extent the Q and H limits can be pushed by materials nanostructuring and impurity management. These issues are also relevant to many applications using high-Q thin-film resonators, including single-photon detectors and quantum circuits. This topical review outlines basic physical mechanisms of the rf nonlinear surface impedance controlled by quasiparticles, dielectric losses, and trapped vortices, as well as the dynamic field limit of the Meissner state. Sections cover ways of engineering an optimum quasiparticle density of states and superfluid density to reduce rf losses and kinetic inductance via pairbreaking mechanisms related to magnetic impurities, rf currents, and proximity-coupled metallic layers at the surface. A section focuses on mechanisms of the residual surface resistance which dominates rf losses at ultralow temperatures. Microwave losses of trapped vortices and their reduction by optimizing the concentration of impurities and the pinning potential are also discussed.
Deep learning (DL) models have provided state-of-the-art performance in various medical imaging benchmarking challenges, including the Brain Tumor Segmentation (BraTS) challenges. However, the task of focal pathology multi-compartment segmentation (e.g., tumor and lesion sub-regions) is particularly challenging, and potential errors hinder translating DL models into clinical workflows. Quantifying the reliability of DL model predictions in the form of uncertainties could enable clinical review of the most uncertain regions, thereby building trust and paving the way toward clinical translation. Several uncertainty estimation methods have recently been introduced for DL medical image segmentation tasks. Developing scores to evaluate and compare the performance of uncertainty measures will assist the end-user in making more informed decisions. In this study, we explore and evaluate a score developed during the BraTS 2019 and BraTS 2020 task on uncertainty quantification (QU-BraTS) and designed to assess and rank uncertainty estimates for brain tumor multi-compartment segmentation. This score (1) rewards uncertainty estimates that produce high confidence in correct assertions and those that assign low confidence levels at incorrect assertions, and (2) penalizes uncertainty measures that lead to a higher percentage of under-confident correct assertions. We further benchmark the segmentation uncertainties generated by 14 independent participating teams of QU-BraTS 2020, all of which also participated in the main BraTS segmentation task. Overall, our findings confirm the importance and complementary value that uncertainty estimates provide to segmentation algorithms, highlighting the need for uncertainty quantification in medical image analyses. Finally, in favor of transparency and reproducibility, our evaluation code is made publicly available at: this https URL
We present a quantum computational framework for SU(2) lattice gauge theory, leveraging continuous variables instead of discrete qubits to represent the infinite-dimensional Hilbert space of the gauge fields. We consider a ladder as well as a two-dimensional grid of plaquettes, detailing the use of gauge fixing to reduce the degrees of freedom and simplify the Hamiltonian. We demonstrate how the system dynamics, ground states, and energy gaps can be computed using the continuous-variable approach to quantum computing. Our results indicate that it is feasible to study non-Abelian gauge theories with continuous variables, providing new avenues for understanding the real-time dynamics of quantum field theories.
We investigate the relativistic scattering of three identical scalar bosons interacting via pair-wise interactions. Extending techniques from the non-relativistic three-body scattering theory, we provide a detailed and general prescription for solving and analytically continuing integral equations describing the three-body reactions. We use these techniques to study a system with zero angular momenta described by a single scattering length leading to a bound state in a two-body sub-channel. We obtain bound-state--particle and three-particle amplitudes in the previously unexplored kinematical regime; in particular, for real energies below elastic thresholds and complex energies in the physical and unphysical Riemann sheets. We extract positions of three-particle bound-states that agree with previous finite-volume studies, providing further evidence for the consistency of the relativistic finite-volume three-body quantization conditions. We also determine previously unobserved virtual bound states in this theory. Finally, we find numerical evidence of the breakdown of the two-body finite-volume formalism in the vicinity of the left-hand cuts and argue for the generalization of the existing formalism.
Global fits of physics models require efficient methods for exploring high-dimensional and/or multimodal posterior functions. We introduce a novel method for accelerating Markov Chain Monte Carlo (MCMC) sampling by pairing a Metropolis-Hastings algorithm with a diffusion model that can draw global samples with the aim of approximating the posterior. We briefly review diffusion models in the context of image synthesis before providing a streamlined diffusion model tailored towards low-dimensional data arrays. We then present our adapted Metropolis-Hastings algorithm which combines local proposals with global proposals taken from a diffusion model that is regularly trained on the samples produced during the MCMC run. Our approach leads to a significant reduction in the number of likelihood evaluations required to obtain an accurate representation of the Bayesian posterior across several analytic functions, as well as for a physical example based on a global analysis of parton distribution functions. Our method is extensible to other MCMC techniques, and we briefly compare our method to similar approaches based on normalizing flows. A code implementation can be found at this https URL.
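The hybrid sampler can be sketched as a mixture of Metropolis-Hastings kernels: a symmetric local random walk and an independent global proposal whose acceptance ratio keeps the proposal-density correction. Below, a fixed two-component Gaussian mixture stands in for the trained diffusion model (the real method retrains it on the chain during the run); the target, gains, and mixing probability are all illustrative.

```python
import math
import random

random.seed(0)

def log_post(x):
    """Toy bimodal target (unnormalized log-posterior) with modes at +/-2."""
    return math.log(math.exp(-(x - 2.0) ** 2) + math.exp(-(x + 2.0) ** 2))

def global_sample():
    """Stand-in for the diffusion model: equal-weight Gaussians at +/-2."""
    return random.gauss(2.0 if random.random() < 0.5 else -2.0, 1.0)

def global_logpdf(x):
    def n(m):  # unit-variance normal density
        return math.exp(-((x - m) ** 2) / 2.0) / math.sqrt(2.0 * math.pi)
    return math.log(0.5 * n(2.0) + 0.5 * n(-2.0))

def mh_step(x, p_global=0.3, step=0.5):
    if random.random() < p_global:
        # Independence (global) proposal: acceptance keeps q(x)/q(y).
        y = global_sample()
        log_alpha = (log_post(y) - log_post(x)) + (global_logpdf(x) - global_logpdf(y))
    else:
        # Symmetric local random walk: proposal densities cancel.
        y = random.gauss(x, step)
        log_alpha = log_post(y) - log_post(x)
    return y if random.random() < math.exp(min(0.0, log_alpha)) else x

x, chain = 0.0, []
for _ in range(5000):
    x = mh_step(x)
    chain.append(x)
```

A purely local walk started at one mode can take very long to cross to the other; the occasional global proposal is what lets the chain hop between modes, which is the mechanism behind the reduced number of likelihood evaluations.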
PDFs can be studied directly using lattice QCD by evaluating matrix elements of non-local operators. A number of groups are pursuing numerical calculations and investigating possible systematic uncertainties. One systematic that has received less attention is the effect of calculating in a finite spacetime volume. Here we present first attempts to assess the role of the finite volume for spatially non-local operators. We find that these matrix elements may suffer from large finite-volume artifacts and more careful investigation is needed.
Semantic communication (SemCom) aims to enhance the resource efficiency of next-generation networks by transmitting the underlying meaning of messages, focusing on information relevant to the end user. Existing literature on SemCom primarily emphasizes learning the encoder and decoder through end-to-end deep learning frameworks, with the objective of minimizing a task-specific semantic loss function. Beyond its influence on the physical and application layer design, semantic variability across users in multi-user systems enables the design of resource allocation schemes that incorporate user-specific semantic requirements. To this end, a semantic-aware resource allocation scheme is proposed with the objective of maximizing transmission and semantic reliability, ultimately increasing the number of users whose semantic requirements are met. The resulting resource allocation problem is a non-convex mixed-integer nonlinear program (MINLP), which is known to be NP-hard. To make the problem tractable, it is decomposed into a set of sub-problems, each of which is efficiently solved via geometric programming techniques. Finally, simulations demonstrate that the proposed method improves user satisfaction by up to 17.1% compared to state-of-the-art quality-of-experience-aware SemCom methods.
Massive MIMO (Multiple-Input Multiple-Output) is an advanced wireless communication technology that uses a large number of antennas to improve the overall performance of the communication system in terms of capacity and spectral and energy efficiency. The performance of MIMO systems is highly dependent on the quality of channel state information (CSI). Predicting CSI is, therefore, essential for improving communication system performance, particularly in MIMO systems, since it represents key characteristics of a wireless channel, including propagation, fading, scattering, and path loss. This study proposes a foundation model inspired by BERT, called BERT4MIMO, which is specifically designed to process high-dimensional CSI data from massive MIMO systems. BERT4MIMO offers superior performance in reconstructing CSI under varying mobility scenarios and channel conditions through deep learning and attention mechanisms. The experimental results demonstrate the effectiveness of BERT4MIMO in a variety of wireless environments.
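The BERT-style training objective for CSI reconstruction can be sketched without the model itself: entries of the channel matrix are masked, and the loss is computed only on the masked positions. The toy CSI matrix, deterministic mask pattern, and zero-predicting baseline below are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy complex CSI matrix (subcarriers x antennas); shapes are illustrative.
csi = rng.standard_normal((8, 4)) + 1j * rng.standard_normal((8, 4))

# BERT-style pretraining input: hide a subset of CSI entries; the model is
# trained to reconstruct them from the surrounding channel structure.
mask = np.zeros(csi.shape, dtype=bool)
mask[::2, ::2] = True            # deterministic mask pattern for illustration
masked_input = np.where(mask, 0.0, csi)

def reconstruction_nmse(pred, target, mask):
    """Normalized MSE on masked positions only, as in masked-token prediction."""
    err = np.abs(pred[mask] - target[mask]) ** 2
    return float(err.sum() / (np.abs(target[mask]) ** 2).sum())

# A trivial baseline predicting zeros for masked entries scores NMSE = 1;
# a trained model should score well below this.
nmse_zero = reconstruction_nmse(np.zeros_like(csi), csi, mask)
```

Because the loss is restricted to masked entries, the model cannot score well by copying visible inputs, which is the same inductive pressure that makes masked-language-model pretraining work for text.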