University of Plymouth, United Kingdom
Deutsch et al. introduced Quantum Privacy Amplification (QPA), an iterative protocol that purifies noisy entangled qubit pairs, enabling provably secure quantum key distribution over real-world noisy communication channels. This method ensures an eavesdropper's information about the final shared key can be reduced to an arbitrarily low level, even in the presence of initial noise or interference.
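For a sense of how iterated purification drives an eavesdropper's information down, the closely related BBPSSW protocol admits a simple one-parameter recurrence for the fidelity F of a Werner-state pair after a successful round; the Deutsch et al. (DEJMPS) map acts on all Bell-diagonal coefficients and converges faster, but shows the same qualitative behaviour of F increasing toward 1 whenever F > 1/2. The recurrence below is the standard BBPSSW map, shown as an illustration rather than the exact QPA update:

```latex
% BBPSSW fidelity recurrence (illustrative; not the exact DEJMPS/QPA map)
F' = \frac{F^{2} + \tfrac{1}{9}(1-F)^{2}}
         {F^{2} + \tfrac{2}{3}F(1-F) + \tfrac{5}{9}(1-F)^{2}}
```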
Jingyue Liu et al. extend Physics-informed Neural Networks (PINNs) to accurately model and control complex robotic systems by incorporating non-conservative effects and using forward integration to circumvent direct acceleration measurements. Their approach enables the design of provably stable controllers and is validated through successful experimental closed-loop control on a Franka Emika Panda rigid manipulator and a soft manipulator.
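A minimal sketch of the forward-integration idea, under assumptions of our own: a learned acceleration model f_theta(q, qdot, tau) is rolled out in time and fitted against measured positions and velocities only, so no acceleration labels are ever needed. The names, the simple Euler integrator, and the loss are illustrative, not the authors' implementation:

```python
import torch

def rollout(f_theta, q0, qd0, taus, dt):
    """Integrate learned dynamics q'' = f_theta(q, q', tau) forward in time
    (explicit Euler for brevity; a higher-order integrator would typically be used)."""
    q, qd = q0, qd0
    traj_q, traj_qd = [q], [qd]
    for tau in taus:                      # applied torques, one per step
        qdd = f_theta(q, qd, tau)         # learned acceleration
        qd = qd + dt * qdd
        q = q + dt * qd
        traj_q.append(q)
        traj_qd.append(qd)
    return torch.stack(traj_q), torch.stack(traj_qd)

def trajectory_loss(f_theta, q_meas, qd_meas, taus, dt):
    """q_meas, qd_meas: (T+1, dof) measured states; taus: (T, dof) torques.
    Trains against positions/velocities only -- no acceleration measurements."""
    q_pred, qd_pred = rollout(f_theta, q_meas[0], qd_meas[0], taus, dt)
    return ((q_pred - q_meas) ** 2).mean() + ((qd_pred - qd_meas) ** 2).mean()
```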
Ultralight particles, with a mass below the electronvolt scale, exhibit wave-like behavior and have emerged as a compelling dark matter candidate. A particularly intriguing subclass is scalar dark matter, which induces variations in fundamental physical constants. However, detecting such particles becomes highly challenging in the mass range above $10^{-6}\,\text{eV}$, as traditional experiments face severe limitations in response time. In contrast, the matter effect becomes significant in a vast and unexplored parameter space. These effects include (i) a force arising from scattering between ordinary matter and the dark matter wind and (ii) a fifth force between ordinary matter objects induced by the dark matter background. Using the repulsive quadratic scalar-photon interaction as a case study, we develop a unified framework based on quantum mechanical scattering theory to systematically investigate these phenomena across both perturbative and non-perturbative regimes. Our approach not only reproduces prior results obtained through other methodologies but also covers novel regimes with nontrivial features, such as decoherence effects, screening effects, and their combinations. In particular, we highlight one finding related to both scattering and background-induced forces: the descreening effect observed in the non-perturbative region with large incident momentum, which alleviates the decoherence suppression. Furthermore, we discuss current and proposed experiments, including inverse-square-law tests, equivalence principle tests, and deep-space acceleration measurements. Notably, we go beyond the spherical approximation and revisit the MICROSCOPE constraints on the background-induced force in the large-momentum regime, where the decoherence and screening effects interplay. The ultraviolet models realizing the quadratic scalar-photon interaction are also discussed.
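For orientation, a commonly used parametrization of a quadratic scalar-photon coupling (an assumption about conventions, not a quotation from the paper) attaches the scalar squared to the photon kinetic term, so the effective fine-structure constant picks up a shift quadratic in the field; the overall sign of the coupling selects the attractive or repulsive case:

```latex
% One common parametrization (sign and normalization conventions vary)
\mathcal{L} \;\supset\; \frac{\phi^{2}}{\Lambda_{\gamma}^{2}}\,\frac{F_{\mu\nu}F^{\mu\nu}}{4}
\quad\Longrightarrow\quad
\alpha_{\mathrm{eff}}(\phi) \;\simeq\; \alpha\!\left(1 + \frac{\phi^{2}}{\Lambda_{\gamma}^{2}}\right)
```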
The Interdisciplinary Centre for Computer Music Research (ICCMR) at the University of Plymouth developed "You Only Hear Once (YOHO)," an algorithm for audio segmentation and sound event detection that recasts the task as a direct regression problem. Inspired by YOLO from computer vision, this approach speeds up inference and post-processing by roughly 6-14x and 7x respectively, while achieving competitive or superior accuracy on various music-speech and environmental sound datasets.
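A minimal sketch of the "detection as regression" idea: for each output time frame and each sound class, the network predicts a presence score together with normalized start/end offsets, and boundaries are regressed only where the class is actually present. The tensor layout and loss weighting below are assumptions for illustration, not the published YOHO specification:

```python
import torch
import torch.nn.functional as F

def yoho_style_loss(pred, target):
    """pred, target: (batch, frames, classes, 3), last axis = [presence, start, end];
    start/end are offsets normalized to the frame length."""
    presence_p, bounds_p = pred[..., 0], pred[..., 1:]
    presence_t, bounds_t = target[..., 0], target[..., 1:]
    cls_loss = F.binary_cross_entropy_with_logits(presence_p, presence_t)
    # regress boundaries only for frames where the class is actually present
    mask = presence_t.unsqueeze(-1)
    reg_loss = (mask * (bounds_p - bounds_t) ** 2).sum() / mask.sum().clamp(min=1)
    return cls_loss + reg_loss
```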
The Recover framework, developed by researchers at Samsung AI and the University of Plymouth, introduces a neuro-symbolic approach for online failure detection and recovery in robotic systems. This framework integrates symbolic AI's structured reasoning with Large Language Models' flexible planning, achieving 100% accuracy in failure detection through its ontology and successfully recovering from approximately 70% of detected failures in simulated environments.
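A heavily simplified sketch of the neuro-symbolic split described above, under interfaces we assume for illustration: a symbolic layer checks an action's expected postconditions against the observed world state, and only when a mismatch is found is an LLM-backed planner asked for a recovery plan. None of these names or signatures come from the Recover framework itself:

```python
def detect_failure(expected_postconditions, world_state):
    """Symbolic check: every postcondition predicate must hold in the observed state."""
    return [p for p in expected_postconditions if not world_state.get(p, False)]

def recover(action, expected_postconditions, world_state, llm_planner):
    violated = detect_failure(expected_postconditions, world_state)
    if not violated:
        return None  # action succeeded, nothing to recover from
    prompt = (f"Action '{action}' failed; unmet conditions: {violated}. "
              f"Observed state: {world_state}. Propose a recovery plan.")
    return llm_planner(prompt)  # hypothetical LLM-backed planner callable
```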
A quantum computing algorithm for rhythm generation is presented, which aims to expand and explore quantum computing applications in the arts, particularly in music. The algorithm maps quantum random walk trajectories onto a rhythmspace -- a 2D interface that interpolates rhythmic patterns. The methodology consists of three stages. The first stage involves designing quantum computing algorithms and establishing a mapping between the qubit space and the rhythmspace. To minimize circuit depth, a decomposition of a 2D quantum random walk into two 1D quantum random walks is applied. The second stage focuses on biasing the directionality of quantum random walks by introducing classical potential fields, adjusting the probability distribution of the wave function based on the position gradient within these fields. Four potential fields are implemented: a null potential, a linear field, a Gaussian potential, and a Gaussian potential under inertial dynamics. The third stage addresses the sonification of these paths by generating MIDI drum pattern messages and transmitting them to a Digital Audio Workstation (DAW). This work builds upon existing literature that applies quantum computing to simpler qubit spaces with a few positions, extending the formalism to a 2D x-y plane. It serves as a proof of concept for scalable quantum computing-based generative random walk algorithms in music and audio applications. Furthermore, the approach is applicable to generic multidimensional sound spaces, as the algorithms are not strictly constrained to rhythm generation and can be adapted to different musical structures.
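As a toy illustration of two of the ingredients above, the sketch below simulates a 1D coined quantum walk whose coin angle is tilted by the gradient of a classical potential, and composes two independent 1D walks into a 2D distribution over the rhythmspace. Grid size, coin parametrization, and potentials are illustrative assumptions, not the paper's circuit-level construction:

```python
import numpy as np

def walk_1d(n_pos, n_steps, potential, start):
    """Coined discrete-time quantum walk on a ring; the coin angle is tilted
    by the (clipped) gradient of a classical potential."""
    psi = np.zeros((n_pos, 2), dtype=complex)          # amplitudes: (position, coin)
    psi[start] = np.array([1, 1j]) / np.sqrt(2)        # balanced initial coin
    theta = np.pi / 4 - np.clip(np.gradient(potential), -1, 1) * np.pi / 8
    c, s = np.cos(theta), np.sin(theta)
    for _ in range(n_steps):
        new = np.empty_like(psi)
        new[:, 0] = c * psi[:, 0] + s * psi[:, 1]      # position-dependent coin
        new[:, 1] = s * psi[:, 0] - c * psi[:, 1]
        psi[:, 0] = np.roll(new[:, 0], -1)             # coin 0 steps left
        psi[:, 1] = np.roll(new[:, 1], +1)             # coin 1 steps right
    return (np.abs(psi) ** 2).sum(axis=1)              # position distribution

# 2D rhythmspace distribution as the product of two independent 1D walks
vx, vy = np.linspace(0, 1, 16) ** 2, np.abs(np.linspace(-1, 1, 16))
p_xy = np.outer(walk_1d(16, 20, vx, 8), walk_1d(16, 20, vy, 8))
```

Sampling grid cells from p_xy and mapping them to interpolated drum patterns would then yield the MIDI messages sent to the DAW.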
Early diagnosis and intervention for Autism Spectrum Disorder (ASD) have been shown to significantly improve the quality of life of autistic individuals. However, diagnostic methods for ASD rely on assessments of clinical presentation that are prone to bias and can make it challenging to arrive at an early diagnosis. There is a need for objective biomarkers of ASD that can help improve diagnostic accuracy. Deep learning (DL) has achieved outstanding performance in diagnosing diseases and conditions from medical imaging data. Extensive research has been conducted on creating models that classify ASD using resting-state functional Magnetic Resonance Imaging (fMRI) data. However, existing models lack interpretability. This research aims to improve the accuracy and interpretability of ASD diagnosis by creating a DL model that can not only accurately classify ASD but also provide explainable insights into its workings. The dataset used is a preprocessed version of the Autism Brain Imaging Data Exchange (ABIDE) with 884 samples. Our findings show a model that can accurately classify ASD and highlight critical brain regions differing between ASD and typical controls, with potential implications for early diagnosis and understanding of the neural basis of ASD. These findings are validated by studies in the literature that use different datasets and modalities, confirming that the model learned characteristics of ASD rather than dataset-specific patterns. This study advances the field of explainable AI in medical imaging by providing a robust and interpretable model, thereby contributing to a future with objective and reliable ASD diagnostics.
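A generic sketch of how a connectivity-based fMRI classifier can be probed for region-level explanations, assuming ROI time series as input; this is not the study's architecture or its explainability method, just one standard combination of functional-connectivity features, a small classifier, and gradient saliency:

```python
import torch
import torch.nn as nn

def connectivity_features(roi_timeseries):
    """roi_timeseries: (rois, timepoints) -> upper-triangular correlation values."""
    fc = torch.corrcoef(roi_timeseries)
    idx = torch.triu_indices(fc.shape[0], fc.shape[1], offset=1)
    return fc[idx[0], idx[1]]

class Classifier(nn.Module):
    def __init__(self, n_features):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_features, 128), nn.ReLU(),
                                 nn.Linear(128, 1))
    def forward(self, x):
        return self.net(x)

def saliency(model, features):
    """Gradient of the ASD logit w.r.t. each connection: which edges matter most."""
    x = features.clone().requires_grad_(True)
    model(x.unsqueeze(0)).sum().backward()
    return x.grad.abs()
```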
Muons decay in vacuum mainly via the leptonic channel to an electron, an electron antineutrino and a muon neutrino. Previous investigations have concluded that muon decay can only be significantly altered in a strong electromagnetic field when the muonic strong-field parameter is of order unity, which is far beyond the reach of lab-based experiments at current and planned facilities. In this letter, an alternative mechanism is presented in which a laser pulse affects the vacuum decay rate of a muon outside the pulse. Quantum interference between the muon decaying with or without interacting with the pulse generates fringes in the electron momentum spectra and can increase the muon lifetime by up to a factor of 2. The required parameters to observe this effect are available in experiments today.
Surgical masks have played a crucial role in healthcare facilities to protect against respiratory and infectious diseases, particularly during the COVID-19 pandemic. However, the synthetic fibers, mainly made of polypropylene, used in their production may adversely affect the environment and human health. Recent studies have confirmed the presence of microplastics and fibers in human lungs and have related these synthetic particles with the occurrence of pulmonary ground glass nodules. Using a piston system to simulate human breathing, this study investigates the role of surgical masks as a direct source of inhalation of microplastics. Results reveal the release of particles of sizes ranging from nanometers (300 nm) to millimeters (~2 mm) during normal breathing conditions, raising concerns about the potential health risks. Notably, large visible particles (> 1 mm) were observed to be ejected from masks with limited wear after only a few breathing cycles. Given the widespread use of masks by healthcare workers and the potential future need for mask usage by the general population during seasonal infectious diseases or new pandemics, developing face masks using safe materials for both users and the environment is imperative.
The paper introduces Quantum Brain Networks (QBraiNs) as an emerging interdisciplinary field, proposing a framework for connecting human brains to quantum computers through neurotechnology and artificial intelligence. It asserts the technical feasibility of this concept by synthesizing existing advancements and outlines a range of transformative applications across science, technology, and arts.
The modern digital world is highly heterogeneous, encompassing a wide variety of communications, devices, and services. This interconnectedness generates, synchronises, stores, and presents digital information in multidimensional, complex formats, often fragmented across multiple sources. When linked to misuse, this digital information becomes vital digital evidence. Integrating and harmonising these diverse formats into a unified system is crucial for comprehensively understanding evidence and its relationships. However, existing approaches face challenges that limit investigators' ability to query heterogeneous evidence across large datasets. This paper presents a novel approach in the form of a modern unified data graph. The proposed approach aims to seamlessly integrate, harmonise, and unify evidence data, enabling cross-platform interoperability, efficient data queries, and improved digital investigation performance. To demonstrate its efficacy, a case study is conducted, highlighting the benefits of the proposed approach and showcasing its effectiveness in enabling the interoperability required for advanced analytics in digital investigations.
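A toy illustration of the underlying idea, not the paper's schema: records from different evidence sources are harmonised into one property-graph vocabulary, after which a single cross-source query can traverse relationships that were previously fragmented. The node/edge labels and the networkx choice are assumptions:

```python
import networkx as nx

g = nx.MultiDiGraph()

# Harmonise records from different sources into one node/edge vocabulary
g.add_node("person:alice", kind="person")
g.add_node("phone:447700900001", kind="device", source="mobile_extraction")
g.add_node("acct:alice@mail.test", kind="account", source="cloud_provider")
g.add_edge("person:alice", "phone:447700900001", rel="uses")
g.add_edge("person:alice", "acct:alice@mail.test", rel="owns")
g.add_edge("acct:alice@mail.test", "phone:447700900001",
           rel="logged_in_from", timestamp="2024-03-01T10:15:00Z")

# Cross-source query: everything reachable from a person of interest
related = nx.descendants(g, "person:alice")
print(sorted(related))
```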
Researchers developed HOD, a Hyperbolic metric learning framework for visual Out-Of-Distribution (OOD) detection that projects feature embeddings into hyperbolic space. The framework demonstrated improved OOD detection performance, reducing the average False Positive Rate at 95% recall on CIFAR-100 from 49.8% to 28.5%, and maintained effectiveness even with low-dimensional embeddings.
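A sketch of the two standard operations such a framework relies on, using the usual Poincaré-ball formulas: the exponential map at the origin to project Euclidean embeddings onto the ball, and the hyperbolic geodesic distance. The OOD score shown (distance to the nearest in-distribution prototype) is an illustrative choice, not necessarily HOD's scoring rule:

```python
import torch

def exp_map_origin(v, c=1.0, eps=1e-7):
    """Project Euclidean embeddings onto the Poincare ball (exponential map at 0)."""
    norm = v.norm(dim=-1, keepdim=True).clamp(min=eps)
    return torch.tanh(c ** 0.5 * norm) * v / (c ** 0.5 * norm)

def poincare_distance(x, y, c=1.0, eps=1e-7):
    """Geodesic distance on the Poincare ball with curvature -c."""
    sq = ((x - y) ** 2).sum(-1)
    den = (1 - c * (x ** 2).sum(-1)) * (1 - c * (y ** 2).sum(-1))
    arg = 1 + 2 * c * sq / den.clamp(min=eps)
    return torch.acosh(arg.clamp(min=1 + eps)) / c ** 0.5

def ood_score(embedding, prototypes, c=1.0):
    """Higher = more likely OOD. prototypes: class means already on the ball."""
    z = exp_map_origin(embedding, c)
    return poincare_distance(z.unsqueeze(0), prototypes, c).min()
```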
A first-order, confinement/deconfinement phase transition appears in the finite temperature behavior of many non-Abelian gauge theories. These theories play an important role in proposals for completion of the Standard Model of particle physics, hence the phase transition might have occurred in the early stages of evolution of our universe, leaving behind a detectable relic stochastic background of gravitational waves. Lattice field theory studies implementing the density of states method have the potential to provide detailed information about the phase transition, and measure the parameters determining the gravitational-wave power spectrum, by overcoming some of the challenges faced by importance-sampling methods. We assess this potential for a representative choice of Yang-Mills theory with $Sp(4)$ gauge group. We characterize its finite-temperature, first-order phase transition, in the thermodynamic (infinite volume) limit, for two different choices of number of sites in the compact time direction, hence taking the first steps towards the continuum limit extrapolation. We demonstrate the persistence of non-perturbative phenomena associated with the first-order phase transition: coexistence of states, metastability, latent heat, surface tension. We find consistency between several different strategies for the extraction of the volume-dependent critical coupling, hence assessing the size of systematic effects. We also determine the minimum ratio between the spatial and time extent of the lattice that allows one to identify the contribution of the surface tension to the free energy. We observe that this ratio scales non-trivially with the time extent of the lattice, and comment on the implications for future high-precision numerical studies.
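In density-of-states approaches of the kind referenced here, thermodynamic quantities are reconstructed from the density of states of the lattice action rather than sampled directly; schematically (conventions for the action and the sign of the coupling vary between formulations, so the expressions below are generic, not the paper's):

```latex
% Schematic density-of-states reconstruction (conventions vary)
\rho(E) = \int \mathcal{D}U \,\delta\!\left(S[U]-E\right),
\qquad
Z(\beta) = \int \mathrm{d}E \,\rho(E)\, e^{-\beta E},
\qquad
\langle \mathcal{O}(E)\rangle_\beta
  = \frac{1}{Z(\beta)}\int \mathrm{d}E\, \mathcal{O}(E)\,\rho(E)\,e^{-\beta E}
```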
Generalized age feature extraction is crucial for age-related facial analysis tasks, such as age estimation and age-invariant face recognition (AIFR). Despite the recent successes of models in homogeneous-dataset experiments, their performance drops significantly in cross-dataset evaluations. Most of these models fail to extract generalized age features as they only attempt to map extracted features to training age labels directly, without explicitly modeling the natural ordinal progression of aging. In this paper, we propose Order-Enhanced Contrastive Learning (OrdCon), a novel contrastive learning framework designed explicitly for ordinal attributes like age. Specifically, to extract generalized features, OrdCon aligns the direction vector between two features with either the natural aging direction or its reverse to model the ordinal process of aging. To further enhance generalizability, OrdCon leverages a novel soft proxy matching loss as a second contrastive objective, ensuring that features are positioned around the center of each age cluster with minimal intra-class variance and proportionally away from other clusters. By modeling the aging process, the framework enhances generalizability, improving the alignment of samples from the same class and reducing the divergence of direction vectors. We demonstrate that our proposed method achieves comparable results to state-of-the-art methods on various benchmark datasets in homogeneous-dataset evaluations for both age estimation and AIFR. In cross-dataset experiments, OrdCon outperforms other methods, reducing the mean absolute error by approximately 1.38 on average for the age estimation task and boosting the average accuracy for AIFR by 1.87%.
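A rough sketch of the directional idea only: for a pair of samples with different ages, the displacement between their embeddings is pushed to align with a global aging direction, or its reverse, depending on which sample is older. The soft proxy matching term and all of the paper's hyperparameters are omitted, and every name below is an assumption:

```python
import torch
import torch.nn.functional as F

def ordinal_direction_loss(feat_a, feat_b, age_a, age_b, aging_dir):
    """Align feat_b - feat_a with the aging direction when age_b > age_a,
    and with its reverse when age_b < age_a (equal-age pairs would be excluded
    in practice, since sign() is zero there)."""
    disp = F.normalize(feat_b - feat_a, dim=-1)
    target = F.normalize(aging_dir, dim=-1) * torch.sign(age_b - age_a).unsqueeze(-1)
    return (1 - (disp * target).sum(-1)).mean()   # 1 - cosine similarity
```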
Background: Many attempts to validate gait pipelines that process sensor data to detect gait events have focused on the detection of initial contacts only, in supervised settings, using a single sensor. Objective: To evaluate the performance of a gait pipeline in detecting initial/final contacts using a step detection algorithm adaptive to different test settings, smartphone wear locations, and gait impairment levels. Methods: In GaitLab (ISRCTN15993728), healthy controls (HC) and people with multiple sclerosis (PwMS; Expanded Disability Status Scale 0.0-6.5) performed supervised Two-Minute Walk Tests (2MWT; structured in-lab overground and treadmill 2MWT) during two on-site visits while carrying six smartphones, and unsupervised walking activities (structured and unstructured real-world walking) daily for 10-14 days using a single smartphone. Reference gait data were collected with a motion capture system or Gait Up sensors. The pipeline's performance in detecting initial/final contacts was evaluated through F1 scores and absolute temporal error with respect to the reference measurement systems. Results: We studied 35 HC and 93 PwMS. Initial/final contacts were accurately detected across all smartphone wear locations. Median F1 scores for initial/final contacts on in-lab 2MWT were >=98.2%/96.5% in HC and >=98.5%/97.7% in PwMS. F1 scores remained high on structured (HC: 100% [0.3%]/100% [0.2%]; PwMS: 99.5% [1.9%]/99.4% [2.5%]) and unstructured real-world walking (HC: 97.8% [2.6%]/97.8% [2.8%]; PwMS: 94.4% [6.2%]/94.0% [6.5%]). Median temporal errors were <=0.08 s. Neither age, sex, disease severity, walking aid use, nor setting (outdoor/indoor) impacted pipeline performance (all p>0.05). Conclusion: This gait pipeline accurately and consistently detects initial and final contacts in PwMS across different smartphone locations and environments, highlighting its potential for real-world gait assessment.
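The evaluation itself is straightforward to reproduce in outline: detected contact times are matched to reference contact times within a tolerance window, and F1 plus the absolute timing error of matched events are reported. The greedy matching and the tolerance value below are placeholders of ours, not the study's protocol:

```python
def event_detection_f1(detected, reference, tol=0.10):
    """Greedy one-to-one matching of event times (seconds) within +/- tol."""
    detected, reference = sorted(detected), sorted(reference)
    used, errors = set(), []
    for d in detected:
        best = min((r for r in reference if r not in used and abs(r - d) <= tol),
                   key=lambda r: abs(r - d), default=None)
        if best is not None:
            used.add(best)
            errors.append(abs(best - d))
    tp = len(errors)
    precision = tp / len(detected) if detected else 0.0
    recall = tp / len(reference) if reference else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    mean_abs_err = sum(errors) / tp if tp else float("nan")
    return f1, mean_abs_err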
Motivated by the recently-established connection between Jarzynski's equality and the theoretical framework of Stochastic Normalizing Flows, we investigate a protocol relying on out-of-equilibrium lattice Monte Carlo simulations to mitigate the infamous computational problem of topological freezing. We test our proposal on 2d $\mathrm{CP}^{N-1}$ models and compare our results with those obtained adopting the Parallel Tempering on Boundary Conditions proposed by M. Hasenbusch, obtaining comparable performances. Our work thus sets the stage for future applications combining our Monte Carlo setup with machine learning techniques.
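The relation being exploited is Jarzynski's equality, which ties the exponential average of the work done along out-of-equilibrium trajectories to the free-energy difference, i.e. the ratio of partition functions, between the initial and final ensembles:

```latex
\left\langle e^{-\beta W} \right\rangle_{\mathrm{forward}}
  = e^{-\beta\,\Delta F}
  = \frac{Z_{\mathrm{final}}}{Z_{\mathrm{initial}}}
```

Here W is the work accumulated as the parameters of the lattice action are driven from their initial to their final values over a finite number of Monte Carlo steps.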
Multi-source unsupervised domain adaptation aims to leverage labeled data from multiple source domains for training a machine learning model to generalize well on a target domain without labels. Source domain selection plays a crucial role in determining the model's performance, and relies on the similarities between source and target domains. Nonetheless, existing work on source domain selection often involves heavyweight computational procedures, especially when dealing with numerous source domains and the need to identify the best ones among them. In this paper, we introduce a framework for gradual fine-tuning (GFT) of machine learning models on multiple source domains. We represent multiple source domains as an undirected weighted graph. We then give a new generalization error bound for GFT along any path within the graph, which is used to determine the optimal path corresponding to the optimal training order. With this formulation, we introduce three lightweight graph-routing strategies that tend to minimize the error bound. Our best strategy improves accuracy by 2.3% over the state-of-the-art on the Natural Language Inference (NLI) task and achieves competitive performance on the Sentiment Analysis (SA) task, including a 3.9% improvement on a more diverse subset of data we use for SA.
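A toy sketch of the routing step under inputs we assume here: pairwise domain-similarity scores become edge weights (1 - similarity), a low-cost path through the source graph is chosen so that fine-tuning moves gradually toward the target domain, and the model is then trained along that order. This heuristic is not one of the paper's three strategies and does not implement its error bound:

```python
import networkx as nx

def gft_order(similarity, sources, target):
    """similarity: dict[(a, b)] -> score in [0, 1] for domain pairs."""
    def sim_to_target(s):
        return similarity.get((s, target), similarity.get((target, s), 0.0))

    g = nx.Graph()
    for (a, b), s in similarity.items():
        g.add_edge(a, b, weight=1.0 - s)          # dissimilar pairs cost more
    # route from the source least similar to the target toward the most similar one
    start = min(sources, key=sim_to_target)
    end = max(sources, key=sim_to_target)
    path = nx.shortest_path(g, start, end, weight="weight")
    return [d for d in path if d in sources]      # fine-tune in this order, then adapt to target
```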
Efficient password cracking is a critical aspect of digital forensics, enabling investigators to decrypt protected content during criminal investigations. Traditional password cracking methods, including brute-force, dictionary, and rule-based attacks, face challenges in balancing efficiency with increasing computational complexity. This study explores rule-based optimisation strategies to enhance the effectiveness of password cracking while minimising resource consumption. By analysing publicly available password datasets, we propose an optimised rule set that reduces computational iterations by approximately 40%, significantly improving the speed of password recovery. Additionally, the impact of national password recommendations was examined, specifically the UK National Cyber Security Centre's three-word password guideline, on password security and forensic recovery. Through user-generated password surveys, we evaluate the crackability of three-word passwords using dictionaries with varying proportions of common words. Results indicate that while three-word passwords provide improved memorability and usability, they remain vulnerable when common word combinations are used, with up to 77.5% of passwords cracked using a 30% common-word dictionary subset. The study underscores the importance of dynamic password cracking strategies that account for evolving user behaviours and policy-driven password structures. The findings contribute to both forensic efficiency and cyber security awareness, highlighting the dual impact of password policies on security and investigative capabilities. Future work will focus on refining rule-based cracking techniques and expanding research on password composition trends.
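The dictionary-coverage effect is easy to reason about: a pure dictionary-combination attack cracks a three-word password only if all three words appear in the attack dictionary, so with per-word coverage p the baseline crack rate is roughly p cubed, and observed rates are higher because users favour common words. A toy estimator, with a made-up input format (words pre-split into tuples):

```python
def three_word_crack_rate(passwords, attack_dictionary):
    """passwords: iterable of (word1, word2, word3) tuples; a password counts as
    crackable by dictionary combination only if all three words are in the dictionary."""
    dictionary = {w.lower() for w in attack_dictionary}
    passwords = list(passwords)
    cracked = sum(all(w.lower() in dictionary for w in pw) for pw in passwords)
    return cracked / len(passwords) if passwords else 0.0

# Independence baseline: per-word coverage p gives roughly p**3, e.g. 0.3**3 = 2.7%;
# the much higher observed rate reflects users' preference for common words.
```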
We present the findings of "The Alzheimer's Disease Prediction Of Longitudinal Evolution" (TADPOLE) Challenge, which compared the performance of 92 algorithms from 33 international teams at predicting the future trajectory of 219 individuals at risk of Alzheimer's disease. Challenge participants were required to make a prediction, for each month of a 5-year future time period, of three key outcomes: clinical diagnosis, Alzheimer's Disease Assessment Scale Cognitive Subdomain (ADAS-Cog13), and total volume of the ventricles. The methods used by challenge participants included multivariate linear regression, machine learning methods such as support vector machines and deep neural networks, as well as disease progression models. No single submission was best at predicting all three outcomes. For clinical diagnosis and ventricle volume prediction, the best algorithms strongly outperform simple baselines in predictive ability. However, for ADAS-Cog13 no single submitted prediction method was significantly better than random guesswork. Two ensemble methods, based on taking the mean and median over all predictions, obtained top scores on almost all tasks. Better-than-average performance at diagnosis prediction was generally associated with the additional inclusion of features from cerebrospinal fluid (CSF) samples and diffusion tensor imaging (DTI). On the other hand, better performance at ventricle volume prediction was associated with inclusion of summary statistics, such as the slope or maxima/minima of biomarkers. TADPOLE's unique results suggest that current prediction algorithms provide sufficient accuracy to exploit biomarkers related to clinical diagnosis and ventricle volume for cohort refinement in clinical trials for Alzheimer's disease. However, the results call into question the use of cognitive test scores for patient selection and as a primary endpoint in clinical trials.
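The two top-scoring ensembles are conceptually simple, along the lines of the following sketch; the forecast array layout and names are assumptions, and in practice diagnosis probabilities would be renormalized after aggregation:

```python
import numpy as np

def ensemble_forecasts(forecasts, method="median"):
    """forecasts: array of shape (n_teams, n_subjects, n_months, n_outcomes)
    holding each team's monthly predictions; returns one consensus forecast."""
    agg = np.median if method == "median" else np.mean
    return agg(forecasts, axis=0)
```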
We study the $\theta$-dependence of the string tension and of the lightest glueball mass in four-dimensional $\mathrm{SU}(N)$ Yang-Mills theories. More precisely, we focus on the coefficients parametrizing the $\mathcal{O}(\theta^2)$ dependence of these quantities, which we investigate by means of numerical simulations of the lattice-discretized theory, carried out using imaginary values of the $\theta$ parameter. Topological freezing at large $N$ is avoided using the Parallel Tempering on Boundary Conditions algorithm. We provide controlled continuum extrapolations of such coefficients in the $N=3$ case, and we report the results obtained on two fairly fine lattice spacings for $N=6$.
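The quantities being targeted are the leading coefficients of the small-$\theta$ expansion of the string tension and of the glueball mass; at imaginary $\theta = i\theta_I$ the path-integral measure stays real, so these coefficients can be extracted from fits in $\theta_I$. The notation below is generic, not necessarily the paper's:

```latex
\sigma(\theta) = \sigma(0)\left[1 + s_2\,\theta^{2} + \mathcal{O}(\theta^{4})\right],
\qquad
m_{0^{++}}(\theta) = m_{0^{++}}(0)\left[1 + g_2\,\theta^{2} + \mathcal{O}(\theta^{4})\right],
\qquad \theta = i\,\theta_I
```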