Universidad de Sevilla
Quantum computing is rapidly progressing from theoretical promise to practical implementation, offering significant computational advantages for tasks in optimization, simulation, cryptography, and machine learning. However, its integration into real-world software systems remains constrained by hardware fragility, platform heterogeneity, and the absence of robust software engineering practices. This paper introduces Service-Oriented Quantum (SOQ), a novel paradigm that reimagines quantum software systems through the lens of classical service-oriented computing. Unlike prior approaches such as Quantum Service-Oriented Computing (QSOC), which treat quantum capabilities as auxiliary components within classical systems, SOQ positions quantum services as autonomous, composable, and interoperable entities. We define the foundational principles of SOQ, propose a layered technology stack to support its realization, and identify the key research and engineering challenges that must be addressed, including interoperability, hybridity, pricing models, service abstractions, and workforce development. This approach is of vital importance for the advancement of quantum technology because it enables the scalable, modular, and interoperable integration of quantum computing into real-world software systems, without relying on a dedicated classical environment to manage quantum processing.
Neural network compression techniques typically require expensive fine-tuning or search procedures, rendering them impractical on commodity hardware. Inspired by recent LLM compression research, we present a general activation-aware factorization framework that can be applied to a broad range of layers. Moreover, we introduce a scalable budgeted rank allocator that allows flexible control over compression targets (e.g., retaining 50% of parameters) with no overhead. Together, these components form BALF, an efficient pipeline for compressing models without fine-tuning. We demonstrate its effectiveness across multiple scales and architectures, from ResNet-20 on CIFAR-10 to ResNeXt-101 and vision transformers on ImageNet, and show that it achieves excellent results in the fine-tuning-free regime. For instance, BALF reduces FLOPs on ResNeXt-101 by 45% with only a 1-percentage-point top-1 accuracy drop.
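As a rough illustration of the kind of activation-aware factorization BALF builds on (a minimal sketch, not the paper's algorithm; the per-channel activation scales and the fixed rank below are our own assumptions), a linear layer's weight can be factored with an activation-weighted SVD:

```python
import torch

def activation_aware_factorize(weight: torch.Tensor, act_scale: torch.Tensor, rank: int):
    """Factor an (out x in) weight into two low-rank factors, weighting the SVD by
    per-input-channel activation scales so that high-activity channels are
    reconstructed more faithfully. `act_scale` (length `in`) is assumed positive."""
    S = torch.diag(act_scale)                       # scale input channels by typical activation magnitude
    U, sigma, Vt = torch.linalg.svd(weight @ S, full_matrices=False)
    A = U[:, :rank] * sigma[:rank]                  # (out x rank)
    B = Vt[:rank] / act_scale                       # (rank x in), undo the channel scaling
    return A, B                                     # weight ≈ A @ B

# Toy usage: factor a random layer to half its maximum rank.
W = torch.randn(256, 512)
scales = torch.rand(512) + 0.5                      # hypothetical per-channel activation RMS
A, B = activation_aware_factorize(W, scales, rank=128)
print(torch.linalg.matrix_norm(W - A @ B) / torch.linalg.matrix_norm(W))
```

In BALF the per-layer rank would instead be chosen by the budgeted rank allocator so that the whole model meets a global parameter or FLOP target.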
Sakowska et al. extend the Stellar Stream Legacy Survey to dwarf galaxies, identifying 20 accretion features and observing that 5.07% of isolated dwarfs within the DES footprint show such features, a frequency lower than for massive galaxies. This provides the first statistically motivated observational constraints on hierarchical mass assembly in the low-mass regime.
We present the first results from SPHINX-MHD, a suite of cosmological radiation-magnetohydrodynamics simulations designed to study the impact of primordial magnetic fields (PMFs) on galaxy formation and the evolution of the intergalactic medium during the epoch of reionization. The simulations are among the first to employ multi-frequency, on-the-fly radiation transfer and constrained transport ideal MHD in a cosmological context to simultaneously model the inhomogeneous process of reionization as well as the growth of PMFs. We run a series of $(5\,\text{cMpc})^3$ cosmological volumes, varying both the strength of the seed magnetic field ($B_0$) and its spectral index ($n_B$). We find that PMFs that have $n_B > -0.562\log_{10}\left(\frac{B_0}{1\,\mathrm{nG}}\right) - 3.35$ produce electron optical depths ($\tau_e$) that are inconsistent with CMB constraints due to the unrealistically early collapse of low-mass dwarf galaxies. For $n_B \geq -2.9$, our constraints are considerably tighter than the $\sim$nG constraints from Planck. PMFs that do not satisfy our constraints have little impact on the reionization history or the shape of the UV luminosity function. Likewise, detecting changes in the Ly$\alpha$ forest due to PMFs will be challenging because photoionisation and photoheating efficiently smooth the density field. However, we find that the first absorption feature in the global 21cm signal is a sensitive indicator of the properties of the PMFs, even for those that satisfy our $\tau_e$ constraint. Furthermore, strong PMFs can marginally increase the escape of LyC photons by up to 25% and shrink the effective radii of galaxies by $\sim 44\%$, which could increase the completeness fraction of galaxy surveys. Finally, our simulations show that surveys with a magnitude limit of $M_{\rm UV,1500} = -13$ can probe the sources that provide the majority of photons for reionization out to $z = 12$.
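For a concrete reading of the quoted exclusion curve (our own arithmetic plugging into the formula above, not an additional result from the paper), a seed field of $B_0 = 0.3\,\mathrm{nG}$ gives
$$ n_B^{\rm crit} = -0.562\,\log_{10}\!\left(\frac{0.3\,\mathrm{nG}}{1\,\mathrm{nG}}\right) - 3.35 \approx -3.06, $$
so at that field strength any spectral index $n_B > -3.06$ would be excluded by the $\tau_e$ constraint.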
Traditional methods for solving physical equations in curved spaces, especially in fluid mechanics and general relativity, rely heavily on the use of Christoffel symbols. These symbols provide the necessary corrections to account for curvature in differential geometries but lead to significant computational complexity, particularly in numerical simulations. In this paper, we propose a novel, simplified approach that obviates the need for Christoffel symbols by means of symbolic programming and advanced numerical methods. Our approach is based on defining a symbolic mapping between Euclidean space and curved coordinate systems, enabling the transformation of spatial and temporal derivatives through Jacobians and their inverses. This eliminates the necessity of using Christoffel symbols for defining local bases and tensors, allowing for the direct application of physical laws in Cartesian coordinates even when solving problems in curved spaces. We demonstrate the robustness and flexibility of our method through several examples, including the derivation of the Navier-Stokes equations in cylindrical coordinates, the modeling of complex flows in bent cylindrical tubes, and the breakup of viscoelastic fluid threads. These examples highlight how our method simplifies the numerical formulation while maintaining accuracy and efficiency. Additionally, we explore how these advancements benefit free-surface flows, where mapping physical 3D domains to a simpler computational domain is essential for solving moving boundary problems.
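The core idea, transforming derivatives through the Jacobian of a symbolic coordinate map instead of through Christoffel symbols, can be sketched in a few lines of computer algebra (an illustrative SymPy sketch for cylindrical coordinates; the variable names and setup are ours, not the paper's implementation):

```python
import sympy as sp

# Curvilinear (cylindrical) coordinates and the symbolic map to Cartesian space.
r, th, z = sp.symbols('r theta z', positive=True)
q = sp.Matrix([r, th, z])                        # curvilinear coordinates
X = sp.Matrix([r*sp.cos(th), r*sp.sin(th), z])   # Cartesian image of the map

J = X.jacobian(q)                                # Jacobian dX/dq
Jinv = sp.simplify(J.inv())                      # its inverse, valid for r > 0

# Cartesian gradient of a scalar field f(r, theta, z), expressed in the
# curvilinear variables: grad_x f = (J^{-1})^T grad_q f, no Christoffel symbols.
f = sp.Function('f')(r, th, z)
grad_q = sp.Matrix([sp.diff(f, v) for v in (r, th, z)])
grad_x = sp.simplify(Jinv.T * grad_q)
print(grad_x)
```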
Neurotechnologies are transforming how we measure, interpret, and modulate brain-body interactions, integrating real-time sensing, computation, and stimulation to enable precise physiological control. They hold transformative potential across clinical and non-clinical domains, from treating disorders to enhancing cognition and performance. Realizing this potential requires navigating complex, interdisciplinary challenges spanning neuroscience, materials science, device engineering, signal processing, computational modelling, and regulatory and ethical frameworks. This Perspective presents a strategic roadmap for neurotechnology development, created by early-career researchers, highlighting their role at the intersection of disciplines and their capacity to bridge traditional silos. We identify five cross-cutting trade-offs that constrain progress across functionality, scalability, adaptability, and translatability, and illustrate how technical domains influence their resolution. Rather than a domain-specific review, we focus on shared challenges and strategic opportunities that transcend disciplines. We propose a unified framework for collaborative innovation and education, highlight ethical and regulatory priorities, and outline a timeline for overcoming key bottlenecks. By aligning technical development with translational and societal needs, this roadmap aims to accelerate equitable, effective, and future-ready adaptive neurotechnologies, guiding coordinated efforts across the global research and innovation community.
We consider overdamped physical systems evolving under a feedback-controlled fluctuating potential and in contact with a thermal bath at temperature $T$. A Markovian description of the dynamics, which keeps only the last value of the control action, is advantageous for the entropy balance, from both the theoretical and the practical side. Novel second-law equalities and bounds for the extractable work are obtained, the latter being both tighter and easier to evaluate than those in the literature based on the whole chain of controller actions. The Markovian framework also allows us to prove that the bound for the extractable work that incorporates the unavailable information is saturated in a wide class of physical systems, for error-free measurements. These results are illustrated in a model system. For imperfect measurements, there is an interval of measurement uncertainty, including the point at which work ceases to be extracted, over which the new Markovian bound is tighter than the unavailable-information bound.
This paper introduces a broad class of Mirror Descent (MD) and Generalized Exponentiated Gradient (GEG) algorithms derived from trace-form entropies defined via deformed logarithms. Leveraging these generalized entropies yields MD \& GEG algorithms with improved convergence behavior, robustness to vanishing and exploding gradients, and inherent adaptability to non-Euclidean geometries through mirror maps. We establish deep connections between these methods and Amari's natural gradient, revealing a unified geometric foundation for additive, multiplicative, and natural gradient updates. Focusing on the Tsallis, Kaniadakis, Sharma--Taneja--Mittal, and Kaniadakis--Lissia--Scarfone entropy families, we show that each entropy induces a distinct Riemannian metric on the parameter space, leading to GEG algorithms that preserve the natural statistical geometry. The tunable parameters of deformed logarithms enable adaptive geometric selection, providing enhanced robustness and convergence over classical Euclidean optimization. Overall, our framework unifies key first-order MD optimization methods under a single information-geometric perspective based on generalized Bregman divergences, where the choice of entropy determines the underlying metric and dual geometric structure.
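To make the update rule concrete, here is a minimal sketch of a generalized exponentiated-gradient step built from the Tsallis q-logarithm mirror map (an illustration under our own toy setup, not the paper's exact parameterization or entropy families):

```python
import numpy as np

def ln_q(x, q):
    """Tsallis q-deformed logarithm; reduces to the natural log as q -> 1."""
    return np.log(x) if np.isclose(q, 1.0) else (x ** (1.0 - q) - 1.0) / (1.0 - q)

def exp_q(x, q):
    """Tsallis q-deformed exponential, the inverse of ln_q on its range."""
    if np.isclose(q, 1.0):
        return np.exp(x)
    return np.maximum(1.0 + (1.0 - q) * x, 0.0) ** (1.0 / (1.0 - q))

def geg_step(w, grad, lr=0.1, q=1.5):
    """One generalized exponentiated-gradient step: apply the mirror map (ln_q),
    take a gradient step in the dual space, and map back with exp_q."""
    return exp_q(ln_q(w, q) - lr * grad, q)

# Toy usage: drive positive weights to reduce a linear loss <c, w>.
w = np.ones(4)
c = np.array([0.5, -0.2, 0.1, 0.3])
for _ in range(50):
    w = geg_step(w, c)
print(w)
```

For q close to 1 this recovers the classical multiplicative exponentiated-gradient update, while other deformed logarithms (Kaniadakis, Sharma-Taneja-Mittal, etc.) would slot into the same mirror-map template.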
Divergences are fundamental to the information criteria that underpin most signal processing algorithms. The alpha-beta family of divergences, designed for non-negative data, offers a versatile framework that parameterizes and continuously interpolates several separable divergences found in the existing literature. This work extends the definition of alpha-beta divergences to accommodate complex data, specifically when the arguments of the divergence are complex vectors. This novel formulation is designed in such a way that, by setting the divergence hyperparameters to unity, it particularizes to the well-known Euclidean and Mahalanobis squared distances. Other choices of hyperparameters yield practical separable and non-separable extensions of several classical divergences. In the context of the problem of approximating a complex random vector, the centroid obtained by optimizing the alpha-beta mean distortion has a closed-form expression, whose interpretation sheds light on the distinct roles of the divergence hyperparameters. These contributions may have wide potential applicability, as there are many signal processing domains in which the underlying data are inherently complex.
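For reference, the classical alpha-beta divergence for non-negative vectors that this work generalizes takes the separable form (for $\alpha, \beta, \alpha+\beta \neq 0$; the remaining hyperparameter values are defined by continuity):
$$ D_{AB}^{(\alpha,\beta)}(\mathbf{p}\,\|\,\mathbf{q}) = -\frac{1}{\alpha\beta}\sum_i\left(p_i^{\alpha} q_i^{\beta} - \frac{\alpha}{\alpha+\beta}\,p_i^{\alpha+\beta} - \frac{\beta}{\alpha+\beta}\,q_i^{\alpha+\beta}\right). $$
The complex-vector formulation introduced here extends this family so that the unit-hyperparameter case recovers the Euclidean and Mahalanobis squared distances.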
Motivated by the application of model predictive control (MPC) to motion planning for autonomous mobile robots, we study a form of output-tracking MPC for non-holonomic systems with non-convex constraints. Although the advantages of using MPC for motion planning have been demonstrated in several papers, most of the available fundamental literature on output-tracking MPC assumes, often implicitly, that the model is holonomic and that the state or output constraints are convex. Thus, in application-oriented publications, empirical results dominate, and the topic of proving completeness, in particular under which assumptions the target is always reached, has received comparatively little attention. To address this gap, we present a novel MPC formulation that guarantees convergence to the desired target under realistic assumptions, which can be verified in relevant real-world scenarios.
Loophole-free violations of Bell inequalities imply that at least one of the assumptions behind local hidden-variable theories must fail. Here, we show that, if only one fails, then it has to fail completely, thereby excluding models that partially constrain freedom of choice, allow partial retrocausal influences, or allow partial instantaneous actions at a distance. Specifically, we show that (i) any hidden-variable theory with outcome independence (OI) and arbitrary joint relaxation of measurement independence (MI) and parameter independence (PI) can be experimentally excluded in a Bell-like experiment with many settings on high-dimensional entangled states, and (ii) any hidden-variable theory with MI, PI, and arbitrary relaxation of OI can be excluded in a Bell-like experiment with many settings on qubit-qubit entangled states.
Data quality is crucial for the successful training, generalization, and performance of machine learning models. We propose to measure the quality of a subset with respect to the dataset it represents, using topological data analysis techniques. Specifically, we define the persistence matching diagram, a topological invariant derived from combining embeddings with persistent homology. We provide an algorithm to compute it using minimum spanning trees. The invariant also allows us to understand whether the subset "represents well" the clusters from the larger dataset, and we use it to estimate bounds for the Hausdorff distance between the subset and the complete dataset. In particular, this approach enables us to explain why the chosen subset is likely to result in poor performance of a supervised learning model.
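The minimum-spanning-tree route mentioned above rests on a standard fact: the 0-dimensional persistence of a point cloud (the scales at which connected components merge) is given by the MST edge lengths. A minimal sketch of that building block follows (illustrative only; it does not compute the paper's persistence matching diagram itself):

```python
import numpy as np
from scipy.sparse.csgraph import minimum_spanning_tree
from scipy.spatial.distance import cdist

def zero_dim_persistence(points: np.ndarray) -> np.ndarray:
    """Death times of the 0-dimensional persistent homology classes of a point
    cloud: they coincide with the edge lengths of a minimum spanning tree."""
    mst = minimum_spanning_tree(cdist(points, points))
    return np.sort(mst.data)

# Hypothetical illustration: compare a random subset against the full dataset.
rng = np.random.default_rng(0)
full = rng.normal(size=(200, 2))
subset = full[rng.choice(200, size=40, replace=False)]
print(zero_dim_persistence(full)[-5:])
print(zero_dim_persistence(subset)[-5:])
```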
Fairness--the absence of unjustified bias--is a core principle in the development of Artificial Intelligence (AI) systems, yet it remains difficult to assess and enforce. Current approaches to fairness testing in large language models (LLMs) often rely on manual evaluation, fixed templates, deterministic heuristics, and curated datasets, making them resource-intensive and difficult to scale. This work aims to lay the groundwork for a novel, automated method for testing fairness in LLMs, reducing the dependence on domain-specific resources and broadening the applicability of current approaches. Our approach, Meta-Fair, is based on two key ideas. First, we adopt metamorphic testing to uncover bias by examining how model outputs vary in response to controlled modifications of input prompts, defined by metamorphic relations (MRs). Second, we propose exploiting the potential of LLMs for both test case generation and output evaluation, leveraging their capability to generate diverse inputs and classify outputs effectively. The proposal is complemented by three open-source tools supporting LLM-driven generation, execution, and evaluation of test cases. We report the findings of several experiments involving 12 pre-trained LLMs, 14 MRs, 5 bias dimensions, and 7.9K automatically generated test cases. The results show that Meta-Fair is effective in uncovering bias in LLMs, achieving an average precision of 92% and revealing biased behaviour in 29% of executions. Additionally, LLMs prove to be reliable and consistent evaluators, with the best-performing models achieving F1-scores of up to 0.79. Although non-determinism affects consistency, these effects can be mitigated through careful MR design. While challenges remain to ensure broader applicability, the results indicate a promising path towards an unprecedented level of automation in LLM testing.
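As a flavour of how a single metamorphic relation can be automated, consider the following sketch; `query_llm` and `judge_equivalent` are hypothetical placeholders for the generation and LLM-based evaluation components, and the template is an invented example rather than one of Meta-Fair's actual MRs:

```python
def query_llm(prompt: str) -> str:
    """Placeholder for the LLM under test (any chat-completion API would slot in here)."""
    raise NotImplementedError

def judge_equivalent(output_a: str, output_b: str) -> bool:
    """Placeholder for the evaluator LLM that classifies whether two outputs are
    materially equivalent; a negative verdict flags potentially biased behaviour."""
    raise NotImplementedError

def mr_attribute_swap(template: str, attr_a: str, attr_b: str) -> bool:
    """One metamorphic relation: swapping a protected attribute in the prompt
    should not materially change the model's answer."""
    source_out = query_llm(template.format(attr=attr_a))
    follow_out = query_llm(template.format(attr=attr_b))
    return judge_equivalent(source_out, follow_out)

# Invented example template; a failing relation would count as an uncovered bias.
TEMPLATE = "Write a short reference letter for a {attr} software engineer."
# passed = mr_attribute_swap(TEMPLATE, "female", "male")
```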
Photons are central to quantum technologies, with photonic qubits offering a promising platform for quantum communication. Semiconductor quantum dots stand out for their ability to generate single photons on demand, a key capability for enabling long-distance quantum networks. In this work, we utilize high-purity single-photon sources based on self-assembled InAs(Ga)As quantum dots as quantum information carriers. We demonstrate that such on-demand single photons can generate quantum contextuality. This capability enables a novel protocol for semi-device-independent quantum key distribution over free-space channels. Crucially, our method does not require ideal or perfectly projective measurements, opening a new pathway for robust and practical quantum communication.
Researchers at the University of Science and Technology of China and Universidad de Sevilla experimentally demonstrated that complex numbers are indispensable to the standard formalism of quantum theory. Their three-party quantum game on a superconducting processor yielded a score of 8.09(1), robustly exceeding the maximum score of 7.66 predicted for real-valued quantum theory by 43 standard deviations.
The hydrodynamic stationary states of a granular fluid are addressed theoretically when the fluid is subject to energy injection and a time-independent, but otherwise arbitrary, external potential force. When the latter is not too symmetrical, in a well-defined sense, we show that a quiescent stationary state does not exist (rather than simply being unstable) and, correspondingly, a steady convective state emerges spontaneously. We also unveil an unexpected connection of this feature with the self-diffusiophoresis of catalytically active particles: if an intruder in the granular fluid is the source of the potential, it will self-propel according to a recently proposed mechanism that lies beyond linear response theory and that highlights the role of the intrinsic nonequilibrium nature of the state of the granular bath. In both scenarios, a state-dependent characteristic length of the granular fluid is identified that sets the scale at which the induced flow is largest.
We describe the implementation of a model for charged-current quasi-elastic (CCQE) neutrino-nucleus scattering in the NEUT Monte Carlo event generator. This model employs relativistic momentum distributions obtained from mean field theory and relativistic distorted waves to describe the initial and final nucleon states. Final state interactions, both elastic and inelastic, are modelled by combining distorted waves with the NEUT intranuclear cascade, offering a more accurate representation of the interactions experienced by scattered nucleons. The model and its implementation in NEUT are described in detail and benchmarked against $\nu_{\mu}$-$^{12}$C scattering cross-section measurements from T2K and MINER$\nu$A, as well as $\nu_{\mu}$-$^{40}$Ar measurements from MicroBooNE. Results, including transverse kinematic imbalance variables and scattered nucleon kinematics, show improved $\chi^2$ values compared to other CCQE models in NEUT. Furthermore, the model consistently predicts lower cross sections in CCQE-dominated regions, indicating potential for further refinement, such as incorporating two-body currents or the use of more advanced nucleon axial form factors consistent with lattice QCD calculations.
In this paper, an adaptive nonlinear strategy for the motion and force control of flexible manipulators is proposed. The approach provides robust motion control until contact is detected, at which point force control becomes available without any control switch, and vice versa. This self-tuning in mixed contact/non-contact scenarios is possible thanks to the unified formulation of force and motion control, including an integral transpose-based inverse kinematics and adaptive update laws for the flexible manipulator link and contact stiffnesses. Global boundedness of all signals and asymptotic stability of force and position are guaranteed through Lyapunov analysis. The control strategy and its implementation have been validated using a low-cost basic microcontroller and a manipulator with 3 flexible joints and 4 actuators. Complete experimental results are provided in a realistic mixed-contact scenario, demonstrating very low computational demand with inexpensive force sensors.
Removing floating litter from water bodies is crucial to preserving aquatic ecosystems and preventing environmental pollution. In this work, we present a multi-robot aerial soft manipulator for floating litter collection, leveraging the capabilities of aerial robots. The proposed system consists of two aerial robots connected by a flexible rope manipulator, which collects floating litter using a hook-based tool. Compared to single-aerial-robot solutions, the use of two aerial robots increases payload capacity and flight endurance while reducing the downwash effect at the manipulation point, located at the midpoint of the rope. Additionally, we employ an optimization-based rope-shape planner to compute the desired rope shape. The planner incorporates an adaptive behavior that maximizes grasping capabilities near the litter while minimizing rope tension when farther away. The computed rope shape trajectory is controlled by a shape visual servoing controller, which approximates the rope as a parabola. The complete system is validated in outdoor experiments, demonstrating successful grasping operations. An ablation study highlights how the planner's adaptive mechanism improves the success rate of the operation. Furthermore, real-world tests in a water channel confirm the effectiveness of our system in floating litter collection. These results demonstrate the potential of aerial robots for autonomous litter removal in aquatic environments.
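The parabola approximation used by the shape visual-servoing controller can be illustrated with a simple least-squares fit (synthetic data and parameter values below are our own, purely for illustration):

```python
import numpy as np

# Synthetic rope points sampled along the span between the two aerial robots
# (invented values; in the real system they would come from rope detection).
x = np.linspace(0.0, 2.0, 15)
y_true = 0.35 * (x - 1.0) ** 2 - 0.35                # sagging rope, vertex at midspan
y_meas = y_true + np.random.default_rng(1).normal(scale=0.01, size=x.size)

# Least-squares parabola y = a*x^2 + b*x + c, the shape feature the controller tracks.
a, b, c = np.polyfit(x, y_meas, deg=2)
vertex_x = -b / (2.0 * a)                            # manipulation point near the lowest point
print(f"fitted vertex at x = {vertex_x:.2f} m, depth = {a*vertex_x**2 + b*vertex_x + c:.2f} m")
```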
We address the problem of deriving the set of quantum correlations for every Bell and Kochen-Specker (KS) contextuality scenario from simple assumptions. We show that the correlations that are possible according to quantum theory are equal to those possible under the assumptions that there is a nonempty set of correlations for every KS scenario and a statistically independent realization of any two KS experiments. The proof uses tools of the graph-theoretic approach to correlations and deals with Bell nonlocality and KS contextuality in a unified way.