University of Vigo
The amplitude of an excited shape mode in a kink is expected to decay with a well-known power law via scalar radiation emission due to the nonlinear self-coupling of the scalar field. In this work we propose an alternative decay mechanism via pair production of fermions in a simple extension of the ϕ⁴ model in which the scalar field is coupled to a (quantum) fermionic field through a Yukawa-like interaction term. We study the power emitted through fermions as a function of the coupling constant in the semi-classical limit (without backreaction) and compare it to the case of purely scalar radiation emission.
Numerical security proofs based on conic optimization are known to deliver optimal secret-key rates, but so far they have mostly assumed that the emitted states are fully characterized. In practice, this assumption is unrealistic, since real devices inevitably suffer from imperfections and side channels that are extremely difficult to model in detail. Here, we extend conic-optimization methods to scenarios where only partial information about the emitted states is known, covering both prepare-and-measure and measurement-device-independent protocols. We demonstrate that our method outperforms state-of-the-art analytical and numerical approaches under realistic source imperfections, especially for protocols that use non-qubit encodings. These results advance numerical security proofs towards a standard, implementation-ready framework for evaluating quantum key distribution protocols in the presence of source imperfections.
The decoy-state method is a prominent approach to enhance the performance of quantum key distribution (QKD) systems that operate with weak coherent laser sources. Due to the limited transmissivity of single photons in optical fiber, current experimental decoy-state QKD setups increase their secret key rate by raising the repetition rate of the transmitter. However, this usually leads to correlations between subsequent optical pulses. This phenomenon leaks information about the encoding settings, including the intensities of the generated signals, which invalidates a basic premise of decoy-state QKD. Here we characterize intensity correlations between the emitted optical pulses in two industrial prototypes of decoy-state BB84 QKD systems and show that they significantly reduce the asymptotic key rate. In contrast to what has been conjectured, we experimentally confirm that the impact of higher-order correlations on the intensity of the generated signals can be much higher than that of nearest-neighbour correlations.
University of Cambridge, University of Bern, University of Edinburgh, ETH Zürich, Technische Universität Dresden, University of Pisa, Stockholm University, Sorbonne Université, University of Turku, Leiden University, University of Geneva, University of Belgrade, University of Vienna, University of Leicester, University of Vigo, Universiteit Leiden, Observatoire de Paris, Université de Liège, INAF – Osservatorio Astrofisico di Torino, University of Groningen, University of Bath, Lund University, University of Lausanne, Instituto de Astrofísica de Canarias, University of Antioquia, European Space Agency, Universidad de Valparaíso, Université de Mons, ELTE Eötvös Loránd University, University of Bordeaux, Observatoire de la Côte d'Azur, Faculdade de Ciências da Universidade de Lisboa, University of Barcelona, Max Planck Institute for Astronomy, National Observatory of Athens, Université de Paris-Saclay, Instituto de Astrofísica de Andalucía, Université de Franche-Comté, INAF – Osservatorio Astronomico di Roma, Katholieke Universiteit Leuven, Royal Observatory of Belgium, Space Research Institute, Université de Rennes, University of Aarhus, Konkoly Observatory, Tartu Observatory, Hellenic Open University, ARI, Zentrum für Astronomie der Universität Heidelberg, Copernicus Astronomical Center, ESAC, Villanueva de la Cañada, Astronomical Observatory of Turin, Université de Besançon, CENTRA, Universidade de Lisboa, Université de Nice, Observatoire de la Côte d'Azur (CNRS), INAF – Osservatorio Astronomico di Catania, Université catholique de Louvain, Université de Toulouse, Université Libre de Bruxelles, INAF – Osservatorio Astronomico di Capodimonte, Université de Lorraine, Aix-Marseille Université, Université de Strasbourg, Université de Lille, INAF – Osservatorio Astrofisico di Arcetri, INAF – Osservatorio Astronomico di Padova, Université de Montpellier, INAF – Osservatorio di Astrofisica e Scienza dello Spazio di Bologna
The Gaia Galactic survey mission is designed and optimized to obtain astrometry, photometry, and spectroscopy of nearly two billion stars in our Galaxy. Yet as an all-sky multi-epoch survey, Gaia also observes several million extragalactic objects down to a magnitude of G~21 mag. Due to the nature of the Gaia onboard selection algorithms, these are mostly point-source-like objects. Using data provided by the satellite, we have identified quasar and galaxy candidates via supervised machine learning methods, and estimate their redshifts using the low resolution BP/RP spectra. We further characterise the surface brightness profiles of host galaxies of quasars and of galaxies from pre-defined input lists. Here we give an overview of the processing of extragalactic objects, describe the data products in Gaia DR3, and analyse their properties. Two integrated tables contain the main results for a high-completeness but low-purity (50-70%) set of 6.6 million candidate quasars and 4.8 million candidate galaxies. We provide queries that select purer sub-samples of these containing 1.9 million probable quasars and 2.9 million probable galaxies (both 95% purity). We also use high quality BP/RP spectra of 43 thousand high probability quasars over the redshift range 0.05-4.36 to construct a composite quasar spectrum spanning rest-frame wavelengths from 72-100 nm.
In this paper we explore a relevant aspect of the interplay between two core elements of global optimization algorithms for nonconvex nonlinear programming problems, which we believe has been overlooked by past literature. The first one is the reformulation of the original problem, which requires the introduction of auxiliary variables with the goal of defining convex relaxations that can be solved both reliably and efficiently on a node-by-node basis. The second one, bound tightening or, more generally, domain reduction, makes it possible to reduce the search space to be explored by the branch-and-bound algorithm. We are interested in the performance implications of propagating the bounds of the original variables to the auxiliary ones in the lifted space: Does this propagation reduce the overall size of the tree? Does it improve the efficiency of solving the node relaxations? To better understand the above interplay, we focus on the reformulation-linearization technique (RLT) for polynomial optimization. In this setting we are able to obtain a theoretical result on the implicit bounds of the auxiliary variables in the RLT relaxations, which sets the stage for the ensuing computational study, whose goal is to assess to what extent the performance of an RLT-based algorithm may be affected by the decision to explicitly propagate the bounds on the original variables to the auxiliary ones.
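The bound propagation discussed above can be illustrated with a small sketch. In an RLT relaxation, an auxiliary variable stands for a product of original variables, so interval arithmetic over the original bounds yields implied bounds for it. The function name and the numbers below are illustrative, not taken from the paper:

```python
# Hypothetical sketch: propagating bounds on original variables x_i, x_j
# to an RLT auxiliary variable w_ij that stands for the product x_i * x_j.

def product_bounds(lx, ux, ly, uy):
    """Implied interval for w = x*y given x in [lx, ux] and y in [ly, uy].
    The extrema of a bilinear term over a box occur at the box corners."""
    corners = [lx * ly, lx * uy, ux * ly, ux * uy]
    return min(corners), max(corners)

# Example: x1 in [-1, 2], x2 in [0, 3] implies w12 = x1*x2 in [-3, 6]
lw, uw = product_bounds(-1.0, 2.0, 0.0, 3.0)
```

Explicitly adding such intervals as bounds on the auxiliary variables is exactly the design decision whose performance impact the paper studies.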
This chapter highlights the transformation of secure communications through the incorporation of quantum mechanics. Over the past four decades, this groundbreaking theory has quietly revolutionized private communication. The chapter provides a concise historical overview of this field's inception, tracking the development of its pioneering protocol, BB84. It delves deeply into the protocol's evolution, spotlighting its milestones and challenges. Furthermore, it offers a panoramic view of the entire quantum key distribution landscape, encompassing continuous variable protocols designed to harness existing telecom technologies and device-independent quantum key distribution protocols aimed at achieving secure key exchange with minimal reliance on the experimental setup.
Researchers systematically compared fine-tuned pre-trained models against prompt-engineered large language models for emotion recognition in open-ended text. They found that fine-tuned RoBERTa achieved an 88% F-score for six-class emotion detection, while general-purpose LLMs reached over 78% accuracy for binary sentiment tasks using basic prompts.
Quantum key distribution systems offer cryptographic security, provided that all their components are thoroughly characterised. However, certain components might be vulnerable to a laser-damage attack, particularly when attacked at previously untested laser parameters. Here we show that exposing 1550-nm fiber-optic isolators to 17-mW average power, 1061-nm picosecond attacking pulses reduces their isolation below a safe threshold. Furthermore, the exposure to 1160-mW sub-nanosecond pulsed illumination permanently degrades isolation at 1550 nm while the isolators maintain forward transparency. These threats are not addressed by the currently-practiced security analysis.
Anxiety and depression are the most common mental health issues worldwide, affecting a non-negligible part of the population. Accordingly, stakeholders, including governments' health systems, are developing new strategies to promote early detection and prevention from a holistic perspective (i.e., addressing several disorders simultaneously). In this work, an entirely novel system for the multi-label classification of anxiety and depression is proposed. The input data consists of dialogues from user interactions with an assistant chatbot. Another relevant contribution lies in using Large Language Models (LLMs) for feature extraction, given the complexity and variability of language. The combination of LLMs, with their high capability for language understanding, and Machine Learning (ML) models, with their contextual knowledge of the classification problem acquired from the labeled data, constitutes a promising approach towards mental health assessment. To promote the solution's trustworthiness, reliability, and accountability, explainability descriptions of the model's decisions are provided in a graphical dashboard. Experimental results on a real dataset attain 90% accuracy, improving on those in the prior literature. The ultimate objective is to contribute, in an accessible and scalable way, to early detection before formal treatment occurs in the healthcare systems.
We use a simple method to derive two concentration bounds on the hypergeometric distribution. Comparison with existing results illustrates the advantage of these bounds across different regimes.
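For context on the kind of comparison described above, the sketch below evaluates an exact hypergeometric tail probability against the classical Hoeffding-style bound exp(-2nt²), which is known to hold for the hypergeometric distribution; the specific bounds derived in the paper are not reproduced here, and the parameter values are illustrative:

```python
from math import comb, exp

def hyper_tail(N, K, n, k):
    """Exact P(X >= k) for X ~ Hypergeometric(N, K, n):
    n draws without replacement from N items of which K are successes."""
    total = comb(N, n)
    return sum(comb(K, i) * comb(N - K, n - i)
               for i in range(k, min(K, n) + 1)) / total

# Hoeffding-style bound: P(X >= n*p + n*t) <= exp(-2*n*t**2), with p = K/N
N, K, n = 1000, 300, 100
p = K / N
t = 0.1
k = int(n * (p + t))          # deviation of n*t above the mean n*p
exact = hyper_tail(N, K, n, k)
bound = exp(-2 * n * t * t)   # the exact tail never exceeds this
```

Comparing such exact tails with closed-form bounds across parameter regimes is precisely how the relative tightness of different bounds can be assessed.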
We investigate diagonal artifacts present in images captured by several Samsung smartphones and their impact on PRNU-based camera source verification. We first show that certain Galaxy S series models share a common pattern causing fingerprint collisions, with a similar issue also found in some Galaxy A models. Next, we demonstrate that reliable PRNU verification remains feasible for devices supporting PRO mode with raw capture, since raw images bypass the processing pipeline that introduces artifacts. This option, however, is not available for the mid-range A series models or in forensic cases without access to raw images. Finally, we outline potential forensic applications of the diagonal artifacts, such as reducing misdetections in HDR images and localizing regions affected by synthetic bokeh in portrait-mode images.
A research team from atlanTTic, University of Vigo, and CENTUM Research & Technology developed a hybrid network simulator designed for Flying Ad-hoc Networks. The tool integrates 5G Vehicle-to-Everything communications and Named-Data Networking, allowing for realistic validation and performance analysis of protocols and applications for dynamic UAV fleets.
Quantum key distribution promises unconditionally secure communications. However, as practical devices tend to deviate from their specifications, the security guarantees of some practical systems no longer hold. In particular, an adversary can exploit imperfect detectors to learn a large part of the secret key, even though the security proof claims otherwise. Recently, a practical approach, measurement-device-independent quantum key distribution, has been proposed to solve this problem. However, so far its security has only been fully proven under the assumption that the legitimate users of the system have unlimited resources. Here we fill this gap and provide a rigorous security proof against general attacks in the finite-key regime. This is obtained by applying large deviation theory, specifically the Chernoff bound, to perform parameter estimation. For the first time we demonstrate the feasibility of long-distance implementations of measurement-device-independent quantum key distribution within a reasonable time-frame of signal transmission.
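To give a flavour of Chernoff-bound parameter estimation in the finite-key regime, the sketch below computes the statistical deviation allowed at a given failure probability; it uses the textbook multiplicative Chernoff bound and illustrative numbers, not the paper's full analysis:

```python
from math import log, sqrt

def chernoff_deviation(mu, eps):
    """Deviation D such that P(X >= mu + D) <= eps for a sum X of
    independent Bernoulli trials with mean mu, from the multiplicative
    Chernoff bound P(X >= (1+d)*mu) <= exp(-d**2 * mu / 3), valid for d <= 1."""
    d = sqrt(3 * log(1 / eps) / mu)
    return d * mu

# The relative deviation shrinks like 1/sqrt(mu): the more signals are
# exchanged, the tighter the finite-key parameter estimates become.
rel_small = chernoff_deviation(1e4, 1e-10) / 1e4
rel_large = chernoff_deviation(1e8, 1e-10) / 1e8
```

This scaling is why finite-key effects dominate at short block sizes and fade as the number of transmitted signals grows.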
We introduce an algorithm to estimate the evolution of learning curves over an entire training database, based on the results obtained from a portion of it and using a functional strategy. Once a point in the process, called the prediction level, has been passed, we iteratively approximate the sought value at the desired time, independently of the learning technique used. The proposal is formally correct with respect to our working hypotheses and includes a reliable proximity condition, which allows the user to fix a convergence threshold with respect to the accuracy finally achievable. This extends the concept of a stopping criterion and appears effective even in the presence of distorting observations. Our aim is to evaluate the training effort, supporting decision making in order to reduce the need for both human and computational resources during the learning process. The proposal is of interest in at least three operational procedures. The first is the anticipation of accuracy gain, with the purpose of measuring how much work is needed to achieve a certain degree of performance. The second is the comparison of efficiency between systems at training time, so that this task need only be completed for the one that best suits our requirements. The third is the prediction of accuracy for customizing systems, since we can estimate in advance the impact of settings on both performance and development costs. Using the generation of part-of-speech taggers as an example application, the experimental results are consistent with our expectations.
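A simple baseline for learning-curve extrapolation is a power-law fit to the observed error, which can be sketched as follows; this is an illustrative least-squares fit on synthetic data, not the paper's iterative functional strategy:

```python
from math import log, exp

def fit_power_law(sizes, errors):
    """Least-squares fit of error(n) = b * n**(-c) in log-log space."""
    xs = [log(n) for n in sizes]
    ys = [log(e) for e in errors]
    m = len(xs)
    mx, my = sum(xs) / m, sum(ys) / m
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    c = -slope
    b = exp(my + c * mx)
    return b, c

# Synthetic learning curve with known exponent 0.5
sizes = [1000, 2000, 4000, 8000]
errors = [0.5 * n ** -0.5 for n in sizes]
b, c = fit_power_law(sizes, errors)
pred = b * 64000 ** -c   # extrapolated error at 64000 training examples
```

Extrapolating from a fitted curve like this is what lets one estimate, before training completes, how much additional data or effort a target accuracy would require.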
Electoral fraud often manifests itself as statistical anomalies in election results, yet its extent can rarely be reliably confirmed by other evidence. Here we report complete results of municipal elections in Vlasikha town near Moscow, where we observe both statistical irregularities in the vote-counting transcripts and forensic evidence of tampering with ballots during their overnight storage. We evaluate two types of statistical signatures in the vote sequence that can prove batches of fraudulent ballots have been injected. We find that pairs of factory-made security bags with identical serial numbers are used in this fraud scheme. At 8 out of our 9 polling stations, the statistical and forensic evidence agrees (identifying 7 as fraudulent and 1 as honest), while at the remaining station the statistical evidence detects the fraud while the forensic evidence is insufficient. We also illustrate that the use of tamper-indicating seals at elections is inherently unreliable. (A Russian-language abstract is available in the Russian version of this paper, in normal Cyrillic rather than transliteration.)
We experimentally demonstrate a power limiter based on single-walled carbon nanotubes dispersed in a polymer matrix. This simple fiber-optic device permanently increases its attenuation when subjected to 50-mW or higher cw illumination at 1550 nm and initiates a fiber-fuse effect at 1 to 5 W. It may be used for protecting quantum key distribution equipment from light-injection attacks. We demonstrate its compatibility with phase- and polarisation-encoding quantum key distribution systems.
A commercial quantum key distribution (QKD) system needs to be formally certified to enable its wide deployment. The certification should include the system's robustness against known implementation loopholes and attacks that exploit them. Here we ready a fiber-optic QKD system for this procedure. The system has a prepare-and-measure scheme with decoy-state BB84 protocol, polarisation encoding, qubit source rate of 312.5 MHz, and is manufactured by QRate. We detail its hardware and post-processing. We analyse the hardware for known implementation loopholes, search for possible new loopholes, and discuss countermeasures. We then amend the system design to address the highest-risk loopholes identified. We also work out technical requirements on the certification lab and outline its possible structure.
A passive quantum key distribution (QKD) transmitter generates the quantum states prescribed by a QKD protocol at random, combining a fixed quantum mechanism and a post-selection step. By avoiding the use of active optical modulators externally driven by random number generators, passive QKD transmitters offer immunity to modulator side channels and potentially enable higher frequencies of operation. Recently, the first linear optics setup suitable for passive decoy-state QKD has been proposed. In this work, we simplify the prototype and adopt sharply different approaches for BB84 polarization encoding and decoy-state generation. Building on this, we develop a tight custom-made security analysis that dispenses with an unnecessary assumption and a post-selection step, both of which are central to the former proposal.
We introduce reputable citations (RC), a method to screen and segment a collection of papers by decoupling popularity and influence. We demonstrate RC using recent works published in a large set of mathematics journals from Clarivate's Incites Essential Science Indicators, leveraging Clarivate's Web of Science for citation reports and assigning prestige values to institutions based on well-known international rankings. We compare researchers drawn from two samples: highly cited researchers (HC) and mathematicians whose influence is acknowledged by peers (Control). RC scores distinguish the influence of researchers beyond citations, revealing highly cited mathematical work of modest influence. The control group, comprising peer-acknowledged researchers, dominates the top tier of RC scores despite having fewer total citations than the HC group. Influence, as recognized by peers, does not always correlate with high citation counts, and RC scores offer a nuanced distinction between the two. With development, RC scores could automate screening of citations to identify exceptional and influential research, while addressing manipulative practices. The first application of RC reveals mathematics works that may be cited for reasons unrelated to genuine research advancements, suggesting a need for continued development of this method to mitigate such trends.
Quantum Key Distribution (QKD) promises information-theoretic security, yet integrating QKD into existing protocols like TLS remains challenging due to its fundamentally different operational model. In this paper, we propose a hybrid QKD-KEM protocol with two distinct integration approaches: a client-initiated flow compatible with both ETSI 004 and 014 specifications, and a server-initiated flow similar to existing work but limited to stateless ETSI 014 APIs. Unlike previous implementations, our work specifically addresses the integration of stateful QKD key exchange protocols (ETSI 004) which is essential for production QKD networks but has remained largely unexplored. By adapting OpenSSL's provider infrastructure to accommodate QKD's pre-distributed key model, we maintain compatibility with current TLS implementations while offering dual layers of security. Performance evaluations demonstrate the feasibility of our hybrid scheme with acceptable overhead, showing that robust security against quantum threats is achievable while addressing the unique requirements of different QKD API specifications.