Politecnico di Bari
In the "Beyond Moore's Law" era, with increasing edge intelligence, domain-specific computing embracing unconventional approaches will become increasingly prevalent. At the same time, adopting a variety of nanotechnologies will offer benefits in energy cost, computational speed, reduced footprint, cyber resilience, and processing power. The time is ripe for a roadmap for unconventional computing with nanotechnologies to guide future research, and this collection aims to fill that need. The authors provide a comprehensive roadmap for neuromorphic computing using electron spins, memristive devices, two-dimensional nanomaterials, nanomagnets, and various dynamical systems. They also address other paradigms such as Ising machines, Bayesian inference engines, probabilistic computing with p-bits, processing in memory, quantum memories and algorithms, computing with skyrmions and spin waves, and brain-inspired computing for incremental learning and problem-solving in severely resource-constrained environments. These approaches have advantages over traditional Boolean computing based on von Neumann architecture. As the computational requirements for artificial intelligence grow 50 times faster than Moore's Law for electronics, more unconventional approaches to computing and signal processing will appear on the horizon, and this roadmap will help identify future needs and challenges. In a very fertile field, experts in the field aim to present some of the dominant and most promising technologies for unconventional computing that will be around for some time to come. Within a holistic approach, the goal is to provide pathways for solidifying the field and guiding future impactful discoveries.
Gamma-ray bursts are the most luminous electromagnetic events in the universe. Their prompt gamma-ray emission has typical durations between a fraction of a second and several minutes. A rare subset of these events has durations in excess of a thousand seconds, referred to as ultra-long gamma-ray bursts. Here, we report the discovery of GRB 250702B, the longest gamma-ray burst ever seen, with a ~25,000 s gamma-ray duration, and characterize this event using data from four instruments in the InterPlanetary Network and the Monitor of All-sky X-ray Image. We find a hard spectrum, subsecond variability, and high total energy, which are only known to arise from ultrarelativistic jets powered by a rapidly-spinning stellar-mass central engine. These properties and the extreme duration are together incompatible with all confirmed gamma-ray burst progenitors and nearly all models in the literature. This burst is naturally explained with the helium merger model, where a field binary ends when a black hole falls into a stripped star and proceeds to consume and explode it from within. Under this paradigm, GRB 250702B adds to the growing evidence that helium stars expand and that some ultra-long GRBs have similar evolutionary pathways as collapsars, stellar-mass gravitational wave sources, and potentially rare types of supernovae.
The Event Horizon Telescope Collaboration conducted the first multi-epoch polarimetric imaging of M87* at event-horizon scales, observing a stable black hole shadow diameter while detecting substantial year-to-year variability in the ring's azimuthal brightness and linear polarization patterns, along with initial constraints on extended jet emission.
Researchers from Politecnico di Bari investigate memorization of recommendation datasets in Large Language Models (LLMs), revealing that models like GPT-4 have memorized up to 80.76% of MovieLens-1M items while demonstrating how this memorization correlates with recommendation performance and exhibits strong popularity bias effects.
Researchers from Politecnico di Bari and Sapienza University of Rome probed Llama models to map where sentiment and emotion information are encoded internally, finding that sentiment is concentrated in mid-layers and emotion in early layers. Based on these findings, they developed SENTRILLAMA, a specialized model that significantly reduces computational costs while achieving competitive or superior accuracy in sentiment classification compared to full LLMs and prompting methods.
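For readers unfamiliar with probing, a minimal layer-wise probe looks like the sketch below: mean-pool each layer's hidden states and fit a linear classifier per layer. The checkpoint, pooling, and toy data are placeholders, not the paper's exact setup.

```python
# Minimal layer-wise probing sketch (illustrative; not the authors' pipeline).
import torch
from transformers import AutoModel, AutoTokenizer
from sklearn.linear_model import LogisticRegression

MODEL = "meta-llama/Llama-3.2-1B"  # placeholder checkpoint
tok = AutoTokenizer.from_pretrained(MODEL)
tok.pad_token = tok.eos_token      # Llama tokenizers ship without a pad token
model = AutoModel.from_pretrained(MODEL, output_hidden_states=True)

def layer_features(texts):
    """Mean-pooled hidden states: one feature matrix per layer."""
    with torch.no_grad():
        batch = tok(texts, return_tensors="pt", padding=True, truncation=True)
        out = model(**batch)
    # out.hidden_states: tuple of (num_layers + 1) tensors of shape [B, T, H]
    return [h.mean(dim=1).float().numpy() for h in out.hidden_states]

texts = ["I loved this movie", "Worst purchase ever"]  # toy data
labels = [1, 0]
for i, feats in enumerate(layer_features(texts)):
    probe = LogisticRegression(max_iter=1000).fit(feats, labels)
    print(f"layer {i:2d}: train acc = {probe.score(feats, labels):.2f}")
```

Comparing probe accuracy across layers is what localizes where sentiment and emotion information concentrate.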
Emerging communication and cryptography applications call for reliable, fast, unpredictable random number generators. Quantum random number generation (QRNG) allows for the creation of truly unpredictable numbers thanks to the inherent randomness available in quantum mechanics. A popular approach is using the quantum vacuum state to generate random numbers. While convenient, this approach has generally been limited in speed compared to other schemes. Here, through custom co-design of opto-electronic integrated circuits and side-information reduction by digital filtering, we experimentally demonstrate an ultrafast generation rate of 100 Gbps, setting a new record for vacuum-based quantum random number generation by one order of magnitude. Furthermore, our experimental demonstrations are well supported by an upgraded device-dependent framework that is secure against both classical and quantum side-information and that also properly considers the non-linearity in the digitization process. This ultrafast secure random number generator on a chip-scale platform holds promise for next-generation communication and cryptography applications.
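To illustrate only the classical post-processing stage of a vacuum-based QRNG, here is a toy pipeline: Gaussian samples stand in for homodyne measurements of the vacuum, an ADC digitizes them, and a binary Toeplitz matrix hashes the raw bits down to the extractable length. The extraction ratio is a placeholder, not the paper's security analysis.

```python
# Toy vacuum-noise QRNG post-processing sketch (ratios are illustrative).
import numpy as np
from scipy.linalg import toeplitz

rng = np.random.default_rng(0)
N_SAMPLES, ADC_BITS = 512, 8
samples = rng.normal(size=N_SAMPLES)            # stand-in for vacuum homodyne noise
codes = np.clip(((samples + 4) / 8 * 2**ADC_BITS).astype(int),
                0, 2**ADC_BITS - 1)             # 8-bit ADC over +/- 4 sigma
raw_bits = ((codes[:, None] >> np.arange(ADC_BITS)) & 1).ravel()

n = raw_bits.size
m = int(0.5 * n)                                # assumed extractable fraction
T = toeplitz(rng.integers(0, 2, size=m), rng.integers(0, 2, size=n))
secure_bits = (T @ raw_bits) % 2                # Toeplitz hashing over GF(2)
print(f"{n} raw bits -> {m} extracted bits")
```

In a real device the output length m is set by the certified min-entropy of the digitized samples, which is exactly what the paper's side-information framework bounds.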
Quantum conditional entropies play a fundamental role in quantum information theory. In quantum key distribution, they are exploited to obtain reliable lower bounds on the secret-key rates in the finite-size regime, against collective attacks and coherent attacks under suitable assumptions. We consider continuous-variable communication protocols, where the sender Alice encodes information using a discrete modulation of phase-shifted coherent states, and the receiver Bob decodes by homodyne or heterodyne detection. We compute the Petz-Rényi and sandwiched Rényi conditional entropies associated with these setups, under the assumption of a passive eavesdropper who collects the quantum information leaked through a lossy communication line of known or bounded transmissivity. Whereas our results do not directly provide reliable key-rate estimates, they do represent useful ball-park figures. We obtain analytical or semi-analytical expressions that do not require intensive numerical calculations. These expressions serve as bounds on the key rates that may be tight in certain scenarios. We compare different estimates, including known bounds that have already appeared in the literature and new bounds. The latter are found to be tighter for very short block sizes.
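For reference, the two Rényi divergences underlying these conditional entropies have standard definitions, stated here in common notation; the paper may use the variant optimized over the conditioning marginal rather than the fixed-marginal form below.

```latex
% Petz-Renyi and sandwiched Renyi divergences (alpha > 0, alpha != 1)
D_\alpha(\rho\,\|\,\sigma) = \tfrac{1}{\alpha-1}
    \log \operatorname{Tr}\!\big[\rho^{\alpha}\,\sigma^{1-\alpha}\big],
\qquad
\widetilde{D}_\alpha(\rho\,\|\,\sigma) = \tfrac{1}{\alpha-1}
    \log \operatorname{Tr}\!\Big[\big(\sigma^{\frac{1-\alpha}{2\alpha}}
    \,\rho\,\sigma^{\frac{1-\alpha}{2\alpha}}\big)^{\!\alpha}\Big],

% Conditional entropy induced by either divergence (non-optimized variant)
H_\alpha(A|B)_\rho = -\,D_\alpha\big(\rho_{AB}\,\big\|\,
    \mathbb{1}_A\otimes\rho_B\big).
```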
The main aim of this work is to present an explicit construction of a 2-design of U(2), relying only on a tool that belongs to every physicist's toolbox: the theory of angular momentum. Unitary designs are a rich and fundamental mathematical topic, with numerous fruitful applications in quantum information science and technology. In this work we take a peek under the hood. We begin with a minimal set of definitions and characterizations. Then we derive all 1-designs of U(2) of minimum size. Finally, we set out, step by step, a completion procedure extending such 1-designs to 2-designs. In particular, starting from the Pauli basis, the prototypical unitary 1-design, one "naturally" obtains the 2-design originally employed by Bennett and coauthors in "Mixed State Entanglement and Quantum Error Correction". The present work also serves as a gentle and largely self-contained introduction to the subject.
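One standard way to test design properties numerically (not necessarily the characterization used in the paper) is the frame potential: an ensemble in U(d) is a t-design iff its frame potential equals the Haar value t! (for t ≤ d). The check below confirms that the Pauli basis is a 1-design of U(2) but not a 2-design, which is exactly why a completion procedure is needed.

```python
# Frame-potential check: F_t(X) = Haar value t! iff X is a t-design (t <= d).
import itertools
import numpy as np

I2 = np.eye(2, dtype=complex)
X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
Z = np.diag([1.0 + 0j, -1.0])
paulis = [I2, X, Y, Z]

def frame_potential(ensemble, t):
    vals = [abs(np.trace(U.conj().T @ V)) ** (2 * t)
            for U, V in itertools.product(ensemble, repeat=2)]
    return sum(vals) / len(ensemble) ** 2

print(frame_potential(paulis, 1))  # 1.0 = 1! -> the Paulis are a 1-design
print(frame_potential(paulis, 2))  # 4.0 != 2! -> they are NOT a 2-design
```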
Multimodal Recommender Systems aim to improve recommendation accuracy by integrating heterogeneous content, such as images and textual metadata. While effective, it remains unclear whether their gains stem from true multimodal understanding or increased model complexity. This work investigates the role of multimodal item embeddings, emphasizing the semantic informativeness of the representations. Initial experiments reveal that embeddings from standard extractors (e.g., ResNet50, Sentence-BERT) enhance performance, but rely on modality-specific encoders and ad hoc fusion strategies that lack control over cross-modal alignment. To overcome these limitations, we leverage Large Vision-Language Models (LVLMs) to generate multimodal-by-design embeddings via structured prompts. This approach yields semantically aligned representations without requiring any fusion. Experiments across multiple settings show notable performance improvements. Furthermore, LVLM embeddings offer a distinctive advantage: they can be decoded into structured textual descriptions, enabling direct assessment of their multimodal comprehension. When such descriptions are incorporated as side content into recommender systems, they improve recommendation performance, empirically validating the semantic depth and alignment encoded within LVLM outputs. Our study highlights the importance of semantically rich representations and positions LVLMs as a compelling foundation for building robust and meaningful multimodal representations in recommendation tasks.
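The multimodal-by-design idea can be sketched as follows: one structured prompt plus one image goes through a single LVLM forward pass and comes out as one item vector, with no separate fusion step. The prompt template and the embedding call below are assumptions for illustration, with a stub standing in for a real LVLM.

```python
# Sketch of LVLM-based "multimodal-by-design" item embeddings.
# StubLVLM is a placeholder; a real LVLM fuses vision and text internally.
from dataclasses import dataclass
import numpy as np

PROMPT = (
    "You are given a product image and its metadata.\n"
    "Title: {title}\nCategory: {category}\n"
    "Summarize the item for recommendation purposes."
)

@dataclass
class StubLVLM:
    dim: int = 768
    def embed(self, image_path: str, prompt: str) -> np.ndarray:
        # Placeholder: in practice one would pool the LVLM's last hidden
        # state over the fused vision+text tokens into a single vector.
        rng = np.random.default_rng(abs(hash((image_path, prompt))) % 2**32)
        v = rng.normal(size=self.dim)
        return v / np.linalg.norm(v)

lvlm = StubLVLM()
item_vec = lvlm.embed("item_001.jpg",
                      PROMPT.format(title="Trail shoe", category="Sports"))
print(item_vec.shape)  # (768,) -> drop-in item features for a recommender
```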
An investigation into Large Language Models' ability to internally encode factual accuracy reveals a significant generalization gap. Factuality probes, which replicate prior success on synthetic datasets, perform with accuracy barely above random chance when applied to statements directly generated by LLMs from open-domain QA tasks.
Magnetic tunnel junctions (MTJs) based on ferromagnets are canonical devices in spintronics, with wide-ranging applications in data storage, computing, and sensing. They simultaneously exhibit mechanisms for electrical detection of magnetic order through the tunneling magnetoresistance (TMR) effect, and reciprocally, for controlling magnetic order by electric currents through spin-transfer torque (STT). It was long assumed that neither of these effects could be sizeable in tunnel junctions made from antiferromagnetic materials, since they exhibit no net magnetization. Recently, however, it was shown that all-antiferromagnetic tunnel junctions (AFMTJs) based on chiral antiferromagnets do exhibit TMR due to their non-relativistic momentum-dependent spin polarization and cluster magnetic octupole moment, which are manifestations of their spin-split band structure. However, the reciprocal effect, i.e., the antiferromagnetic counterpart of STT driven by currents through the AFMTJ, has been assumed non-existent due to the total electric current being spin-neutral. Here, in contrast to this common expectation, we report nanoscale AFMTJs exhibiting this reciprocal effect, which we term octupole-driven spin-transfer torque (OTT). We demonstrate current-induced OTT switching of PtMn3|MgO|PtMn3 AFMTJs, fabricated on a thermally oxidized silicon substrate, exhibiting a record-high TMR value of 363% at room temperature and switching current densities of the order of 10 MA/cm². Our theoretical modeling explains the origin of OTT in terms of the imbalance between intra- and inter-sublattice spin currents across the AFMTJ, and equivalently, in terms of the non-zero net cluster octupole polarization of each PtMn3 layer. This work establishes a new materials platform for antiferromagnetic spintronics and provides a pathway towards deeply scaled magnetic memory and room-temperature terahertz technologies.
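For context, the quoted 363% follows the standard TMR ratio, with the parallel and antiparallel octupole configurations of the two PtMn3 layers playing the role that parallel and antiparallel magnetizations play in ferromagnetic MTJs:

```latex
% Standard tunneling magnetoresistance ratio; R_P and R_AP are the junction
% resistances in the parallel and antiparallel octupole configurations.
\mathrm{TMR} = \frac{R_{\mathrm{AP}} - R_{\mathrm{P}}}{R_{\mathrm{P}}} \times 100\%.
```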
In the realm of music recommendation, sequential recommender systems have shown promise in capturing the dynamic nature of music consumption. Nevertheless, traditional Transformer-based models, such as SASRec and BERT4Rec, while effective, encounter challenges due to the unique characteristics of music listening habits. In fact, existing models struggle to create a coherent listening experience due to rapidly evolving preferences. Moreover, music consumption is characterized by a prevalence of repeated listening, i.e., users frequently return to their favourite tracks, an important signal that could be framed as individual or personalized popularity. This paper addresses these challenges by introducing a novel approach that incorporates personalized popularity information into sequential recommendation. By combining user-item popularity scores with model-generated scores, our method effectively balances the exploration of new music with the satisfaction of user preferences. Experimental results demonstrate that a Personalized Most Popular recommender, a method solely based on user-specific popularity, outperforms existing state-of-the-art models. Furthermore, augmenting Transformer-based models with personalized popularity awareness yields superior performance, showing improvements ranging from 25.2% to 69.8%. The code for this paper is available at this https URL
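A minimal sketch of the popularity-aware idea is below: blend a per-user popularity prior, derived from the user's own play counts, with the sequential model's scores. The blend rule and weight are illustrative, not necessarily the paper's exact formulation.

```python
# Popularity-aware score fusion sketch (blend rule and gamma are illustrative).
import numpy as np

def fused_scores(model_scores, user_play_counts, gamma=0.5):
    """Blend model logits with a per-user (personalized) popularity prior."""
    pop = user_play_counts / max(user_play_counts.sum(), 1)    # personal popularity
    model = np.exp(model_scores) / np.exp(model_scores).sum()  # softmax over items
    return gamma * pop + (1 - gamma) * model

model_scores = np.array([2.0, 0.5, 1.0, -1.0])       # e.g., SASRec logits
user_play_counts = np.array([10, 0, 3, 25])          # this user's listen history
print(fused_scores(model_scores, user_play_counts))  # favourite tracks get boosted
```

Setting gamma = 1 recovers the Personalized Most Popular baseline that the paper reports as surprisingly strong.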
Personalized content recommendations have been pivotal to the content experience in digital media, from video streaming to social networks. However, several domain-specific challenges have held back adoption of recommender systems in news publishing. To address these challenges, we introduce the Ekstra Bladet News Recommendation Dataset (EB-NeRD). The dataset encompasses data from over a million unique users and more than 37 million impression logs from Ekstra Bladet. It also includes a collection of over 125,000 Danish news articles, complete with titles, abstracts, bodies, and metadata, such as categories. EB-NeRD served as the benchmark dataset for the RecSys '24 Challenge, where it was demonstrated how the dataset can be used to address both technical and normative challenges in designing effective and responsible recommender systems for news publishing. The dataset is available at: this https URL.
The computation of time-optimal velocity profiles along prescribed paths, subject to generic acceleration constraints, is a crucial problem in robot trajectory planning, with particular relevance to autonomous racing. However, existing methods either support arbitrary acceleration constraints at high computational cost or use conservative box constraints for computational efficiency. We propose FBGA, a new Forward-Backward algorithm with Generic Acceleration constraints, which achieves both high accuracy and low computation time. FBGA operates forward and backward passes to maximize the velocity profile in short, discretized path segments, while satisfying user-defined performance limits. Tested on five racetracks and two vehicle classes, FBGA handles complex, non-convex acceleration constraints with custom formulations. Its maneuvers and lap times closely match optimal control baselines (within 0.11%-0.36%), while being up to three orders of magnitude faster. FBGA maintains high accuracy even with coarse discretization, making it well-suited for online multi-query trajectory planning. Our open-source C++ implementation is available at: this https URL.
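A toy forward-backward pass with a simple friction-circle limit conveys the core idea; FBGA's generic, non-convex constraint handling is considerably richer than this sketch.

```python
# Toy forward-backward velocity-profile pass under a friction-circle limit.
import numpy as np

def forward_backward(ds, curvature, a_max=10.0, v_start=0.0):
    kappa = np.abs(curvature) + 1e-9
    v = np.sqrt(a_max / kappa)             # lateral-acceleration cap per point
    v[0] = min(v[0], v_start)
    for i in range(len(v) - 1):            # forward pass: acceleration limit
        a_lat = v[i] ** 2 * kappa[i]
        a_lon = np.sqrt(max(a_max**2 - a_lat**2, 0.0))
        v[i + 1] = min(v[i + 1], np.sqrt(v[i] ** 2 + 2 * a_lon * ds))
    for i in range(len(v) - 2, -1, -1):    # backward pass: braking limit
        a_lat = v[i + 1] ** 2 * kappa[i + 1]
        a_lon = np.sqrt(max(a_max**2 - a_lat**2, 0.0))
        v[i] = min(v[i], np.sqrt(v[i + 1] ** 2 + 2 * a_lon * ds))
    return v

s = np.linspace(0, 200, 401)                            # path length samples
profile = forward_backward(ds=0.5, curvature=0.02 * np.sin(s / 20))
print(profile.min(), profile.max())
```

The two passes make the profile as fast as the pointwise limits allow while keeping every transition reachable under the longitudinal budget, which is why the method stays accurate even on coarse grids.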
Inspired by the developments in quantum computing, building domain-specific classical hardware to solve computationally hard problems has received increasing attention. Here, by introducing systematic sparsification techniques, we demonstrate a massively parallel architecture: the sparse Ising Machine (sIM). Exploiting sparsity, sIM achieves ideal parallelism: its key figure of merit, flips per second, scales linearly with the number of probabilistic bits (p-bits) in the system. This makes sIM up to 6 orders of magnitude faster than a CPU implementing standard Gibbs sampling. Compared to optimized implementations in TPUs and GPUs, sIM delivers 5-18x speedup in sampling. In benchmark problems such as integer factorization, sIM can reliably factor semiprimes of up to 32 bits, far larger than previous attempts from D-Wave and other probabilistic solvers. Strikingly, sIM beats competition-winning SAT solvers (by 4-700x in runtime to reach 95% accuracy) in solving 3SAT problems. Even when sampling is made inexact using faster clocks, sIM can find the correct ground state with further speedup. The problem encoding and sparsification techniques we introduce can be applied to other Ising Machines (classical and quantum) and the architecture we present can be used for scaling the demonstrated 5,000-10,000 p-bits to 1,000,000 or more through analog CMOS or nanodevices.
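The core p-bit update amounts to Gibbs sampling on a sparse Ising graph: each p-bit flips stochastically according to its local field. Below is a software toy of that rule (not the sIM hardware), with randomly assumed sparse couplings; sparsity is what lets hardware update many non-neighbouring p-bits in parallel.

```python
# Toy Gibbs sampling with p-bits on a sparse Ising graph (couplings assumed).
import numpy as np
from scipy.sparse import random as sparse_random

rng = np.random.default_rng(1)
n = 200
J = sparse_random(n, n, density=0.02, random_state=1,
                  data_rvs=lambda size: rng.choice([-1.0, 1.0], size))
J = ((J + J.T) / 2).tolil()
J.setdiag(0)                              # no self-coupling
J = J.tocsr()                             # symmetric sparse couplings
h = np.zeros(n)                           # biases
m = rng.choice([-1.0, 1.0], n)            # p-bit states in {-1, +1}

beta = 1.0
for _ in range(200):                      # sequential Gibbs sweeps
    for i in range(n):
        field = (J[i] @ m).item() + h[i]  # only sparse neighbours contribute
        p_up = 0.5 * (1.0 + np.tanh(beta * field))
        m[i] = 1.0 if rng.random() < p_up else -1.0
print("energy:", -0.5 * m @ (J @ m) - h @ m)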
Analog neuromorphic hardware is gaining traction as conventional digital systems struggle to keep pace with the growing energy and scalability demands of modern neural networks. Here, we present analog, fully magnonic, artificial neurons, which exploit a nonlinear magnon excitation mechanism based on the nonlinear magnonic frequency shift. This yields a sharp trigger response and tunable fading memory, as well as synaptic connections to other neurons via propagating magnons. Using micro-focused Brillouin light scattering spectroscopy on a Gallium-substituted yttrium iron garnet thin film, we show multi-neuron triggering, cascadability, and multi-input integration across interconnected neurons. Finally, we implement the experimentally verified neuron activation function in a neural network simulation, yielding high classification accuracy on standard benchmarks. The results establish all-magnonic neurons as promising devices for scalable, low-power, wave-based neuromorphic computing, highlighting their potential as building blocks for future physical neural networks.
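As a toy abstraction of the reported behaviour, the neuron can be modelled as a leaky integrator (fading memory) followed by a sharp threshold trigger; the parameters below are illustrative, not measured values from the experiment.

```python
# Toy leaky-integrator model of a sharp-threshold neuron with fading memory.
import numpy as np

def toy_neuron(inputs, tau=5.0, threshold=1.0, steepness=25.0):
    """inputs: time series of incoming (magnon) intensity."""
    state, out = 0.0, []
    for x in inputs:
        state = state * np.exp(-1.0 / tau) + x   # fading memory of past inputs
        out.append(1.0 / (1.0 + np.exp(-steepness * (state - threshold))))
    return np.array(out)

pulses = np.zeros(50)
pulses[[5, 8, 11]] = 0.6          # each pulse alone is sub-threshold...
print(toy_neuron(pulses).max())   # ...but closely spaced pulses integrate and trigger
```

The interplay of tunable fading memory and a steep trigger is what enables the multi-input integration and cascadability demonstrated in the experiments.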
We optimize Matrix-Product State (MPS)-based algorithms for simulating quantum circuits with finite fidelity, specifically the Time-Evolving Block Decimation (TEBD) and the Density-Matrix Renormalization Group (DMRG) algorithms, by exploiting the irregular arrangement of entangling operations in circuits. We introduce a variation of the standard TEBD algorithm, which we term "cluster-TEBD", that dynamically arranges qubits into entanglement clusters, enabling the exact contraction of multiple circuit layers in a single time step. Moreover, we enhance the DMRG algorithm by introducing an adaptive protocol which analyzes the entanglement distribution within each circuit section to be contracted, dynamically adjusting the qubit grouping at each iteration. We analyze the performance of these enhanced algorithms in simulating both stabilizer and non-stabilizer random circuits, with up to 1000 qubits and 100 layers of Clifford and non-Clifford gates, and in simulating Shor's quantum algorithm with tens of thousands of layers. Our findings show that, even with reasonable computational resources per task, cluster-based approaches can significantly speed up simulations of large-sized quantum circuits and improve the fidelity of the final states.
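A schematic of the clustering idea, merging the qubit supports of the gates in a block of layers into disjoint clusters via union-find, might look as follows; this omits the adaptive entanglement analysis that the actual protocol performs.

```python
# Toy clustering of qubits by gate support (schematic, not the full protocol).
def clusters(num_qubits, gates):
    parent = list(range(num_qubits))
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path compression
            x = parent[x]
        return x
    for q1, q2 in gates:                    # two-qubit gate supports
        parent[find(q1)] = find(q2)
    groups = {}
    for q in range(num_qubits):
        groups.setdefault(find(q), []).append(q)
    return list(groups.values())

# Gates collected from, say, three consecutive circuit layers:
print(clusters(8, [(0, 1), (2, 3), (1, 2), (5, 6)]))
# [[0, 1, 2, 3], [4], [5, 6], [7]] -> each cluster is contracted exactly
```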
Recent demonstrations on specialized benchmarks have reignited excitement for quantum computers, yet whether they can deliver an advantage for practical real-world problems remains an open question. Here, we show that probabilistic computers (p-computers), when co-designed with hardware to implement powerful Monte Carlo algorithms, provide a compelling and scalable classical pathway for solving hard optimization problems. We focus on two key algorithms applied to 3D spin glasses: discrete-time simulated quantum annealing (DT-SQA) and adaptive parallel tempering (APT). We benchmark these methods against the performance of a leading quantum annealer on the same problem instances. For DT-SQA, we find that increasing the number of replicas improves residual energy scaling, in line with expectations from extreme value theory. We then show that APT, when supported by non-local isoenergetic cluster moves, exhibits a more favorable scaling and ultimately outperforms DT-SQA. We demonstrate that these algorithms are readily implementable in modern hardware, projecting that custom Field Programmable Gate Arrays (FPGAs) or specialized chips can leverage massive parallelism to accelerate these algorithms by orders of magnitude while drastically improving energy efficiency. Our results establish a new, rigorous classical baseline, clarifying the landscape for assessing a practical quantum advantage and presenting p-computers as a scalable platform for real-world optimization challenges.
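For concreteness, the replica-swap rule at the heart of parallel tempering is sketched below; the adaptive temperature-set construction and the isoenergetic cluster moves of APT are omitted.

```python
# Metropolis swap rule between neighbouring parallel-tempering replicas.
import numpy as np

def attempt_swaps(energies, betas, rng):
    """Swap neighbouring replicas with probability min(1, exp(dbeta * dE))."""
    order = np.arange(len(betas))           # order[k]: replica at temperature k
    for k in range(len(betas) - 1):
        d = (betas[k] - betas[k + 1]) * (energies[order[k]] - energies[order[k + 1]])
        if rng.random() < min(1.0, np.exp(d)):
            order[k], order[k + 1] = order[k + 1], order[k]
    return order

rng = np.random.default_rng(0)
print(attempt_swaps(np.array([-90.0, -100.0, -120.0]),   # replica energies
                    np.array([2.0, 1.0, 0.5]), rng))     # inverse temperatures
```

Swaps let configurations trapped at low temperature escape through the hot replicas, which is the mechanism the FPGA implementations parallelize across the whole replica ladder.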
Despite their widespread use, purely data-driven methods often suffer from overfitting, lack of physical consistency, and high data dependency, particularly when physical constraints are not incorporated. This study introduces a novel data assimilation approach that integrates Graph Neural Networks (GNNs) with optimisation techniques to enhance the accuracy of mean flow reconstruction, using Reynolds-Averaged Navier-Stokes (RANS) equations as a baseline. The method leverages the adjoint approach, incorporating RANS-derived gradients as optimisation terms during GNN training, ensuring that the learned model adheres to physical laws and maintains consistency. Additionally, the GNN framework is well-suited for handling unstructured data, which is common in the complex geometries encountered in Computational Fluid Dynamics (CFD). The GNN is interfaced with the Finite Element Method (FEM) for numerical simulations, enabling accurate modelling in unstructured domains. We consider the reconstruction of mean flow past bluff bodies at low Reynolds numbers as a test case, addressing tasks such as sparse data recovery, denoising, and inpainting of missing flow data. The key strengths of the approach lie in its integration of physical constraints into the GNN training process, leading to accurate predictions with limited data, making it particularly valuable when data are scarce or corrupted. Results demonstrate significant improvements in the accuracy of mean flow reconstructions, even with limited training data, compared to analogous purely data-driven models.
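Schematically, the training objective augments the data misfit on the sparsely observed nodes with a physics residual whose gradient flows into the GNN; the residual below is a simple stand-in for the FEM/adjoint RANS term used in the paper.

```python
# Schematic physics-informed assimilation loss (the residual is a stand-in
# for the adjoint-derived RANS term; in practice u_pred comes from a GNN).
import torch

def physics_residual(u):
    # Placeholder surrogate: penalize rough fields; the real method evaluates
    # the RANS residual through the FEM solver and its adjoint.
    return (u[1:] - u[:-1]).pow(2).mean()

def assimilation_loss(u_pred, u_obs, mask, lam=0.1):
    data_term = ((u_pred - u_obs)[mask]).pow(2).mean()  # sparse observations only
    return data_term + lam * physics_residual(u_pred)

u_obs = torch.randn(100)                 # toy "measurements" on mesh nodes
mask = torch.rand(100) < 0.2             # only 20% of nodes observed
u_pred = torch.randn(100, requires_grad=True)
loss = assimilation_loss(u_pred, u_obs, mask)
loss.backward()                          # gradients include the physics term
print(float(loss), u_pred.grad.norm().item())
```

Because the physics term constrains the unobserved nodes, the reconstruction degrades gracefully as the observation mask shrinks, which is the data-scarce regime the paper targets.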
In the ever-evolving landscape of quantum cryptography, Device-independent Quantum Key Distribution (DI-QKD) stands out for its unique approach to ensuring security based not on the trustworthiness of the devices but on nonlocal correlations. Beginning with a contextual understanding of modern cryptographic security and the limitations of standard quantum key distribution methods, this review explores the pivotal role of nonclassicality and the challenges posed by various experimental loopholes for DI-QKD. Various protocols, security against individual, collective and coherent attacks, and the concept of self-testing are also examined, as well as the entropy accumulation theorem and additional mathematical methods in formulating advanced security proofs. In addition, the burgeoning field of semi-device-independent models (measurement-DI-QKD, receiver-DI-QKD, and one-sided DI-QKD) is also analyzed. The practical aspects are discussed through a detailed overview of experimental progress and the open challenges that remain before commercial deployment in the future of secure communications.
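As one concrete anchor for the security discussion, the well-known collective-attack bound of Acín et al. (2007) for the CHSH-based DI-QKD protocol relates the key rate r to the observed CHSH value S and the quantum bit error rate Q, with h the binary entropy:

```latex
% Collective-attack key-rate bound for CHSH-based DI-QKD (Acin et al., 2007)
r \;\geq\; 1 - h(Q) - h\!\left(\frac{1+\sqrt{(S/2)^{2}-1}}{2}\right),
\qquad
h(x) = -x\log_{2}x - (1-x)\log_{2}(1-x).
```

The bound is positive only when S exceeds the classical limit of 2 by a sufficient margin, which is why closing the experimental loopholes that inflate or fake S is central to the field.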