CEA-Leti
A comprehensive white paper from the GenAINet Initiative introduces Large Telecom Models (LTMs) as a novel framework for integrating AI into telecommunications infrastructure. Drawing on insights from a diverse coalition of academic, industry, and regulatory experts, it provides a detailed roadmap for innovation while addressing critical challenges in scalability, hardware requirements, and regulatory compliance.
This paper presents an adaptive framework for edge inference based on a dynamically configurable, transformer-powered deep joint source-channel coding (DJSCC) architecture. Motivated by a practical scenario in which a resource-constrained edge device engages in goal-oriented semantic communication, such as selectively transmitting essential features for object detection to an edge server, our approach enables efficient task-aware data transmission under varying bandwidth and channel conditions. To achieve this, input data is tokenized into compact, high-level semantic representations, refined by a transformer, and transmitted over noisy wireless channels. As part of the DJSCC pipeline, we employ a semantic token selection mechanism that adaptively compresses informative features into a user-specified number of tokens per sample. These tokens are then further compressed through the JSCC module, enabling a flexible token communication strategy that adjusts both the number of transmitted tokens and their embedding dimensions. We incorporate a resource allocation algorithm based on Lyapunov stochastic optimization to enhance robustness under dynamic network conditions, effectively balancing compression efficiency and task performance. Experimental results demonstrate that our system consistently outperforms existing baselines, highlighting its potential as a strong foundation for AI-native semantic communication in edge intelligence applications.
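As a rough illustration of the token-selection idea described above (not the authors' implementation), the following PyTorch sketch scores transformer tokens with a learned head and keeps only a user-specified budget of k tokens per sample before they enter the JSCC stage; the class name, shapes, and scoring head are assumptions.

```python
# Minimal sketch of adaptive semantic token selection: a learned score ranks
# transformer tokens and only the top-k are kept for transmission through the
# JSCC module. All names and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class TokenSelector(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)  # per-token importance score

    def forward(self, tokens: torch.Tensor, k: int) -> torch.Tensor:
        # tokens: (batch, num_tokens, dim); keep the k highest-scoring tokens
        scores = self.score(tokens).squeeze(-1)           # (batch, num_tokens)
        topk = scores.topk(k, dim=1).indices              # (batch, k)
        idx = topk.unsqueeze(-1).expand(-1, -1, tokens.size(-1))
        return torch.gather(tokens, 1, idx)               # (batch, k, dim)

# Usage: compress 196 refined tokens down to a user-specified budget of 16
selector = TokenSelector(dim=256)
x = torch.randn(8, 196, 256)
compact = selector(x, k=16)   # (8, 16, 256), fed to the JSCC encoder next
```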
This paper investigates the advantages of representing and processing extracted semantic knowledge as graphs within the emerging paradigm of semantic communications. The proposed approach leverages semantic and pragmatic aspects, incorporating recent advances in large language models (LLMs) to achieve compact representations of the knowledge to be processed and exchanged between intelligent agents. This is accomplished by using a cascade of LLMs and graph neural networks (GNNs) as semantic encoders, where the information to be shared is selected so as to be meaningful at the receiver. The embedding vectors produced by the proposed semantic encoder represent information in the form of triplets: node (a semantic concept or entity), edge (the relation between concepts), node. Thus, semantic information is associated with the representation of relationships among elements in the space of semantic concept abstractions. In this paper, we investigate the potential of achieving high compression rates in communication by incorporating the relations that link elements within graph embeddings. We propose sending only the semantic symbols equivalent to node embeddings through the wireless channel and inferring the complete knowledge graph at the receiver. Numerical simulations illustrate the effectiveness of leveraging knowledge graphs to semantically compress and transmit information.
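To make the node-only transmission idea concrete, the sketch below shows one plausible receiver-side relation decoder that scores candidate triplets from received node embeddings. The DistMult-style bilinear scorer, names, and dimensions are illustrative assumptions standing in for the GNN-based inference described in the paper.

```python
# Illustrative sketch: only node embeddings cross the channel, and the
# relations (edges) are inferred at the receiver with a learned scorer.
import torch
import torch.nn as nn

class RelationDecoder(nn.Module):
    def __init__(self, dim: int, num_relations: int):
        super().__init__()
        # one diagonal bilinear form per relation type (DistMult-style scoring)
        self.rel = nn.Parameter(torch.randn(num_relations, dim))

    def forward(self, nodes: torch.Tensor) -> torch.Tensor:
        # nodes: (num_nodes, dim) -> scores: (num_relations, num_nodes, num_nodes)
        weighted = nodes.unsqueeze(0) * self.rel.unsqueeze(1)   # (R, N, dim)
        return weighted @ nodes.t()                             # (R, N, N)

# Receiver side: noisy node embeddings come off the channel, edges are inferred
decoder = RelationDecoder(dim=64, num_relations=8)
received_nodes = torch.randn(20, 64)                 # 20 semantic concepts
edge_scores = decoder(received_nodes)                # plausibility of each triplet
triplets = (edge_scores.sigmoid() > 0.5).nonzero()   # (relation, head, tail) indices
```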
The 6G-GOALS project proposes a paradigm shift for 6G networks from traditional bit-level communication to semantic and goal-oriented communication, aiming for efficient and sustainable AI-Native interactions. It outlines a novel O-RAN-aligned architecture with a 'semantic plane' and details foundational research pillars, alongside two planned proofs of concept for real-time semantic communication and cooperative robotics.
Semantic communications focus on prioritizing the understanding of the meaning behind transmitted data and ensuring the successful completion of tasks that motivate the exchange of information. However, when devices rely on different languages, logic, or internal representations, semantic mismatches may occur, potentially hindering mutual understanding. This paper introduces a novel approach to addressing latent space misalignment in semantic communications, exploiting multiple-input multiple-output (MIMO) communications. Specifically, our method learns a MIMO precoder/decoder pair that jointly performs latent space compression and semantic channel equalization, mitigating both semantic mismatches and physical channel impairments. We explore two solutions: (i) a linear model, optimized by solving a biconvex optimization problem via the alternating direction method of multipliers (ADMM); (ii) a neural network-based model, which learns semantic MIMO precoder/decoder under transmission power budget and complexity constraints. Numerical results demonstrate the effectiveness of the proposed approach in a goal-oriented semantic communication scenario, illustrating the main trade-offs between accuracy, communication burden, and complexity of the solutions.
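The following NumPy sketch conveys the flavor of the linear solution: a precoder/decoder pair fitted to map transmitter latents through a known MIMO channel onto the receiver's latent space. It deliberately omits the power constraint and uses plain alternating least squares instead of the ADMM formulation used in the paper; all variable names and dimensions are assumptions.

```python
# Simplified sketch of a linear semantic MIMO precoder/decoder, fitted by
# alternating least squares (not the paper's ADMM solver, no power constraint).
# X: transmitter latents, Y: target receiver latents, H: known MIMO channel.
import numpy as np

rng = np.random.default_rng(0)
d_tx, d_rx, n_stream, n_samples = 32, 24, 8, 500
X = rng.standard_normal((d_tx, n_samples))      # transmitter latent vectors
Y = rng.standard_normal((d_rx, n_samples))      # aligned receiver latents
H = rng.standard_normal((n_stream, n_stream))   # MIMO channel matrix

P = rng.standard_normal((n_stream, d_tx))       # precoder (compresses latents)
D = rng.standard_normal((d_rx, n_stream))       # decoder (equalizes + maps)

for _ in range(50):
    # Fix P, solve least squares for D:  min_D || D (H P X) - Y ||^2
    Z = H @ P @ X
    D = Y @ Z.T @ np.linalg.pinv(Z @ Z.T)
    # Fix D, solve least squares for P:  min_P || (D H) P X - Y ||^2
    A = D @ H
    P = np.linalg.pinv(A) @ Y @ X.T @ np.linalg.pinv(X @ X.T)

print("fit error:", np.linalg.norm(D @ H @ P @ X - Y) / np.linalg.norm(Y))
```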
This paper explores the road to vastly improving the broadband connectivity in future 6G wireless systems. Different categories of use cases are considered, with peak data rates up to 1 Tbps. Several categories of enablers at the infrastructure, spectrum, and protocol/algorithmic levels are required to realize the intended broadband connectivity goals in 6G. At the infrastructure level, we consider ultra-massive MIMO technology (possibly implemented using holographic radio), intelligent reflecting surfaces, user-centric cell-free networking, integrated access and backhaul, and integrated space and terrestrial networks. At the spectrum level, the network must seamlessly utilize sub-6 GHz bands for coverage and spatial multiplexing of many devices, while higher bands will be mainly used for pushing the peak rates of point-to-point links. Finally, at the protocol/algorithmic level, the enablers include improved coding, modulation, and waveforms to achieve lower latency, higher reliability, and reduced complexity.
The field of neuromorphic computing has been evolving rapidly in recent years, with an increasing focus on hardware design and reliability. This special session paper provides an overview of these recent developments. We first review the traditional CMOS-based approaches to neuromorphic hardware design and identify the challenges related to scalability, latency, and power consumption. We then investigate alternative approaches based on emerging technologies, specifically the integrated photonics approaches pursued within the NEUROPULS project. Finally, we examine the impact of device variability and aging on the reliability of neuromorphic hardware and present techniques for mitigating these effects. This review is intended to serve as a valuable resource for researchers and practitioners in neuromorphic computing.
Optical parametric amplification (OPA) represents a powerful solution for achieving broadband amplification in wavelength ranges beyond the scope of conventional gain media, for generating high-power optical pulses, optical microcombs, and entangled photon pairs, and for a wide range of other applications. Here, we demonstrate optical parametric amplifiers based on silicon nitride (Si3N4) waveguides integrated with two-dimensional (2D) layered graphene oxide (GO) films. We achieve precise control over the thickness, length, and position of the GO films using a transfer-free, layer-by-layer coating method combined with accurate window opening in the chip cladding using photolithography. Detailed OPA measurements with a pulsed pump for the fabricated devices with different GO film thicknesses and lengths show a maximum parametric gain of ~24.0 dB, a ~12.2 dB improvement relative to the device without GO. We perform a theoretical analysis of the device performance, achieving good agreement with experiment and showing that there is substantial room for further improvement. This work represents the first demonstration of integrating 2D materials on chips to enhance OPA performance, providing a new route to high-performance integrated photonic OPAs incorporating 2D materials.
Multi-dimensional entangled photon states represent an important resource in quantum communication networks. Specifically, hyperentangled states, presenting simultaneous entanglement in several degrees of freedom (DoF), stand out for their noise resilience and information capacity. In this work, we demonstrate the generation of hyperentangled photon pairs in the time- and frequency-bin domains by spontaneous four-wave mixing from the coherent driving of two integrated silicon microresonators. We demonstrate entanglement in each DoF by proving the violation of the Clauser-Horne-Shimony-Holt (CHSH) inequality by more than 27 standard deviations (STDs) in each reduced space. Genuine hyperentanglement is then assessed from the negativity of a hyperentanglement witness, which is verified by more than 60 STDs. These results mark, to the best of our knowledge, the first demonstration of time-frequency-bin hyperentanglement in an integrated silicon photonic device.
This paper explores the combination of neural network quantization and entropy coding for memory footprint minimization. Edge deployment of quantized models is hampered by the harsh Pareto frontier of the accuracy-to-bitwidth tradeoff, causing dramatic accuracy loss below a certain bitwidth. This accuracy loss can be alleviated through mixed-precision quantization, which allows for more flexible bitwidth allocation. However, the benefits of standard mixed precision remain limited due to the 1-bit frontier, which forces each parameter to be encoded on at least 1 bit of data. This paper introduces an approach that combines mixed precision, zero-point quantization, and entropy coding to push the compression boundary of ResNets beyond the 1-bit frontier, with an accuracy drop below 1% on the ImageNet benchmark. From an implementation standpoint, a compact decoder architecture features reduced latency, thus allowing for inference-compatible decoding.
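As a back-of-the-envelope illustration of why entropy coding can break the 1-bit frontier (not the paper's pipeline), the sketch below applies symmetric zero-point quantization to a synthetic, Laplace-distributed weight tensor and computes the empirical entropy, which lower-bounds the average bits per parameter achievable by an ideal entropy coder.

```python
# Symmetric (zero-point) quantization concentrates weights on the zero level,
# so the empirical entropy per parameter can fall well below the nominal
# bitwidth. Purely illustrative; the weight distribution is synthetic.
import numpy as np

rng = np.random.default_rng(0)
w = rng.laplace(scale=0.02, size=100_000)        # stand-in for a ResNet layer

bits = 2                                         # nominal 2-bit quantization grid
levels = 2 ** bits
scale = np.abs(w).max() / (levels // 2)
q = np.clip(np.round(w / scale), -(levels // 2), levels // 2 - 1).astype(int)

# Empirical entropy = lower bound on bits/parameter after entropy coding
_, counts = np.unique(q, return_counts=True)
p = counts / counts.sum()
entropy = -(p * np.log2(p)).sum()
print(f"nominal {bits} bits/param, entropy-coded ~{entropy:.2f} bits/param")
```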
Several speculative visions conjecture what 6G services will be able to offer by 2030. Nevertheless, the 6G design process is still at a preliminary stage. The reality today is that the hardware, technologies, and new materials required to effectively meet the unprecedented performance targets of future 6G services and network operation have not yet been designed or tested, or do not even exist. A solid vision of the cost-benefit trade-offs of machine learning and artificial intelligence support for optimizing 6G network and service operation is still missing. This includes the possible gains in hardware efficiency and operational effectiveness, as well as the hard-to-quantify cost of data acquisition, transfer, and processing. The contribution of this paper is three-fold. First, it is the first paper to derive crucial 6G key performance indicators for hardware and technology design. Second, we present a new hardware technology design methodology conceived to enable the effective software-hardware integration required to meet the challenging performance envisioned for future 6G networks. Third, we suggest a paradigm shift towards goal-oriented and semantic communications, which opens an entirely new opportunity for the joint design of hardware, artificial intelligence, and effective communication. The proposed vision is consolidated by our recent results on hardware, technology, and machine learning performance.
In future 6G wireless networks, semantic and effectiveness aspects of communications will play a fundamental role, incorporating meaning and relevance into transmissions. However, obstacles arise when devices employ diverse languages, logic, or internal representations, leading to semantic mismatches that might jeopardize understanding. In latent space communication, this challenge manifests as misalignment within high-dimensional representations where deep neural networks encode data. This paper presents a novel framework for goal-oriented semantic communication, leveraging relative representations to mitigate semantic mismatches via latent space alignment. We propose a dynamic optimization strategy that adapts relative representations, communication parameters, and computation resources for energy-efficient, low-latency, goal-oriented semantic communications. Numerical results demonstrate our methodology's effectiveness in mitigating mismatches among devices, while optimizing energy consumption, delay, and effectiveness.
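A minimal sketch of the relative-representation idea, under the common formulation where each latent vector is re-expressed by its cosine similarities to a shared set of anchor samples: two encoders whose latent spaces differ by an orthogonal transformation then produce identical relative coordinates. The anchor count, dimensions, and rotation model are illustrative assumptions.

```python
# Relative representations: describe each latent by its cosine similarities to
# shared anchors, making latents from independently trained encoders comparable.
import numpy as np

def relative_representation(z: np.ndarray, anchors: np.ndarray) -> np.ndarray:
    """z: (n, d) absolute latents; anchors: (k, d) latents of shared anchor inputs."""
    z_n = z / np.linalg.norm(z, axis=1, keepdims=True)
    a_n = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    return z_n @ a_n.T                      # (n, k) coordinates relative to anchors

# Two mismatched encoders whose latent spaces differ by a random rotation
rng = np.random.default_rng(0)
R, _ = np.linalg.qr(rng.standard_normal((16, 16)))    # random orthogonal map
latents_A = rng.standard_normal((100, 16))            # encoder A's latents
anchors_A = rng.standard_normal((10, 16))             # latents of shared anchors
latents_B, anchors_B = latents_A @ R, anchors_A @ R   # encoder B: rotated space

rel_A = relative_representation(latents_A, anchors_A)
rel_B = relative_representation(latents_B, anchors_B)
print("max mismatch:", np.abs(rel_A - rel_B).max())   # ~1e-15: spaces aligned
```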
We focus on collaborative edge inference over wireless, which enables multiple devices to cooperate to improve inference performance in the presence of corrupted data. Exploiting a key-query mechanism for selective information exchange (or, equivalently, group formation for collaboration), we recall the effect of wireless channel impairments on feature communication. We argue and show that a disjoint approach, which considers only either the semantic relevance or the channel state between devices, performs poorly, especially in harsh propagation conditions. Based on these findings, we propose a joint approach that accounts for both semantic information relevance and channel state when grouping devices for collaboration, by making the general attention weights dependent on the channel information. Numerical simulations show the superiority of the joint approach over local inference on corrupted data, as well as over collaborative inference with disjoint decisions that consider either application- or physical-layer parameters when forming groups.
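One simple way to realize such channel-dependent attention weights, sketched below, is to add a per-link channel-quality bias (here a log-SNR term, which is an assumption rather than the paper's exact rule) to the semantic key-query logits before the softmax, so that poor links are down-weighted when forming collaboration groups.

```python
# Channel-aware key-query attention: semantic relevance and link quality are
# combined before normalization. The additive log-SNR bias is illustrative.
import torch

def channel_aware_attention(q, k, v, snr_linear):
    # q: (1, d) query of the requesting device; k, v: (n, d) keys/features of
    # helper devices; snr_linear: (n,) channel SNRs toward the requester.
    d = q.size(-1)
    semantic_logits = (q @ k.t()) / d ** 0.5           # (1, n) semantic relevance
    channel_bias = torch.log(snr_linear).unsqueeze(0)  # (1, n) link quality
    weights = torch.softmax(semantic_logits + channel_bias, dim=-1)
    return weights @ v                                 # fused feature

q = torch.randn(1, 64)
k, v = torch.randn(5, 64), torch.randn(5, 64)
snr = torch.tensor([10.0, 0.1, 3.0, 8.0, 0.5])         # linear-scale SNRs
fused = channel_aware_attention(q, k, v, snr)          # (1, 64)
```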
In this paper, we propose Gradient Descent Bit-Flipping (GDBF) decoding with momentum, which takes past updates into account to provide inertia to the decoding process. We show that GDBF or randomized GDBF decoders with momentum may closely approach the floating-point Belief-Propagation decoding performance, and even outperform it in the error-floor region, especially for graphs with a high connectivity degree.
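An illustrative NumPy sketch of bit-flipping with momentum follows: each bit accumulates an exponentially weighted history of its inversion metric and is flipped when the accumulated value falls below a threshold. The exact inversion function, momentum update, and thresholding rule of the paper may differ; the parity-check matrix below is a toy example, not a real LDPC code.

```python
# Sketch of gradient-descent bit-flipping decoding with a momentum term.
import numpy as np

def gdbf_momentum(y, H, beta=0.9, threshold=-0.5, max_iter=50):
    """y: received BPSK channel values; H: (m, n) binary parity-check matrix."""
    x = np.sign(y)                              # hard decisions in {-1, +1}
    momentum = np.zeros_like(y)
    for _ in range(max_iter):
        syndrome = 1 - 2 * ((H @ (x < 0).astype(int)) % 2)   # +1 ok, -1 violated
        if np.all(syndrome == 1):
            break
        # inversion metric: channel agreement + sum of adjacent check signs
        energy = x * y + H.T @ syndrome
        momentum = beta * momentum + (1 - beta) * energy
        x = np.where(momentum < threshold, -x, x)            # flip low-energy bits
    return (x < 0).astype(int)

# Toy example: all-zero codeword with one unreliable bit
H = np.array([[1, 1, 0, 1, 0, 0],
              [0, 1, 1, 0, 1, 0],
              [1, 0, 1, 0, 0, 1]])
y = np.array([0.9, 1.1, -0.2, 0.8, 1.0, 1.2])
print(gdbf_momentum(y, H))   # -> [0 0 0 0 0 0]
```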
This paper addresses the efficient management of Mobile Access Points (MAPs), which are Unmanned Aerial Vehicles (UAVs), in 5G networks. We propose a two-level hierarchical architecture that dynamically reconfigures the network while considering Integrated Access-Backhaul (IAB) constraints. The high-layer decision process determines the number of MAPs through consensus, and we develop a joint optimization process to account for co-dependence in network self-management. In the low layer, MAPs manage their placement using a double-attention-based Deep Reinforcement Learning (DRL) model that encourages cooperation without retraining. To improve generalization and reduce complexity, we propose a federated mechanism for training and sharing a single placement model across all MAPs in the low layer. Additionally, we jointly optimize the placement and backhaul connectivity of MAPs using a multi-objective reward function, considering the impact of varying MAP placement on wireless backhaul connectivity.
We investigate the resilience of silicon-based spin qubits against non-Markovian noise within the framework of quantum error correction. We consider a realistic non-Markovian noise model that affects both the Larmor frequency and exchange energy of qubits, allowing accurate simulations of noisy quantum circuits. We employ numerical emulation to assess the performance of the distance-3 rotated surface code and its XZZX variant, using a logical qubit coherence time metric based on Ramsey-like experiments. Our numerical results suggest that quantum error correction converts non-Markovian physical noise into Markovian logical noise, resulting in a quartic dependence of the logical qubit coherence time on the physical one. Additionally, we analyze the effects of spatial noise correlations and sparse architectures, substantiating the robustness of quantum error correction in silicon-based spin qubit systems.
The world is currently experiencing an unprecedented growth in the generation of application data, from sensor measurements to video streams, thanks to the extreme connectivity provided by 5G networks. Going beyond 5G technology, such data are meant to be ingested by Artificial Intelligence (AI) functions instantiated in the network to facilitate informed decisions, essential for the operation of applications such as automated driving and factory automation. Nonetheless, while computing platforms hosting Machine Learning (ML) models are ever more powerful, their energy footprint is a key factor impeding the realization of a wireless network as a sustainable intelligent platform. Focusing on a beyond-5G wireless network overlaid by a Multi-access Edge Computing (MEC) infrastructure with inferencing capabilities, our paper tackles the problem of energy-aware dependable inference by considering inference effectiveness as the value of a goal that needs to be accomplished at the minimum price in energy consumption. Both MEC-assisted standalone and ensemble inference options are evaluated. It is shown that, for some system scenarios, goal effectiveness above 84% is achieved and sustained even when communication reliability requirements are relaxed by one decimal digit, while at the same time reducing device radio energy consumption by almost 23%. Also, ensemble inference is shown to improve system-wide energy efficiency and even achieve higher goal effectiveness compared to the standalone case for some system parameterizations.
Distance Geometry plays a central role in determining protein structures from Nuclear Magnetic Resonance (NMR) data, a task known as the Molecular Distance Geometry Problem (MDGP). A subclass of this problem, the Discretizable Distance Geometry Problem (DDGP), allows a recursive solution via the combinatorial Branch-and-Prune (BP) algorithm by exploiting specific vertex orderings in protein backbones. To accommodate the inherent uncertainty in NMR data, the interval Branch-and-Prune (\textit{i}BP) algorithm was introduced, incorporating interval distance constraints through uniform sampling. In this work, we propose two new algorithmic frameworks for solving the three-dimensional interval DDGP (\textit{i}DDGP): the interval Angular Branch-and-Prune (\textit{i}ABP), and its extension, the interval Torsion-angle Branch-and-Prune (\textit{i}TBP). These methods convert interval distances into angular constraints, enabling structured sampling over circular arcs. The \textit{i}ABP method guarantees feasibility by construction and removes the need for explicit constraint checking. The \textit{i}TBP algorithm further incorporates known torsion angle intervals, enforcing local chirality and planarity conditions critical for protein geometry. We present formal mathematical foundations for both methods and a systematic strategy for generating biologically meaningful \textit{i}DDGP instances from the Protein Data Bank (PDB) structures. Computational experiments demonstrate that both \textit{i}ABP and \textit{i}TBP consistently outperform \textit{i}BP in terms of solution rate and computational efficiency. In particular, \textit{i}TBP yields solutions with lower RMSD variance relative to the original PDB structures, better reflecting biologically plausible conformations.
5G radio at millimeter wave (mmWave) and beyond-5G concepts at 0.1-1 THz can exploit angle and delay measurements for localization, by virtue of increased bandwidth and large antenna arrays, but are limited by blockage caused by obstacles. Reconfigurable intelligent surfaces (RISs) are seen as a transformative technology that can control the physical propagation environment in which they are embedded by passively reflecting EM waves in preferred directions. Whereas such RISs have been mainly intended for communication purposes, they can bring great benefits in terms of performance, energy consumption, and cost for localization and mapping. These benefits, as well as the associated challenges, are the main topics of this paper.