Current applications of self-supervised learning to wireless channel representation often borrow paradigms developed for text and image processing, without fully addressing the unique characteristics and constraints of wireless communications. To bridge this gap, we introduce ContraWiMAE, a Wireless Contrastive Masked Autoencoder: a transformer-based foundation model that unifies masked reconstruction and masked contrastive learning for wireless channel representation. Our key innovation is a new wireless-inspired contrastive objective that exploits the inherent characteristics of the wireless environment, including noise, fading, and partial observability, as natural augmentation. Through extensive evaluation on unseen scenarios and conditions, we demonstrate our method's effectiveness in multiple downstream tasks, including cross-frequency beam selection, line-of-sight detection, and channel estimation. ContraWiMAE exhibits superior linear separability and adaptability in diverse wireless environments, demonstrating exceptional data efficiency and competitive performance compared with supervised baselines under challenging conditions. Comparative evaluations against a state-of-the-art wireless channel foundation model confirm the superior performance and data efficiency of our approach, highlighting its potential as a powerful baseline for future research in self-supervised wireless channel representation learning. To foster further work in this direction, we release the model weights and training pipeline for ContraWiMAE.
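The abstract does not give the exact loss; the following is a minimal PyTorch-style sketch of the general idea of combining a masked-reconstruction term with an InfoNCE-style contrastive term that uses additive noise as a "natural" wireless augmentation. All module and parameter names (encoder, decoder, proj_head, alpha, tau) are hypothetical, not from the paper.

```python
# Hedged sketch: masked reconstruction + contrastive objective on channel patches.
# encoder/decoder/proj_head are assumed user-defined modules; shapes are illustrative.
import torch
import torch.nn.functional as F

def contrastive_masked_loss(encoder, decoder, proj_head, H, mask, alpha=0.5, tau=0.1):
    """H: (B, N, D) channel patches; mask: (B, N) boolean, True = masked patch."""
    # Masked reconstruction branch: encode visible patches, reconstruct, score masked ones
    z = encoder(H * (~mask).unsqueeze(-1))           # zero out masked patches
    H_hat = decoder(z)                               # assumed to return (B, N, D)
    rec = F.mse_loss(H_hat[mask], H[mask])

    # Contrastive branch: a noisy copy of the same channel acts as the positive pair
    H_aug = H + 0.05 * torch.randn_like(H)           # stand-in for noise/fading augmentation
    q = F.normalize(proj_head(encoder(H).mean(dim=1)), dim=-1)
    k = F.normalize(proj_head(encoder(H_aug).mean(dim=1)), dim=-1)
    logits = q @ k.T / tau                           # (B, B) similarity matrix
    labels = torch.arange(q.size(0), device=q.device)
    con = F.cross_entropy(logits, labels)            # InfoNCE over the batch

    return alpha * rec + (1 - alpha) * con
```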
Today, Wi-Fi is over 25 years old. Yet, despite sharing the same brand name, today's Wi-Fi boasts entirely new capabilities that were not even on the roadmap 25 years ago. This article aims to provide a holistic and comprehensive technical and historical tutorial on Wi-Fi, beginning with IEEE 802.11b (Wi-Fi 1) and looking forward to IEEE 802.11bn (Wi-Fi 8). This is the first tutorial article to span these eight generations. Rather than a generation-by-generation exposition, we describe the key mechanisms that have advanced Wi-Fi. We begin by discussing spectrum allocation and coexistence, and detailing the IEEE 802.11 standardization cycle. Second, we provide an overview of the physical layer and describe key elements that have enabled data rates to increase by over 1,000x. Third, we describe how Wi-Fi Medium Access Control has been enhanced from the original Distributed Coordination Function to now include capabilities spanning from frame aggregation to wideband spectrum access. Fourth, we describe how Wi-Fi 5 first broke the one-user-at-a-time paradigm and introduced multi-user access. Fifth, given the increasing use of mobile, battery-powered devices, we describe Wi-Fi's energy-saving mechanisms over the generations. Sixth, we discuss how Wi-Fi was enhanced to seamlessly aggregate spectrum across the 2.4 GHz, 5 GHz, and 6 GHz bands to improve throughput, reliability, and latency. Finally, we describe how Wi-Fi enables nearby Access Points to coordinate in order to improve performance and efficiency. In the Appendix, we further discuss Wi-Fi developments beyond 802.11bn, including integrated mmWave operations, sensing, security and privacy extensions, and the adoption of AI/ML.
Despite remarkable advances in the field, LLMs remain unreliable in distinguishing causation from correlation. Recent results on the Corr2Cause benchmark reveal that state-of-the-art LLMs, such as GPT-4 (F1 score: 29.08), only marginally outperform random baselines (Random Uniform, F1 score: 20.38), indicating limited generalization capacity. To tackle this limitation, we propose a novel structured approach: rather than answering causal queries directly, we guide the model to build a structured knowledge graph that systematically encodes the provided correlational premises, and to answer the causal queries from this graph. This intermediate representation significantly enhances the model's causal capabilities. Experiments on the test subset of the Corr2Cause benchmark with the Qwen3-32B reasoning model show substantial gains over standard direct prompting, improving the F1 score from 32.71 to 48.26 (over a 47.5% relative increase), along with notable improvements in precision and recall. These results underscore the effectiveness of letting the model structure its thinking in this way and highlight its promising potential for broader generalization across diverse causal inference tasks.
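The abstract does not show the actual prompt; the sketch below only illustrates the idea of asking the model to first encode the correlational premises as a graph and only then answer the causal query. The step wording, edge labels, and function name are hypothetical.

```python
# Hedged sketch: a structured prompt that builds a knowledge graph before answering.
def build_structured_prompt(premises: list[str], query: str) -> str:
    steps = (
        "Step 1: List all variables mentioned in the premises.\n"
        "Step 2: Build a knowledge graph: one edge per premise, labelled "
        "'correlated', 'independent', or 'conditionally independent (given ...)'.\n"
        "Step 3: Using only this graph, reason about which causal structures "
        "(chains, forks, colliders) are consistent with it.\n"
        "Step 4: Answer the causal query with 'valid' or 'invalid' and justify.\n"
    )
    premise_block = "\n".join(f"- {p}" for p in premises)
    return f"Premises:\n{premise_block}\n\n{steps}\nCausal query: {query}\nAnswer:"
```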
Automatic Modulation Classification (AMC) is a signal processing technique widely used at the physical layer of wireless systems to enhance spectrum utilization efficiency. In this work, we propose a fast and accurate AMC system, termed DL-AMC, which leverages deep learning techniques. Specifically, DL-AMC is built using convolutional neural network (CNN) architectures, including ResNet-18, ResNet-50, and MobileNetV2. To evaluate its performance, we curated a comprehensive dataset containing various modulation schemes. Each modulation type was transformed into an eye diagram, with signal-to-noise ratio (SNR) values ranging from -20 dB to 30 dB. We trained the CNN models on this dataset to enable them to learn the discriminative features of each modulation class effectively. Experimental results show that the proposed DL-AMC models achieve high classification accuracy, especially in low-SNR conditions. These results highlight the robustness and efficacy of DL-AMC in accurately classifying modulations in challenging wireless environments.
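As a minimal sketch of how such a CNN classifier over eye-diagram images can be set up with torchvision: the class count, input size, and ImageNet initialization below are assumptions, not details taken from the paper (which may train from scratch).

```python
# Hedged sketch: ResNet-18 with a replaced classification head for modulation classes.
import torch
import torch.nn as nn
from torchvision import models

def build_amc_model(num_classes: int) -> nn.Module:
    model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    model.fc = nn.Linear(model.fc.in_features, num_classes)  # new classifier head
    return model

model = build_amc_model(num_classes=8)            # 8 modulation classes (assumption)
logits = model(torch.randn(4, 3, 224, 224))       # batch of 4 eye-diagram images
print(logits.shape)                               # torch.Size([4, 8])
```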
In recent years, advances in immersive multimedia technologies, such as extended reality (XR), have led to more realistic and user-friendly devices. However, these devices are often bulky and uncomfortable, and still require tethered connectivity for demanding applications. The deployment of the fifth generation of telecommunications technologies (5G) has set the basis for XR offloading solutions with the goal of enabling lighter and fully wearable XR devices. In this paper, we present a traffic dataset for two demanding XR offloading scenarios that are complementary to those available in the current state of the art, captured using a fully developed end-to-end XR offloading solution. We also propose a set of accurate traffic models for the proposed scenarios based on the captured data, accompanied by a simple and consistent method to generate synthetic data from the fitted models. Finally, using an open-source 5G radio access network (RAN) emulator, we validate the models both at the application and resource allocation layers. Overall, this work aims to provide a valuable contribution to the field with data and tools for designing, testing, improving, and extending XR offloading solutions in academia and industry.
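The abstract does not specify which distributions were fitted; the sketch below only illustrates the generic recipe of fitting a parametric model to captured traffic (e.g., frame sizes or inter-arrival times) and sampling synthetic traces from it. The lognormal choice and all numbers are placeholders.

```python
# Hedged sketch: fit a candidate distribution to captured traffic and generate synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
frame_sizes = rng.lognormal(mean=10.5, sigma=0.4, size=5000)   # stand-in for captured data

shape, loc, scale = stats.lognorm.fit(frame_sizes, floc=0)     # fit one candidate model
synthetic = stats.lognorm.rvs(shape, loc=loc, scale=scale, size=5000, random_state=1)

# A goodness-of-fit check (e.g., two-sample Kolmogorov-Smirnov) helps compare candidates
print(stats.ks_2samp(frame_sizes, synthetic))
```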
This paper discusses the Life-Based Design (LBD) methodology within the context of designing technologies for reaching a state of solitude, a state in which a person wishes to minimize her social contacts to gain space or freedom.
This paper explores the road to vastly improved broadband connectivity in future 6G wireless systems. Different categories of use cases are considered, with peak data rates up to 1 Tbps. Several categories of enablers at the infrastructure, spectrum, and protocol/algorithmic levels are required to realize the intended broadband connectivity goals in 6G. At the infrastructure level, we consider ultra-massive MIMO technology (possibly implemented using holographic radio), intelligent reflecting surfaces, user-centric cell-free networking, integrated access and backhaul, and integrated space and terrestrial networks. At the spectrum level, the network must seamlessly utilize sub-6 GHz bands for coverage and spatial multiplexing of many devices, while higher bands will be mainly used for pushing the peak rates of point-to-point links. Finally, at the protocol/algorithmic level, the enablers include improved coding, modulation, and waveforms to achieve lower latency, higher reliability, and reduced complexity.
The introduction of Integrated Sensing and Communications (ISAC) in cellular systems is not expected to result in a shift away from the popular choice of cost- and energy-efficient analog or hybrid beamforming structures. However, this comes at the cost of limiting the angular capabilities to a confined space per acquisition. Thus, as a prerequisite for the successful implementation of numerous ISAC use cases, there arises a need for optimal angular estimation of targets, and their separation, based on a minimal number of angular samples. In this work, different approaches for angular estimation based on a minimal, DFT-based set of angular samples are evaluated. The samples are acquired by sweeping multiple beams of an ISAC proof of concept (PoC) in the industrial scenario of the ARENA2036. The study's findings indicate that interpolation approaches are more effective for generalizing across different types of angular scenarios. While the orthogonal matching pursuit (OMP) approach exhibits the most accurate estimation for a single, strong, and clearly discriminable target, the DFT-based interpolation approach demonstrates the best overall estimation performance.
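The abstract does not detail the algorithms; the toy numpy example below only shows the atom-selection step of OMP for a single target, using a handful of DFT-beam samples and a steering-vector dictionary. The array size, beam subset, and the 17-degree target are illustrative assumptions, not the PoC's processing chain.

```python
# Hedged sketch: single-target angle estimation from a few DFT-beam samples via OMP-style matching.
import numpy as np

N = 32                                                  # ULA elements (assumption)
n = np.arange(N)
angles = np.deg2rad(np.linspace(-60, 60, 241))          # candidate-angle grid
A = np.exp(1j * np.pi * np.outer(n, np.sin(angles)))    # steering-vector dictionary

u_beams = -1 + 2 * np.arange(N) / N                     # DFT beams, uniform in sine space
W = np.exp(1j * np.pi * np.outer(n, u_beams)) / np.sqrt(N)

u_t = np.sin(np.deg2rad(17))                            # unknown target direction (toy)
target = np.exp(1j * np.pi * n * u_t)
sel = np.argsort(np.abs(u_beams - u_t))[:5]             # the few beams swept over the sector
y = W[:, sel].conj().T @ target                         # minimal set of angular samples

D = W[:, sel].conj().T @ A                              # dictionary mapped to the beam domain
corr = np.abs(D.conj().T @ y) / (np.linalg.norm(D, axis=0) + 1e-12)
print(f"estimated angle ~ {np.rad2deg(angles[np.argmax(corr)]):.1f} deg")
```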
Classical and centralized Artificial Intelligence (AI) methods require moving data from producers (sensors, machines) to energy-hungry data centers, raising environmental concerns due to computational and communication resource demands, while also compromising privacy. Emerging alternatives that mitigate such high energy costs propose to efficiently distribute, or federate, the learning tasks across devices, which are typically low-power. This paper proposes a novel framework for the analysis of energy and carbon footprints in distributed and federated learning (FL). The proposed framework quantifies both the energy footprint and the carbon-equivalent emissions of vanilla FL methods and of consensus-based, fully decentralized approaches. We discuss optimal bounds and operational points that support green FL designs and underpin their sustainability assessment. Two case studies from emerging 5G industry verticals are analyzed: they quantify the environmental footprints of continual and reinforcement learning setups, where the training process is repeated periodically for continuous improvement. In all cases, the sustainability of distributed learning relies on the fulfillment of specific requirements on communication efficiency and learner population size. Energy and test accuracy should also be traded off, considering the model and data footprints of the targeted industrial applications.
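The abstract does not spell out the accounting; as a rough illustration of the kind of quantities such a framework tracks, total energy can be accumulated over training rounds and learners and converted to carbon via the local carbon intensity (the symbols below are illustrative, not the paper's notation):

E_{\mathrm{tot}} = \sum_{t=1}^{T}\sum_{k=1}^{K}\left(E^{\mathrm{comp}}_{k,t} + E^{\mathrm{comm}}_{k,t}\right), \qquad C_{\mathrm{tot}} = \mathrm{CI}\cdot E_{\mathrm{tot}},

where T is the number of training rounds, K the number of learners, and CI the carbon intensity (kg CO2-eq per kWh) of the energy mix powering computation and communication.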
This paper investigates how near-field beamfocusing can be achieved using a modular linear array (MLA), composed of multiple widely spaced uniform linear arrays (ULAs). The MLA architecture extends the aperture length of a standard ULA without adding antennas, thereby enabling near-field beamfocusing without increasing processing complexity. Unlike conventional far-field beamforming, near-field beamfocusing enables simultaneous data transmission to multiple users at different distances in the same angular interval, offering significant multiplexing gains. We present a detailed mathematical analysis of the beamwidth and beamdepth achievable with the MLA and show that by appropriately selecting the number of antennas in each constituent ULA, ideal near-field beamfocusing can be realized. In addition, we propose a computationally efficient localization method that fuses estimates from each ULA, enabling efficient parametric channel estimation. Simulation results confirm the accuracy of the analytical expressions and that MLAs achieve near-field beamfocusing with a limited number of antennas, making them a promising solution for next-generation wireless systems.
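For context (a standard array-theory relation, not restated in the abstract): the radiative near field, where focusing in depth is possible, extends roughly up to the Fraunhofer distance, so enlarging the aperture by spacing out the ULA modules pushes users into the near field without extra antennas:

d_F = \frac{2 D^2}{\lambda},

where D is the total aperture length of the MLA and \lambda the carrier wavelength; users at distances below d_F can be separated in both angle and depth.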
Autoencoder-based deep learning is applied to jointly optimize geometric and probabilistic constellation shaping for coherent optical communication. The optimized constellation shaping outperforms 256-QAM with a Maxwell-Boltzmann probabilistic distribution by an extra 0.05 bits/4D-symbol of mutual information for 64 GBd transmission over a 170 km SMF link.
The transition to 6G has driven significant updates to the 3GPP channel model, particularly in modeling user equipment (UE) antennas and user-induced blockage for handheld devices. The 3GPP Rel.19 revision of TR 38.901 introduces a more realistic framework that captures directive antenna patterns, practical antenna placements, polarization effects, and element-specific blockage. These updates are based on high-fidelity simulations and measurements of a reference smartphone across multiple frequency ranges. By aligning link- and system-level simulations with real-world device behavior, the new model enables more accurate evaluation of 6G technologies and supports consistent performance assessment across industry and research.
As the number of user equipments (UEs) with diverse data rate and latency requirements increases in wireless networks, the resource allocation problem for orthogonal frequency-division multiple access (OFDMA) becomes challenging. In particular, varying requirements lead to a non-convex optimization problem when maximizing the system's data rate while preserving fairness between UEs. In this paper, we solve the non-convex optimization problem using deep reinforcement learning (DRL). We outline, train, and evaluate a DRL agent that performs medium access control scheduling for a downlink OFDMA scenario. To kickstart the training of our agent, we introduce mimicking learning. To improve scheduling performance, full buffer state information at the base station (e.g., packet age and packet size) is taken into account. Techniques such as input feature compression, packet shuffling, and age capping further improve the performance of the agent. We train and evaluate our agents using Nokia's wireless suite and compare them against different benchmark agents. We show that our agents clearly outperform the benchmark agents.
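A toy sketch of the kind of scheduling step the abstract describes: per-UE buffer features (such as packet age and size) form the state, and a policy picks one UE per resource block. The feature set, network sizes, and reward comment are illustrative assumptions, not the paper's design.

```python
# Hedged sketch: one scheduling decision from per-UE buffer/channel state.
import torch
import torch.nn as nn

n_ues, n_features = 8, 4            # e.g. buffer size, packet age, CQI, past rate (assumed)
policy = nn.Sequential(nn.Linear(n_ues * n_features, 64), nn.ReLU(), nn.Linear(64, n_ues))

state = torch.randn(1, n_ues * n_features)                            # flattened per-UE state
logits = policy(state)
action = torch.distributions.Categorical(logits=logits).sample()      # UE to schedule
# A reward balancing throughput and fairness (e.g. sum rate minus a fairness penalty)
# would be fed back to a DRL algorithm such as DQN or PPO to update `policy`.
print(f"schedule UE {action.item()} on this resource block")
```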
With the impending arrival of the sixth generation (6G) of wireless communication technology, the telecommunications landscape is poised for another revolutionary transformation. At the forefront of this evolution are intelligent meta-surfaces (IS), emerging as a disruptive physical layer technology with the potential to redefine the capabilities and performance metrics of future wireless networks. As 6G evolves from concept to reality, industry stakeholders, standards organizations, and regulatory bodies are collaborating to define the specifications, protocols, and interoperability standards governing IS deployment. Against this background, this article delves into the ongoing standardization efforts, emerging trends, potential opportunities, and prevailing challenges surrounding the integration of IS into the framework of 6G and beyond networks. Specifically, it provides a tutorial-style overview of recent advancements in IS and explores their potential applications within future networks beyond 6G. Additionally, the article identifies key challenges in the design and implementation of various types of intelligent surfaces, along with considerations for their practical standardization. Finally, it highlights potential future prospects in this evolving field.
Integrated sensing and communication (ISAC) enables radio systems to simultaneously sense and communicate with their environment. This paper, developed within the Hexa-X-II project funded by the European Union, presents a comprehensive cross-layer vision for ISAC in 6G networks, integrating insights from physical-layer design, hardware architectures, AI-driven intelligence, and protocol-level innovations. We begin by revisiting the foundational principles of ISAC, highlighting synergies and trade-offs between sensing and communication across different integration levels. Enabling technologies (such as multiband operation, massive and distributed MIMO, non-terrestrial networks, reconfigurable intelligent surfaces, and machine learning) are analyzed in conjunction with hardware considerations including waveform design, synchronization, and full-duplex operation. To bridge implementation and system-level evaluation, we introduce a quantitative cross-layer framework linking design parameters to key performance and value indicators. By synthesizing perspectives from both academia and industry, this paper outlines how deeply integrated ISAC can transform 6G into a programmable and context-aware platform supporting applications from reliable wireless access to autonomous mobility and digital twinning.
We investigate the generation of non-stabilizerness, or magic, in a multi-particle quantum walk by analyzing the time evolution of the stabilizer Rényi entropy M_2. Our study considers both single- and two-particle quantum walks in the framework of the XXZ Heisenberg model with varying interaction strengths. We demonstrate that the spread of magic follows the light-cone structure dictated by the system's dynamics, with distinct behaviors emerging in the easy-plane (Δ < 1) and easy-axis (Δ > 1) regimes. For Δ < 1, magic generation is primarily governed by single-particle dynamics, while for Δ > 1, doublon propagation dominates, resulting in a significantly slower growth of M_2. Furthermore, the magic exhibits logarithmic growth in time for both one- and two-particle dynamics. Additionally, by examining the Pauli spectrum, we show that the statistical distribution of level spacings exhibits Poissonian behavior, independent of interaction strength or particle number. Our results shed light on the role of interactions in magic generation in a many-body system.
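For reference, the quantity tracked here is the standard stabilizer 2-Rényi entropy of an N-qubit pure state (the definition is not restated in the abstract):

M_2(|\psi\rangle) = -\log_2\!\left(\frac{1}{2^{N}}\sum_{P\in\mathcal{P}_N}\langle\psi|P|\psi\rangle^{4}\right),

where the sum runs over all N-qubit Pauli strings; M_2 vanishes exactly on stabilizer states, so its growth quantifies the magic generated by the walk dynamics.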
Machine learning-based failure management in optical networks has gained significant attention in recent years. However, severe class imbalance, where normal instances vastly outnumber failure cases, remains a considerable challenge. While pre- and in-processing techniques have been widely studied, post-processing methods remain largely unexplored. In this work, we present a direct comparison of pre-, in-, and post-processing approaches to class imbalance mitigation in failure detection and identification using an experimental dataset. For failure detection, post-processing methods, particularly Threshold Adjustment, achieve the highest F1 score improvement (up to 15.3%), while Random Under-Sampling provides the fastest inference. For failure identification, GenAI methods deliver the most substantial performance gains (up to 24.2%), whereas post-processing shows limited impact in multi-class settings. When class overlap is present and latency is critical, over-sampling methods such as SMOTE are most effective; without latency constraints, Meta-Learning yields the best results. In low-overlap scenarios, Generative AI approaches provide the highest performance with minimal inference time.
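As a minimal sketch of threshold adjustment as a post-processing step (the paper's exact procedure is not given in the abstract): instead of the default 0.5 cut-off, the decision threshold is picked on validation scores to maximize F1. Variable names and the grid are illustrative.

```python
# Hedged sketch: tune the binary decision threshold on a validation set to maximize F1.
import numpy as np
from sklearn.metrics import f1_score

def tune_threshold(y_val, scores_val, grid=np.linspace(0.05, 0.95, 91)):
    f1s = [f1_score(y_val, scores_val >= t) for t in grid]
    return grid[int(np.argmax(f1s))]

# Usage (assuming a fitted classifier `clf` with predict_proba):
# best_t = tune_threshold(y_val, clf.predict_proba(X_val)[:, 1])
# y_pred = (clf.predict_proba(X_test)[:, 1] >= best_t).astype(int)
```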
Fifth-generation (5G) systems utilize orthogonal demodulation reference signals (DMRS) to enable channel estimation at the receiver. These orthogonal DMRS, also referred to as pilots, are effective in avoiding pilot contamination and interference from both the user's own data and that of others. However, this approach incurs a significant overhead, as a substantial portion of the time-frequency resources must be reserved for pilot transmission. Moreover, the overhead increases with the number of users and transmission layers. To address these limitations in the context of emerging sixth-generation (6G) systems and to support data transmission across the entire time-frequency grid, the superposition of data and DMRS symbols has been explored as an alternative DMRS transmission strategy. In this study, we propose an enhanced version of DeepRx, a deep convolutional neural network (CNN)-based receiver, capable of estimating the channel from received superimposed (SI) DMRS symbols and reliably detecting the transmitted data. We also design a conventional receiver for comparison, which estimates the channel from SI DMRS using classical signal processing techniques. Extensive evaluations in both uplink single-user and multi-user scenarios demonstrate that DeepRx consistently outperforms the conventional receiver.
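A common way to write the superimposed-pilot idea (the power split ρ and symbols are illustrative; the paper's exact parameterization is not given in the abstract) is that each resource element carries a weighted sum of a data symbol and a DMRS symbol:

x[k,\ell] = \sqrt{\rho}\, d[k,\ell] + \sqrt{1-\rho}\, p[k,\ell], \qquad 0 < \rho < 1,

so the full time-frequency grid carries data, and the receiver (DeepRx or the classical baseline) must separate the pilot contribution when estimating the channel.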
This paper studies integrated sensing and communication (ISAC) with dynamic time division duplex (DTDD) cell-free (CF) massive multiple-input multiple-output (mMIMO) systems. DTDD enables the CF mMIMO system to concurrently serve both uplink (UL) and downlink (DL) users with spatially separated half-duplex (HD) access points (APs) using the same time-frequency resources. Further, to facilitate ISAC, the UL APs are utilized for both UL data and target echo reception, while the DL APs jointly transmit the precoded DL data streams and the target signal. In this context, we present centralized and distributed generalized likelihood-ratio tests (GLRTs) for target detection, treating UL users' signals as sensing interference. We then quantify the optimality-complexity trade-off between distributed and centralized GLRTs and benchmark the respective estimators against the Bayesian Cramér-Rao lower bound for the target radar cross-section (RCS). We also present a unified framework for joint UL data detection and RCS estimation. Next, for communication, we derive the signal-to-interference-plus-noise ratio (SINR)-optimal combiner accounting for cross-link and radar interference in UL data processing. In the DL, we use regularized zero-forcing for the users and propose two types of precoders for the target: a "user-centric" precoder that nullifies the interference caused by the target signal to the DL users, and a "target-centric" precoder based on the dominant eigenvector of the composite channel between the target and the APs. Finally, numerical studies corroborate our theoretical findings and reveal that the GLRT is robust to inter-AP interference and that DTDD doubles the 90%-likely sum UL-DL spectral efficiency compared to traditional TDD-based CF mMIMO ISAC systems, while using HD hardware.
In this paper, we propose a multi-user downlink system for two users based on the orthogonal time frequency space (OTFS) modulation scheme. The design leverages the generalized singular value decomposition (GSVD) of the channels between the base station and the two users, applying precoding and detection matrices based on the right and left singular vectors, respectively. We derive analytical expressions for three scenarios and present the corresponding simulation results. These results demonstrate that, in terms of bit error rate (BER), the proposed system outperforms the conventional multi-user OTFS system in two scenarios when using minimum mean square error (MMSE) equalizers or an MMSE precoder, both for perfect channel state information and for a scenario with channel estimation errors. In the third scenario, the design is equivalent to zero-forcing (ZF) precoding at the transmitter.
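For reference, the GSVD jointly factorizes the two users' channel matrices with a shared right factor (standard form of the decomposition; the precise precoder and detector construction in the paper may differ):

H_1 = U_1 \Sigma_1 X^{H}, \qquad H_2 = U_2 \Sigma_2 X^{H}, \qquad \Sigma_1^{H}\Sigma_1 + \Sigma_2^{H}\Sigma_2 = I,

with unitary U_1, U_2. Precoding with a (pseudo-)inverse of the shared factor X^H and detecting with U_i^H reduces each user's effective channel to the diagonal Σ_i, which is the property that per-user designs of this kind exploit.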