The possibility of making compact stopband filters using coplanar-coupled EBG resonators in inverted microstrip gap waveguide technology is studied in this work. To this end, the filtering characteristics of different configurations of mushroom-type elements are shown, in which the short-circuit element is placed on the edge of the patch resonator. The behavior of the structure, as well as its main advantages (low losses, self-packaging, low complexity, flexibility, and easy design), is illustrated in the paper. To evaluate the possibility of integrating these structures into the feeding networks of gap waveguide planar antennas, a 5-cell EBG filter was designed and built in the X band. The proposed filter reached a maximum rejection level of -35.4 dB and had a stopband centered at 9 GHz with a relative fractional bandwidth below -20 dB of 10.6%. The new compact filter presented a flat passband in which it was well matched and had low insertion losses that, including the connectors, were close to 1.5 dB in most of the band. These results pave the way for future low-complexity antenna designs with filtering functionality in this technology.
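As a quick consistency check using the standard definition of fractional bandwidth (the arithmetic below is ours, not a figure from the paper), the reported 10.6% at a 9 GHz center frequency corresponds to a -20 dB stopband width of roughly

```latex
\Delta f = \mathrm{FBW} \cdot f_0 = 0.106 \times 9\ \mathrm{GHz} \approx 0.95\ \mathrm{GHz},
```

i.e., a rejection band spanning approximately 8.5 to 9.5 GHz.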
Nanophotonics has recently gained new momentum with the emergence of a novel class of nanophotonic systems consisting of resonant dielectric nanostructures integrated with single or few layers of transition metal dichalcogenides (2D-TMDs). Thinned down to a single layer, 2D-TMDs are unique solid-state systems whose excitonic states persist at room temperature and whose transition energies are notably tunable in the optical range. Based on these properties, they offer important opportunities for hybrid nanophotonic systems where a nanophotonic structure serves to enhance the light-matter interaction in the 2D-TMDs, while the 2D-TMDs can provide various active functionalities, thereby dramatically enhancing the scope of nanophotonic structures. In this work, we combine 2D-TMD materials with resonant photonic nanostructures, namely, metasurfaces composed of high-index dielectric nanoparticles. The dependence of the excitonic states on charge carrier density in 2D-TMDs leads to an amplitude modulation of the corresponding optical transitions upon changes of the Fermi level, and thereby to changes of the coupling strength between the 2D-TMDs and resonant modes of the photonic nanostructure. We experimentally implement such a hybrid nanophotonic system and demonstrate voltage tuning of its reflectance as well as its distinct polarization-dependent behavior. Our results show that hybridization with 2D-TMDs can serve to render resonant photonic nanostructures tunable and time-variant, properties that are important for practical applications in optical analog computers and neuromorphic circuits.
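To make the amplitude-modulation mechanism concrete, here is a minimal toy model (our own illustration, not the paper's simulation): the exciton transition is treated as a Lorentzian oscillator whose strength f shrinks as the Fermi level rises with gate voltage, weakening its coupling to the metasurface mode. The voltages and strengths below are hypothetical.

```python
# Toy model (our assumption, not the paper's): the exciton transition is a
# Lorentzian oscillator whose strength f drops as the Fermi level (gate
# voltage) rises, illustrating amplitude modulation of the optical response.
import numpy as np
import matplotlib.pyplot as plt

hw = np.linspace(1.8, 2.2, 500)          # photon energy (eV)
E_x, gamma = 2.0, 0.03                   # assumed exciton energy / linewidth (eV)

def exciton_absorption(hw, f):
    """Imaginary part of a Lorentzian susceptibility with oscillator strength f."""
    return f * gamma / ((hw - E_x) ** 2 + gamma ** 2)

for V, f in [(0.0, 1.0), (2.0, 0.4)]:    # hypothetical gate voltages and strengths
    plt.plot(hw, exciton_absorption(hw, f), label=f"V = {V} V (f = {f})")

plt.xlabel("photon energy (eV)"); plt.ylabel("absorption (arb. u.)")
plt.legend(); plt.show()
```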
Wind farm placement determines the size and location of multiple wind farms within a given region. The power output is strongly related to the wind speed at both spatial and temporal levels, which can be modeled by advanced data-driven approaches. To this end, we use a probabilistic neural network as a surrogate that accounts for the spatiotemporal correlations of wind speed. This neural network uses ReLU activation functions so that it can be reformulated as a set of mixed-integer linear constraints (constraint learning). We embed these constraints into the placement decision problem, formulated as a two-stage stochastic optimization problem. Specifically, conditional quantiles of the total electricity production act as recourse decisions in the second stage. We use real high-resolution regional data from a northern region of Spain and validate that the constraint-learning approach outperforms the classical bilinear interpolation method. Numerical experiments are conducted for risk-averse investors. The results indicate that risk-averse investors concentrate on dominant sites with strong wind, while diversifying spatially and spreading capacity across non-dominant sites. Furthermore, we show that if transmission line costs are introduced into the problem, risk-averse investors favor locations closer to the substations; in contrast, risk-neutral investors are willing to move to farther locations to achieve higher expected profits. We conclude that the proposed approach can handle a portfolio of regional wind farm placements and provide guidance for risk-averse investors.
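For readers unfamiliar with constraint learning, the sketch below shows the standard big-M encoding of a single trained ReLU unit as mixed-integer linear constraints (our illustration; the weights, bounds, and modeling library are assumptions, not the paper's implementation):

```python
# Big-M encoding of z = max(0, w@x + b) as mixed-integer linear constraints,
# the building block that lets a trained ReLU network be embedded in the
# placement problem. Values of w, b, M are illustrative assumptions.
import pyomo.environ as pyo

w, b, M = [1.0, -2.0], 0.5, 100.0           # trained weights and a valid big-M bound

m = pyo.ConcreteModel()
m.x = pyo.Var(range(2), bounds=(-10, 10))   # network inputs (decision variables)
m.z = pyo.Var(within=pyo.NonNegativeReals)  # ReLU output
m.d = pyo.Var(within=pyo.Binary)            # 1 if the unit is active

pre = sum(w[i] * m.x[i] for i in range(2)) + b   # pre-activation w@x + b
m.c1 = pyo.Constraint(expr=m.z >= pre)           # z >= pre when active
m.c2 = pyo.Constraint(expr=m.z <= pre + M * (1 - m.d))
m.c3 = pyo.Constraint(expr=m.z <= M * m.d)       # z = 0 when inactive

# Stacking such blocks layer by layer reproduces the whole trained network
# inside the two-stage stochastic placement model.
```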
PAC-Bayes bounds have been proposed to obtain risk estimates based on a training sample. In this paper, the PAC-Bayes approach is combined with the stability of the hypothesis learned by a Hilbert-space-valued algorithm. The PAC-Bayes setting is used with a Gaussian prior centered at the expected output; thus, a novelty of our paper is the use of priors defined in terms of the data-generating distribution. Our main result estimates the risk of the randomized algorithm in terms of the hypothesis stability coefficients. We also provide a new bound for the SVM classifier, which is compared experimentally to other known bounds. Ours appears to be the first stability-based bound that evaluates to non-trivial values.
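For context, results of this kind refine the generic PAC-Bayes bound, which in its standard (Maurer-type) form for losses in [0, 1] reads as follows; here P is the prior, Q the posterior, and the paper's twist is choosing P Gaussian and centered at the algorithm's expected output:

```latex
\Pr_{S \sim \mathcal{D}^n}\!\left[\,\forall Q:\;
\mathbb{E}_{h\sim Q} R(h) \;\le\; \mathbb{E}_{h\sim Q} \widehat{R}_S(h)
+ \sqrt{\frac{\mathrm{KL}(Q\,\|\,P) + \ln\frac{2\sqrt{n}}{\delta}}{2n}}\,\right] \;\ge\; 1-\delta .
```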
Carrier Grade NAT (CGN) mechanisms enable ISPs to share a single IPv4 address across multiple customers, thus offering an immediate solution to the IPv4 address scarcity problem. In this paper, we perform a large-scale active measurement campaign to detect CGNs in fixed broadband networks using NAT Revelio, a tool we have developed and validated. Revelio enables us to actively determine from within residential networks the type of upstream network address translation, namely NAT at the home gateway (customer-grade NAT) or NAT in the ISP (Carrier Grade NAT). We demonstrate the generality of the methodology by deploying Revelio in the FCC Measuring Broadband America testbed operated by SamKnows and in the RIPE Atlas testbed. We ran an active large-scale measurement study of CGN usage from 5,121 measurement vantage points within over 60 different ISPs operating in Europe and the United States, and found that 10% of the ISPs we tested have some form of CGN deployment. We validate our results with four ISPs at the IP level and, compared against the ground truth we collected, we conclude that Revelio was 100% accurate in determining the upstream NAT configuration for all the corresponding lines. To the best of our knowledge, this represents the largest active measurement study of (confirmed) CGN deployments at the IP level in fixed broadband networks to date.
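One building block of such a test can be sketched as follows (a simplification in the spirit of Revelio, not the tool's actual test suite): if the WAN-side address of the home gateway differs from the publicly visible address and falls in private (RFC 1918) or shared (RFC 6598) space, a second, carrier-grade translation is happening upstream in the ISP.

```python
# Simplified CGN heuristic (our sketch, not Revelio's actual test suite):
# compare the gateway's WAN-side address with the publicly visible address.
import ipaddress

SHARED = ipaddress.ip_network("100.64.0.0/10")  # RFC 6598 carrier-grade NAT space

def upstream_nat(gateway_wan_ip: str, public_ip: str) -> str:
    wan = ipaddress.ip_address(gateway_wan_ip)
    if wan == ipaddress.ip_address(public_ip):
        return "home NAT only"          # gateway holds the public address
    if wan in SHARED or wan.is_private:
        return "carrier-grade NAT"      # ISP translates a second time
    return "inconclusive"               # e.g. routed public subnet upstream

print(upstream_nat("100.64.12.7", "203.0.113.9"))   # -> carrier-grade NAT
print(upstream_nat("203.0.113.9", "203.0.113.9"))   # -> home NAT only
```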
In recent years, advances in immersive multimedia technologies, such as extended reality (XR), have led to more realistic and user-friendly devices. However, these devices are often bulky and uncomfortable, and still require tethered connectivity for demanding applications. The deployment of the fifth generation of telecommunications technologies (5G) has laid the groundwork for XR offloading solutions aimed at enabling lighter, fully wearable XR devices. In this paper, we present a traffic dataset for two demanding XR offloading scenarios, complementary to those available in the current state of the art, captured using a fully developed end-to-end XR offloading solution. We also propose a set of accurate traffic models for the proposed scenarios based on the captured data, accompanied by a simple and consistent method to generate synthetic data from the fitted models. Finally, using an open-source 5G radio access network (RAN) emulator, we validate the models at both the application and resource allocation layers. Overall, this work aims to provide a valuable contribution to the field with data and tools for designing, testing, improving, and extending XR offloading solutions in academia and industry.
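The kind of generator such modeling enables can be sketched as follows (the parametric families and parameters here are illustrative assumptions; the paper fits its own models to the captured traces):

```python
# Synthetic XR downlink traffic from a fitted model (toy parameters, ours):
# periodic video frames with jittered arrivals and lognormal frame sizes.
import numpy as np

rng = np.random.default_rng(0)

def synth_xr_downlink(n_frames, fps=60.0):
    """Return frame arrival times (s) and frame sizes (bytes)."""
    period = 1.0 / fps
    jitter = rng.normal(0.0, 0.1 * period, n_frames)       # arrival jitter
    arrivals = np.arange(n_frames) * period + jitter        # near-periodic
    sizes = rng.lognormal(mean=np.log(40_000), sigma=0.3,   # bytes per frame
                          size=n_frames)
    return arrivals, sizes.astype(int)

t, s = synth_xr_downlink(5)
print(list(zip(np.round(t, 4), s)))
```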
Among the seventeen Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all the United Nations member states, the fifth SDG is a call for action to turn gender equality into a fundamental human right and an essential foundation for a better world. It includes the eradication of all types of violence against women. Within this context, the UC3M4Safety research team aims to develop Bindi, a cyber-physical system with embedded Artificial Intelligence algorithms for real-time user monitoring and detection of affective states, with the ultimate goal of achieving the early detection of risk situations for women. On this basis, we make use of wearable affective computing, including smart sensors, data encryption for the secure and accurate collection of presumed crime evidence, and remote connection to protection agents. Towards the development of such a system, the recording of different laboratory and in-the-wild datasets is in progress. These are contained within the UC3M4Safety Database. This paper presents and details the first release of WEMAC, a novel multi-modal dataset comprising a laboratory-based experiment with 47 women volunteers who were exposed to validated audio-visual stimuli inducing real emotions through a virtual reality headset, while physiological signals, speech signals, and self-reports were acquired and collected. We believe this dataset will serve and assist research on multi-modal affective computing using physiological and speech information.
Natural disasters affect millions of people every year, and finding missing persons in the shortest possible time is of crucial importance to reduce the death toll. This task is especially challenging when victims are sparsely distributed in large and/or difficult-to-reach areas and cellular networks are down. In this paper we present SARDO, a drone-based search-and-rescue solution that exploits the high penetration rate of mobile phones in society to localize missing people. SARDO is an autonomous, all-in-one drone-based mobile network solution that requires neither infrastructure support nor modifications to mobile phones. It builds on novel concepts such as pseudo-trilateration combined with machine-learning techniques to efficiently locate mobile phones in a given area. Our results, with a prototype implementation in a field trial, show that SARDO rapidly determines the location of mobile phones (~3 min/UE) in a given area with an accuracy of a few tens of meters and at a low battery consumption cost (~5%). State-of-the-art localization solutions for disaster scenarios rely either on mobile infrastructure support or on onboard cameras for vision-, IR-, or thermal-based localization. To the best of our knowledge, SARDO is the first drone-based cellular search-and-rescue solution able to accurately localize missing victims through their mobile phones.
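The geometric core of locating a phone from ranges measured at several drone waypoints can be sketched as a plain nonlinear least-squares problem (a toy illustration, not SARDO's actual pseudo-trilateration and machine-learning pipeline):

```python
# Toy trilateration: estimate a phone's position from noisy range
# measurements collected at several drone waypoints.
import numpy as np
from scipy.optimize import least_squares

drone = np.array([[0., 0.], [80., 0.], [40., 60.], [0., 80.]])  # waypoints (m)
target = np.array([35., 25.])                                   # unknown UE

rng = np.random.default_rng(1)
ranges = np.linalg.norm(drone - target, axis=1) + rng.normal(0, 3., len(drone))

def residuals(p):
    """Difference between predicted and measured ranges at position p."""
    return np.linalg.norm(drone - p, axis=1) - ranges

est = least_squares(residuals, x0=np.array([0., 0.])).x
print(f"estimate: {est}, error: {np.linalg.norm(est - target):.1f} m")
```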
Resistive Random Access Memories (RRAMs) are being studied by industry and academia because they are widely accepted as promising candidates for the next generation of high-density nonvolatile memories. Taking into account the stochastic nature of the mechanisms behind resistive switching, a new technique based on functional data analysis has been developed to accurately model resistive memory device characteristics. Functional principal component analysis (FPCA) based on the Karhunen-Loève expansion is applied to obtain an orthogonal decomposition of the reset process in terms of uncorrelated scalar random variables. The device current is then accurately described using just one variable, yielding a modeling approach that can be very attractive from the circuit simulation viewpoint. The new method allows a comprehensive description of the stochastic variability of these devices by introducing a probability distribution for the main parameter employed in the model implementation. A rigorous description of the mathematical theory behind the technique is given, and its application to a broad set of experimental measurements is explained.
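On densely discretized curves, the Karhunen-Loève/FPCA decomposition reduces to PCA, and the one-variable description corresponds to keeping the first score, as in this sketch on synthetic reset-like curves (our illustration, not the paper's data or model):

```python
# FPCA sketch: Karhunen-Loeve on discretized curves is just PCA; each curve
# is summarized by its first principal-component score (one scalar variable).
import numpy as np

rng = np.random.default_rng(0)
v = np.linspace(0.0, 1.0, 200)                      # voltage grid
curves = np.array([(1.0 + 0.2 * rng.normal()) * v ** 2 for _ in range(50)])

mean = curves.mean(axis=0)
X = curves - mean
U, S, Vt = np.linalg.svd(X, full_matrices=False)    # KL / PCA decomposition

scores1 = X @ Vt[0]                                 # first KL coefficient
recon = mean + np.outer(scores1, Vt[0])             # one-variable reconstruction
print(f"max reconstruction error with 1 component: {np.abs(recon - curves).max():.3e}")
```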
Foundation models have achieved remarkable success across various domains, yet their adoption in healthcare remains limited. While significant advances have been made in medical imaging, genetic biomarkers, and time series from electronic health records, the potential of foundation models for patient behavior monitoring through personal digital devices remains underexplored. The data generated by these devices are inherently heterogeneous, multisource, and often exhibit high rates of missing data, posing unique challenges. This paper introduces a novel foundation model based on a modified vector quantized variational autoencoder (VQ-VAE), specifically designed to process real-world data from smartphones and wearable devices. We leverage the discrete latent representation of this model to effectively perform two downstream tasks, suicide risk assessment and emotional state prediction, on different held-out clinical cohorts without the need for fine-tuning. We also highlight the existence of a trade-off between discrete and continuous latent structures, suggesting that hybrid models may be optimal for balancing accuracy across various supervised and unsupervised tasks.
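The vector-quantization step that produces the discrete latent representation can be sketched in a few lines (a toy nearest-codebook lookup, not the paper's modified architecture):

```python
# Core of a VQ-VAE latent: snap each encoder output to its nearest codebook
# vector; the resulting code indices are the discrete representation used
# downstream. Shapes and sizes below are illustrative.
import numpy as np

rng = np.random.default_rng(0)
codebook = rng.normal(size=(64, 16))       # 64 discrete codes, 16-dim latents
z_e = rng.normal(size=(8, 16))             # encoder outputs for 8 time windows

# nearest-neighbour lookup: squared distance from each latent to every code
d = ((z_e[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
codes = d.argmin(axis=1)                   # discrete tokens per window
z_q = codebook[codes]                      # quantized latents fed to the decoder

print(codes)                               # e.g. a token sequence per patient window
```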
The use of machine learning methods helps to improve decision making in different fields. In particular, the idea of bridging predictions (machine learning models) and prescriptions (optimization problems) is gaining attention within the scientific community. One of the main ideas to address this trade-off is the so-called Constraint Learning (CL) methodology, where the structure of the machine learning model can be treated as a set of constraints to be embedded within the optimization problem, establishing the relationship between a direct decision variable x and a response variable y. However, most CL approaches have focused on making point predictions for a certain variable without taking into account the statistical and external uncertainty faced in the modeling process. In this paper, we extend the CL methodology to deal with uncertainty in the response variable y. The novel Distributional Constraint Learning (DCL) methodology makes use of a piecewise-linearizable neural network-based model to estimate the parameters of the conditional distribution of y (dependent on decisions x and contextual information), which can be embedded within mixed-integer optimization problems. In particular, we formulate a stochastic optimization problem by sampling random values from the estimated distribution using a linear set of constraints. In this sense, DCL combines the high predictive performance of the neural network with the possibility of generating scenarios that account for uncertainty within a tractable optimization model. The behavior of the proposed methodology is tested on a real-world problem in the context of electricity systems, where a Virtual Power Plant seeks to optimize its operation subject to different forms of uncertainty and with price-responsive consumers.
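One natural reading of "sampling through a linear set of constraints" is the following (our paraphrase of the construction; the paper's exact formulation may differ): with the network outputting distribution parameters, and the scenario noises drawn once before solving, each scenario constraint is linear in those outputs:

```latex
y_s \;=\; \mu_\theta(x, c) + \varepsilon_s\, \sigma_\theta(x, c),
\qquad \varepsilon_s \sim \mathcal{N}(0, 1), \quad s = 1, \dots, S ,
```

where μ_θ and σ_θ are themselves piecewise-linear in the decisions x and context c once the ReLU network is encoded with mixed-integer linear constraints.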
Adaptive learning is necessary in non-stationary environments, where the learning machine needs to forget the past data distribution. Efficient algorithms require a compact model update whose computational burden does not grow with the incoming data, together with the lowest possible cost for online parameter updating. Existing solutions only partially cover these needs. Here, we propose the first adaptive sparse Gaussian Process (GP) able to address all these issues. We first reformulate a variational sparse GP algorithm to make it adaptive through a forgetting factor. Next, to keep model inference as simple as possible, we propose updating a single inducing point of the sparse GP model, together with the remaining model parameters, every time a new sample arrives. As a result, the algorithm presents fast convergence of the inference process, which allows an efficient model update (with a single inference iteration) even in highly non-stationary environments. Experimental results demonstrate the capabilities of the proposed algorithm and its good performance in modeling the predictive posterior, in both mean and confidence interval estimation, compared to state-of-the-art approaches.
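The effect of a forgetting factor can be illustrated with a deliberately simple toy (a full GP with noise-inflated old samples, not the paper's variational sparse GP): older observations get their noise variance inflated by 1/λ^age, so the posterior progressively discounts stale data after a regime switch.

```python
# Toy forgetting in GP regression (our illustration, not the paper's method):
# inflate the noise variance of old samples so recent data dominate.
import numpy as np

def gp_posterior_mean(Xtr, ytr, Xte, ages, lam=0.9, ell=0.5, sn2=0.01):
    k = lambda A, B: np.exp(-0.5 * ((A[:, None] - B[None, :]) / ell) ** 2)
    noise = sn2 / lam ** ages                 # forgetting: older -> noisier
    K = k(Xtr, Xtr) + np.diag(noise)
    return k(Xte, Xtr) @ np.linalg.solve(K, ytr)

rng = np.random.default_rng(0)
X = np.linspace(0, 1, 40)
y = np.where(np.arange(40) < 20, np.sin(6 * X), np.cos(6 * X))  # regime switch
ages = np.arange(40)[::-1]                    # most recent sample has age 0
print(gp_posterior_mean(X, y, np.array([0.95]), ages))  # close to the new regime
```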
Integrated sensing and communication (ISAC) enables radio systems to simultaneously sense and communicate with their environment. This paper, developed within the Hexa-X-II project funded by the European Union, presents a comprehensive cross-layer vision for ISAC in 6G networks, integrating insights from physical-layer design, hardware architectures, AI-driven intelligence, and protocol-level innovations. We begin by revisiting the foundational principles of ISAC, highlighting synergies and trade-offs between sensing and communication across different integration levels. Enabling technologies (such as multiband operation, massive and distributed MIMO, non-terrestrial networks, reconfigurable intelligent surfaces, and machine learning) are analyzed in conjunction with hardware considerations including waveform design, synchronization, and full-duplex operation. To bridge implementation and system-level evaluation, we introduce a quantitative cross-layer framework linking design parameters to key performance and value indicators. By synthesizing perspectives from both academia and industry, this paper outlines how deeply integrated ISAC can transform 6G into a programmable and context-aware platform supporting applications from reliable wireless access to autonomous mobility and digital twinning.
The homogeneity problem of testing whether more than two different samples come from the same population is considered for the case of functional data. The methodological results are motivated by the study of the homogeneity of electronic devices fabricated with different materials and active layer thicknesses. When the stochastic processes associated with each sample are Gaussian, this problem is known as the Functional ANOVA problem and reduces to testing the equality of the group mean functions (FANOVA). However, the current/voltage curves associated with Resistive Random Access Memories (RRAM) are not generated by a Gaussian process, so a different approach is necessary for testing homogeneity. To solve this problem, two approaches, one parametric and one nonparametric, based on basis expansions of the sample curves are proposed. The first consists of applying multivariate homogeneity tests to the vectors of basis coefficients of the sample curves. The second is based on dimension reduction, using functional principal component analysis (FPCA) of the sample curves and testing multivariate homogeneity on the vectors of principal component scores. Different numerical approximation techniques are employed to adapt the experimental data for the statistical study. An extensive simulation study analyzes the performance of both approaches in the parametric and nonparametric cases. Finally, the proposed methodologies are applied to three samples of experimental reset curves measured in three different RRAM technologies.
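A minimal version of the second (FPCA-based) approach, on synthetic stand-ins for the reset curves: project each curve onto principal components and apply a classical homogeneity test to the scores (in the nonparametric case, scipy.stats.kruskal would replace the ANOVA below):

```python
# FPCA scores + homogeneity test on synthetic curves (our toy illustration).
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 100)
groups = [np.array([(1 + 0.3 * g) * np.sin(2 * np.pi * t)
                    + 0.1 * rng.normal(size=t.size) for _ in range(30)])
          for g in range(3)]                      # three "technologies"

X = np.vstack(groups)
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False) # FPCA on discretized curves
scores = Xc @ Vt[0]                               # first PC score per curve

s1, s2, s3 = np.split(scores, 3)
print(f_oneway(s1, s2, s3))                       # homogeneity test on scores
```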
The disruptive reconfigurable intelligent surface (RIS) technology is steadily gaining relevance as a key element in future 6G networks. However, a one-size-fits-all RIS hardware design is yet to be defined due to many practical considerations. A major roadblock for currently available RISs is their inability to operate concurrently at multiple carrier frequencies, which would lead to redundant installations to support multiple radio access technologies (RATs). In this paper, we introduce FABRIS, a novel and practical multi-frequency RIS design. FABRIS can operate dynamically across different radio frequencies (RFs) by using frequency-tunable antennas as unit cells, with virtually no performance degradation in regimes where conventional approaches to RIS design and optimization fail. Remarkably, our design preserves a beamwidth narrow enough to avoid generating signal leakage in unwanted directions and a sufficiently high antenna efficiency in terms of scattering parameters. Indeed, FABRIS selects the RIS configuration that maximizes the signal at the intended target user equipment (UE) while minimizing leakage to non-intended neighboring UEs. Numerical results and full-wave simulations validate our proposed approach against a naive implementation that does not account for the signal leakage arising in multi-frequency antenna arrays.
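The leakage-aware selection criterion can be illustrated with a toy one-dimensional array model (our assumptions: a uniform array, a grid of candidate steering profiles, and an ad-hoc penalty weight; the paper works with full-wave antenna models):

```python
# Toy leakage-aware configuration selection: among steering phase profiles,
# pick the one maximizing power at the target UE minus a penalty on the
# strongest non-intended direction.
import numpy as np

N, d = 32, 0.5                       # elements, spacing in wavelengths
theta_t = np.deg2rad(20.0)           # intended UE direction
theta_nb = np.deg2rad([-35.0, 50.0]) # non-intended neighboring UEs
n = np.arange(N)

def af_power(phases, theta):
    """Normalized array-factor power for given per-element phases."""
    steer = np.exp(1j * (2 * np.pi * d * n * np.sin(theta) + phases))
    return np.abs(steer.sum()) ** 2 / N ** 2

best, best_score = None, -np.inf
for cand in np.deg2rad(np.arange(-60, 61, 1.0)):      # candidate steering angles
    phases = -2 * np.pi * d * n * np.sin(cand)        # steering phase profile
    score = af_power(phases, theta_t) - 0.5 * max(
        af_power(phases, th) for th in theta_nb)      # leakage penalty
    if score > best_score:
        best, best_score = cand, score

print(f"selected steering angle: {np.rad2deg(best):.0f} deg")
```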
This paper proposes a data preparation process for managing real-world kinematic data and detecting fishing vessels. The solution is a binary classifier that labels ship trajectories as either fishing or non-fishing. The data used are characterized by the typical problems found in classic data mining applications with real-world data, such as noise and inconsistencies. The two classes are also clearly unbalanced, a problem addressed using algorithms that resample the instances. For classification, a series of features are extracted from the spatiotemporal data representing the trajectories of the ships, available from sequences of Automatic Identification System (AIS) reports. These features are proposed for modeling ship behavior but, because they do not contain context-related information, the classification can be applied in other scenarios. Experimentation shows that the proposed data preparation process is useful for the presented classification problem, and positive results are obtained using minimal information.
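A pipeline of this shape can be sketched as follows (the feature set, the SMOTE resampler, and the forest classifier are illustrative assumptions, not the paper's exact setup):

```python
# Context-free kinematic features per AIS track + resampling for imbalance.
import numpy as np
from imblearn.over_sampling import SMOTE
from sklearn.ensemble import RandomForestClassifier

def trajectory_features(track):
    """Features from one AIS track with rows (t, lat, lon, sog, cog)."""
    sog, cog = track[:, 3], track[:, 4]
    turn = np.abs(np.diff(cog))
    turn = np.minimum(turn, 360 - turn)          # wrap heading changes
    return [sog.mean(), sog.std(), np.median(sog), turn.mean(), turn.max()]

rng = np.random.default_rng(0)
tracks = [rng.random((50, 5)) * [1, 1, 1, 15, 360] for _ in range(300)]
X = np.array([trajectory_features(tr) for tr in tracks])
y = rng.choice([0, 1], size=300, p=[0.9, 0.1])   # unbalanced: few fishing ships

X_res, y_res = SMOTE(random_state=0).fit_resample(X, y)   # rebalance classes
clf = RandomForestClassifier(random_state=0).fit(X_res, y_res)
print(clf.score(X, y))
```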
Among the seventeen Sustainable Development Goals (SDGs) proposed within the 2030 Agenda and adopted by all the United Nations member states, the 13th SDG is a call for action to combat climate change for a better world. In this work, we provide an overview of areas in which audio intelligence -- a powerful but in this context so far hardly considered technology -- can contribute to overcome climate-related challenges. We categorise potential computer audition applications according to the five elements of earth, water, air, fire, and aether, proposed by the ancient Greeks in their five element theory; this categorisation serves as a framework to discuss computer audition in relation to different ecological aspects. Earth and water are concerned with the early detection of environmental changes and, thus, with the protection of humans and animals, as well as the monitoring of land and aquatic organisms. Aerial audio is used to monitor and obtain information about bird and insect populations. Furthermore, acoustic measures can deliver relevant information for the monitoring and forecasting of weather and other meteorological phenomena. The fourth considered element is fire. Due to the burning of fossil fuels, the resulting increase in CO2 emissions and the associated rise in temperature, fire is used as a symbol for man-made climate change and in this context includes the monitoring of noise pollution, machines, as well as the early detection of wildfires. In all these areas, computer audition can help counteract climate change. Aether then corresponds to the technology itself that makes this possible. This work explores these areas and discusses potential applications, while positioning computer audition in relation to methodological alternatives.
Network slicing to enable resource sharing among multiple tenants --network operators and/or services-- is considered a key functionality for next generation mobile networks. This paper provides an analysis of a well-known model for resource sharing, the 'share-constrained proportional allocation' mechanism, to realize network slicing. This mechanism enables tenants to reap the performance benefits of sharing, while retaining the ability to customize their own users' allocation. This results in a network slicing game in which each tenant reacts to the user allocations of the other tenants so as to maximize its own utility. We show that, under appropriate conditions, the game associated with such strategic behavior converges to a Nash equilibrium. At the Nash equilibrium, a tenant always achieves the same, or better, performance than under a static partitioning of resources, hence providing the same level of protection as such static partitioning. We further analyze the efficiency and fairness of the resulting allocations, providing tight bounds for the price of anarchy and envy-freeness. Our analysis and extensive simulation results confirm that the mechanism provides a comprehensive practical solution to realize network slicing. Our theoretical results also fill a gap in the literature regarding the analysis of this resource allocation model under strategic players.
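In its standard form (restated here for context; notation is ours), share-constrained proportional allocation lets each tenant o split its network share s_o into per-user weights, and each resource is then divided in proportion to the weights of the users present:

```latex
\sum_{u \in \mathcal{U}_o} w_u = s_o \quad \text{for each tenant } o,
\qquad
f_u = \frac{w_u}{\sum_{v \in \mathcal{U}_b} w_v} \quad \text{for each user } u \text{ at resource } b ,
```

so a tenant customizes its users' allocations by reapportioning its own weights, which is exactly the strategic behavior analyzed in the resulting game.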
The present study focuses on detecting the degree of deformity in fruits such as apples, mangoes, and strawberries during external quality inspection, employing single-input and multi-input architectures based on convolutional neural network (CNN) models using sets of real and synthetic images. The datasets are segmented using the Segment Anything Model (SAM), which provides the silhouettes of the fruits. In the single-input architecture, the CNN models are evaluated only with real images, and a methodology is proposed to improve these results using a model pre-trained on synthetic images. In the multi-input architecture, branches with RGB images and fruit silhouettes are implemented as inputs for evaluating CNN models such as VGG16, MobileNetV2, and CIDIS. The results revealed that the multi-input architecture with the MobileNetV2 model was the most effective at identifying deformities in the fruits, achieving accuracies of 90%, 94%, and 92% for apples, mangoes, and strawberries, respectively. In conclusion, the multi-input architecture with the MobileNetV2 model is the most accurate for classifying levels of deformity in fruits.
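The two-branch idea can be sketched as follows (input sizes, the silhouette branch, and the three-level head are our assumptions for illustration, not the paper's exact architecture):

```python
# Two-branch multi-input model: MobileNetV2 on the RGB image, a small CNN on
# the SAM silhouette, features fused before the classifier head.
import tensorflow as tf
from tensorflow.keras import layers, Model

rgb_in = layers.Input((224, 224, 3), name="rgb")
sil_in = layers.Input((224, 224, 1), name="silhouette")

backbone = tf.keras.applications.MobileNetV2(include_top=False, pooling="avg")
rgb_feat = backbone(rgb_in)                          # 1280-d RGB features

x = layers.Conv2D(16, 3, strides=2, activation="relu")(sil_in)
x = layers.Conv2D(32, 3, strides=2, activation="relu")(x)
sil_feat = layers.GlobalAveragePooling2D()(x)        # shape features

merged = layers.Concatenate()([rgb_feat, sil_feat])
out = layers.Dense(3, activation="softmax")(merged)  # assumed deformity levels

model = Model([rgb_in, sil_in], out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")
model.summary()
```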
We propose a new method to visualize gene expression experiments inspired by latent semantic indexing, a technique originally proposed in the context of textual analysis. Using the correspondence word-gene and document-experiment, we define an asymmetric similarity measure of association for genes that accounts for potential hierarchies in the data, which is key to obtaining meaningful gene mappings. We use the polar decomposition to obtain the sources of asymmetry of the similarity matrix, which are later combined with prior knowledge. Classes of genes are identified by means of a mixture model applied in the latent gene space. We describe the steps of the procedure and show its utility on the Human Cancer dataset.
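The polar decomposition step can be sketched directly (toy matrix, not the gene-expression similarities): the matrix factors into an orthogonal part times a symmetric positive semidefinite part, separating the asymmetric "rotation" from the symmetric association structure.

```python
# Polar decomposition S = U @ P of an asymmetric similarity matrix:
# U orthogonal (the asymmetry), P symmetric PSD (the symmetric structure).
import numpy as np
from scipy.linalg import polar

rng = np.random.default_rng(0)
S = rng.random((5, 5))                 # toy asymmetric similarity matrix

U, P = polar(S)                        # S = U @ P
print(np.allclose(S, U @ P))           # True: exact factorization
print(np.allclose(P, P.T))             # True: symmetric part
print(np.allclose(U @ U.T, np.eye(5))) # True: orthogonal part
```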