Lappeenranta-Lahti University of Technology
MER2025 is the third year of our MER series of challenges, aiming to bring together researchers in the affective computing community to explore emerging trends and future directions in the field. Previously, MER2023 focused on multi-label learning, noise robustness, and semi-supervised learning, while MER2024 introduced a new track dedicated to open-vocabulary emotion recognition. This year, MER2025 centers on the theme "When Affective Computing Meets Large Language Models (LLMs)". We aim to shift the paradigm from traditional categorical frameworks reliant on predefined emotion taxonomies to LLM-driven generative methods, offering innovative solutions for more accurate and reliable emotion understanding. The challenge features four tracks: MER-SEMI focuses on fixed categorical emotion recognition enhanced by semi-supervised learning; MER-FG explores fine-grained emotions, expanding recognition from basic to nuanced emotional states; MER-DES incorporates multimodal cues (beyond emotion words) into predictions to enhance model interpretability; and MER-PR investigates whether emotion prediction results can improve personality recognition performance. For the first three tracks, baseline code is available at MERTools, and datasets can be accessed via Hugging Face. For the last track, the dataset and baseline code are available on GitHub.
Researchers from multiple international institutions devised two defense frameworks, FilterRAG and ML-FilterRAG, to protect Retrieval-Augmented Generation (RAG) systems from knowledge poisoning attacks. These frameworks demonstrated improved accuracy, reaching levels comparable to unpoisoned RAG systems, and substantially lowered attack success rates across various LLMs and datasets in both black-box and white-box scenarios.
Quantum Software Engineering (QSE) is emerging as a critical discipline to make quantum computing accessible to a broader developer community; however, most quantum development environments still require developers to engage with low-level details across the software stack - including problem encoding, circuit construction, algorithm configuration, hardware selection, and result interpretation - making them difficult for classical software engineers to use. To bridge this gap, we present C2|Q>: a hardware-agnostic quantum software development framework that translates classical specifications (code) into quantum-executable programs while preserving methodological rigor. The framework applies modular software engineering principles by organizing the workflow into three core modules: an encoder that classifies problems, produces Quantum-Compatible Formats (QCFs), and constructs quantum circuits; a deployment module that generates circuits and recommends hardware based on fidelity, runtime, and cost; and a decoder that interprets quantum outputs into classical solutions. In evaluation, the encoder module achieved a 93.8% completion rate, the hardware recommendation module consistently selected appropriate quantum devices for workloads scaling up to 56 qubits, and the full C2|Q> workflow successfully processed classical specifications (434 Python snippets and 100 JSON inputs) with completion rates of 93.8% and 100%, respectively. For case study problems executed on publicly available NISQ hardware, C2|Q> reduced the required implementation effort by nearly 40X compared to manual implementations using low-level quantum software development kits (SDKs), with empirical runs limited to small- and medium-sized instances consistent with current NISQ capabilities. The open-source implementation of C2|Q> is available at this https URL
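The encoder -> deployment -> decoder decomposition can be pictured with a minimal Python sketch. All class, function, and field names below (QCF, encode, deploy, decode, the scoring weights) are illustrative assumptions and do not reflect the actual C2|Q> API.

```python
# Illustrative sketch of the encoder -> deployment -> decoder workflow described
# in the abstract. All names below are hypothetical placeholders, not the C2|Q> API.
from dataclasses import dataclass
from typing import Any, Dict, List


@dataclass
class QCF:
    """Hypothetical Quantum-Compatible Format: problem class plus parameters."""
    problem_class: str          # e.g. "max-cut", "knapsack"
    parameters: Dict[str, Any]  # problem data extracted from the classical spec


def encode(classical_spec: str) -> QCF:
    """Encoder: classify the problem in the classical spec and build a QCF."""
    # A real encoder would parse the Python/JSON input and detect the problem class.
    return QCF(problem_class="max-cut", parameters={"edges": [(0, 1), (1, 2)]})


def deploy(qcf: QCF, devices: List[Dict[str, float]]) -> Dict[str, Any]:
    """Deployment: build a circuit for the QCF and pick a device by a simple
    fidelity/runtime/cost score (weights are arbitrary, for illustration only)."""
    score = lambda d: 0.6 * d["fidelity"] - 0.3 * d["runtime"] - 0.1 * d["cost"]
    best = max(devices, key=score)
    return {"circuit": f"QAOA circuit for {qcf.problem_class}", "device": best}


def decode(counts: Dict[str, int]) -> str:
    """Decoder: map the most frequent bitstring back to a classical solution."""
    return max(counts, key=counts.get)


if __name__ == "__main__":
    qcf = encode("def max_cut(graph): ...")
    job = deploy(qcf, [{"fidelity": 0.92, "runtime": 3.0, "cost": 1.5},
                       {"fidelity": 0.88, "runtime": 1.0, "cost": 0.5}])
    print(job["device"], decode({"010": 640, "101": 384}))
```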
Modern power grids face an acute mismatch between where data is generated and where it can be processed: protection relays, EV (Electric Vehicle) charging, and distributed renewables demand millisecond analytics at the edge, while energy-hungry workloads often sit in distant clouds, leading to missed real-time deadlines and wasted power. We address this by proposing, to our knowledge, the first SDEN (Software Defined Energy Network) for CaaS (Computations-as-a-Service), which unifies edge, fog, and cloud compute with 5G URLLC (Ultra-Reliable Low-Latency Communications), SDN (Software Defined Networking), and NFV (Network Functions Virtualization) to co-optimize energy, latency, and reliability end-to-end. Our contributions are threefold: (i) a joint task offloading formulation that couples computation placement with network capacity under explicit URLLC constraints; (ii) a feasibility-preserving, lightweight greedy heuristic that scales while closely tracking optimal energy and latency trade-offs; and (iii) a tiered AI (Artificial Intelligence) pipeline - reactive at the edge, predictive in the fog, strategic in the cloud - featuring privacy-preserving, federated GNNs (Graph Neural Networks) for fault detection and microgrid coordination. Unlike prior edge-only or cloud-only schemes, SDEN turns fragmented grid compute into a single, programmable substrate that delivers dependable, energy-aware, real-time analytics, establishing a software-defined path to practical, grid-scale CaaS.
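To make the flavor of such a greedy offloading heuristic concrete, the sketch below assigns each task to the lowest-energy tier that still meets its latency deadline. The tier figures, capacities, and task data are invented, and this is not the paper's exact formulation or heuristic.

```python
# Illustrative greedy task-offloading sketch: place each task on the
# lowest-energy tier (edge/fog/cloud) whose latency satisfies the deadline.
# All numbers are invented for illustration.

TIERS = {   # one-way latency (ms), energy per unit work (J), compute capacity
    "edge":  {"latency_ms": 1.0,  "energy_per_unit": 3.0, "capacity": 10.0},
    "fog":   {"latency_ms": 5.0,  "energy_per_unit": 2.0, "capacity": 50.0},
    "cloud": {"latency_ms": 40.0, "energy_per_unit": 1.0, "capacity": 1e9},
}

def greedy_offload(tasks):
    """tasks: list of dicts with 'work' (compute units) and 'deadline_ms'."""
    load = {name: 0.0 for name in TIERS}
    placement = {}
    for i, task in enumerate(tasks):
        # Candidate tiers that satisfy the deadline and have spare capacity.
        feasible = [
            name for name, tier in TIERS.items()
            if tier["latency_ms"] <= task["deadline_ms"]
            and load[name] + task["work"] <= tier["capacity"]
        ]
        if not feasible:
            placement[i] = None   # a real scheme would reject or queue the task
            continue
        # Greedy choice: feasible tier with the lowest energy cost for this task.
        best = min(feasible, key=lambda n: TIERS[n]["energy_per_unit"] * task["work"])
        load[best] += task["work"]
        placement[i] = best
    return placement

print(greedy_offload([{"work": 2.0, "deadline_ms": 2.0},
                      {"work": 5.0, "deadline_ms": 100.0}]))
```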
We focus on Bayesian inverse problems with Gaussian likelihood, linear forward model, and priors that can be formulated as a Gaussian mixture. Such a mixture is expressed as an integral of Gaussian density functions weighted by a mixing density over the mixing variables. Within this framework, the corresponding posterior distribution also takes the form of a Gaussian mixture, and we derive the closed-form expression for its posterior mixing density. To sample from the posterior Gaussian mixture, we propose a two-step sampling method. First, we sample the mixing variables from the posterior mixing density, and then we sample the variables of interest from Gaussian densities conditioned on the sampled mixing variables. However, the posterior mixing density is relatively difficult to sample from, especially in high dimensions. Therefore, we propose to replace the posterior mixing density with a dimension-reduced approximation, and we provide a bound in the Hellinger distance for the resulting approximate posterior. We apply the proposed approach to a posterior with Laplace prior, for which we introduce two dimension-reduced approximations of the posterior mixing density. Our numerical experiments indicate that samples generated via the proposed approximations have very low correlation and are close to the exact posterior.
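A minimal sketch of the two-step sampler, assuming a zero-mean prior x | v ~ N(0, D(v)) with diagonal D(v) and observation noise of standard deviation sigma. Step 1 (drawing the mixing variables from the posterior mixing density) is the hard, problem-specific part and is represented by a placeholder; step 2 is the standard conditional Gaussian draw for a linear forward model.

```python
# Two-step sampling sketch for a linear-Gaussian inverse problem with a
# Gaussian-mixture prior. Step 1 is a placeholder; step 2 uses the standard
# conditional Gaussian formulas. This is an illustration, not the paper's
# dimension-reduced approximation.
import numpy as np

rng = np.random.default_rng(0)

def sample_mixing_variables(y, A, sigma_obs):
    """Placeholder for step 1: e.g. exponential mixing for a Laplace prior."""
    return rng.exponential(scale=1.0, size=A.shape[1])

def sample_x_given_v(y, A, sigma_obs, v):
    """Step 2: draw x from the Gaussian posterior conditioned on v."""
    D_inv = np.diag(1.0 / v)                 # prior precision given v
    H = A.T @ A / sigma_obs**2 + D_inv       # posterior precision
    C = np.linalg.inv(H)                     # posterior covariance
    m = C @ (A.T @ y) / sigma_obs**2         # posterior mean
    return rng.multivariate_normal(m, C)

# Toy linear inverse problem.
A = rng.standard_normal((20, 5))
x_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = A @ x_true + 0.1 * rng.standard_normal(20)

v = sample_mixing_variables(y, A, 0.1)
x_sample = sample_x_given_v(y, A, 0.1, v)
print(x_sample)
```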
We applied Bayesian Optimal Experimental Design (OED) to the estimation of parameters of the Equilibrium Dispersive Model for two-component chromatography with the Langmuir adsorption isotherm. The estimated coefficients were Henry's coefficients, the total adsorption capacity, and the number of theoretical plates, while the design variables were the injection time and the initial concentration. The Bayesian OED algorithm is based on nested Monte Carlo estimation, which becomes computationally challenging due to the simulation time of the PDE involved in the dispersive model. This complication was relaxed by introducing a surrogate model based on Piecewise Sparse Linear Interpolation. Using the surrogate model instead of the original significantly reduces the simulation time while approximating the solution of the PDE with a high degree of accuracy. Estimating the parameters at the strategic design points provided by OED reduces the uncertainty in the parameter estimates. Additionally, the Bayesian OED methodology indicates no improvement when the number of measurements at temporal nodes is increased above a threshold value.
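A sketch of the nested Monte Carlo estimator of expected information gain (EIG) that underlies such Bayesian OED. The forward model below is a toy stand-in; the actual application would call the Piecewise Sparse Linear Interpolation surrogate of the chromatography model, and the prior and noise level here are assumptions.

```python
# Nested Monte Carlo estimate of the expected information gain for a design d:
# EIG(d) ~ (1/N) sum_n [ log p(y_n | theta_n, d) - log( (1/M) sum_m p(y_n | theta_m, d) ) ].
import numpy as np

rng = np.random.default_rng(1)
sigma = 0.05                                  # measurement noise std (assumed)

def forward(theta, design):
    """Toy forward model; placeholder for the chromatography surrogate."""
    return theta[0] * np.exp(-theta[1] * design)

def log_likelihood(y, theta, design):
    resid = y - forward(theta, design)
    return -0.5 * np.sum(resid**2) / sigma**2   # constant term cancels in the EIG

def eig_nested_mc(design, n_outer=200, n_inner=200):
    thetas_outer = rng.uniform(0.5, 1.5, size=(n_outer, 2))   # prior samples
    thetas_inner = rng.uniform(0.5, 1.5, size=(n_inner, 2))   # fresh prior samples
    eig = 0.0
    for theta in thetas_outer:
        y = forward(theta, design) + sigma * rng.standard_normal(design.shape)
        log_lik = log_likelihood(y, theta, design)
        # Inner MC estimate of the log evidence log p(y | d), with log-sum-exp.
        inner = np.array([log_likelihood(y, t, design) for t in thetas_inner])
        log_evidence = np.log(np.mean(np.exp(inner - inner.max()))) + inner.max()
        eig += (log_lik - log_evidence) / n_outer
    return eig

design = np.linspace(0.1, 2.0, 5)             # e.g. candidate measurement times
print(eig_nested_mc(design))
```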
Recently, vision transformer (ViT) based multimodal learning methods have been proposed to improve the robustness of face anti-spoofing (FAS) systems. However, no existing works explore the fundamental natures (e.g., modality-aware inputs, suitable multimodal pre-training, and efficient finetuning) of vanilla ViT for multimodal FAS. In this paper, we investigate three key factors (i.e., inputs, pre-training, and finetuning) in ViT for multimodal FAS with RGB, Infrared (IR), and Depth. First, in terms of the ViT inputs, we find that leveraging local feature descriptors benefits the ViT on the IR modality but not the RGB or Depth modalities. Second, observing the inefficiency of directly finetuning the whole or partial ViT, we design an adaptive multimodal adapter (AMA), which can efficiently aggregate local multimodal features while freezing the majority of ViT parameters. Finally, considering the task (FAS vs. generic object classification) and modality (multimodal vs. unimodal) gaps, ImageNet pre-trained models might be sub-optimal for the multimodal FAS task. To bridge these gaps, we propose the modality-asymmetric masked autoencoder (M²A²E) for multimodal FAS self-supervised pre-training without costly annotated labels. Compared with the previous modality-symmetric autoencoder, the proposed M²A²E is able to learn more intrinsic task-aware representations and is compatible with modality-agnostic (e.g., unimodal, bimodal, and trimodal) downstream settings. Extensive experiments with both unimodal (RGB, Depth, IR) and multimodal (RGB+Depth, RGB+IR, Depth+IR, RGB+Depth+IR) settings conducted on multimodal FAS benchmarks demonstrate the superior performance of the proposed methods. We hope these findings and solutions can facilitate future research on ViT-based multimodal FAS.
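For orientation, a generic bottleneck adapter of the kind used when finetuning small modules on top of a frozen ViT is sketched below in PyTorch. This is the standard adapter design, not the paper's adaptive multimodal adapter (AMA); dimensions and the bottleneck size are assumptions.

```python
# Generic bottleneck-adapter sketch: small trainable modules are added while the
# ViT backbone stays frozen. Standard design, not the paper's AMA.
import torch
import torch.nn as nn

class Adapter(nn.Module):
    def __init__(self, dim: int, bottleneck: int = 64):
        super().__init__()
        self.down = nn.Linear(dim, bottleneck)   # down-projection
        self.up = nn.Linear(bottleneck, dim)     # up-projection
        self.act = nn.GELU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.up(self.act(self.down(x)))  # residual adapter update

# Usage: apply the adapter to the token sequence output by a frozen ViT block.
tokens = torch.randn(2, 197, 768)                # (batch, tokens, embed dim)
adapter = Adapter(dim=768)
out = adapter(tokens)
print(out.shape)                                 # torch.Size([2, 197, 768])
```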
The discussions around the unsustainability of the dominant socio-economic structures have yet to produce solutions to address the escalating problems we face as a species. Such discussions, this paper argues, are hindered by the limited scope of the proposed solutions within a business-as-usual context as well as by the underlying technological rationale upon which these solutions are developed. In this paper, we conceptualize a radical sustainable alternative to the energy conundrum based on an emerging mode of production and a commons-based political economy. We propose a commons-oriented Energy Internet as a potential system for energy production and consumption, which may be better suited to tackle the current issues society faces. We conclude by referring to some of the challenges that the implementation of such a proposal would entail.
In recent years, Bayesian inference in large-scale inverse problems found in science, engineering, and machine learning has gained significant attention. This paper examines the robustness of the Bayesian approach by analyzing the stability of posterior measures in relation to perturbations in the likelihood potential and the prior measure. We present new stability results using a family of integral probability metrics (divergences) akin to dual problems that arise in optimal transport. Our results stand out from previous works in three directions: (1) we construct new families of integral probability metrics that are adapted to the problem at hand; (2) these new metrics allow us to study both likelihood and prior perturbations in a convenient way; and (3) our analysis accommodates likelihood potentials that are only locally Lipschitz, making the results applicable to a wide range of nonlinear inverse problems. Our theoretical findings are further reinforced through specific and novel examples where the approximation rates of posterior measures are obtained for different types of perturbations, providing a path towards the convergence analysis of recently adapted machine learning techniques for Bayesian inverse problems such as data-driven priors and neural network surrogates.
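For reference, integral probability metrics take the following standard form over a class of test functions; this is the textbook definition, stated for orientation rather than quoted from the paper. Taking the test-function class to be the 1-Lipschitz functions recovers the Wasserstein-1 distance via Kantorovich-Rubinstein duality, which is the optimal-transport connection alluded to above.

```latex
% Standard integral probability metric (IPM) over a test-function class F.
d_{\mathcal{F}}(\mu,\nu) \;=\; \sup_{f \in \mathcal{F}}
  \left| \int f \,\mathrm{d}\mu \;-\; \int f \,\mathrm{d}\nu \right|
```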
We propose a novel Graph Neural Network-based method for segmentation via data fusion of multimodal Scanning Electron Microscope (SEM) images. In most cases, Backscattered Electron (BSE) images obtained using SEM do not contain sufficient information for mineral segmentation. Therefore, imaging is often complemented with point-wise Energy-Dispersive X-ray Spectroscopy (EDS) spectral measurements that provide highly accurate information about the chemical composition but are time-consuming to acquire. This motivates the use of sparse spectral data in conjunction with BSE images for mineral segmentation. The unstructured nature of the spectral data makes most traditional image fusion techniques unsuitable for BSE-EDS fusion. We propose using graph neural networks to fuse the two modalities and segment the mineral phases simultaneously. Our results demonstrate that providing EDS data for as few as 1% of BSE pixels produces accurate segmentation, enabling rapid analysis of mineral samples. The proposed data fusion pipeline is versatile and can be adapted to other domains that involve image data and point-wise measurements.
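One generic way to fuse a dense image modality with sparse point-wise measurements in a GNN is sketched below: each pixel (or superpixel) becomes a node, EDS features are zero-filled with a mask flag where no spectrum was measured, and a simple GCN-style layer propagates information over spatial edges. This is a generic construction under assumed dimensions, not the paper's architecture.

```python
# Generic BSE + sparse-EDS graph fusion sketch with a hand-rolled GCN layer.
import torch
import torch.nn as nn

class SimpleGCNLayer(nn.Module):
    def __init__(self, in_dim, out_dim):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim)

    def forward(self, x, adj):
        # Symmetric normalization: D^{-1/2} (A + I) D^{-1/2}.
        a_hat = adj + torch.eye(adj.size(0))
        deg_inv_sqrt = a_hat.sum(1).pow(-0.5)
        a_norm = deg_inv_sqrt[:, None] * a_hat * deg_inv_sqrt[None, :]
        return torch.relu(self.lin(a_norm @ x))

n_nodes, n_eds_channels, n_classes = 100, 8, 4
bse = torch.rand(n_nodes, 1)                        # dense BSE intensity per node
eds = torch.zeros(n_nodes, n_eds_channels)          # sparse EDS spectra
measured = torch.rand(n_nodes) < 0.01               # ~1% of nodes carry EDS data
eds[measured] = torch.rand(int(measured.sum()), n_eds_channels)
mask = measured.float().unsqueeze(1)                # flag marking measured nodes

x = torch.cat([bse, eds, mask], dim=1)              # fused node features
adj = (torch.rand(n_nodes, n_nodes) < 0.05).float() # placeholder spatial adjacency
adj = ((adj + adj.T) > 0).float()                   # symmetrize

layer1 = SimpleGCNLayer(x.size(1), 32)
head = nn.Linear(32, n_classes)
logits = head(layer1(x, adj))                       # per-node mineral-phase logits
print(logits.shape)                                 # torch.Size([100, 4])
```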
The Multimodal Denoising and Alignment (MMDA) framework enhances multimodal face anti-spoofing by leveraging RGB, Depth, and IR data, achieving superior performance in detecting presentation attacks across diverse domains. MMDA demonstrated lower Half Total Error Rate (HTER) and higher Area Under the Curve (AUC) scores on benchmark datasets compared to existing methods, improving cross-domain generalization and robustness to missing modalities.
Researchers from Lappeenranta-Lahti University of Technology in Finland conducted a preliminary literature review to identify 25 motivators and 30 demotivators for integrating Large Language Models into software engineering education, categorizing these factors across four high-level themes each. This study establishes foundational insights for developing a tailored framework for Finnish higher education institutions.
This paper proposes a novel curriculum for the microprocessors and microcontrollers laboratory course. The proposed curriculum blends structured laboratory experiments with an open-ended project phase, addressing complex engineering problems and activities. Microprocessors and microcontrollers are ubiquitous in modern technology, driving applications across diverse fields. To prepare future engineers for Industry 4.0, effective educational approaches are crucial. The proposed lab enables students to perform hands-on experiments using advanced microprocessors and microcontrollers while leveraging their acquired knowledge by working in teams to tackle self-defined complex engineering problems that utilize these devices and sensors often used in industry. Furthermore, this curriculum fosters multidisciplinary learning and equips students with problem-solving skills that can be applied in real-world scenarios. With recent technological advancements, traditional microprocessors and microcontrollers curricula often fail to capture the complexity of real-world applications. This curriculum addresses this critical gap by incorporating insights from experts in both industry and academia. It provides students with the necessary skills and knowledge to thrive in this rapidly evolving technological landscape, preparing them for success upon graduation. The curriculum integrates project-based learning, where students define complex engineering problems for themselves. This approach actively engages students, fostering a deeper understanding and enhancing their learning capabilities. Statistical analysis shows that the proposed curriculum significantly improves student learning outcomes, particularly in their ability to formulate and solve complex engineering problems, as well as engage in complex engineering activities.
This paper introduces CUQIpy, a versatile open-source Python package for computational uncertainty quantification (UQ) in inverse problems, presented as Part I of a two-part series. CUQIpy employs a Bayesian framework, integrating prior knowledge with observed data to produce posterior probability distributions that characterize the uncertainty in computed solutions to inverse problems. The package offers a high-level modeling framework with concise syntax, allowing users to easily specify their inverse problems, prior information, and statistical assumptions. CUQIpy supports a range of efficient sampling strategies and is designed to handle large-scale problems. Notably, the automatic sampler selection feature analyzes the problem structure and chooses a suitable sampler without user intervention, streamlining the process. With a selection of probability distributions, test problems, computational methods, and visualization tools, CUQIpy serves as a powerful, flexible, and adaptable tool for UQ in a wide selection of inverse problems. Part II of the series focuses on the use of CUQIpy for UQ in inverse problems with partial differential equations (PDEs).
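A minimal usage sketch in the spirit of the CUQIpy modeling framework described above. The exact class and method names are written from memory of typical CUQIpy examples and should be checked against the package documentation; treat the specific calls as assumptions.

```python
# Assumption-laden sketch of a CUQIpy-style Bayesian inverse problem; verify
# names against the CUQIpy documentation before use.
import numpy as np
import cuqi

# Built-in test problem: 1D deconvolution with forward model A and data y_data.
A, y_data, info = cuqi.testproblem.Deconvolution1D().get_components()

# Prior and data distribution for the Bayesian model y = A x + noise.
x = cuqi.distribution.Gaussian(np.zeros(A.domain_dim), 0.1)
y = cuqi.distribution.Gaussian(A @ x, 0.05**2)

# Bayesian problem with automatic sampler selection, then posterior sampling.
BP = cuqi.problem.BayesianProblem(y, x)
BP.set_data(y=y_data)
samples = BP.sample_posterior(1000)
samples.plot_ci(exact=info.exactSolution)   # credible interval vs. exact solution
```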
Artificial Intelligence (AI) presents transformative opportunities for industries and society, but its responsible development is essential to prevent unintended consequences. Ethically sound AI systems demand strategic planning, strong governance, and an understanding of the key drivers that promote responsible practices. This study aims to identify and prioritize the motivators that drive the ethical development of AI systems. A Multivocal Literature Review (MLR) and a questionnaire-based survey were conducted to capture current practices in ethical AI. We applied Interpretive Structural Modeling (ISM) to explore the relationships between motivator categories, followed by MICMAC analysis to classify them by their driving and dependence power. Fuzzy TOPSIS was used to rank these motivators by importance. Twenty key motivators were identified and grouped into eight categories: Human Resource, Knowledge Integration, Coordination, Project Administration, Standards, Technology Factor, Stakeholders, and Strategy & Matrices. ISM results showed that 'Human Resource' and 'Coordination' heavily influence other factors. MICMAC analysis placed categories like Human Resource (CA1), Coordination (CA3), Stakeholders (CA7), and Strategy & Matrices (CA8) in the independent cluster, indicating high driving but low dependence power. Fuzzy TOPSIS ranked motivators such as promoting team diversity, establishing AI governance bodies, appointing oversight leaders, and ensuring data privacy as most critical. To support ethical AI adoption, organizations should align their strategies with these motivators and integrate them into their policies, governance models, and development frameworks.
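For intuition, the ranking step can be illustrated with a crisp TOPSIS sketch. The study applies fuzzy TOPSIS, which operates on fuzzy (e.g., triangular) scores rather than the crisp numbers used here, and the decision matrix and weights below are invented for illustration only.

```python
# Crisp TOPSIS sketch for ranking motivators (fuzzy TOPSIS replaces the crisp
# scores with fuzzy numbers). Matrix values and weights are illustrative.
import numpy as np

# Rows: candidate motivators; columns: benefit criteria scored by experts.
X = np.array([[7.0, 8.0, 6.0],
              [6.0, 7.0, 9.0],
              [8.0, 6.0, 7.0]])
w = np.array([0.5, 0.3, 0.2])                  # criterion weights

V = w * X / np.linalg.norm(X, axis=0)          # weighted, vector-normalized matrix
ideal, anti = V.max(axis=0), V.min(axis=0)     # ideal best / ideal worst solutions
d_best = np.linalg.norm(V - ideal, axis=1)
d_worst = np.linalg.norm(V - anti, axis=1)
closeness = d_worst / (d_best + d_worst)       # higher = closer to the ideal

print(np.argsort(-closeness))                  # motivators ranked by closeness
```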
Network function virtualization (NFV) is a promising technology for making 5G networks flexible and agile. NFV decreases operators' OPEX and CAPEX by decoupling network functions from the physical hardware that runs them. In NFV, a user's service request can be viewed as a service function chain (SFC) consisting of several virtual network functions (VNFs) connected through virtual links. Resource allocation in NFV is done through a centralized authority called the NFV Orchestrator (NFVO). This centralized authority suffers from drawbacks such as being a single point of failure and raising security concerns. Blockchain (BC) technology is able to address these problems by decentralizing resource allocation. The drawbacks of the NFVO in the NFV architecture, together with the exceptional BC characteristics that address these problems, motivate us to focus on BC-based NFV resource allocation to users' SFCs without the need for an NFVO. To this end, we assume there are two types of users: users who send SFC requests (SFC-requesting users) and users who perform the mining process (miner users). For SFC-requesting users, we formulate the NFV resource allocation (NFV-RA) problem as a multi-objective problem that simultaneously minimizes the energy consumption and the utilized resource cost.
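One common way to handle such a bi-objective placement problem is a weighted-sum scalarization over binary placement variables, shown below. This is a generic illustration of the structure, not necessarily the paper's exact model; the symbols are assumptions.

```latex
% Generic weighted-sum scalarization of a bi-objective NFV-RA problem:
% x_{f,n} = 1 if VNF f of the SFC is placed on node n; E and C are the
% per-placement energy and resource cost, r_f the resource demand, R_n the
% node capacity. Illustrative form only.
\min_{x}\; \alpha \sum_{f,n} E_{f,n}\, x_{f,n} + (1-\alpha) \sum_{f,n} C_{f,n}\, x_{f,n}
\quad \text{s.t.} \quad
\sum_{n} x_{f,n} = 1 \;\; \forall f, \qquad
\sum_{f} r_{f}\, x_{f,n} \le R_{n} \;\; \forall n, \qquad
x_{f,n} \in \{0,1\}.
```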
We propose a novel, accurate, and less demanding method for detecting knots on the surface of wooden logs using multimodal data fusion. Knots are a primary factor affecting the quality of sawn timber, making their detection fundamental to any timber grading or cutting optimization system. While X-ray computed tomography provides accurate knot locations and internal structures, it is often too slow or expensive for practical use. An attractive alternative is to use fast and cost-effective log surface measurements, such as laser scanners or RGB cameras, to detect surface knots and estimate the internal structure of the wood. However, due to the small size of knots and noise caused by factors such as bark and other natural variations, detection accuracy often remains low when only one measurement modality is used. In this paper, we demonstrate that a data fusion pipeline consisting of separate streams for RGB and point cloud data, combined by a late fusion module, achieves higher knot detection accuracy than using either modality alone. We further propose a simple yet efficient sawing angle optimization method that utilizes surface knot detections and cross-correlation to minimize the number of unwanted arris knots, demonstrating its benefits over randomized sawing angles.
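A hypothetical sketch of how cross-correlation could relate knot detections to a sawing angle: build an angular histogram of detected surface knots around the log, circularly cross-correlate it with a template marking the four arris directions, and pick the rotation with the least overlap. The details (bin size, template, detections) are invented and will differ from the paper's actual optimization.

```python
# Hypothetical sawing-angle selection via circular cross-correlation between an
# angular knot histogram and an arris-position template. Illustrative only.
import numpy as np

n_bins = 360                                                # 1-degree bins around the log
knot_angles_deg = np.array([10, 12, 95, 200, 202, 310])     # detected knot centers (made up)
hist = np.zeros(n_bins)
np.add.at(hist, knot_angles_deg % n_bins, 1.0)

arris_template = np.zeros(n_bins)                           # arrises every 90 deg for a square cant
arris_template[[0, 90, 180, 270]] = 1.0

# Circular cross-correlation via FFT: overlap[theta] = sum_k hist[k] * template[k - theta].
overlap = np.real(np.fft.ifft(np.fft.fft(hist) * np.conj(np.fft.fft(arris_template))))
best_angle = int(np.argmin(overlap))                        # rotation with least knot/arris overlap
print(best_angle)
```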
Hybrid work, a fusion of different work environments that allows employees to work in and outside their offices, represents a new frontier for agile researchers to explore. However, due to the nascent nature of the research phenomenon, we have yet to achieve a good understanding of the research terrain that forms when hybrid work meets agile software development. In this systematic mapping study, we aimed to provide a good understanding of this emerging research area. The systematic process we followed led to a collection of 12 primary studies, fewer than we expected. All the papers are empirical studies, with most of them employing case studies as the research methodology. The people-centric nature of agile methods is yet to be adequately reflected in the studies in this area. Similarly, there is a lack of a richer understanding of hybrid work in terms of flexible work arrangements. Our mapping study identified various research opportunities that can be explored in future research.
Containerization in multi-cloud environments has received significant attention in recent years from both academic research and industrial development perspectives. However, no effort has been made to systematically investigate the state of research on this topic. The aim of this research is to systematically identify and categorize the multiple aspects of containerization in multi-cloud environments. We conducted a Systematic Mapping Study (SMS) on the literature published between January 2013 and July 2024. A total of 121 studies were selected, and the key results are: (1) Four leading themes on containerization in multi-cloud environments were identified: 'Scalability and High Availability', 'Performance and Optimization', 'Security and Privacy', and 'Multi-Cloud Container Monitoring and Adaptation'. (2) Ninety-eight patterns and strategies for containerization in multi-cloud environments were classified across 10 subcategories and 4 categories. (3) Ten quality attributes were identified, along with 47 associated tactics. (4) Four catalogs consisting of challenges and solutions related to security, automation, deployment, and monitoring were introduced. The results of this SMS will assist researchers and practitioners in pursuing further studies on containerization in multi-cloud environments and in developing specialized solutions for containerized applications in multi-cloud environments.
Context: Quantum software systems represent a new realm in software engineering, utilizing quantum bits (qubits) and quantum gates (Qgates) to solve complex problems more efficiently than their classical counterparts. Agile software development approaches are considered to address many inherent challenges in quantum software development, but their effective integration remains unexplored. Objective: This study investigates the key causes of challenges that could hinder the adoption of traditional agile approaches in quantum software projects and develops an Agile Quantum Software Project Success Prediction Model (AQSSPM). Methodology: First, we identified 19 causes of challenging factors discussed in our previous study, which potentially impact agile quantum project success. Second, a survey was conducted to collect expert opinions on these causes, and a Genetic Algorithm (GA) with a Naive Bayes Classifier (NBC) and Logistic Regression (LR) was applied to develop the AQSSPM. Results: Using GA with NBC, the project success probability improved from 53.17% to 99.68%, with cost reductions from 0.463% to 0.403%. Similarly, GA with LR increased success rates from 55.52% to 98.99%, with costs decreasing from 0.496% to 0.409% after 100 iterations. Both methods showed a strong positive correlation (rs = 0.955) in the ranking of causes, with no significant difference between them (t = 1.195, p = 0.240 > 0.05). Conclusion: The AQSSPM highlights critical focus areas for efficiently and successfully implementing agile quantum projects while considering the cost factor of a particular project.
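A hedged sketch of the GA-plus-classifier idea behind such a prediction model: a Naive Bayes classifier maps the presence or absence of the challenge causes to project success, and a genetic algorithm searches for which causes to mitigate so that predicted success is maximized at low cost. The training data, cost values, fitness weighting, and GA settings below are synthetic placeholders, not the AQSSPM.

```python
# Synthetic sketch of combining a Genetic Algorithm with a Naive Bayes
# classifier to pick which challenge causes to mitigate. Illustrative only.
import numpy as np
from sklearn.naive_bayes import BernoulliNB

rng = np.random.default_rng(0)
n_causes = 19

# Synthetic survey data: feature = 1 if a cause is present, label 1 = success.
X = rng.integers(0, 2, size=(300, n_causes))
y = (X.sum(axis=1) < n_causes / 2).astype(int)        # toy rule: fewer causes -> success
clf = BernoulliNB().fit(X, y)

cost = rng.uniform(0.1, 1.0, size=n_causes)           # cost of mitigating each cause

def fitness(mask):
    """mask[i] = 1 means cause i is mitigated; reward success, penalize cost."""
    causes_present = 1 - mask
    p_success = clf.predict_proba(causes_present.reshape(1, -1))[0, 1]
    return p_success - 0.05 * cost @ mask

pop = rng.integers(0, 2, size=(40, n_causes))          # random initial population
for _ in range(100):                                   # generations
    scores = np.array([fitness(ind) for ind in pop])
    parents = pop[np.argsort(scores)[-20:]]            # keep the fittest half
    cut = rng.integers(1, n_causes, size=20)           # one-point crossover
    children = np.array([np.concatenate([parents[i][:c], parents[(i + 1) % 20][c:]])
                         for i, c in enumerate(cut)])
    children ^= (rng.random(children.shape) < 0.02)    # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("mitigate causes:", np.flatnonzero(best), "fitness:", fitness(best))
```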