University of Castilla-La Mancha
This research quantifies the energy consumption differences between Unity and Unreal Engine across common video game functionalities such as physics, static mesh rendering, and dynamic mesh rendering. It establishes that Unity is 351% more energy-efficient for physics and 17% more efficient for static rendering, while Unreal Engine is 26% more efficient for dynamic rendering, providing concrete data on the energy profiles of these prevalent game engines.
As quantum computers advance, the complexity of the software they can execute increases as well. To ensure this software is efficient, maintainable, reusable, and cost-effective (key qualities of any industry-grade software), mature software engineering practices must be applied throughout its design, development, and operation. However, the significant differences between classical and quantum software make it challenging to directly apply classical software engineering methods to quantum systems. This challenge has led to the emergence of Quantum Software Engineering as a distinct field within the broader software engineering landscape. In this work, a group of active researchers analyse in depth the current state of quantum software engineering research. From this analysis, the key areas of quantum software engineering are identified and explored in order to determine the most relevant open challenges that should be addressed in the coming years. These challenges help identify necessary breakthroughs and future research directions for advancing Quantum Software Engineering.
The application of kernel-based Machine Learning (ML) techniques to discrete choice modelling using large datasets often faces challenges due to memory requirements and the considerable number of parameters involved in these models. This complexity hampers the efficient training of large-scale models. This paper addresses these problems of scalability by introducing the Nyström approximation for Kernel Logistic Regression (KLR) on large datasets. The study begins by presenting a theoretical analysis in which: i) the set of KLR solutions is characterised, ii) an upper bound to the solution of KLR with Nyström approximation is provided, and finally iii) a specialisation of the optimisation algorithms to Nyström KLR is described. After this, the Nyström KLR is computationally validated. Four landmark selection methods are tested, including basic uniform sampling, a k-means sampling strategy, and two non-uniform methods grounded in leverage scores. The performance of these strategies is evaluated using large-scale transport mode choice datasets and is compared with traditional methods such as Multinomial Logit (MNL) and contemporary ML techniques. The study also assesses the efficiency of various optimisation techniques for the proposed Nyström KLR model. The performance of gradient descent, Momentum, Adam, and L-BFGS-B optimisation methods is examined on these datasets. Among these strategies, the k-means Nyström KLR approach emerges as a successful solution for applying KLR to large datasets, particularly when combined with the L-BFGS-B and Adam optimisation methods. The results highlight the ability of this strategy to handle datasets exceeding 200,000 observations while maintaining robust performance.
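To make the approach above concrete, the following is a minimal, illustrative Python sketch of Nyström-approximated KLR for the binary case, assuming an RBF kernel, k-means landmark selection and L-BFGS-B optimisation via SciPy; the function names, hyperparameters and regularisation term are our assumptions and are not taken from the paper.

```python
# Minimal sketch of Nystrom-approximated Kernel Logistic Regression (binary case),
# assuming an RBF kernel, k-means landmark selection and L-BFGS-B optimisation.
# Names (n_landmarks, gamma, reg) are illustrative, not from the paper.
import numpy as np
from scipy.optimize import minimize
from sklearn.cluster import KMeans
from sklearn.metrics.pairwise import rbf_kernel

def nystrom_features(X, landmarks, gamma):
    """Map X to the Nystrom feature space: Phi = K(X, Z) W^{-1/2}."""
    C = rbf_kernel(X, landmarks, gamma=gamma)          # n x m cross-kernel
    W = rbf_kernel(landmarks, landmarks, gamma=gamma)  # m x m landmark kernel
    eigval, eigvec = np.linalg.eigh(W)
    eigval = np.clip(eigval, 1e-12, None)              # numerical safeguard
    W_inv_sqrt = eigvec @ np.diag(eigval ** -0.5) @ eigvec.T
    return C @ W_inv_sqrt                              # n x m feature matrix

def fit_nystrom_klr(X, y, n_landmarks=100, gamma=0.1, reg=1e-3, seed=0):
    """Binary KLR on Nystrom features; y must be in {0, 1}."""
    km = KMeans(n_clusters=n_landmarks, random_state=seed, n_init=10).fit(X)
    Z = km.cluster_centers_                            # k-means landmarks
    Phi = nystrom_features(X, Z, gamma)

    def neg_log_lik(w):
        logits = Phi @ w
        # Regularised negative log-likelihood of logistic regression
        return np.sum(np.logaddexp(0.0, logits) - y * logits) + 0.5 * reg * w @ w

    def grad(w):
        p = 1.0 / (1.0 + np.exp(-(Phi @ w)))
        return Phi.T @ (p - y) + reg * w

    res = minimize(neg_log_lik, np.zeros(Phi.shape[1]), jac=grad, method="L-BFGS-B")
    return Z, res.x

# usage: Z, w = fit_nystrom_klr(X_train, y_train)
#        p_test = 1 / (1 + np.exp(-nystrom_features(X_test, Z, 0.1) @ w))
```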
The Integrated Nested Laplace Approximation (INLA) is a convenient way to obtain approximations to the posterior marginals for parameters in Bayesian hierarchical models when the latent effects can be expressed as a Gaussian Markov Random Field (GMRF). In addition, its implementation in the R-INLA package for the R statistical software provides an easy way to fit models using INLA in practice. R-INLA implements a number of widely used latent models, including several spatial models. In addition, R-INLA can fit models in a fraction of the time that other computationally intensive methods (e.g., Markov chain Monte Carlo) take to fit the same model. Although INLA provides a fast approximation to the marginals of the model parameters, it is difficult to use with models not implemented in R-INLA. It is also difficult to make multivariate posterior inference on the parameters of the model, as INLA focuses on the posterior marginals and not the joint posterior distribution. In this paper we describe how to use INLA within the Metropolis-Hastings algorithm to fit spatial models and estimate the joint posterior distribution of a reduced number of parameters. We illustrate the benefits of this new method with two examples on spatial econometrics and disease mapping, where complex spatial models with several spatial structures need to be fitted.
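As a hedged sketch of the combination described above, the Metropolis-Hastings acceptance step can be written as follows, where z_c denotes the reduced set of parameters sampled with MH, q is the proposal density, and the conditional marginal likelihood is the one reported by an INLA fit with z_c held fixed; the notation is ours and only illustrative.

```latex
% Hedged sketch of the MH-within-INLA acceptance step; z_c is the reduced set
% of parameters sampled by Metropolis-Hastings, q is the proposal density, and
% pi(y | z_c) is the conditional marginal likelihood returned by INLA with z_c
% held fixed. Notation is illustrative.
\[
  \alpha\bigl(z_c^{(t)}, z_c^{*}\bigr)
  = \min\left\{ 1,\;
    \frac{\pi\bigl(\mathbf{y}\mid z_c^{*}\bigr)\,\pi\bigl(z_c^{*}\bigr)\,
          q\bigl(z_c^{(t)}\mid z_c^{*}\bigr)}
         {\pi\bigl(\mathbf{y}\mid z_c^{(t)}\bigr)\,\pi\bigl(z_c^{(t)}\bigr)\,
          q\bigl(z_c^{*}\mid z_c^{(t)}\bigr)}
  \right\}.
\]
```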
Software development methods are usually not applied by the book. Companies are under pressure to continuously deploy software products that meet market needs and stakeholders' requests. To implement efficient and effective development processes, companies utilize multiple frameworks, methods and practices, and combine these into hybrid methods. A common combination contains a rich management framework to organize and steer projects, complemented with a number of smaller practices providing the development teams with tools to complete their tasks. In this paper, based on 732 data points collected through an international survey, we study how software development processes are used in practice. Our results show that 76.8% of the companies implement hybrid methods. Company size, as well as the strategy for devising and evolving hybrid methods, affects the suitability of the chosen process for reaching company or project goals. Our findings show that companies that combine planned improvement programs with process evolution can increase the suitability of their process by up to 5%.
Software process models need to be variant-rich, in the sense that they should be systematically customizable to specific project goals and project environments. It is currently very difficult to model Variant-Rich Processes (VRPs) because variability mechanisms are largely missing from modern process modeling languages. Variability mechanisms from other domains, such as programming languages, might be suitable for representing variability and could be adapted to the modeling of software processes. Mechanisms from Software Product Line Engineering (SPLE) and concepts from Aspect-Oriented Software Engineering (AOSE) show particular promise for modeling variability. This paper presents an approach that integrates variability concepts from SPLE and AOSE in the design of a VRP approach for the systematic support of tailoring in software processes. The approach has also been implemented in SPEM, resulting in the vSPEM notation. It has been used in a pilot application, which indicates that our AOSE-based approach can make process tailoring easier and more productive.
FASILL (an acronym of "Fuzzy Aggregators and Similarity Into a Logic Language") is a fuzzy logic programming language with implicit/explicit truth degree annotations, a great variety of connectives and unification by similarity. FASILL integrates and extends features coming from MALP (Multi-Adjoint Logic Programming, a fuzzy logic language with explicitly annotated rules) and Bousi~Prolog (which uses a weak unification algorithm and is well suited for flexible query answering). Hence, it properly manages similarity and truth degrees in a single framework, combining the expressive benefits of both languages. This paper presents the main features and implementation details of FASILL. Throughout the paper we describe its syntax and operational semantics, and we outline the implementation of the lattice module and the similarity module, two of the main building blocks of the new programming environment, which enriches the FLOPER system developed in our research group.
Context/Background: process and practice adoption is a key element in modern software process improvement initiatives, yet many such initiatives fail. Goal: this paper presents a preliminary version of a usability model for software development processes and practices. Method: the model integrates different perspectives, namely the ISO Standard on Systems and Software Quality Models (ISO 25010) and classic usability literature. To illustrate the feasibility of the model, two experts applied it to Scrum. Results: metric values were mostly positive and consistent between evaluators. Conclusions: we find the model feasible to use and potentially beneficial.
The emergence of a variety of Machine Learning (ML) approaches for travel mode choice prediction poses an interesting question to transport modellers: which models should be used for which applications? The answer to this question goes beyond simple predictive performance, and is instead a balance of many factors, including behavioural interpretability and explainability, computational complexity, and data efficiency. There is a growing body of research which attempts to compare the predictive performance of different ML classifiers with classical random utility models. However, existing studies typically analyse only the disaggregate predictive performance, ignoring other aspects affecting model choice. Furthermore, many studies are affected by technical limitations, such as the use of inappropriate validation schemes, incorrect sampling for hierarchical data, lack of external validation, and the exclusive use of discrete metrics. We address these limitations by conducting a systematic comparison of different modelling approaches, across multiple modelling problems, in terms of the key factors likely to affect model choice (out-of-sample predictive performance, accuracy of predicted market shares, extraction of behavioural indicators, and computational efficiency). We combine several real-world datasets with synthetic datasets, where the data generation function is known. The results indicate that the models with the highest disaggregate predictive performance (namely extreme gradient boosting and random forests) provide poorer estimates of behavioural indicators and aggregate mode shares, and are more expensive to estimate, than other models, including deep neural networks and Multinomial Logit (MNL). It is further observed that the MNL model performs robustly in a variety of situations, though ML techniques can improve the estimates of behavioural indices such as Willingness to Pay.
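As an illustration of the validation concerns raised above (not the paper's code), the sketch below uses grouped cross-validation so that observations from the same individual never fall in both training and test folds, and scores models with log-loss rather than accuracy alone; the column names, models and fold count are assumptions.

```python
# Illustrative sketch of grouped validation for hierarchical (panel) mode-choice
# data, scored with a probability-based metric. Column names ("person_id",
# "mode") and the chosen models are assumptions, not the paper's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import GroupKFold

def compare_models(df, feature_cols, target_col="mode", group_col="person_id"):
    X, y, groups = df[feature_cols].values, df[target_col].values, df[group_col].values
    models = {
        "MNL": LogisticRegression(max_iter=1000),        # multinomial logit baseline
        "RandomForest": RandomForestClassifier(n_estimators=300),
        "Boosting": GradientBoostingClassifier(),
    }
    results = {name: [] for name in models}
    # Folds are split by individual, so panel observations from one respondent
    # never leak from training into test.
    for train_idx, test_idx in GroupKFold(n_splits=5).split(X, y, groups):
        for name, model in models.items():
            model.fit(X[train_idx], y[train_idx])
            proba = model.predict_proba(X[test_idx])
            results[name].append(log_loss(y[test_idx], proba, labels=model.classes_))
    return {name: float(np.mean(scores)) for name, scores in results.items()}
```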
Glyphosate contamination in waters is becoming a major health problem that needs to be urgently addressed, as accidental spraying, drift or leakage of this highly water-soluble herbicide can impact aquatic ecosystems. Researchers are increasingly concerned about exposure to glyphosate and the risks it poses to human health, since it may cause substantial damage, even in small doses. The detection of glyphosate residues in waters is not a simple task, as it requires complex and expensive equipment and qualified personnel. New technological tools need to be designed and developed, based on proven, but also cost-efficient, agile and user-friendly, analytical techniques, which can be used in the field and in the lab, enabled by connectivity and multi-platform software applications. This paper presents the design, development and testing of an innovative low-cost VIS-NIR (Visible and Near-Infrared) spectrometer (called SpectroGLY), based on IoT (Internet of Things) technologies, which allows potential glyphosate contamination in waters to be detected. SpectroGLY combines the functional concept of a traditional lab spectrometer with the IoT technological concept, enabling the integration of several connectivity options for rural and urban settings and digital visualization and monitoring platforms (a mobile app and a web dashboard). Thanks to its portability, it can be used in any context and provides results in 10 minutes. Additionally, it is unnecessary to transfer the sample to a laboratory (optimizing time, costs and the capacity for corrective actions by the authorities). In short, this paper proposes an innovative, low-cost, agile and highly promising solution to avoid potential poisoning that may occur due to ingestion of water contaminated by this herbicide.
Magnetic resonance imaging (MRI) is the standard modality to understand human brain structure and function in vivo (antemortem). Decades of research in human neuroimaging have led to the widespread development of methods and tools that provide automated volume-based segmentations and surface-based parcellations, which help localize brain functions to specialized anatomical regions. Recently, ex vivo (postmortem) imaging of the brain has opened up avenues to study brain structure at sub-millimeter, ultra-high resolution, revealing details that cannot be observed with in vivo MRI. Unfortunately, there has been limited methodological development in ex vivo MRI, primarily due to a lack of datasets and the limited number of centers with such imaging resources. Therefore, in this work, we present a one-of-its-kind dataset of 82 ex vivo T2w whole-brain-hemisphere MRIs at 0.3 mm isotropic resolution spanning Alzheimer's disease and related dementias. We adapted and developed a fast and easy-to-use automated surface-based pipeline to parcellate, for the first time, ultra-high-resolution ex vivo brain tissue at the native subject-space resolution using the Desikan-Killiany-Tourville (DKT) brain atlas. This allows us to perform vertex-wise analysis in the template space and thereby link morphometry measures with pathology measurements derived from histology. We will open-source our dataset, a Docker container, and Jupyter notebooks providing a ready-to-use, out-of-the-box set of tools and command-line options to advance ex vivo MRI clinical brain imaging research, on the project webpage.
In the context of the rapid dissemination of multimedia content, identifying disinformation on social media platforms such as TikTok represents a significant challenge. This study introduces a hybrid framework that combines the computational power of deep learning with the interpretability of fuzzy logic to detect suspected disinformation in TikTok videos. The methodology comprises two core components: a multimodal feature analyser that extracts and evaluates data from text, audio, and video; and a multimodal disinformation detector based on fuzzy logic. These components operate in conjunction to assess how likely a video is to spread disinformation, drawing on human behavioural cues such as body language, speech patterns, and text coherence. Two experiments were conducted: one focusing on context-specific disinformation and the other on the scalability of the model across broader topics. For each video evaluated, a high-quality, comprehensive, well-structured report is generated, providing a detailed view of the detected disinformation behaviours.
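To illustrate the fuzzy-logic detector described above, the following minimal sketch aggregates three normalised behavioural cue scores with Mamdani-style rules and centroid defuzzification; the membership functions, rule base and variable names are illustrative assumptions rather than the paper's actual design.

```python
# Minimal sketch of a fuzzy aggregation stage, assuming the multimodal analyser
# yields three scores in [0, 1] (text incoherence, speech anomaly, body-language
# anomaly). Membership functions and rules are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] with peak at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9), (c - x) / (c - b + 1e-9)), 0.0)

def low(x):  return tri(x, -0.5, 0.0, 0.5)
def mid(x):  return tri(x, 0.0, 0.5, 1.0)
def high(x): return tri(x, 0.5, 1.0, 1.5)

def suspicion(text_incoherence, speech_anomaly, body_anomaly):
    """Mamdani-style inference: min for AND, max for rule aggregation."""
    # Rule 1: if any cue is high, suspicion is high.
    r_high = max(high(text_incoherence), high(speech_anomaly), high(body_anomaly))
    # Rule 2: if all cues are medium, suspicion is medium.
    r_mid = min(mid(text_incoherence), mid(speech_anomaly), mid(body_anomaly))
    # Rule 3: if all cues are low, suspicion is low.
    r_low = min(low(text_incoherence), low(speech_anomaly), low(body_anomaly))
    # Centroid defuzzification over a coarse output universe.
    u = np.linspace(0.0, 1.0, 101)
    agg = np.maximum.reduce([np.minimum(r_low, low(u)),
                             np.minimum(r_mid, mid(u)),
                             np.minimum(r_high, high(u))])
    return float(np.sum(u * agg) / (np.sum(agg) + 1e-9))

# usage: suspicion(0.8, 0.3, 0.6) -> suspicion score in [0, 1]
```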
This paper explores the integration of hypothetical reasoning into an efficient implementation of the fuzzy logic language Bousi~Prolog. To this end, we first analyse what should be expected from a logic inference system equipped with so-called embedded implication in order to model the solving of goals with respect to assumptions. We start with a propositional system and incrementally build more complex systems and implementations to satisfy the requirements imposed by a system like Bousi~Prolog. Finally, we propose an inference system, an operational semantics, and a translation function to generate efficient Prolog programs from Bousi~Prolog programs.
Objective: The most relevant source of signal contamination in the cardiac electrophysiology (EP) laboratory is the ubiquitous powerline interference (PLI). To reduce this perturbation, algorithms including common fixed-bandwidth and adaptive notch filters have been proposed. Although such methods have proven to add artificial fractionation to intra-atrial electrograms (EGMs), they are still frequently used. However, such morphological alteration can hinder the accurate interpretation of EGMs, especially when evaluating the mechanisms supporting atrial fibrillation (AF), which is the most common cardiac arrhythmia. Given the clinical relevance of AF, a novel algorithm aimed at reducing PLI on highly contaminated bipolar EGMs while simultaneously preserving their morphology is proposed. Approach: The method is based on wavelet shrinkage and has been validated through customized indices on a set of synthesized EGMs to accurately quantify the achieved level of PLI reduction and signal morphology alteration. Visual validation of the algorithm's performance has also been included for some real EGM excerpts. Main results: The method outperformed common filtering-based and wavelet-based strategies in the analyzed scenario. Moreover, it possesses advantages such as insensitivity to amplitude and frequency variations in the PLI, and the capability of jointly removing several interferences. Significance: The use of this algorithm in routine cardiac EP studies may enable an improved and truthful evaluation of AF mechanisms.
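For orientation, the snippet below is a generic wavelet-shrinkage skeleton (decompose, threshold the detail coefficients, reconstruct), not the paper's PLI-specific algorithm; the wavelet, decomposition level and universal threshold are assumptions.

```python
# Generic wavelet-shrinkage sketch (not the paper's specific PLI algorithm):
# decompose the contaminated EGM, soft-threshold the detail coefficients with a
# universal threshold, and reconstruct. Wavelet and level are assumptions.
import numpy as np
import pywt

def wavelet_shrinkage(egm, wavelet="db4", level=6):
    coeffs = pywt.wavedec(egm, wavelet, level=level)
    # Noise level estimated from the finest detail coefficients (MAD estimator).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thr = sigma * np.sqrt(2.0 * np.log(len(egm)))        # universal threshold
    coeffs[1:] = [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[: len(egm)]

# usage: clean = wavelet_shrinkage(bipolar_egm)
```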
Although catheter ablation (CA) is still the first-line treatment for persistent atrial fibrillation (AF) patients, its limited long-term success rate has motivated clinical interest in preoperative prediction of the procedure's outcome, in order to optimize patient selection and limit repeated procedures, hospitalization rates, and treatment costs. In this respect, the dominant frequency (DF) and the amplitude of fibrillatory waves (f-waves) reflected on the ECG have provided promising results. Hence, this work explores the ability of a novel set of frequency and amplitude f-wave features, such as spectral entropy (SE), spectral flatness measure (SFM), and amplitude spectrum area (AMSA), along with DF and normalized f-wave amplitude (NFWA), to improve CA outcome prediction. Although all single indices reported statistically significant differences between patients who relapsed to AF and those who maintained sinus rhythm after a follow-up of 9 months, for 204 6-second ECG intervals extracted from 51 persistent AF patients, they obtained a limited discriminant ability ranging between 55 and 62%, which was improved by 15-23% when NFWA, SE and AMSA were combined. Consequently, this combination of frequency and amplitude features of the f-waves seems to provide new insights into atrial substrate remodeling, which could be helpful in improving preoperative CA outcome prediction.
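For reference, the following sketch computes the kind of frequency-domain f-wave features named above (SE, SFM, AMSA and DF) from a Welch PSD of the extracted atrial activity; the 3-12 Hz analysis band, the PSD settings and the use of the square root of the PSD as the amplitude spectrum are assumptions, not necessarily the paper's choices.

```python
# Sketch of frequency-domain f-wave features from a Welch PSD. The analysis
# band and PSD parameters are assumptions, not necessarily the paper's.
import numpy as np
from scipy.signal import welch

def fwave_features(fwaves, fs=1000, band=(3.0, 12.0)):
    freqs, psd = welch(fwaves, fs=fs, nperseg=4096)
    mask = (freqs >= band[0]) & (freqs <= band[1])
    f, p = freqs[mask], psd[mask]

    p_norm = p / np.sum(p)
    # Normalised spectral entropy of the in-band PSD.
    spectral_entropy = -np.sum(p_norm * np.log(p_norm + 1e-12)) / np.log(len(p_norm))
    # Spectral flatness: geometric mean over arithmetic mean of the PSD.
    spectral_flatness = np.exp(np.mean(np.log(p + 1e-12))) / np.mean(p)
    # AMSA: amplitude spectrum (here sqrt of PSD, up to scale) weighted by frequency.
    amsa = np.sum(np.sqrt(p) * f)
    dominant_frequency = f[np.argmax(p)]
    return {"SE": spectral_entropy, "SFM": spectral_flatness,
            "AMSA": amsa, "DF": dominant_frequency}
```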
As with most cardiac arrhythmias, atrial fibrillation (AF) is primarily treated by catheter ablation (CA). However, the mid-term recurrence rate of this procedure in persistent AF patients is still significant, and the preoperative prediction of its outcome is clinically interesting for selecting the candidates who could benefit the most from the intervention. This context encouraged the study of C0 complexity as a novel predictor, because it estimates the organization of the power spectral distribution (PSD) of the fibrillatory waves (f-waves). For that purpose, the PSD was divided into two divergent components using a threshold, theta, obtained by multiplying the mean value of the PSD by a factor, alpha, ranging between 1.5 and 2.5. On a database of 74 patients, the values of C0 complexity computed for all alpha factors reported statistically significant differences between the patients who maintained sinus rhythm and those who relapsed to AF after a follow-up of 9 months. They also showed higher values of sensitivity (Se), specificity (Sp), and accuracy (Acc) than the well-known predictors of dominant frequency (DF) and f-wave amplitude. Moreover, the combination of the DF and the C0 complexity computed with alpha = 2, via a decision tree, improved classification to Se, Sp and Acc values of 75.33%, 77.33% and 76.58%, respectively. These results highlight the relevance of the f-wave PSD distribution for anticipating CA outcome in persistent AF patients.
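The threshold construction described above can be sketched as follows, using the classical definition of C0 complexity in which the spectrum is split at theta = alpha times the mean spectral power and C0 is the fraction of signal energy left in the residual; the use of the raw FFT power spectrum as the PSD estimate is an assumption.

```python
# Sketch of the classical C0 complexity as described above: split the spectrum
# at theta = alpha * mean(|X(k)|^2), keep the super-threshold ("regular") part,
# and measure the energy fraction of the residual. alpha typically in [1.5, 2.5].
import numpy as np

def c0_complexity(x, alpha=2.0):
    x = np.asarray(x, dtype=float)
    X = np.fft.fft(x)
    power = np.abs(X) ** 2
    theta = alpha * np.mean(power)            # threshold on the power spectrum
    X_regular = np.where(power > theta, X, 0.0)
    x_regular = np.real(np.fft.ifft(X_regular))
    residual = x - x_regular                   # "irregular" component
    return np.sum(residual ** 2) / np.sum(x ** 2)

# usage: c0 = c0_complexity(fwave_segment, alpha=2.0)
```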
This is a follow-up to a paper by Fernández-Bonder-Ritorto-Salort [8], where the classical concept of H-convergence was extended to fractional p-Laplace type operators. In this short paper we provide an explicit characterization of this notion by showing that weak-* convergence of the coefficients is an equivalent condition for H-convergence of the sequence of nonlocal operators. This result takes advantage of nonlocality and is in stark contrast to the local p-Laplacian case.
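A hedged LaTeX sketch of the stated characterisation, with purely illustrative notation for the coefficients a_n and the associated nonlocal operators:

```latex
% Hedged sketch of the characterisation above; the notation (a_n for the
% coefficients, \mathcal{L}^{s,p}_{a} for the associated nonlocal operator)
% is illustrative, not necessarily that of the paper.
\[
  a_n \;\overset{*}{\rightharpoonup}\; a \ \text{ in } L^{\infty}
  \quad\Longleftrightarrow\quad
  \mathcal{L}^{s,p}_{a_n} \;\xrightarrow{\,H\,}\; \mathcal{L}^{s,p}_{a},
  \qquad \text{where }\;
  \mathcal{L}^{s,p}_{a}u(x) := \mathrm{p.v.}\int_{\mathbb{R}^{d}}
  a(x,y)\,\frac{|u(x)-u(y)|^{p-2}\bigl(u(x)-u(y)\bigr)}{|x-y|^{d+sp}}\,dy .
\]
```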
Evolvability is defined as the ability of a population to generate heritable variation to facilitate its adaptation to new environments or selection pressures. In this article, we consider evolvability as a phenotypic trait subject to evolution and discuss its implications in the adaptation of populations of asexual individuals. We explore the evolutionary dynamics of an actively proliferating population of individuals, subject to changes in their proliferative potential and their evolvability, through mathematical simulations of a stochastic individual-based model and its deterministic continuum counterpart. We find robust adaptive trajectories that rely on individuals with high evolvability rapidly exploring the phenotypic landscape and reaching the proliferative potential with the highest fitness. The strength of selection on the proliferative potential, and the cost associated with evolvability, can alter these trajectories such that, if both are sufficiently constraining, highly evolvable populations can become extinct in our individual-based model simulations. We explore the impact of this interaction at various scales, discussing its effects in undisturbed environments and also in disrupted contexts, such as cancer.
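As a toy illustration of a stochastic individual-based model in this spirit (not the paper's model), the sketch below tracks a proliferative potential p and an evolvability e per individual, where e sets the mutation scale of p and carries a fitness cost; all parameter values and the fitness form are assumptions.

```python
# Minimal discrete-time sketch of a stochastic individual-based model: each
# individual carries a proliferative potential p and an evolvability e; e sets
# the mutation step of p and incurs a fitness cost. Parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)

def simulate(n0=500, generations=200, cost=0.1, capacity=2000):
    p = rng.uniform(0.0, 0.5, n0)              # proliferative potential
    e = rng.uniform(0.0, 0.5, n0)              # evolvability (mutation scale)
    for _ in range(generations):
        fitness = np.clip(p - cost * e, 0.0, None)
        # Logistic-style regulation keeps the population near the carrying capacity.
        birth_prob = np.clip(fitness * (1.0 - len(p) / capacity), 0.0, 1.0)
        parents = rng.random(len(p)) < birth_prob
        # Offspring inherit traits with mutations; evolvability scales p-mutations.
        child_p = np.clip(p[parents] + rng.normal(0.0, 0.05 + e[parents]), 0.0, 1.0)
        child_e = np.clip(e[parents] + rng.normal(0.0, 0.02, parents.sum()), 0.0, 1.0)
        survivors = rng.random(len(p)) < 0.9    # constant survival probability
        p = np.concatenate([p[survivors], child_p])
        e = np.concatenate([e[survivors], child_e])
        if len(p) == 0:                          # extinction, as discussed above
            break
    return p, e

# usage: p_final, e_final = simulate()
```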
Evaluating expression of the Human epidermal growth factor receptor 2 (Her2) by visual examination of immunohistochemistry (IHC) on invasive breast cancer (BCa) is a key part of the diagnostic assessment of BCa due to its recognised importance as a predictive and prognostic marker in clinical practice. However, visual scoring of Her2 is subjective and consequently prone to inter-observer variability. Given the prognostic and therapeutic implications of Her2 scoring, a more objective method is required. In this paper, we report on a recent automated Her2 scoring contest, held in conjunction with the annual PathSoc meeting in Nottingham in June 2016, aimed at systematically comparing and advancing the state-of-the-art Artificial Intelligence (AI) based automated methods for Her2 scoring. The contest dataset comprised digitised whole slide images (WSI) of sections from 86 cases of invasive breast carcinoma stained with both Haematoxylin & Eosin (H&E) and IHC for Her2. The contesting algorithms automatically predicted scores of the IHC slides for an unseen subset of the dataset, and the predicted scores were compared with the 'ground truth' (a consensus score from at least two experts). We also report on a simple Man vs Machine contest for the scoring of Her2 and show that the automated methods could beat the pathology experts on this contest dataset. This paper presents a benchmark for comparing the performance of automated algorithms for the scoring of Her2. It also demonstrates the enormous potential of automated algorithms in assisting the pathologist with objective IHC scoring.
In this position paper we address Software Sustainability from the IN perspective, so that the Software Engineering (SE) community is aware of the need to contribute towards sustainable software companies, which need to adopt a holistic approach to sustainability that considers all its dimensions (human, economic and environmental). A series of important challenges to be addressed in the coming years is presented, so that advances on this subject across the SE communities involved can be harmonised and used to contribute more effectively to this field of great interest and impact on society.