Complex wounds usually involve partial or total loss of skin thickness and heal by secondary intention. They can be acute or chronic, featuring infection, ischemia, and tissue necrosis, and are often associated with systemic diseases. Research institutes around the globe report countless cases, making complex wounds a severe public health problem, since they demand considerable human resources (e.g., physicians and other health care professionals) and negatively impact patients' quality of life. This paper presents a new database for automatically categorizing complex wound tissue into five categories: non-wound area, granulation, fibrinoid tissue, dry necrosis, and hematoma. The images comprise different scenarios with complex wounds caused by pressure, vascular ulcers, diabetes, burns, and complications after surgical interventions. The dataset, called ComplexWoundDB, is unique because it provides pixel-level classifications for 27 images obtained in the wild, i.e., images collected at the patients' homes and labeled by four health professionals. Further experiments with distinct machine learning techniques highlight the challenges in addressing the problem of computer-aided complex wound tissue categorization. The manuscript sheds light on future directions in the area, with a detailed comparison with other databases widely used in the literature.
Small bodies in our Solar System are considered remnants of its early formation. Studying their physical and dynamic properties can provide insights into their evolution, stability, and origin. ESA's Rosetta mission successfully landed on and studied comet Churyumov-Gerasimenko (67P) for approximately two years. In this work, we analyze the surface and orbital dynamics of comet 67P in detail, using a suitable 3-D polyhedral shape model. We applied the polyhedron method to calculate dynamic surface characteristics, including geometric height, surface tilt, surface slopes, surface geopotential, surface acceleration, escape speed, equilibrium points, and zero-velocity curves. The results show that the gravitational potential is predominant on the comet's surface due to its slow rotation. The escape speed reaches its maximum value in the Hapi region (the comet's neck). The surface slopes were analyzed to predict possible regions of particle motion and accumulation. The results show that most regions of the comet's surface have low slopes. Furthermore, we analyzed the slopes under the effects of Third-Body gravitational and Solar Radiation Pressure perturbations. Our results showed that Third-Body perturbations do not significantly affect the global behavior of the slopes. Meanwhile, the Solar Radiation Pressure does not significantly affect particles across the surface of comet 67P with sizes $\gtrsim 10^{-3}$\,cm at apocenter and $\gtrsim 10^{-1}$\,cm at pericenter. We also identified four equilibrium points around comet 67P and one equilibrium point inside the body, where points $E_2$ and $E_5$ are linearly stable. In addition, we approximated the shape of comet 67P using the simplified Dipole Segment Model to study its dynamics, employing parameters derived from its 3-D polyhedral shape model. We found 12 families of planar symmetric periodic orbits around the body.
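As a rough illustration of the slope quantity analyzed in the abstract above, the sketch below computes the slope of a single surface facet as the angle between the inward surface normal and the local effective acceleration (gravity plus the centrifugal term). It is only a simplified stand-in: gravity is approximated by a point mass rather than the full polyhedron potential, and the facet geometry is a hypothetical example.

```python
import numpy as np

# Minimal sketch: slope of one facet as the angle between the inward surface
# normal and the local effective acceleration (gravity + centrifugal term).
# A full treatment evaluates the polyhedron gravity model; point-mass gravity
# is used here only for illustration.
G = 6.674e-11                        # m^3 kg^-1 s^-2
M = 9.982e12                         # kg, approximate mass of 67P
omega = 2 * np.pi / (12.4 * 3600)    # rad/s, rotation period ~12.4 h about z

def facet_slope(centroid, outward_normal):
    """Slope (deg) of a facet with given centroid (m) and outward normal."""
    r = np.asarray(centroid, dtype=float)
    n = np.asarray(outward_normal, dtype=float)
    n = n / np.linalg.norm(n)
    g = -G * M * r / np.linalg.norm(r) ** 3             # point-mass gravity
    centrifugal = omega ** 2 * np.array([r[0], r[1], 0.0])
    a_eff = g + centrifugal                             # effective acceleration
    cosang = np.dot(-n, a_eff) / np.linalg.norm(a_eff)
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

# Hypothetical facet ~2 km from the centre with a slightly tilted normal
print(facet_slope([2000.0, 0.0, 500.0], [0.9, 0.0, 0.45]))
```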
Researchers from São Paulo State University, Eindhoven University of Technology, and Linnaeus University developed HUMAP, a hierarchical dimensionality reduction technique built on UMAP principles. This method systematically constructs a hierarchy and preserves the mental map across different levels of detail, demonstrating competitive performance against existing hierarchical methods in both structure preservation and runtime on diverse datasets.
Texture, a significant visual attribute in images, has been extensively investigated across various image recognition applications. Convolutional Neural Networks (CNNs), which have been successful in many computer vision tasks, are currently among the best texture analysis approaches. On the other hand, Vision Transformers (ViTs) have been surpassing the performance of CNNs on tasks such as object recognition, causing a paradigm shift in the field. However, ViTs have so far not been scrutinized for texture recognition, hindering a proper appreciation of their potential in this specific setting. For this reason, this work explores various pre-trained ViT architectures when transferred to tasks that rely on textures. We review 21 different ViT variants and perform an extensive evaluation and comparison with CNNs and hand-engineered models on several tasks, such as assessing robustness to changes in texture rotation, scale, and illumination, and distinguishing color textures, material textures, and texture attributes. The goal is to understand the potential and differences among these models when directly applied to texture recognition, using pre-trained ViTs primarily for feature extraction and employing linear classifiers for evaluation. We also evaluate their efficiency, which is one of the main drawbacks in contrast to other methods. Our results show that ViTs generally outperform both CNNs and hand-engineered models, especially when using stronger pre-training and tasks involving in-the-wild textures (images from the internet). We highlight the following promising models: ViT-B with DINO pre-training, BeiTv2, and the Swin architecture, as well as the EfficientFormer as a low-cost alternative. In terms of efficiency, although they have more GFLOPs and parameters, ViT-B and BeiTv2 can achieve lower feature extraction times on GPUs than ResNet50.
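A minimal sketch of the evaluation protocol described in the abstract above: a frozen, pre-trained ViT used purely as a feature extractor, with a linear classifier fitted on top. The model name, classifier choice, and the random stand-in data are illustrative assumptions, not the paper's exact setup.

```python
import timm
import torch
from sklearn.linear_model import LogisticRegression

# Frozen pre-trained ViT as a feature extractor (num_classes=0 returns pooled
# embeddings); a linear classifier is trained on the extracted features.
model = timm.create_model("vit_base_patch16_224", pretrained=True, num_classes=0)
model.eval()

@torch.no_grad()
def extract_features(images):               # images: (N, 3, 224, 224), preprocessed
    return model(images).cpu().numpy()      # pooled embeddings, shape (N, D)

# Random tensors stand in for preprocessed texture images from a real dataset
X_train, y_train = torch.randn(16, 3, 224, 224), [i % 2 for i in range(16)]
X_test, y_test = torch.randn(8, 3, 224, 224), [i % 2 for i in range(8)]

clf = LogisticRegression(max_iter=2000).fit(extract_features(X_train), y_train)
print("linear-probe accuracy:", clf.score(extract_features(X_test), y_test))
```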
The ability to communicate with robots using natural language is a significant step forward in human-robot interaction. However, accurately translating verbal commands into physical actions still presents challenges. Current approaches require large datasets to train the models and are limited to robots with a maximum of 6 degrees of freedom. To address these issues, we propose a framework called InstructRobot that maps natural language instructions into robot motion without requiring the construction of large datasets or prior knowledge of the robot's kinematics model. InstructRobot employs a reinforcement learning algorithm that enables joint learning of language representations and an inverse kinematics model, simplifying the entire learning process. The proposed framework is validated using a complex robot with 26 revolute joints in object manipulation tasks, demonstrating its robustness and adaptability in realistic environments. The framework can be applied to any task or domain where datasets are scarce and difficult to create, making it an intuitive and accessible solution to the challenges of training robots using linguistic communication. Open source code for the InstructRobot framework and experiments can be accessed at this https URL.
This work presents FreeSVC, a promising multilingual singing voice conversion approach that leverages an enhanced VITS model with Speaker-invariant Clustering (SPIN) for better content representation and the State-of-the-Art (SOTA) speaker encoder ECAPA2. FreeSVC incorporates trainable language embeddings to handle multiple languages and employs an advanced speaker encoder to disentangle speaker characteristics from linguistic content. Designed for zero-shot learning, FreeSVC enables cross-lingual singing voice conversion without extensive language-specific training. We demonstrate that a multilingual content extractor is crucial for optimal cross-language conversion. Our source code and models are publicly available.
Asteroids with companions constitute an excellent sample for studying the collisional and dynamical evolution of minor planets. The currently known binary population was discovered by different complementary techniques that produce, for the moment, a strongly biased distribution, especially in the range of intermediate asteroid sizes (approximately 20 to 100 km), where both mutual photometric events and high-resolution adaptive optics imaging have low efficiency. A totally independent technique of binary asteroid discovery, based on astrometry, can help to reveal new binary systems and populate a range of sizes and separations that remains nearly unexplored. In this work, we describe a dedicated period-detection method and its results for the Gaia DR3 data set. This method looks for the presence of a periodic signature in the orbit post-fit residuals. After conservative filtering and validation based on statistical and physical criteria, we are able to present a first sample of astrometric binary candidates, to be confirmed by other observation techniques such as photometric light curves and stellar occultations.
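The sketch below illustrates only the generic idea behind the detection method in the abstract above, namely searching the post-fit astrometric residuals for a periodic signature. The paper uses its own dedicated pipeline; a Lomb-Scargle periodogram on synthetic residuals is shown here purely as a stand-in.

```python
import numpy as np
from astropy.timeseries import LombScargle

# Synthetic post-fit residuals containing a hypothetical periodic signature
rng = np.random.default_rng(0)
t = np.sort(rng.uniform(0.0, 1000.0, 120))            # observation epochs (days)
true_period = 27.3                                    # hypothetical period (days)
residuals = 5.0 * np.sin(2 * np.pi * t / true_period) + rng.normal(0.0, 1.0, t.size)

# Search for the strongest periodic component in the residuals
frequency, power = LombScargle(t, residuals).autopower()
best_period = 1.0 / frequency[np.argmax(power)]
print(f"strongest periodic signature near {best_period:.1f} days")
```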
The Aharonov-Bohm (AB) effect is considered in the context of the Generalized Electrodynamics (GE) of Podolsky and Bopp. GE is the only extension of Maxwell electrodynamics that is locally U(1)-gauge invariant, admits linear field equations, and contains higher-order derivatives of the vector potential. GE admits both massless and massive modes for the photon. We recover the ordinary quantum phase shift of the AB effect, derived in the context of Maxwell electrodynamics, for the massless mode of the photon in GE. The massive mode induces a correction factor to the AB phase shift that depends on the photon mass. We study both the magnetic AB effect and its electric counterpart. In principle, accurate experimental observations of the AB phase shift could be used to constrain the GE photon mass.
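For reference, the ordinary (massless-mode) magnetic AB phase shift that the abstract above says is recovered has the standard Maxwell-theory form (SI units) $\Delta\varphi_{\mathrm{AB}} = \frac{q}{\hbar}\oint_{\mathcal{C}} \mathbf{A}\cdot d\boldsymbol{\ell} = \frac{q\,\Phi_B}{\hbar}$, where $q$ is the charge of the interfering particle and $\Phi_B$ the magnetic flux enclosed by the path $\mathcal{C}$. The massive-mode correction factor depends on the Podolsky mass parameter and is not reproduced here.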
The origin of Mercury remains poorly understood compared to the other rocky planets of the Solar System. One of the most relevant constraints that any formation model has to fulfill refers to its internal structure, with a predominant iron core covered by a thin silicate layer. This led to the idea that it could be the product of mantle stripping caused by a giant impact. Previous studies in this line focused on binary collisions involving bodies of very different masses. However, such collisions are actually rare in N-body simulations of terrestrial planet formation, whereas collisions involving similar-mass bodies appear to be more frequent. Here, we perform smoothed particle hydrodynamics simulations to investigate the conditions under which collisions of similar-mass bodies are able to form a Mercury-like planet. Our results show that such collisions can fulfill the necessary constraints in terms of mass ($0.055\,M_\oplus$) and composition (30/70 silicate-to-iron mass ratio) to within less than 5%, as long as the impact angles and velocities are properly adjusted according to well-established scaling laws.
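A minimal sketch of the acceptance criterion stated in the abstract above: the largest post-impact remnant should match Mercury's mass (about 0.055 Earth masses) and its roughly 30/70 silicate-to-iron mass ratio to within 5%. The remnant values below are illustrative placeholders, not simulation output.

```python
# Check whether a post-impact remnant satisfies the Mercury-like constraints
M_EARTH = 5.972e24  # kg

def is_mercury_like(remnant_mass, silicate_mass, iron_mass, tol=0.05):
    mass_ok = abs(remnant_mass / (0.055 * M_EARTH) - 1.0) < tol
    ratio_ok = abs((silicate_mass / iron_mass) / (0.30 / 0.70) - 1.0) < tol
    return mass_ok and ratio_ok

# Hypothetical remnant: ~0.056 Earth masses, ~30/70 silicate-to-iron split
print(is_mercury_like(0.056 * M_EARTH, 0.100e24, 0.234e24))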
We derive a two-dimensional symplectic map for particle motion at the plasma edge by modeling the electrostatic potential as a superposition of integer spatial harmonics with a relative phase shift, and then reduce it to a two-wave model to study how transport depends on the perturbation amplitudes, relative phase, and choice of spatial modes. Using particle transmissivity as a confinement criterion, identical-mode pairs exhibit phase-controlled behavior: anti-phase waves produce destructive interference and strong confinement, while in-phase waves add constructively and drive chaotic transport. Mode-mismatched pairs produce richer phase-space structure, with higher-order resonances and sticky regions, and the transmissivity boundaries become geometrically complex. Box-counting dimensions quantify this contrast: smooth boundaries with integer dimension for identical modes versus non-integer, fractal-like dimensions for distinct modes, demonstrating that the phase and spectral content of the waves jointly determine whether interference suppresses or promotes transport.
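The sketch below is an illustrative stand-in for the two-wave map discussed in the abstract above; the paper derives its own map from the plasma-edge potential, whereas this generalized standard map only mimics the same ingredients (two amplitudes, two integer mode numbers, and a relative phase) and a transmissivity-style escape diagnostic.

```python
import numpy as np

def two_wave_map(x, p, K1, K2, m1, m2, phi):
    # Kick-drift form keeps the map symplectic (area-preserving)
    p_new = p + K1 * np.sin(m1 * x) + K2 * np.sin(m2 * x + phi)
    x_new = (x + p_new) % (2 * np.pi)
    return x_new, p_new

def transmissivity(K1, K2, m1, m2, phi, n_particles=2000, n_iter=5000, p_exit=2.0):
    # Fraction of particles whose momentum ever exceeds an exit threshold
    rng = np.random.default_rng(1)
    x = rng.uniform(0.0, 2 * np.pi, n_particles)
    p = np.zeros(n_particles)
    escaped = np.zeros(n_particles, dtype=bool)
    for _ in range(n_iter):
        x, p = two_wave_map(x, p, K1, K2, m1, m2, phi)
        escaped |= np.abs(p) > p_exit
    return escaped.mean()

# Identical modes: in-phase (constructive) versus anti-phase (destructive) waves
print(transmissivity(0.6, 0.6, 1, 1, 0.0), transmissivity(0.6, 0.6, 1, 1, np.pi))
```

With identical modes and a phase shift of pi the two kicks cancel exactly and nothing escapes, while in phase they add into a single strong perturbation that drives chaotic transport, mirroring the phase-controlled behavior described above.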
The logics of formal inconsistency (LFIs, for short) are paraconsistent logics (that is, logics containing contradictory but non-trivial theories) having a consistency connective which allows one to recover the ex falso quodlibet principle in a controlled way. The aim of this paper is to consider a novel semantical approach to first-order LFIs based on Tarskian structures defined over swap structures, a special class of multialgebras. The proposed semantical framework generalizes previous approaches to quantified LFIs presented in the literature. The case of QmbC, the simplest quantified LFI expanding classical logic, is analyzed in detail. An axiomatic extension of QmbC called QLFI1o is also studied, which is equivalent to the quantified version of da Costa and D'Ottaviano's 3-valued logic J3. The semantical structures for this logic turn out to be Tarskian structures based on twist structures. The expansion of QmbC and QLFI1o with a standard equality predicate is also considered.
Similarities in the non-mass-dependent isotopic composition of refractory elements with the bulk silicate Earth suggest that both the Earth and the Moon formed from the same material reservoir. On the other hand, the Moon's volatile depletion and the isotopic composition of its moderately volatile elements point to a global devolatilization process, most likely during a magma ocean phase of the Moon. Here, we investigate the devolatilization of the molten Moon due to tidally assisted hydrodynamic escape, with a focus on the dynamics of the evaporated gas. Unlike the 1D steady-state approach of Charnoz et al. (2021), we use 2D time-dependent hydrodynamic simulations carried out with the FARGOCA code, modified to take into account the magma ocean as a gas source. Near the Earth's Roche limit, where the proto-Moon likely formed, evaporated gases from the lunar magma ocean form a circum-Earth disk of volatiles, with less than 30% of the material being re-accreted by the Moon. We find that the measured depletion of K and Na on the Moon can be achieved if the lunar magma ocean had a surface temperature of about 1800-2000 K. After about 1000 years, a thermal boundary layer or a flotation crust forms a lid that inhibits volatile escape. Mapping the volatile velocity field reveals varying trends in the longitudes of volatile re-accretion on the Moon's surface: material is predominantly re-accreted on the trailing side when the Moon-Earth distance exceeds 3.5 Earth radii, suggesting a dichotomy in volatile abundances between the leading and trailing sides of the Moon. This dichotomy may provide insights into the tidal conditions of the early molten Earth. In conclusion, tidally driven atmospheric escape effectively devolatilizes the Moon, matching the measured abundances of Na and K on timescales compatible with the formation of a thermal boundary layer or an anorthite flotation crust.
Identifying anomalies has become one of the primary strategies in security and protection procedures for computer networks. In this context, machine learning-based methods emerge as an elegant solution to identify such scenarios and to discard irrelevant information, so that identification time can be reduced and accuracy possibly improved. This paper proposes a novel feature selection approach called Finite Element Machines for Feature Selection (FEMa-FS), which uses the framework of finite elements to identify the most relevant information in a given dataset. Although FEMa-FS can be applied to any application domain, it has been evaluated in the context of anomaly detection in computer networks. The outcomes over two datasets showed promising results.
Molecular dynamics simulations have been used in different scientific fields to investigate a broad range of physical systems. However, the accuracy of the calculations depends on the model used to describe the atomic interactions. In particular, ab initio molecular dynamics (AIMD) has the accuracy of density functional theory (DFT), and thus is limited to small systems and relatively short simulation times. In this scenario, Neural Network Force Fields (NNFFs) play an important role, since they provide a way to circumvent these limitations. In this work we investigate NNFFs designed at the level of DFT to describe liquid water, focusing on the size and quality of the training data set. We show that structural properties depend less on the size of the training data set than dynamical ones (such as the diffusion coefficient), and that good sampling (i.e., how reference data are selected for the training process) allows a small training set to reach good precision.
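As context for the dynamical property singled out in the abstract above, the sketch below shows how a diffusion coefficient is typically extracted from a trajectory: fit the long-time slope of the mean-squared displacement and use the Einstein relation (D = slope/6 in 3D). The random-walk trajectory is only a placeholder for an AIMD or NNFF trajectory.

```python
import numpy as np

# Placeholder trajectory: a 3D random walk standing in for unwrapped coordinates
rng = np.random.default_rng(0)
dt = 0.5e-3                     # ps per step (illustrative timestep)
n_steps, n_atoms = 20000, 64
steps = rng.normal(0.0, 0.01, size=(n_steps, n_atoms, 3))   # displacements in Angstrom
positions = np.cumsum(steps, axis=0)                         # unwrapped coordinates

# Mean-squared displacement as a function of lag time
lags = np.arange(1, 2000, 50)
msd = np.array([np.mean(np.sum((positions[lag:] - positions[:-lag]) ** 2, axis=-1))
                for lag in lags])

# Einstein relation: D = (1/6) * d(MSD)/dt in three dimensions
slope = np.polyfit(lags * dt, msd, 1)[0]     # Angstrom^2 / ps
D = slope / 6.0
print(f"D = {D:.4f} A^2/ps  (~{D * 1e-4:.2e} cm^2/s)")
```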
Quantum computing has existed in the theoretical realm for several decades. Recently, quantum computing has re-emerged as a promising technology to solve problems that a classical computer could take hundreds of years to solve. However, there are challenges and opportunities for academics and practitioners regarding software engineering practices for testing and debugging quantum programs. This paper presents a roadmap for addressing these challenges, pointing out the existing gaps in the literature and suggesting research directions. We discuss the limitations caused by noise, the no-cloning theorem, the lack of a standard architecture for quantum computers, among others. Regarding testing, we highlight gaps and opportunities related to transpilation, mutation analysis, input states with hybrid interfaces, program analysis, and coverage. For debugging, we present the current strategies, including classical techniques applied to quantum programs, quantum-specific assertions, and quantum-related bug patterns. We introduce a conceptual model to illustrate concepts regarding the testing and debugging of quantum programs and the relationship between them. Those concepts are used to identify and discuss research challenges to cope with quantum programs through 2030, focusing on the interfaces between classical and quantum computing and on creating testing and debugging techniques that take advantage of the unique quantum computing characteristics.
We explore the concept of negative pressure and its relevance in a variety of physical contexts: the expansion of the universe, mixture theory, cavitation, and the capillary effect in plants. Using thermodynamic arguments, we discuss the intricate connection between negative pressure and negative thermal expansion. We highlight the fact that metastable states and competing phases are often associated with the emergence of negative pressure. We also propose a new link between the effective Grüneisen parameter and nucleation theory.
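For reference, one common thermodynamic form of the effective Grüneisen parameter alluded to in the abstract above is $\Gamma_{\mathrm{eff}} = \frac{\alpha_V\, V_m}{\kappa_T\, c_V}$, with volumetric thermal expansion $\alpha_V$, molar volume $V_m$, isothermal compressibility $\kappa_T$, and heat capacity $c_V$; a negative $\alpha_V$ (negative thermal expansion) then directly yields $\Gamma_{\mathrm{eff}} < 0$, which is the bridge to negative pressure discussed above. The paper's proposed link to nucleation theory is not reproduced here.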
In the past decades, fuzzy logic has played an essential role in many research areas. Alongside it, graph-based pattern recognition has been shown to be of great importance due to its flexibility in partitioning the feature space using background from graph theory. Some years ago, a framework for supervised, semi-supervised, and unsupervised learning named Optimum-Path Forest (OPF) was proposed, with competitive results in several applications and a low computational burden. In this paper, we propose the Fuzzy Optimum-Path Forest, an improved version of the standard OPF classifier that learns the samples' memberships in an unsupervised fashion, which are then incorporated during supervised training. Such information is used to identify the most relevant training samples, thus improving the classification step. Experiments conducted over twelve public datasets highlight the robustness of the proposed approach, which behaves similarly to standard OPF in worst-case scenarios.
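The sketch below is a conceptual stand-in for the idea in the abstract above: estimate each training sample's membership in an unsupervised way and use it to weight supervised training. A kernel-density score replaces the OPF-based fuzzy membership purely for illustration; the actual Fuzzy OPF mechanics are not reproduced here.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.neighbors import KernelDensity
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=400, n_features=10, random_state=0)

# Unsupervised "membership": normalized log-density of each sample
log_density = KernelDensity(bandwidth=1.0).fit(X).score_samples(X)
membership = (log_density - log_density.min()) / (np.ptp(log_density) + 1e-12)

# Memberships weight the supervised training step (typical samples count more)
clf = LogisticRegression(max_iter=1000)
clf.fit(X, y, sample_weight=membership)
print("train accuracy:", clf.score(X, y))
```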
In recent years, deep learning techniques have been used to develop sign language recognition systems, potentially serving as a communication tool for millions of hearing-impaired individuals worldwide. However, there are inherent challenges in creating such systems. Firstly, it is important to consider as many linguistic parameters as possible in gesture execution to avoid ambiguity between words. Moreover, to facilitate the real-world adoption of the created solution, it is essential to ensure that the chosen technology is realistic, avoiding expensive, intrusive, or low-mobility sensors, as well as very complex deep learning architectures that impose high computational requirements. Based on this, our work aims to propose an efficient sign language recognition system that utilizes low-cost sensors and techniques. To this end, an object detection model was trained specifically for detecting the interpreter's face and hands, ensuring focus on the most relevant regions of the image and generating inputs with higher semantic value for the classifier. Additionally, we introduced a novel approach to obtain features representing hand location and movement by leveraging spatial information derived from centroid positions of bounding boxes, thereby enhancing sign discrimination. The results demonstrate the efficiency of our handcrafted features, increasing accuracy by 7.96% on the AUTSL dataset, while adding fewer than 700 thousand parameters and incurring less than 10 milliseconds of additional inference time. These findings highlight the potential of our technique to strike a favorable balance between computational cost and accuracy, making it a promising approach for practical sign language recognition applications.
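A minimal sketch of the kind of handcrafted feature described in the abstract above: hand location and movement derived from the centroids of detected bounding boxes. The box format, normalization, and feature layout are illustrative assumptions rather than the paper's exact definition.

```python
import numpy as np

def centroid(box):
    # box given as (x1, y1, x2, y2) in pixels
    x1, y1, x2, y2 = box
    return np.array([(x1 + x2) / 2.0, (y1 + y2) / 2.0])

def movement_features(hand_boxes, img_w, img_h):
    """hand_boxes: per-frame (x1, y1, x2, y2) boxes for one hand."""
    c = np.array([centroid(b) for b in hand_boxes]) / [img_w, img_h]  # normalized positions
    v = np.diff(c, axis=0, prepend=c[:1])                             # frame-to-frame motion
    return np.concatenate([c, v], axis=1)                             # (T, 4) per-frame features

# Hypothetical detections of one hand over three consecutive frames
frames = [(100, 120, 160, 200), (110, 118, 170, 204), (130, 115, 190, 210)]
print(movement_features(frames, img_w=512, img_h=512))
```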
In networked systems, the interplay between the dynamics of individual subsystems and their network interactions has been found to generate multistability in various contexts. Despite its ubiquity, the specific mechanisms and ingredients that give rise to multistability from such interplay remain poorly understood. In a network of coupled excitable units, we show that this multistability-generating interplay occurs through a competition between the units' transient dynamics and their coupling. Specifically, the diffusive coupling between the units reinjects them into the excitability region of their individual state space and effectively traps them there. We show that this trapping mechanism leads to the coexistence of multiple types of oscillations: periodic, quasiperiodic, and even chaotic, although the units do not oscillate in isolation. Interestingly, we show that the attractors emerge through different types of bifurcations - in particular, the periodic attractors emerge through either saddle-node bifurcations of limit cycles or homoclinic bifurcations - but in all cases the reinjection mechanism is present.
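The sketch below sets up the basic scenario discussed in the abstract above: two diffusively coupled excitable units, each of which is non-oscillatory in isolation. FitzHugh-Nagumo is used here as a generic excitable unit and the parameters and initial conditions are illustrative; the paper's specific model and bifurcation analysis are not reproduced.

```python
import numpy as np
from scipy.integrate import solve_ivp

a, b, eps = 1.05, 0.5, 0.08   # each isolated FitzHugh-Nagumo unit is excitable
d = 0.9                        # diffusive coupling strength

def rhs(t, y):
    v1, w1, v2, w2 = y
    dv1 = v1 - v1**3 / 3 - w1 + d * (v2 - v1)   # diffusive coupling in the fast variable
    dw1 = eps * (v1 + a - b * w1)
    dv2 = v2 - v2**3 / 3 - w2 + d * (v1 - v2)
    dw2 = eps * (v2 + a - b * w2)
    return [dv1, dw1, dv2, dw2]

# Integrate from asymmetric initial conditions and inspect the late-time behavior:
# a large peak-to-peak amplitude would indicate oscillations sustained by the
# coupling, which neither unit produces on its own.
sol = solve_ivp(rhs, (0, 2000), [1.5, 0.0, -1.2, 0.5], max_step=0.5)
print("late-time amplitude of v1:", np.ptp(sol.y[0, -2000:]))
```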
Machine Learning (ML) algorithms have been increasingly applied to problems from several different areas. Despite their growing popularity, their predictive performance is usually affected by the values assigned to their hyperparameters (HPs). As a consequence, researchers and practitioners face the challenge of how to set these values. Many users have limited knowledge about ML algorithms and the effect of their HP values and, therefore, do not take advantage of suitable settings. They usually define the HP values by trial and error, which is subjective, not guaranteed to find good values, and dependent on user experience. Tuning techniques search for HP values able to maximize the predictive performance of induced models for a given dataset, but have the drawback of a high computational cost. Thus, practitioners often fall back on default values suggested by the algorithm developer or by tools implementing the algorithm. Although default values usually result in models with acceptable predictive performance, different implementations of the same algorithm can suggest distinct default values. To strike a balance between tuning and using default values, we propose a strategy to generate new optimized default values. Our approach is grounded on a small set of optimized values able to obtain predictive performance better than the default settings provided by popular tools. After performing a large experiment and a careful analysis of the results, we concluded that our approach delivers better default values. Besides, it leads to solutions competitive with tuned values while being easier to use and having a lower computational cost. We also extracted simple rules to guide practitioners in deciding whether to use our new methodology or an HP tuning approach.
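A conceptual sketch of the shared-default strategy described in the abstract above: pick the single hyperparameter setting that performs best on average across many datasets and adopt it as the new default. The candidate grid, datasets, learner, and evaluation protocol below are placeholders, not the paper's actual experimental setup.

```python
import numpy as np
from itertools import product
from sklearn.datasets import load_iris, load_wine, load_breast_cancer
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

# A few small benchmark datasets stand in for a large dataset collection
datasets = [load_iris(return_X_y=True), load_wine(return_X_y=True),
            load_breast_cancer(return_X_y=True)]

# Candidate hyperparameter settings (illustrative grid for an SVM)
candidates = [{"C": C, "gamma": g}
              for C, g in product([0.1, 1, 10, 100], [1e-3, 1e-2, 1e-1, 1])]

def mean_score(params):
    # Average cross-validated accuracy of one setting across all datasets
    return np.mean([cross_val_score(SVC(**params), X, y, cv=3).mean()
                    for X, y in datasets])

shared_default = max(candidates, key=mean_score)
print("new shared default:", shared_default)
```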