Ecole Centrale de Nantes
The increase of surface electromyography (sEMG) root-mean-square (RMS) amplitude is frequently used to determine fatigue. However, as RMS is also influenced by muscle force, its effective use as an indicator of fatigue is mainly limited to isometric, constant-force tasks. This research develops a simple method to remove the effect of muscle force and thereby estimate, via RMS, the sEMG amplitude response attributable exclusively to fatigue. The experiment was carried out on the biceps brachii of 15 subjects (7 males, 8 females) during sustained static maximum voluntary contractions (sMVC). Results show that the sEMG RMS response to fatigue increases to 21.27% while muscle force decreases to 50% MVC, which implies that more and more extra effort is needed as muscle fatigue intensifies. The RMS response exclusive to fatigue is therefore a promising indicator of muscle fatigue.
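For context, RMS is the square root of the mean squared signal over a window. The sketch below computes a sliding-window RMS in Python; the window length and sampling rate are illustrative assumptions, since the abstract does not specify them.

```python
import numpy as np

def semg_rms(signal, fs, window_ms=250):
    """Sliding-window RMS of an sEMG signal (window length and sampling
    rate are illustrative; the abstract does not specify them)."""
    win = int(fs * window_ms / 1000)
    # Square, average over non-overlapping windows, take the root.
    n = len(signal) // win * win
    frames = signal[:n].reshape(-1, win)
    return np.sqrt((frames ** 2).mean(axis=1))

# Example: synthetic 1 kHz sEMG-like noise
fs = 1000
x = np.random.randn(10 * fs)
print(semg_rms(x, fs)[:5])
```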
Vehicle-to-everything (V2X) collaborative perception has emerged as a promising solution to address the limitations of single-vehicle perception systems. However, existing V2X datasets are limited in scope, diversity, and quality. To address these gaps, we present Mixed Signals, a comprehensive V2X dataset featuring 45.1k point clouds and 240.6k bounding boxes collected from three connected autonomous vehicles (CAVs) equipped with two different configurations of LiDAR sensors, plus a roadside unit with dual LiDARs. Our dataset provides point clouds and bounding box annotations across 10 classes, ensuring reliable data for perception training. We provide detailed statistical analysis on the quality of our dataset and extensively benchmark existing V2X methods on it. The Mixed Signals dataset is ready-to-use, with precise alignment and consistent annotations across time and viewpoints. Dataset website is available at this https URL.
In autonomous robotics, a significant challenge is devising robust solutions for Active Collaborative SLAM (AC-SLAM), in which multiple robots cooperatively explore and map an unknown environment by intelligently coordinating their movements and sensor data acquisition. In this article, we present an efficient visual AC-SLAM method using aerial and ground robots for environment exploration and mapping. We propose an efficient frontier-filtering method that takes into account the intersection-over-union (IoU) of the shared map frontiers and reduces the set of frontiers assigned to each robot. Additionally, we present an approach that guides robots to previously visited goal positions to promote loop closure and reduce SLAM uncertainty. The proposed method is implemented in ROS and evaluated through simulations on publicly available datasets and against similar methods, achieving a cumulative average increase of 59% in area coverage.
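A hedged sketch of IoU-based frontier filtering on a shared occupancy grid follows; the boolean-mask representation and the threshold are assumptions for illustration, not the paper's exact rule.

```python
import numpy as np

def frontier_iou(mask_a, mask_b):
    """Intersection-over-union of two frontier masks on a shared grid."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return inter / union if union else 0.0

def filter_frontiers(own_frontiers, other_frontiers, threshold=0.5):
    """Keep only this robot's frontiers that do not overlap (IoU above
    threshold) a frontier already claimed by another robot."""
    return [f for f in own_frontiers
            if all(frontier_iou(f, g) < threshold for g in other_frontiers)]

# Toy 10x10 grid: one overlapping frontier region and one distinct region
a = np.zeros((10, 10), bool); a[2:6, 2:6] = True   # other robot's frontier
b = np.zeros((10, 10), bool); b[3:6, 3:6] = True   # overlaps a (IoU ~0.56)
c = np.zeros((10, 10), bool); c[7:9, 7:9] = True   # distinct region
print(len(filter_frontiers([b, c], [a])))  # 1: only the distinct frontier survives
```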
High-speed machining is now commonly used to produce hardened-material parts with complex shapes, such as dies and molds. In such parts, tool paths generated for a bottom machining feature with the conventional parallel-plane strategy induce many feed-rate reductions, especially when the boundaries of the feature are highly curved and not parallel. Several machining experiments on hardened material lead to the conclusion that a tool path implying stable cutting conditions might guarantee better part surface integrity. To ensure this stability, the machined shape must be decomposed when conventional strategies are not suitable. In this paper, an experimental approach based on high-speed performance simulation is conducted on a master bottom machining feature in order to highlight the influence of curvature on a suitable decomposition of the machining area. The decomposition is achieved through the construction of intermediate curves between the closed boundaries of the feature. These intermediate curves are used as guidance curves for tool-path generation with an alternative machining strategy called the "guidance curve strategy". For the construction of the intermediate curves, key parameters reflecting the influence of their proximity to each closed boundary and of the latter's curvature are introduced. Based on the results, a four-step method for defining guidance curves is proposed.
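A minimal stand-in for the intermediate-curve construction is a point-wise blend of the two closed boundaries; the paper's actual construction weights proximity and curvature, which this simplified sketch ignores.

```python
import numpy as np

def intermediate_curve(inner, outer, t):
    """Point-wise blend of two closed boundary polylines (a simplified
    stand-in for the paper's proximity/curvature-weighted construction).
    inner, outer: (N, 2) arrays sampled with matching parameterization;
    t in [0, 1] controls proximity to each boundary."""
    return (1.0 - t) * inner + t * outer

# Toy example: blend a small circle toward a larger ellipse
theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
inner = np.column_stack([np.cos(theta), np.sin(theta)])
outer = np.column_stack([3 * np.cos(theta), 2 * np.sin(theta)])
mid = intermediate_curve(inner, outer, 0.5)  # candidate guidance curve
```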
Home Energy Management Systems (HEMSs) help households tailor their electricity usage based on power-system signals such as energy prices. This technology helps reduce energy bills and offers greater demand-side flexibility that supports power-system stability. However, residents who lack a technical background may find it difficult to use HEMSs effectively, because HEMSs require well-formatted parameterization that reflects the characteristics of the energy resources, houses, and users' needs. Recently, Large Language Models (LLMs) have demonstrated an outstanding ability in language understanding. Motivated by this, we propose an LLM-based interface that interacts with users to understand and parameterize their "badly-formatted answers", and then outputs well-formatted parameters to implement an HEMS. We further use the Reason and Act (ReAct) method and few-shot prompting to enhance LLM performance. Evaluating the interface requires multiple user-LLM interactions. To avoid the effort of recruiting volunteer users and to reduce evaluation time, we additionally propose a method that uses another LLM to simulate users with varying expertise, ranging from knowledgeable to non-technical. In a comprehensive evaluation, the proposed LLM-based HEMS interface achieves an average parameter-retrieval accuracy of 88%, outperforming benchmark models without ReAct and/or few-shot prompting.
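A minimal sketch of the prompting side follows, combining few-shot examples with ReAct-style Thought/Action steps. The call_llm callable, the example dialogue, and the parameter schema are hypothetical stand-ins, not the paper's actual prompts.

```python
# Hedged sketch of a few-shot, ReAct-style prompt for turning a resident's
# free-form answer into well-formatted HEMS parameters. `call_llm` is a
# hypothetical stand-in for any chat-completion API.
FEW_SHOT = """You configure a Home Energy Management System.
Think step by step (Thought), ask a follow-up if needed (Action: ask),
or emit parameters (Action: finish).

User: my EV needs to be full by 7 in the morning, battery is 60 kWh
Thought: The user gave a deadline and capacity but no current charge.
Action: ask("What is the EV's current state of charge?")

User: it's about half full
Thought: All fields known: deadline 07:00, capacity 60 kWh, SoC 50%.
Action: finish({"ev_deadline": "07:00", "ev_capacity_kwh": 60, "ev_soc": 0.5})
"""

def parameterize(user_answer, call_llm):
    """Append the user's badly-formatted answer and let the LLM produce
    the next Thought/Action step."""
    prompt = FEW_SHOT + f"\nUser: {user_answer}\nThought:"
    return call_llm(prompt)
```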
Motivated by the need for efficient reachability analysis of inevitably complex large-scale biological models, this paper is dedicated to a novel approach, PermReach, for the reachability problem in our new framework, Asynchronous Binary Automata Networks (ABAN). ABAN is an expressive modeling framework that captures all the dynamical behaviors of Asynchronous Boolean Networks. Compared to Boolean Networks (BNs), ABAN gives a finer description of state transitions (from one local state to another, instead of symmetric Boolean functions). To analyze reachability properties on large-scale models (like the ones from systems biology), previous work introduced an efficient abstraction technique called the Local Causality Graph (LCG). However, this technique may not be conclusive. Our contribution is to extend these results by tackling the complex intractable cases via a heuristic technique. To validate our method, tests were conducted on large biological networks, showing that our method is more conclusive than existing ones.
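To make the setting concrete, here is a toy exhaustive reachability check over a hypothetical three-automaton network with ABAN-like guarded local transitions; this brute-force baseline is exactly what LCG-based abstraction and the PermReach heuristic aim to avoid on large models.

```python
from collections import deque

# Toy network: each automaton has Boolean local states {0, 1}.
# A transition changes one automaton's state and is guarded by required
# local states of other automata (hypothetical example).
# transitions: (automaton, from_state, to_state, {other: required_state})
transitions = [
    ("a", 0, 1, {"b": 1}),
    ("b", 0, 1, {}),
    ("c", 0, 1, {"a": 1, "b": 1}),
]

def reachable(init, target_automaton, target_state):
    """Exhaustive asynchronous reachability check (brute-force baseline,
    not the LCG/PermReach heuristic described in the paper)."""
    names = ("a", "b", "c")
    frontier, seen = deque([init]), {init}
    while frontier:
        state = dict(zip(names, frontier.popleft()))
        if state[target_automaton] == target_state:
            return True
        for auto, src, dst, guard in transitions:
            if state[auto] == src and all(state[k] == v for k, v in guard.items()):
                nxt = dict(state); nxt[auto] = dst
                key = tuple(nxt[k] for k in names)
                if key not in seen:
                    seen.add(key); frontier.append(key)
    return False

print(reachable((0, 0, 0), "c", 1))  # True: b fires, then a, then c
```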
Segmentation of axon and myelin from microscopy images of the nervous system provides useful quantitative information about the tissue microstructure, such as axon density and myelin thickness. This could be used, for instance, to document cell morphometry across species, or to validate novel non-invasive quantitative magnetic resonance imaging techniques. Most currently-available segmentation algorithms are based on standard image processing and usually require multiple processing steps and/or parameter tuning by the user to adapt to different modalities. Moreover, only a few methods are publicly available. We introduce AxonDeepSeg, an open-source software that performs axon and myelin segmentation of microscopy images using deep learning. AxonDeepSeg features: (i) a convolutional neural network architecture; (ii) an easy training procedure to generate new models based on manually-labelled data; and (iii) two ready-to-use models trained on scanning electron microscopy (SEM) and transmission electron microscopy (TEM) data. Results show high pixel-wise accuracy across various species: 85% on rat SEM, 81% on human SEM, 95% on mice TEM and 84% on macaque TEM. Segmentation of a full rat spinal cord slice is computed and morphological metrics are extracted and compared against the literature. AxonDeepSeg is freely available at this https URL
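Pixel-wise accuracy, the metric reported above, is straightforward to compute; this helper is illustrative only and is not part of the AxonDeepSeg API.

```python
import numpy as np

def pixelwise_accuracy(pred, truth):
    """Fraction of pixels whose predicted label (0 = background, 1 = axon,
    2 = myelin) matches the ground truth. Illustrative helper, not the
    AxonDeepSeg package's API."""
    return (pred == truth).mean()

pred = np.random.randint(0, 3, (512, 512))
truth = np.random.randint(0, 3, (512, 512))
print(f"{pixelwise_accuracy(pred, truth):.2%}")  # ~33% for random labels
```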
Approximations of Laplace-Beltrami operators on manifolds through graph Laplacians have become popular tools in data analysis and machine learning. These discretized operators usually depend on bandwidth parameters whose tuning remains a theoretical and practical problem. In this paper, we address this problem for the unnormalized graph Laplacian by establishing an oracle inequality that opens the door to a well-founded data-driven procedure for bandwidth selection. Our approach relies on recent results by Lacour and Massart [LM15] on the so-called Lepski's method.
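For readers unfamiliar with the object under study, below is a minimal construction of the unnormalized graph Laplacian L = D - W with a Gaussian kernel of bandwidth h, the parameter whose data-driven selection the paper addresses; the kernel choice and the candidate grid are illustrative assumptions.

```python
import numpy as np

def unnormalized_graph_laplacian(X, h):
    """Unnormalized graph Laplacian L = D - W with a Gaussian kernel of
    bandwidth h. X: (n, d) array of points sampled near a manifold."""
    sq_dists = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq_dists / (2 * h ** 2))   # kernel weight matrix
    np.fill_diagonal(W, 0.0)               # no self-loops
    D = np.diag(W.sum(axis=1))             # degree matrix
    return D - W

# Scan a grid of candidate bandwidths, as a data-driven rule would
X = np.random.randn(200, 3)
for h in (0.1, 0.5, 1.0):
    L = unnormalized_graph_laplacian(X, h)
    print(h, round(L.trace(), 2))  # trace = total degree, grows with h
```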
As part of the 2016 public evaluation challenge on Detection and Classification of Acoustic Scenes and Events (DCASE 2016), the second task focused on evaluating sound event detection systems using synthetic mixtures of office sounds. This task, which follows the `Event Detection - Office Synthetic' task of DCASE 2013, studies the behaviour of tested algorithms when facing controlled levels of audio complexity with respect to background noise and polyphony/density, with the added benefit of a very accurate ground truth. This paper presents the task formulation, evaluation metrics, submitted systems, and provides a statistical analysis of the results achieved, with respect to various aspects of the evaluation dataset.
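As a hedged example of the metric family involved, here is a minimal segment-based F-score for sound event detection; the official DCASE evaluation code and its exact settings differ in detail.

```python
import numpy as np

def segment_f1(ref_events, est_events, duration, seg=1.0):
    """Segment-based F-score for sound event detection (one common DCASE
    metric family; the challenge also used error rate and event-based
    scores). Events are (onset, offset, label) triples; time is cut into
    fixed segments and activity is compared per segment and label."""
    labels = {e[2] for e in ref_events} | {e[2] for e in est_events}
    n_seg = int(np.ceil(duration / seg))
    def grid(events):
        g = {l: np.zeros(n_seg, bool) for l in labels}
        for on, off, l in events:
            g[l][int(on // seg):int(np.ceil(off / seg))] = True
        return g
    R, E = grid(ref_events), grid(est_events)
    tp = sum(np.logical_and(R[l], E[l]).sum() for l in labels)
    fp = sum(np.logical_and(~R[l], E[l]).sum() for l in labels)
    fn = sum(np.logical_and(R[l], ~E[l]).sum() for l in labels)
    return 2 * tp / (2 * tp + fp + fn) if tp else 0.0

ref = [(0.0, 2.0, "phone"), (3.0, 4.0, "keys")]
est = [(0.5, 2.0, "phone"), (3.0, 5.0, "keys")]
print(segment_f1(ref, est, duration=6.0))  # ~0.857
```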
Submarines are vital for maritime defense, requiring optimized hydrodynamic performance to minimize resistance. Advances in Computational Fluid Dynamics (CFD) enable accurate predictions of submarine hydrodynamics for optimal design. This study compared the meshing capabilities of OpenFOAM and commercial software, as well as the effect of High-Performance Computing (HPC) versus standard PC resources, on predicted hydrodynamic characteristics. A RANS turbulence model was employed to analyze the resistance of MARIN's BB2 submarine. CFD simulations were conducted at model scale (1:35.1) at a speed of 1.8235 m/s (corresponding to a full-scale speed of 21 knots) across mesh densities from 1 to 97 million cells. Turbulence parameters were initialized using empirical equations. Mesh-sensitivity and iteration-convergence studies ensured validated results. The findings showed that the results were validated with errors ranging from 0.3% to 10% across the different mesh densities. The lowest error (0.3%) was achieved with 97 million cells generated by the commercial meshing tool with HPC, while 13 million cells generated by OpenFOAM on a standard PC resulted in a 3.4% error. Accuracy improved with precise initialization of turbulence parameters, meshing strategy, numerical schemes, and computing resources. A standard PC with the OpenFOAM meshing tool produced acceptable accuracy, with less than 5% error, at lower mesh densities. Thus, a standard PC can be recommended for preliminary hydrodynamic simulations, whereas HPC with commercial software remains essential for detailed industrial analyses, such as full-scale resistance and propulsion simulations.
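As a hedged illustration of such empirical initialization, the following uses common textbook relations linking freestream speed, turbulence intensity, and a length scale to k, epsilon, and omega; the paper's exact equations and input values are not given in the abstract.

```python
import numpy as np

def turbulence_inlet(U, intensity=0.05, length_scale=0.1, c_mu=0.09):
    """Common empirical initialization of RANS turbulence parameters from
    freestream speed U (m/s), turbulence intensity, and a length scale (m).
    Illustrative textbook relations; the study's exact equations differ."""
    k = 1.5 * (U * intensity) ** 2                       # turbulent kinetic energy
    eps = c_mu ** 0.75 * k ** 1.5 / length_scale         # dissipation rate
    omega = np.sqrt(k) / (c_mu ** 0.25 * length_scale)   # specific dissipation
    return k, eps, omega

print(turbulence_inlet(U=1.8235))  # model-scale speed from the study
```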
The data-driven computing paradigm initially introduced by Kirchdoerfer & Ortiz (2016) is extended by incorporating locally linear tangent spaces into the data set. These tangent spaces are constructed by means of the tensor voting method introduced by Mordohai & Medioni (2010), which improves the learning of the underlying structure of a data set. Tensor voting is an instance-based machine learning technique that accumulates votes from the nearest neighbors to build up second-order tensors encoding tangents and normals to the underlying data structure. The second-order data-driven paradigm proposed here is a plug-in method for distance-minimizing as well as entropy-maximizing data-driven schemes. Like its predecessor, the resulting method aims to minimize a suitably defined free energy over phase space subject to compatibility and equilibrium constraints. The method's implementation is straightforward and numerically efficient, since the data-structure analysis is performed in an offline step. Selected numerical examples establish the higher-order convergence properties of data-driven solvers enhanced by tensor voting for ideal and noisy data sets.
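As a rough illustration of the distance-minimizing side of such schemes, here is a simplified projection step: given a trial phase-space state, find the nearest material data point or, in the second-order variant, the nearest point on its local tangent line. The finite-difference tangent estimation stands in for tensor voting, and the compatibility/equilibrium constraints are omitted.

```python
import numpy as np

def closest_data_state(z, data_points, tangents=None):
    """One distance-minimizing projection step of data-driven computing:
    find the material data point (first-order) or the point on its local
    tangent line (second-order) closest to the trial state z. Simplified
    sketch; compatibility and equilibrium constraints are omitted."""
    d2 = ((data_points - z) ** 2).sum(axis=1)
    i = int(np.argmin(d2))
    if tangents is None:                 # first-order: nearest data point
        return data_points[i]
    t = tangents[i] / np.linalg.norm(tangents[i])
    # second-order: project onto the local tangent line at point i
    return data_points[i] + ((z - data_points[i]) @ t) * t

# Toy 1D stress-strain data; finite-difference tangents stand in for
# the tensor-voting estimates used in the paper
eps = np.linspace(0, 1, 20)
sig = 2.0 * eps + 0.01 * np.random.randn(20)
pts = np.column_stack([eps, sig])
tans = np.gradient(pts, axis=0)
print(closest_data_state(np.array([0.5, 1.2]), pts, tans))
```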
Current deep-learning-based methods do not integrate easily into clinical protocols, nor do they take full advantage of medical knowledge. In this work, we propose and compare several strategies relying on curriculum learning to support the classification of proximal femur fractures from X-ray images, a challenging problem, as reflected by existing intra- and inter-expert disagreement. Our strategies are derived from knowledge such as medical decision trees and inconsistencies in the annotations of multiple experts, which allows us to assign a degree of difficulty to each training sample. We demonstrate that if we start by learning "easy" examples and move towards "hard" ones, the model can reach better performance, even with less data. The evaluation is performed on the classification of a clinical dataset of about 1000 X-ray images. Our results show that, compared to class-uniform and random strategies, the proposed medical-knowledge-based curriculum performs up to 15% better in terms of accuracy, reaching the performance of experienced trauma surgeons.
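The core mechanism is simple to sketch: order samples by a difficulty score and feed the model progressively harder subsets. The scoring and staging below are hypothetical placeholders for the paper's decision-tree- and disagreement-derived difficulty.

```python
import numpy as np

def curriculum_order(samples, difficulty):
    """Order training samples from 'easy' to 'hard'. The difficulty score
    is a hypothetical per-sample value, e.g. derived from depth in a
    medical decision tree or from inter-expert disagreement."""
    return [samples[i] for i in np.argsort(difficulty)]

def curriculum_batches(samples, difficulty, n_stages=3):
    """Staged curriculum: each stage adds the next-harder tranche;
    the final stage uses the full training set."""
    ordered = curriculum_order(samples, difficulty)
    stage = max(1, len(ordered) // n_stages)
    subsets = [ordered[: (k + 1) * stage] for k in range(n_stages - 1)]
    return subsets + [ordered]

X = list(range(10))
diff = np.array([0.9, 0.1, 0.5, 0.3, 0.8, 0.2, 0.7, 0.4, 0.6, 0.0])
for subset in curriculum_batches(X, diff):
    pass  # train the model on progressively harder subsets
```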
Designers, process planners, and manufacturers naturally consider different concepts for the same object. The rigidity of production means and the design specification requirements mark out process planners as responsible for the coherent integration of all constraints. First, this paper details an innovative resource-selection solution, applied to aircraft manufacturing parts. Second, key concepts are instantiated for the considered industrial domain. Finally, a digital mock-up validates the viability of the solution and demonstrates the possibility of in-process knowledge capitalisation and use. Formalising the link between design and manufacturing opens the way to enhanced simultaneous product/process development.
Otologic surgery has some specificities compared to other surgeries. The anatomical working space is small, with various anatomical structures to preserve, such as the ossicles or the facial nerve. This requires the use of a microscope or an endoscope. The microscope lets the surgeon use both hands, but allows only direct vision. The endoscope leaves the surgeon only one hand to use his tools, but provides a "fish-eye" vision. The rise of endoscopy over the past few years has led to the development of numerous devices for the surgeon: the RobOtol, the first otological robot, designed to perform some movements and hold an endoscope, or the Endofix Exo. Both devices require the surgeon's hand to be moved. No robotic device allows the endoscope to be directed autonomously while the surgeon keeps both hands free to work, as when working with a microscope. The objective of our work is to define the specific needs of assistance in otologic surgery.
This research set out to identify and structure, from online reviews, the words and expressions related to customers' likes and dislikes in order to guide product development. Previous methods focused mainly on product features. However, reviewers express their preferences on more than product features. In this paper, based on an extensive literature review in design science, the authors propose a summarization model containing multiple aspects of user preference, such as product affordances, emotions, and usage conditions. The linguistic patterns describing these aspects of preference are discovered and drafted as annotation guidelines. A case study demonstrates that, with the proposed model and the annotation guidelines, human annotators can structure online reviews with high inter-annotator agreement. As high-agreement human annotation results are essential for automating the online-review summarization process with natural language processing, this study provides materials for future work on automation.
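The abstract does not name its agreement statistic; Cohen's kappa, sketched below with hypothetical labels, is a common choice for such two-annotator labeling tasks.

```python
from collections import Counter

def cohens_kappa(labels_a, labels_b):
    """Cohen's kappa: observed agreement corrected for chance agreement.
    A standard inter-annotator agreement measure; the paper's exact
    statistic is not specified in the abstract."""
    n = len(labels_a)
    po = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    ca, cb = Counter(labels_a), Counter(labels_b)
    pe = sum(ca[l] * cb[l] for l in ca) / n ** 2
    return (po - pe) / (1 - pe)

a = ["affordance", "emotion", "feature", "feature", "usage"]
b = ["affordance", "emotion", "feature", "usage", "usage"]
print(round(cohens_kappa(a, b), 3))  # 0.737
```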
We present an accurate, fast, and efficient method for segmentation and muscle-mask propagation in 3D freehand ultrasound data, towards accurate volume quantification. A deep Siamese 3D encoder-decoder network that captures the evolution of muscle appearance and shape across contiguous slices is deployed. We use it to propagate a reference mask annotated by a clinical expert. To handle longer-range changes of muscle shape over the entire volume and to provide accurate propagation, we devise a Bidirectional Long Short-Term Memory module. Also, to train our model with a minimal amount of training samples, we propose a strategy combining learning from a few annotated 2D ultrasound slices with sequential pseudo-labeling of the unannotated slices. We introduce a decremental update of the objective function to guide model convergence in the absence of large amounts of annotated data. After training with a small number of volumes, the decremental update transitions from weakly-supervised training to a few-shot setting. Finally, to handle the class imbalance between foreground and background muscle pixels, we propose a parametric Tversky loss function that learns to adaptively penalize false positives and false negatives. We validate our approach on the segmentation, label propagation, and volume computation of three lower-limb muscles on a dataset of 61600 images from 44 subjects. We achieve a Dice score of over 95% and a volumetric error of 1.6035 ± 0.587%.
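A fixed-parameter Tversky loss looks as follows; in the paper the penalty weights are learned adaptively, whereas the alpha and beta values here are illustrative constants.

```python
import numpy as np

def tversky_loss(pred, truth, alpha=0.3, beta=0.7, eps=1e-7):
    """Tversky loss for imbalanced segmentation: alpha weights false
    positives, beta false negatives (beta > alpha penalizes misses more,
    a common choice for sparse foregrounds; the paper learns these
    adaptively). pred: soft foreground probabilities, truth: binary mask."""
    tp = (pred * truth).sum()
    fp = (pred * (1 - truth)).sum()
    fn = ((1 - pred) * truth).sum()
    return 1 - (tp + eps) / (tp + alpha * fp + beta * fn + eps)

pred = np.random.rand(64, 64)
truth = (np.random.rand(64, 64) > 0.9).astype(float)  # sparse foreground
print(tversky_loss(pred, truth))
```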
Risk assessment of cyber-physical systems, such as power plants, connected devices, and IT infrastructures, has always been challenging: safety (i.e., absence of unintentional failures) and security (i.e., no disruptions due to attackers) are conditions that must be guaranteed. One of the traditional tools used to reason about these problems is attack trees, a tree-based formalism inspired by fault trees, a well-known formalism used in safety engineering. In this paper, we define and implement the translation of attack-fault trees (AFTs) to a new extension of timed automata, called parametric weighted timed automata. This allows us to parametrize constants such as time and discrete costs in an AFT and then, using the model checker IMITATOR, to compute the set of parameter values for which a successful attack is possible. Using the different sets of parameter values computed, different attack and fault scenarios can be deduced depending on the budget, time, or computation power of the attacker, providing helpful data to select the most efficient countermeasure.
We demonstrate the feasibility of a fully automatic computer-aided diagnosis (CAD) tool, based on deep learning, that localizes and classifies proximal femur fractures on X-ray images according to the AO classification. The proposed framework aims to improve patient treatment planning and to support the training of trauma surgery residents. A database of 1347 clinical radiographic studies was collected. Radiologists and trauma surgeons annotated all fractures with bounding boxes and provided a classification according to the AO standard. The proposed CAD tool reaches an F1-score of 87% and an AUC of 0.95 for the classification of radiographs into types "A", "B", and "not-fractured"; when classifying fractured versus not-fractured cases, these improve to 94% and 0.98, respectively. Prior localization of the fracture improves upon full-image classification. 100% of the predicted centers of the region of interest are contained in the manually provided bounding boxes. The system retrieves, on average, 9 relevant images (from the same class) out of 10 cases. Our CAD scheme localizes, detects, and further classifies proximal femur fractures, achieving results comparable to expert-level and state-of-the-art performance. Our auxiliary localization model was highly accurate in predicting the region of interest in the radiograph. We further investigated several verification strategies for its adoption into the daily clinical routine. A sensitivity analysis of the ROI size and image retrieval as a clinical use case are presented.
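The ROI-center containment criterion reported at 100% can be stated in a few lines; the (x_min, y_min, x_max, y_max) box format below is an assumption.

```python
def center_in_box(pred_box, gt_box):
    """Check that the center of a predicted region of interest falls inside
    the expert's bounding box, i.e. the containment criterion the abstract
    reports at 100%. Boxes are (x_min, y_min, x_max, y_max)."""
    cx = (pred_box[0] + pred_box[2]) / 2
    cy = (pred_box[1] + pred_box[3]) / 2
    return gt_box[0] <= cx <= gt_box[2] and gt_box[1] <= cy <= gt_box[3]

print(center_in_box((40, 50, 120, 150), (30, 40, 140, 160)))  # True
```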
This communication is based on an original approach linking economic factors to technical and methodological ones, applied to the decision process for mixed production. The approach is relevant for costing-driven systems. Its main interest is that the quotation factors (linked to time indicators for each step of the industrial process) allow the complete evaluation and control of, on the one hand, the company's global balance over a six-month period and, on the other hand, the reference values for each step of the parts' process cycle. The approach relies on complete numerical traceability and control of the processes (design and manufacturing of the parts and tools, mass production), made possible by numerical models and by feedback loops for cost-indicator analysis at the design and production levels. Quotation is also the basis for the design requirements and for the choice and configuration of the production process. The reference values of the quotation generate the base reference parameters of the process steps and operations. The traceability of actual values (actual time spent, actual consumables) is mainly used for statistical feedback to the quotation application. The industrial environment is a steel sand-casting company with a wide product mix, and the application concerns both design and manufacturing. The production system is fully automated and handles different products at the same time.
We study fully convolutional neural networks in the context of malignancy detection for breast cancer screening. We work on a supervised segmentation task, looking for an acceptable compromise between network precision and computational complexity.