Laboratoire de Mathématiques Jean Leray, University of Nantes
Identifying the Best Machine Learning Algorithms for Brain Tumor Segmentation, Progression Assessment, and Overall Survival Prediction in the BRATS Challenge
23 Apr 2019
Gliomas are the most common primary brain malignancies, with different degrees of aggressiveness, variable prognosis and various heterogeneous histologic sub-regions, i.e., peritumoral edematous/invaded tissue, necrotic core, active and non-enhancing core. This intrinsic heterogeneity is also portrayed in their radio-phenotype, as their sub-regions are depicted by varying intensity profiles disseminated across multi-parametric magnetic resonance imaging (mpMRI) scans, reflecting varying biological properties. Their heterogeneous shape, extent, and location are some of the factors that make these tumors difficult to resect, and in some cases inoperable. The amount of resected tumor is a factor also considered in longitudinal scans, when evaluating the apparent tumor for potential diagnosis of progression. Furthermore, there is mounting evidence that accurate segmentation of the various tumor sub-regions can offer the basis for quantitative image analysis towards prediction of patient overall survival. This study assesses the state-of-the-art machine learning (ML) methods used for brain tumor image analysis in mpMRI scans, during the last seven instances of the International Brain Tumor Segmentation (BraTS) challenge, i.e., 2012-2018. Specifically, we focus on i) evaluating segmentations of the various glioma sub-regions in pre-operative mpMRI scans, ii) assessing potential tumor progression by virtue of longitudinal growth of tumor sub-regions, beyond use of the RECIST/RANO criteria, and iii) predicting the overall survival from pre-operative mpMRI scans of patients that underwent gross total resection. Finally, we investigate the challenge of identifying the best ML algorithms for each of these tasks, considering that apart from being diverse on each instance of the challenge, the multi-institutional mpMRI BraTS dataset has also been a continuously evolving/growing dataset.
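As a minimal illustration of the segmentation-evaluation task in (i) -- BraTS scores submitted segmentations of the tumor sub-regions against reference annotations with overlap measures such as the Dice coefficient -- the sketch below (function name and toy masks are ours, not the challenge's evaluation code) computes Dice for a pair of binary masks:

```python
import numpy as np

def dice_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks.

    pred, truth: arrays of identical shape, e.g. a predicted and a reference
    segmentation of one tumor sub-region (whole tumor, tumor core, or
    enhancing tumor).
    """
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    if denom == 0:          # both masks empty: define the score as 1.0
        return 1.0
    return 2.0 * intersection / denom

# Toy 2D example with hypothetical masks
truth = np.zeros((8, 8), dtype=bool)
truth[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 3:7] = True
print(f"Dice = {dice_score(pred, truth):.3f}")   # 0.562 for this toy overlap
```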
Event-by-Event Simulation of the Three-Dimensional Hydrodynamic Evolution from Flux Tube Initial Conditions in Ultrarelativistic Heavy Ion Collisions
We present a realistic treatment of the hydrodynamic evolution of ultrarelativistic heavy ion collisions, based on the following features: initial conditions obtained from a flux tube approach, compatible with the string model and the color glass condensate picture; an event-by-event procedure, taking into account the highly irregular space structure of single events, experimentally visible via so-called ridge structures in two-particle correlations; use of an efficient code for solving the hydrodynamic equations in 3+1 dimensions, including the conservation of baryon number, strangeness, and electric charge; employment of a realistic equation of state, compatible with lattice gauge results; use of a complete hadron resonance table, making our calculations compatible with the results from statistical models; and a hadronic cascade procedure after hadronization of the thermal matter at an early time.
Perceptual Visual Quality Assessment: Principles, Methods, and Future Directions
As multimedia services such as video streaming, video conferencing, virtual reality (VR), and online gaming continue to expand, ensuring high perceptual visual quality becomes a priority to maintain user satisfaction and competitiveness. However, multimedia content undergoes various distortions during acquisition, compression, transmission, and storage, resulting in the degradation of experienced quality. Thus, perceptual visual quality assessment (PVQA), which focuses on evaluating the quality of multimedia content based on human perception, is essential for optimizing user experiences in advanced communication systems. Several challenges are involved in the PVQA process, including diverse characteristics of multimedia content such as image, video, VR, point cloud, mesh, multimodality, etc., and complex distortion scenarios as well as viewing conditions. In this paper, we first present an overview of PVQA principles and methods. This includes both subjective methods, where users directly rate their experiences, and objective methods, where algorithms predict human perception based on measurable factors such as bitrate, frame rate, and compression levels. Based on the basics of PVQA, quality predictors for different multimedia data are then introduced. In addition to traditional images and videos, immersive multimedia and generative artificial intelligence (GenAI) content are also discussed. Finally, the paper concludes with a discussion on the future directions of PVQA research.
Characterisation of Hamamatsu R11065-20 PMTs for use in the SABRE South NaI(Tl) Crystal Detectors

This paper presents a detailed characterization of Hamamatsu R11065-20 photomultiplier tubes for the SABRE South dark matter experiment, providing essential data on their gain, dark rate, and timing properties. The work includes the development of machine learning techniques that improve signal discrimination efficiency to 75-80% at 90% background rejection for low-energy events.

Causal Consistency: Beyond Memory
In distributed systems where strong consistency is costly, when not impossible, causal consistency provides a valuable abstraction to represent program executions as partial orders. In addition to the sequential program order of each computing entity, the causal order also contains the semantic links between the events that affect the shared objects -- message emission and reception in a communication channel, or reads and writes on a shared register. Usual approaches based on semantic links are very difficult to adapt to other data types such as queues or counters because they require a specific analysis of causal dependencies for each data type. This paper presents a new approach to define causal consistency for any abstract data type based on sequential specifications. It explores, formalizes and studies the differences between three variations of causal consistency, and highlights them in the light of PRAM, eventual consistency and sequential consistency: weak causal consistency, which captures the notion of causality preservation when focusing on convergence; causal convergence, which mixes weak causal consistency and convergence; and causal consistency, which coincides with causal memory when applied to shared memory.
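As a small, generic illustration of the causal (happens-before) order the abstract builds on -- program order plus message emission/reception links -- the sketch below tracks causality with vector clocks; this is a textbook construction used only to fix intuition, not the formalism of the paper:

```python
from typing import Dict

def new_clock(processes) -> Dict[str, int]:
    """Fresh vector clock: one counter per process."""
    return {p: 0 for p in processes}

def local_event(clock, p):
    """A local step (e.g. a read or write) on process p."""
    clock = dict(clock)
    clock[p] += 1
    return clock

def send(clock, p):
    """Message emission: a local event whose clock travels with the message."""
    clock = local_event(clock, p)
    return clock, dict(clock)

def receive(clock, msg_clock, p):
    """Message reception: merge the sender's clock, then tick locally."""
    merged = {q: max(clock[q], msg_clock[q]) for q in clock}
    return local_event(merged, p)

def happens_before(c1, c2) -> bool:
    """c1 -> c2 in the causal order iff c1 <= c2 component-wise and c1 != c2."""
    return all(c1[q] <= c2[q] for q in c1) and c1 != c2

# Toy run: p sends a message to q; the emission causally precedes the reception.
procs = ["p", "q"]
cp, cq = new_clock(procs), new_clock(procs)
cp, msg = send(cp, "p")
cq = receive(cq, msg, "q")
print(happens_before(cp, cq))   # True
print(happens_before(cq, cp))   # False
```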
ESVQA: Perceptual Quality Assessment of Egocentric Spatial Videos
With the rapid development of eXtended Reality (XR), egocentric spatial shooting and display technologies have further enhanced immersion and engagement for users, delivering more captivating and interactive experiences. Assessing the quality of experience (QoE) of egocentric spatial videos is crucial to ensure a high-quality viewing experience. However, the corresponding research is still lacking. In this paper, we use the concept of embodied experience to highlight this more immersive experience and study the new problem, i.e., embodied perceptual quality assessment for egocentric spatial videos. Specifically, we introduce the first Egocentric Spatial Video Quality Assessment Database (ESVQAD), which comprises 600 egocentric spatial videos captured using the Apple Vision Pro and their corresponding mean opinion scores (MOSs). Furthermore, we propose a novel multi-dimensional binocular feature fusion model, termed ESVQAnet, which integrates binocular spatial, motion, and semantic features to predict the overall perceptual quality. Experimental results demonstrate that ESVQAnet significantly outperforms 16 state-of-the-art VQA models on the embodied perceptual quality assessment task and exhibits strong generalization capability on traditional VQA tasks. The database and code are available at this https URL.
Hybrid-MST: A Hybrid Active Sampling Strategy for Pairwise Preference Aggregation
In this paper we present a hybrid active sampling strategy for pairwise preference aggregation, which aims at recovering the underlying ratings of the test candidates from sparse and noisy pairwise labelling. Our method employs a Bayesian optimization framework and the Bradley-Terry model to construct the utility function and to obtain the Expected Information Gain (EIG) of each pair. For computational efficiency, Gauss-Hermite quadrature is used to estimate the EIG. The proposed hybrid strategy uses either Global Maximum (GM) EIG sampling or Minimum Spanning Tree (MST) sampling in each trial, as determined by the test budget. The method has been validated on both simulated and real-world datasets, where it shows higher preference aggregation ability than the state-of-the-art methods.
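The key quantity behind the sampling decisions above is the expected information gain of observing one more comparison. As a hedged sketch (the Gaussian belief over the latent score difference, the logistic Bradley-Terry link, and the function names are our illustrative assumptions, not the authors' implementation), the snippet below approximates EIG with Gauss-Hermite quadrature:

```python
import numpy as np

def binary_entropy(p):
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log(p) + (1 - p) * np.log(1 - p))

def expected_information_gain(mu, sigma, n_nodes=32):
    """EIG of one pairwise comparison.

    Belief over the latent score difference d = s_i - s_j is N(mu, sigma^2);
    the outcome model is Bradley-Terry / logistic, P(i beats j | d) = 1/(1+exp(-d)).
    EIG = H(E_d[p]) - E_d[H(p)], with the expectations over d approximated by
    Gauss-Hermite quadrature.
    """
    x, w = np.polynomial.hermite.hermgauss(n_nodes)   # physicists' nodes/weights
    d = mu + np.sqrt(2.0) * sigma * x                  # change of variables for N(mu, sigma^2)
    p = 1.0 / (1.0 + np.exp(-d))
    w = w / np.sqrt(np.pi)                             # normalise so the weights sum to 1
    p_marginal = np.sum(w * p)
    return binary_entropy(p_marginal) - np.sum(w * binary_entropy(p))

# A nearly decided pair (large |mu|, small sigma) is barely informative,
# an uncertain pair (mu ~ 0, large sigma) is much more informative.
print(expected_information_gain(3.0, 0.1))
print(expected_information_gain(0.0, 2.0))
```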
Modular Constraint Solver Cooperation via Abstract Interpretation
Cooperation among constraint solvers is difficult because different solving paradigms have different theoretical foundations. Recent works have shown that abstract interpretation can provide a unifying theory for various constraint solvers. In particular, it relies on abstract domains which capture constraint languages as ordered structures. The key insight of this paper is viewing cooperation schemes as combinations of abstract domains. We propose a modular framework in which solvers and cooperation schemes can be seamlessly added and combined. This differs from existing approaches such as SMT, where the cooperation scheme is usually fixed (e.g., Nelson-Oppen). We contribute two new cooperation schemes: (i) interval propagators completion, which allows abstract domains to exchange bound constraints, and (ii) the delayed product, which exchanges over-approximations of constraints between two abstract domains. Moreover, the delayed product is based on the delayed goals of logic programming, and it shows that abstract domains can also capture control aspects of constraint solving. Finally, to achieve modularity, we propose the shared product to combine abstract domains and cooperation schemes. Our approach has been fully implemented, and we provide various examples on the flexible job shop scheduling problem. Under consideration for acceptance in TPLP.
Road map for the tuning of hadronic interaction models with accelerator-based and astroparticle data
In high-energy and astroparticle physics, event generators play an essential role, even in the simplest data analyses. As analysis techniques become more sophisticated, e.g. based on deep neural networks, their correct description of the observed event characteristics becomes even more important. Physical processes occurring in hadronic collisions are simulated within a Monte Carlo framework. A major challenge is the modeling of hadron dynamics at low momentum transfer, which includes the initial and final phases of every hadronic collision. Phenomenological models inspired by Quantum Chromodynamics used for these phases cannot guarantee completeness or correctness over the full phase space. These models usually include parameters which must be tuned to suitable experimental data. Until now, event generators have primarily been developed and tuned based on data from high-energy physics experiments at accelerators. However, in many cases they have been found to not satisfactorily describe data from astroparticle experiments, which provide sensitivity especially to hadrons produced nearly parallel to the collision axis and cover center-of-mass energies up to several hundred TeV, well beyond those reached at colliders so far. In this report, we address the complementarity of these two sets of data and present a road map for exploiting, for the first time, their complementarity by enabling a unified tuning of event generators with accelerator-based and astroparticle data.
Caesium fallout in Tokyo on 15th March, 2011 is dominated by highly radioactive, caesium-rich microparticles
In order to understand the chemical properties and environmental impacts of low-solubility Cs-rich microparticles (CsMPs) derived from the FDNPP, CsMPs collected from Tokyo were investigated at the atomic scale using high-resolution transmission electron microscopy (HRTEM), and dissolution experiments were performed on the air filters. Remarkably, CsMPs 0.58-2.0 micrometers in size constituted 80%-89% of the total Cs radioactivity during the initial fallout events on 15th March, 2011. The CsMPs from Tokyo and Fukushima exhibit the same texture at the nanoscale: aggregates of Zn-Fe-oxide nanoparticles embedded in amorphous SiO2 glass. The Cs is associated with the Zn-Fe-oxide nanoparticles or occurs as nanoscale inclusions of intrinsic Cs species, rather than being dissolved in the SiO2 matrix. The Cs concentration in CsMPs from Tokyo (0.55-10.9 wt%) is generally less than that in particles from Fukushima (8.5-12.9 wt%). The radioactivity per unit mass of CsMPs from Tokyo is still as high as ~10^11 Bq/g, which is extremely high for particles originating from nuclear accidents. Thus, inhalation of the low-solubility CsMPs would result in a high localized energy deposition by beta radiation ((0.51-12)x10^-3 Gy/h within the 100-micrometer-thick water layer on the CsMP surface) and may have longer-term effects compared with those predicted for soluble Cs species.
A 2-categorical approach to the semantics of dependent type theory with computation axioms
Axiomatic type theory is a dependent type theory without computation rules. The term equality judgements that usually characterise these rules are replaced by computation axioms, i.e., additional term judgements that are typed by identity types. This paper is devoted to providing an effective description of its semantics, from a higher categorical perspective: given the challenge of encoding intensional type formers into 1-dimensional categorical terms and properties, a challenge that persists even for axiomatic type formers, we adopt Richard Garner's approach in the 2-dimensional study of dependent types. We prove that the type formers of axiomatic theories can be encoded into natural 2-dimensional category theoretic data, obtaining a presentation of the semantics of axiomatic type theory via 2-categorical models called display map 2-categories. In the axiomatic case, the 2-categorical requirements identified by Garner for interpreting intensional type formers are relaxed. Therefore, we obtain a presentation of the semantics of the axiomatic theory that generalises Garner's presentation of the intensional case. Our main result states that the interpretation of axiomatic theories within display map 2-categories is well-defined and enjoys the soundness property. We use this fact to provide a semantic proof that the computation rule of intensional identity types is not admissible in axiomatic type theory. This is achieved by revisiting Hofmann and Streicher's groupoid model, which validates axiomatic identity types but not intensional ones.
Inclusive pi^0, eta, and direct photon production at high transverse momentum in p+p and d+Au collisions at sqrt(s_NN) = 200 GeV
We report a measurement of high-p_T inclusive pi^0, eta, and direct photon production in p+p and d+Au collisions at sqrt(s_NN) = 200 GeV at midrapidity (0 < eta < 1). Photons from the decay pi^0 -> gamma gamma were detected in the Barrel Electromagnetic Calorimeter of the STAR experiment at the Relativistic Heavy Ion Collider. The eta -> gamma gamma decay was also observed and constituted the first eta measurement by STAR. The first direct photon cross section measurement by STAR is also presented; the signal was extracted statistically by subtracting the pi^0, eta, and omega(782) decay background from the inclusive photon distribution observed in the calorimeter. The analysis is described in detail, and the results are found to be in good agreement with earlier measurements and with next-to-leading order perturbative QCD calculations.
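A small illustrative aside (not the STAR analysis code): pi^0 and eta yields in measurements of this kind are obtained from peaks in the diphoton invariant mass, which for massless photons satisfies M^2 = 2 E_1 E_2 (1 - cos(theta_12)). A minimal sketch of that kinematic relation, with illustrative numbers of our own:

```python
import numpy as np

def diphoton_mass(e1, n1, e2, n2):
    """Invariant mass of two photons from their energies and direction vectors.

    For massless photons, M^2 = 2 * E1 * E2 * (1 - cos(theta_12)).
    """
    cos_t = np.dot(n1, n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
    return np.sqrt(2.0 * e1 * e2 * (1.0 - cos_t))

# Toy example: two 1 GeV photons opening at 90 degrees give M ~ 1.414 GeV;
# in a real analysis, a peak of such masses near 0.135 GeV tags pi^0 candidates.
print(diphoton_mass(1.0, [1.0, 0.0, 0.0], 1.0, [0.0, 1.0, 0.0]))
```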
A Cocycle Model for Topological and Lie Group Cohomology
We propose a unified framework in which the different constructions of cohomology groups for topological and Lie groups can all be treated on an equal footing. In particular, we show that the cohomology of "locally continuous" cochains (respectively "locally smooth" in the case of Lie groups) fits into this framework, which provides an easily accessible cocycle model for topological and Lie group cohomology. We illustrate the use of this unified framework and the relation between the different models in various applications. This includes the construction of cohomology classes characterizing the string group and a direct connection to Lie algebra cohomology.
Cohomology of finite monogenic self-distributive structures
A shelf is a set with a binary operation $\triangleright$ satisfying $a \triangleright (b \triangleright c) = (a \triangleright b) \triangleright (a \triangleright c)$. Racks are shelves with invertible translations $b \mapsto a \triangleright b$; many of their aspects, including cohomological, are better understood than those of general shelves. Finite monogenic shelves (FMS), of which Laver tables and cyclic racks are the most famous examples, form a remarkably rich family of structures and play an important role in set theory. We compute the cohomology of FMS with arbitrary coefficients. On the way we develop general tools for studying the cohomology of shelves. Moreover, inside any finite shelf we identify a sub-rack which inherits its major characteristics, including the cohomology. For FMS, these sub-racks are all cyclic.
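For readers meeting these structures for the first time, a small illustrative sketch (ours, not from the paper): the Laver table A_n mentioned above is the shelf on {1, ..., 2^n} whose operation satisfies p * 1 = p + 1 (mod 2^n), and its full table can be computed with the standard recursion p * (q + 1) = (p * q) * (p + 1), filling rows from the bottom up; the code then checks left self-distributivity directly.

```python
def laver_table(n):
    """Laver table A_n on {1, ..., N} with N = 2**n.

    Rows are filled from p = N down to 1 using
        p * 1       = p + 1  (mod N, with N * 1 = 1),
        p * (q + 1) = (p * q) * (p + 1),
    which only refers to rows strictly above p, already computed.
    """
    N = 2 ** n
    op = [[0] * (N + 1) for _ in range(N + 1)]    # 1-based, op[p][q] = p * q
    for q in range(1, N + 1):                     # bottom row: N * q = q
        op[N][q] = q
    for p in range(N - 1, 0, -1):
        op[p][1] = p + 1
        for q in range(1, N):
            op[p][q + 1] = op[op[p][q]][p + 1]
    return op

def is_shelf(op):
    """Check a * (b * c) == (a * b) * (a * c) for all triples."""
    N = len(op) - 1
    return all(op[a][op[b][c]] == op[op[a][b]][op[a][c]]
               for a in range(1, N + 1)
               for b in range(1, N + 1)
               for c in range(1, N + 1))

op = laver_table(3)     # the 8-element Laver table A_3
print(is_shelf(op))     # True
```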
Perceptual representations of structural information in images: application to quality assessment of synthesized view in FTV scenario
As immersive multimedia techniques like Free-viewpoint TV (FTV) develop at an astonishing rate, users' demand for high-quality immersive content increases dramatically. Unlike traditional uniform artifacts, the distortions within immersive content can be non-uniform and structure-related, and are thus challenging for commonly used quality metrics. Recent studies have demonstrated that the representation of visual features can be extracted from multiple levels of the hierarchy. Inspired by the hierarchical representation mechanism in the human visual system (HVS), in this paper we explore adopting structural representations to quantitatively measure the impact of such structure-related distortion on perceived quality in the FTV scenario. More specifically, a bio-inspired full reference image quality metric is proposed based on 1) a low-level contour descriptor; 2) a mid-level contour category descriptor; and 3) a task-oriented non-natural structure descriptor. The experimental results show that the proposed model significantly outperforms the state-of-the-art metrics.
A general procedure to combine estimators
A general method to combine several estimators of the same quantity is investigated. In the spirit of model and forecast averaging, the final estimator is computed as a weighted average of the initial ones, where the weights are constrained to sum to one. In this framework, the optimal weights, minimizing the quadratic loss, are entirely determined by the mean square error matrix of the vector of initial estimators. The averaging estimator is built using an estimation of this matrix, which can be computed from the same dataset. A non-asymptotic error bound on the averaging estimator is derived, leading to asymptotic optimality under mild conditions on the estimated mean square error matrix. This method is illustrated on standard statistical problems in parametric and semi-parametric models where the averaging estimator outperforms the initial estimators in most cases.
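As a hedged numerical sketch of the weighting step (function names and toy numbers are ours): under the sum-to-one constraint, minimizing the quadratic loss w'Sw, with S the (estimated) mean square error matrix of the vector of initial estimators, gives in the standard derivation w = S^{-1}1 / (1'S^{-1}1), and the averaging estimator is the corresponding weighted average.

```python
import numpy as np

def averaging_weights(mse_matrix):
    """Weights minimising w' S w subject to sum(w) = 1.

    Closed form: w = S^{-1} 1 / (1' S^{-1} 1), where S is the (estimated)
    mean square error matrix of the vector of initial estimators.
    """
    S = np.asarray(mse_matrix, dtype=float)
    ones = np.ones(S.shape[0])
    u = np.linalg.solve(S, ones)
    return u / (ones @ u)

def combine(estimates, mse_matrix):
    """Weighted average of the initial estimates with the optimal weights."""
    w = averaging_weights(mse_matrix)
    return w @ np.asarray(estimates, dtype=float), w

# Toy example: two unbiased, uncorrelated estimators with variances 1 and 4,
# so S = diag(1, 4) and the optimal weights are (0.8, 0.2).
value, w = combine([1.2, 0.9], np.diag([1.0, 4.0]))
print(w, value)
```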
New physics searches with heavy-ion collisions at the LHC
This document summarises proposed searches for new physics accessible in the heavy-ion mode at the CERN Large Hadron Collider (LHC), both through hadronic and ultraperipheral γγ interactions, that have a competitive or even unique discovery potential compared to standard proton-proton collision studies. Illustrative examples include searches for new particles -- such as axion-like pseudoscalars, radions, magnetic monopoles, new long-lived particles, dark photons, and sexaquarks as dark matter candidates -- as well as new interactions, such as non-linear or non-commutative QED extensions. We argue that such interesting possibilities constitute a well-justified scientific motivation, complementing standard quark-gluon-plasma physics studies, to continue running with ions at the LHC after Run 4, i.e. beyond 2030, including light and intermediate-mass ion species, accumulating nucleon-nucleon integrated luminosities in the accessible fb^-1 range per month.
Resonance production in high energy collisions from small to big systems
The aim of this paper is to understand resonance production (and more generally particle production) for different collision systems, namely proton-proton (pp), proton-nucleus (pA), and nucleus-nucleus (AA) scattering at the LHC. We investigate, in particular, particle yields and ratios versus multiplicity, using the same multiplicity definition for the three different systems, in order to analyse in a compact way the evolution of particle production with the system size and the origin of the very different system-size dependence of the different particles.
Cross section and transverse single-spin asymmetry of muons from open heavy-flavor decays in polarized p+p collisions at sqrt(s) = 200 GeV
The cross section and transverse single-spin asymmetries of mu^- and mu^+ from open heavy-flavor decays in polarized p+p collisions at sqrt(s) = 200 GeV were measured by the PHENIX experiment during 2012 at the Relativistic Heavy Ion Collider. Because heavy-flavor production is dominated by gluon-gluon interactions at sqrt(s) = 200 GeV, these measurements offer a unique opportunity to obtain information on the trigluon correlation functions. The measurements are performed at forward and backward rapidity (1.4 < |y| < 2.0) over the transverse momentum range of 1.25
Towards deep learning-powered IVF: A large public benchmark for morphokinetic parameter prediction
13 May 2022
An important limitation to the development of Artificial Intelligence (AI)-based solutions for In Vitro Fertilization (IVF) is the absence of a public reference benchmark to train and evaluate deep learning (DL) models. In this work, we describe a fully annotated dataset of 704 videos of developing embryos, for a total of 337k images. We applied ResNet, LSTM, and ResNet-3D architectures to our dataset and demonstrate that they outperform algorithmic approaches in automatically annotating the development phases. Altogether, we propose the first public benchmark that will allow the community to evaluate morphokinetic models. This is the first step towards deep learning-powered IVF. Of note, we propose highly detailed annotations with 16 different development phases, including early cell division phases, but also late cell divisions, phases after morulation, and very early phases, which have never been used before. We postulate that this original approach will help improve the overall performance of deep learning approaches on time-lapse videos of embryo development, ultimately benefiting infertile patients with improved clinical success rates (Code and data are available at this https URL).
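As a hedged sketch of the kind of frame-sequence model the abstract refers to (the exact architecture, input size, and 16-class head here are our illustrative assumptions, not the authors' released code), a ResNet image encoder can feed per-frame features to an LSTM that assigns each frame of a time-lapse video to one of the 16 development phases:

```python
import torch
import torch.nn as nn
import torchvision

class PhaseClassifier(nn.Module):
    """ResNet-18 frame encoder + LSTM over the time-lapse sequence.

    Input:  video of shape (batch, time, 3, H, W)
    Output: per-frame logits over the 16 development phases.
    """
    def __init__(self, num_phases: int = 16, hidden: int = 256):
        super().__init__()
        backbone = torchvision.models.resnet18()     # randomly initialised
        backbone.fc = nn.Identity()                  # keep the 512-d features
        self.encoder = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_phases)

    def forward(self, video: torch.Tensor) -> torch.Tensor:
        b, t, c, h, w = video.shape
        feats = self.encoder(video.reshape(b * t, c, h, w))   # (b*t, 512)
        feats = feats.reshape(b, t, -1)
        out, _ = self.lstm(feats)                              # (b, t, hidden)
        return self.head(out)                                  # (b, t, 16)

# Toy forward pass on random data
model = PhaseClassifier()
logits = model(torch.randn(2, 8, 3, 224, 224))
print(logits.shape)   # torch.Size([2, 8, 16])
```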