University of Orleans
Researchers at Massachusetts General Hospital and collaborating institutions develop a region-adaptive MRI super-resolution system combining mixture-of-experts with diffusion models, achieving superior image quality metrics while enabling specialized processing of distinct anatomical regions through three expert networks that dynamically adapt to tissue characteristics.
Subspace tracking is a fundamental problem in signal processing, where the goal is to estimate and track the underlying subspace that spans a sequence of data streams over time. In high-dimensional settings, data samples are often corrupted by non-Gaussian noise and may exhibit sparsity. This paper explores the alpha divergence for sparse subspace estimation and tracking, offering robustness to data corruption. The proposed method outperforms state-of-the-art robust subspace tracking methods while maintaining low computational complexity and memory requirements. Several experiments demonstrate its effectiveness in robust subspace tracking and direction-of-arrival (DOA) estimation.
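For reference, the abstract does not say which variant of the alpha divergence the method uses, so the following is only the standard (Amari) form between two densities p and q:

$$ D_{\alpha}(p\,\|\,q) \;=\; \frac{1}{\alpha(1-\alpha)}\left(1 - \int p(x)^{\alpha}\, q(x)^{1-\alpha}\, dx\right), \qquad \alpha \in \mathbb{R}\setminus\{0,1\}, $$

which recovers the Kullback-Leibler divergence in the limits α → 1 and α → 0; moving α away from these limits trades statistical efficiency for reduced sensitivity to outlying samples, which is the usual motivation for divergence-based robust estimation.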
Aerial and satellite imagery are inherently complementary remote sensing sources, offering high-resolution detail alongside expansive spatial coverage. However, using these sources for land cover segmentation introduces several challenges, prompting the development of a variety of segmentation methods. Among these approaches, the DeepLabV3+ architecture is considered a promising approach for single-source image segmentation. However, despite its reliable segmentation results, there is still a need to increase its robustness and improve its performance. This is particularly crucial for multimodal image segmentation, where the fusion of diverse types of information is essential. An interesting approach involves enhancing this architecture by integrating novel components and modifying certain internal processes. In this paper, we enhance the DeepLabV3+ architecture by introducing a new block of transposed convolutional layers that upsamples a second input and fuses it with the high-level features. This block is designed to amplify and integrate information from satellite images, thereby enriching the segmentation process through fusion with aerial images. For the experiments, we used the LandCover.ai (Land Cover from Aerial Imagery) dataset for aerial images, alongside the corresponding dataset derived from Sentinel-2 data. Through the fusion of both sources, a mean Intersection over Union (mIoU) of 84.91% was achieved without data augmentation.
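The abstract describes a block of transposed convolutional layers that upsamples the satellite input and fuses it with the high-level aerial features, but gives no layer sizes or fusion operator; the sketch below is only an illustrative guess (the class name `SatelliteUpsampleBlock`, channel counts, and concatenation fusion are assumptions, not the paper's design).

```python
import torch
import torch.nn as nn

class SatelliteUpsampleBlock(nn.Module):
    """Illustrative block: upsample low-resolution satellite input with transposed
    convolutions, then fuse it with high-level aerial features by concatenation
    (channel sizes and number of upsampling steps are assumed)."""
    def __init__(self, in_ch=3, out_ch=64, scale_steps=2):
        super().__init__()
        layers, ch = [], in_ch
        for _ in range(scale_steps):              # each step doubles spatial size
            layers += [nn.ConvTranspose2d(ch, out_ch, kernel_size=4, stride=2, padding=1),
                       nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True)]
            ch = out_ch
        self.up = nn.Sequential(*layers)

    def forward(self, sat, aerial_feats):
        sat_feats = self.up(sat)                  # bring satellite input up in resolution
        sat_feats = nn.functional.interpolate(    # guard against off-by-one sizes
            sat_feats, size=aerial_feats.shape[-2:], mode="bilinear", align_corners=False)
        return torch.cat([aerial_feats, sat_feats], dim=1)   # fused tensor for the decoder

# usage sketch: fuse a 64x64 Sentinel-2 patch with 256-channel high-level features
fused = SatelliteUpsampleBlock()(torch.randn(1, 3, 64, 64), torch.randn(1, 256, 256, 256))
print(fused.shape)   # torch.Size([1, 320, 256, 256])
```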
Due to the recent demand for 3-D models in several applications such as medical imaging and video games, the necessity of implementing 3-D mesh watermarking schemes aiming to protect copyright has increased considerably. The majority of robust 3-D watermarking techniques have essentially focused on robustness against attacks, while the imperceptibility of these techniques remains a real issue. In this context, a blind robust 3-D mesh watermarking method based on mesh saliency and Quantization Index Modulation (QIM) for copyright protection is proposed. The watermark is embedded by quantizing the vertex norms of the 3-D mesh with the QIM scheme, since it offers a good robustness-capacity tradeoff. The choice of vertices is guided by the mesh saliency to achieve watermark robustness and avoid visual distortions. The experimental results show the high imperceptibility of the proposed scheme while ensuring good robustness against a wide range of attacks, including additive noise, similarity transformations, smoothing and quantization.
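The abstract does not give the quantizer details, so here is only a minimal sketch of textbook binary dither-modulation QIM applied to a vector of vertex norms; the step size delta and the dither convention are assumptions.

```python
import numpy as np

def qim_embed(norms, bits, delta=0.05):
    """Embed one bit per selected vertex norm with binary dither-modulation QIM:
    quantize the norm onto the lattice shifted by bit * delta / 2."""
    d = np.asarray(bits) * delta / 2.0
    return np.round((norms - d) / delta) * delta + d

def qim_extract(norms, delta=0.05):
    """Decode by choosing the dithered lattice closest to each watermarked norm."""
    dists = [np.abs(qim_embed(norms, np.full_like(norms, b, dtype=int), delta) - norms)
             for b in (0, 1)]
    return (dists[1] < dists[0]).astype(int)

norms = np.random.uniform(0.5, 1.5, size=8)       # stand-ins for salient vertex norms
bits = np.random.randint(0, 2, size=8)
wm = qim_embed(norms, bits, delta=0.05)
assert np.array_equal(qim_extract(wm, delta=0.05), bits)
```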
While there is considerable debate over whether the domestic political institutions of host developing countries (specifically, their level of democracy) are effective in establishing the credibility of commitments toward foreign investors, researchers have also analyzed the role of international institutions, such as GATT-WTO membership and Bilateral Investment Treaties (BITs), in establishing the credibility of commitments needed to attract foreign investment. We argue that there are qualitative differences among various types of trade agreements: full-fledged trade agreements (FTA-CU) provide credibility to foreign investors, and the level of democracy in the host country conditions this effect, whereas partial scope agreements (PSA) are not sufficient to provide credible commitments and are not moderated by democracy. This paper analyses the impact of heterogeneous trade agreements, and their interaction with domestic institutions, on FDI inflows. Statistical analyses for 122 developing countries from 1970 to 2005 support this argument. The method relies on a fixed effects estimator, which helps control for endogeneity in a large panel dataset. Strict exogeneity is verified using a method suggested by Baier and Bergstrand (2007), and no feedback effect is found in the sample. The results indicate that (1) the more FTA-CUs are concluded, the larger the FDI inflows attracted into developing countries, while PSAs are insignificant in determining FDI inflows; and (2) FTA-CUs are complementary to democratic regimes, whereas the conditional effect of PSAs with democracy on levels of FDI inflows is insignificant.
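As an illustration of the kind of two-way fixed-effects specification with an agreement-by-democracy interaction described above, here is a minimal sketch on synthetic data; the variable names, toy data, and clustering choice are assumptions and not the paper's exact specification.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
countries = [c for c in "ABCDE" for _ in range(8)]                 # 5 countries x 8 periods
years = list(range(1970, 2006, 5)) * 5
fta_cu = rng.integers(0, 3, size=40)                               # count of full-fledged agreements
psa = rng.integers(0, 3, size=40)                                  # count of partial scope agreements
democracy = rng.integers(0, 10, size=40)
fdi = 0.5 * fta_cu + 0.2 * fta_cu * democracy + rng.normal(0, 1, 40)   # synthetic outcome
df = pd.DataFrame(dict(country=countries, year=years, fdi=fdi,
                       fta_cu=fta_cu, psa=psa, democracy=democracy))

# two-way fixed effects with agreement x democracy interactions,
# standard errors clustered by country
fit = smf.ols("fdi ~ fta_cu*democracy + psa*democracy + C(country) + C(year)",
              data=df).fit(cov_type="cluster",
                           cov_kwds={"groups": df["country"].astype("category").cat.codes})
print(fit.summary())
```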
The fully connected layer is an essential component of Convolutional Neural Networks (CNNs), which have demonstrated their efficiency in computer vision tasks. The CNN process usually starts with convolution and pooling layers that first break down the input images into features and then analyze them independently. The result of this process feeds into a fully connected neural network structure that drives the final classification decision. In this paper, we propose a Kernelized Dense Layer (KDL) that captures higher-order feature interactions instead of conventional linear relations. We apply this method to Facial Expression Recognition (FER) and evaluate its performance on the RAF, FER2013 and ExpW datasets. The experimental results demonstrate the benefits of such a layer and show that our model achieves competitive results with respect to state-of-the-art approaches.
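The abstract says the KDL replaces the linear map with higher-order feature interactions but does not state which kernel is used; the sketch below is only one plausible instantiation in which each output unit scores the input with a polynomial kernel against a learned weight vector (the kernel choice and sizes are assumptions).

```python
import torch
import torch.nn as nn

class KernelizedDenseLayer(nn.Module):
    """Sketch of a dense layer whose units score inputs with a polynomial kernel
    k(x, w) = (x.w + c)^d instead of the usual linear map (kernel form assumed)."""
    def __init__(self, in_features, out_features, degree=2, c=1.0):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(out_features, in_features) * 0.01)
        self.degree, self.c = degree, c

    def forward(self, x):
        lin = x @ self.weight.t()              # (batch, out_features) inner products
        return (lin + self.c) ** self.degree   # higher-order interactions via the kernel

layer = KernelizedDenseLayer(512, 7)           # e.g. 7 facial-expression classes
logits = layer(torch.randn(4, 512))
print(logits.shape)                            # torch.Size([4, 7])
```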
Crop and weed monitoring is an important challenge for agriculture and food production nowadays. Thanks to recent advances in data acquisition and computation technologies, agriculture is evolving towards smarter, more precise farming to meet the demand for high-yield and high-quality crop production. Classification and recognition in Unmanned Aerial Vehicle (UAV) images are important phases for crop monitoring. Advances in deep learning models relying on Convolutional Neural Networks (CNNs) have achieved high performance in image classification in the agricultural domain. Despite the success of this architecture, CNNs still face many challenges, such as high computation cost and the need for large labelled datasets. The transformer architecture from natural language processing can be an alternative approach to deal with CNNs' limitations. Making use of the self-attention paradigm, Vision Transformer (ViT) models can achieve competitive or better results without applying any convolution operations. In this paper, we adopt the self-attention mechanism via ViT models for plant classification of weeds and crops: red beet, off-type beet (green leaves), parsley and spinach. Our experiments show that, with a small set of labelled training data, ViT models perform better than the state-of-the-art CNN-based models EfficientNet and ResNet, with a top accuracy of 99.8% achieved by a ViT model.
Knee Osteoarthritis (KOA) is a common musculoskeletal disorder that significantly affects the mobility of older adults. In the medical domain, images containing temporal data are frequently utilized to study temporal dynamics and statistically monitor disease progression. While deep learning-based generative models for natural images have been widely researched, there are comparatively few methods available for synthesizing temporal knee X-rays. In this work, we introduce a novel deep-learning model designed to synthesize intermediate X-ray images between a specific patient's healthy knee and severe KOA stages. During the testing phase, based on a healthy knee X-ray, the proposed model can produce a continuous and effective sequence of KOA X-ray images with varying degrees of severity. Specifically, we introduce a Diffusion-based Morphing Model by modifying the Denoising Diffusion Probabilistic Model. Our approach integrates diffusion and morphing modules, enabling the model to capture spatial morphing details between source and target knee X-ray images and synthesize intermediate frames along a geodesic path. A hybrid loss consisting of diffusion loss, morphing loss, and supervision loss was employed. We demonstrate that our proposed approach achieves the highest temporal frame synthesis performance, effectively augmenting data for classification models and simulating the progression of KOA.
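The abstract names a hybrid loss made of diffusion, morphing, and supervision terms but does not give their exact forms or weights; the sketch below only illustrates combining three such terms, with the individual formulations and weights assumed rather than taken from the paper.

```python
import torch
import torch.nn.functional as F

def hybrid_loss(pred_noise, true_noise, warped_source, target, pred_grade, true_grade,
                w_diff=1.0, w_morph=1.0, w_sup=0.1):
    """Illustrative combination of the three terms named in the abstract
    (the actual formulations and weights in the paper may differ)."""
    l_diff = F.mse_loss(pred_noise, true_noise)          # denoising (diffusion) term
    l_morph = F.l1_loss(warped_source, target)           # morphing/registration term
    l_sup = F.cross_entropy(pred_grade, true_grade)      # supervision on severity grade
    return w_diff * l_diff + w_morph * l_morph + w_sup * l_sup

loss = hybrid_loss(torch.randn(2, 1, 64, 64), torch.randn(2, 1, 64, 64),
                   torch.rand(2, 1, 64, 64), torch.rand(2, 1, 64, 64),
                   torch.randn(2, 5), torch.tensor([0, 4]))
print(float(loss))
```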
Mycetoma is a chronic and neglected inflammatory disease prevalent in tropical and subtropical regions. It can lead to severe disability and social stigma. The disease is classified into two types based on the causative microorganisms: eumycetoma (fungal) and actinomycetoma (bacterial). Effective treatment strategies depend on accurately identifying the causative agents. Current identification methods include molecular, cytological, and histopathological techniques, as well as grain culturing. Among these, histopathological techniques are considered optimal for use in endemic areas, but they require expert pathologists for accurate identification, which can be challenging in rural areas lacking such expertise. The advent of digital pathology and automated image analysis algorithms offers a potential solution. This report introduces a novel dataset designed for the automated detection and classification of mycetoma using histopathological images. It includes the first database of microscopic images of mycetoma tissue, detailing the entire pipeline from species distribution and patient sampling to acquisition protocols and histological procedures. The dataset consists of images from 142 patients, totalling 864 images, each annotated with binary masks indicating the presence of grains, facilitating both detection and segmentation tasks.
Knee osteoarthritis (KOA) is a prevalent musculoskeletal disorder, often diagnosed using X-rays due to their cost-effectiveness. While Magnetic Resonance Imaging (MRI) provides superior soft tissue visualization and serves as a valuable supplementary diagnostic tool, its high cost and limited accessibility significantly restrict its widespread use. To explore the possibility of bridging this imaging gap, we conducted a feasibility study leveraging a diffusion-based model that uses an X-ray image as conditional input, alongside target depth and additional patient-specific feature information, to generate corresponding MRI sequences. Our findings demonstrate that the MRI volumes generated by our approach are visually closer to real MRI scans. Moreover, increasing the number of inference steps enhances the continuity and smoothness of the synthesized MRI sequences. Through ablation studies, we further validate that integrating supplementary patient-specific information, beyond what X-rays alone can provide, enhances the accuracy and clinical relevance of the generated MRI, underscoring the potential of leveraging external patient-specific information to improve MRI generation. This study is available at this https URL
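The abstract describes conditioning the diffusion model on an X-ray, a target slice depth, and patient-specific features, but not how these are injected; the sketch below shows one common pattern (encode the X-ray, concatenate depth and features, and project to a conditioning vector for the denoiser), with all module names and sizes assumed.

```python
import torch
import torch.nn as nn

class ConditionEncoder(nn.Module):
    """Sketch: build a conditioning vector from an X-ray, a target slice depth,
    and patient-specific features (architecture and sizes are assumptions)."""
    def __init__(self, feat_dim=16, cond_dim=128):
        super().__init__()
        self.xray_enc = nn.Sequential(nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
                                      nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.proj = nn.Linear(32 + 1 + feat_dim, cond_dim)

    def forward(self, xray, depth, patient_feats):
        z = self.xray_enc(xray)                               # (B, 32) global X-ray code
        cond = torch.cat([z, depth[:, None], patient_feats], dim=1)
        return self.proj(cond)                                # conditioning vector for the denoiser

enc = ConditionEncoder()
c = enc(torch.randn(2, 1, 128, 128), torch.tensor([0.3, 0.7]), torch.randn(2, 16))
print(c.shape)   # torch.Size([2, 128])
```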
Since the late 1970s, successive satellite missions have been monitoring the Sun's activity and recording the total solar irradiance (TSI). Some of these measurements have lasted for more than a decade. In order to obtain a seamless record whose duration exceeds that of the individual instruments, the time series have to be merged. Climate models can be better validated using such long TSI time series, which can also help to provide stronger constraints on past climate reconstructions (e.g., back to the Maunder minimum). We propose a 3-step method based on data fusion, including a stochastic noise model to take into account short- and long-term correlations. Compared with previous products scaled at the nominal TSI value of 1361 W/m2, the difference is below 0.2 W/m2 in terms of solar minima. Next, we model the frequency spectrum of this 41-year TSI composite time series with a Generalized Gauss-Markov model to help describe an observed flattening at high frequencies. This allows us to fit a linear trend to these TSI time series by joint inversion with the stochastic noise model via a maximum-likelihood estimator. Our results show that the amplitude of such a trend is ~ -0.004 +/- 0.004 W/(m2 yr) for the period 1980-2021. These results are compared with the difference of irradiance values estimated from two consecutive solar minima. We conclude that the trend in these composite time series is mostly an artifact due to the colored noise.
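A minimal statement of the kind of model being inverted (my notation, not the paper's): the composite TSI is treated as a linear trend plus colored noise whose covariance follows the Generalized Gauss-Markov process, and the trend is estimated jointly with the noise parameters by maximum likelihood,

$$ y(t_i) = a + b\,t_i + \varepsilon(t_i), \qquad \boldsymbol{\varepsilon} \sim \mathcal{N}\big(\mathbf{0},\, C(\boldsymbol{\theta})\big), $$

$$ (\hat a,\, \hat b,\, \hat{\boldsymbol{\theta}}) = \arg\max_{a,\,b,\,\boldsymbol{\theta}} \; -\tfrac12\Big[\ln\det C(\boldsymbol{\theta}) + (\mathbf{y} - a\mathbf{1} - b\,\mathbf{t})^{\top} C(\boldsymbol{\theta})^{-1} (\mathbf{y} - a\mathbf{1} - b\,\mathbf{t}) + N\ln 2\pi\Big], $$

where C(θ) is the GGM covariance. With colored noise the uncertainty on b is much larger than a white-noise fit would suggest, which is why the small trend quoted above is consistent with zero and interpreted as an artifact of the noise.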
Due to the recent demand for 3-D meshes in a wide range of applications such as video games, medical imaging, film special effects and computer-aided design (CAD), among others, the necessity of implementing 3-D mesh watermarking schemes aiming to protect copyright has increased in the last decade. Nowadays, the majority of robust 3-D watermarking approaches have mainly focused on robustness against attacks, while the imperceptibility of these techniques remains a serious challenge. In this context, a blind robust 3-D mesh watermarking method based on mesh saliency and the scalar Costa scheme (SCS) for copyright protection is proposed. The watermark is embedded by quantizing the vertex norms of the 3-D mesh with the SCS scheme, using the vertex normal norms as synchronizing primitives. The choice of these vertices is based on 3-D mesh saliency to achieve watermark robustness while ensuring high imperceptibility. The experimental results show that, in comparison with alternative methods, the proposed work achieves high imperceptibility while ensuring good robustness against several common attacks, including similarity transformations, noise addition, quantization, smoothing and element reordering.
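The scalar Costa scheme is dithered quantization with a compression (scaling) factor; the abstract gives no parameter values, so the sketch below only shows the standard SCS embedding rule applied to a single vertex norm, with the step size, the factor alpha, and the dither all assumed.

```python
import numpy as np

def scs_embed(x, bit, delta=0.05, alpha=0.7, dither=0.0):
    """Scalar Costa scheme: quantize onto the lattice selected by the bit,
    then move only a fraction alpha of the way toward the quantized value."""
    d = dither + bit * delta / 2.0
    q = np.round((x - d) / delta) * delta + d       # nearest point of the bit's lattice
    return x + alpha * (q - x)                      # partial (Costa) shift toward the lattice

norm = 1.2345                                       # stand-in for one salient vertex norm
for b in (0, 1):
    print(b, scs_embed(norm, b))
```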
Knee Osteoarthritis (KOA) is a prevalent musculoskeletal disorder that severely impacts mobility and quality of life, particularly among older adults. Its diagnosis often relies on subjective assessments using the Kellgren-Lawrence (KL) grading system, leading to variability in clinical evaluations. To address these challenges, we propose a confidence-driven deep learning framework for early KOA detection, focusing on distinguishing KL-0 and KL-2 stages. The Siamese-based framework integrates a novel multi-level feature extraction architecture with a hybrid loss strategy. Specifically, multi-level Global Average Pooling (GAP) layers are employed to extract features from varying network depths, ensuring comprehensive feature representation, while the hybrid loss strategy partitions training samples into high-, medium-, and low-confidence subsets. Tailored loss functions are applied to improve model robustness and effectively handle uncertainty in annotations. Experimental results on the Osteoarthritis Initiative (OAI) dataset demonstrate that the proposed framework achieves competitive accuracy, sensitivity, and specificity, comparable to those of expert radiologists. Cohen's kappa values (κ > 0.85) confirm substantial agreement, while McNemar's test (p > 0.05) indicates no statistically significant differences between the model and radiologists. Additionally, confidence distribution analysis reveals that the model emulates radiologists' decision-making patterns. These findings highlight the potential of the proposed approach to serve as an auxiliary diagnostic tool, enhancing early KOA detection and reducing clinical workload.
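The abstract says training samples are partitioned into high-, medium-, and low-confidence subsets with tailored losses, but gives no thresholds or per-subset formulations; the sketch below only illustrates the pattern, and the thresholds, the use of label smoothing for uncertain samples, and the blending weights are all assumptions.

```python
import torch
import torch.nn.functional as F

def confidence_partitioned_loss(logits, labels, hi=0.9, lo=0.6):
    """Sketch: apply a strict loss to confident samples and a smoothed one to
    uncertain samples (thresholds and per-subset losses are assumptions)."""
    conf = F.softmax(logits, dim=1).max(dim=1).values.detach()
    ce_hard = F.cross_entropy(logits, labels, reduction="none")
    ce_smooth = F.cross_entropy(logits, labels, reduction="none", label_smoothing=0.2)
    loss = torch.where(conf >= hi, ce_hard,
           torch.where(conf >= lo, 0.5 * ce_hard + 0.5 * ce_smooth, ce_smooth))
    return loss.mean()

logits, labels = torch.randn(8, 2, requires_grad=True), torch.randint(0, 2, (8,))
confidence_partitioned_loss(logits, labels).backward()
```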
Knee Osteoarthritis (KOA) is a common musculoskeletal condition that significantly affects mobility and quality of life, particularly in elderly populations. However, training deep learning models for early KOA classification is often hampered by the limited availability of annotated medical datasets, owing to the high costs and labour-intensive nature of data labelling. Traditional data augmentation techniques, while useful, rely on simple transformations and fail to introduce sufficient diversity into the dataset. To address these challenges, we propose the Key-Exchange Convolutional Auto-Encoder (KECAE) as an innovative Artificial Intelligence (AI)-based data augmentation strategy for early KOA classification. Our model employs a convolutional autoencoder with a novel key-exchange mechanism that generates synthetic images by selectively exchanging key pathological features between X-ray images, which not only diversifies the dataset but also ensures the clinical validity of the augmented data. A hybrid loss function is introduced to supervise feature learning and reconstruction, integrating multiple components, including reconstruction, supervision, and feature separation losses. Experimental results demonstrate that the KECAE-generated data significantly improve the performance of KOA classification models, with accuracy gains of up to 1.98% across various standard and state-of-the-art architectures. Furthermore, a clinical validation study involving expert radiologists confirms the anatomical plausibility and diagnostic realism of the synthetic outputs. These findings highlight the potential of KECAE as a robust tool for augmenting medical datasets in early KOA detection.
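The key-exchange mechanism swaps "key" pathological features between two X-rays inside a convolutional autoencoder; which features count as "key" is learned in the paper, so the sketch below simply swaps a fixed slice of latent channels between two encoded images before decoding (the encoder/decoder layout and the channel split are assumptions).

```python
import torch
import torch.nn as nn

enc = nn.Sequential(nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                    nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU())
dec = nn.Sequential(nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
                    nn.ConvTranspose2d(32, 1, 4, stride=2, padding=1))

def key_exchange(xa, xb, key_channels=16):
    """Swap the first `key_channels` latent channels (stand-ins for the learned
    'key' pathological features) between two images, then decode both."""
    za, zb = enc(xa), enc(xb)
    za_new = torch.cat([zb[:, :key_channels], za[:, key_channels:]], dim=1)
    zb_new = torch.cat([za[:, :key_channels], zb[:, key_channels:]], dim=1)
    return dec(za_new), dec(zb_new)

a, b = torch.rand(1, 1, 64, 64), torch.rand(1, 1, 64, 64)
synth_a, synth_b = key_exchange(a, b)
print(synth_a.shape)   # torch.Size([1, 1, 64, 64])
```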
Magnetic Resonance Imaging (MRI) offers critical insights into microstructural details; however, the spatial resolution of standard 1.5T imaging systems is often limited. In contrast, 7T MRI provides significantly enhanced spatial resolution, enabling finer visualization of anatomical structures. Despite this, the high cost and limited availability of 7T MRI hinder its widespread use in clinical settings. To address this challenge, a novel Super-Resolution (SR) model is proposed to generate 7T-like MRI from standard 1.5T MRI scans. Our approach leverages a diffusion-based architecture, incorporating gradient nonlinearity correction and bias field correction data from 7T imaging as guidance. Moreover, to improve deployability, a progressive distillation strategy is introduced. Specifically, the student model refines the 7T SR task in successive steps, leveraging feature maps from the inference phase of the teacher model as guidance, with the aim of allowing the student model to progressively reach 7T SR performance with a smaller, deployable model size. Experimental results demonstrate that our baseline teacher model achieves state-of-the-art SR performance. The student model, while lightweight, sacrifices minimal performance. Furthermore, the student model can accept MRI inputs at varying resolutions without retraining, further enhancing deployment flexibility. The clinical relevance of our proposed method is validated using clinical data from Massachusetts General Hospital. Our code is available at this https URL.
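The abstract says the student is guided by feature maps taken from the teacher's inference phase, but does not give the loss; the sketch below shows one plausible distillation step that matches the teacher's output plus its intermediate feature maps, with the L1/L2 choices and weights assumed.

```python
import torch
import torch.nn.functional as F

def distillation_step_loss(student_out, teacher_out, student_feats, teacher_feats,
                           w_out=1.0, w_feat=0.5):
    """Sketch of one progressive-distillation step: match the teacher's output and
    its intermediate feature maps (weights and loss forms are assumptions)."""
    loss = w_out * F.l1_loss(student_out, teacher_out)
    for fs, ft in zip(student_feats, teacher_feats):      # feature-map guidance terms
        loss = loss + w_feat * F.mse_loss(fs, ft)
    return loss

s_out, t_out = torch.rand(1, 1, 64, 64, requires_grad=True), torch.rand(1, 1, 64, 64)
s_f = [torch.rand(1, 32, 16, 16, requires_grad=True)]
t_f = [torch.rand(1, 32, 16, 16)]
distillation_step_loss(s_out, t_out, s_f, t_f).backward()
```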
Knee OsteoArthritis (KOA) is a widespread musculoskeletal disorder that can severely impact the mobility of older individuals. Insufficient medical data presents a significant obstacle to effectively training models, due to the high cost associated with data labelling. Currently, deep learning-based models extensively utilize data augmentation techniques to improve their generalization ability and alleviate overfitting. However, conventional data augmentation techniques are primarily based on the original data and fail to introduce substantial diversity into the dataset. In this paper, we propose a novel approach based on the Vision Transformer (ViT) model with an original Selective Shuffled Position Embedding (SSPE) and key-patch exchange strategies to obtain different input sequences as a method of data augmentation for early detection of KOA (KL-0 vs KL-2). More specifically, we fix the position embeddings of key patches and shuffle those of non-key patches. Then, for the target image, we randomly select other candidate images from the training set to exchange their key patches and thus obtain different input sequences. Finally, a hybrid loss function is developed by incorporating multiple loss functions for the different types of sequences. According to the experimental results, the generated data are considered valid as they lead to a notable improvement in the model's classification performance.
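The abstract fixes the position embeddings of key patches and shuffles those of non-key patches; the ViT internals and the way key patches are identified are not reproduced here, so the sketch below only shows the selective shuffle itself on a generic embedding table (the patch grid size and the example key indices are assumptions).

```python
import torch

def selective_shuffle_pos_embed(pos_embed, key_idx):
    """Keep position embeddings of key patches fixed and randomly permute the rest.
    pos_embed: (num_patches, dim); key_idx: indices of key (e.g. joint-space) patches."""
    num_patches = pos_embed.shape[0]
    non_key = torch.tensor([i for i in range(num_patches) if i not in set(key_idx)])
    shuffled = pos_embed.clone()
    perm = non_key[torch.randperm(len(non_key))]
    shuffled[non_key] = pos_embed[perm]          # shuffle only non-key positions
    return shuffled

pos = torch.randn(196, 768)                      # 14x14 patches, ViT-Base width
aug = selective_shuffle_pos_embed(pos, key_idx=[90, 91, 104, 105])
print(torch.equal(aug[90], pos[90]))             # True: key-patch embeddings untouched
```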
This paper investigates the usage of kernel functions at different layers of a convolutional neural network. We carry out extensive studies of their impact on convolutional, pooling and fully-connected layers. We observe that a linear kernel may not be sufficiently effective to fit the input data distributions, whereas high-order kernels are prone to over-fitting. This leads us to conclude that a trade-off between complexity and performance should be reached. We show how one can effectively leverage kernel functions by introducing more distortion-aware pooling layers, which reduce over-fitting while keeping track of the majority of the information fed into subsequent layers. We further propose Kernelized Dense Layers (KDL), which replace fully-connected layers and capture higher-order feature interactions. Experiments on conventional classification datasets, i.e. MNIST, FASHION-MNIST and CIFAR-10, show that the proposed techniques improve the performance of the network compared to classical convolution, pooling and fully connected layers. Moreover, experiments on fine-grained classification, i.e. the facial expression databases RAF-DB, FER2013 and ExpW, demonstrate that the discriminative power of the network is boosted, since the proposed techniques improve awareness of subtle visual details and allow the network to reach state-of-the-art results.
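The abstract mentions distortion-aware pooling that keeps most of the information in each window, but does not define the operator; the sketch below shows one plausible reading, a softmax-weighted (kernel-weighted) window average that is softer than max pooling, and is an illustration rather than the paper's definition.

```python
import torch
import torch.nn.functional as F

def kernel_weighted_pool2d(x, kernel_size=2, temperature=1.0):
    """Sketch of a softer alternative to max pooling: each window is reduced by a
    softmax-weighted average of its activations, retaining information from all
    entries (one plausible reading of 'distortion-aware' pooling, assumed here)."""
    patches = F.unfold(x, kernel_size, stride=kernel_size)          # (B, C*k*k, L)
    B, C = x.shape[:2]
    patches = patches.view(B, C, kernel_size * kernel_size, -1)
    weights = torch.softmax(patches / temperature, dim=2)           # kernel weights per window
    pooled = (weights * patches).sum(dim=2)                         # (B, C, L)
    out_h, out_w = x.shape[2] // kernel_size, x.shape[3] // kernel_size
    return pooled.view(B, C, out_h, out_w)

y = kernel_weighted_pool2d(torch.randn(1, 3, 8, 8))
print(y.shape)   # torch.Size([1, 3, 4, 4])
```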
Eye-tracking analysis plays a vital role in medical imaging, providing key insights into how radiologists visually interpret and diagnose clinical cases. In this work, we first analyze radiologists' attention and agreement by measuring the distribution of various eye-movement patterns, including saccade direction, amplitude, and their joint distribution. These metrics help uncover patterns in attention allocation and diagnostic strategies. Furthermore, we investigate whether and how doctors' gaze behavior shifts when viewing authentic (Real) versus deep-learning-generated (Fake) images. To achieve this, we examine fixation bias maps, focusing on the first, last, shortest, and longest fixations independently, along with detailed saccade patterns, to quantify differences in gaze distribution and visual saliency between authentic and synthetic images.
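A saccade can be approximated as the displacement between consecutive fixations; the sketch below computes saccade amplitudes and a direction histogram from a fixation sequence. The fixation format and the histogram binning are assumptions, not the paper's pipeline.

```python
import numpy as np

def saccade_stats(fixations, n_direction_bins=8):
    """Compute saccade amplitudes and a direction histogram from a sequence of
    fixation coordinates (x, y). Amplitude is in the same units as the input."""
    fix = np.asarray(fixations, dtype=float)
    d = np.diff(fix, axis=0)                              # displacement between fixations
    amplitude = np.hypot(d[:, 0], d[:, 1])
    direction = np.arctan2(d[:, 1], d[:, 0])              # radians in (-pi, pi]
    bins = np.linspace(-np.pi, np.pi, n_direction_bins + 1)
    dir_hist, _ = np.histogram(direction, bins=bins)
    return amplitude, dir_hist

fixations = [(100, 200), (150, 210), (400, 180), (390, 300)]   # toy gaze path in pixels
amp, hist = saccade_stats(fixations)
print(amp.round(1), hist)
```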
Deep learning models have great potential in medical imaging, including orthodontics and skeletal maturity assessment. However, applying a model to data different from its training set can lead to unreliable predictions that may impact patient care. To address this, we propose a comprehensive verification framework that evaluates model suitability through multiple complementary strategies. First, we introduce a Gradient Attention Map (GAM)-based approach that analyzes attention patterns using Grad-CAM and compares them via similarity metrics such as IoU, Dice Similarity, SSIM, Cosine Similarity, Pearson Correlation, KL Divergence, and Wasserstein Distance. Second, we extend verification to early convolutional feature maps, capturing structural mis-alignments missed by attention alone. Finally, we incorporate an additional garbage class into the classification model to explicitly reject out-of-distribution inputs. Experimental results demonstrate that these combined methods effectively identify unsuitable models and inputs, promoting safer and more reliable deployment of deep learning in medical imaging.
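Among the similarity metrics listed above, a few can be computed directly between two attention maps once they are normalized; the sketch below compares two maps with IoU, Dice, cosine similarity, and Pearson correlation (the Grad-CAM extraction itself is omitted, and the binarization threshold is an assumption).

```python
import numpy as np

def attention_map_agreement(map_a, map_b, thresh=0.5):
    """Compare two normalized attention maps with a few of the metrics named above;
    the binarization threshold used for IoU/Dice is an assumption."""
    a, b = map_a.ravel(), map_b.ravel()
    ba, bb = a >= thresh, b >= thresh
    inter, union = np.logical_and(ba, bb).sum(), np.logical_or(ba, bb).sum()
    return {
        "iou": inter / union if union else 1.0,
        "dice": 2 * inter / (ba.sum() + bb.sum()) if (ba.sum() + bb.sum()) else 1.0,
        "cosine": float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)),
        "pearson": float(np.corrcoef(a, b)[0, 1]),
    }

cam_reference = np.random.rand(14, 14)       # stand-ins for Grad-CAM maps
cam_new_input = np.random.rand(14, 14)
print(attention_map_agreement(cam_reference, cam_new_input))
```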
We introduce a novel deep learning framework for the automated staging of spheno-occipital synchondrosis (SOS) fusion, a critical diagnostic marker in both orthodontics and forensic anthropology. Our approach leverages a dual-model architecture wherein a teacher model, trained on manually cropped images, transfers its precise spatial understanding to a student model that operates on full, uncropped images. This knowledge distillation is facilitated by a newly formulated loss function that aligns spatial logits and incorporates gradient-based spatial attention mapping, ensuring that the student model internalizes the anatomically relevant features without relying on external cropping or YOLO-based segmentation. By leveraging expert-curated data and feedback at each step, our framework attains robust diagnostic accuracy, culminating in a clinically viable end-to-end pipeline. This streamlined approach obviates the need for additional pre-processing tools and accelerates deployment, thereby enhancing both the efficiency and consistency of skeletal maturation assessment in diverse clinical settings.
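The exact distillation loss is not given in the abstract, so the sketch below only illustrates the general pattern it describes: a softened alignment of the two models' logits plus an alignment of gradient-based attention maps, with the temperature, weights, map form, and the number of fusion stages all assumed.

```python
import torch
import torch.nn.functional as F

def sos_distillation_loss(student_logits, teacher_logits, student_attn, teacher_attn,
                          tau=2.0, w_attn=1.0):
    """Sketch: soften and align classification logits (standard KD) and additionally
    align gradient-based attention maps (temperature and weights are assumptions)."""
    kd = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                  F.softmax(teacher_logits / tau, dim=1),
                  reduction="batchmean") * tau * tau
    attn = F.mse_loss(student_attn, teacher_attn)
    return kd + w_attn * attn

s_log = torch.randn(4, 5, requires_grad=True)      # 5 fusion-stage classes (assumed)
t_log = torch.randn(4, 5)
s_map = torch.rand(4, 1, 14, 14, requires_grad=True)
t_map = torch.rand(4, 1, 14, 14)
sos_distillation_loss(s_log, t_log, s_map, t_map).backward()
```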