Efficient automatic segmentation of multi-level (i.e., main and branch) pulmonary arteries (PA) in CTPA images plays a significant role in clinical applications. However, most existing methods concentrate on either main PA or branch PA segmentation separately and ignore segmentation efficiency. In addition, there is no public large-scale dataset focused on PA segmentation, which makes it highly challenging to compare different methods. To benchmark multi-level PA segmentation algorithms, we organized the first \textbf{P}ulmonary \textbf{AR}tery \textbf{SE}gmentation (PARSE) challenge. On the one hand, we focus on both main PA and branch PA segmentation. On the other hand, for better clinical applicability, we assign segmentation efficiency (mainly running time and GPU memory consumption during inference) the same score weight as PA segmentation accuracy. We present a summary of the top algorithms and offer some suggestions for efficient and accurate multi-level automatic PA segmentation. We provide the PARSE challenge as open access for the community to benchmark future algorithm developments at \url{this https URL}.
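The accuracy side of segmentation benchmarks like this is typically scored with an overlap metric such as the Dice coefficient. A minimal sketch of that metric on binary masks (the exact PARSE scoring protocol is not detailed in the abstract, so treat this as illustrative):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice overlap between two binary masks (1.0 = perfect agreement)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

# Toy 2D example standing in for a 3D PA mask
gt = np.zeros((4, 4), dtype=bool)
gt[1:3, 1:3] = True            # 4 "artery" voxels
pred = np.zeros_like(gt)
pred[1:3, 1:4] = True          # 6 predicted voxels, 4 overlapping
print(dice_coefficient(pred, gt))  # 2*4 / (6+4) = 0.8
```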
Cardiac magnetic resonance imaging (MRI) has emerged as a clinical gold-standard technique for diagnosing cardiac diseases, thanks to its ability to provide diverse information with multiple modalities and anatomical views. Accelerated cardiac MRI is highly desirable for time-efficient and patient-friendly imaging, and advanced image reconstruction approaches are required to recover high-quality, clinically interpretable images from undersampled measurements. However, the lack of publicly available cardiac MRI k-space datasets, in terms of both quantity and diversity, has severely hindered substantial technological progress, particularly for data-driven artificial intelligence. Here, we provide a standardized, diverse, and high-quality CMRxRecon2024 dataset to facilitate the technical development, fair evaluation, and clinical transfer of cardiac MRI reconstruction approaches, towards promoting universal frameworks that enable fast and robust reconstruction across different cardiac MRI protocols in clinical practice. To the best of our knowledge, the CMRxRecon2024 dataset is the largest and most protocol-diverse publicly available cardiac k-space dataset. It was acquired from 330 healthy volunteers, covering commonly used modalities, anatomical views, and acquisition trajectories in clinical cardiac MRI workflows. In addition, an open platform with tutorials, benchmarks, and data processing tools is provided to facilitate data usage, advanced method development, and fair performance evaluation.
In this document, we explore in more detail our published work (Komorowski, Celi, Badawi, Gordon, & Faisal, 2018) for the benefit of the AI in Healthcare research community. In that paper, we developed the AI Clinician system, which demonstrated how reinforcement learning could be used to make useful recommendations towards optimal treatment decisions from intensive care data. Since publication, a number of authors have reviewed our work (e.g. Abbasi, 2018; Bos, Azoulay, & Martin-Loeches, 2019; Saria, 2018). Given the differences between our framework and previous work, the fact that we are bridging two very different academic communities (intensive care and machine learning), and that our work has an impact on a number of other areas with more traditional computer-based approaches (biosignal processing and control, biomedical engineering), we provide here additional details on our recent publication.
Three-dimensional (3D) and dynamic 3D+time (4D) reconstruction of coronary arteries from X-ray coronary angiography (CA) has the potential to improve clinical procedures. However, multiple challenges must be addressed, most notably blood-vessel structure sparsity, poor distinction between background and blood vessels, sparse views, and intra-scan motion. State-of-the-art reconstruction approaches rely on time-consuming manual or error-prone automatic segmentations, limiting clinical usability. Recently, approaches based on Neural Radiance Fields (NeRF) have shown promise for automatic reconstruction in the sparse-view setting. However, they suffer from long training times due to their dependence on MLP-based representations. We propose NerT-CA, a hybrid of neural and tensorial representations for accelerated 4D reconstruction with sparse-view CA. Building on previous NeRF-based work, we model the CA scene as a decomposition of low-rank and sparse components, utilizing fast tensorial fields for low-rank static reconstruction and neural fields for dynamic sparse reconstruction. Our approach outperforms previous works in both training time and reconstruction accuracy, yielding reasonable reconstructions from as few as three angiogram views. We validate our approach quantitatively and qualitatively on representative 4D phantom datasets.
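The low-rank-plus-sparse modelling idea can be illustrated outside the neural setting: a simple alternating scheme splits a matrix of stacked frames into a low-rank (near-static) part and a sparse (dynamic) residual. This is only a toy sketch of the decomposition principle, not the NerT-CA method; the function name, rank, and threshold are invented for illustration:

```python
import numpy as np

def lowrank_sparse_split(X, rank=1, thresh=2.0, iters=20):
    """Alternate a truncated SVD (low-rank, ~static background) with
    hard-thresholding of the residual (sparse, ~moving contrast agent)."""
    S = np.zeros_like(X)
    for _ in range(iters):
        U, sv, Vt = np.linalg.svd(X - S, full_matrices=False)
        L = (U[:, :rank] * sv[:rank]) @ Vt[:rank]   # best rank-r fit of X - S
        R = X - L
        S = np.where(np.abs(R) > thresh, R, 0.0)    # keep only large residuals
    return L, S

rng = np.random.default_rng(1)
background = np.outer(np.ones(12), rng.uniform(2.0, 4.0, 12))  # rank-1 "static" part
spikes = np.zeros((12, 12))
spikes[2, 3] = spikes[7, 8] = spikes[10, 1] = 10.0             # sparse "dynamic" part
X = background + spikes
L, S = lowrank_sparse_split(X)
```

By construction, every entry of `X - L - S` is at most the threshold, and the large isolated spikes end up in the sparse component.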
Purpose: To propose and evaluate an accelerated $T_{1\rho}$ quantification method that combines $T_{1\rho}$-weighted fast spin echo (FSE) images and proton density (PD)-weighted anatomical FSE images, leveraging deep learning models for $T_{1\rho}$ mapping. The goal is to reduce scan time and facilitate integration into routine clinical workflows for osteoarthritis (OA) assessment. Methods: This retrospective study utilized MRI data from 40 participants (30 OA patients and 10 healthy volunteers). A volume of PD-weighted anatomical FSE images and a volume of $T_{1\rho}$-weighted images acquired at a non-zero spin-lock time were used as input to train deep learning models, including a 2D U-Net and a multi-layer perceptron (MLP). $T_{1\rho}$ maps generated by these models were compared with ground truth maps derived from a traditional non-linear least squares (NLLS) fitting method using four $T_{1\rho}$-weighted images. Evaluation metrics included mean absolute error (MAE), mean absolute percentage error (MAPE), regional error (RE), and regional percentage error (RPE). Results: Deep learning models achieved RPEs below 5% across all evaluated scenarios, outperforming NLLS methods, especially in low signal-to-noise conditions. The best results were obtained using the 2D U-Net, which effectively leveraged spatial information for accurate $T_{1\rho}$ fitting. The proposed method demonstrated compatibility with shorter TSLs, alleviating RF hardware and specific absorption rate (SAR) limitations. Conclusion: The proposed approach enables efficient $T_{1\rho}$ mapping using PD-weighted anatomical images, reducing scan time while maintaining clinical standards. This method has the potential to facilitate the integration of quantitative MRI techniques into routine clinical practice, benefiting OA diagnosis and monitoring.
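For context, the reference NLLS approach fits a mono-exponential spin-lock decay, $S(\mathrm{TSL}) = S_0 \exp(-\mathrm{TSL}/T_{1\rho})$, to the $T_{1\rho}$-weighted images voxel by voxel. A minimal per-voxel sketch using SciPy; the spin-lock times and tissue values below are illustrative, not taken from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def t1rho_model(tsl, s0, t1rho):
    """Mono-exponential spin-lock decay: S(TSL) = S0 * exp(-TSL / T1rho)."""
    return s0 * np.exp(-tsl / t1rho)

tsl = np.array([2.0, 10.0, 30.0, 50.0])        # spin-lock times in ms (illustrative)
true_s0, true_t1rho = 100.0, 40.0              # hypothetical voxel values
signal = t1rho_model(tsl, true_s0, true_t1rho)

popt, _ = curve_fit(t1rho_model, tsl, signal, p0=[90.0, 30.0])
print(popt[1])  # recovered T1rho, ~40 ms on this noise-free example
```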
Deep learning models have achieved state-of-the-art performance in automated Cardiac Magnetic Resonance (CMR) analysis. However, the efficacy of these models is highly dependent on the availability of high-quality, artifact-free images. In clinical practice, CMR acquisitions are frequently degraded by respiratory motion, yet the robustness of deep learning models against such artifacts remains an underexplored problem. To promote research in this domain, we organized the MICCAI CMRxMotion challenge. We curated and publicly released a dataset of 320 CMR cine series from 40 healthy volunteers who performed specific breathing protocols to induce a controlled spectrum of motion artifacts. The challenge comprised two tasks: 1) automated image quality assessment to classify images based on motion severity, and 2) robust myocardial segmentation in the presence of motion artifacts. A total of 22 algorithms were submitted and evaluated on the two designated tasks. This paper presents a comprehensive overview of the challenge design and dataset, reports the evaluation results for the top-performing methods, and further investigates the impact of motion artifacts on five clinically relevant biomarkers. All resources and code are publicly available at: this https URL
Augmenting X-ray imaging with a 3D roadmap to improve guidance is a common strategy. Such approaches benefit from automated analysis of the X-ray images, such as automatic detection and tracking of instruments. In this paper, we propose a real-time method to segment the catheter and guidewire in 2D X-ray fluoroscopic sequences. The method is based on deep convolutional neural networks. The network takes as input the current image and the three previous ones, and segments the catheter and guidewire in the current image. Subsequently, a centerline model of the catheter is constructed from the segmented image. A small set of annotated data combined with data augmentation is used to train the network. We trained the method on images from 182 X-ray sequences from 23 different interventions. On a testing set with images of 55 X-ray sequences from 5 other interventions, a median centerline distance error of 0.2 mm and a median tip distance error of 0.9 mm were obtained. The segmentation of the instruments in 2D X-ray sequences is performed in a real-time, fully automatic manner.
Purpose: To address the systematic bias in whole-brain dual flip angle (DFA) T1-mapping at 7T by optimizing the flip angle pair and carefully selecting RF pulse shape and duration. Theory and Methods: Spoiled gradient echoes can be used to estimate whole-brain maps of T1. This can be accomplished using only two acquisitions with different flip angles, i.e., a DFA-based approach. Although DFA-based T1-mapping is seemingly straightforward to implement, it is sensitive to bias caused by incomplete spoiling and incidental magnetization transfer (MT) effects. Further bias is introduced by the increased B0 and B1+ inhomogeneities at 7T. Experiments were performed to determine the optimal flip angle pair and an appropriate RF pulse shape and duration. The obtained T1 estimates were validated using inversion-recovery-prepared EPI and compared to literature values. A multi-echo readout was used to increase SNR, enabling quantification of R2* and susceptibility, χ. Results: Incomplete spoiling was observed above a local flip angle of approximately 20 degrees. An asymmetric Gauss-filtered sinc pulse with a constant duration of 700 µs showed a sufficiently flat frequency-response profile to avoid incomplete excitation in areas with high B0 offsets. A pulse duration of 700 µs minimized effects from incidental MT. Conclusion: When performing DFA-based T1-mapping one should (i) limit the higher flip angle to avoid incomplete spoiling, (ii) use an RF pulse shape insensitive to B0 inhomogeneities, and (iii) apply a constant RF pulse duration, balanced to minimize incidental MT.
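The underlying DFA estimate follows from the spoiled gradient-echo (Ernst) signal equation: writing $y = S/\sin\alpha$ and $x = S/\tan\alpha$ turns it into a straight line with slope $E_1 = \exp(-TR/T_1)$, so two flip angles suffice for a closed-form $T_1$. A noise-free sketch with illustrative parameter values (not those optimized in the paper):

```python
import numpy as np

def spgr_signal(m0, t1, tr, alpha):
    """Spoiled gradient-echo steady-state signal (Ernst equation)."""
    e1 = np.exp(-tr / t1)
    return m0 * np.sin(alpha) * (1 - e1) / (1 - e1 * np.cos(alpha))

def dfa_t1(s1, s2, alpha1, alpha2, tr):
    """Closed-form T1 from two flip angles via the linearized Ernst equation:
    S/sin(a) = E1 * S/tan(a) + M0*(1 - E1), so the slope between the two
    measurements gives E1 and hence T1 = -TR / ln(E1)."""
    x1, y1 = s1 / np.tan(alpha1), s1 / np.sin(alpha1)
    x2, y2 = s2 / np.tan(alpha2), s2 / np.sin(alpha2)
    e1 = (y2 - y1) / (x2 - x1)
    return -tr / np.log(e1)

tr, t1_true = 18.0, 1500.0                 # ms; illustrative 7T-like values
a1, a2 = np.deg2rad(4.0), np.deg2rad(16.0)
s1 = spgr_signal(1.0, t1_true, tr, a1)
s2 = spgr_signal(1.0, t1_true, tr, a2)
t1_est = dfa_t1(s1, s2, a1, a2, tr)
print(t1_est)  # ~1500 ms on this noise-free example
```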
Hierarchically decomposed component-based system development reduces design complexity by supporting distribution of work and component reuse. For product line development, the variability of the components to be deployed in different products has to be represented by appropriate means. In this paper, we propose hierarchical variability modeling which allows specifying component variability integrated with the component hierarchy and locally to the components. Components can contain variation points determining where components may vary. Associated variants define how this variability can be realized in different component configurations. We present a meta model for hierarchical variability modeling to formalize the conceptual ideas. In order to obtain an implementation of the proposed approach together with tool support, we extend the existing architectural description language MontiArc with hierarchical variability modeling. We illustrate the presented approach using an example from the automotive systems domain.
The development of new technology, such as wearables that record high-quality single-channel ECG, provides an opportunity for ECG screening in a larger population, especially for atrial fibrillation screening. The main goal of this study is to develop an automatic classification algorithm for normal sinus rhythm (NSR), atrial fibrillation (AF), other rhythms (O), and noise from a single-channel short ECG segment (9-60 seconds). For this purpose, a signal quality index (SQI) along with dense convolutional neural networks was used. Two convolutional neural network (CNN) models (a main model that accepts 15-second ECG segments and a secondary model that processes shorter 9-second segments) were trained using the training data set. If a recording is determined to be of low quality by the SQI, it is immediately classified as noisy. Otherwise, it is transformed to a time-frequency representation and classified with the CNN as NSR, AF, O, or noise. In the final step, a feature-based post-processing algorithm classifies the rhythm as either NSR or O in case the CNN model's discrimination between the two is indeterminate. The best result achieved in the official phase of the PhysioNet/CinC challenge on the blind test set was 0.80 (F1 for NSR, AF, and O were 0.90, 0.80, and 0.70, respectively).
Functional magnetic resonance imaging (fMRI) has become instrumental in researching brain function. One application of fMRI is investigating potential neural features that distinguish people with autism spectrum disorder (ASD) from healthy controls. The Autism Brain Imaging Data Exchange (ABIDE) facilitates this research through its extensive data-sharing initiative. While ABIDE offers data preprocessed with various atlases, independent component analysis (ICA) for dimensionality reduction remains underutilized. We address this gap by presenting ICA-based resting-state networks (RSNs) from preprocessed scans from ABIDE, now publicly available: this https URL. These RSNs unveil neural activation clusters without atlas constraints, offering a perspective on ASD analyses that complements the predominantly atlas-based literature. This contribution provides a valuable resource for further research into ASD, potentially aiding in developing new analytical approaches.
Purpose: Tracer-kinetic models can be used for the quantitative assessment of contrast-enhanced MRI data. However, the model fitting can produce unreliable results due to the limited data acquired and the high noise levels. Such problems are especially prevalent in myocardial perfusion MRI; as a compromise, constrained numerical deconvolution and segmental signal averaging are commonly used as alternatives to the more complex tracer-kinetic models. Methods: In this work, the use of hierarchical Bayesian inference for the parameter estimation is explored. It is shown that with Bayesian inference it is possible to reliably fit the two-compartment exchange model to perfusion data. The use of prior knowledge on the ranges of the kinetic parameters and on the fact that neighbouring voxels are likely to have similar kinetic properties, combined with a Markov chain Monte Carlo based fitting procedure, significantly improves the reliability of the perfusion estimates compared to the traditional least-squares approach. The method is assessed using both simulated and patient data. Results: The average (standard deviation) normalised mean square error across the distinct noise realisations of a simulation phantom falls from 0.32 (0.55) with the least-squares fitting to 0.13 (0.2) using Bayesian inference. The assessment of the presence of coronary artery disease based purely on the quantitative MBF maps obtained using Bayesian inference matches the visual assessment in all 24 slices. When using the maps obtained by the least-squares fitting, a corresponding assessment is only achieved in 16/24 slices. Conclusion: Bayesian inference allows a reliable, fully automated and user-independent assessment of myocardial perfusion on a voxel-wise level using the two-compartment exchange model.
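The Bayesian machinery can be illustrated on a deliberately simplified one-parameter model: a random-walk Metropolis sampler combines a Gaussian likelihood with a prior on the kinetic parameter. This is a toy stand-in for the two-compartment exchange model and the spatial (neighbouring-voxel) priors used in the paper; all model choices and numbers below are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy forward model standing in for a tracer-kinetic model:
# one "kinetic" parameter k governs an exponential washout curve.
t = np.linspace(0, 5, 30)
k_true = 1.2
data = np.exp(-k_true * t) + rng.normal(0, 0.05, t.size)

def log_posterior(k, sigma=0.05):
    if k <= 0:                                  # positivity as a hard prior
        return -np.inf
    resid = data - np.exp(-k * t)
    log_lik = -0.5 * np.sum(resid**2) / sigma**2
    log_prior = -0.5 * ((k - 1.0) / 0.5) ** 2   # weakly informative prior k ~ N(1, 0.5)
    return log_lik + log_prior

# Random-walk Metropolis-Hastings sampling of the posterior over k
k, lp, samples = 1.0, log_posterior(1.0), []
for _ in range(5000):
    k_prop = k + rng.normal(0, 0.1)
    lp_prop = log_posterior(k_prop)
    if np.log(rng.uniform()) < lp_prop - lp:    # accept/reject step
        k, lp = k_prop, lp_prop
    samples.append(k)

posterior_mean = np.mean(samples[1000:])        # discard burn-in
print(posterior_mean)  # close to the true value 1.2
```

The posterior mean stays close to the ground truth even with noisy data, which is the behaviour the paper exploits at much larger scale.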
Cerebral X-ray digital subtraction angiography (DSA) is a widely used imaging technique in patients with neurovascular disease, allowing for vessel and flow visualization with high spatio-temporal resolution. Automatic artery-vein segmentation in DSA plays a fundamental role in vascular analysis with quantitative biomarker extraction, facilitating a wide range of clinical applications. The widely adopted U-Net applied on static DSA frames often struggles with disentangling vessels from subtraction artifacts. Further, it falls short in effectively separating arteries and veins as it disregards the temporal perspectives inherent in DSA. To address these limitations, we propose to simultaneously leverage spatial vasculature and temporal cerebral flow characteristics to segment arteries and veins in DSA. The proposed network, coined CAVE, encodes a 2D+time DSA series using spatial modules, aggregates all the features using temporal modules, and decodes it into 2D segmentation maps. On a large multi-center clinical dataset, CAVE achieves a vessel segmentation Dice of 0.84 (±0.04) and an artery-vein segmentation Dice of 0.79 (±0.06). CAVE surpasses traditional Frangi-based K-means clustering (P<0.001) and U-Net (P<0.001) by a significant margin, demonstrating the advantages of harvesting spatio-temporal features. This study represents the first investigation into automatic artery-vein segmentation in DSA using deep learning. The code is publicly available at this https URL
The quantification of myocardial perfusion MRI has the potential to provide a fast, automated and user-independent assessment of myocardial ischaemia. However, due to the relatively high noise level and low temporal resolution of the acquired data and the complexity of the tracer-kinetic models, the model fitting can yield unreliable parameter estimates. A solution to this problem is the use of Bayesian inference, which can incorporate prior knowledge and improve the reliability of the parameter estimation. This, however, uses Markov chain Monte Carlo sampling to approximate the posterior distribution of the kinetic parameters, which is extremely time-intensive. This work proposes training convolutional networks to directly predict the kinetic parameters from the signal-intensity curves, using estimates obtained from the Bayesian inference as training targets. This allows fast estimation of the kinetic parameters with performance similar to the Bayesian inference.
The Thrombolysis in Cerebral Infarction (TICI) score is an important metric for reperfusion therapy assessment in acute ischemic stroke. It is commonly used as a technical outcome measure after endovascular treatment (EVT). Existing TICI scores are defined in coarse ordinal grades based on visual inspection, leading to inter- and intra-observer variation. In this work, we present autoTICI, an automatic and quantitative TICI scoring method. First, each digital subtraction angiography (DSA) acquisition is separated into four phases (non-contrast, arterial, parenchymal and venous phase) using a multi-path convolutional neural network (CNN), which exploits spatio-temporal features. The network also incorporates sequence level label dependencies in the form of a state-transition matrix. Next, a minimum intensity map (MINIP) is computed using the motion corrected arterial and parenchymal frames. On the MINIP image, vessel, perfusion and background pixels are segmented. Finally, we quantify the autoTICI score as the ratio of reperfused pixels after EVT. On a routinely acquired multi-center dataset, the proposed autoTICI shows good correlation with the extended TICI (eTICI) reference with an average area under the curve (AUC) score of 0.81. The AUC score is 0.90 with respect to the dichotomized eTICI. In terms of clinical outcome prediction, we demonstrate that autoTICI is overall comparable to eTICI.
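The final autoTICI scoring step reduces to a pixel-counting ratio on the segmented MINIP image. A toy sketch of such a ratio on boolean masks; the exact mask definitions in autoTICI are more involved, and the function and mask names here are invented for illustration:

```python
import numpy as np

def reperfusion_ratio(perfused_mask: np.ndarray, territory_mask: np.ndarray) -> float:
    """Fraction of a downstream vascular territory that is perfused,
    given boolean pixel masks (illustrative stand-in for autoTICI scoring)."""
    territory = territory_mask.sum()
    if territory == 0:
        return 0.0
    return (perfused_mask & territory_mask).sum() / territory

# Toy masks: a 5x8 territory of which half is perfused after EVT
territory = np.zeros((10, 10), dtype=bool)
territory[2:7, 2:10] = True        # 40 territory pixels
perfused = np.zeros_like(territory)
perfused[2:7, 2:6] = True          # 20 of them perfused
print(reperfusion_ratio(perfused, territory))  # 0.5
```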
Machine learning is a modern approach to problem-solving and task automation. In particular, machine learning is concerned with the development and applications of algorithms that can recognize patterns in data and use them for predictive modeling. Artificial neural networks are a particular class of machine learning algorithms and models that evolved into what is now described as deep learning. Given the computational advances made in the last decade, deep learning can now be applied to massive data sets and in innumerable contexts. Therefore, deep learning has become its own subfield of machine learning. In the context of biological research, it has been increasingly used to derive novel insights from high-dimensional biological data. To make the biological applications of deep learning more accessible to scientists who have some experience with machine learning, we solicited input from a community of researchers with varied biological and deep learning interests. These individuals collaboratively contributed to this manuscript's writing using the GitHub version control platform and the Manubot manuscript generation toolset. The goal was to articulate a practical, accessible, and concise set of guidelines and suggestions to follow when using deep learning. In the course of our discussions, several themes became clear: the importance of understanding and applying machine learning fundamentals as a baseline for utilizing deep learning, the necessity for extensive model comparisons with careful evaluation, and the need for critical thought in interpreting results generated by deep learning, among others.
The development of new x-ray imaging techniques often requires prior knowledge of tissue attenuation, but the sources of such information are sparse. We have measured the attenuation of adipose breast tissue using spectral imaging, in vitro and in vivo. For the in-vitro measurement, fixed samples of adipose breast tissue were imaged on a spectral mammography system, and the energy-dependent x-ray attenuation was measured in terms of equivalent thicknesses of aluminum and poly-methyl methacrylate (PMMA). For the in-vivo measurement, a similar procedure was applied on a number of spectral screening mammograms. The results of the two measurements agreed well and were consistent with published attenuation data and with measurements on tissue-equivalent material.
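Expressing measured attenuation as equivalent thicknesses of two basis materials amounts to solving a small linear system: the log-attenuation in each energy bin is a linear combination of the aluminum and PMMA thicknesses. A sketch with hypothetical attenuation coefficients (not the measured values from the study):

```python
import numpy as np

# Hypothetical linear attenuation coefficients (1/cm) at two energy bins:
# rows = energy bins, columns = [Al, PMMA]
mu = np.array([[1.20, 0.55],    # low-energy bin
               [0.70, 0.40]])   # high-energy bin

t_true = np.array([0.3, 4.0])   # cm of Al and PMMA (illustrative tissue equivalent)
log_atten = mu @ t_true         # measured -ln(I/I0) in each bin

# Material decomposition: invert the 2x2 system for the two thicknesses
t_est = np.linalg.solve(mu, log_atten)
print(t_est)  # recovers [0.3, 4.0]
```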
Phase-contrast imaging is an emerging technology that may increase the signal-difference-to-noise ratio in medical imaging. One of the most promising phase-contrast techniques is Talbot interferometry, which, combined with energy-sensitive photon-counting detectors, enables spectral differential phase-contrast mammography. We have evaluated a realistic system based on this technique by cascaded-systems analysis and with a task-dependent ideal-observer detectability index as a figure of merit. Beam-propagation simulations were used for validation and illustration of the analytical framework. Differential phase contrast improved detectability compared to absorption contrast, in particular for fine tumor structures. This result was supported by images of human mastectomy samples that were acquired with a conventional detector. The optimal incident energy was higher in differential phase contrast than in absorption contrast when disregarding the setup design energy. Further, optimal weighting of the transmitted spectrum was found to have a weaker energy dependence than for absorption contrast. Taking the design energy into account superimposed a maximum on both the detectability as a function of incident energy and the optimal weighting. Spectral material decomposition was not facilitated by phase contrast, but phase information may be used instead of spectral information.
Purpose: To implement and evaluate a sequential approach to obtain semi-quantitative T1-weighted MPRAGE images, unbiased by B1 inhomogeneities at 7T. Methods: In the reference gradient echo (GE) used for normalization of the MPRAGE image, the flip angle ($\alpha_{GE}$) and acquisition voxel size (Vref) were varied to optimize tissue contrast and acquisition time (Tacq). The finalized protocol was implemented at three different resolutions and its reproducibility was evaluated. Maps of T1 were derived from the normalized MPRAGE through forward signal modelling. Results: A good compromise between tissue contrast and SNR was reached at $\alpha_{GE}=3^\circ$. A reduction of the reference GE Tacq by a factor of 4, at the cost of negligible bias, was obtained by increasing Vref by a factor of 8 relative to the MPRAGE resolution. The coefficient of variation in segmented WM was 9±5% after normalization, compared to 24±12% before. The T1 maps showed no obvious bias and had reasonable values with regard to the literature, especially after optional B1 correction through separate flip angle mapping. Conclusion: A non-interleaved acquisition for normalization of MPRAGE offers a simple alternative to MP2RAGE for obtaining semi-quantitative, purely T1-weighted images. These images can be converted to T1 maps analogously to the established MP2RAGE approach. Scan time can be reduced by increasing Vref, which has a minuscule effect on image quality.
This paper proposes a two-dimensional (2D) bidirectional long short-term memory generative adversarial network (GAN) to produce synthetic standard 12-lead ECGs corresponding to four types of signals: left ventricular hypertrophy (LVH), left bundle branch block (LBBB), acute myocardial infarction (ACUTMI), and Normal. It uses a fully automatic end-to-end process to generate and verify the synthetic ECGs that does not require any visual inspection. The proposed model is able to produce synthetic standard 12-lead ECG signals with success rates of 98% for LVH, 93% for LBBB, 79% for ACUTMI, and 59% for Normal. Statistical evaluation of the data confirms that the synthetic ECGs are not biased towards or overfitted to the training ECGs, and span a wide range of morphological features. This study demonstrates that it is feasible to use a 2D GAN to produce standard 12-lead ECGs suitable to artificially augment a diverse database of real ECGs, thus providing a possible solution to the demand for extensive ECG datasets.