Automatic Segmentation of Cardiac Magnetic Resonance Images based on Multi-input Fusion Network

Author(s): Jianshe Shi, Yuguang Ye, Daxin Zhu, Lianta Su, Yifeng Huang, ...

2012 · Vol 12 (04) · pp. 1250059
Author(s): Mohammed Ammar, Saïd Mahmoudi, Mohammed Amine Chikh, Amine Abbou

Active Appearance Models (AAM), introduced by Cootes et al. [IEEE Transactions on Pattern Analysis and Machine Intelligence, 2001], learn object characteristics during a training phase by building a compact statistical model that represents the shape and texture variation of the object. This model is then used to find the object location and the shape-appearance parameters in a test set. Selecting the initial position of the constructed model in a test image is a critical step in this context. The goal of this work is to propose an automatic segmentation method for cardiovascular MR images using an AAM-based approach. The AAM was constructed from 20 end-diastolic and end-systolic short-axis cardiac magnetic resonance (MR) images. Once the model is constructed, the search step in a test image is usually started from a manually selected position. In this paper, the localization of the left ventricular cavity in the test image is therefore used to select the initial position of the model built from the training images, and we propose an automatic approach to detect this spatial position using two methods: (1) the circular Hough transform (CHT) and (2) the evaluation of the Hausdorff distance.
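As a rough illustration of the first localization method, the following Python sketch uses OpenCV's circular Hough transform to propose a candidate left-ventricular center from which a model search could be started. It is not code from the paper; the file name, preprocessing, radius range, and thresholds are illustrative assumptions.

import cv2
import numpy as np

# Load one short-axis cardiac MR slice as grayscale (hypothetical file name).
image = cv2.imread("short_axis_slice.png", cv2.IMREAD_GRAYSCALE)

# Median filtering suppresses noise before the gradient-based circle search.
blurred = cv2.medianBlur(image, 5)

# The left ventricular cavity is roughly circular in short-axis views,
# so look for circular structures within an assumed radius range.
circles = cv2.HoughCircles(
    blurred,
    cv2.HOUGH_GRADIENT,
    dp=1,          # accumulator at full image resolution
    minDist=50,    # minimum distance between detected centers (pixels)
    param1=100,    # upper Canny edge threshold
    param2=30,     # accumulator threshold; lower values give more candidates
    minRadius=10,  # assumed plausible LV radius range (pixels)
    maxRadius=60,
)

if circles is not None:
    # Take the strongest detection as the initial position for the AAM search.
    x, y, r = np.round(circles[0, 0]).astype(int)
    print(f"Candidate LV center: ({x}, {y}), radius {r} px")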


2021 · Vol 21 (1)
Author(s): Markus J. Ankenbrand, Liliia Shainberg, Michael Hock, David Lohr, Laura M. Schreiber

Abstract. Background: Image segmentation is a common task in medical imaging, e.g., for volumetry analysis in cardiac MRI. Artificial neural networks are used to automate this task with performance similar to that of manual operators. However, this performance is only achieved on the narrow tasks the networks are trained on, and it drops dramatically when data characteristics differ from the training set properties. Moreover, neural networks are commonly considered black boxes, because it is hard to understand how they make decisions and why they fail. Therefore, it is also hard to predict whether they will generalize and work well with new data. Here we present a generic method for segmentation model interpretation. Sensitivity analysis is an approach in which the model input is modified in a controlled manner and the effect of these modifications on the model output is evaluated. This method yields insights into the sensitivity of the model to these alterations and therefore into the importance of certain features for segmentation performance. Results: We present an open-source Python library (misas) that facilitates the use of sensitivity analysis with arbitrary data and models. We show that this method is a suitable approach to answering practical questions regarding the use and functionality of segmentation models. We demonstrate this in two case studies on cardiac magnetic resonance imaging. The first case study explores the suitability of a published network for use on a public dataset the network has not been trained on. The second case study demonstrates how sensitivity analysis can be used to evaluate the robustness of a newly trained model. Conclusions: Sensitivity analysis is a useful tool for deep learning developers as well as for users such as clinicians. It extends their toolbox, enabling and improving the interpretability of segmentation models. Enhancing our understanding of neural networks through sensitivity analysis also assists in decision making. Although demonstrated only on cardiac magnetic resonance images, this approach and software are much more broadly applicable.
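misas ships its own transformation and reporting utilities; as a library-agnostic sketch of the underlying idea, the following Python code rotates an input image by increasing angles and tracks how the Dice overlap with the unmodified prediction degrades. The segment callable is a hypothetical stand-in for any trained segmentation model's prediction function.

import numpy as np
from scipy.ndimage import rotate

def dice(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary masks."""
    intersection = np.logical_and(a, b).sum()
    total = a.sum() + b.sum()
    return 2.0 * intersection / total if total > 0 else 1.0

def sensitivity_to_rotation(image: np.ndarray, segment, angles=(0, 15, 30, 45, 90)):
    """Return Dice score vs. rotation angle, with the 0-degree prediction as reference."""
    reference = segment(image) > 0.5
    scores = {}
    for angle in angles:
        rotated = rotate(image, angle, reshape=False, order=1)
        prediction = segment(rotated) > 0.5
        # Rotate the mask back so both predictions are compared in the same frame.
        realigned = rotate(prediction.astype(float), -angle, reshape=False, order=0) > 0.5
        scores[angle] = dice(reference, realigned)
    return scores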


Circulation · 2020 · Vol 142 (Suppl_3)
Author(s): Shuo Wang, Hena Patel, Tamari Miller, Keith Ameyaw, Akhil Narang, ...

Background: It is unclear whether artificial intelligence (AI) can provide automatic solutions to measure right ventricular ejection fraction (RVEF), due to the complex RV geometry. Although several deep learning (DL) algorithms are available to quantify RVEF from cardiac magnetic resonance (CMR) images, there has been no systematic comparison of these algorithms, and the prognostic value of these automated measurements is unknown. We aimed to determine whether RVEF measurements made using DL algorithms could be used to risk stratify patients similarly to measurements made by an expert. Methods: From a pre-existing registry, we identified 200 patients who underwent CMR. RVEF was determined using 3 fully automated commercial DL algorithms (DL-RVEF) and also by a clinical expert (CLIN-RVEF) using conventional methodology. Each of the DL-RVEF approaches was compared against CLIN-RVEF using linear regression and Bland-Altman analyses. In addition, RVEF values were classified according to clinically important cutoffs (<35%, 35-50%, ≥50%), and rates of disagreement with the reference classification were determined. ROC analysis was performed to evaluate the ability of CLIN-RVEF and each of the DL-RVEF based classifications to predict major adverse cardiovascular events (MACE). Results: CLIN-RVEF and the three DL-RVEFs were obtained in all patients. We found only modest correlations between DL-RVEF and CLIN-RVEF (figure). The DL-RVEF algorithms had accuracy ranging from 0.59 to 0.78 for categorizing RV function. Nevertheless, ROC analysis showed no significant differences between the 4 approaches in predicting MACE, as reflected by respective AUC values of 0.68, 0.69, 0.64, and 0.63. Conclusions: Although the automated algorithms predicted patient outcomes as well as CLIN-RVEF did, the agreement between DL-RVEF and the clinical expert's measurements was not optimal. DL approaches need further refinements to improve automated assessment of RV function.
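The core of the agreement analysis described in the Methods can be sketched in a few lines of Python; the RVEF arrays below are made-up example values, not data from the study.

import numpy as np

clin_rvef = np.array([52.0, 38.5, 61.2, 33.0, 47.8])  # expert reference (%), example values
dl_rvef = np.array([49.5, 41.0, 58.0, 30.2, 50.1])    # one DL algorithm (%), example values

# Bland-Altman statistics: mean bias and 95% limits of agreement.
diff = dl_rvef - clin_rvef
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)
print(f"Bias {bias:+.1f}%, limits of agreement {bias - loa:.1f}% to {bias + loa:.1f}%")

def categorize(rvef: np.ndarray) -> np.ndarray:
    """Map RVEF values to the clinical classes 0 (<35%), 1 (35-50%), 2 (>=50%)."""
    return np.digitize(rvef, bins=[35.0, 50.0])

# Rate of disagreement with the reference classification.
disagreement = np.mean(categorize(dl_rvef) != categorize(clin_rvef))
print(f"Category disagreement rate: {disagreement:.0%}")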

