Ground-truth-free deep learning for artefacts reduction in 2D radial cardiac cine MRI using a synthetically generated dataset

Author(s):  
Duote Chen ◽  
Tobias Schaeffter ◽  
Christoph Kolbitsch ◽  
Andreas Kofler

Algorithms ◽
2021 ◽  
Vol 14 (7) ◽  
pp. 212
Author(s):  
Youssef Skandarani ◽  
Pierre-Marc Jodoin ◽  
Alain Lalande

Deep learning methods are the de facto solutions to a multitude of medical image analysis tasks. Cardiac MRI segmentation is one such application, which, like many others, requires a large amount of annotated data so that a trained network can generalize well. Unfortunately, the process of obtaining a large number of images manually curated by medical experts is both slow and extremely expensive. In this paper, we set out to explore whether expert knowledge is a strict requirement for the creation of annotated data sets on which machine learning can successfully be trained. To do so, we gauged the performance of three segmentation models, namely U-Net, Attention U-Net, and ENet, trained with different loss functions on expert and non-expert ground truth for cardiac cine MRI segmentation. Evaluation was done with classic segmentation metrics (Dice index and Hausdorff distance) as well as clinical measurements, such as the ventricular ejection fractions and the myocardial mass. The results reveal that the generalization performance of a segmentation neural network trained on non-expert ground truth data is, for all practical purposes, as good as that of a network trained on expert ground truth data, particularly when the non-expert receives a decent level of training, highlighting an opportunity for the efficient and cost-effective creation of annotations for cardiac data sets.
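As a concrete illustration of the evaluation metrics mentioned in this abstract (a minimal sketch, not code from the paper), the following Python snippet computes the Dice index and a symmetric Hausdorff distance between two binary segmentation masks. The function names and the use of SciPy's directed_hausdorff on foreground pixel coordinates are assumptions of this sketch rather than details taken from the study.

# Minimal sketch (not the authors' code): Dice index and Hausdorff distance
# between two binary segmentation masks, the two metrics used in the study.
import numpy as np
from scipy.spatial.distance import directed_hausdorff


def dice_index(pred: np.ndarray, gt: np.ndarray) -> float:
    """Dice = 2|A intersect B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:          # both masks empty: define Dice as 1.0
        return 1.0
    return 2.0 * np.logical_and(pred, gt).sum() / denom


def hausdorff_distance(pred: np.ndarray, gt: np.ndarray) -> float:
    """Symmetric Hausdorff distance between the foreground pixel sets
    (in pixel units; scale by the pixel spacing to report mm)."""
    p_pts, g_pts = np.argwhere(pred), np.argwhere(gt)
    return max(directed_hausdorff(p_pts, g_pts)[0],
               directed_hausdorff(g_pts, p_pts)[0])


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    gt = rng.integers(0, 2, size=(128, 128))
    pred = gt.copy()
    pred[:4] = 0            # perturb the prediction slightly
    print(f"Dice: {dice_index(pred, gt):.3f}, "
          f"HD: {hausdorff_distance(pred, gt):.1f} px")

In practice the Hausdorff distance is often computed on the mask contours rather than on all foreground pixels, and reported in millimetres after scaling by the pixel spacing.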


2021 ◽  
Vol 11 (4) ◽  
pp. 1600-1612
Author(s):  
Yan Wang ◽  
Yue Zhang ◽  
Zhaoying Wen ◽  
Bing Tian ◽  
Evan Kao ◽  
...  

2021 ◽  
Author(s):  
Roshan Reddy Upendra ◽  
S. M. Kamrul Hasan ◽  
Richard Simon ◽  
Brian Jamison Wentz ◽  
Suzanne M. Shontz ◽  
...  

Entropy ◽  
2020 ◽  
Vol 22 (6) ◽  
pp. 687
Author(s):  
Elena Martín-González ◽  
Teresa Sevilla ◽  
Ana Revilla-Orodea ◽  
Pablo Casaseca-de-la-Higuera ◽  
Carlos Alberola-López

Groupwise (GW) image registration is customarily used for subsequent processing in medical imaging. However, it is computationally expensive due to the repeated calculation of transformations and gradients. In this paper, we propose a deep learning (DL) architecture that achieves GW elastic registration of a 2D dynamic sequence on an affordable average GPU. Our solution, referred to as dGW, is a simplified version of the well-known U-net. In our GW solution, the image to which the other images are registered, referred to in the paper as the template image, is obtained iteratively together with the registered images. Design and evaluation have been carried out using 2D cine cardiac MR slices from two databases consisting of 89 and 41 subjects, respectively. The first database was used for training and validation with a 66.6–33.3% split. The second one was used for validation (50%) and testing (50%). Additional network hyperparameters, essentially those that control the degree of transformation smoothness, are obtained by means of a forward selection procedure. Our results show a 9-fold runtime reduction with respect to an optimization-based implementation; in addition, using the well-known structural similarity (SSIM) index, we obtained significant differences between dGW and an alternative DL solution based on VoxelMorph.
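To make the groupwise idea concrete, the toy Python sketch below jointly estimates a template (as the mean of the currently aligned frames) and re-registers each frame to it. It uses translation-only phase correlation in NumPy purely for illustration, so the function names and the rigid-motion model are assumptions of this sketch and not the elastic, U-net-based dGW method described in the abstract.

# Toy NumPy sketch of the groupwise scheme (translation-only analogue, not the
# paper's elastic DL method): the template is re-estimated as the mean of the
# currently aligned frames, and every frame is re-registered to it.
import numpy as np


def phase_correlation_shift(ref: np.ndarray, mov: np.ndarray) -> tuple[int, int]:
    """Estimate the integer (row, col) shift that aligns `mov` to `ref`."""
    cross = np.fft.fft2(ref) * np.conj(np.fft.fft2(mov))
    cross /= np.abs(cross) + 1e-12
    corr = np.abs(np.fft.ifft2(cross))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the symmetric range
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return int(dy), int(dx)


def groupwise_align(frames: np.ndarray, n_iter: int = 5):
    """Jointly estimate a template and aligned frames from a (T, H, W) stack."""
    aligned = frames.copy()
    for _ in range(n_iter):
        template = aligned.mean(axis=0)          # current template estimate
        for t, frame in enumerate(frames):
            dy, dx = phase_correlation_shift(template, frame)
            aligned[t] = np.roll(frame, (dy, dx), axis=(0, 1))
    return aligned.mean(axis=0), aligned


if __name__ == "__main__":
    rng = np.random.default_rng(1)
    base = rng.random((64, 64))
    # synthetic "dynamic sequence": shifted copies of one image
    frames = np.stack([np.roll(base, (s, -s), axis=(0, 1)) for s in range(8)])
    template, aligned = groupwise_align(frames)
    print("residual std across aligned frames:", aligned.std(axis=0).mean())

The loop mirrors the design choice highlighted in the abstract: the template is not fixed in advance but is refined iteratively together with the registered images.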


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Thomas Küstner ◽  
Niccolo Fuin ◽  
Kerstin Hammernik ◽  
Aurelien Bustin ◽  
Haikun Qi ◽  
...  

2020 ◽  
Vol 85 (1) ◽  
pp. 152-167 ◽  
Author(s):  
Christopher M. Sandino ◽  
Peng Lai ◽  
Shreyas S. Vasanawala ◽  
Joseph Y. Cheng
Keyword(s):  
Cine MRI ◽

2017 ◽  
Author(s):  
Tian Zhou ◽  
Ilknur Icke ◽  
Belma Dogdas ◽  
Sarayu Parimal ◽  
Smita Sampath ◽  
...  

Radiology ◽  
2019 ◽  
Vol 291 (3) ◽  
pp. 606-617 ◽  
Author(s):  
Nan Zhang ◽  
Guang Yang ◽  
Zhifan Gao ◽  
Chenchu Xu ◽  
Yanping Zhang ◽  
...  
