A deep learning approach for the automatic recognition of prosthetic mitral valve in echocardiographic images

2021 · Vol 133 · pp. 104388 · Author(s): Majid Vafaeezadeh, Hamid Behnam, Ali Hosseinsabet, Parisa Gifani

2021 · Vol 409 · pp. 107142 · Author(s): Fernando Lara, Román Lara-Cueva, Julio C. Larco, Enrique V. Carrera, Rubén León

2020 · Vol 13 (2) · pp. 374-381 · Author(s): Kenya Kusunose, Takashi Abe, Akihiro Haga, Daiju Fukuda, Hirotsugu Yamada, ...

2021 · Vol 23 (1) · Author(s): Ricardo A. Gonzales, Felicia Seemann, Jérôme Lamy, Hamid Mojibian, Dan Atar, ...

Abstract

Background: Mitral annular plane systolic excursion (MAPSE) and left ventricular (LV) early diastolic velocity (e') are key metrics of systolic and diastolic function, but are not often measured by cardiovascular magnetic resonance (CMR). Their derivation is possible with manual, precise annotation of the mitral valve (MV) insertion points along the cardiac cycle in both two- and four-chamber long-axis cines, but this process is highly time-consuming, laborious, and prone to errors. A fully automated, consistent, fast, and accurate method for MV plane tracking is lacking. In this study, we propose MVnet, a deep learning approach for MV point localization and tracking capable of deriving these clinical metrics with performance comparable to that of human experts, and validate it in a multi-vendor, multi-center clinical population.

Methods: The proposed pipeline first performs a coarse MV point annotation in a given cine, accurate enough to apply an automated linear transformation that standardizes the size, cropping, resolution, and heart orientation, and second, tracks the MV points with high accuracy. The model was trained and evaluated on 38,854 cine images from 703 patients with diverse cardiovascular conditions, scanned on equipment from 3 main vendors at 16 centers in 7 countries, and manually annotated by 10 observers. Agreement was assessed by the intra-class correlation coefficient (ICC) for both clinical metrics and by the distance error in the MV plane displacement. For the inter-observer variability analysis, an additional pair of observers performed manual annotations in a randomly chosen set of 50 patients.

Results: MVnet achieved fast processing (<1 s/cine) with excellent ICCs of 0.94 (MAPSE) and 0.93 (LV e') and an MV plane tracking error of −0.10 ± 0.97 mm. The inter-observer variability analysis yielded ICCs of 0.95 and 0.89 and a tracking error of −0.15 ± 1.18 mm, respectively.

Conclusion: A dual-stage deep learning approach for automated annotation of MV points for systolic and diastolic evaluation in CMR long-axis cine images was developed. The method tracks these points accurately and quickly, which will improve the feasibility of CMR methods that rely on valve tracking and increase their utility in the clinical setting.
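For illustration, the two clinical metrics can be computed directly from tracked MV insertion points. The sketch below is a minimal NumPy example, not the authors' released code: the array layout, the fixed-apex definition of the long axis, and the finite-difference velocity estimate are all assumptions made for the sake of the example.

```python
import numpy as np

def mapse_and_e_prime(mv_points, apex, frame_times):
    """Toy derivation of MAPSE and LV e' from tracked MV insertion points.

    mv_points   : (n_frames, 2, 2) array, the two (x, y) MV insertion
                  points per frame, in mm (hypothetical layout).
    apex        : (2,) array, LV apex position in mm, assumed fixed
                  over the cycle to define the long axis.
    frame_times : (n_frames,) array of acquisition times in seconds.
    """
    annulus = mv_points.mean(axis=1)          # mid-point of the MV plane
    axis = apex - annulus[0]
    axis /= np.linalg.norm(axis)              # unit vector along the long axis

    # Longitudinal displacement of the annulus relative to end-diastole
    # (frame 0), positive toward the apex.
    disp = (annulus - annulus[0]) @ axis

    mapse = disp.max()                        # peak systolic excursion, mm

    # Annular velocity by finite differences; e' is taken as the peak
    # early diastolic recoil velocity after end-systole.
    vel = np.gradient(disp, frame_times)      # mm/s
    end_systole = disp.argmax()
    e_prime = -vel[end_systole:].min() / 10.0 # convert mm/s to cm/s

    return mapse, e_prime
```

In this toy formulation, MAPSE is the peak longitudinal excursion of the annular mid-point toward the apex, and e' is the peak annular recoil velocity after end-systole; the paper's actual derivation from two- and four-chamber cines may differ in detail.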


2018 · Vol 6 (3) · pp. 122-126 · Author(s): Mohammed Ibrahim Khan, Akansha Singh, Anand Handa, ...

2020 · Vol 17 (3) · pp. 299-305 · Author(s): Riaz Ahmad, Saeeda Naz, Muhammad Afzal, Sheikh Rashid, Marcus Liwicki, ...

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. The paper makes three main contributions: (1) pre-processing, (2) a deep-learning-based approach, and (3) data augmentation. The pre-processing step includes pruning of extra white space and de-skewing of skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Combining data augmentation with the deep learning approach yields a clear improvement, raising the Character Recognition (CR) rate from a 75.08% baseline to 80.02%.
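Multi-dimensional LSTM layers are not part of mainstream frameworks, so the sketch below substitutes a bidirectional LSTM scanning column features, purely to illustrate how a CTC-trained line recognizer of this general kind is wired in PyTorch; the image height, vocabulary size, and hidden dimensions are invented for the example and do not reflect the paper's configuration.

```python
import torch
import torch.nn as nn

class CTCLineRecognizer(nn.Module):
    """CTC-trained text-line recognizer (illustrative stand-in for MDLSTM).

    A bidirectional LSTM scans the columns of a normalized, de-skewed
    line image, and CTC aligns the per-column outputs with the Arabic
    character transcript. All sizes are hypothetical.
    """
    def __init__(self, img_height=48, hidden=256, n_classes=120):
        super().__init__()
        # n_classes = character set + 1 for the CTC blank (index 0).
        self.rnn = nn.LSTM(img_height, hidden, num_layers=2,
                           bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, images):
        # images: (batch, height, width) grayscale line images.
        cols = images.permute(0, 2, 1)    # (batch, width, height)
        feats, _ = self.rnn(cols)         # one feature vector per column
        return self.head(feats)           # (batch, width, n_classes)

# One illustrative training step with CTC loss on dummy tensors.
model = CTCLineRecognizer()
ctc = nn.CTCLoss(blank=0, zero_infinity=True)

images = torch.rand(4, 48, 600)               # dummy batch of line images
targets = torch.randint(1, 120, (4, 30))      # dummy label sequences
input_lens = torch.full((4,), 600, dtype=torch.long)
target_lens = torch.full((4,), 30, dtype=torch.long)

log_probs = model(images).log_softmax(-1).permute(1, 0, 2)  # (T, N, C)
loss = ctc(log_probs, targets, input_lens, target_lens)
loss.backward()
```

A real setup would replace the dummy tensors with pre-processed KHATT line images and their character-index transcripts, and add the augmentation step the paper describes.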

