Classification of Cardiomyopathies from MR Cine Images Using Convolutional Neural Network with Transfer Learning

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1554
Author(s):  
Philippe Germain ◽  
Armine Vardazaryan ◽  
Nicolas Padoy ◽  
Aissam Labani ◽  
Catherine Roy ◽  
...  

The automatic classification of various types of cardiomyopathies is desirable but has never been performed using a convolutional neural network (CNN). The purpose of this study was to evaluate currently available CNN models for classifying cine magnetic resonance (cine-MR) images of cardiomyopathies. Methods: Diastolic and systolic frames of 1200 cine-MR sequences from three categories of subjects (395 normal, 411 hypertrophic cardiomyopathy, and 394 dilated cardiomyopathy) were selected, preprocessed, and labeled. Pretrained, fine-tuned deep learning models (VGG) were used for image classification (sixfold cross-validation and double split testing with hold-out data). A heat activation map algorithm (Grad-CAM) was applied to reveal the salient pixel areas driving the classification. Results: The diastolic–systolic dual-input concatenated VGG model reached a cross-validation accuracy of 0.982 ± 0.009. Summed confusion matrices showed that, for the 1200 inputs, the VGG model made 22 errors. Classification of a 227-input validation group by an experienced radiologist and cardiologist led to a similar number of discrepancies. The image preparation process yielded a 5% accuracy improvement over nonprepared images. Grad-CAM heat activation maps showed that most misclassifications occurred when an extracardiac location caught the attention of the network. Conclusions: CNNs are well suited to the classification of cardiomyopathies, reaching 98% accuracy regardless of the imaging plane when both diastolic and systolic frames are incorporated. Misclassifications are in the same range as inter-observer discrepancies among experienced human readers.
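The dual-input design concatenates features from the diastolic and systolic frames before a shared classifier head. A minimal NumPy sketch of that fusion step, using a global-average-pool as a stand-in for the pretrained VGG backbone (the real model's feature extractor is far deeper):

```python
import numpy as np

rng = np.random.default_rng(0)

def backbone_features(x):
    # Placeholder for a pretrained VGG backbone: global-average-pool
    # each frame down to one value per channel.
    return x.mean(axis=(1, 2))

diastolic = rng.random((4, 64, 64, 3))  # batch of diastolic frames
systolic = rng.random((4, 64, 64, 3))   # matching systolic frames

# Concatenate the two feature streams; a dense softmax head over the
# three classes (normal / HCM / DCM) would follow.
fused = np.concatenate([backbone_features(diastolic),
                        backbone_features(systolic)], axis=1)
print(fused.shape)
```

With this placeholder pooling, each 3-channel frame contributes 3 features, so the fused vector has 6 features per example; a real VGG backbone would produce a much wider fused vector feeding the dense head.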

2020 ◽  
Vol 10 (6) ◽  
pp. 1999 ◽  
Author(s):  
Milica M. Badža ◽  
Marko Č. Barjaktarović

The classification of brain tumors is performed by biopsy, which is not usually conducted before definitive brain surgery. Improvements in technology and machine learning can help radiologists with tumor diagnostics without invasive measures. A machine-learning algorithm that has achieved substantial results in image segmentation and classification is the convolutional neural network (CNN). We present a new CNN architecture for the classification of three brain tumor types. The developed network is simpler than existing pre-trained networks and was tested on T1-weighted contrast-enhanced magnetic resonance images. The performance of the network was evaluated using four approaches: combinations of two 10-fold cross-validation methods and two databases. The generalization capability of the network was tested with one of the 10-fold methods, subject-wise cross-validation, and the improvement was tested using an augmented image database. The best 10-fold result was obtained with record-wise cross-validation on the augmented data set, with an accuracy of 96.56%. With good generalization capability and good execution speed, the newly developed CNN architecture could be used as an effective decision-support tool for radiologists in medical diagnostics.
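The distinction between record-wise and subject-wise 10-fold cross-validation is the key methodological point here: subject-wise splitting keeps every image of a patient in a single fold, so the test fold never contains patients seen in training. A small stdlib-only sketch of such a splitter (the record structure is hypothetical):

```python
from collections import defaultdict

def subject_wise_folds(records, n_folds):
    """Assign all records of a subject to the same fold, so no subject
    appears in both the training and the test split."""
    by_subject = defaultdict(list)
    for rec in records:
        by_subject[rec["subject"]].append(rec)
    folds = [[] for _ in range(n_folds)]
    for i, subject in enumerate(sorted(by_subject)):
        folds[i % n_folds].extend(by_subject[subject])
    return folds

# Hypothetical records: 30 images drawn from 7 subjects.
records = [{"subject": f"s{i % 7}", "image": i} for i in range(30)]
folds = subject_wise_folds(records, n_folds=3)
print([len(f) for f in folds])
```

Record-wise splitting, by contrast, would shuffle individual images into folds regardless of subject, which typically inflates accuracy because near-duplicate slices of one patient appear on both sides of the split.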


2019 ◽  
Author(s):  
Carolina L. S. Cipriano ◽  
Giovanni L. F. Da Silva ◽  
Jonnison L. Ferreira ◽  
Aristófanes C. Silva ◽  
Anselmo Cardoso De Paiva

Gliomas are among the most severe and common brain tumors. Manual classification of lesions of this type is a laborious task in the clinical routine. This work therefore proposes an automatic method to classify brain lesions in 3D MR images based on superpixels, the PSO algorithm, and a convolutional neural network. For the complete, central, and active regions, the proposed method obtained accuracies of 87.88%, 70.51%, and 80.08%, and precisions of 76%, 84%, and 75%, respectively. The results demonstrate the difficulty the network has in classifying the regions found in the lesions.
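Particle swarm optimization (PSO) is a population-based search in which each particle tracks its personal best and is pulled toward the swarm's global best. A generic NumPy sketch minimizing a toy sphere function; the abstract does not state the paper's actual PSO objective, so the objective here is purely illustrative:

```python
import numpy as np

def pso(objective, dim, n_particles=20, iters=50, seed=0):
    """Minimal particle swarm optimization (generic sketch)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, (n_particles, dim))  # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = objective(x)
    gbest = pbest[pbest_val.argmin()]               # global best
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia 0.7, cognitive and social coefficients 1.5
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (gbest - x)
        x = x + v
        val = objective(x)
        improved = val < pbest_val
        pbest[improved] = x[improved]
        pbest_val[improved] = val[improved]
        gbest = pbest[pbest_val.argmin()]
    return gbest

# Toy objective standing in for whatever criterion the method optimizes.
best = pso(lambda x: (x ** 2).sum(axis=1), dim=2)
print(best)
```

With these standard coefficients the swarm converges close to the sphere minimum at the origin within 50 iterations.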


Author(s):  
Abdul Kholik ◽  
Agus Harjoko ◽  
Wahyono Wahyono

Vehicle density is a problem that occurs in every city, and its main impact is congestion. Classification of vehicle density levels on particular roads is required because there are at least seven density-level conditions. Monitoring by the police, the Department of Transportation, and road operators currently relies on video surveillance such as CCTV, which is still watched manually. Deep learning is an approach to artificial neural network-based machine learning that has been actively developed and researched recently because it delivers good results on various soft-computing problems. This research uses a convolutional neural network architecture and varies its supporting parameters to calibrate for maximum accuracy. After these parameter experiments, the classification model was evaluated using K-fold cross-validation, a confusion matrix, and testing on held-out data. K-fold cross-validation with K (folds) = 5 yielded an average accuracy of 92.83%; on a test set of 100 samples, the model classified 81 correctly.
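The 81-of-100 test result falls directly out of a confusion matrix: overall accuracy is the trace (correct predictions on the diagonal) divided by the total. A minimal sketch with a hypothetical 3-class matrix constructed to match that 81% figure (the paper's actual matrix and class count are not given in the abstract):

```python
import numpy as np

def accuracy_from_confusion(cm):
    """Overall accuracy = diagonal (trace) / total of a confusion matrix."""
    cm = np.asarray(cm)
    return cm.trace() / cm.sum()

# Hypothetical confusion matrix over three density levels, built so
# that 81 of 100 test samples sit on the diagonal.
cm = [[30, 3, 0],
      [4, 25, 5],
      [1, 6, 26]]
print(round(float(accuracy_from_confusion(cm)), 2))  # 0.81
```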


2019 ◽  
Vol 46 (12) ◽  
pp. 5652-5665 ◽  
Author(s):  
Zongqing Ma ◽  
Xi Wu ◽  
Xin Wang ◽  
Qi Song ◽  
Youbing Yin ◽  
...  

2021 ◽  
pp. 1-16
Author(s):  
Sumit Tripathi ◽  
Neeraj Sharma

BACKGROUND: Noise in magnetic resonance (MR) images causes severe issues for medical diagnosis. OBJECTIVE: In this paper, we propose a discriminative-learning-based convolutional neural network denoiser for MR image data contaminated with noise. METHODS: The proposed method uses depthwise separable convolution along with local response normalization with modified hyperparameters and internal skip connections to denoise the contaminated MR images. Moreover, using parametric ReLU instead of the conventional ReLU in our proposed architecture gives more stable and finer results. The denoised images were further segmented to test the appropriateness of the results. The network, trained on one dataset and tested on another, produces remarkably good results. RESULTS: Our proposed network was used to denoise images at different noise levels, and it yields better performance compared with various networks. SSIM and PSNR showed average improvements of (7.2 ± 0.002)% and (8.5 ± 0.25)%, respectively, when tested on different datasets without retraining the network. Improvements of 5% and 6% were achieved in mean intersection over union (mIoU) and BF score when the denoised images were segmented to test their relevance to biomedical imaging applications. A statistical test suggests that the obtained results are significant at p < 0.05. CONCLUSION: The denoised images are more clinically suitable for medical image diagnosis, as shown by the evaluation parameters. Further, external clinical validation was performed by an experienced radiologist to validate the resulting images.
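PSNR, one of the two image-quality metrics reported above, is a simple function of the mean squared error between the reference and the denoised image. A minimal NumPy implementation (not the paper's evaluation code):

```python
import numpy as np

def psnr(reference, denoised, max_val=1.0):
    """Peak signal-to-noise ratio in dB between a reference image and
    a denoised image, both scaled to [0, max_val]."""
    mse = np.mean((np.asarray(reference, float)
                   - np.asarray(denoised, float)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val ** 2 / mse)

clean = np.zeros((8, 8))
noisy = clean + 0.1  # uniform error of 0.1 -> MSE = 0.01
print(round(psnr(clean, noisy), 1))  # 20.0 dB
```

SSIM is structurally more involved (local means, variances, and covariances over sliding windows), which is why libraries such as scikit-image are normally used for it rather than a few-line formula.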


2021 ◽  
pp. 20210002
Author(s):  
Mayara Simões Bispo ◽  
Mário Lúcio Gomes de Queiroz Pierre Júnior ◽  
Antônio Lopes Apolinário Jr ◽  
Jean Nunes dos Santos ◽  
Braulio Carneiro Junior ◽  
...  

Objective: To analyse the automatic classification performance of a convolutional neural network (CNN), Google Inception v3, using tomographic images of odontogenic keratocysts (OKCs) and ameloblastomas (AMs). Methods: To construct the database, we selected axial multidetector CT images from patients with confirmed AM (n = 22) and OKC (n = 18) based on a conclusive histopathological report. The images (n = 350) were segmented manually and data augmentation algorithms were applied, totalling 2500 images. The k-fold × five cross-validation method (k = 2) was used to estimate the accuracy of the CNN model. Results: The accuracy and standard deviation (%) of cross-validation for the five iterations performed were 90.16 ± 0.95, 91.37 ± 0.57, 91.62 ± 0.19, 92.48 ± 0.16, and 91.21 ± 0.87, respectively. A higher error rate was observed for the classification of AM images. Conclusion: This study demonstrated a high classification accuracy of Google Inception v3 for tomographic images of OKCs and AMs; however, AM images presented the higher error rate.
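Growing 350 segmented images into roughly 2500 implies about a sevenfold augmentation. A generic geometric-augmentation sketch that yields seven variants per source image; the paper's actual transform set is not specified in the abstract, so these rotations and flips are an assumption:

```python
import numpy as np

def augment(img):
    """Produce 7 geometric variants of an image: the original, three
    90-degree rotations, two mirror flips, and one rotated flip."""
    variants = [img]
    for k in (1, 2, 3):
        variants.append(np.rot90(img, k))
    variants.append(np.fliplr(img))
    variants.append(np.flipud(img))
    variants.append(np.rot90(np.fliplr(img)))
    return variants

img = np.arange(16).reshape(4, 4)
print(len(augment(img)))  # 7 variants per source image
```

At 7 variants each, 350 base images yield 2450 samples, in the neighbourhood of the 2500 reported; intensity-based augmentations would close the gap.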


2020 ◽  
Vol 10 (5) ◽  
pp. 1023-1032
Author(s):  
Lin Qi ◽  
Haoran Zhang ◽  
Xuehao Cao ◽  
Xuyang Lyu ◽  
Lisheng Xu ◽  
...  

Accurate segmentation of the left ventricular (LV) blood pool and the myocardium (or left ventricular epicardium, MYO) from cardiac magnetic resonance (MR) images can help doctors quantify LV ejection fraction and myocardial deformation. To reduce doctors' burden of manual segmentation, in this study we propose an automated, concurrent segmentation method for the LV and MYO. First, we employ a convolutional neural network (CNN) architecture to extract the region of interest (ROI) from short-axis cardiac cine MR images as a preprocessing step. Next, we present a multi-scale feature fusion (MSFF) CNN with a new weighted Dice index (WDI) loss function for concurrent segmentation of the LV and MYO. We use MSFF modules at three scales to extract different features, and then concatenate feature maps via short and long skip connections in the encoder and decoder paths to capture more complete context and geometric structure for better segmentation. Finally, we compare the proposed method with Fully Convolutional Networks (FCN) and U-Net on combined cardiac datasets from MICCAI 2009 and ACDC 2017. Experimental results demonstrate that the proposed method performs effectively on LV and MYO segmentation in the combined datasets, indicating its potential for clinical application.
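A weighted Dice loss scores the overlap of each predicted class mask against its ground truth and combines the per-class Dice scores with class weights. A minimal NumPy sketch of that idea (the paper's exact WDI formulation is not given in the abstract, so the weighting scheme here is an assumption):

```python
import numpy as np

def weighted_dice_loss(pred, target, weights, eps=1e-6):
    """Weighted soft-Dice loss over classes; pred and target have
    shape (classes, H, W) with values in [0, 1]."""
    inter = (pred * target).sum(axis=(1, 2))
    union = pred.sum(axis=(1, 2)) + target.sum(axis=(1, 2))
    dice = (2.0 * inter + eps) / (union + eps)   # per-class Dice
    w = np.asarray(weights, float) / np.sum(weights)
    return 1.0 - float((w * dice).sum())

# Perfect overlap on a toy 2-class (LV, MYO) mask -> loss of 0.
target = np.zeros((2, 4, 4))
target[0, :2] = 1.0   # "LV" class occupies the top half
target[1, 2:] = 1.0   # "MYO" class occupies the bottom half
print(round(weighted_dice_loss(target, target, weights=[1, 2]), 6))  # 0.0
```

Weighting lets the smaller or harder structure (typically the thin myocardium) contribute more to the gradient than raw pixel counts would allow.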


2020 ◽  
Vol 2020 (4) ◽  
pp. 4-14
Author(s):  
Vladimir Budak ◽  
Ekaterina Ilyina

The article proposes a classification of lenses with different symmetric beam angles and offers a scale as a spotlight palette. A collection of spotlight images was created and classified according to the proposed scale. An analysis of 788 existing lenses and reflectors with different LEDs and COBs was carried out, and the dependence of axial light intensity on beam angle was obtained. Transfer training of a new deep convolutional neural network (CNN) based on the pre-trained GoogLeNet was performed using this collection. Grad-CAM analysis showed that the trained network correctly identifies the features of the objects. This work allows arbitrary spotlights to be classified with an accuracy of about 80%. Thus, a lighting designer can determine the class of a spotlight, and the corresponding type of lens with its technical parameters, using this new CNN-based model.
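The proposed scale amounts to binning symmetric beam angles into discrete spotlight classes. A trivial sketch of such a mapping; the bin edges below are hypothetical, since the article's actual class boundaries are not given in the abstract:

```python
def beam_angle_class(angle_deg, bins=(10, 20, 35, 50, 70)):
    """Map a symmetric beam angle (degrees) to a spotlight class index
    on a hypothetical palette scale with the given upper bin edges."""
    for i, edge in enumerate(bins):
        if angle_deg <= edge:
            return i
    return len(bins)  # widest-beam class

print(beam_angle_class(15))  # 1
```

The CNN effectively learns this mapping from spot images alone, so a designer can recover the class, and hence a lens type, without photometric measurement.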


Author(s):  
P.L. Nikolaev

This article deals with a method for binary classification of images containing small text. The classification is based on the fact that the text can have two orientations: it can be positioned horizontally and read from left to right, or it can be turned 180 degrees, so that the image must be rotated to read it. Text of this type can be found on the covers of a variety of books, so when recognizing covers it is necessary first to determine the orientation of the text before recognizing it directly. The article describes the development of a deep neural network for determining text orientation in the context of book-cover recognition. The results of training and testing a convolutional neural network on synthetic data, as well as examples of the network operating on real data, are presented.
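Synthetic training data for this binary task can be generated by pairing each text patch with its 180-degree rotation. A minimal NumPy sketch; the random patch stands in for a rendered text image, which is an assumption about the paper's data pipeline:

```python
import numpy as np

rng = np.random.default_rng(42)

def make_pair(img):
    """Produce one upright and one 180-degree-rotated sample,
    labelled 0 and 1, for the binary orientation task."""
    return (img, 0), (np.rot90(img, 2), 1)

# Stand-in for a rendered text patch; real synthetic data would be
# images of rendered words or book-cover crops.
img = rng.integers(0, 256, size=(32, 32), dtype=np.uint8)
(upright, y_up), (flipped, y_flip) = make_pair(img)
print(y_up, y_flip)
```

Because the two classes are exact rotations of each other, the generated dataset is perfectly balanced by construction.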

