Evaluasi Performasi Ruang Warna pada Klasifikasi Diabetic Retinophaty Menggunakan Convolution Neural Network

2021 ◽  
Vol 8 (3) ◽  
pp. 619
Author(s):  
Candra Dewi ◽  
Andri Santoso ◽  
Indriati Indriati ◽  
Nadia Artha Dewi ◽  
Yoke Kusuma Arbawa

The increasing number of people with diabetes is one of the factors behind the growing number of patients with diabetic retinopathy. One of the images used by ophthalmologists to identify diabetic retinopathy is a retinal photograph. In this research, diabetic retinopathy is recognized automatically from retinal fundus images with the Convolutional Neural Network (CNN) algorithm, a variant of deep learning. An obstacle found in the recognition process is that the color of the retina tends to be yellowish red, so the RGB color space does not produce optimal accuracy. Therefore, in this research various color spaces were tested to obtain better results. Trials using 1,000 images in the RGB, HSI, YUV, and L*a*b* color spaces gave suboptimal results on balanced data, where the best accuracy was still below 50%. On unbalanced data, however, the accuracy was fairly high: 83.53% on the training data in the YUV color space and 74.40% on the test data in all color spaces.
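As an illustration of the kind of preprocessing this abstract describes, the sketch below converts a fundus image from RGB into the YUV and L*a*b* color spaces with OpenCV before it is handed to a CNN classifier. The file name, input size, and normalization are illustrative assumptions, not details taken from the paper.

```python
# A minimal sketch (not the authors' code) of converting a retinal fundus
# image into alternative color spaces before CNN classification.
# The file path and target size are hypothetical placeholders.
import cv2
import numpy as np

img_bgr = cv2.imread("fundus_example.png")          # OpenCV loads images as BGR
img_bgr = cv2.resize(img_bgr, (224, 224))           # assumed CNN input size

img_yuv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2YUV)  # YUV color space
img_lab = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2LAB)  # L*a*b* color space

# Normalize to [0, 1] and add a batch dimension for a CNN classifier
x = np.expand_dims(img_yuv.astype(np.float32) / 255.0, axis=0)
# x can now be passed to a trained CNN, e.g. model.predict(x)
```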

Cancers ◽  
2021 ◽  
Vol 13 (4) ◽  
pp. 652 ◽  
Author(s):  
Carlo Augusto Mallio ◽  
Andrea Napolitano ◽  
Gennaro Castiello ◽  
Francesco Maria Giordano ◽  
Pasquale D'Alessio ◽  
...  

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
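To make the reported metrics concrete, the following hedged sketch shows how sensitivity, specificity, and AUC of the kind quoted above can be computed with scikit-learn from binary labels and model scores. The label and score arrays are made-up examples, not data from the study.

```python
# Illustrative computation of sensitivity, specificity, and AUC with
# scikit-learn; the labels and scores are invented for demonstration.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

y_true = np.array([1, 1, 0, 1, 0, 0, 1, 0])      # 1 = COVID-19, 0 = ICI pneumonitis
y_score = np.array([0.9, 0.8, 0.7, 0.6, 0.4, 0.3, 0.95, 0.55])

auc = roc_auc_score(y_true, y_score)

y_pred = (y_score >= 0.5).astype(int)            # threshold chosen for illustration
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
print(f"AUC={auc:.2f}, sensitivity={sensitivity:.2f}, specificity={specificity:.2f}")
```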


Diagnostics ◽  
2020 ◽  
Vol 10 (10) ◽  
pp. 803
Author(s):  
Luu-Ngoc Do ◽  
Byung Hyun Baek ◽  
Seul Kee Kim ◽  
Hyung-Jeong Yang ◽  
Ilwoo Park ◽  
...  

The early detection and rapid quantification of acute ischemic lesions play pivotal roles in stroke management. We developed a deep learning algorithm for the automatic binary classification of the Alberta Stroke Program Early Computed Tomographic Score (ASPECTS) using diffusion-weighted imaging (DWI) in acute stroke patients. Three hundred and ninety DWI datasets with acute anterior circulation stroke were included. A classifier algorithm utilizing a recurrent residual convolutional neural network (RRCNN) was developed for classification between low (1–6) and high (7–10) DWI-ASPECTS groups. The model performance was compared with a pre-trained VGG16, Inception V3, and a 3D convolutional neural network (3DCNN). The proposed RRCNN model demonstrated higher performance than the pre-trained models and 3DCNN with an accuracy of 87.3%, AUC of 0.941, and F1-score of 0.888 for classification between the low and high DWI-ASPECTS groups. These results suggest that the deep learning algorithm developed in this study can provide a rapid assessment of DWI-ASPECTS and may serve as an ancillary tool that can assist physicians in making urgent clinical decisions.
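For readers unfamiliar with the recurrent residual design, the sketch below shows one way a recurrent residual convolutional block (in the spirit of R2U-Net-style architectures) can be written in PyTorch. The channel counts, recurrence depth, and block arrangement are assumptions for illustration and are not the authors' exact RRCNN.

```python
# A hedged sketch of a recurrent residual convolutional block of the kind
# an RRCNN is built from; hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class RecurrentConv(nn.Module):
    def __init__(self, channels, steps=2):
        super().__init__()
        self.steps = steps
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.steps - 1):
            out = self.conv(x + out)          # feed the activation back in
        return out

class RRCNNBlock(nn.Module):
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.proj = nn.Conv2d(in_ch, out_ch, 1)                 # match channels for the residual
        self.body = nn.Sequential(RecurrentConv(out_ch), RecurrentConv(out_ch))

    def forward(self, x):
        x = self.proj(x)
        return x + self.body(x)                                  # residual connection

# Toy usage: a single-channel DWI slice passed through one block
y = RRCNNBlock(1, 16)(torch.randn(2, 1, 64, 64))
print(y.shape)   # torch.Size([2, 16, 64, 64])
```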


2020 ◽  
Author(s):  
Hao Wu ◽  
Wen Tang ◽  
Chu Wu ◽  
Yufeng Deng ◽  
Rongguo Zhang

Purpose: Although statistical models have been employed to detect and classify lung nodules using deep learning-extracted and clinical features, there is a lack of model validation in independent, multinational datasets of computed tomography (CT) scans and patient clinical information. To this end, we developed a deep learning-based algorithm to predict the malignancy of pulmonary nodules and validated its performance in three independent datasets containing multiracial and multinational populations. Methods: In this study, a convolutional neural network-based algorithm to predict lung nodule malignancy was built from CT scans and patient-wise clinical features (i.e., sex, spiculation, and nodule location). The model consists of three steps: (1) a deep learning algorithm automatically extracts features from the CT scans, (2) the clinical features are concatenated with the nodule features after dimension reduction by principal component analysis (PCA), and (3) a multivariate logistic regression model classifies the malignancy of the lung nodules. The model was trained on a dataset containing 1,556 nodules from 813 patients from the National Lung Screening Trial (NLST). Its performance was evaluated on the independent, multi-institutional LIDC and Infervision Multi-Center (IMC) datasets, which contain 562 nodules from 293 patients and 2,044 nodules from 589 patients, respectively. Model accuracy was measured by the area under the curve (AUC) of receiver operating characteristic (ROC) analysis. Results: The AUCs on the NLST, LIDC, and IMC datasets were 0.91, 0.86, and 0.95, respectively. The inclusion of clinical features did not significantly improve model performance. Quantitatively, the summed weight on the prediction of the 10 nodule features extracted by the deep learning algorithm was 0.091, while the weights of patient sex, nodule spiculation, and location were 0.031, 0.052, and 0.008, respectively. Conclusion: The convolutional neural network-based model for lung nodule classification generalized to multiple datasets containing diverse populations. Adding three patient clinical features to the nodule features extracted by deep learning did not boost the performance of the model.
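The three-step pipeline in this abstract can be sketched compactly with scikit-learn, as below. The feature matrices are random stand-ins for the CNN-extracted nodule features and the clinical variables; the component count follows the "10 nodule features" mentioned in the results, but everything else is an assumption for illustration.

```python
# A minimal sketch of the described pipeline: CNN features -> PCA,
# concatenation with clinical features, then logistic regression.
# All data here are random placeholders, not the study's data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
cnn_features = rng.normal(size=(1556, 128))   # step 1: features from a CNN (placeholder)
clinical = rng.normal(size=(1556, 3))         # sex, spiculation, nodule location
labels = rng.integers(0, 2, size=1556)        # 1 = malignant, 0 = benign (placeholder)

# Step 2: dimension reduction, then concatenation with clinical features
nodule_feats = PCA(n_components=10).fit_transform(cnn_features)
X = np.concatenate([nodule_feats, clinical], axis=1)

# Step 3: multivariate logistic regression classifier
clf = LogisticRegression(max_iter=1000).fit(X, labels)
print(clf.predict_proba(X[:5])[:, 1])          # malignancy probabilities
```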


Electronics ◽  
2021 ◽  
Vol 10 (2) ◽  
pp. 112
Author(s):  
Fangzhou Xu ◽  
Fenqi Rong ◽  
Yunjing Miao ◽  
Yanan Sun ◽  
Gege Dong ◽  
...  

This study describes a method for classifying electrocorticograms (ECoGs) recorded during motor imagery (MI) in a brain–computer interface (BCI) system. The method differs from traditional feature extraction and classification approaches: a deep learning algorithm is used to extract features, while a traditional algorithm performs the classification. Specifically, a convolution neural network (CNN) extracts features from the training data, and those features are then classified by combining the CNN with the gradient boosting (GB) algorithm. Combining the CNN and GB algorithms helps to extract more feature information from brain activity and thereby obtain classification results for the imagined actions. The performance of the proposed framework was evaluated on dataset I of BCI Competition III. The combination of deep learning and traditional algorithms also offers some directions for future research on BCI systems.
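The hybrid scheme described here, a CNN used only as a feature extractor with a gradient boosting classifier making the final decision, can be sketched as follows. The tiny CNN and the random ECoG tensors are placeholders chosen for the example, not the authors' network or data.

```python
# A hedged sketch of CNN feature extraction followed by gradient boosting
# classification; network and data are illustrative placeholders.
import numpy as np
import torch
import torch.nn as nn
from sklearn.ensemble import GradientBoostingClassifier

class FeatureCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((4, 4)), nn.Flatten(),   # -> 8 * 4 * 4 = 128 features
        )

    def forward(self, x):
        return self.net(x)

extractor = FeatureCNN().eval()
ecog = torch.randn(64, 1, 64, 300)                 # (trials, 1, channels, time) placeholder
labels = np.random.randint(0, 2, size=64)          # imagined-movement class (placeholder)

with torch.no_grad():
    feats = extractor(ecog).numpy()                # CNN-extracted features

gb = GradientBoostingClassifier().fit(feats, labels)   # GB performs the classification
print(gb.score(feats, labels))
```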


2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Yiwen Shu ◽  
Xiwen Wu

Objective. This study explored the diagnostic performance of coronary angiography (CAG) based on a fully convolutional neural network (FCNN) algorithm for patients with coronary heart disease (CHD) and suspected (undiagnosed) myocardial ischemia. Methods. In this study, 150 hospitalized patients with CHD and undiagnosed myocardial ischemia were selected as the research subjects and divided into an observation group and a control group by the random number method. Patients in the observation group were examined with CAG assisted by the convolutional neural network (CNN) algorithm, while patients in the control group received conventional CAG. Results. The Dice coefficient, used to evaluate the segmentation, was 0.89, showing that the image processing performance of the algorithm was good. There was no statistically significant difference in the positive rates of single/double-vessel lesions between the two groups (P > 0.05), while the positive rates of multivessel lesions and total lesions in the observation group were higher than those in the control group, with statistically significant differences (P < 0.05). The examination sensitivity, specificity, accuracy, and Kappa value of the observation group were 90.9%, 60%, 82.7%, and 0.72, respectively, all higher than those of the control group. The proportion of patients positive for both myocardial ischemia and coronary artery stenosis (CAS) (82%) was higher than that of the other cases (18%), and the difference was statistically significant (P < 0.05). Conclusion. CAG based on the deep learning algorithm showed good detection performance, better displayed the coronary lesions, and reflects the promising prospects of deep learning technology in medical imaging.
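The Dice coefficient reported for the segmentation step is a standard overlap measure; the short sketch below shows how it is computed for binary masks. The masks are tiny made-up arrays, not images from the study.

```python
# Illustrative Dice coefficient for two binary segmentation masks.
import numpy as np

def dice_coefficient(pred, target, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

pred_mask = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])
true_mask = np.array([[0, 1, 1], [0, 0, 0], [0, 1, 0]])
print(dice_coefficient(pred_mask, true_mask))      # ≈ 0.67 for this toy example
```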

