Faster Region-Based Convolutional Neural Network in the Classification of Different Parkinsonism Patterns of the Striatum on Maximum Intensity Projection Images of [18F]FP-CIT Positron Emission Tomography

Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1557
Author(s):  
Byung Wook Choi ◽  
Sungmin Kang ◽  
Hae Won Kim ◽  
Oh Dae Kwon ◽  
Huy Duc Vu ◽  
...  

The aim of this study was to compare the performance of a deep-learning convolutional neural network model (Faster R-CNN) with that of nuclear medicine (NM) physicians in detecting imaging findings suggestive of idiopathic Parkinson’s disease (PD) on [18F]FP-CIT PET maximum intensity projection (MIP) images. The anteroposterior MIP images of the [18F]FP-CIT PET scans of 527 patients were classified as having PD (139 images) or non-PD (388 images) patterns according to the final diagnosis. Non-PD patterns were subdivided into overall-normal (ONL, 365 images) and vascular parkinsonism with definite defects or prominently decreased dopamine transporter binding (dVP, 23 images) patterns. Faster R-CNN was trained on 120 PD, 320 ONL, and 16 dVP pattern images and tested on 19 PD, 45 ONL, and seven dVP pattern images. The performance of Faster R-CNN and three NM physicians was assessed using receiver operating characteristic (ROC) curve analysis. Differences in performance were assessed using Cochran’s Q test, and inter-rater reliability was calculated. Faster R-CNN showed high accuracy in differentiating PD from non-PD patterns, and also from dVP patterns, with results comparable to those of the NM physicians. There were no significant differences in the area under the curve or overall performance. The inter-rater reliability among Faster R-CNN and the NM physicians showed substantial to almost perfect agreement. The deep-learning model accurately differentiated PD from non-PD patterns on MIP images of [18F]FP-CIT PET, with performance comparable to that of NM physicians.
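As an aside on the ROC analysis used above: the area under the ROC curve equals the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative one (the normalized Mann-Whitney U statistic). A minimal sketch, not the authors' code, with made-up scores:

```python
# Illustrative: AUC as the rank statistic P(score_pos > score_neg),
# counting ties as half a win. Scores below are invented for the example.

def roc_auc(scores_pos, scores_neg):
    """AUC = P(pos > neg) + 0.5 * P(tie), over all positive/negative pairs."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical model scores for 4 PD-pattern and 4 non-PD-pattern images.
auc = roc_auc([0.9, 0.8, 0.7, 0.4], [0.6, 0.3, 0.2, 0.1])  # 15/16 = 0.9375
```

A perfect separation of the two groups yields an AUC of 1.0; chance-level scoring yields 0.5.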

2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background: Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods: This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep-neural-network-based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results: Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set of 355 crowdsourced participants, achieving an area under the receiver operating characteristic curve of 0.846 on the task of COVID-19 classification. Conclusion: This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
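The abstract does not specify the audio front end, but a common first step before feeding recordings to a CNN is slicing the waveform into fixed-length overlapping frames. The sketch below is an assumption for illustration (sample rate, 25 ms window and 10 ms hop are conventional choices, not necessarily the paper's):

```python
import numpy as np

# Hypothetical preprocessing sketch: split a 1-D audio signal into
# overlapping fixed-length frames, as is typical before spectrogram/CNN
# pipelines. Parameters are illustrative assumptions.

def frame_audio(signal, frame_len, hop):
    """Return a (num_frames, frame_len) array of overlapping frames."""
    n = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len] for i in range(n)])

sr = 16000                       # assumed sample rate (Hz)
audio = np.zeros(sr * 2)         # 2 s of placeholder audio
frames = frame_audio(audio, frame_len=400, hop=160)  # 25 ms windows, 10 ms hop
```

Each frame would then typically be transformed (e.g., to a log-mel spectrogram) before classification.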


10.2196/24973 ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. e24973
Author(s):  
Thao Thi Ho ◽  
Jongmin Park ◽  
Taewoo Kim ◽  
Byunggeon Park ◽  
Jaehee Lee ◽  
...  

Background: Many COVID-19 patients rapidly progress to respiratory failure with a broad range of severities. Identification of high-risk cases is critical for early intervention. Objective: The aim of this study is to develop deep learning models that can rapidly identify high-risk COVID-19 patients based on computed tomography (CT) images and clinical data. Methods: We analyzed 297 COVID-19 patients from five hospitals in Daegu, South Korea. A mixed artificial convolutional neural network (ACNN) model, combining an artificial neural network for clinical data with a convolutional neural network for 3D CT imaging data, was developed to classify cases as either high risk of severe progression (ie, event) or low risk (ie, event-free). Results: Using the mixed ACNN model, we obtained high classification performance with novel coronavirus pneumonia lesion images (ie, 93.9% accuracy, 80.8% sensitivity, 96.9% specificity, and 0.916 area under the curve [AUC] score) and lung segmentation images (ie, 94.3% accuracy, 74.7% sensitivity, 95.9% specificity, and 0.928 AUC score) for the event versus event-free groups. Conclusions: Our study successfully differentiated high-risk cases among COVID-19 patients using imaging and clinical features. The developed model can serve as a predictive tool to guide early intervention with aggressive therapies.
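The accuracy, sensitivity, and specificity figures reported above follow directly from confusion-matrix counts. A minimal sketch with invented counts (not the study's actual confusion matrix):

```python
# Illustrative only: the TP/FN/TN/FP counts below are made up; they merely
# show how the three reported metrics are derived from a confusion matrix.

def binary_metrics(tp, fn, tn, fp):
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    sensitivity = tp / (tp + fn)   # recall on the high-risk (event) class
    specificity = tn / (tn + fp)   # recall on the event-free class
    return accuracy, sensitivity, specificity

acc, sens, spec = binary_metrics(tp=40, fn=10, tn=235, fp=12)
```

Note that with imbalanced event/event-free groups, accuracy alone can be misleading, which is why sensitivity and specificity are reported alongside it.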


2021 ◽  
Vol 13 (13) ◽  
pp. 2450
Author(s):  
Aaron E. Maxwell ◽  
Timothy A. Warner ◽  
Luis Andrés Guillén

Convolutional neural network (CNN)-based deep learning (DL) is a powerful, recently developed image classification approach. With origins in the computer vision and image processing communities, the accuracy assessment methods developed for CNN-based DL use a wide range of metrics that may be unfamiliar to the remote sensing (RS) community. To explore the differences between traditional RS and DL RS methods, we surveyed a random selection of 100 papers from the RS DL literature. The results show that RS DL studies have largely abandoned traditional RS accuracy assessment terminology, though some of the accuracy measures typically used in DL papers, most notably precision and recall, have direct equivalents in traditional RS terminology. Some of the DL accuracy terms have multiple names, or are equivalent to another measure. In our sample, DL studies only rarely reported a complete confusion matrix, and when they did so, it was even more rare that the confusion matrix estimated population properties. On the other hand, some DL studies are increasingly paying attention to the role of class prevalence in designing accuracy assessment approaches. DL studies that evaluate the decision boundary threshold over a range of values tend to use the precision-recall (P-R) curve and the associated area under the curve (AUC) measures of average precision (AP) and mean average precision (mAP), rather than the traditional receiver operating characteristic (ROC) curve and its AUC. DL studies are also notable for testing the generalization of their models on entirely new datasets, including data from new areas, new acquisition times, or even new sensors.
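To make the P-R terminology concrete: under the usual DL definitions, precision and recall are evaluated at each rank of the score-sorted predictions, and average precision (AP) accumulates precision at every rank where a true positive is recovered. A small sketch (toy labels, not from the survey):

```python
# Sketch of average precision (AP) over a ranked prediction list.
# Input: 1 = true positive, 0 = false positive, sorted from highest to
# lowest model score. Labels below are invented for illustration.

def average_precision(labels_sorted_by_score):
    total_pos = sum(labels_sorted_by_score)
    tp, ap = 0, 0.0
    for rank, label in enumerate(labels_sorted_by_score, start=1):
        if label == 1:
            tp += 1
            ap += tp / rank          # precision at this recall point
    return ap / total_pos

ap = average_precision([1, 1, 0, 1, 0])  # 3 positives among 5 predictions
```

Unlike ROC AUC, AP depends on class prevalence, which is one reason P-R analysis is preferred when positives are rare.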


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Kyung-Ok Cho ◽  
Hyun-Jong Jang

Abstract: The manual review of an electroencephalogram (EEG) for seizure detection is a laborious and error-prone process. Thus, automated seizure detection based on machine learning has been studied for decades. Recently, deep learning has been adopted in order to avoid manual feature extraction and selection. In the present study, we systematically compared the performance of different combinations of input modalities and network structures on a fixed window size and dataset to ascertain an optimal combination. The raw time-series EEG, periodogram of the EEG, 2D images of short-time Fourier transform results, and 2D images of raw EEG waveforms were obtained from 5-s segments of intracranial EEGs recorded from a mouse model of epilepsy. A fully connected neural network (FCNN), recurrent neural network (RNN), and convolutional neural network (CNN) were implemented to classify the various inputs. The classification results for the test dataset showed that CNN performed better than FCNN and RNN, with the area under the curve (AUC) for the receiver operating characteristic curves ranging from 0.983 to 0.984, from 0.985 to 0.989, and from 0.989 to 0.993 for FCNN, RNN, and CNN, respectively. As for input modalities, 2D images of raw EEG waveforms yielded the best result with an AUC of 0.993. Thus, CNN can be the most suitable network structure for automated seizure detection when applied to images of raw EEG waveforms, since CNN can effectively learn a general spatially invariant representation of seizure patterns in 2D representations of raw EEG.
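One of the input modalities above, the periodogram, is simply the power of the signal at each frequency bin. A minimal sketch on a synthetic 5-s segment (the 200 Hz sampling rate and pure sine input are assumptions for illustration, not the paper's recording setup):

```python
import numpy as np

# Illustrative periodogram of a 5-s "EEG" segment via a plain FFT.
# Sampling rate and the synthetic 8 Hz sine are assumptions.

fs = 200                                   # assumed sampling rate, Hz
t = np.arange(5 * fs) / fs                 # 5-s segment, 1000 samples
eeg = np.sin(2 * np.pi * 8 * t)            # synthetic 8 Hz oscillation

spectrum = np.abs(np.fft.rfft(eeg)) ** 2   # power at each frequency bin
freqs = np.fft.rfftfreq(len(eeg), d=1 / fs)
peak_hz = freqs[np.argmax(spectrum)]       # dominant frequency: 8.0 Hz
```

With a 5-s window the frequency resolution is 0.2 Hz, so the synthetic 8 Hz component falls exactly on a bin.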


2021 ◽  
Vol 11 (8) ◽  
pp. 786
Author(s):  
Chaoyue Chen ◽  
Yisong Cheng ◽  
Jianfeng Xu ◽  
Ting Zhang ◽  
Xin Shu ◽  
...  

The purpose of this study was to determine whether a deep-learning-based assessment system could facilitate preoperative grading of meningioma. This was a retrospective study conducted at two institutions covering 643 patients. The system, designed with a cascade network structure, was developed using deep-learning technology for automatic tumor detection, visual assessment, and grading prediction. Specifically, a modified U-Net convolutional neural network was first established to segment tumor images. Subsequently, the segmentations were introduced into rendering algorithms for spatial reconstruction and into another DenseNet convolutional neural network for grading prediction. The trained models were integrated as a system, and its robustness was tested based on its performance on an external dataset from the second institution involving different magnetic resonance imaging platforms. The results showed that the segmentation model performed notably well, with Dice coefficients of 0.920 ± 0.009 in the validation group. With accurately segmented tumor images, the rendering model delicately reconstructed the tumor body and clearly displayed the important intracranial vessels. The DenseNet model also achieved high accuracy, with an area under the curve of 0.918 ± 0.006 and accuracy of 0.901 ± 0.039 when classifying tumors into low-grade and high-grade meningiomas. Moreover, the system exhibited good performance on the external validation dataset.
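The Dice coefficient reported for the segmentation model measures the overlap between predicted and ground-truth masks. A minimal sketch on toy 2-D binary masks (not the study's data):

```python
import numpy as np

# Dice = 2|A ∩ B| / (|A| + |B|) on binary masks; toy 8x8 masks below.

def dice(pred, truth):
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    return 2.0 * inter / (pred.sum() + truth.sum())

truth = np.zeros((8, 8), dtype=int); truth[2:6, 2:6] = 1   # 16-pixel square
pred = np.zeros((8, 8), dtype=int); pred[3:7, 2:6] = 1     # shifted one row
score = dice(pred, truth)                                  # 2*12/(16+16) = 0.75
```

A Dice of 0.920, as reported, means the predicted and reference tumor masks share about 92% of their combined area.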


Author(s):  
Muhaza Liebenlito ◽  
Yanne Irene ◽  
Abdul Hamid

Abstract: In this paper, we use chest X-ray images of tuberculosis and pneumonia patients to diagnose disease using a convolutional neural network model. We use 4273 images of pneumonia, 1989 normal images, and 394 images of tuberculosis. The data are divided into 80% as the training set and 20% as the testing set. All images undergo three preprocessing steps: resizing, RGB-to-grayscale conversion, and Gaussian normalization. On the training dataset, undersampling and oversampling are used to balance the classes. The best model was chosen based on the area under the receiver operating characteristic (ROC) curve. The best model was obtained when training on the oversampled dataset, with an area under the curve of 0.99 for tuberculosis and 0.98 for pneumonia. This model correctly identifies 86% of tuberculosis cases and 96% of pneumonia cases.
Keywords: chest X-ray images; tuberculosis; pneumonia; convolutional neural network.
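The oversampling step described above can be sketched as follows: minority classes are randomly re-sampled with replacement until every class matches the majority class size. The class counts mirror the paper's dataset; the sampling code itself is an illustrative assumption, not the authors' implementation:

```python
import random

# Hedged sketch of class balancing by random oversampling with replacement.
# Dataset sizes match the abstract; samples are placeholder integers.

def oversample(dataset):
    """dataset: dict mapping class name -> list of samples."""
    target = max(len(v) for v in dataset.values())
    rng = random.Random(0)   # fixed seed for reproducibility
    return {c: v + rng.choices(v, k=target - len(v)) for c, v in dataset.items()}

data = {"pneumonia": list(range(4273)),
        "normal": list(range(1989)),
        "tuberculosis": list(range(394))}
balanced = oversample(data)
sizes = {c: len(v) for c, v in balanced.items()}   # all classes -> 4273
```

Undersampling is the mirror operation: each class is trimmed down to the minority class size (394 here), at the cost of discarding data.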


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi-polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.
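As a much-simplified stand-in for the Voronoi-based neighbor information mentioned above: one common ingredient of such atomic fingerprints is a coordination number, i.e., the count of neighbors within a distance cutoff of each atom. The cutoff and coordinates below are illustrative assumptions, not the paper's featurization:

```python
import numpy as np

# Toy coordination-number feature: count neighbors within a cutoff radius
# for each atom. A Voronoi construction (as in the paper) defines neighbors
# geometrically instead of by a fixed cutoff; this is a simplification.

def coordination_numbers(positions, cutoff):
    d = np.linalg.norm(positions[:, None, :] - positions[None, :, :], axis=-1)
    return ((d < cutoff) & (d > 0)).sum(axis=1)

atoms = np.array([[0.0, 0.0, 0.0],
                  [1.0, 0.0, 0.0],
                  [0.0, 1.0, 0.0],
                  [5.0, 5.0, 5.0]])          # one isolated atom
cn = coordination_numbers(atoms, cutoff=1.5)  # [2, 2, 2, 0]
```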


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Young-Gon Kim ◽  
Sungchul Kim ◽  
Cristina Eunbee Cho ◽  
In Hye Song ◽  
Hee Jin Lee ◽  
...  

Abstract: Fast and accurate confirmation of metastasis on the frozen tissue section of intraoperative sentinel lymph node biopsy is an essential tool for critical surgical decisions. However, accurate diagnosis by pathologists is difficult within the time limitations. Training a robust and accurate deep learning model is also difficult owing to the limited number of frozen datasets with high-quality labels. To overcome these issues, we validated the effectiveness of transfer learning from CAMELYON16 to improve the performance of a convolutional neural network (CNN)-based classification model on our frozen dataset (N = 297) from Asan Medical Center (AMC). Among the 297 whole slide images (WSIs), 157 and 40 WSIs were used to train deep learning models with dataset ratios of 2, 4, 8, 20, 40, and 100%. The remaining 100 WSIs were used to validate model performance in terms of patch- and slide-level classification. An additional 228 WSIs from Seoul National University Bundang Hospital (SNUBH) were used as an external validation set. Three initial weights, i.e., scratch-based (random initialization), ImageNet-based, and CAMELYON16-based, were used to assess their effectiveness in external validation. In the patch-level classification results on the AMC dataset, CAMELYON16-based models trained with a small dataset (up to 40%, i.e., 62 WSIs) showed a significantly higher area under the curve (AUC) of 0.929 than the scratch- and ImageNet-based models at 0.897 and 0.919, respectively, while CAMELYON16-based and ImageNet-based models trained with 100% of the training dataset showed comparable AUCs of 0.944 and 0.943, respectively. For the external validation, CAMELYON16-based models showed higher AUCs than the scratch- and ImageNet-based models. These results validate the feasibility of transfer learning to enhance model performance on frozen-section datasets with limited numbers of samples.
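The dataset-ratio experiment above can be sketched numerically: subsampling the 157 training WSIs at the listed fractions (the rounding mode, plain truncation, is our assumption) reproduces the 62-WSI figure quoted for the 40% setting:

```python
# Sketch of the training-set fractions from the abstract. Truncating
# (int()) is an assumed rounding mode; it matches the quoted 62 WSIs at 40%.

train_wsis = 157
ratios = [0.02, 0.04, 0.08, 0.20, 0.40, 1.00]
subset_sizes = [int(train_wsis * r) for r in ratios]
# -> [3, 6, 12, 31, 62, 157]
```

With only 3-12 WSIs at the smallest ratios, the value of starting from CAMELYON16 weights rather than random initialization is unsurprising: the pretrained features substitute for data the frozen-section set does not have.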


2021 ◽  
Vol 13 (2) ◽  
pp. 274
Author(s):  
Guobiao Yao ◽  
Alper Yilmaz ◽  
Li Zhang ◽  
Fei Meng ◽  
Haibin Ai ◽  
...  

Available stereo matching algorithms produce a large number of false-positive matches, or only a few true positives, across oblique stereo images with large baselines. This undesired result stems from the complex perspective deformation and radiometric distortion across the images. To address this problem, we propose a novel affine-invariant feature matching algorithm with subpixel accuracy based on an end-to-end convolutional neural network (CNN). In our method, we adopt and modify a Hessian affine network, which we refer to as IHesAffNet, to obtain affine-invariant Hessian regions within a deep learning framework. To improve the correlation between corresponding features, we introduce an empirical weighted loss function (EWLF) based on negative samples found with K nearest neighbors, and then generate highly discriminative deep-learning-based descriptors with our multiple hard network structure (MTHardNets). Following this step, conjugate features are produced using the Euclidean distance ratio as the matching metric, and the accuracy of the matches is optimized through deep-learning-transform-based least square matching (DLT-LSM). Finally, experiments on large-baseline oblique stereo images acquired from ground close range and by unmanned aerial vehicle (UAV) verify the effectiveness of the proposed approach, and comprehensive comparisons demonstrate that our matching algorithm outperforms state-of-the-art methods in terms of accuracy, distribution, and correct ratio. The main contributions of this article are: (i) the proposed MTHardNets generate high-quality descriptors; and (ii) IHesAffNet produces substantial affine-invariant corresponding features with reliable transform parameters.
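The Euclidean distance-ratio metric mentioned above is the classic ratio test: a descriptor in one image is matched to its nearest neighbor in the other image only if that neighbor is sufficiently closer than the second-nearest one. A minimal sketch (the 0.8 threshold is a common choice, not necessarily the paper's):

```python
import numpy as np

# Ratio-test matching on toy 2-D descriptors. A match (i, j) is accepted
# only if the nearest neighbor is < ratio times the second-nearest distance.

def ratio_test_matches(desc_a, desc_b, ratio=0.8):
    matches = []
    for i, d in enumerate(desc_a):
        dists = np.linalg.norm(desc_b - d, axis=1)
        j1, j2 = np.argsort(dists)[:2]
        if dists[j1] < ratio * dists[j2]:
            matches.append((i, j1))
    return matches

a = np.array([[0.0, 0.0], [5.0, 5.0]])
b = np.array([[0.1, 0.0], [3.0, 0.0], [5.0, 5.1]])
m = ratio_test_matches(a, b)   # both descriptors match unambiguously
```

The test rejects ambiguous matches where two candidates are nearly equidistant, which is exactly the failure mode that large-baseline perspective distortion aggravates.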

