Saliency-based 3D convolutional neural network for categorising common focal liver lesions on multisequence MRI

2021
Vol 12 (1)
Author(s):  
Shu-Hui Wang ◽  
Xin-Jun Han ◽  
Jing Du ◽  
Zhen-Chang Wang ◽  
Chunwang Yuan ◽  
...  

Abstract
Background: The imaging features of focal liver lesions (FLLs) are diverse and complex, and diagnosing FLLs with imaging alone remains challenging. We developed and validated an interpretable deep learning model for the classification of seven categories of FLLs on multisequence MRI and compared its differential diagnoses with those of radiologists.
Methods: In all, 557 lesions examined by multisequence MRI were utilised in this retrospective study and divided into training–validation (n = 444) and test (n = 113) datasets. The area under the receiver operating characteristic curve (AUC) was calculated to evaluate the performance of the model. The accuracy and confusion matrices of the model and of individual radiologists were compared. Saliency maps were generated to highlight the activation regions from the model's perspective.
Results: The AUC of the model's two-way classification was 0.969 (95% CI 0.944–0.994), and the AUCs of its seven-way classification ranged from 0.919 (95% CI 0.857–0.980) to 0.999 (95% CI 0.996–1.000). The model's accuracy in the seven-way classification (79.6%) was higher than that of the radiology residents (66.4%, p = 0.035) and general radiologists (73.5%, p = 0.346) but lower than that of the academic radiologists (85.4%, p = 0.291). Confusion matrices showed the sources of diagnostic error for the model and for individual radiologists for each disease. Saliency maps highlighted the activation regions associated with each predicted class.
Conclusion: This interpretable deep learning model showed high diagnostic performance in the differentiation of FLLs on multisequence MRI, and the reasoning behind its predictions can be explained via saliency maps.
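
The abstract gives no implementation details, so the following is only a minimal sketch of gradient-based saliency for a 3D CNN classifier, assuming a PyTorch model with the MRI sequences stacked as input channels; the toy architecture and tensor sizes are illustrative, not the authors' network.

```python
import torch
import torch.nn as nn

# Toy 3D CNN classifier: input channels are MRI sequences, 7 output
# classes as in the paper. The real architecture is not specified here.
class Simple3DCNN(nn.Module):
    def __init__(self, in_channels=3, num_classes=7):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(in_channels, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool3d(2),
            nn.Conv3d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),
        )
        self.classifier = nn.Linear(32, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def saliency_map(model, volume):
    """Gradient-based saliency: |d(top score)/d(input)|, max over sequences."""
    model.eval()
    volume = volume.clone().requires_grad_(True)
    scores = model(volume)
    scores[0, scores.argmax()].backward()
    return volume.grad.abs().amax(dim=1)   # (batch, D, H, W)

model = Simple3DCNN()
mri = torch.randn(1, 3, 32, 64, 64)        # toy multisequence volume
print(saliency_map(model, mri).shape)      # torch.Size([1, 32, 64, 64])
```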

Cancers
2021
Vol 14 (1)
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant-prostate-cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods, evaluation on various external data sets is crucial. While deep-learning and radiomics approaches have been compared on a single data set from one center, a comparison of the two approaches across data sets from different centers and different scanners is lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, one a publicly available patient cohort (n = 195 patients) and the other containing data from patients from two hospitals (n = 79 patients). For all patients, mpMRI scans, radiologist tumor delineations, and pathology reports were collected. During training, one of our patient cohorts (n = 271 patients) was used for both the deep-learning- and radiomics-model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performances of the models were assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared to AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, based on delineated regions, was thus a more accurate tool for significant-PCa classification on the three unseen test sets than the fully automated deep-learning model.
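
As a sketch of the evaluation protocol only (not the authors' pipeline), the per-cohort AUC comparison could look like the following; the cohort names, toy data, and stand-in predictors are all hypothetical.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def compare_models(cohorts, predict_radiomics, predict_deep):
    """Report each model's AUC on every unseen test cohort."""
    for name, (X, y) in cohorts.items():
        print(f"{name}: radiomics AUC={roc_auc_score(y, predict_radiomics(X)):.2f}, "
              f"deep learning AUC={roc_auc_score(y, predict_deep(X)):.2f}")

# Toy stand-ins for the three unseen cohorts and the two fitted models.
cohorts = {name: (rng.normal(size=(100, 8)), rng.integers(0, 2, 100))
           for name in ("own_center_cohort_2", "public_cohort", "two_hospital_cohort")}
compare_models(cohorts,
               predict_radiomics=lambda X: X[:, 0],           # placeholder scores
               predict_deep=lambda X: rng.normal(size=len(X)))
```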


Sensors
2020
Vol 20 (9)
pp. 2556
Author(s):  
Liyang Wang ◽  
Yao Mu ◽  
Jing Zhao ◽  
Xiaoya Wang ◽  
Huilian Che

The clinical symptoms of prediabetes are mild and easy to overlook, but prediabetes may develop into diabetes if early intervention is not performed. In this study, a deep learning model, referred to as IGRNet, is developed to effectively detect and diagnose prediabetes in a non-invasive, real-time manner using a 12-lead electrocardiogram (ECG) lasting 5 s. After searching for an appropriate activation function, we compared two mainstream deep neural networks (AlexNet and GoogLeNet) and three traditional machine learning algorithms to verify the superiority of our method. The diagnostic accuracy of IGRNet is 0.781, and the area under the receiver operating characteristic curve (AUC) is 0.777, after testing on an independent test set including a mixed group. Furthermore, the accuracy and AUC are 0.856 and 0.825, respectively, on the normal-weight-range test set. The experimental results indicate that IGRNet diagnoses prediabetes with high accuracy using ECGs, outperforming the other machine learning methods tested; this suggests its potential for application in clinical practice as a non-invasive prediabetes diagnosis technology.
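
The abstract does not describe IGRNet's layers, so here is only a minimal sketch of a CNN binary classifier over a 12-lead, 5 s ECG; the 500 Hz sampling rate (2500 samples) and the plain 1D-convolutional stack are assumptions, not the published architecture.

```python
import torch
import torch.nn as nn

class ECGNetSketch(nn.Module):
    """Toy 1D CNN over a (batch, 12 leads, 2500 samples) ECG tensor."""
    def __init__(self, leads=12, num_classes=2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(leads, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, num_classes),   # prediabetes vs. normal
        )

    def forward(self, x):
        return self.net(x)

model = ECGNetSketch()
logits = model(torch.randn(4, 12, 2500))  # 4 ECGs, 12 leads, 5 s at 500 Hz
print(logits.shape)                       # torch.Size([4, 2])
```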


PLoS ONE
2021
Vol 16 (6)
pp. e0253239
Author(s):  
Yiyun Chen ◽  
Craig S. Roberts ◽  
Wanmei Ou ◽  
Tanaz Petigara ◽  
Gregory V. Goldmacher ◽  
...  

Background: The World Health Organization (WHO)-defined radiological pneumonia is a preferred endpoint in pneumococcal vaccine efficacy and effectiveness studies in children. Automating the WHO methodology may support more widespread application of this endpoint.
Methods: We trained a deep learning model to classify pneumonia CXRs in children using the WHO's standardized methodology. The model was pretrained on CheXpert, a dataset containing 224,316 adult CXRs, and fine-tuned on PERCH, a pediatric dataset containing 4,172 CXRs. The model was then tested on two pediatric CXR datasets released by the WHO. We also compared the model's performance to that of radiologists and pediatricians.
Results: The average area under the receiver operating characteristic curve (AUC) for primary endpoint pneumonia (PEP) across 10-fold validation of PERCH images was 0.928; the average AUC after testing on WHO images was 0.977. The model's classification performance was better on test images with high inter-observer agreement; however, the model still outperformed human assessments in AUC and precision-recall spaces on low-agreement images.
Conclusion: A deep learning model can classify pneumonia CXR images in children at a performance comparable to human readers. Our method lays a strong foundation for the potential inclusion of computer-aided readings of pediatric CXRs in vaccine trials and epidemiology studies.
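
A minimal sketch of the pretrain-then-fine-tune recipe described above: the DenseNet-121 backbone, the checkpoint path, and the learning rates are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a CXR classifier pretrained on a large adult dataset,
# then fine-tune it on pediatric images for the WHO pneumonia endpoint.
model = models.densenet121(weights=None)
# state = torch.load("chexpert_pretrained.pt")   # hypothetical checkpoint
# model.load_state_dict(state)

# Replace the head for the binary primary-endpoint-pneumonia task.
model.classifier = nn.Linear(model.classifier.in_features, 2)

# Lower learning rate for pretrained layers than for the new head.
optimizer = torch.optim.Adam([
    {"params": model.features.parameters(), "lr": 1e-5},
    {"params": model.classifier.parameters(), "lr": 1e-4},
])
criterion = nn.CrossEntropyLoss()

x = torch.randn(2, 3, 224, 224)                  # toy pediatric CXR batch
loss = criterion(model(x), torch.tensor([0, 1]))
loss.backward()
optimizer.step()
```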


Author(s):  
Hamza Abbad ◽  
Shengwu Xiong

Automatic diacritization is an Arabic natural language processing problem based on the sequence labeling task, where the labels are the diacritics and the letters are the sequence elements. A letter can have from zero up to two diacritics. The dataset used was a subset of the preprocessed version of the Tashkeela corpus. We developed a deep learning model composed of a stack of four bidirectional long short-term memory hidden layers of the same size, with an output layer at every level. The levels correspond to the groups into which we classified the diacritics (short vowels, double case-endings, Shadda, and Sukoon); a sketch of this architecture follows below. Before training, the data were divided into input vectors containing letter indexes and output vectors containing the indexes of the diacritics within their groups. The input and output vectors are concatenated, and then an overlapping sliding-window operation is performed to generate contiguous, fixed-size samples; these are used for both training and evaluation. Finally, we ran tests using the standard metrics with all of their variations and compared our results with two recent state-of-the-art works. Our model achieved a 3% diacritization error rate and an 8.99% word error rate when including all letters. We also generated the confusion matrix to show the performance per output and analyzed the mismatches in the first 500 lines to classify the model's errors according to their linguistic nature.
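
A minimal sketch of the described stack, assuming PyTorch; the vocabulary size, hidden size, and per-group class counts are illustrative guesses, not the paper's values.

```python
import torch
import torch.nn as nn

class DiacritizerSketch(nn.Module):
    """Four stacked BiLSTMs with an output head at every level, one level
    per diacritic group (short vowels, double case-endings, Shadda, Sukoon)."""
    def __init__(self, vocab=40, hidden=128, group_classes=(7, 4, 2, 2)):
        super().__init__()
        self.embed = nn.Embedding(vocab, hidden)
        self.lstms, self.heads = nn.ModuleList(), nn.ModuleList()
        for n_cls in group_classes:
            in_size = hidden if not self.lstms else 2 * hidden
            self.lstms.append(nn.LSTM(in_size, hidden,
                                      bidirectional=True, batch_first=True))
            self.heads.append(nn.Linear(2 * hidden, n_cls))

    def forward(self, letter_ids):                 # (batch, window_len)
        h = self.embed(letter_ids)
        outputs = []
        for lstm, head in zip(self.lstms, self.heads):
            h, _ = lstm(h)
            outputs.append(head(h))                # per-letter logits per group
        return outputs

model = DiacritizerSketch()
windows = torch.randint(0, 40, (8, 100))           # 8 sliding windows, 100 letters
print([o.shape for o in model(windows)])
```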


Diagnostics
2021
Vol 11 (7)
pp. 1182
Author(s):  
Cheng-Yi Kao ◽  
Chiao-Yun Lin ◽  
Cheng-Chen Chao ◽  
Han-Sheng Huang ◽  
Hsing-Yu Lee ◽  
...  

We aimed to set up an Automated Radiology Alert System (ARAS) for the detection of pneumothorax in chest radiographs by a deep learning model, and to compare its efficiency and diagnostic performance with the existing Manual Radiology Alert System (MRAS) at a tertiary medical center. This study retrospectively collected 1235 chest radiographs with pneumothorax labeling from 2013 to 2019 and 337 chest radiographs with negative findings from 2019, which were separated into training and validation datasets for the deep learning model of the ARAS. The efficiency before and after using the model was compared in terms of alert time and report time. During parallel running of the two systems from September to October 2020, chest radiographs prospectively acquired in the emergency department from patients older than 6 years served as the testing dataset for the comparison of diagnostic performance. The efficiency improved after using the model, with the mean alert time falling from 8.45 min to 0.69 min and the mean report time from 2.81 days to 1.59 days. The comparison of the diagnostic performance of both systems using 3739 chest radiographs acquired during parallel running showed that the ARAS was better than the MRAS in terms of sensitivity (recall), area under the receiver operating characteristic curve, and F1 score (0.837 vs. 0.256, 0.914 vs. 0.628, and 0.754 vs. 0.407, respectively), but worse in terms of positive predictive value (PPV, i.e. precision) (0.686 vs. 1.000). This study successfully designed a deep learning model for pneumothorax detection on chest radiographs and set up an ARAS with improved efficiency and overall diagnostic performance.
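
The reported head-to-head metrics can all be derived from each system's alerts and the model's probabilities; a small sketch with toy placeholder data:

```python
from sklearn.metrics import (confusion_matrix, f1_score, precision_score,
                             recall_score, roc_auc_score)

# Toy ground truth, one system's binary alerts, and model probabilities.
y_true = [1, 1, 1, 0, 0, 0, 0, 1]
alerts = [1, 1, 0, 0, 0, 1, 0, 1]
probs  = [.9, .8, .4, .2, .1, .7, .3, .6]

print("sensitivity (recall):", recall_score(y_true, alerts))
print("PPV (precision):     ", precision_score(y_true, alerts))
print("F1 score:            ", f1_score(y_true, alerts))
print("AUC:                 ", roc_auc_score(y_true, probs))
print("confusion matrix:\n", confusion_matrix(y_true, alerts))
```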


2020
Author(s):  
Hyung Jun Park ◽  
Dae Yon Jung ◽  
Wonjun Ji ◽  
Chang-Min Choi

BACKGROUND: Detecting bacteremia among surgical in-patients is harder than among other patients because of the inflammatory condition caused by the surgery. Previous criteria such as the systemic inflammatory response syndrome or Sepsis-3 are not suited for use in general wards, and thus many clinicians usually rely on practical sense to diagnose postoperative infection.
OBJECTIVE: This study aims to evaluate the performance of continuous monitoring with a deep learning model for the early detection of bacteremia in surgical in-patients in the general ward and the intensive care unit (ICU).
METHODS: In this retrospective cohort study, we included 36,023 consecutive patients who underwent general surgery between October and December 2017 at a tertiary referral hospital in South Korea. The primary outcome was the area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPRC) for detecting bacteremia with the deep learning model, and the secondary outcome was the feature explainability of the model by occlusion analysis.
RESULTS: Of the 36,023 patients in the data set, 720 had bacteremia. Our deep learning-based model showed an AUROC of 0.97 (95% CI 0.974-0.981) and an AUPRC of 0.17 (95% CI 0.147-0.203) for detecting bacteremia in surgical in-patients. For predicting bacteremia within the previous 24-hour period, the AUROC and AUPRC values were 0.93 and 0.15, respectively. Occlusion analysis showed that vital signs and laboratory measurements (eg, kidney function tests and the white blood cell group) were the most important variables for detecting bacteremia.
CONCLUSIONS: A deep learning model based on time series electronic health records data showed a high detection ability for bacteremia in surgical in-patients in the general ward and the ICU. The model may be able to assist clinicians in evaluating infection among in-patients, ordering blood cultures, and prescribing antibiotics with real-time monitoring.
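
Occlusion analysis itself is simple to sketch: mask one input feature at a time and measure the drop in predicted risk. The feature names, the zero baseline, and the stand-in `model_predict` below are hypothetical illustrations, not the study's variables or model.

```python
import numpy as np

def occlusion_importance(model_predict, x, feature_names):
    """x: (timesteps, features) time series for one patient."""
    base = model_predict(x)
    importances = {}
    for j, name in enumerate(feature_names):
        x_occluded = x.copy()
        x_occluded[:, j] = 0.0                 # occlude feature j with a baseline
        importances[name] = base - model_predict(x_occluded)
    return importances

rng = np.random.default_rng(0)
x = rng.normal(size=(48, 4))                   # 48 hourly steps, 4 features
model_predict = lambda v: 1 / (1 + np.exp(-(v[:, 0].mean() + 0.5 * v[:, 2].mean())))
print(occlusion_importance(
    model_predict, x, ["heart_rate", "temperature", "wbc", "creatinine"]))
```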


2021
Author(s):  
Selena I. Huisman ◽  
Arthur T.J. van der Boog ◽  
Fia Cialdella ◽  
Joost J.C. Verhoeff ◽  
Szabolcs David

Background and purpose. Changes in healthy-appearing brain tissue after radiotherapy have been observed previously, but they remain difficult to quantify. Because of these changes, patients undergoing radiotherapy may have a higher risk of cognitive decline, leading to a reduced quality of life. The tissue atrophy experienced is similar to the effects of normal aging in healthy individuals. We propose a new way to quantify tissue changes after cranial radiotherapy as accelerated brain aging using the BrainAGE framework.
Materials and methods. BrainAGE was applied to longitudinal MRI scans of 32 glioma patients who had undergone radiotherapy. Using a pre-trained deep learning model, brain age was estimated for all patients' pre-radiotherapy planning and follow-up MRI scans to quantify the changes occurring in the brain over time. Saliency maps were extracted from the model to spatially identify which areas of the brain the deep learning model weights most heavily when predicting age. The predicted ages from the deep learning model were used in a linear mixed effects model to quantify aging and aging rates for patients after radiotherapy.
Results. The linear mixed effects model yielded an accelerated aging rate of 2.78 years per year, a significant increase over the normal aging rate of 1 (p < 0.05, confidence interval (CI) 2.54-3.02). Furthermore, the saliency maps showed numerous anatomically well-defined areas, e.g. Heschl's gyrus among others, that the model weighed as important for brain age prediction.
Conclusion. We found that patients undergoing radiotherapy are affected by significant radiation-induced accelerated aging, with several anatomically well-defined areas contributing to this aging. The estimated brain age could provide a method for quantifying quality of life post-radiotherapy.
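
The mixed-effects step can be sketched directly: regress the model-predicted brain age on time since radiotherapy with a random intercept per patient, so the fixed-effect slope is the aging rate in years per year. The data below are simulated for illustration only (the reported rate of 2.78 is baked into the simulation).

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
n_patients, n_scans = 32, 4
df = pd.DataFrame({
    "patient": np.repeat(np.arange(n_patients), n_scans),
    "years_since_rt": np.tile(np.arange(n_scans) * 0.5, n_patients),
})
baseline_age = rng.normal(55, 10, n_patients)[df["patient"]]
df["predicted_brain_age"] = (baseline_age + 2.78 * df["years_since_rt"]
                             + rng.normal(0, 1.0, len(df)))

# Random intercept per patient; the slope estimates the aging rate.
fit = smf.mixedlm("predicted_brain_age ~ years_since_rt",
                  df, groups=df["patient"]).fit()
print(fit.params["years_since_rt"])   # close to the simulated 2.78 years/year
```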


2021
Vol 10 (1)
Author(s):  
Zheng Kou ◽  
Yi-Fan Huang ◽  
Ao Shen ◽  
Saeed Kosari ◽  
Xiang-Rong Liu ◽  
...  

Abstract
Background: Coronaviruses can be isolated from bats, civets, pangolins, birds and other wild animals. As animal-origin pathogens, coronaviruses can cross the species barrier and cause pandemics in humans. In this study, a deep learning model for the early prediction of pandemic risk was proposed based on the sequences of viral genomes.
Methods: A total of 3257 genomes were downloaded from the Coronavirus Genome Resource Library. We present a deep learning model of cross-species coronavirus infection that combines a bidirectional gated recurrent unit network with a one-dimensional convolution. The genome sequence of an animal-origin coronavirus is input directly to extract features and predict pandemic risk. The best performance was obtained with the use of a pre-trained DNA vector and an attention mechanism. The area under the receiver operating characteristic curve (AUROC) and the area under the precision-recall curve (AUPR) were used to evaluate the predictive models.
Results: The six specific models achieved good performance for their corresponding virus groups (1 for AUROC and 1 for AUPR). The general model with the pre-trained vector and attention mechanism provided excellent predictions for all virus groups (1 for AUROC and 1 for AUPR), while models without the pre-trained vector or the attention mechanism showed a clear reduction in performance (about 5–25%). Re-training experiments showed that the general model has good transfer learning capability (average across six groups: 0.968 for AUROC and 0.942 for AUPR) and should give reasonable predictions for potential pathogens of the next pandemic. Artificial negative data, in which the coding region of the spike protein was replaced, were also predicted correctly (100% accuracy). An easy-to-use tool implementing our predictor was created in Python.
Conclusions: A robust deep learning model with a pre-trained DNA vector and an attention mechanism captured features from the whole genomes of animal-origin coronaviruses and can predict the risk of cross-species infection, providing early warning of the next pandemic.
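
A minimal sketch of the described combination of a one-dimensional convolution and a bidirectional GRU over a genome sequence; the vocabulary, embedding, and layer sizes are illustrative assumptions, and the paper's pre-trained DNA vectors and attention mechanism are omitted here.

```python
import torch
import torch.nn as nn

class CoronavirusRiskSketch(nn.Module):
    """1D conv over an embedded nucleotide sequence, feeding a BiGRU,
    with a binary head producing a pandemic-risk score."""
    def __init__(self, vocab=5, embed=32, conv_ch=64, hidden=64):
        super().__init__()
        self.embed = nn.Embedding(vocab, embed)      # A/C/G/T/N
        self.conv = nn.Conv1d(embed, conv_ch, kernel_size=9, stride=3)
        self.gru = nn.GRU(conv_ch, hidden, bidirectional=True, batch_first=True)
        self.head = nn.Linear(2 * hidden, 1)

    def forward(self, seq_ids):                      # (batch, seq_len)
        x = self.embed(seq_ids).transpose(1, 2)      # (batch, embed, L)
        x = torch.relu(self.conv(x)).transpose(1, 2) # (batch, L', conv_ch)
        _, h = self.gru(x)                           # h: (2, batch, hidden)
        return torch.sigmoid(self.head(torch.cat([h[0], h[1]], dim=1)))

model = CoronavirusRiskSketch()
print(model(torch.randint(0, 5, (2, 3000))).shape)   # torch.Size([2, 1])
```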


2021
Author(s):  
So Jin Park ◽  
Tae Hoon Ko ◽  
Chan Kee Park ◽  
Yong Chan Kim ◽  
In Young Choi

BACKGROUND: Pathologic myopia is a disease that causes vision impairment and blindness, so it is essential to diagnose it in a timely manner. However, there is no standardized definition of pathologic myopia, and its interpretation on optical coherence tomography (OCT) is subjective and requires considerable time and money. There is therefore a need for a tool that can diagnose pathologic myopia automatically and in a timely manner.
OBJECTIVE: The purpose of this study was to develop an algorithm that uses OCT to automatically diagnose patients with pathologic myopia who require treatment.
METHODS: This study was conducted using data from patients who underwent OCT tests at the Ophthalmology Departments of Incheon St. Mary's Hospital and Seoul St. Mary's Hospital from January 2012 to May 2020. To automatically diagnose pathologic myopia, a deep learning model was developed using 3D OCT images. Models were developed using transfer learning based on four pre-trained convolutional neural networks (ResNet18, ResNeXt50, EfficientNetB0, EfficientNetB4). The performance of each model was evaluated and compared based on accuracy, sensitivity, specificity, and the area under the receiver operating characteristic curve (AUROC).
RESULTS: The four models were evaluated and compared on the test datasets. The model based on EfficientNetB4 showed the best performance (95% accuracy, 93% sensitivity, 96% specificity, and 98% AUROC).
CONCLUSIONS: In this study, we developed a deep learning model that can automatically diagnose pathologic myopia without segmentation of 3D OCT images. Our deep learning model based on EfficientNetB4 demonstrated excellent performance in identifying pathologic myopia.
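
A minimal sketch of the transfer-learning setup with EfficientNet-B4, assuming torchvision and ImageNet weights; how the 3D OCT volumes are fed to the 2D backbone (e.g. slice-wise) is not stated here and is left out of the sketch.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained EfficientNet-B4 with its classifier replaced for
# the binary pathologic-myopia task (downloads weights on first run).
model = models.efficientnet_b4(weights=models.EfficientNet_B4_Weights.DEFAULT)
model.classifier[1] = nn.Linear(model.classifier[1].in_features, 2)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

x = torch.randn(2, 3, 380, 380)   # EfficientNet-B4's nominal input size
loss = criterion(model(x), torch.tensor([0, 1]))
loss.backward()
optimizer.step()
```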


2021
Author(s):  
Dong Jin Park ◽  
Min Woo Park ◽  
Homin Lee ◽  
Young-Jin Kim ◽  
Yeongsic Kim ◽  
...  

Abstract
Artificial intelligence is a concept that includes machine learning and deep learning. The deep learning model used in this study is a deep neural network (DNN) with two or more hidden layers. In this study, a multi-layer perceptron (MLP) and machine learning models (XGBoost, LightGBM) were used. An MLP consists of at least three layers: an input layer, a hidden layer, and an output layer. In general, tree models or linear models based on machine learning are widely used for classification. We analyzed our data by applying deep learning (the MLP) to improve performance, with good results. The deep learning and machine learning models showed differences in predictive power and disease classification patterns. We used a confusion matrix and analyzed feature importance using the SHAP value method. Here, we present a protocol to confirm that deep learning can perform well in disease classification using numerical structured hospital data (laboratory tests).
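
The SHAP analysis on a tree model is easy to sketch; the simulated tabular data below stand in for the hospital laboratory values, and the XGBoost settings are illustrative.

```python
import numpy as np
import shap
import xgboost
from sklearn.model_selection import train_test_split

# Simulated stand-in for numerical structured laboratory data.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 6))
y = (X[:, 0] + 0.5 * X[:, 3] + rng.normal(scale=0.5, size=500) > 0).astype(int)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Train a tree model and explain its predictions with SHAP values.
model = xgboost.XGBClassifier(n_estimators=100, max_depth=3)
model.fit(X_train, y_train)

explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test)
print("mean |SHAP| per feature:", np.abs(shap_values).mean(axis=0))
```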

