Using Slit-Lamp Images for Deep Learning–Based Identification of Bacterial and Fungal Keratitis

Author(s):  
Ning Hung ◽  
Eugene Yu-Chuan Kang ◽  
Andy Guan-Yu Shih ◽  
Chi-Hung Lin ◽  
Ming-Tse Kuo ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between January 1, 2010, and December 31, 2019, from two medical centers in Taiwan. We constructed a deep learning algorithm consisting of a segmentation model for cropping cornea images and a classification model that applies convolutional neural networks to differentiate between FK and BK. The model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curve. A gradient-weighted class activation mapping technique was used to plot the heat map of the model. By using 1330 images from 580 patients, the deep learning algorithm achieved an average diagnostic accuracy of 80.00%. The diagnostic accuracy for BK ranged from 79.59% to 95.91%, and that for FK ranged from 26.31% to 63.15%. DenseNet169 showed the best model performance, with an AUC of 0.78 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.
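The heat maps described above come from gradient-weighted class activation mapping (Grad-CAM). As a rough illustration of the technique only (not the authors' code), the PyTorch sketch below assumes a torchvision DenseNet fine-tuned for the two-class BK/FK task; the target layer, 224 × 224 crop size, and variable names are illustrative assumptions.

```python
# Minimal Grad-CAM sketch (assumption: a fine-tuned torchvision DenseNet and a
# 224x224 cornea-crop tensor; not the authors' implementation).
import torch
import torch.nn.functional as F
from torchvision import models

model = models.densenet169(weights="DEFAULT")
model.classifier = torch.nn.Linear(model.classifier.in_features, 2)  # BK vs. FK head
model.eval()

activations, gradients = {}, {}
layer = model.features.denseblock4  # last dense block as the illustrative target layer
layer.register_forward_hook(lambda m, i, o: activations.update(feat=o))
layer.register_full_backward_hook(lambda m, gi, go: gradients.update(grad=go[0]))

image = torch.randn(1, 3, 224, 224)           # placeholder for a cropped cornea image
logits = model(image)
logits[0, logits.argmax()].backward()          # gradient of the predicted-class logit

weights = gradients["grad"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
cam = F.relu((weights * activations["feat"]).sum(dim=1))     # weighted sum of feature maps
cam = F.interpolate(cam.unsqueeze(1), size=(224, 224), mode="bilinear")
cam = (cam - cam.min()) / (cam.max() - cam.min() + 1e-8)     # normalize to [0, 1] for overlay
```

The normalized map can then be overlaid on the original slit-lamp crop to check that high-weight regions coincide with the corneal infiltrate.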

Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1246
Author(s):  
Ning Hung ◽  
Andy Kuan-Yu Shih ◽  
Chihung Lin ◽  
Ming-Tse Kuo ◽  
Yih-Shiou Hwang ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between 1 January 2010 and 31 December 2019 from two medical centers in Taiwan. We constructed a deep learning algorithm consisting of a segmentation model for cropping cornea images and a classification model that applies different convolutional neural networks (CNNs) to differentiate between FK and BK. The CNNs included DenseNet121, DenseNet161, DenseNet169, DenseNet201, EfficientNetB3, InceptionV3, ResNet101, and ResNet50. The model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curve. A gradient-weighted class activation mapping technique was used to plot the heat map of the model. By using 1330 images from 580 patients, the deep learning algorithm achieved the highest average accuracy of 80.0%. Using different CNNs, the diagnostic accuracy for BK ranged from 79.6% to 95.9%, and that for FK ranged from 26.3% to 65.8%. DenseNet161 showed the best model performance, with an AUC of 0.85 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.
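The backbone comparison reported here, several torchvision CNNs fine-tuned on the cropped cornea images and ranked by AUC, can be sketched as below. This is a hedged illustration only: the `train` fine-tuning routine, the data loaders, and the label convention (1 = FK) are assumptions rather than the authors' pipeline.

```python
# Sketch of the backbone-comparison step: swap a 2-class head onto each CNN and
# score it by ROC AUC on a validation loader (loader and train() are assumed).
import torch
from torch import nn
from torchvision import models
from sklearn.metrics import roc_auc_score

def build(name):
    net = getattr(models, name)(weights="DEFAULT")
    if name.startswith("densenet"):
        net.classifier = nn.Linear(net.classifier.in_features, 2)
    elif name.startswith("efficientnet"):
        net.classifier[1] = nn.Linear(net.classifier[1].in_features, 2)
    else:                                     # inception_v3, resnet50, resnet101
        net.fc = nn.Linear(net.fc.in_features, 2)
    return net

@torch.no_grad()
def evaluate(net, loader):
    net.eval()
    scores, labels = [], []
    for x, y in loader:                       # x: cropped cornea batch, y: 0 = BK, 1 = FK
        scores += torch.softmax(net(x), 1)[:, 1].tolist()
        labels += y.tolist()
    return roc_auc_score(labels, scores)

backbones = ["densenet121", "densenet161", "densenet169", "densenet201",
             "efficientnet_b3", "inception_v3", "resnet50", "resnet101"]
# aucs = {name: evaluate(train(build(name), train_loader), val_loader) for name in backbones}
```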


2021 ◽  
Author(s):  
Ayumi Koyama ◽  
Dai Miyazaki ◽  
Yuji Nakagawa ◽  
Yuji Ayatsuka ◽  
Hitomi Miyake ◽  
...  

Corneal opacities are an important cause of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the “face” of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To record the diverse features and mitigate the uncertainty, batches of probability scores from 4 serial images taken from many angles or with fluorescence staining were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images and 312 images obtained from internet publications on keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV) were studied. The created algorithm had a high overall diagnostic accuracy; for example, by group K-fold validation, the accuracy/area under the curve (AUC) was 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and the algorithm was robust even to the low-resolution web images. We suggest that our hybrid deep learning-based algorithm be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.
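The fusion step, combining per-image CNN probability scores from four serial images with a gradient-boosted decision tree under group K-fold validation, could be prototyped roughly as below. The feature layout, synthetic data, and variable names are assumptions for illustration, not the authors' implementation.

```python
# Sketch of score-level fusion: concatenate the 4-class probability scores of the
# 4 serial images into one feature vector per eye and classify with a GBDT,
# keeping each patient's samples in a single fold via GroupKFold.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import GroupKFold
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_eyes = 200
X = rng.random((n_eyes, 4 * 4))          # 4 serial images x 4 class probabilities each (toy data)
y = rng.integers(0, 4, n_eyes)           # 0 = bacteria, 1 = fungi, 2 = acanthamoeba, 3 = HSV
groups = rng.integers(0, 60, n_eyes)     # patient IDs, so one patient never spans folds

accs = []
for tr, te in GroupKFold(n_splits=5).split(X, y, groups):
    gbdt = GradientBoostingClassifier().fit(X[tr], y[tr])
    accs.append(accuracy_score(y[te], gbdt.predict(X[te])))
print(f"group 5-fold accuracy: {np.mean(accs):.3f}")
```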


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ayumi Koyama ◽  
Dai Miyazaki ◽  
Yuji Nakagawa ◽  
Yuji Ayatsuka ◽  
Hitomi Miyake ◽  
...  

Corneal opacities are important causes of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the “face” of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To record the diverse features and mitigate the uncertainty, batches of probability scores from 4 serial images taken from many angles or with fluorescence staining were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images, including 312 images obtained from internet publications, on keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV) were studied. The created algorithm had a high overall diagnostic accuracy; for example, by group K-fold validation, the accuracy/area under the curve was 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and the algorithm was robust even to the low-resolution web images. We suggest that our hybrid deep learning-based algorithm be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.


Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Yejin Jeon ◽  
Kyeorye Lee ◽  
Leonard Sunwoo ◽  
Dongjun Choi ◽  
Dong Yul Oh ◽  
...  

Accurate image interpretation of Waters’ and Caldwell view radiographs used for sinusitis screening is challenging. Therefore, we developed a deep learning algorithm for diagnosing frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views. The datasets were split by temporal separation into a training and validation set (n = 1403, 34.3% sinusitis) and a test set (n = 132, 29.5% sinusitis). The algorithm can simultaneously detect and classify each paranasal sinus using both Waters’ and Caldwell views without manual cropping. Single- and multi-view models were compared. Our proposed algorithm satisfactorily diagnosed frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views (area under the curve (AUC), 0.71 (95% confidence interval, 0.62–0.80), 0.78 (0.72–0.85), and 0.88 (0.84–0.92), respectively). The one-sided DeLong’s test was used to compare the AUCs, and the Obuchowski–Rockette model was used to pool the AUCs of the radiologists. The algorithm yielded a higher AUC than the radiologists for ethmoid and maxillary sinusitis (p = 0.012 and 0.013, respectively). The multi-view model also exhibited a higher AUC than the single Waters’ view model for maxillary sinusitis (p = 0.038). Therefore, our algorithm showed diagnostic performance comparable to that of radiologists and enhanced the value of radiography as a first-line imaging modality in assessing multiple sinusitis.
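The multi-view idea, one CNN branch per radiographic view with the pooled features fused before the per-sinus outputs, can be sketched in PyTorch as follows. The ResNet-18 backbone, head layout, and input size are illustrative assumptions; the paper's exact architecture is not reproduced here.

```python
# Minimal two-view fusion sketch: separate encoders for the Waters' and Caldwell
# radiographs, concatenated features, and one logit per sinus (frontal, ethmoid, maxillary).
import torch
from torch import nn
from torchvision import models

class TwoViewSinusitisNet(nn.Module):
    def __init__(self, n_sinuses=3):
        super().__init__()
        self.waters = models.resnet18(weights="DEFAULT")
        self.caldwell = models.resnet18(weights="DEFAULT")
        feat = self.waters.fc.in_features
        self.waters.fc = nn.Identity()        # keep the pooled 512-d features
        self.caldwell.fc = nn.Identity()
        self.head = nn.Linear(2 * feat, n_sinuses)

    def forward(self, waters_img, caldwell_img):
        f = torch.cat([self.waters(waters_img), self.caldwell(caldwell_img)], dim=1)
        return self.head(f)                    # apply a per-sinus sigmoid outside the model

model = TwoViewSinusitisNet()
logits = model(torch.randn(2, 3, 224, 224), torch.randn(2, 3, 224, 224))
```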


Webology ◽  
2020 ◽  
Vol 17 (2) ◽  
pp. 788-803
Author(s):  
Ahmed Mahdi Abdulkadium

Robotics is mainly concerned with robot movement, and this work addresses the problem of improved obstacle avoidance. The system consists of a microcontroller to process the data and ultrasonic sensors to detect obstacles in the robot's path. Artificial intelligence is used to predict the presence of obstacles along the path. In this research, a random forest algorithm is used and is improved with the RFHTMC algorithm. The deep learning component mainly aims to reduce the mean absolute error of forecasting. A drawback of random forests is their time complexity, since they require building many classification trees. The proposed algorithm reduces the set of rules used by the classification model in order to improve time complexity. Performance analysis shows a significant improvement in results compared with other deep learning algorithms as well as the random forest. Forecasting accuracy shows an 8% improvement compared with the random forest, with 26% less operation time.
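For orientation, a plain random forest baseline of the kind this work starts from could look like the sketch below. It does not implement the proposed RFHTMC variant, and the feature layout (three ultrasonic distance readings plus a heading value) is hypothetical.

```python
# Baseline sketch only: a plain random forest predicting obstacle presence from
# simulated ultrasonic distance readings; the RFHTMC improvement is not shown.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(1)
X = rng.uniform(0.0, 4.0, size=(1000, 4))    # left/center/right distances (m) + heading (toy data)
y = (X[:, 1] < 0.5).astype(int)              # label: obstacle ahead if center reading < 0.5 m

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = RandomForestClassifier(n_estimators=100).fit(X_tr, y_tr)
print("baseline accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```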


2021 ◽  
Author(s):  
J Weston Hughes ◽  
Neal Yuan ◽  
Bryan He ◽  
Jiahong Ouyang ◽  
Joseph Ebinger ◽  
...  

Laboratory blood testing is routinely used to assay biomarkers that provide information on physiologic state beyond what clinicians can evaluate by interpreting medical imaging. We hypothesized that deep learning interpretation of echocardiogram videos can provide additional value in understanding disease states and can predict common biomarker results. Using 70,066 echocardiograms and associated biomarker results from 39,460 patients, we developed EchoNet-Labs, a video-based deep learning algorithm to predict anemia, elevated B-type natriuretic peptide (BNP), troponin I, and blood urea nitrogen (BUN), as well as abnormal levels in ten additional lab tests. On held-out test data across different healthcare systems, EchoNet-Labs achieved an area under the curve (AUC) of 0.80 in predicting anemia, 0.82 in predicting elevated BNP, 0.75 in predicting elevated troponin I, and 0.69 in predicting elevated BUN. We further demonstrate the utility of the model in predicting abnormalities in the ten additional lab tests. We investigate the features necessary for EchoNet-Labs to make successful predictions and identify potential prediction mechanisms for each biomarker using well-known and novel explainability techniques. These results show that deep learning applied to diagnostic imaging can provide additional clinical value and identify phenotypic information beyond current imaging interpretation methods.
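A video-based predictor in this spirit can be sketched with a torchvision 3D CNN that outputs one logit per lab abnormality. The clip shape, backbone, and the 14 binary targets are assumptions; the actual EchoNet-Labs architecture is not reproduced here.

```python
# Hedged sketch of a video-to-biomarker model: a pretrained R(2+1)D backbone with
# a 14-way multi-label head (anemia, BNP, troponin I, BUN, plus ten more tests).
import torch
from torch import nn
from torchvision.models.video import r2plus1d_18

model = r2plus1d_18(weights="DEFAULT")
model.fc = nn.Linear(model.fc.in_features, 14)    # one logit per lab abnormality

clip = torch.randn(1, 3, 32, 112, 112)            # (batch, channels, frames, height, width)
with torch.no_grad():
    probs = torch.sigmoid(model(clip))            # per-biomarker probability of an abnormal value
```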


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ryoungwoo Jang ◽  
Jae Ho Choi ◽  
Namkug Kim ◽  
Jae Suk Chang ◽  
Pil Whan Yoon ◽  
...  

Despite being the gold standard for the diagnosis of osteoporosis, dual-energy X-ray absorptiometry (DXA) cannot be widely used as a screening tool for osteoporosis. This study aimed to predict osteoporosis from simple hip radiography using a deep learning algorithm. A total of 1001 datasets of proximal femur DXA with matched same-side cropped simple hip bone radiographic images of female patients aged ≥ 55 years were collected. Of these, 504 patients had osteoporosis (T-score ≤ − 2.5), and 497 patients did not. The 1001 images were randomly divided into three sets: 800 images for training, 100 for validation, and 101 for testing. Based on VGG16 equipped with a nonlocal neural network, we developed a deep neural network (DNN) model. We calculated the confusion matrix and evaluated the accuracy, sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV), and we drew the receiver operating characteristic (ROC) curve. A gradient-based class activation map (Grad-CAM) overlapping the original image was also used to visualize the model performance. Additionally, we performed external validation using 117 datasets. Our final DNN model showed an overall accuracy of 81.2%, sensitivity of 91.1%, and specificity of 68.9%. The PPV was 78.5%, and the NPV was 86.1%. The area under the ROC curve was 0.867, indicating reasonable performance for screening osteoporosis by simple hip radiography. The external validation set confirmed the model performance with an overall accuracy of 71.8% and an AUC of 0.700. All Grad-CAM results from both the internal and external validation sets appropriately matched the proximal femur cortex and trabecular patterns of the radiographs. The DNN model could be considered a useful screening tool for easy prediction of osteoporosis in a real-world clinical setting.
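The reported screening metrics follow directly from the 2 × 2 confusion matrix and the ROC curve; a minimal scikit-learn sketch, with toy labels and a 0.5 threshold as illustrative assumptions, is shown below.

```python
# How sensitivity, specificity, PPV, NPV, and AUC are derived from model scores
# (toy data; threshold and values are illustrative, not from the paper).
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

y_true = np.array([1, 1, 0, 0, 1, 0, 1, 0])                   # 1 = osteoporosis (T-score <= -2.5)
y_score = np.array([0.9, 0.7, 0.4, 0.2, 0.6, 0.55, 0.8, 0.1])  # model probabilities
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)
ppv = tp / (tp + fp)
npv = tn / (tn + fn)
auc = roc_auc_score(y_true, y_score)
print(sensitivity, specificity, ppv, npv, auc)
```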


2019 ◽  
Author(s):  
Soonil Kwon ◽  
Joonki Hong ◽  
Eue-Keun Choi ◽  
Byunghwan Lee ◽  
Changhyun Baik ◽  
...  

BACKGROUND: Continuous photoplethysmography (PPG) monitoring with a wearable device may aid the early detection of atrial fibrillation (AF).
OBJECTIVE: We aimed to evaluate the diagnostic performance of a ring-type wearable device (CardioTracker, CART), which can detect AF using deep learning analysis of PPG signals.
METHODS: Patients with persistent AF who underwent cardioversion were recruited prospectively. We recorded PPG signals at the finger with CART and a conventional pulse oximeter before and after cardioversion over a period of 15 min (each instrument). Cardiologists validated the PPG rhythms with simultaneous single-lead electrocardiography. The PPG data were transmitted to a smartphone wirelessly and analyzed with a deep learning algorithm. We also validated the deep learning algorithm in 20 healthy subjects with sinus rhythm (SR).
RESULTS: In 100 study participants, CART generated a total of 13,038 30-s PPG samples (5850 for SR and 7188 for AF). Using the deep learning algorithm, the diagnostic accuracy, sensitivity, specificity, positive predictive value, and negative predictive value were 96.9%, 99.0%, 94.3%, 95.6%, and 98.7%, respectively. Although the diagnostic accuracy decreased with shorter sample lengths, the accuracy was maintained at 94.7% with 10-s measurements. For SR, the specificity decreased with higher variability of peak-to-peak intervals. However, for AF, CART maintained consistent sensitivity regardless of variability. Pulse rates had a lower impact on sensitivity than on specificity. The performance of CART was comparable to that of the conventional device when using a proper threshold. External validation showed that 94.99% (16,529/17,400) of the PPG samples from the control group were correctly identified as SR.
CONCLUSIONS: A ring-type wearable device with deep learning analysis of PPG signals could accurately diagnose AF without relying on electrocardiography. With this device, continuous monitoring for AF may be promising in high-risk populations.
CLINICALTRIAL: ClinicalTrials.gov NCT04023188; https://clinicaltrials.gov/ct2/show/NCT04023188
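A rhythm classifier over 30-s PPG windows could be sketched as a small 1D CNN, as below. The 50 Hz sampling rate, layer sizes, and class layout are assumptions for illustration; the CART algorithm itself is not reproduced here.

```python
# Minimal sketch of a 1D CNN over single-channel 30-s PPG segments (SR vs. AF).
import torch
from torch import nn

class PPGAFNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
            nn.Linear(64, 2),                    # class 0 = sinus rhythm, class 1 = AF
        )

    def forward(self, x):                        # x: (batch, 1, samples)
        return self.body(x)

model = PPGAFNet()
segment = torch.randn(4, 1, 30 * 50)             # four 30-s windows at an assumed 50 Hz
probs = torch.softmax(model(segment), dim=1)
```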

