Deep Learning for Diagnosis of Paranasal Sinusitis Using Multi-View Radiographs

Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 250
Author(s):  
Yejin Jeon ◽  
Kyeorye Lee ◽  
Leonard Sunwoo ◽  
Dongjun Choi ◽  
Dong Yul Oh ◽  
...  

Accurate image interpretation of the Waters’ and Caldwell view radiographs used for sinusitis screening is challenging. Therefore, we developed a deep learning algorithm for diagnosing frontal, ethmoid, and maxillary sinusitis on both Waters’ and Caldwell views. The dataset was split by temporal separation into a training and validation set (n = 1403; sinusitis prevalence, 34.3%) and a test set (n = 132; sinusitis prevalence, 29.5%). The algorithm can simultaneously detect and classify each paranasal sinus on both Waters’ and Caldwell views without manual cropping, and single- and multi-view models were compared. Our proposed algorithm satisfactorily diagnosed frontal, ethmoid, and maxillary sinusitis on both views (area under the curve (AUC), 0.71 (95% confidence interval, 0.62–0.80), 0.78 (0.72–0.85), and 0.88 (0.84–0.92), respectively). The one-sided DeLong’s test was used to compare AUCs, and the Obuchowski–Rockette model was used to pool the AUCs of the radiologists. The algorithm yielded a higher AUC than the radiologists for ethmoid and maxillary sinusitis (p = 0.012 and 0.013, respectively), and the multi-view model exhibited a higher AUC than the single Waters’ view model for maxillary sinusitis (p = 0.038). Our algorithm therefore showed diagnostic performance comparable to that of radiologists and enhances the value of radiography as a first-line imaging modality for assessing multiple sinusitis.
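The per-sinus AUCs with 95% confidence intervals reported above can be sketched in a few lines of plain Python. This is an illustrative reimplementation, not the authors' code: the AUC is computed via the Mann–Whitney pair-counting statistic, and a percentile bootstrap stands in for the parametric intervals (the paper's DeLong comparison test is omitted).

```python
import random

def auc(labels, scores):
    """Area under the ROC curve via the Mann-Whitney statistic:
    the fraction of (positive, negative) pairs ranked correctly,
    with ties counted as half."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def bootstrap_ci(labels, scores, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = random.Random(seed)
    idx = list(range(len(labels)))
    stats = []
    for _ in range(n_boot):
        sample = [rng.choice(idx) for _ in idx]
        l = [labels[i] for i in sample]
        s = [scores[i] for i in sample]
        if 0 < sum(l) < len(l):  # need both classes in the resample
            stats.append(auc(l, s))
    stats.sort()
    lo = stats[int(alpha / 2 * len(stats))]
    hi = stats[int((1 - alpha / 2) * len(stats)) - 1]
    return lo, hi
```

For example, `auc([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: three of the four positive/negative pairs are ranked correctly.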

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ayumi Koyama ◽  
Dai Miyazaki ◽  
Yuji Nakagawa ◽  
Yuji Ayatsuka ◽  
Hitomi Miyake ◽  
...  

Abstract Corneal opacities are an important cause of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the “face” of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To record the diverse features and mitigate the uncertainty, batches of probability scores of 4 serial images taken from many angles or with fluorescence staining were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images and 312 images obtained from internet publications on keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV) were studied. The created algorithm had a high overall diagnostic accuracy under group K-fold validation, e.g., the accuracy/area under the curve (AUC) was 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and it was robust even to the low-resolution web images. We suggest that our hybrid deep learning-based algorithm be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.
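The fusion step above combines per-image probability scores from a batch of serial images into one call. A minimal sketch of score-level fusion follows; for simplicity it averages the scores, whereas the paper learns the fusion with a gradient boosting decision tree over the same inputs, and the four-class ordering is assumed for illustration.

```python
def fuse_scores(batch_probs):
    """Score-level fusion over a batch of per-image probability vectors.
    batch_probs: one class-probability list per image (e.g. 4 serial
    slit-lamp images), classes assumed ordered as
    [bacteria, fungi, acanthamoeba, HSV].
    Returns the fused probability vector and the index of the top class."""
    n = len(batch_probs)
    n_classes = len(batch_probs[0])
    # mean probability per class across the serial images
    fused = [sum(p[c] for p in batch_probs) / n for c in range(n_classes)]
    return fused, max(range(n_classes), key=fused.__getitem__)
```

A learned fuser (such as the paper's GBDT) can exploit patterns across the batch, e.g. one angle being consistently more informative, that a plain mean cannot.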


2018 ◽  
Author(s):  
Sebastien Villon ◽  
David Mouillot ◽  
Marc Chaumont ◽  
Emily S Darling ◽  
Gérard Subsol ◽  
...  

Identifying and counting individual fish on videos is a crucial task for cost-effectively monitoring marine biodiversity, but it remains difficult and time-consuming. In this paper, we present a method to assist the automated identification of fish species on underwater images, and we compare our algorithm's performance to human ability in terms of speed and accuracy. We first tested the performance of a convolutional neural network trained with different photographic databases, while accounting for different post-processing decision rules, to identify 20 fish species. We then compared the species-identification performance of our best model with human performance on a test database of 1197 pictures representing nine species. The best network was the one trained with 900 000 pictures of whole fish and of their parts and environment (e.g. reef bottom or water). Its rate of correct fish identification was 94.9%, greater than the rate of correct identifications by humans (89.3%). The network was also able to identify fish partially hidden behind corals or behind other fish, and was more effective than humans at identifying fish in the smallest or blurriest pictures, while humans were better at recognizing fish in unusual positions (e.g. twisted body). On average, each identification by our best algorithm on common hardware took 0.06 seconds. Deep learning methods can thus perform efficient fish identification on underwater pictures, which paves the way to new video-based protocols for monitoring fish biodiversity cheaply and effectively.
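A post-processing decision rule of the kind mentioned above can be as simple as a confidence threshold on the network's softmax output: accept the top class only when its probability is high enough, otherwise abstain. This is a hedged sketch of one plausible rule, not the paper's specific rules; the species names and threshold are placeholders.

```python
import math

def softmax(logits):
    """Numerically stable softmax over a list of raw network outputs."""
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    s = sum(exps)
    return [e / s for e in exps]

def decide(logits, species, threshold=0.5):
    """Threshold decision rule: return the top species only when its
    softmax probability clears the threshold, else 'unknown' (e.g. for
    a partially hidden or blurry fish)."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return species[best] if probs[best] >= threshold else "unknown"
```

On video, such a rule can be combined with aggregation across frames so that one low-confidence frame does not produce a spurious identification.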


Author(s):  
Ning Hung ◽  
Eugene Yu-Chuan Kang ◽  
Andy Guan-Yu Shih ◽  
Chi-Hung Lin ◽  
Ming‐Tse Kuo ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) from slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between January 1, 2010, and December 31, 2019, from two medical centers in Taiwan. We constructed a deep learning algorithm consisting of a segmentation model for cropping cornea images and a classification model that applies convolutional neural networks to differentiate between FK and BK. Model performance was evaluated and presented as the area under the receiver operating characteristic curve (AUC). A gradient-weighted class activation mapping technique was used to plot heatmaps of the model. Using 1330 images from 580 patients, the deep learning algorithm achieved an average diagnostic accuracy of 80.00%. The diagnostic accuracy ranged from 79.59% to 95.91% for BK and from 26.31% to 63.15% for FK. DenseNet169 showed the best model performance, with an AUC of 0.78 for both BK and FK. The heatmaps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.
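The gradient-weighted class activation mapping (Grad-CAM) step above reduces to a small computation once a convolutional layer's activations and the gradients of the class score with respect to them are in hand. The numpy sketch below shows that core computation only; a real pipeline would capture the two arrays with framework hooks, and the shapes here are illustrative.

```python
import numpy as np

def grad_cam(activations, gradients):
    """Grad-CAM heatmap from one convolutional layer.
    activations, gradients: arrays of shape (C, H, W) holding the layer's
    feature maps and the gradient of the class score w.r.t. them.
    Channel weights are the spatially averaged gradients; the map is the
    ReLU of the weighted sum of feature maps, rescaled to [0, 1]."""
    weights = gradients.mean(axis=(1, 2))                       # (C,)
    cam = (weights[:, None, None] * activations).sum(axis=0)    # (H, W)
    cam = np.maximum(cam, 0.0)                                  # ReLU
    if cam.max() > 0:
        cam /= cam.max()                                        # normalize
    return cam
```

The resulting map is upsampled to the input image size and overlaid on it, which is how heatmaps like the ones highlighting corneal infiltrations are produced.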


2021 ◽  
Author(s):  
J Weston Hughes ◽  
Neal Yuan ◽  
Bryan He ◽  
Jiahong Ouyang ◽  
Joseph Ebinger ◽  
...  

Abstract Laboratory blood testing is routinely used to assay biomarkers that provide information on physiologic state beyond what clinicians can evaluate by interpreting medical imaging. We hypothesized that deep learning interpretation of echocardiogram videos can provide additional value in understanding disease states and can predict common biomarker results. Using 70,066 echocardiograms and associated biomarker results from 39,460 patients, we developed EchoNet-Labs, a video-based deep learning algorithm to predict anemia, elevated B-type natriuretic peptide (BNP), troponin I, and blood urea nitrogen (BUN), as well as abnormal levels in ten additional lab tests. On held-out test data across different healthcare systems, EchoNet-Labs achieved an area under the curve (AUC) of 0.80 in predicting anemia, 0.82 for elevated BNP, 0.75 for elevated troponin I, and 0.69 for elevated BUN, and we further demonstrate its utility in predicting abnormalities in the ten additional lab tests. We investigated the features necessary for EchoNet-Labs to make successful predictions and identified potential prediction mechanisms for each biomarker using well-known and novel explainability techniques. These results show that deep learning applied to diagnostic imaging can provide additional clinical value and identify phenotypic information beyond current imaging interpretation methods.
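A video-based model like the one described consumes a fixed-length clip rather than a single image. The sketch below shows one common way to assemble such a clip from a variable-length echocardiogram; the function name, the 32-frame length, and the (batch, T, H, W) layout are illustrative assumptions, not the EchoNet-Labs configuration.

```python
import numpy as np

def frames_to_clip(frames, clip_len=32):
    """Uniformly sample clip_len frames from a variable-length list of
    grayscale frames (each an H x W array) and stack them into a
    (1, T, H, W) float32 clip for a video model."""
    idx = np.linspace(0, len(frames) - 1, clip_len).round().astype(int)
    clip = np.stack([frames[i] for i in idx])   # (T, H, W)
    return clip[None, ...].astype(np.float32)   # add batch dimension
```

Uniform temporal sampling keeps the whole cardiac cycle in view regardless of the recording length, which matters when the predictive signal (e.g. for anemia or elevated BNP) may appear at any phase.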


10.2196/15931 ◽  
2020 ◽  
Vol 8 (3) ◽  
pp. e15931 ◽  
Author(s):  
Chin-Sheng Lin ◽  
Chin Lin ◽  
Wen-Hui Fang ◽  
Chia-Jung Hsu ◽  
Sy-Jou Chen ◽  
...  

Background The detection of dyskalemias—hypokalemia and hyperkalemia—currently depends on laboratory tests. Since cardiac tissue is very sensitive to dyskalemia, electrocardiography (ECG) may be able to uncover clinically important dyskalemias before laboratory results are available. Objective Our study aimed to develop a deep-learning model, ECG12Net, to detect dyskalemias based on ECG presentations and to evaluate the logic and performance of this model. Methods From May 2011 to December 2016, 66,321 ECG records with corresponding serum potassium (K+) concentrations were obtained from 40,180 patients admitted to the emergency department. ECG12Net is an 82-layer convolutional neural network that estimates serum K+ concentration. Six clinicians—three emergency physicians and three cardiologists—participated in a human-machine competition. Sensitivity, specificity, and balanced accuracy were used to compare the performance of ECG12Net with that of these physicians. Results In a human-machine competition including 300 ECGs at different serum K+ concentrations, the areas under the curve for detecting hypokalemia and hyperkalemia with ECG12Net were 0.926 and 0.958, respectively, significantly better than those of our best clinicians. In detecting hypokalemia and hyperkalemia, the sensitivities were 96.7% and 83.3%, respectively, and the specificities were 93.3% and 97.8%, respectively. In a test set of 13,222 ECGs, ECG12Net showed similar sensitivity for severe hypokalemia (95.6%) and severe hyperkalemia (84.5%), with a mean absolute error of 0.531. The specificities for detecting hypokalemia and hyperkalemia were 81.6% and 96.0%, respectively. Conclusions A deep-learning model based on the 12-lead ECG may help physicians promptly recognize severe dyskalemias and thereby potentially reduce cardiac events.
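The evaluation metrics used above follow directly from the binary confusion matrix. As a quick reference, a minimal implementation:

```python
def confusion_metrics(labels, preds):
    """Sensitivity, specificity, and balanced accuracy from binary
    ground-truth labels and predictions (1 = dyskalemia present).
    Balanced accuracy is the mean of sensitivity and specificity,
    which is robust to class imbalance."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, (sensitivity + specificity) / 2
```

Because emergency-department cohorts contain far more normokalemic than dyskalemic ECGs, balanced accuracy is a more honest summary than raw accuracy here.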


2021 ◽  
Author(s):  
Arash Abbasi ◽  
Max J Feldman ◽  
Jaebum Park ◽  
Katelyn Greene ◽  
Richard G Novy ◽  
...  

Novel deep learning algorithms are proposed for the detection of hollow heart, an internal tuber defect. Hollow heart is one of many internal defects that decrease the market value of potatoes in the fresh-market and food-processing sectors. Susceptibility to internal defects like hollow heart is influenced by genetic and environmental factors, so elimination of defect-prone material in potato breeding programs is important. Current evaluation methods either rely on human scoring, which is limited (collecting only binary data) relative to the data collection capacity afforded by computer vision, or are based on X-ray transmission techniques, which are both expensive and potentially hazardous. Automating defect classification (e.g. hollow heart) from data sets collected with inexpensive, consumer-grade hardware has the potential to increase throughput and reduce bias in public breeding programs. The proposed algorithm consists of ResNet50 as the backbone of the model, followed by a shallow fully connected network (FCN). A simple augmentation technique is performed to increase the number of images in the data set. The performance of the proposed algorithm is validated with metrics such as precision and the area under the curve (AUC).
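The "simple augmentation technique" mentioned above is unspecified; flipping is the most common minimal choice for image data and serves as a plausible stand-in. The sketch below triples a data set by adding horizontal and vertical flips of each image (images as nested lists to keep the example dependency-free).

```python
def augment(images):
    """Return the data set extended with a horizontal and a vertical
    flip of each image. Flips are label-preserving for defects like
    hollow heart, which has no canonical orientation. This is an
    assumed stand-in for the paper's unspecified augmentation."""
    out = []
    for img in images:
        lr = [row[::-1] for row in img]   # horizontal flip (mirror rows)
        ud = img[::-1]                    # vertical flip (reverse row order)
        out.extend([img, lr, ud])
    return out
```

Such augmentation is applied to the training split only, so the validation metrics (precision, AUC) reflect unduplicated images.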


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Filippo Arcadu ◽  
Fethallah Benmansour ◽  
Andreas Maunz ◽  
Jeff Willis ◽  
Zdenka Haskova ◽  
...  

Abstract The global burden of diabetic retinopathy (DR) continues to worsen, and DR remains a leading cause of vision loss worldwide. Here, we describe an algorithm to predict DR progression by means of deep learning (DL), using as input color fundus photographs (CFPs) acquired at a single visit from a patient with DR. The proposed DL models were designed to predict future DR progression, defined as 2-step worsening on the Early Treatment Diabetic Retinopathy Study Diabetic Retinopathy Severity Scale, and were trained against DR severity scores assessed 6, 12, and 24 months after the baseline visit by masked, well-trained human reading-center graders. One of these models (prediction at month 12) achieved an area under the curve of 0.79. Interestingly, our results highlight the importance of the predictive signal located in the peripheral retinal fields, not routinely collected for DR assessments, and the importance of microvascular abnormalities. Our findings show the feasibility of predicting future DR progression by leveraging CFPs of a patient acquired at a single visit. Upon further development on larger and more diverse datasets, such an algorithm could enable early diagnosis and referral to a retina specialist for more frequent monitoring, and even consideration of early intervention. Moreover, it could also improve patient recruitment for clinical trials targeting DR.
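The "2-step worsening" target above is an ordinal comparison: progression means the follow-up severity is at least two levels worse than baseline on the ETDRS severity scale. A sketch of the label derivation follows; the level list approximates the standard ETDRS scale and should be treated as illustrative rather than the paper's exact grading table.

```python
# Ordinal ETDRS severity levels, mildest to most severe (illustrative subset)
ETDRS_LEVELS = [10, 20, 35, 43, 47, 53, 61, 65, 71, 75, 81, 85]

def progression_label(baseline, followup, steps=2):
    """Binary DR-progression label: 1 if the follow-up grade is at least
    `steps` ordinal levels worse than the baseline grade, else 0.
    Note the comparison is over level positions, not the numeric codes,
    since the codes are not evenly spaced."""
    delta = ETDRS_LEVELS.index(followup) - ETDRS_LEVELS.index(baseline)
    return int(delta >= steps)
```

Deriving the label from graded follow-up visits at 6, 12, and 24 months yields one binary target per horizon, which is what the month-12 model with AUC 0.79 predicts.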


2020 ◽  
Author(s):  
Prashant Sadashiv Gidde ◽  
Shyam Sunder Prasad ◽  
Ajay Pratap Singh ◽  
Nitin Bhatheja ◽  
Satyartha Prakash ◽  
...  

Abstract The coronavirus disease 2019 (COVID-19) pandemic exposed a limitation of artificial intelligence (AI) based medical image interpretation systems. Early in the pandemic, when the need was greatest, the absence of sufficient training data prevented effective deep learning (DL) solutions. Even now, there is a need for chest X-ray (CxR) screening tools in low- and middle-income countries (LMIC), when RT-PCR is delayed, to exclude COVID-19 pneumonia (Cov-Pneum) requiring transfer to higher care. In the absence of local LMIC data, and given the poor portability of CxR DL algorithms, a new approach is needed. Axiomatically, it is faster to repurpose existing data than to generate new datasets. Here, we describe CovBaseAI, an explainable tool that uses an ensemble of three DL models and an expert decision system (EDS) for Cov-Pneum diagnosis, trained entirely on datasets from the pre-COVID-19 period. The portability, performance, and explainability of CovBaseAI were validated primarily on two independent datasets: first, 1401 CxRs randomly selected from an Indian quarantine center, to assess effectiveness in excluding radiologic Cov-Pneum that may require higher care; and second, a curated dataset of 434 RT-PCR-positive cases of varying severity and 471 historical scans containing normal studies and non-COVID pathologies, to assess performance in advanced medical settings. CovBaseAI had an accuracy of 87% with a negative predictive value of 98% for Cov-Pneum in the quarantine-center data. However, sensitivity varied from 0.66 to 0.90 depending on whether RT-PCR or radiologist opinion was set as the ground truth. This tool, with its explainability feature, performs better than publicly available algorithms trained on COVID-19 data but needs further improvement.
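The combination of a DL ensemble with an expert decision system can be sketched as mean-pooled model probabilities gated by rule-based vetoes. This is a hedged illustration of the general pattern only; the function, the veto logic, and the threshold are assumptions, not CovBaseAI's actual EDS.

```python
def ensemble_with_eds(model_probs, veto_flags, threshold=0.5):
    """Average the Cov-Pneum probabilities of several DL models, then
    let expert-system rules veto a positive call (e.g. when a rule
    identifies a clearly non-COVID pathology). Returns the pooled
    probability and the final binary decision."""
    mean_p = sum(model_probs) / len(model_probs)
    positive = mean_p >= threshold and not any(veto_flags)
    return mean_p, int(positive)
```

Keeping the rule layer separate from the learned ensemble is what makes the tool explainable: a vetoed case can cite the specific rule that fired rather than an opaque score.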

