Development and validation of deep learning algorithms for automated eye laterality detection with anterior segment photography

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ce Zheng ◽  
Xiaolin Xie ◽  
Zhilei Wang ◽  
Wen Li ◽  
Jili Chen ◽  
...  

Abstract: This paper aimed to develop and validate a deep learning (DL) model for automated detection of eye laterality on anterior segment photographs. Anterior segment photographs for training the DL model were collected with a Scheimpflug anterior segment analyzer. We applied transfer learning and fine-tuning of pre-trained deep convolutional neural networks (InceptionV3, VGG16, MobileNetV2) to develop DL models for determining eye laterality. Testing datasets, from Scheimpflug and slit-lamp digital camera photography, were employed to test the DL models, and the results were compared with classifications performed by human experts. The performance of the DL model was evaluated by accuracy, sensitivity, specificity, receiver operating characteristic curves, and the corresponding area under the curve values. A total of 14,468 photographs were collected for the development of the DL models. After training for 100 epochs, the InceptionV3-based DL model achieved an area under the receiver operating characteristic curve of 0.998 (95% CI 0.924–0.958) for detecting eye laterality. On the external testing dataset (76 primary gaze photographs taken with a digital camera), the DL model achieved an accuracy of 96.1% (95% CI 91.7%–100%), better than the accuracies of 72.3% (95% CI 62.2%–82.4%), 82.8% (95% CI 78.7%–86.9%), and 86.8% (95% CI 82.5%–91.1%) achieved by human graders. Our study demonstrates that this high-performing DL model can be used for automated labeling of eye laterality and is useful for managing large volumes of anterior segment images from slit-lamp cameras in the clinical setting.
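The accuracy, sensitivity, and specificity metrics the abstract reports follow their standard confusion-matrix definitions; a minimal sketch (not the authors' code; labels, scores, and the 0.5 threshold are illustrative):

```python
import numpy as np

# Toy stand-in for the evaluation: ground-truth laterality labels
# (1 = right eye, 0 = left eye) and model scores are illustrative.
def binary_metrics(y_true, y_score, threshold=0.5):
    y_true = np.asarray(y_true)
    y_pred = (np.asarray(y_score) >= threshold).astype(int)
    tp = np.sum((y_pred == 1) & (y_true == 1))
    tn = np.sum((y_pred == 0) & (y_true == 0))
    fp = np.sum((y_pred == 1) & (y_true == 0))
    fn = np.sum((y_pred == 0) & (y_true == 1))
    accuracy = (tp + tn) / y_true.size
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    return accuracy, sensitivity, specificity

acc, sens, spec = binary_metrics([1, 1, 0, 0, 1, 0],
                                 [0.9, 0.4, 0.2, 0.6, 0.8, 0.1])
```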

Electronics ◽  
2020 ◽  
Vol 9 (1) ◽  
pp. 190 ◽  
Author(s):  
Zhiwei Huang ◽  
Jinzhao Lin ◽  
Liming Xu ◽  
Huiqian Wang ◽  
Tong Bai ◽  
...  

The application of deep convolutional neural networks (CNN) in the field of medical image processing has attracted extensive attention and demonstrated remarkable progress. An increasing number of deep learning methods have been devoted to classifying chest X-ray (CXR) images, and most of the existing methods are based on classic pretrained models trained on whole chest X-ray images. In this paper, we diagnose chest X-ray images using our proposed Fusion High-Resolution Network (FHRNet). The FHRNet consists of three branch convolutional neural networks: it concatenates the global average pooling layers of the global and local feature extractors and is fine-tuned for thorax disease classification. Compared with the results of other available methods, our experimental results showed that the proposed model yields better disease classification performance on the ChestX-ray14 dataset, according to the receiver operating characteristic curve and area-under-the-curve score. An ablation study further confirmed the effectiveness of the global and local branch networks in improving the classification accuracy of thorax diseases.
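The fusion step described above, concatenating global-average-pooled features from a global and a local branch, can be sketched in numpy (the feature-map shapes are assumptions for illustration, not FHRNet's actual dimensions):

```python
import numpy as np

# Global average pooling (GAP): collapse each (H, W, C) feature map to a
# C-dimensional vector by averaging over spatial positions.
def global_average_pool(feature_map):
    return feature_map.mean(axis=(0, 1))

global_features = np.ones((7, 7, 512))       # global-branch feature maps
local_features = np.full((7, 7, 256), 2.0)   # local-branch feature maps

# Concatenate the pooled vectors into one fused feature vector, which
# would feed the final classification layer.
fused = np.concatenate([global_average_pool(global_features),
                        global_average_pool(local_features)])
```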


2021 ◽  
Author(s):  
Zhuyun Qian ◽  
Xiaolin Xie ◽  
Jianlong Yang ◽  
Hongfei Ye ◽  
Zhilei Wang ◽  
...  

Abstract
Background: The purpose of this study was to implement and evaluate a deep learning (DL) approach for automatically detecting shallow anterior chamber depth (ACD) from two-dimensional (2D) overview anterior segment photographs.
Methods: We trained a DL model using a dataset of anterior segment photographs collected at Shanghai Aier Eye Hospital from June 2018 to December 2019. A Pentacam HR system was used to capture a 2D overview eye image and measure the ACD. Shallow ACD was defined as an ACD of less than 2.4 mm. The DL model was evaluated by five-fold cross-validation and tested on a hold-out testing dataset. We also evaluated the DL model by testing it against two glaucoma specialists. The performance of the DL model was measured by accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC).
Results: A total of 4,322 photographs (2,054 shallow AC and 2,268 deep AC images) were assigned to the training dataset, and 482 photographs (229 shallow AC and 253 deep AC images) were held out for the internal testing dataset. In detecting shallow ACD on the internal hold-out testing dataset, the DL model achieved an AUC of 0.91 (95% CI, 0.88–0.94) with 82% sensitivity and 84% specificity. On the same testing dataset, the DL model also outperformed the two glaucoma specialists (accuracy of 80% vs. 74% and 69%).
Conclusions: We propose a high-performing DL model to automatically detect shallow ACD from overview anterior segment photographs. Our DL model has potential applications in detecting and monitoring shallow ACD in the real world.
Trial registration: http://clinicaltrials.gov, NCT04340635, retrospectively registered on 29 March 2020.
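The five-fold cross-validation scheme mentioned in the Methods partitions the sample indices into five folds, each serving once as the validation set while the remaining four train the model; a plain-Python sketch (sample count is illustrative):

```python
# Build k train/validation index splits; every sample appears in exactly
# one validation fold and in the training portion of the other k-1 splits.
def k_fold_indices(n_samples, k=5):
    folds = [list(range(i, n_samples, k)) for i in range(k)]
    splits = []
    for i in range(k):
        val = folds[i]
        train = [idx for j, fold in enumerate(folds) if j != i for idx in fold]
        splits.append((train, val))
    return splits

splits = k_fold_indices(10, k=5)
# 5 splits; each validation fold holds 2 of the 10 samples, and every
# sample is validated exactly once
```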


2015 ◽  
Vol 2015 ◽  
pp. 1-8 ◽  
Author(s):  
Faik Orucoglu ◽  
Ebru Toker

Purpose. To assess and compare anterior and posterior corneal surface parameters, keratoconus indices, thickness profile data, and data from enhanced elevation maps of keratoconic and normal corneas with the Pentacam Scheimpflug corneal tomographer, and to determine the sensitivity and specificity of these parameters in discriminating keratoconus from normal eyes.
Methods. The study included 656 keratoconus eyes and 515 healthy eyes with mean ages of 30.95 ± 9.25 and 32.90 ± 14.78 years, respectively. Forty parameters obtained from Pentacam tomography were assessed by receiver operating characteristic curve analysis for their discriminative efficiency.
Results. Receiver operating characteristic curve analyses showed excellent predictive accuracy (area under the curve ranging from 0.914 to 0.972) for 21 of the 40 parameters evaluated. Among all parameters, the index of vertical asymmetry, keratoconus index, front elevation at the thinnest location, back elevation at the thinnest location, Ambrósio Relational Thickness (ARTmax), deviation of average pachymetric progression, deviation of ARTmax, and total deviation showed excellent (>90%) sensitivity and specificity in addition to excellent area under the receiver operating characteristic curve (AUROC).
Conclusions. Parameters derived from the topometric and Belin–Ambrósio enhanced ectasia display maps discriminate keratoconus from normal corneas very effectively, with excellent sensitivity and specificity.
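The per-parameter AUC reported in the Results equals the probability that a randomly chosen keratoconic eye scores higher on that parameter than a randomly chosen normal eye (the Mann–Whitney interpretation of the area under the ROC curve); a numpy sketch with illustrative values, not the study's measurements:

```python
import numpy as np

# AUC via the Mann-Whitney U statistic: count pairs where the diseased
# eye's parameter value exceeds the normal eye's; ties count as 0.5.
def auc_mann_whitney(disease_values, normal_values):
    disease = np.asarray(disease_values, dtype=float)
    normal = np.asarray(normal_values, dtype=float)
    greater = (disease[:, None] > normal[None, :]).sum()
    ties = (disease[:, None] == normal[None, :]).sum()
    return (greater + 0.5 * ties) / (disease.size * normal.size)

# A perfectly separating parameter yields AUC 1.0.
auc = auc_mann_whitney([5.1, 6.2, 7.0], [1.2, 2.0, 3.3])
```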


Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1127
Author(s):  
Ji Hyung Nam ◽  
Dong Jun Oh ◽  
Sumin Lee ◽  
Hyun Joo Song ◽  
Yun Jeong Lim

Capsule endoscopy (CE) quality control requires an objective scoring system to evaluate the preparation of the small bowel (SB). We propose a deep learning algorithm to calculate SB cleansing scores and verify the algorithm’s performance. A 5-point scoring system based on clarity of mucosal visualization was used to develop the deep learning algorithm (400,000 frames; 280,000 for training and 120,000 for testing). External validation was performed using additional CE cases (n = 50), and average cleansing scores (1.0 to 5.0) calculated using the algorithm were compared to clinical grades (A to C) assigned by clinicians. Testing on the 120,000 held-out frames yielded 93% accuracy. The separate CE cases exhibited substantial agreement between the deep learning algorithm’s scores and the clinicians’ assessments (Cohen’s kappa: 0.672). In the external validation, the cleansing score decreased with worsening clinical grade (scores of 3.9, 3.2, and 2.5 for grades A, B, and C, respectively, p < 0.001). Receiver operating characteristic curve analysis revealed that a cleansing score cut-off of 2.95 indicated clinically adequate preparation. This algorithm provides an objective and automated cleansing score for evaluating SB preparation for CE. The results of this study will serve as clinical evidence supporting the practical use of deep learning algorithms for evaluating SB preparation quality.
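The agreement statistic reported above, Cohen's kappa, corrects observed agreement between two raters for the agreement expected by chance; a plain-Python sketch (the two rating lists are illustrative, not the study's data):

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = sorted(set(rater_a) | set(rater_b))
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement from each rater's marginal label frequencies
    expected = sum((rater_a.count(lab) / n) * (rater_b.count(lab) / n)
                   for lab in labels)
    return (observed - expected) / (1 - expected)

kappa = cohens_kappa(["A", "A", "B", "C", "B", "A"],
                     ["A", "B", "B", "C", "B", "A"])
```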


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Andrew P. Creagh ◽  
Florian Lipsmeier ◽  
Michael Lindemann ◽  
Maarten De Vos

Abstract: The emergence of digital technologies such as smartphones in healthcare applications has demonstrated the possibility of developing rich, continuous, and objective measures of multiple sclerosis (MS) disability that can be administered remotely and out-of-clinic. Deep Convolutional Neural Networks (DCNN) may capture a richer representation of healthy and MS-related ambulatory characteristics from raw smartphone-based inertial sensor data than standard feature-based methodologies. To overcome the typical limitations of remotely generated health data, such as low subject numbers, sparsity, and heterogeneity, a transfer learning (TL) model built from similar large open-source datasets was proposed. Our TL framework leveraged the ambulatory information learned on human activity recognition (HAR) tasks collected from wearable smartphone sensor data. It was demonstrated that fine-tuning TL DCNN HAR models towards MS disease recognition tasks outperformed previous Support Vector Machine (SVM) feature-based methods, as well as DCNN models trained end-to-end, by 8–15%. The lack of transparency of “black-box” deep networks remains one of the largest stumbling blocks to the wider acceptance of deep learning for clinical applications. Ensuing work therefore aimed to visualise DCNN decisions through relevance heatmaps computed with Layer-Wise Relevance Propagation (LRP). Through the LRP framework, the patterns captured from smartphone-based inertial sensor data that distinguish healthy participants from people with MS (PwMS) could begin to be established and understood. Interpretations suggested that cadence-based measures, gait speed, and ambulation-related signal perturbations were distinct characteristics that distinguished MS disability from healthy participants.
Robust and interpretable outcomes, generated from high-frequency out-of-clinic assessments, could greatly augment the current in-clinic assessment picture for PwMS, informing better disease management techniques and enabling the development of better therapeutic interventions.
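The LRP idea mentioned above redistributes a network's output score backwards, layer by layer, in proportion to each neuron's contribution; a toy numpy sketch of the z+ rule on a tiny two-layer ReLU network with non-negative weights (real LRP runs over deep networks; this only illustrates the redistribution step and its conservation property):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(0.1, 1.0, size=4)        # toy "sensor" input (illustrative)
W1 = rng.uniform(0.0, 1.0, size=(4, 3))  # non-negative weights (z+ rule)
W2 = rng.uniform(0.0, 1.0, size=(3, 1))

h = np.maximum(x @ W1, 0.0)              # hidden ReLU activations
y = np.maximum(h @ W2, 0.0)              # network output

# z+ rule: relevance flowing to input i is its share of the positive
# contribution z_ij = a_i * w_ij to each downstream neuron j.
def lrp_zplus(activations, weights, relevance_out):
    z = activations[:, None] * weights
    z_sum = z.sum(axis=0, keepdims=True)
    return (z / z_sum * relevance_out).sum(axis=1)

R_hidden = lrp_zplus(h, W2, y.copy())
R_input = lrp_zplus(x, W1, R_hidden)
# conservation: total input relevance equals the output relevance
```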


Cancers ◽  
2021 ◽  
Vol 13 (15) ◽  
pp. 3896
Author(s):  
Karla Montalbán-Hernández ◽  
Ramón Cantero-Cid ◽  
Roberto Lozano-Rodríguez ◽  
Alejandro Pascual-Iglesias ◽  
José Avendaño-Ortiz ◽  
...  

Colorectal cancer (CRC) is the second most deadly and third most commonly diagnosed cancer worldwide. There is significant heterogeneity among patients with CRC, which hinders the search for a standard approach to the detection of this disease. The identification of robust prognostic markers for patients with CRC therefore represents an urgent clinical need. In search of such biomarkers, a total of 114 patients with colorectal cancer and 67 healthy participants were studied. Soluble SIGLEC5 (sSIGLEC5) levels were higher in plasma from patients with CRC than from healthy volunteers. Additionally, sSIGLEC5 levels were higher in patients who died (exitus) than in survivors, and receiver operating characteristic curve analysis revealed sSIGLEC5 to be a predictor of death (area under the curve 0.853; cut-off > 412.6 ng/mL) in these patients. A Kaplan–Meier analysis showed that patients with high levels of sSIGLEC5 had significantly shorter overall survival (hazard ratio 15.68; 95% CI 4.571–53.81; p ≤ 0.0001) than those with lower sSIGLEC5 levels. Our study suggests that sSIGLEC5 is a soluble prognostic marker and predictor of death in CRC.
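The Kaplan–Meier analysis cited above uses the product-limit estimator: at each observed death, survival is multiplied by the fraction of at-risk subjects who survive that time point, while censored subjects simply leave the risk set. A plain-Python sketch with illustrative times and event flags (1 = death, 0 = censored), not the study's data:

```python
# Product-limit estimator: S(t) = prod over deaths at t_i <= t of
# (n_i - d_i) / n_i, where n_i is the number still at risk.
def kaplan_meier(times, events):
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk = len(times)
    survival, curve = 1.0, []
    for i in order:
        if events[i] == 1:                 # death observed at this time
            survival *= (at_risk - 1) / at_risk
            curve.append((times[i], survival))
        at_risk -= 1                       # censored subjects just leave
    return curve

curve = kaplan_meier([2, 4, 4, 6, 8], [1, 1, 0, 1, 0])
# steps: S(2) = 4/5 = 0.8, S(4) = 0.8 * 3/4 = 0.6, S(6) = 0.6 * 1/2 = 0.3
```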


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Enav Yefet ◽  
Avishag Yossef ◽  
Zohar Nachum

Abstract: We aimed to assess risk factors for anemia at delivery by conducting a secondary analysis of a prospective cohort study database including 1,527 women who delivered vaginally at ≥ 36 gestational weeks. Anemia (hemoglobin (Hb) < 10.5 g/dL) was assessed at delivery. Complete blood count results during pregnancy as well as maternal and obstetrical characteristics were collected. The primary endpoint was to determine the Hb cutoff between 24 and 30 gestational weeks that is predictive of anemia at delivery, using the area under the curve (AUC) of the receiver operating characteristic curve. Independent risk factors for anemia at delivery were assessed using stepwise multivariable logistic regression. Hb and infrequent iron supplementation were independent risk factors for anemia at delivery (OR 0.3, 95% CI 0.2–0.4 and OR 2.4, 95% CI 1.2–4.8, respectively; C statistic 83%). Hb of 10.6 g/dL was an accurate cutoff for predicting anemia at delivery (AUC 80%, 95% CI 75–84%; sensitivity 75% and specificity 74%). Iron supplementation was beneficial in preventing anemia regardless of Hb value. Altogether, Hb should be routinely tested between 24 and 30 gestational weeks to screen for anemia. A flow chart for anemia screening and treatment during pregnancy is proposed in the manuscript.
Trial registration: ClinicalTrials.gov Identifier: NCT02434653.
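A predictive cutoff like the one reported here is commonly chosen by scanning candidate thresholds and keeping the one that maximizes Youden's J (sensitivity + specificity − 1); a numpy sketch with illustrative Hb values and anemia labels, not the cohort's data (lower Hb predicts anemia):

```python
import numpy as np

# Scan every observed value as a candidate threshold and keep the one
# with the largest Youden's J statistic.
def best_cutoff(values, anemic):
    values, anemic = np.asarray(values, float), np.asarray(anemic, int)
    best_j, best_t = -1.0, None
    for t in np.unique(values):
        pred = values <= t                   # low Hb flags future anemia
        sens = np.mean(pred[anemic == 1])
        spec = np.mean(~pred[anemic == 0])
        j = sens + spec - 1
        if j > best_j:
            best_j, best_t = j, t
    return best_t, best_j

cutoff, j = best_cutoff([9.8, 10.2, 10.6, 11.5, 12.0, 12.8],
                        [1, 1, 1, 0, 0, 0])
```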


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Yuanyuan Xu ◽  
Genke Yang ◽  
Jiliang Luo ◽  
Jianan He

Electronic component recognition plays an important role in industrial production, electronic manufacturing, and testing. To address the low recognition recall and accuracy of traditional image recognition technologies (such as principal component analysis (PCA) and support vector machines (SVM)), this paper evaluates multiple deep learning networks and optimizes the SqueezeNet network. The paper then presents an electronic component recognition algorithm based on the resulting Faster SqueezeNet network. This structure reduces the number of network parameters and the computational complexity without deteriorating the performance of the network. The results show that the proposed algorithm performs well: the area under the receiver operating characteristic (ROC) curve (AUC) for both capacitors and inductors reaches 1.0, and when the false positive rate (FPR) is at or below the 10⁻⁶ level, the true positive rate (TPR) is at least 0.99. Its inference time is about 2.67 ms, reaching the industrial application level in terms of both time consumption and performance.
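The TPR-at-fixed-FPR claim above corresponds to choosing the decision threshold so the false-positive rate stays within a budget and reading off the true-positive rate at that operating point; a numpy sketch with illustrative scores and labels, not the paper's results:

```python
import numpy as np

# Pick the threshold admitting at most max_fpr of the negatives, then
# report the fraction of positives scored above it.
def tpr_at_fpr(y_true, y_score, max_fpr):
    y_true, y_score = np.asarray(y_true), np.asarray(y_score)
    neg_scores = np.sort(y_score[y_true == 0])[::-1]
    allowed_fp = int(max_fpr * neg_scores.size)   # FPR budget in counts
    threshold = (neg_scores[allowed_fp]
                 if allowed_fp < neg_scores.size else -np.inf)
    preds = y_score > threshold
    return np.mean(preds[y_true == 1])

tpr = tpr_at_fpr([1, 1, 1, 1, 0, 0, 0, 0],
                 [0.9, 0.8, 0.7, 0.3, 0.6, 0.2, 0.1, 0.05], max_fpr=0.25)
```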


2022 ◽  
Vol 14 (2) ◽  
pp. 274
Author(s):  
Mohamed Marzhar Anuar ◽  
Alfian Abdul Halin ◽  
Thinagaran Perumal ◽  
Bahareh Kalantar

In recent years, complex food security issues caused by climate change, limited human labour, and increasing production costs have demanded a strategic approach. Artificial intelligence, enabled by recent advances in computing architectures, could become a new alternative to existing solutions. Deep learning algorithms in computer vision for image classification and object detection can help the agriculture industry, especially paddy cultivation, alleviate human effort in laborious, burdensome, and repetitive tasks. Optimal planting density is a crucial factor in paddy cultivation, as it influences the quality and quantity of production. There have been several studies of planting density using computer vision and remote sensing approaches. While most of these studies have shown promising results, they have disadvantages and leave room for improvement. One disadvantage is that they aim to detect and count all paddy seedlings to determine planting density; the locations of defective paddy seedlings are not identified to help farmers during the sowing process. In this work we explored several deep convolutional neural network (DCNN) models to determine which performs best for defective paddy seedling detection from aerial imagery. We evaluated the accuracy, robustness, and inference latency of one- and two-stage pretrained object detectors combined with state-of-the-art feature extractors such as EfficientNet, ResNet50, and MobileNetV2 as backbones. We also investigated the effect of transfer learning with fine-tuning on the performance of these pretrained models. Experimental results showed that our proposed method detected defective paddy rice seedlings with the highest precision and F1-score, 0.83 and 0.77 respectively, using a one-stage pretrained object detector, EfficientDet-D1 with an EfficientNet backbone.
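Detection precision and F1-score like those reported above are typically computed by matching predicted boxes to ground-truth boxes by intersection-over-union (IoU), with unmatched predictions counting as false positives and unmatched truths as false negatives; a plain-Python sketch with illustrative (x1, y1, x2, y2) boxes, not the study's evaluation code:

```python
# IoU between two axis-aligned boxes.
def iou(a, b):
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area = lambda box: (box[2] - box[0]) * (box[3] - box[1])
    return inter / (area(a) + area(b) - inter)

# Greedy matching: each prediction claims the first unmatched truth it
# overlaps by at least iou_thresh.
def detection_f1(pred_boxes, true_boxes, iou_thresh=0.5):
    unmatched = list(true_boxes)
    tp = 0
    for p in pred_boxes:
        hit = next((t for t in unmatched if iou(p, t) >= iou_thresh), None)
        if hit is not None:
            tp += 1
            unmatched.remove(hit)
    fp, fn = len(pred_boxes) - tp, len(unmatched)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

f1 = detection_f1([(0, 0, 10, 10), (50, 50, 60, 60)],
                  [(1, 1, 10, 10), (80, 80, 90, 90)])
```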


2020 ◽  
Author(s):  
Tuan Pham

Chest X-rays have been found to be very promising for assessing COVID-19 patients, especially for resolving emergency-department and urgent-care-center overcapacity. Deep-learning (DL) methods in artificial intelligence (AI) play a dominant role as high-performance classifiers in the detection of the disease using chest X-rays. While many new DL models have been developed for this purpose, this study investigated the fine-tuning of pretrained convolutional neural networks (CNNs) for the classification of COVID-19 from chest X-rays. Three pretrained CNNs, AlexNet, GoogleNet, and SqueezeNet, were selected and fine-tuned without data augmentation to carry out 2-class and 3-class classification tasks using three public chest X-ray databases. In comparison with other recently developed DL models, the three pretrained CNNs achieved very high classification results in terms of accuracy, sensitivity, specificity, precision, F1 score, and area under the receiver operating characteristic curve. AlexNet, GoogleNet, and SqueezeNet require the least training time among pretrained DL models, yet with suitable selection of training parameters, excellent classification results can be achieved by these networks without data augmentation. The findings address the urgent need to contain the pandemic by facilitating the deployment of AI tools that are fully automated and readily available in the public domain for rapid implementation.
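The fine-tuning idea running through these studies, reusing a pretrained feature extractor and training only a small task-specific head, can be illustrated with a toy numpy sketch in which a fixed random projection stands in for the frozen convolutional layers and a logistic head is trained on synthetic, linearly separable data (not the study's models or data):

```python
import numpy as np

rng = np.random.default_rng(42)
W_frozen = rng.normal(size=(8, 4))           # frozen "pretrained" layer

X = rng.normal(size=(200, 8))                # synthetic new-task inputs
features = np.tanh(X @ W_frozen)             # frozen feature extraction
true_w = rng.normal(size=4)
y = (features @ true_w > 0).astype(float)    # separable synthetic labels

w, b = np.zeros(4), 0.0                      # trainable classification head
for _ in range(500):                         # gradient descent on head only
    p = 1 / (1 + np.exp(-(features @ w + b)))
    grad = p - y
    w -= 0.1 * features.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = np.mean(((features @ w + b) > 0) == y)
```

Only `w` and `b` receive gradient updates; `W_frozen` never changes, which is the essence of fine-tuning just the classifier head on a new task.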

