Using Slit-Lamp Images for Deep Learning-Based Identification of Bacterial and Fungal Keratitis: Model Development and Validation with Different Convolutional Neural Networks

Diagnostics ◽  
2021 ◽  
Vol 11 (7) ◽  
pp. 1246
Author(s):  
Ning Hung ◽  
Andy Kuan-Yu Shih ◽  
Chihung Lin ◽  
Ming-Tse Kuo ◽  
Yih-Shiou Hwang ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between 1 January 2010 and 31 December 2019 from two medical centers in Taiwan. We constructed a deep learning algorithm consisting of a segmentation model for cropping cornea images and a classification model that applies different convolutional neural networks (CNNs) to differentiate between FK and BK. The CNNs included DenseNet121, DenseNet161, DenseNet169, DenseNet201, EfficientNetB3, InceptionV3, ResNet101, and ResNet50. Model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curve. A gradient-weighted class activation mapping technique was used to plot the heat maps of the model. Using 1330 images from 580 patients, the deep learning algorithm achieved a highest average accuracy of 80.0%. Across the different CNNs, the diagnostic accuracy for BK ranged from 79.6% to 95.9%, and that for FK ranged from 26.3% to 65.8%. DenseNet161 showed the best model performance, with an AUC of 0.85 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.
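Since model performance above is reported as the AUC of the ROC curve, a minimal sketch of how that statistic can be computed from per-image scores may be useful. `roc_auc` is a hypothetical helper, not code from the study; it uses the rank (Mann-Whitney) formulation of AUC:

```python
def roc_auc(labels, scores):
    """AUC as the probability that a randomly chosen positive case
    outscores a randomly chosen negative one (ties count half) --
    the Mann-Whitney U view of the ROC area."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

For example, with labels `[0, 0, 1, 1]` and scores `[0.1, 0.4, 0.35, 0.8]` this yields 0.75, matching the standard ROC-curve computation.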

Author(s):  
Ning Hung ◽  
Eugene Yu-Chuan Kang ◽  
Andy Guan-Yu Shih ◽  
Chi-Hung Lin ◽  
Ming-Tse Kuo ◽  
...  

In this study, we aimed to develop a deep learning model for identifying bacterial keratitis (BK) and fungal keratitis (FK) by using slit-lamp images. We retrospectively collected slit-lamp images of patients with culture-proven microbial keratitis between January 1, 2010, and December 31, 2019, from two medical centers in Taiwan. We constructed a deep learning algorithm, consisting of a segmentation model for cropping cornea images and a classification model that applies convolutional neural networks to differentiate between FK and BK. The model performance was evaluated and presented as the area under the curve (AUC) of the receiver operating characteristic curve. A gradient-weighted class activation mapping technique was used to plot the heat map of the model. By using 1330 images from 580 patients, the deep learning algorithm achieved an average diagnostic accuracy of 80.00%. The diagnostic accuracy for BK ranged from 79.59% to 95.91%, and that for FK ranged from 26.31% to 63.15%. DenseNet169 showed the best model performance, with an AUC of 0.78 for both BK and FK. The heat maps revealed that the model was able to identify the corneal infiltrations. The model showed better diagnostic accuracy than the previously reported diagnostic performance of both general ophthalmologists and corneal specialists.


Author(s):  
Fawziya M. Rammo ◽  
Mohammed N. Al-Hamdani

Many language identification (LID) systems rely on language models that use machine learning (ML) approaches; such systems typically require rather long recording periods to achieve satisfactory accuracy. This study aims to extract enough information from short recording intervals to successfully classify the spoken languages under test. The classification process is based on frames of 2–18 seconds, whereas most previous LID systems were based on much longer time frames (from 3 seconds to 2 minutes). This research defined and implemented many low-level features using MFCC (Mel-frequency cepstral coefficients). Speech files in five languages (English, French, German, Italian, Spanish) were drawn from voxforge.org, an open-source corpus consisting of user-submitted audio clips in various languages. A CNN (convolutional neural network) algorithm was applied for classification, and the results were strong: binary language classification achieved an accuracy of 100%, and classification across the five languages achieved an accuracy of 99.8%.
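The frame-based setup described above can be sketched as follows. `split_frames` is a hypothetical helper, not from the paper: it cuts a mono signal into fixed-length segments, each of which would then be passed to MFCC extraction and the CNN classifier:

```python
import numpy as np

def split_frames(signal, sample_rate, frame_sec):
    """Split a 1-D audio signal into non-overlapping frames of
    frame_sec seconds each, dropping any trailing remainder.
    Returns an array of shape (n_frames, samples_per_frame)."""
    samples = int(sample_rate * frame_sec)
    n_frames = len(signal) // samples
    return np.asarray(signal[:n_frames * samples]).reshape(n_frames, samples)
```

For a 5-second clip at 16 kHz split into 2-second frames, this yields two frames of 32,000 samples each; the final partial second is discarded, a simplification the actual pipeline may handle differently.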


2019 ◽  
Vol 11 (23) ◽  
pp. 2858 ◽  
Author(s):  
Tianyu Ci ◽  
Zhen Liu ◽  
Ying Wang

We propose a new convolutional neural network method, combined with ordinal regression, for assessing the degree of building damage caused by earthquakes from aerial imagery. The ordinal regression model and a deep learning algorithm are incorporated to make full use of the available information and improve the accuracy of the assessment. A new loss function is introduced to combine convolutional neural networks with ordinal regression. Assessing the level of damage to buildings can be treated as predicting the ordered labels of the buildings to be assessed. Existing research has usually simplified this as a pure classification problem, which ignores the ordinal relationship between different levels of damage and thus wastes information. Data accumulated throughout history are used to build network models for assessing the level of damage; the deep learning-based models are described in detail, including model construction, implementation methods, and the selection of hyperparameters, and are verified by experiments. When categorizing building damage into four types, the proposed method applied to aerial images from the 2014 Ludian earthquake achieves an overall accuracy of 77.39%; when categorizing damage into two types, the overall accuracy is 93.95%, exceeding the values reported for similar methods.
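One common way to combine a CNN with ordinal regression, in the spirit of the loss described above, is the extended binary decomposition: a K-level ordinal label becomes K-1 "greater than threshold" indicators, and the loss sums binary cross-entropies over those thresholds. This is an illustrative sketch of that idea, not the paper's exact loss function:

```python
import numpy as np

def ordinal_targets(label, num_levels):
    """Encode an ordinal label k in {0..K-1} as K-1 binary
    indicators [k > 0, k > 1, ...], preserving the order of
    damage levels instead of treating them as unrelated classes."""
    return (label > np.arange(num_levels - 1)).astype(float)

def ordinal_loss(logits, label):
    """Sum of binary cross-entropies over the K-1 thresholds;
    `logits` are the CNN's raw outputs, one per threshold."""
    targets = ordinal_targets(label, len(logits) + 1)
    probs = 1.0 / (1.0 + np.exp(-np.asarray(logits, dtype=float)))
    return float(-np.sum(targets * np.log(probs)
                         + (1 - targets) * np.log(1 - probs)))
```

With four damage levels, label 2 encodes as [1, 1, 0]: the building's damage exceeds levels 0 and 1 but not level 2, so a misprediction of an adjacent level is penalized less than a distant one.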


2021 ◽  
Vol 65 (1) ◽  
pp. 11-22
Author(s):  
Mengyao Lu ◽  
Shuwen Jiang ◽  
Cong Wang ◽  
Dong Chen ◽  
Tian’en Chen

Highlights:
- A classification model for the front and back sides of tobacco leaves was developed for application in industry.
- A tobacco leaf grading method that combines a CNN with double-branch integration was proposed.
- The A-ResNet network was proposed and compared with other classic CNN networks.
- The grading accuracy for eight different grades was 91.30% and the testing time was 82.180 ms, showing relatively high classification accuracy and efficiency.

Abstract. Flue-cured tobacco leaf grading is a key step in the production and processing of Chinese-style cigarette raw materials, directly affecting cigarette blend and quality stability. At present, manual grading of tobacco leaves is dominant in China, resulting in unsatisfactory grading quality and consuming considerable material and financial resources. In this study, for fast, accurate, and non-destructive tobacco leaf grading, 2,791 flue-cured tobacco leaves of eight different grades from south Anhui Province, China, were chosen as the study sample, and a tobacco leaf grading method that combines convolutional neural networks and double-branch integration was proposed. First, a classification model for the front and back sides of tobacco leaves was trained by transfer learning. Second, two processing methods (equal-scaled resizing and cropping) were used to obtain global images and local patches from the front sides of tobacco leaves. A global image-based tobacco leaf grading model was then developed using the proposed A-ResNet-65 network, and a local patch-based tobacco leaf grading model was developed using the ResNet-34 network. These two networks were compared with classic deep learning networks, such as VGGNet, GoogLeNet-V3, and ResNet. Finally, the grading results of the two grading models were integrated to realize tobacco leaf grading. The tobacco leaf classification accuracy of the final model, for eight different grades, was 91.30%, and grading of a single tobacco leaf required 82.180 ms. The proposed method achieved relatively high grading accuracy and efficiency. It provides a method for industrial implementation of tobacco leaf grading and offers a new approach for the quality grading of other agricultural products.

Keywords: Convolutional neural network, Deep learning, Image classification, Transfer learning, Tobacco leaf grading
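The double-branch integration step can be illustrated with a minimal sketch: the global-image branch and the local-patch branch each emit class probabilities, which are fused before taking the final grade. The weighted average used here is an assumption for illustration; the paper's integration rule may differ:

```python
import numpy as np

def fuse_predictions(global_probs, patch_probs, weight=0.5):
    """Fuse the global-image branch and local-patch branch class
    probabilities by a weighted average; the final grade is the
    argmax of the fused distribution."""
    fused = (weight * np.asarray(global_probs, dtype=float)
             + (1 - weight) * np.asarray(patch_probs, dtype=float))
    return fused, int(np.argmax(fused))
```

For instance, if the global branch favors grade 0 at [0.6, 0.4] but the patch branch favors grade 1 at [0.2, 0.8], an equal-weight fusion yields [0.4, 0.6] and selects grade 1.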


BMC Genomics ◽  
2019 ◽  
Vol 20 (S9) ◽  
Author(s):  
Yang-Ming Lin ◽  
Ching-Tai Chen ◽  
Jia-Ming Chang

Abstract
Background: Tandem mass spectrometry allows biologists to identify and quantify protein samples in the form of digested peptide sequences. When performing peptide identification, spectral library search is more sensitive than traditional database search but is limited to peptides that have been previously identified. An accurate tandem mass spectrum prediction tool is thus crucial in expanding the peptide space and increasing the coverage of spectral library search.
Results: We propose MS2CNN, a non-linear regression model based on deep convolutional neural networks, a deep learning algorithm. The features for our model are amino acid composition, predicted secondary structure, and physical-chemical features such as isoelectric point, aromaticity, helicity, hydrophobicity, and basicity. MS2CNN was trained with five-fold cross validation on a three-way data split on the large-scale human HCD MS2 dataset of Orbitrap LC-MS/MS downloaded from the National Institute of Standards and Technology. It was then evaluated on a publicly available independent test dataset of human HeLa cell lysate from LC-MS experiments. On average, our model shows better cosine similarity and Pearson correlation coefficient (0.690 and 0.632) than MS2PIP (0.647 and 0.601) and is comparable with pDeep (0.692 and 0.642). Notably, for the more complex MS2 spectra of 3+ peptides, MS2CNN is significantly better than both MS2PIP and pDeep.
Conclusions: We showed that MS2CNN outperforms MS2PIP for 2+ and 3+ peptides and pDeep for 3+ peptides. This implies that MS2CNN, the proposed convolutional neural network model, generates highly accurate MS2 spectra for LC-MS/MS experiments using Orbitrap machines, which can be of great help in protein and peptide identification. The results suggest that incorporating more data into the deep learning model may improve performance.
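The two evaluation metrics reported above, cosine similarity and the Pearson correlation coefficient between predicted and observed spectrum intensity vectors, can be computed with a short self-contained sketch. Note that Pearson correlation is simply cosine similarity applied after mean-centering:

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine of the angle between two intensity vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def pearson(a, b):
    """Pearson correlation = cosine similarity of the
    mean-centered vectors."""
    a = np.asarray(a, dtype=float)
    b = np.asarray(b, dtype=float)
    return cosine_similarity(a - a.mean(), b - b.mean())
```

Identical spectra score 1.0 on both metrics; a predicted spectrum that is a scaled copy of the observed one still scores 1.0 on Pearson correlation, which is why the two metrics are usually reported together.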


2021 ◽  
Author(s):  
Ayumi Koyama ◽  
Dai Miyazaki ◽  
Yuji Nakagawa ◽  
Yuji Ayatsuka ◽  
Hitomi Miyake ◽  
...  

Abstract. Corneal opacities are an important cause of blindness, and their major etiology is infectious keratitis. Slit-lamp examinations are commonly used to determine the causative pathogen; however, their diagnostic accuracy is low even for experienced ophthalmologists. To characterize the "face" of an infected cornea, we adapted a deep learning architecture used for facial recognition and applied it to determine a probability score for a specific pathogen causing keratitis. To record the diverse features and mitigate the uncertainty, batches of probability scores from four serial images, taken from different angles or with fluorescence staining, were learned for score- and decision-level fusion using a gradient boosting decision tree. A total of 4306 slit-lamp images and 312 images obtained from internet publications on keratitis caused by bacteria, fungi, acanthamoeba, and herpes simplex virus (HSV) were studied. The created algorithm had a high overall diagnostic accuracy by group K-fold validation: the accuracy/area under the curve (AUC) was 97.9%/0.995 for acanthamoeba, 90.7%/0.963 for bacteria, 95.0%/0.975 for fungi, and 92.3%/0.946 for HSV, and it was robust even to low-resolution web images. We suggest that our hybrid deep learning-based algorithm be used as a simple and accurate method for computer-assisted diagnosis of infectious keratitis.
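Score-level fusion over a batch of serial images can be sketched as follows: per-image pathogen probability scores are pooled into a single feature vector that a gradient boosting decision tree would then classify. The pooling choice here (mean plus max across images) is an illustrative assumption, not the paper's exact recipe:

```python
import numpy as np

def fuse_scores(image_scores):
    """Pool per-image pathogen probability scores, shape
    (n_images, n_classes), into one feature vector for a
    downstream gradient boosting classifier: the per-class
    mean captures consensus, the per-class max captures the
    most confident single view."""
    scores = np.asarray(image_scores, dtype=float)
    return np.concatenate([scores.mean(axis=0), scores.max(axis=0)])
```

With four serial images and four pathogen classes, this produces an 8-dimensional feature vector; training the tree ensemble on such vectors lets it weigh agreement across viewing angles rather than trusting any single image.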


2021 ◽  
Vol 5 (3) ◽  
pp. 584-593
Author(s):  
Naufal Hilmiaji ◽  
Kemas Muslim Lhaksmana ◽  
Mahendra Dwifebri Purbolaksono

Identifying emotion in text has become increasingly feasible, especially with the advancement of deep learning methods for text classification. Despite some effort to identify emotion in Indonesian tweets, the reported performance has not reached acceptable numbers. To solve this problem, this paper implements a classification model using a convolutional neural network (CNN), which has demonstrated strong performance in text classification. For easy comparison with previous research, classification is performed on the same dataset, which consists of 4,403 Indonesian tweets labeled with five emotion classes: anger, fear, joy, love, and sadness. The evaluation achieves precision, recall, and F1-score of 90.1%, 90.3%, and 90.2%, respectively, while the highest accuracy reaches 89.8%. These results outperform previous research on the same classification task and dataset.
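The reported precision, recall, and F1-score can be computed per class with the standard definitions; `precision_recall_f1` is a hypothetical helper (macro-averaging over the five emotion classes would then average these per-class values):

```python
def precision_recall_f1(y_true, y_pred, positive):
    """Per-class precision, recall, and F1 for one emotion class
    treated as the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1
```

For example, with true labels ['joy', 'anger', 'joy'] and predictions ['joy', 'joy', 'anger'], the 'joy' class has one true positive, one false positive, and one false negative, giving precision, recall, and F1 of 0.5 each.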

