Classification of Dental Radiographs Using Deep Learning

2021 ◽  
Vol 10 (7) ◽  
pp. 1496
Author(s):  
Jose E. Cejudo ◽  
Akhilanand Chaurasia ◽  
Ben Feldberg ◽  
Joachim Krois ◽  
Falk Schwendicke

Objectives: To retrospectively assess radiographic data and to prospectively classify radiographs (namely, panoramic, bitewing, periapical, and cephalometric images), we compared three deep learning architectures for their classification performance. Methods: Our dataset consisted of 31,288 panoramic, 43,598 periapical, 14,326 bitewing, and 1176 cephalometric radiographs from two centers (Berlin/Germany; Lucknow/India). For a subset of images L (32,381 images), image classifications were available and manually validated by an expert. The remaining subset of images U was iteratively annotated using active learning: a ResNet-34 was trained on L, least-confidence informative sampling was performed on U, and the most uncertain image classifications from U were reviewed by a human expert and iteratively used for re-training. We then employed a baseline convolutional neural network (CNN), a residual network (another ResNet-34, pretrained on ImageNet), and a capsule network (CapsNet) for classification. Early stopping was used to prevent overfitting. Model performance was evaluated using stratified k-fold cross-validation. Gradient-weighted Class Activation Mapping (Grad-CAM) was used to visualize the weighted activation maps. Results: All three models showed high accuracy (>98%), with ResNet achieving significantly higher accuracy, F1-score, precision, and sensitivity than the baseline CNN and CapsNet (p < 0.05). Specificity was not significantly different. ResNet achieved the best performance, with small variance and the fastest convergence. Misclassification was most common between bitewings and periapicals. Model activation was most notable in the inter-arch space for bitewings, interdentally for periapicals, on bony structures of the maxilla and mandible for panoramics, and on the viscerocranium for cephalometrics. Conclusions: Regardless of the model, high classification accuracies were achieved. The image features considered for classification were consistent with expert reasoning.
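The least-confidence sampling step can be sketched compactly. Below is a minimal, illustrative PyTorch version, assuming a trained classifier and an unlabeled DataLoader that yields (images, index) pairs; all names are hypothetical, not the authors' code.

```python
import torch

def least_confidence_sampling(model, unlabeled_loader, k=100, device="cpu"):
    """Rank unlabeled images by least confidence (1 - max softmax probability)
    and return the indices of the k most uncertain ones, which would then be
    reviewed by a human expert and added to the labeled pool for re-training."""
    model.eval()
    scores, indices = [], []
    with torch.no_grad():
        for images, idx in unlabeled_loader:
            probs = torch.softmax(model(images.to(device)), dim=1)
            # Uncertainty = 1 - confidence in the top predicted class
            scores.append((1.0 - probs.max(dim=1).values).cpu())
            indices.append(idx)
    scores, indices = torch.cat(scores), torch.cat(indices)
    top = torch.topk(scores, k=min(k, len(scores))).indices
    return indices[top].tolist()
```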

2021 ◽  
Vol 10 (8) ◽  
pp. 1635
Author(s):  
Joachim Krois ◽  
Lisa Schneider ◽  
Falk Schwendicke

Objectives: We aimed to assess the impact of image context information on the accuracy of deep learning models for tooth classification on panoramic dental radiographs. Methods: Our dataset contained 5008 panoramic radiographs with a mean of 25.2 teeth per image. Teeth were segmented bounding-box-wise and classified by one expert; this was validated by another expert. Tooth segments were cropped to allow for different amounts of context: the baseline size was 100% of each box, which was scaled up to capture 150%, 200%, 250%, and 300%. On each of the five generated datasets, ResNet-34 classification models were trained using the Adam optimizer with a learning rate of 0.001 over 25 epochs with a batch size of 16. A total of 20% of the data was used for testing; in subgroup analyses, models were tested only on specific tooth types. Feature visualization using gradient-weighted class activation mapping (Grad-CAM) was employed to visualize salient areas. Results: F1-scores increased monotonically from 0.77 in the base case (100%) to 0.93 on the largest segments (300%; p = 0.0083; Mann–Kendall test). Gains in accuracy were limited between 200% and 300%. This behavior was found for all tooth types except canines, where accuracy was much higher even for smaller segments and increasing context yielded only minimal gains. With increasing context, salient areas were more widely distributed over each segment; at the maximum segment size, the models assessed a minimum of 3–4 teeth, as well as the interdental or inter-arch space, to arrive at a classification. Conclusions: Context matters; classification accuracy increased significantly with increasing context.
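The context scaling itself is a center-preserving enlargement of each bounding box. A minimal sketch, assuming PIL images and boxes given as (x0, y0, x1, y1) pixel coordinates; the function name is illustrative.

```python
from PIL import Image

def crop_with_context(image: Image.Image, box, scale=1.0):
    """Crop a tooth bounding box enlarged by `scale` around its center.

    scale=1.0 reproduces the base case (100% of the box); scale=3.0
    captures 300%, adding neighboring teeth and inter-arch space."""
    x0, y0, x1, y1 = box
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    half_w, half_h = (x1 - x0) / 2 * scale, (y1 - y0) / 2 * scale
    # Clamp to the panoramic radiograph's bounds
    left = max(0, cx - half_w)
    top = max(0, cy - half_h)
    right = min(image.width, cx + half_w)
    bottom = min(image.height, cy + half_h)
    return image.crop((left, top, right, bottom))

# One cropped dataset per context level:
# for scale in (1.0, 1.5, 2.0, 2.5, 3.0): crop_with_context(img, box, scale)
```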


Author(s):  
Yuejun Liu ◽  
Yifei Xu ◽  
Xiangzheng Meng ◽  
Xuguang Wang ◽  
Tianxu Bai

Background: Medical imaging plays an important role in the diagnosis of thyroid diseases. In the field of machine learning, multi-dimensional deep learning algorithms are widely used in image classification and recognition and have achieved great success. Objective: A multi-dimensional deep learning method is employed for the auxiliary diagnosis of thyroid diseases based on SPECT images, and the performances of different deep learning models are evaluated and compared. Methods: Thyroid SPECT images of three types were collected: hyperthyroidism, normal, and hypothyroidism. In pre-processing, the thyroid region of interest was segmented and the data sample was expanded. Four deep learning models (a standard CNN, Inception, VGG16, and an RNN) were used to evaluate the deep learning methods. Results: The deep-learning-based methods showed good classification performance, with accuracies of 92.9%–96.2% and AUCs of 97.8%–99.6%. The VGG16 model performed best, with an accuracy of 96.2% and an AUC of 99.6%; in particular, the VGG16 model with a changing learning rate worked best. Conclusion: The four deep learning models (standard CNN, Inception, VGG16, and RNN) are efficient for the classification of thyroid diseases from SPECT images. The accuracy of the deep-learning-based assisted diagnostic method is higher than that of other methods reported in the literature.
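The abstract does not specify how the learning rate was varied; a common interpretation is a stepwise decay schedule during fine-tuning. A minimal PyTorch sketch under that assumption; the schedule values, epoch count, and train_loader are illustrative, not the authors' settings.

```python
import torch
from torch import nn, optim
from torchvision import models

# Three classes: hyperthyroidism, normal, hypothyroidism
model = models.vgg16(weights=models.VGG16_Weights.IMAGENET1K_V1)
model.classifier[6] = nn.Linear(4096, 3)

optimizer = optim.Adam(model.parameters(), lr=1e-3)
# "Changing learning rate": decay by 10x every 5 epochs (illustrative schedule)
scheduler = optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)
criterion = nn.CrossEntropyLoss()

for epoch in range(20):
    for images, labels in train_loader:  # assumed DataLoader over SPECT ROIs
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()  # lower the learning rate as training progresses
```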


2021 ◽  
pp. 1-11
Author(s):  
Yaning Liu ◽  
Lin Han ◽  
Hexiang Wang ◽  
Bo Yin

Papillary thyroid carcinoma (PTC) is a common thyroid carcinoma. Many benign thyroid nodules have a papillary structure that is easily confused with PTC morphologically, so pathologists must spend considerable time on the differential diagnosis of PTC; moreover, diagnosis relies on personal experience, is inherently subjective, and consistency among observers is difficult to achieve. To address this issue, we applied deep learning to the differential diagnosis of PTC and propose a histological image classification method for PTC based on an Inception-Residual convolutional neural network (IRCNN) and a support vector machine (SVM). First, to expand the dataset and address color inconsistency across histological images, a pre-processing module was constructed that includes color transfer and mirror transforms. Then, to alleviate overfitting of the deep learning model, we optimized the convolutional neural network by combining the Inception and Residual networks to extract image features. Finally, the SVM was trained on image features extracted by the IRCNN to perform the classification task. Experimental results show the effectiveness of the proposed method for the classification of PTC histological images.
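The IRCNN-plus-SVM pipeline follows a standard pattern: a CNN serves as a fixed feature extractor and an SVM is trained on the extracted features. Since the IRCNN architecture itself is not reproduced here, the sketch below substitutes a pretrained ResNet backbone as a stand-in for the feature-extraction role; train_loader and all names are illustrative.

```python
import numpy as np
import torch
from torch import nn
from torchvision import models
from sklearn.svm import SVC

# Stand-in feature extractor: the paper's IRCNN combines Inception and
# Residual blocks; a pretrained ResNet with its classification head removed
# illustrates the same role.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
extractor = nn.Sequential(*list(backbone.children())[:-1])
extractor.eval()

def extract_features(loader):
    feats, labels = [], []
    with torch.no_grad():
        for images, y in loader:
            f = extractor(images).flatten(1)  # (batch, 2048) feature vectors
            feats.append(f.numpy())
            labels.append(y.numpy())
    return np.concatenate(feats), np.concatenate(labels)

X_train, y_train = extract_features(train_loader)  # assumed DataLoader
svm = SVC(kernel="rbf").fit(X_train, y_train)      # SVM does the classification
```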


2021 ◽  
Vol 39 (4) ◽  
pp. 1190-1197
Author(s):  
Y. Ibrahim ◽  
E. Okafor ◽  
B. Yahaya

Manual grid-search tuning of machine learning hyperparameters is very time-consuming. Hence, to curb this problem, we propose the use of a genetic algorithm (GA) for the selection of optimal radial-basis-function-based support vector machine (RBF-SVM) hyperparameters: the regularization parameter C and the kernel coefficient γ. The resulting optimal parameters were used during the training of face recognition models. To train the models, we independently extracted features from the ORL face image dataset using local binary patterns (handcrafted) and deep learning architectures (pretrained variants of VGGNet). The resulting features were passed as input to either a linear-SVM or the optimized RBF-SVM. The results show that the models from the optimized RBF-SVM combined with deep learning or handcrafted features yielded performances that surpass models obtained from the linear-SVM combined with the aforementioned features in most of the data splits. The study demonstrated that it is profitable to optimize the hyperparameters of an SVM to obtain the best classification performance. Keywords: Face Recognition, Feature Extraction, Local Binary Patterns, Transfer Learning, Genetic Algorithm, Support Vector Machines.
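A generic GA over (C, γ) can be sketched with scikit-learn. This is a minimal illustration of the idea; the population size, mutation scale, and log-space search ranges are assumptions, not the authors' settings.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def fitness(ind, X, y):
    # Individuals encode (log10 C, log10 gamma); fitness = CV accuracy
    C, gamma = 10 ** ind[0], 10 ** ind[1]
    return cross_val_score(SVC(C=C, gamma=gamma), X, y, cv=3).mean()

def ga_optimize(X, y, pop_size=20, generations=15):
    # Initial population drawn uniformly in log space (assumed ranges)
    pop = rng.uniform([-2, -5], [4, 1], size=(pop_size, 2))
    for _ in range(generations):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]  # selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            child = np.where(rng.random(2) < 0.5, a, b)      # uniform crossover
            child += rng.normal(0, 0.3, size=2)              # Gaussian mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return 10 ** best[0], 10 ** best[1]  # optimal (C, gamma)
```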


2020 ◽  
Vol 10 (21) ◽  
pp. 7488
Author(s):  
Yutu Yang ◽  
Xiaolin Zhou ◽  
Ying Liu ◽  
Zhongkang Hu ◽  
Fenglong Ding

The deep learning feature extraction method and the extreme learning machine (ELM) classification method are combined to establish a deep extreme learning machine model for wood image defect detection. The convolutional neural network (CNN) algorithm alone tends to provide inaccurate defect locations, incomplete defect contour and boundary information, and inaccurate recognition of defect types. The nonsubsampled shearlet transform (NSST) is used here to preprocess the wood images, which reduces the complexity and computation of the image processing. A CNN is then applied to extract deep features from the wood images. The simple linear iterative clustering algorithm is used to improve the initial model; the obtained image features are used as ELM classification inputs. The ELM has a faster training speed and stronger generalization ability than other similar neural networks, but the random selection of input weights and thresholds degrades the classification accuracy. A genetic algorithm is used here to optimize the initial parameters of the ELM and stabilize the network's classification performance. The deep extreme learning machine can extract high-level abstract information from the data, does not require iterative adjustment of the network weights, has high calculation efficiency, and allows the CNN to effectively extract the wood defect contour. The distributed input data features are automatically expressed in layer form by deep learning pre-training. The wood defect recognition accuracy reached 96.72% with a test time of only 187 ms.
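The ELM's closed-form training is what gives it the speed advantage noted above: the hidden layer is random and only the output weights are solved by least squares. A minimal NumPy sketch (the GA optimization of the random weights is omitted; class and parameter names are illustrative):

```python
import numpy as np

class ELM:
    """Minimal extreme learning machine: a random hidden layer with
    closed-form output weights, so no iterative weight adjustment is
    needed. A GA could optimize W and b instead of drawing them purely
    at random, as the paper proposes."""
    def __init__(self, n_hidden=500, seed=0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X, y_onehot):
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))
        self.b = self.rng.normal(size=self.n_hidden)
        H = np.tanh(X @ self.W + self.b)           # random feature map
        self.beta = np.linalg.pinv(H) @ y_onehot   # least-squares output weights
        return self

    def predict(self, X):
        H = np.tanh(X @ self.W + self.b)
        return (H @ self.beta).argmax(axis=1)      # predicted class indices
```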


2020 ◽  
Author(s):  
Fatimah Alshamari ◽  
Abdou Youssef

Document classification is a fundamental task for many applications, including document annotation, document understanding, and knowledge discovery. This is especially true in STEM fields, where the growth rate of scientific publications is exponential and where document processing and understanding are essential to technological advancement. Classifying a new publication into a specific domain based on the content of the document is an expensive process in terms of cost and time. Therefore, there is a high demand for a reliable document classification system. In this paper, we focus on the classification of mathematics documents, which consist of English text and mathematical formulas and symbols. The paper addresses two key questions. The first is whether math-document classification performance is impacted by math expressions and symbols, either alone or in conjunction with the text contents of documents. Our investigations show that Text-Only embedding produces better classification results. The second is the optimization of a deep learning (DL) model, an LSTM combined with a one-dimensional CNN, for math document classification. We examine the model under several input representations, key design parameters, and decision choices, and identify the best input representation for math document classification.
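A common layout for the LSTM-plus-1D-CNN combination is a convolutional layer over token embeddings feeding an LSTM. A minimal PyTorch sketch of that layout; the layer sizes and the CNN-before-LSTM ordering are assumptions, not the authors' tuned design.

```python
import torch
from torch import nn

class CNNLSTMClassifier(nn.Module):
    """1D CNN over token embeddings followed by an LSTM: the convolution
    picks up local n-gram features, the LSTM models their sequence."""
    def __init__(self, vocab_size, n_classes, embed_dim=128,
                 n_filters=64, kernel_size=5, hidden=128):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.conv = nn.Conv1d(embed_dim, n_filters, kernel_size, padding=2)
        self.pool = nn.MaxPool1d(2)
        self.lstm = nn.LSTM(n_filters, hidden, batch_first=True)
        self.fc = nn.Linear(hidden, n_classes)

    def forward(self, tokens):                    # tokens: (batch, seq_len)
        x = self.embed(tokens).transpose(1, 2)    # (batch, embed_dim, seq_len)
        x = self.pool(torch.relu(self.conv(x)))   # local n-gram features
        x, _ = self.lstm(x.transpose(1, 2))       # sequence modeling
        return self.fc(x[:, -1])                  # last hidden state -> classes
```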


2021 ◽  
Vol 36 (1) ◽  
pp. 443-450
Author(s):  
Mounika Jammula

As of 2020, the total area planted with crops in India exceeded 125.78 million hectares, and India is the second biggest organic product producer in the world; the Indian economy thus depends greatly on farming products. Nowadays, farmers suffer drops in production due to numerous diseases and pests. To overcome this problem, this article presents an artificial intelligence based deep learning approach for plant disease classification. Initially, an adaptive mean bilateral filter (AMBF) is applied for noise removal and enhancement operations. Then, a Gaussian kernel fuzzy C-means (GKFCM) approach is used to segment the affected disease regions. Optimal color, texture, and shape features are extracted using the gray-level co-occurrence matrix (GLCM). Finally, a deep learning convolutional neural network (DLCNN) is used for the classification of five disease classes. The segmentation and classification performance of the proposed method outperforms state-of-the-art approaches.
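GLCM-based texture descriptors are standard and can be computed with scikit-image. A minimal sketch for a segmented leaf region; the distance and angle choices are illustrative, and the color and shape features would be computed separately.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops

def glcm_texture_features(gray_image):
    """Texture descriptors from a gray-level co-occurrence matrix.

    gray_image: 2-D uint8 array (a grayscale crop of the segmented
    disease region)."""
    glcm = graycomatrix(gray_image, distances=[1],
                        angles=[0, np.pi/4, np.pi/2, 3*np.pi/4],
                        levels=256, symmetric=True, normed=True)
    props = ["contrast", "dissimilarity", "homogeneity",
             "energy", "correlation"]
    # Average each property over the four directions
    return np.array([graycoprops(glcm, p).mean() for p in props])
```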


2020 ◽  
Vol 8 (3) ◽  
pp. 234-238
Author(s):  
Nur Choiriyati ◽  
Yandra Arkeman ◽  
Wisnu Ananta Kusuma

An open challenge in bioinformatics is the analysis of metagenomes sequenced from various environments. Several studies have demonstrated bacteria classification at the genus level using k-mers for feature extraction, where higher values of k give better accuracy but are costly in terms of computational resources and time. The spaced k-mers method was used to extract sequence features with the pattern 111 1111 10001, where 1 marks a position that must match and 0 marks a wildcard position that may or may not match. Currently, deep learning provides the best solutions to many problems in image recognition, speech recognition, and natural language processing. In this research, two different deep learning architectures, namely a Deep Neural Network (DNN) and a Convolutional Neural Network (CNN), were trained for the taxonomic classification of metagenome data, with the spaced k-mers method used for feature extraction. The results showed that the DNN classifier reached 90.89% and the CNN classifier 88.89% accuracy at the genus taxonomy level.
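Spaced k-mer extraction keeps bases only at the pattern's match positions, so windows that differ only at wildcard positions collapse to the same feature. A minimal Python sketch, assuming the abstract's pattern with its spaces removed (an assumption about the notation); the example sequence is illustrative.

```python
def spaced_kmer_counts(sequence, pattern):
    """Count spaced k-mers: at '1' positions the base is kept, at '0'
    positions it is treated as a wildcard and dropped, so sequences that
    agree at the '1' positions map to the same feature."""
    keep = [i for i, c in enumerate(pattern) if c == "1"]
    span = len(pattern)
    counts = {}
    for start in range(len(sequence) - span + 1):
        window = sequence[start:start + span]
        kmer = "".join(window[i] for i in keep)
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

# Pattern from the abstract with spaces removed (assumed notation)
features = spaced_kmer_counts("ACGTACGTGGTACCATTG", "111111110001")
```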

