Multi-Task Deep Learning Model for Classification of Dental Implant Brand and Treatment Stage Using Dental Panoramic Radiograph Images

Biomolecules ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 815
Author(s):  
Shintaro Sukegawa ◽  
Kazumasa Yoshii ◽  
Takeshi Hara ◽  
Tamamo Matsuyama ◽  
Katsusuke Yamashita ◽  
...  

It is necessary to accurately identify dental implant brands and the stage of treatment to ensure efficient care. Thus, the purpose of this study was to use multi-task deep learning to investigate a classifier that categorizes implant brands and treatment stages from dental panoramic radiographic images. For objective labeling, 9767 dental implant images covering 12 implant brands and their treatment stages were obtained from the digital panoramic radiographs of patients who underwent procedures at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2020. Five deep convolutional neural network (CNN) models (ResNet18, 34, 50, 101, and 152) were evaluated. The accuracy, precision, recall, specificity, F1 score, and area under the curve score were calculated for each CNN. We also compared the multi-task and single-task accuracies of brand classification and implant treatment stage classification. Our analysis revealed that the larger the number of parameters and the deeper the network, the better the performance for both classifications. Multi-tasking significantly improved brand classification on all performance indicators except recall, and significantly improved all metrics in treatment stage classification. Using CNNs conferred high validity in the classification of dental implant brands and treatment stages, and multi-task learning further improved classification accuracy.

Diagnostics ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 233
Author(s):  
Dong-Woon Lee ◽  
Sung-Yong Kim ◽  
Seong-Nyum Jeong ◽  
Jae-Hong Lee

Fracture of a dental implant (DI) is a rare mechanical complication that is a critical cause of DI failure and explantation. The purpose of this study was to evaluate the reliability and validity of three different deep convolutional neural network (DCNN) architectures (VGGNet-19, GoogLeNet Inception-v3, and an automated DCNN) for the detection and classification of fractured DIs using panoramic and periapical radiographic images. A total of 21,398 DIs were reviewed at two dental hospitals, and 251 intact and 194 fractured DI radiographic images were identified and included as the dataset in this study. All three DCNN architectures achieved a fractured DI detection and classification accuracy of over 0.80 AUC. In particular, the automated DCNN architecture using periapical images showed the highest and most reliable detection (AUC = 0.984, 95% CI = 0.900–1.000) and classification (AUC = 0.869, 95% CI = 0.778–0.929) accuracy compared to the fine-tuned and pre-trained VGGNet-19 and GoogLeNet Inception-v3 architectures. All three DCNN architectures showed acceptable accuracy in the detection and classification of fractured DIs, with the best performance achieved by the automated DCNN architecture using only periapical images.
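The abstract reports AUCs with 95% confidence intervals; a common way to obtain such intervals on a fixed test set is bootstrap resampling. The sketch below is one plausible procedure, not the authors' — the rank-based AUC implementation and the resample count are assumptions.

```python
import numpy as np

def auc_score(y_true, y_score):
    """Rank-based AUC (equivalent to the Mann-Whitney U statistic)."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    pos = y_score[y_true == 1]
    neg = y_score[y_true == 0]
    # Fraction of (positive, negative) pairs ranked correctly,
    # counting ties as half-correct.
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

def bootstrap_auc_ci(y_true, y_score, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for the AUC."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    aucs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(set(y_true[idx])) < 2:   # need both classes in the resample
            continue
        aucs.append(auc_score(y_true[idx], y_score[idx]))
    lo, hi = np.quantile(aucs, [alpha / 2, 1 - alpha / 2])
    return lo, hi
```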


2012 ◽  
Vol 83 (1) ◽  
pp. 117-126 ◽  
Author(s):  
Noriyuki Kitai ◽  
Yousuke Mukai ◽  
Manabu Murabayashi ◽  
Atsushi Kawabata ◽  
Kaei Washino ◽  
...  

Abstract Objective: To investigate measurement errors and head positioning effects on radiographs made with new dental panoramic radiograph equipment that uses tomosynthesis. Materials and Methods: Radiographic images of a simulated human head, or phantom, were made at standard head positions using the new dental panoramic radiograph equipment. Measurement errors were evaluated by comparison with the true values. The phantom was also radiographed at various alternative head positions, and significant differences between measurement values at the standard and alternative head positions were evaluated. Magnification ratios of the dimensions at standard and alternative head positions were calculated. Results: The measurement errors were small for all dimensions. For measurements at 4-mm displacement positions, no dimension differed significantly from the standard value, and all dimensions were within ±5% of the standard values. At 12-mm displacement positions, the magnification ratios for tooth length and mandibular ramus height were within ±5% of the standard values, but those for dental arch width, mandibular width, and mandibular body length exceeded ±5%. Conclusions: Measurement errors on radiographs made using the new panoramic radiograph equipment were small in every direction. At 4-mm head displacement positions, no head positioning effect on the measurements was found. At 12-mm head displacement positions, measurements of vertical dimensions were little affected by head positioning, while those of lateral and anteroposterior dimensions were strongly affected.
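The ±5% criterion above is simply the percentage deviation of a measured dimension from its standard-position value. A worked example (the numeric values are illustrative, not data from the study):

```python
def magnification_ratio(measured, standard):
    """Percentage deviation of a measurement from its standard-position value."""
    return 100.0 * (measured - standard) / standard

def within_tolerance(measured, standard, tol_percent=5.0):
    """True if the measurement deviates from the standard by at most tol_percent."""
    return abs(magnification_ratio(measured, standard)) <= tol_percent

# Illustrative values in mm (not the study's data): a vertical dimension
# at a displaced head position versus its standard-position value.
print(within_tolerance(measured=52.0, standard=50.5))   # +2.97% -> True
print(within_tolerance(measured=44.0, standard=50.5))   # -12.87% -> False
```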


2018 ◽  
Vol 4 ◽  
pp. e154 ◽  
Author(s):  
Kelwin Fernandes ◽  
Davide Chicco ◽  
Jaime S. Cardoso ◽  
Jessica Fernandes

Cervical cancer remains a significant cause of mortality worldwide, even though it can be prevented and cured by removing affected tissues in its early stages. Providing universal and efficient access to cervical screening programs is a challenge that requires, among other steps, identifying vulnerable individuals in the population. In this work, we present a computationally automated strategy for predicting the outcome of a patient's biopsy, given risk patterns from individual medical records. We propose a machine learning technique that allows a joint and fully supervised optimization of dimensionality reduction and classification models. We also build a model able to highlight relevant properties in the low-dimensional space, to ease the classification of patients. We instantiated the proposed approach with deep learning architectures and achieved accurate prediction results (top area under the curve, AUC = 0.6875) that outperform previously developed methods such as denoising autoencoders. Additionally, we explored some clinical findings from the embedding spaces and validated them through the medical literature, making them reliable for physicians and biomedical researchers.
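The joint, fully supervised optimization of dimensionality reduction and classification described above can be sketched as a single network in which a low-dimensional bottleneck and a classifier share one loss, so the embedding is shaped by the classification objective rather than learned separately. This is a minimal sketch under assumed layer sizes, not the authors' architecture:

```python
import torch
import torch.nn as nn

class JointReducerClassifier(nn.Module):
    """Dimensionality reduction (encoder to a low-dim embedding) and
    classification trained jointly under one supervised loss."""
    def __init__(self, n_features=32, embed_dim=2, n_classes=2):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(n_features, 16), nn.ReLU(),
            nn.Linear(16, embed_dim),        # low-dimensional embedding
        )
        self.classifier = nn.Linear(embed_dim, n_classes)

    def forward(self, x):
        z = self.encoder(x)                  # embedding, inspectable afterwards
        return self.classifier(z), z
```

Because the embedding `z` is returned alongside the logits, it can be plotted or probed after training, which is how properties of the low-dimensional space can be examined.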


2021 ◽  
Author(s):  
Jeoung Kun Kim ◽  
Yoo Jin Choo ◽  
Gyu Sang Choi ◽  
Hyunkwang Shin ◽  
Min Cheol Chang ◽  
...  

Abstract Background: The videofluoroscopic swallowing study (VFSS) is currently considered the gold standard for precisely diagnosing and quantitatively investigating dysphagia. However, VFSS interpretation is complex and requires consideration of several factors. Purpose: Therefore, considering the expected impact on dysphagia management, this study aimed to apply deep learning to automatically detect the presence of penetration or aspiration in the VFSS of patients with dysphagia. Materials and Methods: The VFSS data of 190 participants with dysphagia were collected. From each patient's VFSS video, a total of 10 frame images from one swallowing process were selected (five high-peak images and five low-peak images) for deep learning. We applied a convolutional neural network (CNN) implemented in the Python programming language. VFSS findings (normal swallowing, penetration, and aspiration) were classified separately in the high-peak and low-peak images, and the two resulting classifications were then integrated into a final classification. Results: For the validation dataset, the area under the curve (AUC) of the CNN model was 0.946 for normal findings, 0.885 for penetration, and 1.000 for aspiration; the average AUC was 0.962. Conclusion: This study demonstrated that deep learning algorithms, particularly CNNs, can be applied to detect the presence of penetration and aspiration in the VFSS of patients with dysphagia.
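The final step above — integrating the high-peak and low-peak classifications into one decision — can be done in several ways; averaging the two classifiers' class probabilities and taking the argmax is one simple fusion rule. The abstract does not specify the rule used, so the scheme below is an assumption:

```python
import numpy as np

CLASSES = ["normal", "penetration", "aspiration"]

def fuse_classifications(high_peak_probs, low_peak_probs):
    """Average the class probabilities from the high-peak and low-peak
    frame classifiers and take the argmax as the final label."""
    mean_probs = (np.asarray(high_peak_probs) + np.asarray(low_peak_probs)) / 2.0
    return CLASSES[int(np.argmax(mean_probs))], mean_probs

# Example: both classifiers lean toward penetration, with different confidence.
label, probs = fuse_classifications([0.1, 0.7, 0.2], [0.2, 0.5, 0.3])
# -> "penetration", mean probabilities [0.15, 0.6, 0.25]
```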


2021 ◽  
Author(s):  
Soheil Ashkani-Esfahani ◽  
Reze Mojahed Yazdi ◽  
Rohan Bhimani ◽  
Gino M Kerkhoffs ◽  
Mario Maas ◽  
...  

Early and accurate detection of ankle fractures is crucial for reducing future complications. Radiographs are the most widely available imaging technique for assessing fractures. We believe deep learning (DL) methods, through adequately trained deep convolutional neural networks (DCNNs), can assess radiographic images quickly and accurately without human intervention. Herein, we aimed to assess the performance of two different DCNNs in detecting ankle fractures on radiographs compared to the ground truth. In this retrospective study, our DCNNs were trained using radiographs obtained from 1050 patients with ankle fracture and the same number of individuals with otherwise healthy ankles. Inception V3 and ResNet50 pretrained models were used in our algorithms, and the Danis-Weber classification method was applied. Of the 1050 fracture patients, 72 were labeled as having occult fractures, as these were not detected in the primary radiographic assessment. Training the DCNNs on single-view radiographs was compared with training on 3 views (anteroposterior, mortise, lateral). Our DCNNs performed better with 3-view images than with a single view, with greater values for accuracy, F-score, and area under the curve (AUC). The sensitivity and specificity in the detection of ankle fractures using 3 views were 97.5% and 93.9% with ResNet50, compared to 98.7% and 98.6% with Inception V3, respectively. ResNet50 missed 3 occult fractures, while Inception V3 missed only one case. Clinical Significance: The performance of our DCNNs showed promising potential that could be considered in developing currently used image interpretation programs, or as a separate assistant to clinicians for detecting ankle fractures faster and more precisely.
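The sensitivity and specificity figures above follow directly from confusion-matrix counts. For reference (the counts below are illustrative, chosen to reproduce roughly the 3-view ResNet50 percentages, not the study's raw data):

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); specificity = TN / (TN + FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative: 975 of 1000 fractures detected, 939 of 1000 healthy
# ankles correctly ruled out.
sens, spec = sensitivity_specificity(tp=975, fn=25, tn=939, fp=61)
# sens = 0.975 (97.5%), spec = 0.939 (93.9%)
```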


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1572
Author(s):  
Byung Su Kim ◽  
Han Gyeol Yeom ◽  
Jong Hyun Lee ◽  
Woo Sang Shin ◽  
Jong Pil Yun ◽  
...  

The purpose of this study was to determine whether convolutional neural networks (CNNs) can predict paresthesia of the inferior alveolar nerve using panoramic radiographic images before extraction of the mandibular third molar. The dataset consisted of a total of 300 preoperative panoramic radiographic images of patients who had planned mandibular third molar extraction. A total of 100 images taken of patients who had paresthesia after tooth extraction were classified as Group 1, and 200 images taken of patients without paresthesia were classified as Group 2. The dataset was randomly divided into a training and validation set (n = 150 [50%]), and a test set (n = 150 [50%]). CNNs of SSD300 and ResNet-18 were used for deep learning. The average accuracy, sensitivity, specificity, and area under the curve were 0.827, 0.84, 0.82, and 0.917, respectively. This study revealed that CNNs can assist in the prediction of paresthesia of the inferior alveolar nerve after third molar extraction using panoramic radiographic images.


2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Toshihito Takahashi ◽  
Kazunori Nozaki ◽  
Tomoya Gonda ◽  
Tomoaki Mameno ◽  
Masahiro Wada ◽  
...  

Abstract Background In some cases, a dentist cannot solve the difficulties a patient has with an implant because the implant system is unknown. Therefore, there is a need for a system for identifying the implant system of a patient from limited data that does not depend on the dentist's knowledge and experience. The purpose of this study was to identify dental implant systems using a deep learning method. Methods A dataset of 1282 panoramic radiograph images with implants was used for deep learning. An object detection algorithm (YOLOv3) was used to identify six implant systems from three manufacturers. To implement the algorithm, the TensorFlow and Keras deep-learning libraries were used. After training was complete, the true positive (TP) ratio and average precision (AP) of each implant system, as well as the mean AP (mAP) and mean intersection over union (mIoU), were calculated to evaluate the performance of the model. Results The number of instances of each implant system varied from 240 to 1919. The TP ratio and AP of each implant system varied from 0.50 to 0.82 and from 0.51 to 0.85, respectively. The mAP and mIoU of this model were 0.71 and 0.72, respectively. Conclusions The results of this study suggest that implants can be identified from panoramic radiographic images using deep learning-based object detection. This identification system could help dentists as well as patients suffering from implant problems. However, more images of other implant systems will be necessary to increase the learning performance before this system can be applied in clinical practice.
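The mIoU reported above averages the intersection over union between predicted and ground-truth bounding boxes. IoU for axis-aligned boxes is computed as below — this is the standard formulation used in object-detection evaluation, not code from the study:

```python
def iou(box_a, box_b):
    """Intersection over union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    # Overlap rectangle (width/height clamp to zero if the boxes are disjoint)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0
```

A detection is typically counted as a true positive when its IoU with a ground-truth box of the same class exceeds a threshold (0.5 is the usual default), which is how the TP ratio and AP above are derived.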


Diagnostics ◽  
2020 ◽  
Vol 10 (11) ◽  
pp. 910
Author(s):  
Jae-Hong Lee ◽  
Young-Taek Kim ◽  
Jong-Bin Lee ◽  
Seong-Nyum Jeong

In this study, the efficacy of an automated deep convolutional neural network (DCNN) was evaluated for the classification of dental implant systems (DISs), and its accuracy was compared against that of dental professionals using dental radiographic images collected from three dental hospitals. A total of 11,980 panoramic and periapical radiographic images with six different types of DISs were divided into training (n = 9584) and testing (n = 2396) datasets. To compare the accuracy of the trained automated DCNN with that of dental professionals (six board-certified periodontists, eight periodontology residents, and 11 residents not specialized in periodontology), 180 images were randomly selected from the test dataset. The AUC, Youden index, sensitivity, and specificity of the automated DCNN were 0.954, 0.808, 0.955, and 0.853, respectively. The automated DCNN outperformed most of the participating dental professionals, including board-certified periodontists, periodontology residents, and residents not specialized in periodontology. The automated DCNN was highly effective in classifying similar shapes of different types of DISs based on dental radiographic images. Further studies are necessary to determine the efficacy and feasibility of applying an automated DCNN in clinical practice.
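The Youden index reported above is sensitivity + specificity − 1, maximized over classification thresholds along the ROC curve. A minimal computation (illustrative, not the study's code):

```python
import numpy as np

def youden_index(y_true, y_score):
    """Maximum of sensitivity + specificity - 1 over all score thresholds."""
    y_true = np.asarray(y_true)
    y_score = np.asarray(y_score)
    best = 0.0
    for thr in np.unique(y_score):
        pred = y_score >= thr
        sens = np.mean(pred[y_true == 1])    # true positive rate
        spec = np.mean(~pred[y_true == 0])   # true negative rate
        best = max(best, sens + spec - 1.0)
    return best
```

A Youden index of 1 corresponds to a threshold that separates the classes perfectly; 0 means the scores are no better than chance.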

