Automatic Detection of Mandibular Fractures in Panoramic Radiographs Using Deep Learning

Diagnostics ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 933
Author(s):  
Dong-Min Son ◽  
Yeong-Ah Yoon ◽  
Hyuk-Ju Kwon ◽  
Chang-Hyeon An ◽  
Sung-Hak Lee

Mandibular fracture is one of the most frequent injuries in oral and maxillofacial surgery. Radiologists diagnose mandibular fractures using panoramic radiography and cone-beam computed tomography (CBCT). Panoramic radiography is a conventional imaging modality that is less complicated than CBCT. This paper proposes a method for diagnosing mandibular fractures in panoramic radiographs based on a deep learning system, without the intervention of radiologists. The deep learning system uses a one-stage detector called You Only Look Once (YOLO). To improve detection accuracy, the panoramic radiographs used as input images are augmented with gamma modulation, multi-bounding boxes, a single-scale luminance adaptation transform, and a multi-scale luminance adaptation transform. The YOLO-based deep learning approach showed better detection performance than the conventional method. Hence, it can help radiologists double-check the diagnosis of mandibular fractures.
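
To illustrate the gamma-modulation augmentation mentioned above, here is a minimal Python sketch; the gamma values and the 8-bit grayscale input format are assumptions for illustration, not the settings reported by the authors.

```python
import numpy as np

def gamma_modulate(image: np.ndarray, gammas=(0.6, 0.8, 1.2, 1.5)) -> list:
    """Return gamma-modulated copies of an 8-bit grayscale radiograph.

    The gamma values here are illustrative; the abstract does not list the
    exact values used for augmentation.
    """
    normalized = image.astype(np.float32) / 255.0
    return [np.uint8(255.0 * normalized ** g) for g in gammas]

# Example on a synthetic panoramic-radiograph-sized array
radiograph = np.random.randint(0, 256, (512, 1024), dtype=np.uint8)
augmented = gamma_modulate(radiograph)
print(len(augmented), augmented[0].shape)  # 4 (512, 1024)
```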

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Yoshitaka Kise ◽  
Takuma Funakoshi ◽  
Motoki Fukuda ◽  
...  

Abstract Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process, to verify its performance in comparison with human observers, and to clarify characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models on DetectNet. Models 1 and 2 were developed based on the data without and with normal subjects, respectively, to detect CAs and classify them as with or without CP. Model 2 reduced the false positive rate (1/30) compared with Model 1 (12/30). The overall accuracy of Model 2 was higher than that of Model 1 and the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs, and might be useful to assist human observers.
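
As a quick sketch of the false-positive comparison above (the counts are taken from the abstract; the helper function is illustrative):

```python
def false_positive_rate(false_positives: int, normal_images: int) -> float:
    """Fraction of normal (no-CA) radiographs on which the detector fired."""
    return false_positives / normal_images

# Figures reported above: Model 1 flagged 12 of 30 normal images, Model 2 only 1 of 30.
print(f"Model 1 FPR: {false_positive_rate(12, 30):.3f}")  # 0.400
print(f"Model 2 FPR: {false_positive_rate(1, 30):.3f}")   # 0.033
```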


2021 ◽  
Author(s):  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Motoki Fukuda ◽  
Tsutomu Kuwada ◽  
Kenichi Gotoh ◽  
...  

Abstract Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of the present study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique, with and without normal data in the learning process, to verify its performance, and to clarify characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP group) or without cleft palate (CA only group) and 210 patients without CA (normal group) were used to create two learning models on DetectNet. Models 1 and 2 were developed based on the data with and without normal subjects, respectively, to detect CAs and classify them into the CA only and CA with CP groups. Model 2 reduced the false positive rate (1/30) compared with Model 1 (12/30). Model 2 performed better than Model 1 on almost all metrics, with no difference in recall for the CA with CP group. The model created in the present study appears to have the potential to detect and classify CAs on panoramic radiographs.


Computers ◽  
2020 ◽  
Vol 9 (4) ◽  
pp. 104
Author(s):  
Saraswati Sridhar ◽  
Vidya Manian

Electroencephalogram signals are used to assess neurodegenerative diseases and to develop sophisticated brain-machine interfaces for rehabilitation and gaming. Most applications use only motor imagery or evoked potentials. Here, a deep learning network based on a sensory-motor paradigm (auditory, olfactory, movement, and motor imagery), employing a subject-agnostic Bidirectional Long Short-Term Memory (BLSTM) network, is developed to assess cognitive function and to identify its relationship with brain signal features hypothesized to consistently indicate cognitive decline. Testing was performed with healthy subjects aged 20–40, 40–60, and >60 years, and with mildly cognitively impaired (MCI) subjects. Auditory and olfactory stimuli were presented to the subjects, and the subjects imagined and performed movements of each arm while electroencephalogram (EEG) and electromyogram (EMG) signals were recorded. A deep BLSTM neural network is trained with principal component features from the evoked signals and assesses the corresponding pathways. Wavelet analysis is used to decompose the evoked signals and calculate the band power of the component frequency bands. This deep learning system performs better than conventional deep neural networks in detecting MCI. Most of the features studied peaked in the 40–60 age range and were lower for the MCI group than for any other group tested. The detection accuracy of left-hand motor imagery signals best indicated cognitive aging (p = 0.0012); the mean classification accuracy per age group declined from 91.93% to 81.64%, and was 69.53% for MCI subjects. Motor-imagery-evoked band power, particularly in the gamma bands, best indicated cognitive aging among the spectral features (p = 0.007). The classification accuracy of the potentials most effectively distinguished cognitive aging from MCI (p < 0.05), followed by gamma-band power.
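
The wavelet band-power step can be sketched as follows with the PyWavelets package; the sampling rate, wavelet family, decomposition level, and band labels are assumptions, since the abstract does not specify them.

```python
import numpy as np
import pywt

def wavelet_band_power(eeg: np.ndarray, wavelet: str = "db4", level: int = 5) -> dict:
    """Decompose a single-channel EEG epoch and return mean power per sub-band.

    With a 256 Hz sampling rate and a 5-level decomposition, the detail levels
    roughly map to high-gamma (64-128 Hz), gamma (32-64 Hz), beta (16-32 Hz),
    alpha (8-16 Hz), theta (4-8 Hz), and the approximation to delta (<4 Hz);
    these choices are illustrative assumptions.
    """
    coeffs = pywt.wavedec(eeg, wavelet, level=level)  # [cA5, cD5, cD4, cD3, cD2, cD1]
    names = ["delta", "theta", "alpha", "beta", "gamma", "high-gamma"]
    return {name: float(np.mean(c ** 2)) for name, c in zip(names, coeffs)}

# Example on a synthetic 2-second epoch sampled at 256 Hz
epoch = np.random.randn(512)
print(wavelet_band_power(epoch))
```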


2021 ◽  
pp. 20200513
Author(s):  
Su-Jin Jeon ◽  
Jong-Pil Yun ◽  
Han-Gyeol Yeom ◽  
Woo-Sang Shin ◽  
Jong-Hyun Lee ◽  
...  

Objective: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened and 1020 patients were selected. Our dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and set as the gold standard. A CNN-based deep learning model for predicting C-shaped canals was built using Xception. The training and test sets comprised 80% and 20% of the data, respectively. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and the area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1, 92.7, 97.0, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging into the apex to predict the C-shaped canals, while the root furcation was predominantly used for predicting the non-C-shaped canals. Conclusions: The deep learning system had significant accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.
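
A minimal Grad-CAM sketch in the spirit of the heatmaps described above, assuming a Keras model; the layer name suggested for Xception and the overall setup are assumptions, not the authors' implementation.

```python
import numpy as np
import tensorflow as tf

def grad_cam(model, image, last_conv_layer_name, class_index=None):
    """Compute a Grad-CAM heatmap for one preprocessed image of shape (H, W, 3).

    `last_conv_layer_name` must name a convolutional layer in `model`; for
    Keras' Xception this is typically "block14_sepconv2_act" (an assumption).
    """
    grad_model = tf.keras.Model(
        model.inputs,
        [model.get_layer(last_conv_layer_name).output, model.output],
    )
    with tf.GradientTape() as tape:
        conv_out, preds = grad_model(image[np.newaxis, ...])
        if class_index is None:
            class_index = tf.argmax(preds[0])
        class_score = preds[:, class_index]
    grads = tape.gradient(class_score, conv_out)         # d(score)/d(feature map)
    weights = tf.reduce_mean(grads, axis=(0, 1, 2))      # global-average-pool the gradients
    cam = tf.reduce_sum(conv_out[0] * weights, axis=-1)  # weighted sum of feature maps
    cam = tf.nn.relu(cam)                                # keep positive influence only
    return (cam / (tf.reduce_max(cam) + 1e-8)).numpy()   # normalize to [0, 1]
```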


2020 ◽  
Author(s):  
Sogol Ghassemzadeh ◽  
Luca Sbricoli ◽  
Anna Chiara Frigo ◽  
Christian Bacci

Abstract Objectives This study aimed to assess the prevalence of incidental findings, not strictly related to dentistry, seen on panoramic radiographs. Methods Panoramic radiographs performed between December 2013 and June 2016 were retrospectively collected. These images were analyzed for incidental findings, and all the information collected was statistically analysed. Results A total of 2307 panoramic radiographs were analyzed and 2017 of them were included in the study. 529 incidental findings (IF) were seen: 255 (48.2%) were elongation of the styloid process (ESP), 167 (31.57%) were carotid artery calcification (CAC), 36 (6.8%) were maxillary sinus pathologies, and 71 (13.42%) were other incidental findings. The total prevalence of IF was 26.23%. The prevalence of CAC was 8.28% in the total population and was higher in women (9.82%) than in men (6.54%). 48.5% of CACs were bilateral; when unilateral, they were more frequent on the right side. The prevalence of ESP was 12.64% in the total population (men: 13.82%; women: 11.60%). 84.71% of ESPs were bilateral and, when present unilaterally, no side difference was seen. 13.33% of the ESPs appeared segmented. The prevalence of maxillary sinus pathologies was 1.78% (men: 2.32%; women: 1.31%). Only 8.33% of these pathologies were bilateral and, when unilateral, they were mostly present on the right side. Among the 71 other IFs (prevalence: 3.52%), sialoliths and tonsilloliths were the most frequent. Conclusion Given the high prevalence of incidental findings detected with panoramic radiography, dental practitioners should be aware of the various pathologic conditions seen on panoramic radiographs.
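
The reported prevalences can be reproduced directly from the counts stated in the abstract; a simple arithmetic check (only the figures above are used):

```python
# Quick arithmetic check of the prevalences reported above.
total_radiographs = 2017
counts = {"ESP": 255, "CAC": 167, "maxillary sinus pathology": 36, "other IF": 71}

for finding, n in counts.items():
    print(f"{finding}: {n / total_radiographs:.2%}")  # 12.64%, 8.28%, 1.78%, 3.52%
print(f"all incidental findings: {sum(counts.values()) / total_radiographs:.2%}")  # 26.23%
```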


Scanning ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Lun Zhao ◽  
Yunlong Pan ◽  
Sen Wang ◽  
Liang Zhang ◽  
Md Shafiqul Islam

The scanning electron microscope (SEM) is widely used in the analysis and research of materials, including fracture analysis, microstructure morphology, and nanomaterial analysis. With the rapid development of materials science and computer vision technology, the level of detection technology is constantly improving. In this paper, a deep learning method is used to intelligently identify microcracks in the microscopic morphology of SEM images. An image-level deep learning model is selected to reduce the interference of other complex microscopic topography, and a detection method with dense, continuous bounding boxes suitable for SEM images is proposed. The dense, continuous bounding boxes are used to capture the local features of the cracks, and the boxes are rotated to reduce the feature differences between them. Finally, bounding boxes with filled regression are used to highlight the microcrack detection effect. The results show that the detection accuracy of this approach reached 71.12%, and the highest mIoU reached 64.13%. Microcracks at different magnifications and against different backgrounds were also detected successfully.
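
For readers unfamiliar with the mIoU metric quoted above, a minimal sketch of how IoU is computed for axis-aligned boxes; the paper's rotated boxes would need an extended variant, and the coordinates below are made up for illustration.

```python
def box_iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)

# Mean IoU over matched prediction/ground-truth pairs (invented coordinates)
pairs = [((10, 10, 50, 40), (12, 8, 52, 44)), ((100, 20, 140, 60), (98, 22, 135, 58))]
miou = sum(box_iou(pred, gt) for pred, gt in pairs) / len(pairs)
print(f"mIoU: {miou:.3f}")
```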


Author(s):  
Dihong Gong ◽  
Daisy Zhe Wang

We consider the problem of automatically extracting visual objects from web images. Despite the extraordinary advancement of deep learning, visual object detection remains a challenging task. To overcome the deficiency of purely visual techniques, we propose to make use of the meta text surrounding images on the Web for enhanced detection accuracy. In this paper, we present a multimodal learning algorithm that integrates text information into visual knowledge extraction. To demonstrate the effectiveness of our approach, we developed a system that takes raw webpages as input and automatically extracts visual knowledge (e.g., object bounding boxes) from tens of millions of images crawled from the Web. Experimental results on 46 object categories show that the extraction precision is improved significantly from 73% (with state-of-the-art deep learning programs) to 81%, which is equivalent to a 31% reduction in error rates.
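
The relationship between the precision gain and the quoted reduction in error rate can be checked with a line of arithmetic; the rounded precisions stated above are used here, and the unrounded values presumably account for the reported ~31%.

```python
# Error rate = 1 - precision; relative reduction when precision rises from 73% to 81%.
baseline_error = 1.0 - 0.73   # 0.27
improved_error = 1.0 - 0.81   # 0.19
print(f"{(baseline_error - improved_error) / baseline_error:.1%}")  # ~29.6%
```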


Author(s):  
Oleksandr Nosyr ◽  
Serhii Khrulenko

The purpose of this essay is to present the multiple patterns of the duplication sign at the mandibular fracture line/gap, visualized on panoramic radiography as a two-line fracture gap or pseudocomminuted fracture. We retrospectively reviewed the orthopantomograms of patients with mandible fractures and present nine patients with 12 duplication signs (also known as a lambda-course fracture line). On panoramic radiographs, a fracture line/gap with the duplication sign is visualized as a two-line cortical bone discontinuity, with or without dislocation, because the fracture gap runs asymmetrically on the external and internal cortical plates of the jaw. Knowledge of duplication sign patterns and artifacts is crucial for precise diagnosis and the choice of the correct management strategy.


2020 ◽  
Vol 49 (6) ◽  
pp. 20190290
Author(s):  
Zeynep Betül Arslan ◽  
Hilal Demir ◽  
Dila Berker Yıldız ◽  
Füsun Yaşar

Objectives: The purpose of this study was to compare the accuracy of imaging techniques in diagnosing periapical lesions. Methods: Imaging records of 80 patients (51 females, 29 males, aged between 14 and 75 years), including periapical and panoramic radiographs and ultrasonographic images, were selected from the databases of Selcuk University Dentistry Faculty. Periapical radiographs were accepted as the gold standard, and 160 anterior maxillary and mandibular teeth with or without periapical lesions were included in the study. Three specialist observers (dental radiologists) evaluated the presence and appearance of periapical lesions on panoramic radiographs and ultrasonographic images. Sensitivity, specificity, positive predictive value, negative predictive value, and diagnostic accuracy of panoramic radiographs and ultrasonography were determined. Results: Sensitivity was 0.80 for ultrasonographic images and 0.77 for panoramic radiographs, meaning that a periapical lesion was correctly detected in 80% of the cases with ultrasound and in 77% of the cases with panoramic radiography. Specificity was 0.97 for ultrasound and 0.95 for panoramic radiography. Overall diagnostic accuracy was 0.86 for ultrasound and 0.84 for panoramic radiography. Conclusions: Periapical and panoramic radiographs are commonly used to visualize periapical lesions. In addition, ultrasonography is an alternative to digital radiographic techniques in the diagnosis of anterior teeth with periapical lesions.
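
A small sketch of how such diagnostic indices follow from a 2x2 contingency table against the gold standard; the counts below are invented for illustration, not the study's data.

```python
def diagnostic_indices(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Standard indices from a 2x2 table of test results vs. the gold standard."""
    return {
        "sensitivity": tp / (tp + fn),                # lesions correctly detected
        "specificity": tn / (tn + fp),                # healthy teeth correctly cleared
        "ppv": tp / (tp + fp),                        # positive predictive value
        "npv": tn / (tn + fn),                        # negative predictive value
        "accuracy": (tp + tn) / (tp + fp + fn + tn),  # overall diagnostic accuracy
    }

# Invented counts for illustration only
print(diagnostic_indices(tp=62, fp=2, fn=18, tn=78))
```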


2021 ◽  
Author(s):  
Shankeeth Vinayahalingam ◽  
Steven Kempers ◽  
Lorenzo Limon ◽  
Dionne Deibel ◽  
Thomas Maal ◽  
...  

Abstract The objective of this study is to assess the diagnostic accuracy of dental caries detection on panoramic radiographs using deep learning algorithms. A convolutional neural network (CNN) based on MobileNet V2 was trained on a reference data set consisting of 400 cropped panoramic images for the detection of carious lesions in mandibular and maxillary third molars. For this pilot study, the trained MobileNet V2 was applied to a test set consisting of 100 cropped OPGs. The detection accuracy and the area under the curve (AUC) were calculated. The proposed method achieved an accuracy of 0.87, a sensitivity of 0.87, a specificity of 0.86, and an AUC of 0.90 for the detection of carious lesions of third molars on OPGs. A high diagnostic accuracy was achieved in caries detection in third molars with the MobileNet V2 algorithm as presented. This is beneficial for the further development of a deep-learning-based automated third molar removal assessment in the future.
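
A minimal sketch of a MobileNet V2-based binary classifier of the kind described above, assuming TensorFlow/Keras; the input size, classification head, and training settings are assumptions rather than the authors' configuration.

```python
import tensorflow as tf

# ImageNet-pretrained MobileNetV2 backbone with a small binary head
# (carious vs. non-carious crop); all hyperparameters are illustrative.
base = tf.keras.applications.MobileNetV2(
    input_shape=(224, 224, 3), include_top=False, weights="imagenet"
)
base.trainable = False  # start with the pretrained features frozen

model = tf.keras.Sequential([
    base,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dropout(0.2),
    tf.keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(
    optimizer=tf.keras.optimizers.Adam(1e-4),
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.AUC(name="auc")],
)
model.summary()
```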

