Panoptic Segmentation on Panoramic Radiographs: Deep Learning-Based Segmentation of Various Structures Including Maxillary Sinus and Mandibular Canal

2021 ◽  
Vol 10 (12) ◽  
pp. 2577
Author(s):  
Jun-Young Cha ◽  
Hyung-In Yoon ◽  
In-Sung Yeo ◽  
Kyung-Hoe Huh ◽  
Jung-Suk Han

Panoramic radiographs, also known as orthopantomograms, are routinely used in most dental clinics. However, it has been difficult to develop an automated method that detects the various structures present in these radiographs. One of the main reasons for this is that structures of various sizes and shapes appear together in a single image. To address this problem, the recently proposed concept of panoptic segmentation, which integrates instance segmentation and semantic segmentation, was applied to panoramic radiographs. A state-of-the-art deep neural network model designed for panoptic segmentation was trained to segment the maxillary sinus, maxilla, mandible, mandibular canal, normal teeth, treated teeth, and dental implants on panoramic radiographs. Unlike conventional semantic segmentation, each object in the tooth and implant classes was classified individually. For evaluation, the panoptic quality, segmentation quality, recognition quality, intersection over union (IoU), and instance-level IoU were calculated. The evaluation and visualization results showed that the deep learning-based artificial intelligence model can perform panoptic segmentation of structures, including the maxillary sinus and mandibular canal, on panoramic radiographs. This automated machine learning method may assist dental practitioners in setting up treatment plans and diagnosing oral and maxillofacial diseases.
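For reference, panoptic quality (PQ) is conventionally computed as the product of segmentation quality (SQ) and recognition quality (RQ). The sketch below illustrates that standard decomposition; it is not the authors' evaluation code, and the IoU values in the usage example are hypothetical.

```python
# A minimal sketch (not the authors' evaluation code) of how panoptic quality (PQ)
# factors into segmentation quality (SQ) and recognition quality (RQ).
def panoptic_quality(matched_ious, false_positives, false_negatives):
    """matched_ious: IoUs of predicted segments matched to ground truth (IoU > 0.5)."""
    tp = len(matched_ious)
    if tp == 0:
        return 0.0, 0.0, 0.0
    sq = sum(matched_ious) / tp                                      # mean IoU over true positives
    rq = tp / (tp + 0.5 * false_positives + 0.5 * false_negatives)   # F1-like recognition term
    return sq * rq, sq, rq                                           # PQ = SQ * RQ

# Hypothetical example: three matched teeth (IoUs 0.9, 0.8, 0.7) and one missed tooth.
pq, sq, rq = panoptic_quality([0.9, 0.8, 0.7], false_positives=0, false_negatives=1)
print(f"PQ={pq:.3f}  SQ={sq:.3f}  RQ={rq:.3f}")
```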

Author(s):  
Vitoantonio Bevilacqua ◽  
Antonio Brunetti ◽  
Giacomo Donato Cascarano ◽  
Andrea Guerriero ◽  
Francesco Pesce ◽  
...  

Abstract Background The automatic segmentation of kidneys in medical images is not a trivial task when the subjects undergoing the examination are affected by Autosomal Dominant Polycystic Kidney Disease (ADPKD). Several works dealing with the segmentation of Computed Tomography images from pathological subjects have been proposed, but they either involve a highly invasive examination or require user interaction to perform the segmentation. In this work, we propose a fully automated approach for the segmentation of Magnetic Resonance images, both reducing the invasiveness of the acquisition and removing the need for user interaction during segmentation. Methods Two approaches based on Deep Learning architectures using Convolutional Neural Networks (CNN) are proposed for the semantic segmentation of images, without the need to extract any hand-crafted features. In detail, the first approach performs automatic segmentation of the images without any pre-processing of the input. Conversely, the second approach follows a two-step classification strategy: a first CNN automatically detects Regions Of Interest (ROIs), and a subsequent classifier performs semantic segmentation on the extracted ROIs. Results Even though ROI detection produces an overall high number of false positives, the subsequent semantic segmentation on the extracted ROIs achieves high performance in terms of mean Accuracy. However, segmenting the entire input images remains the most accurate and reliable approach, outperforming the two-step strategy. Conclusion The obtained results show that both investigated approaches are reliable for the semantic segmentation of polycystic kidneys, since both strategies reach an Accuracy higher than 85%. Both methodologies also show performance comparable and consistent with other approaches in the literature working on images from different sources, while reducing both the invasiveness of the analyses and the user interaction needed to perform the segmentation task.
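To make the two-step strategy concrete, the sketch below (Python/PyTorch) chains two placeholder CNNs: the first proposes a kidney ROI from an MR slice, the second segments only the cropped ROI. The network definitions, threshold, and image size are assumptions for illustration, not the architectures used in the paper.

```python
# A minimal sketch (assumed architectures, not the authors' networks) of the
# two-step strategy: ROI detection followed by semantic segmentation on the ROI.
import torch
import torch.nn as nn

class RoiDetector(nn.Module):
    """Placeholder CNN: predicts a coarse foreground probability map used to crop an ROI."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 1, 1), nn.Sigmoid())
    def forward(self, x):
        return self.net(x)

class RoiSegmenter(nn.Module):
    """Placeholder CNN: labels each pixel of the cropped ROI as kidney / background."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(8, 2, 1))
    def forward(self, x):
        return self.net(x)

detector, segmenter = RoiDetector().eval(), RoiSegmenter().eval()
mri = torch.rand(1, 1, 256, 256)                       # one MR slice (dummy data)
with torch.no_grad():
    heat = detector(mri)[0, 0]                         # coarse kidney probability map
    ys, xs = torch.nonzero(heat > 0.5, as_tuple=True)  # pixels flagged as kidney
    if len(ys) > 0:
        y0, y1 = int(ys.min()), int(ys.max()) + 1      # bounding box around the ROI
        x0, x1 = int(xs.min()), int(xs.max()) + 1
        roi = mri[:, :, y0:y1, x0:x1]
        mask = segmenter(roi).argmax(dim=1)            # per-pixel class inside the ROI
```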


2020 ◽  
Author(s):  
Sogol Ghassemzadeh ◽  
Luca Sbricoli ◽  
Anna Chiara Frigo ◽  
Christian Bacci

Abstract Objectives This study aimed to assess the prevalence of incidental findings, not strictly related to dentistry, visible on panoramic radiographs. Methods Panoramic radiographs performed between December 2013 and June 2016 were retrospectively collected. These images were examined for incidental findings, and all the information collected was statistically analyzed. Results A total of 2307 panoramic radiographs were analyzed and 2017 of them were included in the study. 529 incidental findings were seen: 255 (48.2%) were elongation of the styloid process (ESP), 167 (31.57%) were carotid artery calcification (CAC), 36 (6.8%) were maxillary sinus pathologies, and 71 (13.42%) were other incidental findings. The total prevalence of incidental findings was 26.23%. The prevalence of CAC was 8.28% in the total population, and it was higher in women (9.82%) than in men (6.54%). 48.5% of CACs were bilateral; when unilateral, they were more frequently found on the right side. The prevalence of ESP was 12.64% in the total population (men: 13.82%; women: 11.60%). 84.71% of ESPs were bilateral and, when present unilaterally, no side difference was seen. 13.33% of the ESPs appeared segmented. The prevalence of maxillary sinus pathologies was 1.78% (men: 2.32%; women: 1.31%). Only 8.33% of these pathologies were bilateral, and, when unilateral, they were mostly present on the right side. Among the 71 other incidental findings (prevalence: 3.52%), sialoliths and tonsilloliths were observed most frequently. Conclusion Given the high prevalence of incidental findings detected with panoramic radiography, dental practitioners should be aware of the various pathologic conditions that may be seen on panoramic radiographs.
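As a quick arithmetic check, the reported shares and prevalences follow directly from the raw counts and the 2017 included radiographs; the short Python snippet below reproduces them (values agree up to rounding).

```python
# Reproducing the reported percentages from the raw counts (not the study's code).
included = 2017                                   # radiographs included in the study
counts = {"ESP": 255, "CAC": 167, "maxillary sinus pathology": 36, "other": 71}

total_findings = sum(counts.values())             # 529 incidental findings
print(f"total prevalence: {100 * total_findings / included:.2f}%")   # ~26.23%
for finding, n in counts.items():
    share = 100 * n / total_findings              # share of all incidental findings
    prevalence = 100 * n / included               # prevalence in the included sample
    print(f"{finding}: {share:.2f}% of findings, {prevalence:.2f}% prevalence")
```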


2020 ◽  
pp. 20200171 ◽  
Author(s):  
Ryosuke Kuwana ◽  
Yoshiko Ariji ◽  
Motoki Fukuda ◽  
Yoshitaka Kise ◽  
Michihito Nozawa ◽  
...  

Objective: The first aim of this study was to determine the performance of a deep learning object detection technique in the detection of maxillary sinuses on panoramic radiographs. The second aim was to clarify its performance in classifying maxillary sinus lesions compared with healthy maxillary sinuses. Methods: The imaging data for healthy maxillary sinuses (587 sinuses, Class 0), inflamed maxillary sinuses (416 sinuses, Class 1), and cysts of maxillary sinus regions (171 sinuses, Class 2) were assigned to training, testing 1, and testing 2 data sets. A learning process of 1000 epochs with the training images and labels was performed using DetectNet, and a learning model was created. The testing 1 and testing 2 images were applied to the model, and the detection sensitivities and false-positive rates per image were calculated. The accuracies, sensitivities, and specificities were determined for distinguishing the inflammation group (Class 1) and the cyst group (Class 2) from the healthy group (Class 0). Results: Detection sensitivities for healthy (Class 0) and inflamed (Class 1) maxillary sinuses were 100% for both the testing 1 and testing 2 data sets, whereas they were 98% and 89% for cysts of the maxillary sinus regions (Class 2). False-positive rates per image were nearly 0.00. Accuracies, sensitivities, and specificities for diagnosing maxillary sinusitis were 90–91%, 88–85%, and 91–96%, respectively; for cysts of the maxillary sinus regions, these values were 97–100%, 80–100%, and 100–100%, respectively. Conclusion: Deep learning could reliably detect the maxillary sinuses and identify maxillary sinusitis and cysts of the maxillary sinus regions. Advances in knowledge: This study, using a deep learning object detection technique, indicated that the detection sensitivities for maxillary sinuses were high and that the performance of maxillary sinus lesion identification was ≥80%. In particular, the performance of sinusitis identification was ≥90%.
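The diagnostic figures above are standard confusion-matrix metrics. The sketch below shows how accuracy, sensitivity, and specificity are derived from true/false positive and negative counts; the counts in the usage example are hypothetical, not taken from the study.

```python
# A minimal sketch (assumed helper, not the study's code) of the metrics used to
# compare lesion classes against healthy sinuses.
def classification_metrics(tp, fp, tn, fn):
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)      # lesions correctly flagged as lesions
    specificity = tn / (tn + fp)      # healthy sinuses correctly labelled healthy
    return accuracy, sensitivity, specificity

# Hypothetical counts for sinusitis (Class 1) vs. healthy (Class 0):
acc, sens, spec = classification_metrics(tp=44, fp=5, tn=95, fn=6)
print(f"accuracy={acc:.2f}, sensitivity={sens:.2f}, specificity={spec:.2f}")
```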


Author(s):  
E. Gülch ◽  
L. Obrock

Abstract. In this paper, we present an improved approach to enriching photogrammetric point clouds with semantic information extracted from images, to enable later automation of BIM modelling. Based on the DeepLabv3+ architecture, we use semantic segmentation of images to extract building components and interior objects. During the photogrammetric reconstruction, we project the segmented categories into the point cloud. Any interpolations that occur during this process are corrected automatically, and we achieve an mIoU of 51.9% in the classified point cloud. Based on the semantic information, we align the point cloud, correct the scale and extract further information. Our investigation confirms that utilizing photogrammetry and Deep Learning to generate a semantically enriched point cloud of interiors achieves good results. The combined extraction of geometric and semantic information has high potential for automated BIM model reconstruction.
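A minimal sketch of the label-projection step described above: each reconstructed 3D point is projected into a segmented image and inherits the class of the pixel it falls on. The pinhole camera model and function signature are assumptions for illustration, not the authors' implementation.

```python
# A minimal sketch (assumed camera model, not the authors' pipeline) of carrying
# per-pixel semantic labels from a segmented image onto reconstructed 3D points.
import numpy as np

def project_labels(points, K, R, t, label_map):
    """points: (N, 3) world coordinates; K: 3x3 intrinsics; R, t: extrinsics;
    label_map: (H, W) per-pixel class ids from semantic segmentation."""
    cam = R @ points.T + t.reshape(3, 1)                 # world -> camera frame
    pix = K @ cam                                        # camera -> homogeneous pixels
    u = np.round(pix[0] / pix[2]).astype(int)
    v = np.round(pix[1] / pix[2]).astype(int)
    h, w = label_map.shape
    valid = (cam[2] > 0) & (u >= 0) & (u < w) & (v >= 0) & (v < h)
    labels = np.full(len(points), -1)                    # -1 = not visible in this view
    labels[valid] = label_map[v[valid], u[valid]]        # point inherits the pixel's class
    return labels
```

In a multi-view reconstruction the same point receives a label from every image it is visible in, and the final class can be chosen by majority vote across views.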


2019 ◽  
Vol 53 (3) ◽  
pp. 281-294
Author(s):  
Jean-Michel Foucart ◽  
Augustin Chavanne ◽  
Jérôme Bourriau

Many contributions of Artificial Intelligence (AI) are envisaged in medicine. In orthodontics, several automated solutions have been available for a few years in X-ray imaging (automated cephalometric analysis, automated airway analysis) or for a few months (automatic analysis of digital models, automated set-up; CS Model +, Carestream Dental™). The objective of this two-part study is to evaluate the reliability of automated model analysis, both in terms of digitization and segmentation. Comparing the model analysis results obtained automatically with those obtained by several orthodontists demonstrates the reliability of the automatic analysis; the measurement error ultimately ranges between 0.08 and 1.04 mm, which is not significant and is comparable to the inter-observer measurement errors reported in the literature. These results open new perspectives on the contribution of AI to orthodontics, which, based on deep learning and big data, should make it possible, in the medium term, to move towards more preventive and more predictive orthodontics.


2020 ◽  
Vol 11 (1) ◽  
pp. 22-26
Author(s):  
S.V. Tsymbal

The digital revolution has transformed the way people access information, communicate and learn. It is teachers' responsibility to set up environments and opportunities for deep learning experiences that can uncover and boost learners' capacities. Twenty-first century competences can be seen as necessary to navigate contemporary and future life, shaped by technology that changes workplaces and lifestyles. This study explores the concept of digital competence and provides insight into the European Framework for the Digital Competence of Educators.


Impact ◽  
2020 ◽  
Vol 2020 (2) ◽  
pp. 9-11
Author(s):  
Tomohiro Fukuda

Mixed reality (MR) is rapidly becoming a vital tool, not just in gaming, but also in education, medicine, construction and environmental management. The term refers to systems in which computer-generated content is superimposed over objects in a real-world environment across one or more sensory modalities. Although most of us have heard of the use of MR in computer games, it also has applications in military and aviation training, as well as tourism, healthcare and more. In addition, it has potential for use in architecture and design, where buildings can be superimposed on existing locations to render 3D versions of plans. However, one major challenge that remains in MR development is real-time occlusion: correctly hiding 3D virtual objects behind real-world objects. Dr Tomohiro Fukuda, who is based at the Division of Sustainable Energy and Environmental Engineering, Graduate School of Engineering at Osaka University in Japan, is an expert in this field. Researchers led by Dr Fukuda are tackling the issue of occlusion in MR. They are currently developing an MR system that realises real-time occlusion by harnessing deep learning, using a semantic segmentation technique for outdoor landscape design simulation. This methodology can be used to automatically estimate the visual environment before and after construction projects.
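As an illustration of segmentation-driven occlusion handling, the sketch below composites a rendered virtual building over a camera frame while preserving pixels that a semantic segmentation network labels as real foreground objects (for example trees or people). The class ids and array layout are assumptions for illustration, not Dr Fukuda's system.

```python
# A minimal sketch (assumed data layout, not the actual MR system) of occlusion
# handling via semantic segmentation: real foreground pixels keep the camera image,
# all other pixels may be overwritten by the rendered virtual content.
import numpy as np

def composite_with_occlusion(camera_frame, virtual_render, virtual_alpha, seg_labels,
                             occluder_classes=(1, 2)):   # e.g. 1 = tree, 2 = person (assumed ids)
    """camera_frame, virtual_render: (H, W, 3); virtual_alpha, seg_labels: (H, W)."""
    occluder = np.isin(seg_labels, occluder_classes)      # real objects in front of the model
    alpha = np.where(occluder, 0.0, virtual_alpha)[..., None]
    blended = alpha * virtual_render + (1.0 - alpha) * camera_frame
    return blended.astype(camera_frame.dtype)
```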


IEEE Access ◽  
2020 ◽  
pp. 1-1
Author(s):  
Jeremy M. Webb ◽  
Duane D. Meixner ◽  
Shaheeda A. Adusei ◽  
Eric C. Polley ◽  
Mostafa Fatemi ◽  
...  
