Identification of dental implants using deep learning—pilot study

2020 ◽  
Vol 6 (1) ◽  
Author(s):  
Toshihito Takahashi ◽  
Kazunori Nozaki ◽  
Tomoya Gonda ◽  
Tomoaki Mameno ◽  
Masahiro Wada ◽  
...  

Abstract. Background: In some cases, a dentist cannot solve the difficulties a patient has with an implant because the implant system is unknown. There is therefore a need for a system that identifies a patient's implant system from limited data, without depending on the dentist's knowledge and experience. The purpose of this study was to identify dental implant systems using a deep learning method. Methods: A dataset of 1282 panoramic radiograph images containing implants was used for deep learning. An object detection algorithm (YOLOv3) was used to identify six implant systems from three manufacturers. The algorithm was implemented with the TensorFlow and Keras deep-learning libraries. After training was complete, the true positive (TP) ratio and average precision (AP) of each implant system, as well as the mean AP (mAP) and mean intersection over union (mIoU), were calculated to evaluate the performance of the model. Results: The number of samples of each implant system varied from 240 to 1919. The TP ratio and AP of each implant system varied from 0.50 to 0.82 and from 0.51 to 0.85, respectively. The mAP and mIoU of the model were 0.71 and 0.72, respectively. Conclusions: These results suggest that implants can be identified from panoramic radiographic images using deep learning-based object detection. Such an identification system could help dentists as well as patients suffering from implant problems. However, more images of other implant systems will be necessary to increase the learning performance before this system can be applied in clinical practice.
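
The evaluation metrics reported above can be illustrated with a minimal sketch. The `iou` and `true_positive_ratio` helpers below are illustrative, not the authors' code; the TP ratio here assumes predictions have already been matched one-to-one with ground-truth boxes:

```python
def iou(box_a, box_b):
    # Boxes are (x1, y1, x2, y2). IoU = intersection area / union area.
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def true_positive_ratio(pred_boxes, truth_boxes, threshold=0.5):
    # Simplified: predictions are assumed already matched one-to-one
    # with ground truths; a pair counts as a TP when IoU >= threshold.
    tp = sum(1 for p, t in zip(pred_boxes, truth_boxes)
             if iou(p, t) >= threshold)
    return tp / len(truth_boxes) if truth_boxes else 0.0
```

Averaging `iou` over all matched pairs gives an mIoU-style figure; AP additionally sweeps a confidence threshold over ranked detections.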

Biomolecules ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 815
Author(s):  
Shintaro Sukegawa ◽  
Kazumasa Yoshii ◽  
Takeshi Hara ◽  
Tamamo Matsuyama ◽  
Katsusuke Yamashita ◽  
...  

It is necessary to accurately identify dental implant brands and the stage of treatment to ensure efficient care. The purpose of this study was therefore to use multi-task deep learning to investigate a classifier that categorizes implant brands and treatment stages from dental panoramic radiographic images. For objective labeling, 9767 dental implant images covering 12 implant brands and their treatment stages were obtained from the digital panoramic radiographs of patients who underwent procedures at Kagawa Prefectural Central Hospital, Japan, between 2005 and 2020. Five deep convolutional neural network (CNN) models (ResNet18, 34, 50, 101, and 152) were evaluated. The accuracy, precision, recall, specificity, F1 score, and area under the curve score were calculated for each CNN. We also compared the multi-task and single-task accuracies of brand classification and implant treatment stage classification. Our analysis revealed that the larger the number of parameters and the deeper the network, the better the performance for both classifications. Multi-task learning significantly improved brand classification on all performance indicators except recall, and significantly improved all metrics in treatment stage classification. Using CNNs conferred high validity in the classification of dental implant brands and treatment stages, and multi-task learning improved analysis accuracy.
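
The multi-task setup described above (one shared feature extractor feeding separate brand and treatment-stage heads) can be sketched with a joint loss. This NumPy toy is an assumption-laden illustration, not the paper's implementation; the function names, linear heads, and equal loss weighting are all hypothetical:

```python
import numpy as np

def softmax(z):
    # Numerically stable softmax over the last axis.
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def multitask_loss(features, w_brand, w_stage, y_brand, y_stage, alpha=0.5):
    # One shared feature batch feeds two task-specific linear heads;
    # the joint objective is a weighted sum of the two cross-entropies.
    p_brand = softmax(features @ w_brand)
    p_stage = softmax(features @ w_stage)
    n = len(y_brand)
    ce_brand = -np.log(p_brand[np.arange(n), y_brand]).mean()
    ce_stage = -np.log(p_stage[np.arange(n), y_stage]).mean()
    return alpha * ce_brand + (1 - alpha) * ce_stage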


2018 ◽  
Vol 14 (1) ◽  
pp. 21-27
Author(s):  
Arjun Bhandari ◽  
Archana Manandhar ◽  
Raj Kumar Singh ◽  
Pramita Suwal ◽  
Prakash Kumar Parajuli

Background & Objectives: The study was conducted with the objective of comparing the horizontal condylar guidance (HCG) obtained by protrusive interocclusal records and by panoramic radiographic images in completely edentulous patients. Materials & Methods: The horizontal condylar guidance was measured in 25 completely edentulous patients by protrusive interocclusal records using zinc oxide eugenol paste, transferred with a face bow (Hanau™ Spring bow, Whip Mix Corporation, USA) to a semi-adjustable articulator (Hanau™ Wide-Vue Articulator, Whip Mix Corporation, USA). In the same patients, the HCG was traced on the panoramic radiograph. The angle formed by the intersection of two lines, the Frankfurt horizontal plane and the posterior slope of the articular eminence, was measured with a protractor to represent the horizontal condylar guidance angle on each side. Results: The mean difference between the horizontal condylar guidance angles obtained using the protrusive interocclusal record and the panoramic radiograph was 2.68 degrees for the right side and 3.40 degrees for the left side, with the panoramic radiograph values being higher. This difference between the two methods was highly significant for the right side (t = 2.70, p = 0.012) and the left side (t = 3.69, p = 0.001). A significant positive correlation was found between the horizontal condylar guidance obtained from the protrusive interocclusal record and the panoramic radiograph for the right (r = 0.643, p = 0.001) and left sides (r = 0.622, p = 0.001) separately. Conclusion: Panoramic radiographic tracing can be used to calculate the mean horizontal condylar guidance in completely edentulous patients, and these values can be used to programme semi-adjustable articulators, avoiding the cumbersome process of obtaining protrusive interocclusal records.
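
The radiographic tracing step reduces to measuring the angle between two lines, the Frankfurt horizontal plane and the posterior slope of the articular eminence. The helper below is a hypothetical sketch, not part of the study; it computes that angle from two landmark points per line:

```python
import math

def condylar_guidance_angle(frankfurt_line, eminence_line):
    # Each line is a pair of (x, y) landmark points on the tracing;
    # the HCG angle is the acute angle between the two lines.
    (x1, y1), (x2, y2) = frankfurt_line
    (x3, y3), (x4, y4) = eminence_line
    a1 = math.atan2(y2 - y1, x2 - x1)
    a2 = math.atan2(y4 - y3, x4 - x3)
    angle = abs(math.degrees(a1 - a2)) % 180.0
    return min(angle, 180.0 - angle)
```

A digital version of the protractor measurement would apply this once per side, using landmarks picked on the panoramic image.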


2020 ◽  
Vol 10 (14) ◽  
pp. 4744
Author(s):  
Hyukzae Lee ◽  
Jonghee Kim ◽  
Chanho Jung ◽  
Yongchan Park ◽  
Woong Park ◽  
...  

The arena fragmentation test (AFT) is one of the tests used to design an effective warhead. Conventionally, complex and expensive measuring equipment is used to test a warhead and measure important factors such as the size, velocity, and spatial distribution of the fragments that penetrate steel target plates. In this paper, instead of using specific sensors and equipment, we propose a deep learning-based object detection algorithm to detect fragments in the AFT. To this end, we acquired many high-speed videos and built an AFT image dataset with bounding boxes of warhead fragments. Our method fine-tunes an existing object detection network, Faster R-CNN (region-based convolutional neural network), on this dataset, with the network's anchor boxes modified to suit the fragments. We also employ a novel temporal filtering method, which was demonstrated to be an effective non-fragment filtering scheme in our previous image-processing-based fragment detection approach, to capture only the first penetrating fragments among all detected fragments. We show that the performance of the proposed method is comparable to that of a sensor-based system under the same experimental conditions. A quantitative comparison with our previous image-processing-based method also shows that deep learning significantly enhances detection performance: the proposed method outperforms the previous method and produces outstanding results in finding exact fragment positions.
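
The temporal filtering idea, keeping only the first fragment that penetrates at each location, can be sketched as follows. This is an illustrative reconstruction, not the authors' implementation: it suppresses any detection that overlaps a box already accepted in an earlier frame.

```python
def iou(a, b):
    # Intersection-over-union of two (x1, y1, x2, y2) boxes.
    iw = max(0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def first_penetrations(frames, iou_thresh=0.3):
    # frames: per-frame lists of detected boxes, in time order.
    # A detection survives only if it does not overlap any box already
    # accepted in an earlier frame, so each penetration is counted once.
    kept, seen = [], []
    for boxes in frames:
        fresh = [b for b in boxes
                 if all(iou(b, s) < iou_thresh for s in seen)]
        kept.append(fresh)
        seen.extend(fresh)
    return kept
```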


Sensors ◽  
2018 ◽  
Vol 18 (8) ◽  
pp. 2484 ◽  
Author(s):  
Weixing Zhang ◽  
Chandi Witharana ◽  
Weidong Li ◽  
Chuanrong Zhang ◽  
Xiaojiang Li ◽  
...  

Traditional methods of detecting and mapping utility poles are inefficient and costly because they demand visual interpretation of high-quality data sources or intensive field inspection. The advent of deep learning (DL) for object detection provides an opportunity to detect utility poles from side-view optical images. In this study, we propose a deep learning-based method for automatically mapping roadside utility poles with crossarms (UPCs) from Google Street View (GSV) images. The method combines a state-of-the-art DL object detection algorithm (RetinaNet) with a modified brute-force line-of-bearing (LOB) measurement method to estimate the locations of the detected roadside UPCs; an LOB is the ray from the sensor's location (the GSV mobile platform) toward the target (here, a UPC). Experimental results indicate that: (1) both the average precision (AP) and the overall accuracy (OA) are around 0.78 when the intersection-over-union (IoU) threshold is greater than 0.3, based on testing 500 GSV images containing a total of 937 objects; and (2) around 2.6%, 47%, and 79% of the estimated utility pole locations fall within 1 m, 5 m, and 10 m buffer zones, respectively, around the referenced locations. In general, this study indicates that even against a complex background, most utility poles can be detected with DL, and the LOB measurement method can estimate the locations of most UPCs.
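
The LOB measurement reduces to intersecting two bearings taken from two known sensor positions. The helper below is a hypothetical planar sketch, not the study's code (the study works in geographic coordinates with a modified brute-force search):

```python
import math

def lob_intersection(p1, bearing1, p2, bearing2):
    # p1, p2: (x, y) sensor positions; bearings are in degrees,
    # measured from the x-axis. Solve p1 + t*d1 = p2 + u*d2 for the
    # crossing point of the two lines of bearing.
    d1 = (math.cos(math.radians(bearing1)), math.sin(math.radians(bearing1)))
    d2 = (math.cos(math.radians(bearing2)), math.sin(math.radians(bearing2)))
    denom = d1[0] * d2[1] - d1[1] * d2[0]
    if abs(denom) < 1e-12:
        return None  # parallel lines of bearing never cross
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    t = (dx * d2[1] - dy * d2[0]) / denom
    return (p1[0] + t * d1[0], p1[1] + t * d1[1])
```

Two GSV captures of the same pole from different positions along the street yield two bearings, and their intersection is the estimated pole location.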


Author(s):  
Aofeng Li ◽  
Xufang Zhu ◽  
Shuo He ◽  
Jiawei Xia

Abstract: In view of the deficiencies of traditional visual water surface object detection, such as the existence of non-detection zones and the failure to acquire global information, and the deficiencies of the single-shot multibox detector (SSD) object detection algorithm, such as poor remote detection and low detection precision for small objects, this study proposes a water surface object detection algorithm for panoramic vision based on an improved SSD. We reconstruct the backbone network of the SSD algorithm, replacing VGG16 with a ResNet-50 network and adding five layers of feature extraction. Richer semantic information for the shallow feature maps is obtained through a feature pyramid network structure with deconvolution. An experiment was conducted on a purpose-built water surface object dataset. The results show that the mean average precision (mAP) of the improved algorithm is increased by 4.03% compared with the existing SSD detection algorithm. The improved algorithm effectively raises the overall detection precision for water surface objects and enhances the detection of remote objects.
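
The feature-pyramid fusion described above, enriching a shallow feature map with deeper semantics, can be sketched in NumPy. Note that the paper uses learned deconvolution for upsampling; this toy substitutes nearest-neighbour upsampling to stay dependency-free, so it shows only the fusion pattern, not the trained operation:

```python
import numpy as np

def fpn_merge(deep, shallow):
    # Upsample the deeper (coarser) 2-D feature map by 2x and add it
    # to the shallow map, FPN-style, so shallow features gain the
    # deeper layer's semantic information.
    up = deep.repeat(2, axis=0).repeat(2, axis=1)
    return up[:shallow.shape[0], :shallow.shape[1]] + shallow
```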


2021 ◽  
Vol 1995 (1) ◽  
pp. 012046
Author(s):  
Meian Li ◽  
Haojie Zhu ◽  
Hao Chen ◽  
Lixia Xue ◽  
Tian Gao

2020 ◽  
Vol 28 (S2) ◽  
Author(s):  
Asmida Ismail ◽  
Siti Anom Ahmad ◽  
Azura Che Soh ◽  
Mohd Khair Hassan ◽  
Hazreen Haizi Harith

The object detection system is a computer technology, related to image processing and computer vision, that detects instances of semantic objects of a certain class in digital images and videos. The system consists of two main processes: classification and detection. Once an object instance has been classified and detected, it is possible to obtain further information, including recognizing the specific instance, tracking the object over an image sequence, and extracting further information about the object and the scene. This paper presents a performance analysis of a deep learning object detector built by combining a deep learning convolutional neural network (CNN) for object classification with classic object detection algorithms. MiniVGGNet is the network architecture used to train the object classifier, and the data used for this purpose were collected from a specific indoor building environment. For object detection, sliding windows and image pyramids were used to localize and detect objects at different locations and scales, and non-maximum suppression (NMS) was used to obtain the final bounding box localizing each object. Based on the experimental results, the classification accuracy of the network is 80% to 90%, and the time for the system to detect an object is less than 15 s/frame. The experiments show that it is reasonable and efficient to combine a classic object detection method with a deep learning classification approach. The method works in some specific use cases and effectively addresses the problem of inaccurate classification and detection of typical features.
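
The sliding-window-plus-pyramid localization scheme can be sketched as two generators. These helpers are illustrative, not the paper's code; nearest-neighbour downscaling stands in for proper image resampling:

```python
import numpy as np

def pyramid(image, scale=1.5, min_size=(32, 32)):
    # Yield the image, then progressively smaller copies, until it is
    # too small to fit the detection window.
    yield image
    while True:
        h = int(image.shape[0] / scale)
        w = int(image.shape[1] / scale)
        if h < min_size[0] or w < min_size[1]:
            break
        # Nearest-neighbour downscaling keeps the sketch dependency-free.
        rows = np.arange(h) * image.shape[0] // h
        cols = np.arange(w) * image.shape[1] // w
        image = image[rows][:, cols]
        yield image

def sliding_windows(image, step, size):
    # Slide a fixed-size window over the image; each crop would go to
    # the CNN classifier, and NMS would merge overlapping detections.
    for y in range(0, image.shape[0] - size[0] + 1, step):
        for x in range(0, image.shape[1] - size[1] + 1, step):
            yield x, y, image[y:y + size[0], x:x + size[1]]
```

Running `sliding_windows` on every level of `pyramid` gives detections at different locations and scales, exactly the combination the abstract describes.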


2021 ◽  
Vol 13 (24) ◽  
pp. 13834
Author(s):  
Guk-Jin Son ◽  
Dong-Hoon Kwak ◽  
Mi-Kyung Park ◽  
Young-Duk Kim ◽  
Hee-Chul Jung

Supervised deep learning-based foreign object detection algorithms are tedious, costly, and time-consuming to develop because they usually require large training datasets and many annotations. These disadvantages often make them unsuitable for food quality evaluation and food manufacturing processes. Nevertheless, deep learning-based foreign object detection is an effective way to overcome the disadvantages of the conventional foreign object detection methods mainly used in food inspection; for example, color sorter machines cannot detect foreign objects whose color is similar to that of the food, and their performance is easily degraded by changes in illuminance. Therefore, to detect foreign objects, we use a deep learning-based foreign object detection algorithm (model). In this paper, we present a synthesis method for efficiently acquiring a deep learning training dataset that can be used for food quality evaluation and food manufacturing processes. Moreover, we perform data augmentation using color jitter on the synthetic dataset and show that this approach significantly improves the illumination invariance of models trained on synthetic data. The F1-score of the model trained on the synthetic almond dataset at 360 lux illumination intensity reached 0.82, similar to the F1-score of the model trained on the real dataset. Moreover, the model trained on the real dataset combined with the synthetic dataset achieved a better F1-score under illumination changes than the model trained on the real dataset alone. In addition, compared with the traditional approach of using color sorter machines to detect foreign objects, the model trained on the synthetic dataset has clear advantages in accuracy and efficiency. These results indicate that the synthetic dataset not only competes with the real dataset but also complements it.
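
Color jitter of the kind used above can be sketched as random brightness and contrast perturbations. The function below is a minimal illustration, not the authors' augmentation pipeline:

```python
import numpy as np

def color_jitter(image, rng, brightness=0.4, contrast=0.4):
    # image: float array scaled to [0, 1]. Random brightness and
    # contrast factors mimic the illumination changes the detector
    # must tolerate on the production line.
    out = image * (1.0 + rng.uniform(-brightness, brightness))
    mean = out.mean()
    out = (out - mean) * (1.0 + rng.uniform(-contrast, contrast)) + mean
    return np.clip(out, 0.0, 1.0)
```

Applied with a fresh random draw per training sample, this exposes the model to many illuminance conditions from a single synthetic image.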


2021 ◽  
Vol 163 (1) ◽  
pp. 23
Author(s):  
Kaiming Cui ◽  
Junjie Liu ◽  
Fabo Feng ◽  
Jifeng Liu

Abstract. Deep learning techniques have been well explored in the transiting exoplanet field; however, previous work mainly focuses on classification and inspection. In this work, we develop a novel detection algorithm based on a well-proven object detection framework from the computer vision field. By training the network on the light curves of confirmed Kepler exoplanets, our model yields about 90% precision and recall for identifying transits with a signal-to-noise ratio higher than 6 (with the confidence threshold set to 0.6). With a slightly lower confidence threshold, recall can reach higher than 95%. We also transfer the trained model to TESS data and obtain similar performance. The results of our algorithm match the intuition of human visual perception, which makes it useful for finding single-transit candidates. Moreover, the parameters of the output bounding boxes can also help to find multiplanet systems. Our network and detection functions are implemented in the Deep-Transit toolkit, an open-source Python package hosted on GitHub and PyPI.
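
The signal-to-noise threshold mentioned above can be made concrete with one common single-event statistic, SNR ≈ (depth/σ)·√N, where N is the number of in-transit points. This is a standard approximation, not necessarily the paper's exact definition:

```python
import math

def transit_snr(depth, sigma, n_in_transit):
    # Single-event statistic: SNR grows linearly with transit depth
    # relative to the per-point noise, and with the square root of
    # the number of in-transit points.
    return depth / sigma * math.sqrt(n_in_transit)
```

For example, a 1% transit depth against 0.5% per-point scatter with nine in-transit points sits right at the SNR = 6 cut the abstract quotes.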

