Estimating Cervical Vertebral Maturation with a Lateral Cephalogram Using the Convolutional Neural Network

2021 ◽  
Vol 10 (22) ◽  
pp. 5400
Author(s):  
Eun-Gyeong Kim ◽  
Il-Seok Oh ◽  
Jeong-Eun So ◽  
Junhyeok Kang ◽  
Van Nhat Thang Le ◽  
...  

Recently, the estimation of bone maturation using deep learning has been actively studied. However, most studies have considered hand–wrist radiographs, while only a few have focused on estimating cervical vertebral maturation (CVM) from lateral cephalograms. This study proposes deep learning models for estimating CVM from lateral cephalograms. As the second, third, and fourth cervical vertebral regions (denoted as C2, C3, and C4, respectively) are considerably smaller than the whole image, we propose a stepwise segmentation-based model that focuses on the C2–C4 regions. We propose three convolutional neural network-based classification models: a one-step model with only CVM classification, a two-step model with region of interest (ROI) detection and CVM classification, and a three-step model with ROI detection, cervical segmentation, and CVM classification. Our dataset contains 600 lateral cephalogram images, comprising six classes with 100 images each. The three-step segmentation-based model achieved the best accuracy (62.5%), outperforming the models without segmentation.
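To make the final step concrete, the following is a minimal sketch (not the authors' published code) of a CNN classifier that maps an already-detected and segmented C2–C4 patch to one of the six CVM stages; the patch size and layer widths are assumptions.

```python
# Minimal sketch of the CVM-classification step, assuming the C2-C4 region
# has already been detected and segmented by the upstream steps.
import torch
import torch.nn as nn

class CVMClassifier(nn.Module):
    """Small CNN mapping a cropped/segmented C2-C4 patch to one of 6 CVM stages."""
    def __init__(self, num_classes: int = 6):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        x = self.features(x)                  # (B, 64, 1, 1)
        return self.classifier(x.flatten(1))  # (B, 6) stage logits

# Hypothetical usage: a batch of 128x128 grayscale patches cropped around C2-C4.
patches = torch.randn(4, 1, 128, 128)
logits = CVMClassifier()(patches)
stages = logits.argmax(dim=1)                 # predicted CVM stage per patch
```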

Author(s):  
Dima M. Alalharith ◽  
Hajar M. Alharthi ◽  
Wejdan M. Alghamdi ◽  
Yasmine M. Alsenbel ◽  
Nida Aslam ◽  
...  

Computer-based technologies play a central role in dentistry, as they provide many methods for diagnosing and detecting various diseases, such as periodontitis. The current study aimed to develop and evaluate state-of-the-art object detection and recognition techniques and deep learning algorithms for the automatic detection of periodontal disease in orthodontic patients using intraoral images. In this study, a total of 134 intraoral images were divided into a training dataset (n = 107 [80%]) and a test dataset (n = 27 [20%]). Two Faster Region-based Convolutional Neural Network (R-CNN) models using a ResNet-50 Convolutional Neural Network (CNN) were developed. The first model detects the teeth to locate the region of interest (ROI), while the second model detects gingival inflammation. The detection accuracy, precision, recall, and mean average precision (mAP) were calculated to verify the significance of the proposed models. The teeth detection model achieved an accuracy, precision, recall, and mAP of 100%, 100%, 51.85%, and 100%, respectively. The inflammation detection model achieved an accuracy, precision, recall, and mAP of 77.12%, 88.02%, 41.75%, and 68.19%, respectively. This study demonstrates the viability of deep learning models for the detection and diagnosis of gingivitis in intraoral images, highlighting their potential in dentistry to reduce the severity of periodontal disease through preemptive, non-invasive diagnosis.
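As an illustration only, the sketch below builds two Faster R-CNN detectors with a ResNet-50 backbone using torchvision; the FPN variant, pretrained weights, and class definitions are assumptions rather than the study's exact configuration.

```python
# Hedged sketch of a two-detector Faster R-CNN set-up: one model for teeth,
# one for gingival inflammation (class lists are assumed, not from the paper).
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_detector(num_classes: int):
    # num_classes includes the background class, e.g. 2 for "tooth" + background.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

teeth_model = build_detector(num_classes=2)          # tooth vs. background
inflammation_model = build_detector(num_classes=2)   # inflamed gingiva vs. background

teeth_model.eval()
with torch.no_grad():
    preds = teeth_model([torch.rand(3, 480, 640)])   # list of dicts: boxes, labels, scores
print(preds[0]["boxes"].shape)
```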


2020 ◽  
Vol 64 (2) ◽  
pp. 20507-1-20507-10 ◽  
Author(s):  
Hee-Jin Yu ◽  
Chang-Hwan Son ◽  
Dong Hyuk Lee

Traditional approaches to the identification of leaf diseases rely on handcrafted features such as colors and textures for feature extraction, and may therefore have limitations in extracting abundant and discriminative features. Deep learning approaches have recently been introduced to overcome these shortcomings, but they typically reuse existing models such as VGG and ResNet, which leaves room for improvement: no spatial attention mechanism is used to predict the background and spot areas (i.e., local areas with leaf diseases), limiting the discriminative power of the learned features. Therefore, a new deep learning architecture, hereafter referred to as the region-of-interest-aware deep convolutional neural network (ROI-aware DCNN), is proposed to make deep features more discriminative and increase classification performance. The primary idea is that leaf disease symptoms appear within the leaf area, whereas the background contains no useful information about leaf diseases. To realize this, two subnetworks are designed: an ROI subnetwork that separates the background, leaf areas, and spot areas in the feature map to provide more discriminative features, and a classification subnetwork that performs the final classification. To train the ROI-aware DCNN, the ROI subnetwork is first trained on a new image set with ground-truth labels dividing the background, leaf area, and spot area. Subsequently, the entire network is trained in an end-to-end manner, with the ROI subnetwork connected to the classification subnetwork through a concatenation layer. The experimental results confirm that the proposed ROI-aware DCNN increases the discriminative power by predicting which areas of the feature map are more important for leaf disease identification. The results show that the proposed method surpasses conventional state-of-the-art methods such as VGG, ResNet, SqueezeNet, the bilinear model, and multiscale-based deep feature extraction and pooling.
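The two-subnetwork design can be sketched roughly as follows; this is a minimal PyTorch illustration under assumed layer sizes and class counts, not the paper's implementation.

```python
# Sketch of the ROI-aware idea: an ROI branch predicts background/leaf/spot maps,
# which are concatenated with backbone features before the classification branch.
import torch
import torch.nn as nn

class ROIAwareDCNN(nn.Module):
    def __init__(self, num_diseases: int = 10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        # ROI subnetwork: per-pixel scores for background / leaf area / spot area.
        self.roi_head = nn.Conv2d(64, 3, kernel_size=1)
        # Classification subnetwork operates on backbone features + ROI maps.
        self.cls_head = nn.Sequential(
            nn.Conv2d(64 + 3, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, num_diseases),
        )

    def forward(self, x):
        feats = self.backbone(x)
        roi_map = self.roi_head(feats)               # trained first on ground-truth masks
        fused = torch.cat([feats, roi_map], dim=1)   # the concatenation layer
        return self.cls_head(fused), roi_map

logits, roi_map = ROIAwareDCNN()(torch.randn(2, 3, 224, 224))
```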


Sensors ◽  
2021 ◽  
Vol 21 (4) ◽  
pp. 1033
Author(s):  
Qiaodi Wen ◽  
Ziqi Luo ◽  
Ruitao Chen ◽  
Yifan Yang ◽  
Guofa Li

By detecting defect locations in high-resolution insulator images collected by unmanned aerial vehicles (UAVs) in various environments, impending power failures can be detected in a timely manner and the resulting economic losses reduced. However, the accuracy of existing detection methods is greatly limited by complex background interference and the difficulty of detecting small targets. To address this problem, two deep learning methods based on Faster R-CNN (faster region-based convolutional neural network) are proposed in this paper: Exact R-CNN (exact region-based convolutional neural network) and CME-CNN (cascade mask extraction and exact region-based convolutional neural network). First, we propose Exact R-CNN, which builds on a series of advanced techniques including FPN (feature pyramid network), cascade regression, and GIoU (generalized intersection over union). RoI Align (region of interest align) replaces RoI pooling (region of interest pooling) to address the misalignment problem, and depthwise separable convolutions and linear bottlenecks are introduced to reduce the computational burden. Second, we propose a new pipeline, CME-CNN, to further improve insulator defect detection: an encoder-decoder mask extraction network first generates an insulator mask image to eliminate the complex background, and Exact R-CNN then detects the insulator defects. The experimental results show that the proposed method effectively detects insulator defects, with accuracy better than that of the mainstream target detection algorithms examined.
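Of the techniques listed, GIoU is easy to illustrate in isolation; the following is a self-contained sketch of the GIoU measure for axis-aligned boxes, independent of the authors' detection pipeline.

```python
# GIoU for matched box pairs in (x1, y1, x2, y2) format.
import torch

def generalized_iou(a: torch.Tensor, b: torch.Tensor) -> torch.Tensor:
    """GIoU between matched box pairs a and b, both shaped (N, 4)."""
    area_a = (a[:, 2] - a[:, 0]) * (a[:, 3] - a[:, 1])
    area_b = (b[:, 2] - b[:, 0]) * (b[:, 3] - b[:, 1])
    # Intersection and union
    lt = torch.max(a[:, :2], b[:, :2])
    rb = torch.min(a[:, 2:], b[:, 2:])
    wh = (rb - lt).clamp(min=0)
    inter = wh[:, 0] * wh[:, 1]
    union = area_a + area_b - inter
    iou = inter / union.clamp(min=1e-7)
    # Smallest enclosing box
    lt_c = torch.min(a[:, :2], b[:, :2])
    rb_c = torch.max(a[:, 2:], b[:, 2:])
    wh_c = (rb_c - lt_c).clamp(min=0)
    area_c = wh_c[:, 0] * wh_c[:, 1]
    return iou - (area_c - union) / area_c.clamp(min=1e-7)

# GIoU loss = 1 - GIoU; unlike plain IoU loss, it still gives a useful gradient
# when the predicted and ground-truth boxes do not overlap.
pred = torch.tensor([[10., 10., 50., 50.]])
gt = torch.tensor([[20., 20., 60., 60.]])
loss = 1.0 - generalized_iou(pred, gt)
```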


2020 ◽  
Vol 20 (1) ◽  
pp. 223-232 ◽  
Author(s):  
Jinkyu Ryu ◽  
Dongkurl Kwak

Recently, cases of large-scale fires, such as those at the Jecheon Sports Center in 2017 and Miryang Sejong Hospital in 2018, have been increasing. More advanced techniques than existing approaches are needed to detect such fires earlier and prevent these situations. In this study, a procedure is presented for detecting fire within a region of interest in an image, using image pre-processing followed by a deep-learning-based convolutional neural network. Training on a haze dataset is included in the process so that indoor haze smoke, which is difficult to recognize using conventional methods, is detected along with flames and smoke. The results indicated that fires in images can be identified with an accuracy of 92.3% and a precision of 93.5%.
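A rough sketch of this kind of pipeline is shown below: simple pre-processing of an image (or an ROI crop) followed by a small CNN classifier. The class set (flame, smoke, haze, background) and the network sizes are assumptions, not the study's exact design.

```python
# Hedged sketch: pre-process an ROI crop, then classify it into assumed classes.
import torch
import torch.nn as nn
from torchvision import transforms

preprocess = transforms.Compose([
    transforms.ToTensor(),
    transforms.Resize((224, 224)),
    transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

classifier = nn.Sequential(
    nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
    nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
    nn.Flatten(), nn.Linear(64, 4),   # flame, smoke, haze, background (assumed labels)
)

# Hypothetical usage on a PIL image crop of the region of interest:
# from PIL import Image
# roi = Image.open("frame_roi.jpg")
# logits = classifier(preprocess(roi).unsqueeze(0))
```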


2019 ◽  
Vol 43 (3) ◽  
pp. 402-411
Author(s):  
A.V. Mingalev ◽  
A.V. Belov ◽  
I.M. Gabdullin ◽  
R.R. Agafonova ◽  
S.N. Shusharin

The paper presents a comparative analysis of several methods for recognizing the position of a test object in a thermal image when setting and testing the characteristics of thermal imaging channels in an automated mode. We consider image recognition methods based on correlation image comparison, the Viola-Jones method, the LeNet and GoogLeNet (Inception v1) classification convolutional neural networks, and a deep-learning-based Single-Shot Multibox Detector (SSD) convolutional neural network with a VGG16 backbone. The best performance is achieved by the deep-learning-based VGG16-type network. The main advantages of this method are robustness to variations in test-object size, high accuracy and recall, and no need for additional methods of RoI (region of interest) localization.
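Of the baselines compared, correlation image comparison is the simplest to illustrate; the sketch below performs normalized cross-correlation template matching with OpenCV, with placeholder file names standing in for the thermal frame and the test-object template.

```python
# Normalized cross-correlation template matching (one of the baseline methods).
import cv2

thermal = cv2.imread("thermal_frame.png", cv2.IMREAD_GRAYSCALE)
template = cv2.imread("test_object_template.png", cv2.IMREAD_GRAYSCALE)

# Slide the template over the image and take the location of maximum correlation.
response = cv2.matchTemplate(thermal, template, cv2.TM_CCOEFF_NORMED)
_, max_val, _, max_loc = cv2.minMaxLoc(response)

h, w = template.shape
top_left = max_loc
bottom_right = (top_left[0] + w, top_left[1] + h)
print(f"best match at {top_left}-{bottom_right}, score={max_val:.3f}")
```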


2021 ◽  
Vol 13 (12) ◽  
pp. 2331
Author(s):  
Mengying Cao ◽  
Ying Sun ◽  
Xin Jiang ◽  
Ziming Li ◽  
Qinchuan Xin

Vegetation phenology plays a key role in influencing ecosystem processes and biosphere-atmosphere feedbacks. Digital cameras such as those in the PhenoCam network monitor vegetation canopies in near real time and provide continuous images that record phenological and environmental changes. There is a need to develop methods for automated and effective detection of vegetation dynamics from PhenoCam images. Here we developed a method to predict leaf phenology of deciduous broadleaf forests from individual PhenoCam images using deep learning approaches. We tested four convolutional neural network regression (CNNR) networks on their ability to predict vegetation growing dates from PhenoCam images at 56 sites in North America. In the one-site experiment, the predicted dates after leaf-out events agree well with the observed data, with a coefficient of determination (R2) of nearly 0.999, a root mean square error (RMSE) of up to 3.7 days, and a mean absolute error (MAE) of up to 2.1 days. The method achieved lower accuracy in the all-site experiment, with an R2 of 0.843, an RMSE of 25.2 days, and an MAE of 9.3 days. Model accuracy increased when the deep networks used region-of-interest images rather than entire images as inputs. Compared to existing methods that rely on time series of PhenoCam images to study leaf phenology, the deep learning method is a feasible solution for identifying leaf phenology of deciduous broadleaf forests from individual PhenoCam images.
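The CNNR idea can be sketched as a standard backbone with a single regression output trained with a mean-squared-error loss; the backbone choice (ResNet-18) and training details below are assumptions, not the authors' four networks.

```python
# Hedged sketch of CNN regression from a single image to a continuous phenological date.
import torch
import torch.nn as nn
from torchvision import models

backbone = models.resnet18(weights=None)
backbone.fc = nn.Linear(backbone.fc.in_features, 1)   # single continuous output

criterion = nn.MSELoss()
optimizer = torch.optim.Adam(backbone.parameters(), lr=1e-4)

# One toy training step on random tensors standing in for (ROI-cropped) PhenoCam images.
images = torch.randn(8, 3, 224, 224)
days = torch.randn(8, 1)             # target dates, e.g. normalized days since leaf-out
optimizer.zero_grad()
loss = criterion(backbone(images), days)
loss.backward()
optimizer.step()
```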


2019 ◽  
Author(s):  
Seoin Back ◽  
Junwoong Yoon ◽  
Nianhan Tian ◽  
Wen Zhong ◽  
Kevin Tran ◽  
...  

We present an application of a deep-learning convolutional neural network to atomic surface structures, using atomic and Voronoi-polyhedra-based neighbor information to predict adsorbate binding energies for applications in catalysis.


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Sangmin Jeon ◽  
Kyungmin Clara Lee

Objective: The rapid development of artificial intelligence technologies for medical imaging has recently enabled automatic identification of anatomical landmarks on radiographs. The purpose of this study was to compare the results of an automatic cephalometric analysis using a convolutional neural network with those obtained by a conventional cephalometric approach. Materials and methods: Cephalometric measurements of lateral cephalograms from 35 patients were obtained using an automatic program and a conventional program. Fifteen skeletal, nine dental, and two soft tissue cephalometric measurements obtained by the two methods were compared using the paired t test and Bland-Altman plots. Results: The paired t test showed that the saddle angle and the linear measurements of the maxillary incisor to the NA line and the mandibular incisor to the NB line differed significantly between the automatic and conventional analyses. All measurements were within the limits of agreement on the Bland-Altman plots. The limits of agreement were wider for dental measurements than for skeletal measurements. Conclusions: Automatic cephalometric analysis based on a convolutional neural network may offer clinically acceptable diagnostic performance. Careful consideration and additional manual adjustment of dental measurements involving tooth structures are needed for higher accuracy and better performance.
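The two statistical comparisons described above can be reproduced in a few lines; the sketch below uses made-up example values for one cephalometric measurement.

```python
# Paired t test and Bland-Altman limits of agreement between two measurement methods.
import numpy as np
from scipy import stats

automatic = np.array([81.2, 79.5, 83.1, 80.0, 82.4])       # e.g., an angle in degrees (example values)
conventional = np.array([80.8, 79.9, 82.5, 80.6, 82.0])

t_stat, p_value = stats.ttest_rel(automatic, conventional)  # paired t test

diff = automatic - conventional
bias = diff.mean()
loa = 1.96 * diff.std(ddof=1)                               # Bland-Altman limits of agreement
print(f"p = {p_value:.3f}, bias = {bias:.2f}, LoA = [{bias - loa:.2f}, {bias + loa:.2f}]")
```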

