Comparison of cephalometric measurements between conventional and automatic cephalometric analysis using convolutional neural network

2021 ◽  
Vol 22 (1) ◽  
Author(s):  
Sangmin Jeon ◽  
Kyungmin Clara Lee

Abstract Objective The rapid development of artificial intelligence technologies for medical imaging has recently enabled automatic identification of anatomical landmarks on radiographs. The purpose of this study was to compare the results of an automatic cephalometric analysis using a convolutional neural network with those obtained by a conventional cephalometric approach. Materials and methods Cephalometric measurements of lateral cephalograms from 35 patients were obtained using an automatic program and a conventional program. Fifteen skeletal, nine dental, and two soft tissue cephalometric measurements obtained by the two methods were compared using paired t tests and Bland-Altman plots. Results The paired t test comparison between the automatic and conventional analyses showed statistically significant differences for the saddle angle and the linear measurements of the maxillary incisor to the NA line and the mandibular incisor to the NB line. All measurements fell within the limits of agreement on the Bland-Altman plots, although the limits of agreement were wider for dental measurements than for skeletal measurements. Conclusions Automatic cephalometric analysis based on a convolutional neural network may offer clinically acceptable diagnostic performance. Careful consideration and additional manual adjustment of dental measurements involving tooth structures are needed for higher accuracy and better performance.
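The agreement analysis described above can be sketched in a few lines. This is a minimal NumPy illustration with made-up angle values (not the study's data), computing the paired t statistic and the 95% Bland-Altman limits of agreement:

```python
import numpy as np

# Hypothetical paired measurements (degrees) of the same angle from two
# methods: a conventional program and an automatic CNN-based program.
conventional = np.array([82.1, 79.5, 84.0, 80.2, 81.7, 78.9, 83.3, 80.8])
automatic    = np.array([82.4, 79.1, 84.5, 80.0, 82.1, 79.3, 83.0, 81.2])

d = automatic - conventional
bias = d.mean()
sd = d.std(ddof=1)
loa_low, loa_high = bias - 1.96 * sd, bias + 1.96 * sd  # 95% limits of agreement

# Paired t statistic: mean difference relative to its standard error.
t = bias / (sd / np.sqrt(len(d)))

within_loa = np.all((d >= loa_low) & (d <= loa_high))
print(round(bias, 3), round(t, 3), within_loa)
```

A |t| below the critical value indicates no significant systematic difference, while the limits of agreement describe how far individual paired readings may disagree in clinical use.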

Entropy ◽  
2021 ◽  
Vol 23 (7) ◽  
pp. 816
Author(s):  
Pingping Liu ◽  
Xiaokang Yang ◽  
Baixin Jin ◽  
Qiuzhan Zhou

Diabetic retinopathy (DR) is a common complication of diabetes mellitus (DM), and diagnosing DR in its early stages is essential for treatment. With the rapid development of convolutional neural networks for image processing, deep learning methods have achieved great success in medical imaging, and various lesion detection systems have been proposed for fundus images. At present, however, image classification of diabetic retinopathy tends to ignore the fine-grained properties of diseased images, and most retinopathy data sets suffer from severely imbalanced class distributions, which greatly limits the network's ability to classify lesions. We propose a new non-homologous bilinear pooling convolutional neural network model and combine it with an attention mechanism to further improve the network's ability to extract image-specific features. The experimental results show that, compared with the most popular fundus image classification models, the proposed model can greatly improve prediction accuracy while maintaining computational efficiency.
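The core operation can be sketched compactly. Below is a hedged NumPy illustration of attention-weighted bilinear pooling of two non-homologous feature streams; the layer shapes, the softmax attention, and the signed-square-root normalization are common choices and only assumptions about the paper's exact design:

```python
import numpy as np

def attention_bilinear_pool(fa, fb, attn):
    """Non-homologous bilinear pooling of two feature maps.

    fa: (H*W, Ca) features from stream A; fb: (H*W, Cb) from stream B;
    attn: (H*W,) spatial attention weights (sum to 1).
    Returns an L2-normalised bilinear descriptor of length Ca*Cb.
    """
    # Attention-weighted sum of per-location outer products.
    z = np.einsum('n,ni,nj->ij', attn, fa, fb).ravel()
    z = np.sign(z) * np.sqrt(np.abs(z))        # signed square-root
    return z / (np.linalg.norm(z) + 1e-12)     # L2 normalisation

rng = np.random.default_rng(0)
fa = rng.standard_normal((16, 8))   # 4x4 spatial grid, 8 channels
fb = rng.standard_normal((16, 6))   # same grid, 6 channels
logits = rng.standard_normal(16)
attn = np.exp(logits) / np.exp(logits).sum()  # softmax attention

desc = attention_bilinear_pool(fa, fb, attn)
print(desc.shape)  # descriptor of length 8 * 6 = 48
```

The pairwise channel interactions in the descriptor are what let bilinear models capture the fine-grained lesion cues that plain pooling averages away.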


Author(s):  
Satoru Tsuiki ◽  
Takuya Nagaoka ◽  
Tatsuya Fukuda ◽  
Yuki Sakamoto ◽  
Fernanda R. Almeida ◽  
...  

Abstract Purpose In 2-dimensional lateral cephalometric radiographs, patients with severe obstructive sleep apnea (OSA) exhibit a more crowded oropharynx than non-OSA individuals. We tested the hypothesis that machine learning, an application of artificial intelligence (AI), could be used to detect patients with severe OSA based on 2-dimensional images. Methods A deep convolutional neural network was developed (n = 1258; 90%) and tested (n = 131; 10%) using data from 1389 (100%) lateral cephalometric radiographs obtained from individuals diagnosed with severe OSA (n = 867; apnea hypopnea index > 30 events/h sleep) or non-OSA (n = 522; apnea hypopnea index < 5 events/h sleep) at a single center for sleep disorders. Three kinds of data sets were prepared by changing the area of interest within a single image: the original image without any modification (full image), an image containing the facial profile, upper airway, and craniofacial soft/hard tissues (main region), and an image containing part of the occipital region (head only). A radiologist also performed a conventional manual cephalometric analysis of the full image for comparison. Results The sensitivity/specificity was 0.87/0.82 for the full image, 0.88/0.75 for the main region, 0.71/0.63 for head only, and 0.54/0.80 for the manual analysis. The area under the receiver-operating characteristic curve was highest for the main region (0.92), followed by the full image (0.89), manual cephalometric analysis (0.75), and head only (0.70). Conclusions A deep convolutional neural network identified individuals with severe OSA with high accuracy. These results encourage further research on AI-based image analysis for OSA triage.
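The metrics reported above can be reproduced mechanically. Here is a small NumPy sketch computing sensitivity, specificity, and ROC-AUC for a binary screen; the labels and scores are toy values, not the study's data, and ties in scores are not handled:

```python
import numpy as np

def sens_spec_auc(y_true, scores, threshold=0.5):
    """Sensitivity, specificity and ROC-AUC for a binary screen
    (1 = severe OSA, 0 = non-OSA). Illustrative data only."""
    pred = scores >= threshold
    tp = np.sum(pred & (y_true == 1)); fn = np.sum(~pred & (y_true == 1))
    tn = np.sum(~pred & (y_true == 0)); fp = np.sum(pred & (y_true == 0))
    sens = tp / (tp + fn)
    spec = tn / (tn + fp)
    # AUC via the rank-sum (Mann-Whitney U) formulation.
    order = np.argsort(scores)
    ranks = np.empty(len(scores), dtype=float)
    ranks[order] = np.arange(1, len(scores) + 1)
    n_pos = np.sum(y_true == 1); n_neg = len(y_true) - n_pos
    auc = (ranks[y_true == 1].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)
    return sens, spec, auc

y = np.array([1, 1, 1, 1, 0, 0, 0, 0])
s = np.array([0.9, 0.8, 0.7, 0.4, 0.6, 0.3, 0.2, 0.1])
print(sens_spec_auc(y, s))  # (0.75, 0.75, 0.9375)
```

Sensitivity and specificity depend on the chosen threshold, which is why the AUC, a threshold-free summary, is the fairer way to rank the four approaches compared in the study.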


2020 ◽  
Vol 9 (2) ◽  
pp. 74
Author(s):  
Eric Hsueh-Chan Lu ◽  
Jing-Mei Ciou

With the rapid development of surveying and spatial information technologies, positioning has received more and more attention. In outdoor environments, people can easily obtain positioning services through global navigation satellite systems (GNSS). In indoor environments, the GNSS signal is often lost, while other positioning approaches, such as dead reckoning and wireless signals, suffer from accumulated errors and signal interference. Therefore, this research uses images to realize a positioning service. The main concept is to build a model linking indoor field images to their coordinate information and to estimate position by image feature matching. Based on the architecture of PoseNet, images of various sizes are fed into a 23-layer convolutional neural network to train end-to-end location identification, and the three-dimensional position vector of the camera is regressed. The experimental data are taken from an underground parking lot and the Palace Museum. Preliminary experimental results show that the proposed method can effectively improve indoor positioning accuracy by about 20% to 30%. In addition, this paper also discusses other architectures, field sizes, camera parameters, and error corrections for this neural network system. Preliminary results show that the proposed angle error correction method can further improve positioning accuracy by about 20%.
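PoseNet-style networks are typically trained with a loss that combines position error and a weighted orientation error. The sketch below is a hedged illustration of that objective; the weighting factor beta, the quaternion representation, and the toy values are assumptions, not details from this paper:

```python
import numpy as np

def pose_loss(xyz_pred, q_pred, xyz_true, q_true, beta=250.0):
    """PoseNet-style loss: position error plus weighted orientation error.
    beta balances metres against unit-quaternion distance (a hyperparameter)."""
    q_pred = q_pred / np.linalg.norm(q_pred)  # normalise predicted quaternion
    return (np.linalg.norm(xyz_true - xyz_pred)
            + beta * np.linalg.norm(q_true - q_pred))

# Toy example: 0.5 m position error, identical orientation.
loss = pose_loss(np.array([1.0, 2.0, 3.5]), np.array([1.0, 0.0, 0.0, 0.0]),
                 np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.0, 0.0, 0.0]))
print(loss)  # 0.5
```

When only the three-dimensional position vector is regressed, as in this study, the loss reduces to the first term.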


Author(s):  
Ruimin Ke ◽  
Wan Li ◽  
Zhiyong Cui ◽  
Yinhai Wang

Traffic speed prediction is a critically important component of intelligent transportation systems. Recently, with the rapid development of deep learning and transportation data science, a growing body of new traffic speed prediction models has been designed, achieving high accuracy and large-scale prediction. However, existing studies have two major limitations. First, they predict aggregated traffic speed rather than lane-level traffic speed; second, most studies ignore the impact of other traffic flow parameters on speed prediction. To address these issues, the authors propose a two-stream multi-channel convolutional neural network (TM-CNN) model for multi-lane traffic speed prediction that considers the impact of traffic volume. In this model, the authors first introduce a new data conversion method that converts raw traffic speed data and volume data into spatial-temporal multi-channel matrices. Then the authors carefully design a two-stream deep neural network to effectively learn the features and correlations between individual lanes, across the spatial-temporal dimensions, and between speed and volume. Accordingly, a new loss function that considers the volume impact in speed prediction is developed. A case study using 1 year of data validates the TM-CNN model and demonstrates its superiority. This paper contributes to two research areas: (1) traffic speed prediction, and (2) multi-lane traffic flow study.
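The data conversion step could look like the following NumPy sketch, which windows lane-level speed and volume series into spatial-temporal multi-channel samples. The array layout and window length are illustrative assumptions, not the paper's exact specification:

```python
import numpy as np

def to_multichannel(speed, volume, window):
    """Convert raw lane-level series into spatial-temporal multi-channel
    samples. speed, volume: (T, L) arrays (time x lanes). Returns an array
    of shape (num_samples, window, L, 2): channel 0 = speed, 1 = volume."""
    T = speed.shape[0]
    samples = [np.stack([speed[t:t + window], volume[t:t + window]], axis=-1)
               for t in range(T - window + 1)]
    return np.stack(samples)

T, L = 12, 4                       # 12 time steps, 4 lanes (toy sizes)
rng = np.random.default_rng(1)
speed = rng.uniform(20, 70, (T, L))    # mph
volume = rng.uniform(0, 40, (T, L))    # vehicles per interval
x = to_multichannel(speed, volume, window=6)
print(x.shape)  # (7, 6, 4, 2)
```

Stacking speed and volume as channels of the same matrix is what lets a convolutional model learn cross-lane and speed-volume correlations jointly rather than from separate inputs.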


2019 ◽  
Vol 11 (9) ◽  
pp. 1006 ◽  
Author(s):  
Quanlong Feng ◽  
Jianyu Yang ◽  
Dehai Zhu ◽  
Jiantao Liu ◽  
Hao Guo ◽  
...  

Coastal land cover classification is a significant yet challenging task in remote sensing because of the complex and fragmented nature of coastal landscapes. However, the availability of multitemporal and multisensor remote sensing data provides opportunities to improve classification accuracy. Meanwhile, the rapid development of deep learning has achieved astonishing results in computer vision tasks and has also become a popular topic in the field of remote sensing. Nevertheless, designing an effective and concise deep learning model for coastal land cover classification remains problematic. To tackle this issue, we propose a multibranch convolutional neural network (MBCNN) for the fusion of multitemporal and multisensor Sentinel data to improve coastal land cover classification accuracy. The proposed model leverages a series of deformable convolutional neural networks to extract representative features from each single-source dataset. Extracted features are aggregated through an adaptive feature fusion module to predict final land cover categories. Experimental results indicate that the proposed MBCNN performs well, with an overall accuracy of 93.78% and a Kappa coefficient of 0.9297. Inclusion of multitemporal data improves accuracy by an average of 6.85%, while multisensor data contribute a further 3.24% increase. Additionally, the feature fusion module increases accuracy by about 2% compared with the feature-stacking method. These results demonstrate that the proposed method can effectively mine and fuse multitemporal and multisource Sentinel data to improve coastal land cover classification accuracy.
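The difference between adaptive fusion and plain feature stacking can be sketched as a learned weighted aggregation over branches. This NumPy illustration is only a simplified stand-in for the paper's fusion module, whose weights would be learned end-to-end; the branch names and logits are hypothetical:

```python
import numpy as np

def adaptive_fuse(branch_feats, gate_logits):
    """Fuse per-branch feature vectors with softmax weights, instead of
    concatenating them. branch_feats: (B, D); gate_logits: (B,)."""
    w = np.exp(gate_logits - gate_logits.max())
    w = w / w.sum()                       # softmax over branches
    return w @ branch_feats               # weighted sum -> (D,)

feats = np.array([[1.0, 0.0, 2.0],   # e.g. a Sentinel-1 branch (hypothetical)
                  [0.0, 2.0, 2.0],   # Sentinel-2 branch, date 1
                  [3.0, 1.0, 2.0]])  # Sentinel-2 branch, date 2
fused = adaptive_fuse(feats, np.array([0.0, 0.0, 0.0]))  # equal logits
print(fused)  # equal weights -> per-dimension mean of the three branches
```

Unlike stacking, the fused vector keeps a fixed dimensionality regardless of how many temporal or sensor branches are added, and the gate can down-weight uninformative sources.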


2013 ◽  
Vol 84 (3) ◽  
pp. 437-442 ◽  
Author(s):  
Cecilia Goracci ◽  
Marco Ferrari

ABSTRACT Objective: To assess the reproducibility of cephalometric measurements performed with software for a tablet, with a program for personal computers (PCs), and manually. Materials and Methods: The pretreatment lateral cephalograms of 20 patients, all acquired with the same digital cephalometer, were collected. Tracings were performed with NemoCeph for Windows (Nemotec), with SmileCeph for iPad (Glace Software), and by hand. Landmark identification was carried out with a mouse-driven cursor in NemoCeph and with a stylus pen on the iPad screen in SmileCeph. Hand tracings were performed on printouts of the cephalograms using a 0.3-mm 2H pencil and a protractor. Cephalometric landmarks and linear and angular measurements were recorded. All tracings were done by the same investigator. To evaluate reproducibility, for each cephalometric measurement the agreement between the value derived from NemoCeph, that given by SmileCeph, and that measured manually was assessed with the intraclass correlation coefficient (ICC). Agreement was rated as low for an ICC ≤ 0.75, and an ICC > 0.75 was considered indicative of good agreement. Differences in measurements between each software program and manual tracing were also statistically evaluated (P < .05). Results: All measurements had ICCs > 0.8, indicative of high agreement among the tracing methods. Relatively lower ICCs occurred for linear measurements related to the occlusal plane and to N perpendicular to the Frankfurt plane. Differences in measurements between either software program and hand tracing were not statistically significant for any of the cephalometric parameters. Conclusion: Tablet-assisted, PC-aided, and manual cephalometric tracings showed good agreement.
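The ICC criterion used above can be sketched numerically. This NumPy example implements a one-way random-effects ICC(1,1); the study does not state which ICC model it used, so this variant and the angle values are illustrative assumptions:

```python
import numpy as np

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). ratings: (n_subjects, k_methods)."""
    n, k = ratings.shape
    grand = ratings.mean()
    subj_means = ratings.mean(axis=1)
    msb = k * ((subj_means - grand) ** 2).sum() / (n - 1)               # between-subjects
    msw = ((ratings - subj_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subjects
    return (msb - msw) / (msb + (k - 1) * msw)

# Toy data: one angle measured on 5 cephalograms by three methods
# (NemoCeph, SmileCeph, hand tracing) -- illustrative values only.
r = np.array([[82.0, 82.2, 81.9],
              [75.5, 75.4, 75.8],
              [88.1, 88.0, 88.3],
              [79.0, 79.3, 79.1],
              [84.6, 84.4, 84.5]])
icc = icc_oneway(r)
print(icc > 0.75)  # True: rated "good agreement" under the study's criterion
```

The ICC rises toward 1 when between-patient variation dominates between-method variation, which is exactly what "the three tracing methods agree" means statistically.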


2021 ◽  
Vol 10 (22) ◽  
pp. 5400
Author(s):  
Eun-Gyeong Kim ◽  
Il-Seok Oh ◽  
Jeong-Eun So ◽  
Junhyeok Kang ◽  
Van Nhat Thang Le ◽  
...  

Recently, the estimation of bone maturation using deep learning has been actively conducted. However, many studies have considered hand–wrist radiographs, while only a few have focused on estimating cervical vertebral maturation (CVM) from lateral cephalograms. This study proposes deep learning models for estimating CVM from lateral cephalograms. As the second, third, and fourth cervical vertebral regions (denoted as C2, C3, and C4, respectively) are considerably smaller than the whole image, we propose a stepwise segmentation-based model that focuses on the C2–C4 regions. We propose three convolutional neural network-based classification models: a one-step model with only CVM classification, a two-step model with region of interest (ROI) detection and CVM classification, and a three-step model with ROI detection, cervical segmentation, and CVM classification. Our dataset contains 600 lateral cephalogram images, comprising six classes with 100 images each. The three-step segmentation-based model achieved the best accuracy (62.5%), outperforming the models without segmentation.
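The motivation for the stepwise design, cropping the image to the detected C2–C4 region before classification so the vertebrae occupy a useful fraction of the input, can be sketched as below. The image, the region location, and the detector box are all hypothetical:

```python
import numpy as np

def crop_roi(image, box):
    """Intermediate step of a stepwise pipeline: crop the detected C2-C4
    region so the classifier sees the vertebrae at a useful scale.
    box = (y0, x0, y1, x1) in pixel coordinates."""
    y0, x0, y1, x1 = box
    return image[y0:y1, x0:x1]

ceph = np.zeros((800, 640), dtype=np.uint8)   # dummy lateral cephalogram
ceph[500:620, 180:260] = 255                  # pretend the vertebrae sit here
roi = crop_roi(ceph, (480, 160, 640, 280))    # hypothetical detector output
print(roi.shape)  # (160, 120): the vertebrae now fill much more of the input
```

In the three-step model, a segmentation stage would further mask the cropped region before the final CVM classifier, which is the configuration that scored best in the study.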


2019 ◽  
Vol 2019 ◽  
pp. 1-13 ◽  
Author(s):  
Feng-Ping An ◽  
Zhi-Wen Liu

With the development of computer vision and image segmentation technology, medical image segmentation and recognition has become an important part of computer-aided diagnosis. Traditional image segmentation methods rely on manual means to extract and select information such as edges, colors, and textures in the image. This not only consumes considerable time and effort but also requires expertise to obtain useful feature information, and it no longer meets the practical requirements of medical image segmentation and recognition. As an efficient image segmentation method, convolutional neural networks (CNNs) have been widely promoted and applied in the field of medical image segmentation. However, CNNs that rely on simple feedforward computation have not kept pace with the rapidly evolving needs of the medical field. Inspired by the feedback mechanism of the human visual cortex, this paper therefore proposes an effective feedback computation model and operational framework and formulates the feedback optimization problem. A new feedback convolutional neural network algorithm based on neuron screening and neuron visual information recovery is constructed, and on this basis a medical image segmentation algorithm based on a feedback-mechanism convolutional neural network is proposed. The basic idea is as follows: the model first obtains an initial segmentation region by classifying pixel-block samples in the medical image; the initial results are then refined by threshold segmentation and morphological methods to obtain accurate segmentation results. Experiments show that the proposed segmentation method achieves not only high segmentation accuracy but also strong adaptive segmentation ability across various medical images. This research provides a new perspective for medical image segmentation: it is a new attempt to explore more advanced intelligent segmentation methods, and it offers technical approaches for the further development and improvement of adaptive medical image segmentation technology.
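The refinement step, thresholding the network's output and cleaning it morphologically, needs no deep learning framework at all. This NumPy sketch uses a 3x3 cross structuring element and a toy probability map as stand-ins for whatever the paper actually used:

```python
import numpy as np

def binary_dilate(mask):
    """3x3 cross dilation via shifted copies (no SciPy dependency)."""
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]; out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]; out[:, :-1] |= mask[:, 1:]
    return out

def binary_erode(mask):
    return ~binary_dilate(~mask)

def refine(prob_map, threshold=0.5):
    """Post-process a CNN probability map: threshold, then a morphological
    opening (erosion followed by dilation) to drop isolated false positives."""
    mask = prob_map > threshold
    return binary_dilate(binary_erode(mask))

p = np.zeros((8, 8))
p[2:6, 2:6] = 0.9   # the true region
p[0, 7] = 0.8       # an isolated false-positive pixel
mask = refine(p)
print(mask[3, 3], mask[0, 7])  # region kept, speckle removed
```

Opening removes structures smaller than the structuring element while largely preserving larger regions, which is why it pairs naturally with pixel-block classifiers whose raw output is speckled.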


2015 ◽  
Vol 5 ◽  
pp. 103-110
Author(s):  
Manish Suresh Agrawal ◽  
Jiwan Asha Manish Agrawal ◽  
Vivek Patni ◽  
Lalita Nanjannawar

Objective To determine the reliability of a Computer Assisted Digital Cephalometric Analysis System (CADCAS), in terms of landmark identification, on the values of cephalometric measurements in comparison with those obtained from the original radiographs. Materials and Methods The study material consisted of twenty-five randomly selected lateral cephalograms, 16 cephalometric points, and 10 angular and 5 linear cephalometric measurements. The landmarks were manually marked on the tracing, and the X- and Y-axis measurements were made with a reference grid. The same tracing was then digitized, and the image loaded into the software (ViewBox 3.1.1) was checked for magnification (against a metal ruler) and distortion. The second part of the study compared the manual method with CADCAS, in which landmarks were digitized directly on screen rather than marked manually on tracing paper. The x- and y-coordinates of the 16 landmarks were measured; means and standard deviations were calculated; and the linear and angular measurements were compared. Statistical Analysis A paired t-test was used to calculate the statistical significance of the differences, and the intraclass reliability coefficient (signifying reproducibility) of each variable was recorded. The observations were tabulated and analyzed using the paired t-test at a P value < 0.05. Results Of the 47 variables examined, 21 showed statistical significance. Direct on-screen digitization (CADCAS) was the quickest and least tedious method, but it was unreliable for linear measurements involving bilateral structures such as Gonion and Articulare. Conclusions Both methods are equally reliable and reproducible. The intraclass reliability coefficients of all variables differed only slightly, which is not clinically significant.


2021 ◽  
Vol 2021 ◽  
pp. 1-11
Author(s):  
Yi Qian

With the advent of the era of big data and the rapid development of deep learning and related technologies, people can use complex neural network models, supported by powerful computing power, to mine and extract key information from massive data. However, this also increases the complexity of heterogeneous networks and greatly increases the difficulty of network maintenance and management. To address the problem of network fault diagnosis, this paper first proposes an improved semisupervised adversarial network fault diagnosis algorithm; the proposed algorithm effectively guarantees the convergence of the generative adversarial network model, makes full use of a large amount of unlabeled data, and achieves good fault diagnosis accuracy. The diagnosis model is then further optimized: the fault classification task is completed by a convolutional neural network, the discriminant function of the network is simplified, and the generative adversarial network is responsible only for generating fault samples. The simulation results show that the fault diagnosis algorithm based on the generative adversarial network and the convolutional neural network achieves good fault diagnosis accuracy and saves the overhead of manually labeling a large number of data samples.
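Semisupervised GAN discriminators commonly score K real classes plus one "generated" class, so labeled, unlabeled, and generated samples each contribute a loss term. The NumPy sketch below shows that standard objective as an illustration of the general technique, not this paper's exact formulation; the probabilities are toy values:

```python
import numpy as np

def d_loss_semisup(p_lab, y, p_unl, p_gen):
    """Semisupervised discriminator objective (sketch). p_* are softmax
    outputs over K fault classes plus a final 'generated' class.
    Labeled real samples should hit their class, unlabeled real samples
    any real class, and generated samples the fake class."""
    K = p_lab.shape[1] - 1
    l_sup = -np.log(p_lab[np.arange(len(y)), y]).mean()
    l_unl = -np.log(p_unl[:, :K].sum(axis=1)).mean()   # "this is real"
    l_gen = -np.log(p_gen[:, K]).mean()                # "this is fake"
    return l_sup + l_unl + l_gen

# Toy probabilities for K = 3 fault classes + 1 generated class.
p_lab = np.array([[0.7, 0.1, 0.1, 0.1]]); y = np.array([0])
p_unl = np.array([[0.3, 0.3, 0.3, 0.1]])
p_gen = np.array([[0.1, 0.1, 0.1, 0.7]])
print(round(d_loss_semisup(p_lab, y, p_unl, p_gen), 3))
```

The middle term is what exploits unlabeled data: the discriminator is rewarded for assigning unlabeled samples to any real class, so unlabeled traffic improves the classifier without manual labeling.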

