An artificial intelligence approach to automatic tooth detection and numbering in panoramic radiographs

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Elif Bilgir ◽  
İbrahim Şevki Bayrakdar ◽  
Özer Çelik ◽  
Kaan Orhan ◽  
Fatma Akkoca ◽  
...  

Abstract Background Panoramic radiography is an imaging method for displaying maxillary and mandibular teeth together with their supporting structures. Panoramic radiography is frequently used in dental imaging due to its relatively low radiation dose, short imaging time, and low burden to the patient. We verified the diagnostic performance of an artificial intelligence (AI) system based on a deep convolutional neural network method to detect and number teeth on panoramic radiographs. Methods The data set included 2482 anonymized panoramic radiographs from adults from the archive of Eskisehir Osmangazi University, Faculty of Dentistry, Department of Oral and Maxillofacial Radiology. A Faster R-CNN Inception v2 model was used to develop an AI algorithm (CranioCatch, Eskisehir, Turkey) to automatically detect and number teeth on panoramic radiographs. Human observation and AI methods were compared on a test data set consisting of 249 panoramic radiographs. True positive, false positive, and false negative rates were calculated for each quadrant of the jaws. The sensitivity, precision, and F-measure values were estimated using a confusion matrix. Results The total numbers of true positive, false positive, and false negative results were 6940, 250, and 320 for all quadrants, respectively. Consequently, the estimated sensitivity, precision, and F-measure were 0.9559, 0.9652, and 0.9606, respectively. Conclusions The deep convolutional neural network system was successful in detecting and numbering teeth. Clinicians can use AI systems to detect and number teeth on panoramic radiographs, which may eventually replace evaluation by human observers and support decision making.
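The headline metrics follow directly from the pooled detection counts; below is a minimal sketch (plain Python, no external libraries) that recomputes sensitivity, precision, and F-measure from the totals reported in the abstract.

```python
# Minimal sketch: recomputing the reported metrics from the pooled
# true positive (TP), false positive (FP), and false negative (FN) counts.
TP, FP, FN = 6940, 250, 320  # totals over all four quadrants, as reported

sensitivity = TP / (TP + FN)   # share of real teeth that were detected
precision = TP / (TP + FP)     # share of detections that were real teeth
f_measure = 2 * precision * sensitivity / (precision + sensitivity)

print(f"sensitivity = {sensitivity:.4f}")  # ~0.9559
print(f"precision   = {precision:.4f}")    # ~0.9652
print(f"F-measure   = {f_measure:.4f}")    # ~0.9606
```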

Molecules ◽  
2019 ◽  
Vol 24 (24) ◽  
pp. 4590
Author(s):  
Jiali Lv ◽  
Jian Wei ◽  
Zhenyu Wang ◽  
Jin Cao

Analysis of mixtures can provide more information than analysis of individual components, and it is important to detect the different compounds in real, complex samples. However, mixtures are often disturbed by impurities and noise, which reduces accuracy, and purification and denoising consume considerable processing time. In this paper, we propose a model based on a convolutional neural network (CNN) that can analyze the chemical peak information in tandem mass spectrometry (MS/MS) data. Compared with traditional analysis methods, the CNN reduces the number of data preprocessing steps. The model can extract features of different compounds and classify multi-label mass spectral data. When dealing with MS data of mixtures based on the Human Metabolome Database (HMDB), the accuracy reaches 98%. Of 600 MS test samples, 451 were fully detected (true positive), 142 were partially found (false positive), and 7 were falsely predicted (true negative). In comparison, the numbers of true positive test samples for support vector machine (SVM) with principal component analysis (PCA), deep neural network (DNN), long short-term memory (LSTM), and XGBoost are 282, 293, 270, and 402, respectively; the numbers of false positive test samples for the four models are 318, 284, 198, and 168; and the numbers of true negative test samples for the four models are 0, 23, 7, 132, and 30. Compared with models proposed in other literature, the accuracy and performance of the CNN improved considerably by separating the different compounds' independent MS/MS data through a three-channel input architecture. In the future, adding MS data from different instruments and more offset MS data will give CNN models stronger universality.
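As a rough illustration of the approach described above, the sketch below defines a small multi-label 1-D CNN over binned MS/MS spectra with a three-channel input. The bin count, channel layout, number of compound labels, and layer sizes are assumptions for illustration, not the paper's actual configuration.

```python
# Hedged sketch: multi-label 1-D CNN over binned MS/MS spectra with a
# three-channel input. Shapes and label count are hypothetical.
import torch
import torch.nn as nn

class MultiLabelSpectrumCNN(nn.Module):
    def __init__(self, n_bins=2000, n_labels=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(3, 16, kernel_size=7, padding=3),  # 3 input channels
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(16, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
        )
        self.classifier = nn.Linear(32 * (n_bins // 4), n_labels)

    def forward(self, x):              # x: (batch, 3, n_bins)
        h = self.features(x).flatten(1)
        return self.classifier(h)      # one logit per compound label

model = MultiLabelSpectrumCNN()
logits = model(torch.randn(4, 3, 2000))
# Multi-label training applies a sigmoid per label:
loss = nn.BCEWithLogitsLoss()(logits, torch.randint(0, 2, (4, 10)).float())
```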


2022 ◽  
Vol 2022 ◽  
pp. 1-7
Author(s):  
Ibrahim S. Bayrakdar ◽  
Kaan Orhan ◽  
Özer Çelik ◽  
Elif Bilgir ◽  
Hande Sağlam ◽  
...  

The purpose of this paper was to assess the success of an artificial intelligence (AI) algorithm built on a deep convolutional neural network (D-CNN) model for the segmentation of apical lesions on dental panoramic radiographs. A total of 470 anonymized panoramic radiographs were used to develop the D-CNN AI model, based on the U-Net algorithm (CranioCatch, Eskisehir, Turkey), for the segmentation of apical lesions. The radiographs were obtained from the Radiology Archive of the Department of Oral and Maxillofacial Radiology of the Faculty of Dentistry of Eskisehir Osmangazi University. A U-Net implemented in PyTorch (version 1.4.0) was used for the segmentation of apical lesions. In the test data set, the AI model segmented 63 periapical lesions on 47 panoramic radiographs. The sensitivity, precision, and F1-score for segmentation of periapical lesions at a 70% IoU threshold were 0.92, 0.84, and 0.88, respectively. AI systems have the potential to overcome clinical problems, and AI may facilitate the assessment of periapical pathology on panoramic radiographs.
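A segmented lesion is typically counted as a true positive when its overlap with the ground-truth mask reaches the IoU threshold; the sketch below shows a minimal IoU check at the 70% threshold mentioned in the abstract, using hypothetical binary masks rather than the study's data.

```python
# Minimal sketch of the IoU criterion for scoring a segmented lesion.
import numpy as np

def iou(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Intersection-over-union between two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    union = np.logical_or(pred, gt).sum()
    if union == 0:
        return 0.0
    return np.logical_and(pred, gt).sum() / union

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1  # hypothetical prediction
gt = np.zeros((64, 64), dtype=np.uint8);   gt[12:42, 12:42] = 1    # hypothetical annotation
print(iou(pred, gt) >= 0.70)  # counts as a true positive at the 70% IoU threshold
```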


PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0255605
Author(s):  
Ching-Juei Yang ◽  
Chien-Kuo Wang ◽  
Yu-Hua Dean Fang ◽  
Jing-Yao Wang ◽  
Fong-Chin Su ◽  
...  

The aim of the study was to use a previously proposed mask region-based convolutional neural network (Mask R-CNN) for automatic detection and segmentation of abnormal liver densities on hepatocellular carcinoma (HCC) computed tomography (CT) datasets from a radiological perspective. Training and testing datasets were acquired retrospectively from two hospitals in Taiwan. The training dataset contained 10,130 images of liver tumor densities with 11,258 regions of interest (ROIs). The positive testing dataset contained 1,833 images of liver tumor densities with 1,874 ROIs, and the negative testing data comprised 20,283 images without abnormal densities in the liver parenchyma. The Mask R-CNN was used to generate a medical model, and areas under the curve, true positive rates, false positive rates, and Dice coefficients were evaluated. For abnormal liver CT density detection, the per-image mean area under the curve, true positive rate, and false positive rate were 0.9490, 91.99%, and 13.68%, respectively. For segmentation ability, the highest mean Dice coefficient obtained was 0.8041. This study trained a Mask R-CNN on various HCC images to construct a medical model that serves as an auxiliary tool for alerting radiologists to abnormal CT densities in liver scans; the model can simultaneously detect liver lesions and perform automatic instance segmentation.
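The Dice coefficient used to score segmentation overlap can be computed directly from binary masks; a minimal sketch follows, with hypothetical masks standing in for the model output and the radiologist annotation.

```python
# Minimal sketch of the Dice coefficient for segmentation overlap.
import numpy as np

def dice(pred_mask: np.ndarray, gt_mask: np.ndarray) -> float:
    """Dice coefficient between two binary masks of the same shape."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom

pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1  # hypothetical prediction
gt = np.zeros((64, 64), dtype=np.uint8);   gt[12:42, 12:42] = 1    # hypothetical annotation
print(round(dice(pred, gt), 4))
```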


Thorax ◽  
2020 ◽  
Vol 75 (4) ◽  
pp. 306-312 ◽  
Author(s):  
David R Baldwin ◽  
Jennifer Gustafson ◽  
Lyndsey Pickup ◽  
Carlos Arteta ◽  
Petr Novotny ◽  
...  

Background Estimation of the risk of malignancy in pulmonary nodules detected by CT is central in clinical management. The use of artificial intelligence (AI) offers an opportunity to improve risk prediction. Here we compare the performance of an AI algorithm, the lung cancer prediction convolutional neural network (LCP-CNN), with that of the Brock University model, recommended in UK guidelines. Methods A dataset of incidentally detected pulmonary nodules measuring 5–15 mm was collected retrospectively from three UK hospitals for use in a validation study. Ground truth diagnosis for each nodule was based on histology (required for any cancer), resolution, stability or (for pulmonary lymph nodes only) expert opinion. There were 1397 nodules in 1187 patients, of which 234 nodules in 229 (19.3%) patients were cancer. Model discrimination and performance statistics at predefined score thresholds were compared between the Brock model and the LCP-CNN. Results The area under the curve for LCP-CNN was 89.6% (95% CI 87.6 to 91.5), compared with 86.8% (95% CI 84.3 to 89.1) for the Brock model (p≤0.005). Using the LCP-CNN, we found that 24.5% of nodules scored below the lowest cancer nodule score, compared with 10.9% using the Brock score. Using the predefined thresholds, we found that the LCP-CNN gave one false negative (0.4% of cancers), whereas the Brock model gave six (2.5%), while specificity statistics were similar between the two models. Conclusion The LCP-CNN score has better discrimination and allows a larger proportion of benign nodules to be identified without missing cancers than the Brock model. This has the potential to substantially reduce the proportion of surveillance CT scans required and thus save significant resources.
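The discrimination comparison reported above comes down to computing the area under the ROC curve for each model's scores on the same nodules; the sketch below illustrates that calculation with scikit-learn, using synthetic labels and scores as stand-ins for the malignancy outcomes and the LCP-CNN and Brock outputs.

```python
# Illustrative sketch (not the study's code): comparing two risk scores on the
# same cases by area under the ROC curve. Labels and scores are synthetic.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=200)            # 1 = cancer, 0 = benign (synthetic)
score_a = y_true * 0.5 + rng.random(200) * 0.7   # stand-in for model A scores
score_b = y_true * 0.3 + rng.random(200) * 0.9   # stand-in for model B scores

print("AUC model A:", roc_auc_score(y_true, score_a))
print("AUC model B:", roc_auc_score(y_true, score_b))
```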


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Muhammad Usman Tariq ◽  
Muhammad Babar ◽  
Marc Poulin ◽  
Akmal Saeed Khattak

Purpose The purpose of the proposed model is to help e-businesses predict churned users using machine learning, monitoring customer behavior and supporting decision-making accordingly. Design/methodology/approach The proposed model uses a 2-D convolutional neural network (CNN; a deep learning technique). It is a layered architecture comprising two phases: a data load and preprocessing layer and a 2-D CNN layer. In addition, the Apache Spark parallel and distributed framework is used to process the data in a parallel environment. Training data were obtained from the Telco Customer Churn dataset on Kaggle. Findings The proposed model is accurate, with an accuracy score of 0.963 out of 1. In addition, the training and validation loss is very low, at 0.004. The confusion matrix results show that the true-positive rate is 95% and the true-negative rate is 94%, while the false-negative rate is only 5% and the false-positive rate is only 6%, which is effective. Originality/value This paper provides a comprehensive description of the preprocessing required for the CNN model, and the data set is handled carefully to achieve successful customer churn prediction.
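As a hedged sketch of the architecture described above (not the paper's actual configuration), the code below reshapes preprocessed tabular churn features into a small 2-D grid and passes them through a 2-D CNN with a two-class output; the 4x5 grid, 20-feature input, and layer sizes are illustrative assumptions.

```python
# Hedged sketch: tabular churn features arranged as a 2-D grid for a 2-D CNN.
# Grid shape, feature count, and layer sizes are assumptions for illustration.
import torch
import torch.nn as nn

class ChurnCNN2D(nn.Module):
    def __init__(self, grid=(4, 5)):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=2), nn.ReLU(),
            nn.Conv2d(8, 16, kernel_size=2), nn.ReLU(),
            nn.Flatten(),
            nn.Linear(16 * (grid[0] - 2) * (grid[1] - 2), 2),  # churn vs. no churn
        )

    def forward(self, x):   # x: (batch, 1, 4, 5), i.e. 20 preprocessed features per row
        return self.net(x)

features = torch.randn(32, 20).view(32, 1, 4, 5)  # reshape tabular rows into a grid
logits = ChurnCNN2D()(features)
```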


2021 ◽  
Vol 7 (2) ◽  
pp. 356-362
Author(s):  
Harry Coppock ◽  
Alex Gaskell ◽  
Panagiotis Tzirakis ◽  
Alice Baird ◽  
Lyn Jones ◽  
...  

Background Since the emergence of COVID-19 in December 2019, multidisciplinary research teams have wrestled with how best to control the pandemic in light of its considerable physical, psychological and economic damage. Mass testing has been advocated as a potential remedy; however, mass testing using physical tests is a costly and hard-to-scale solution. Methods This study demonstrates the feasibility of an alternative form of COVID-19 detection, harnessing digital technology through the use of audio biomarkers and deep learning. Specifically, we show that a deep neural network-based model can be trained to detect symptomatic and asymptomatic COVID-19 cases using breath and cough audio recordings. Results Our model, a custom convolutional neural network, demonstrates strong empirical performance on a data set consisting of 355 crowdsourced participants, achieving an area under the curve of the receiver operating characteristic of 0.846 on the task of COVID-19 classification. Conclusion This study offers a proof of concept for diagnosing COVID-19 using cough and breath audio signals and motivates a comprehensive follow-up research study on a wider data sample, given the evident advantages of a low-cost, highly scalable digital COVID-19 diagnostic tool.
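A common way to realize this kind of audio classifier, sketched below under stated assumptions, is to convert each breath or cough recording into a log-mel spectrogram and feed it to a small 2-D CNN with a sigmoid output; the file name, sample rate, mel settings, and network are placeholders, not the study's custom architecture.

```python
# Hedged sketch: audio -> log-mel spectrogram -> small 2-D CNN binary classifier.
# "cough.wav" and all settings are hypothetical placeholders.
import librosa
import torch
import torch.nn as nn

y, sr = librosa.load("cough.wav", sr=16000)                 # hypothetical recording
mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=64)
logmel = librosa.power_to_db(mel)                           # shape: (64, time_frames)

cnn = nn.Sequential(
    nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(),
    nn.Linear(16, 1),                                       # single logit: COVID-19 or not
)
x = torch.tensor(logmel, dtype=torch.float32)[None, None]   # (1, 1, 64, T)
prob = torch.sigmoid(cnn(x))
```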


Biomolecules ◽  
2021 ◽  
Vol 11 (2) ◽  
pp. 264
Author(s):  
Kaisa Liimatainen ◽  
Riku Huttunen ◽  
Leena Latonen ◽  
Pekka Ruusuvuori

Identifying the localization of proteins and their specific subpopulations associated with certain cellular compartments is crucial for understanding protein function and interactions with other macromolecules. Fluorescence microscopy is a powerful method for assessing protein localization, with increasing demand for automated high-throughput analysis methods to supplement the technical advancements in high-throughput imaging. Here, we study the applicability of deep neural network-based artificial intelligence in classifying protein localization in 13 cellular subcompartments. We use deep learning based on a convolutional neural network and a fully convolutional network with similar architectures for the classification task, aiming at accurate classification but, importantly, also a comparison of the networks. Our results show that both types of convolutional neural networks perform well in protein localization classification tasks for major cellular organelles. Yet, in this study, the fully convolutional network outperforms the convolutional neural network in classification of images with multiple simultaneous protein localizations. We find that the fully convolutional network, whose output visualizes the identified localizations, is a very useful tool for systematic protein localization assessment.
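The practical difference between the two network types can be seen in their output heads: a conventional CNN classifier pools the feature map into a single 13-class score vector, whereas a fully convolutional head keeps a 13-channel spatial map that can be used to visualize where each localization is detected. The sketch below illustrates this contrast; the backbone, channel counts, and image size are assumptions.

```python
# Illustrative sketch: CNN-style dense head vs. fully convolutional (FCN) head.
import torch
import torch.nn as nn

backbone = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                         nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())

# CNN-style head: global pooling + dense layer -> one score per class.
cnn_head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, 13))

# FCN-style head: 1x1 convolution -> a 13-channel spatial score map.
fcn_head = nn.Conv2d(32, 13, kernel_size=1)

x = torch.randn(1, 1, 128, 128)        # one fluorescence image (hypothetical size)
feat = backbone(x)
print(cnn_head(feat).shape)            # torch.Size([1, 13])        -> image-level label
print(fcn_head(feat).shape)            # torch.Size([1, 13, 128, 128]) -> spatial map
```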


2020 ◽  
Vol 2020 ◽  
pp. 1-6
Author(s):  
Jian-ye Yuan ◽  
Xin-yuan Nan ◽  
Cheng-rong Li ◽  
Le-le Sun

Considering that garbage classification is urgent, a 23-layer convolutional neural network (CNN) model is designed in this paper, with an emphasis on real-time garbage classification, to address the low accuracy of garbage classification and recycling and the difficulty of manual recycling. Firstly, depthwise separable convolution was used to reduce the number of parameters (Params) of the model. Then, an attention mechanism was used to improve the accuracy of the garbage classification model. Finally, model fine-tuning was used to further improve its performance. Besides, we compared the model with classic image classification models, including AlexNet, VGG16, and ResNet18, and lightweight classification models, including MobileNetV2 and ShuffleNetV2, and found that the proposed model, GAF_dense, has a higher accuracy rate and fewer Params and FLOPs. To further check the performance of the model, we tested it on the CIFAR-10 data set and found that the accuracy rates of GAF_dense are 0.018 and 0.03 higher than those of ResNet18 and ShuffleNetV2, respectively. On the ImageNet data set, the accuracy rates of GAF_dense are 0.225 and 0.146 higher than those of ResNet18 and ShuffleNetV2, respectively. Therefore, the garbage classification model proposed in this paper is suitable for garbage classification and other classification tasks that protect the ecological environment, and can be applied to tasks in areas such as environmental science, children's education, and environmental protection.
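The parameter-reduction technique cited above, depthwise separable convolution, factorizes a standard convolution into a per-channel (depthwise) convolution followed by a 1x1 (pointwise) convolution; the sketch below shows the idea and compares parameter counts, with illustrative channel sizes rather than the GAF_dense configuration.

```python
# Hedged sketch of a depthwise separable convolution block (channel sizes illustrative).
import torch.nn as nn

class DepthwiseSeparableConv(nn.Module):
    def __init__(self, in_ch, out_ch, kernel_size=3):
        super().__init__()
        self.depthwise = nn.Conv2d(in_ch, in_ch, kernel_size,
                                   padding=kernel_size // 2, groups=in_ch)  # per-channel conv
        self.pointwise = nn.Conv2d(in_ch, out_ch, kernel_size=1)            # 1x1 channel mixing

    def forward(self, x):
        return self.pointwise(self.depthwise(x))

std = nn.Conv2d(32, 64, 3, padding=1)
sep = DepthwiseSeparableConv(32, 64)
count = lambda m: sum(p.numel() for p in m.parameters())
print(count(std), "vs", count(sep))   # the separable block uses far fewer Params
```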


2020 ◽  
pp. 1-11
Author(s):  
Jie Liu ◽  
Hongbo Zhao

BACKGROUND: Convolutional neural networks are often superior to other similar algorithms in image classification. The convolution and sub-sampling layers extract sample features, and weight sharing greatly reduces the number of training parameters of the network. OBJECTIVE: This paper describes an improved convolutional neural network structure, including the convolution layer, sub-sampling layer, and fully connected layer. It also introduces the "yan.mat" data set of eyeball blood-filament images covering five kinds of diseases and normal eyes, formatted for convenient computation in MATLAB. METHODS: In this paper, we improve the structure of the classical LeNet-5 convolutional neural network and design a network structure with different convolution kernels, different sub-sampling methods, and different classifiers, and we use this structure to solve the problem of ocular bloodstream disease recognition. RESULTS: The experimental results show that the improved convolutional neural network structure performs well on the eye blood-filament data set, which shows that the convolutional neural network has strong classification ability and robustness. The improved structure can classify the diseases reflected by the eyeball blood filaments well.
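For readers unfamiliar with the baseline being modified, the sketch below outlines a LeNet-5-style structure with the kernel size and sub-sampling (pooling) method exposed as parameters, which is the kind of variation the paper explores; the 32x32 grayscale input and six-class output (five diseases plus normal) are assumptions, and the sketch is in Python rather than the paper's MATLAB.

```python
# Illustrative LeNet-5-style sketch: convolution + sub-sampling layers followed by
# fully connected layers, with kernel size and pooling method as parameters.
import torch
import torch.nn as nn

def lenet_variant(kernel_size=5, pool=nn.MaxPool2d, n_classes=6):
    return nn.Sequential(
        nn.Conv2d(1, 6, kernel_size), nn.ReLU(), pool(2),    # conv + sub-sampling
        nn.Conv2d(6, 16, kernel_size), nn.ReLU(), pool(2),
        nn.Flatten(),
        nn.LazyLinear(120), nn.ReLU(),                       # infers flattened size
        nn.Linear(120, 84), nn.ReLU(),
        nn.Linear(84, n_classes),                            # 5 diseases + normal (assumed)
    )

model = lenet_variant(pool=nn.AvgPool2d)     # swap sub-sampling method to average pooling
logits = model(torch.randn(1, 1, 32, 32))    # 32x32 grayscale input (assumed)
```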

