Comparison of Deep Learning Models for Cervical Vertebral Maturation Stage Classification on Lateral Cephalometric Radiographs

2021 ◽  
Vol 10 (16) ◽  
pp. 3591
Author(s):  
Hyejun Seo ◽  
JaeJoon Hwang ◽  
Taesung Jeong ◽  
Jonghyun Shin

The purpose of this study is to evaluate and compare the performance of six state-of-the-art convolutional neural network (CNN)-based deep learning models for cervical vertebral maturation (CVM) classification on lateral cephalometric radiographs, and to visualize the CVM classification of each model using gradient-weighted class activation mapping (Grad-CAM). A total of 600 lateral cephalometric radiographs obtained from patients aged 6–19 years between 2013 and 2020 at Pusan National University Dental Hospital were used in this study. ResNet-18, MobileNet-v2, ResNet-50, ResNet-101, Inception-v3, and Inception-ResNet-v2 were tested to determine the optimal pre-trained network architecture. Multi-class classification metrics, accuracy, recall, precision, F1-score, and area under the curve (AUC) values from the receiver operating characteristic (ROC) curve were used to evaluate the performance of the models. All deep learning models demonstrated more than 90% accuracy, with Inception-ResNet-v2 performing best. In addition, visualizing each deep learning model with Grad-CAM showed that the models focused primarily on the cervical vertebrae and surrounding structures. The use of these deep learning models in clinical practice will help dental practitioners make accurate diagnoses and treatment plans.
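
To make the approach concrete, the following is a minimal PyTorch sketch of fine-tuning a pre-trained CNN for six-class CVM staging and inspecting it with Grad-CAM. It is not the authors' implementation; the backbone choice (ResNet-18), preprocessing, image path, and hook placement are illustrative assumptions.

```python
import torch
import torch.nn as nn
from torchvision import models, transforms
from PIL import Image

NUM_STAGES = 6  # CVM stages CS1-CS6 (assumed class layout)

# Replace the ImageNet classifier head with a 6-class head for fine-tuning.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
model.fc = nn.Linear(model.fc.in_features, NUM_STAGES)
model.eval()

preprocess = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # cephalograms are single-channel
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
    transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225]),
])

# Grad-CAM hooks on the last convolutional block.
activations, gradients = {}, {}

def fwd_hook(module, inp, out):
    activations["value"] = out.detach()

def bwd_hook(module, grad_in, grad_out):
    gradients["value"] = grad_out[0].detach()

model.layer4.register_forward_hook(fwd_hook)
model.layer4.register_full_backward_hook(bwd_hook)

def grad_cam(image_path):
    """Return the predicted stage index and a coarse Grad-CAM heatmap."""
    x = preprocess(Image.open(image_path)).unsqueeze(0)
    logits = model(x)
    stage = logits.argmax(dim=1)
    logits[0, stage].backward()
    # Weight each feature map by its average gradient and sum over channels.
    weights = gradients["value"].mean(dim=(2, 3), keepdim=True)
    cam = torch.relu((weights * activations["value"]).sum(dim=1)).squeeze(0)
    cam = cam / (cam.max() + 1e-8)  # normalize to [0, 1]
    return stage.item(), cam        # upsample cam to overlay on the radiograph
```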

Technologies ◽  
2021 ◽  
Vol 9 (1) ◽  
pp. 14
Author(s):  
James Dzisi Gadze ◽  
Akua Acheampomaa Bamfo-Asante ◽  
Justice Owusu Agyemang ◽  
Henry Nunoo-Mensah ◽  
Kwasi Adu-Boahen Opare

Software-Defined Networking (SDN) is a new paradigm that revolutionizes the idea of a software-driven network through the separation of the control and data planes. It addresses the problems of traditional network architecture. Nevertheless, this architecture is exposed to several security threats, e.g., the distributed denial of service (DDoS) attack, which is hard to contain in such software-based networks. The concept of a centralized controller in SDN makes it a single point of attack as well as a single point of failure. In this paper, deep learning-based models, long short-term memory (LSTM) and convolutional neural network (CNN), are investigated, and their feasibility and efficiency for detecting and mitigating DDoS attacks are illustrated. The paper focuses on TCP, UDP, and ICMP flood attacks that target the controller. The performance of the models was evaluated based on accuracy, recall, and true negative rate. We compared the performance of the deep learning models with classical machine learning models, and we further provide details on the time taken to detect and mitigate the attack. Our results show that the RNN-based LSTM is a viable deep learning algorithm for detecting and mitigating DDoS attacks on the SDN controller. Our proposed model produced an accuracy of 89.63%, which outperformed classical models such as SVM (86.85%) and Naive Bayes (82.61%). Although KNN, another classical model, outperformed our proposed model (achieving an accuracy of 99.4%), our proposed model provides a good trade-off between precision and recall, which makes it suitable for DDoS classification. In addition, we found that the split ratio of the training and testing datasets can noticeably change the measured performance of a deep learning algorithm: the model achieved its best performance with a 70/30 split, compared with 80/20 and 60/40 split ratios.
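
As an illustration only (not the authors' implementation), the sketch below shows an LSTM that classifies short sequences of per-flow statistics collected at the SDN controller as benign or DDoS. The feature count, window length, and feature names are assumptions.

```python
import torch
import torch.nn as nn

class FlowLSTM(nn.Module):
    """LSTM over time-windowed flow counters, classified from the last step."""
    def __init__(self, n_features=6, hidden=64, n_classes=2):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, time_steps, n_features)
        out, _ = self.lstm(x)
        return self.head(out[:, -1, :])   # classify from the last time step

# Example: 10 consecutive 1-second windows of per-flow counters
# (e.g., packet count, byte count, duration, protocol one-hots).
model = FlowLSTM()
window = torch.randn(32, 10, 6)           # a batch of 32 flow sequences
logits = model(window)
loss = nn.CrossEntropyLoss()(logits, torch.randint(0, 2, (32,)))
loss.backward()                           # one supervised training step
```

A flow flagged as an attack could then trigger a controller rule that drops or rate-limits the offending source, which is where the mitigation time reported above would be measured.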


2021 ◽  
Vol 23 (Supplement_6) ◽  
pp. vi202-vi203
Author(s):  
Alvaro Sandino ◽  
Ruchika Verma ◽  
Yijiang Chen ◽  
David Becerra ◽  
Eduardo Romero ◽  
...  

Abstract PURPOSE Glioblastoma is a highly heterogeneous brain tumor. Primary treatment for glioblastoma involves maximally safe surgical resection. After surgery, resected tissue slides are visually analyzed by neuropathologists to identify the distinct histological hallmarks characterizing glioblastoma, including high cellularity, necrosis, and vascular proliferation. In this work, we present a hierarchical deep learning-based strategy to automatically segment distinct glioblastoma niches, including necrosis, cellular tumor, and hyperplastic blood vessels, on digitized histopathology slides. METHODS We employed the IvyGap cohort, for which hematoxylin and eosin (H&E) slides (digitized at 20X magnification) from n=41 glioblastoma patients were available, along with expert-driven segmentations of cellular tumor, necrosis, and hyperplastic blood vessels (and other histological attributes). We randomly assigned n=120 slides from 29 patients for training, n=38 slides from 6 cases for validation, and n=30 slides from 6 patients for testing our deep learning model, which is based on a Residual Network architecture (ResNet-50). Approximately 2,000 patches of 224x224 pixels were sampled from every slide. Our hierarchical model first segments necrosis from non-necrotic (i.e., cellular tumor) regions, and then, within the regions segmented as non-necrotic, identifies hyperplastic blood vessels from the rest of the cellular tumor. RESULTS Our model achieved a training accuracy of 94% and a testing accuracy of 88%, with an area under the curve (AUC) of 92%, in distinguishing necrosis from non-necrotic (i.e., cellular tumor) regions. Similarly, we obtained a training accuracy of 78% and a testing accuracy of 87% (with an AUC of 94%) in identifying hyperplastic blood vessels from the rest of the cellular tumor. CONCLUSION We developed a reliable hierarchical segmentation model for automatic segmentation of necrosis, cellular tumor, and hyperplastic blood vessels on digitized H&E-stained glioblastoma tissue images. Future work will involve extending our model to segment pseudopalisading patterns and microvascular proliferation.
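
The hierarchical decision described above can be sketched as two cascaded binary patch classifiers, applied in order. This is a hedged illustration under assumed details (both stages as binary ResNet-50 classifiers over normalized 224x224 H&E patches), not the study code.

```python
import torch
import torch.nn as nn
from torchvision import models

def make_binary_resnet50():
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, 2)   # two-class head
    return net.eval()

necrosis_net = make_binary_resnet50()  # stage 1: necrosis vs. non-necrotic tissue
vessel_net = make_binary_resnet50()    # stage 2: hyperplastic vessels vs. rest of tumor

@torch.no_grad()
def classify_patch(patch):             # patch: (3, 224, 224) normalized tensor
    x = patch.unsqueeze(0)
    if necrosis_net(x).argmax(1).item() == 1:
        return "necrosis"              # stage 2 is never reached for necrotic patches
    if vessel_net(x).argmax(1).item() == 1:
        return "hyperplastic_vessel"
    return "cellular_tumor"

print(classify_patch(torch.randn(3, 224, 224)))  # random patch, illustration only
```

Applying this routine to every sampled patch and stitching the labels back onto the slide yields the niche segmentation map.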


2006 ◽  
Vol 76 (6) ◽  
pp. 984-989 ◽  
Author(s):  
Paola Gandini ◽  
Marta Mancini ◽  
Federico Andreani

Abstract Objective: To compare skeletal maturation as measured by hand-wrist bone analysis and by cervical vertebral analysis. Materials and Methods: Radiographic hand-wrist bone analyses and cephalometric cervical vertebral analyses of 30 patients (14 males and 16 females; 7–18 years of age) were examined. The hand-wrist bone analysis was evaluated with the Bjork index, whereas the cervical vertebral analysis was assessed with the cervical vertebral maturation stage (CVMS) method. To define vertebral stages, the analysis consisted of both cephalometric measurement (13 points) and morphologic evaluation of three cervical vertebrae (concavity of the second, third, and fourth vertebrae and shape of the third and fourth vertebrae). These measurements were then compared with the hand-wrist bone analysis, and the results were statistically analyzed with the Cohen κ concordance index. The same procedure was repeated after 6 months and showed identical results. Results: The Cohen κ index obtained (mean ± SD) was 0.783 ± 0.098, which falls in the substantial agreement range. The results show a concordance of 83.3%, whereas the concordance expected by chance for each case is 23.3%. The results also show a correlation of CVMS I with Bjork stages 1–3 (interval A), CVMS II with Bjork stage 4 (interval B), CVMS III with Bjork stage 5 (interval C), CVMS IV with Bjork stages 6 and 7 (interval D), and CVMS V with Bjork stages 8 and 9 (interval E). Conclusions: Vertebral analysis on a lateral cephalogram is as valid as hand-wrist bone analysis, with the advantage of reducing the radiation exposure of growing subjects.
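
For readers unfamiliar with the agreement statistic used here, the following is a minimal sketch of computing Cohen's kappa and raw agreement between the two staging methods, with both mapped to the intervals A–E. The stage vectors are invented placeholders, not the study data.

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical per-patient intervals from each method (placeholder values).
hand_wrist_intervals = ["A", "A", "B", "C", "D", "D", "E", "B", "C", "E"]
cvms_intervals       = ["A", "B", "B", "C", "D", "D", "E", "B", "C", "E"]

kappa = cohen_kappa_score(hand_wrist_intervals, cvms_intervals)
agreement = sum(a == b for a, b in zip(hand_wrist_intervals, cvms_intervals)) / len(cvms_intervals)
print(f"Cohen's kappa = {kappa:.3f}, raw agreement = {agreement:.1%}")
```

Kappa discounts the agreement expected by chance, which is why the paper reports both the 83.3% raw concordance and the chance-level figure alongside the κ value.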


COVID ◽  
2021 ◽  
Vol 1 (1) ◽  
pp. 403-415
Author(s):  
Abeer Badawi ◽  
Khalid Elgazzar

Coronavirus disease (COVID-19) is an illness caused by a virus of the novel coronavirus family. One of the practical examinations for COVID-19 is chest radiography: COVID-19-infected patients show abnormalities in chest X-ray images. However, examining chest X-rays requires a specialist with extensive experience. Hence, deep learning techniques for detecting abnormalities in X-ray images are commonly presented as a potential solution to help diagnose the disease. Numerous studies have reported COVID-19 chest X-ray classification, but most previous work has been conducted on small sets of COVID-19 X-ray images, which creates imbalanced datasets and degrades the performance of deep learning models. In this paper, we propose several image processing techniques to augment COVID-19 X-ray images and generate a large and diverse dataset that boosts the performance of deep learning algorithms in detecting the virus from chest X-rays. We also propose innovative and robust deep learning models, based on DenseNet201, VGG16, and VGG19, to detect COVID-19 from a large set of chest X-ray images. A performance evaluation shows that the proposed models outperform all existing techniques to date: our models achieved 99.62% accuracy on the binary classification task and 95.48% on the multi-class task. Based on these findings, we provide a pathway for researchers to develop enhanced models with a balanced dataset that includes the largest available number of COVID-19 chest X-ray images. This work is of high interest to healthcare providers, as it helps to diagnose COVID-19 from chest X-rays in less time and with higher accuracy.
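
As a hedged sketch of an augmentation pipeline of the kind described above (the specific transforms and parameter ranges are assumptions, not the paper's recipe), the snippet below applies geometric and intensity perturbations that expand a small COVID-19 CXR set before fine-tuning DenseNet201/VGG16/VGG19.

```python
from torchvision import transforms

# Each call to `augment` on the same PIL X-ray image yields a new variant.
augment = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),       # radiographs are single-channel
    transforms.RandomRotation(degrees=10),             # small rotations
    transforms.RandomResizedCrop(224, scale=(0.9, 1.0)),
    transforms.RandomHorizontalFlip(p=0.5),
    transforms.ColorJitter(brightness=0.2, contrast=0.2),
    transforms.ToTensor(),
])
```

Repeatedly sampling augmented copies of the minority COVID-19 class is one way to balance it against the much larger normal and pneumonia classes, which is the imbalance problem the paper targets.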


Diagnostics ◽  
2021 ◽  
Vol 11 (11) ◽  
pp. 2109
Author(s):  
Skandha S. Sanagala ◽  
Andrew Nicolaides ◽  
Suneet K. Gupta ◽  
Vijaya K. Koppula ◽  
Luca Saba ◽  
...  

Background and Purpose: Only 1–2% of asymptomatic internal carotid artery plaques are unstable as a result of >80% stenosis. Thus, unnecessary efforts can be saved if these plaques can be characterized and classified into symptomatic and asymptomatic using non-invasive B-mode ultrasound. Earlier plaque tissue characterization (PTC) methods were machine learning (ML)-based and used hand-crafted features, which yielded lower accuracy and reliability. The present study shows the role of transfer learning (TL)-based deep learning models for PTC. Methods: Because pre-trained weights were used in the supercomputer framework, we hypothesized that transfer learning (TL) provides improved performance compared with deep learning trained from scratch. We applied 11 kinds of artificial intelligence (AI) models; 10 of them were augmented and optimized using TL approaches, a class of Atheromatic™ 2.0 TL (AtheroPoint™, Roseville, CA, USA), consisting of (i–ii) Visual Geometric Group-16 and -19 (VGG16, VGG19); (iii) Inception V3 (IV3); (iv–v) DenseNet121 and DenseNet169; (vi) XceptionNet; (vii) ResNet50; (viii) MobileNet; (ix) AlexNet; and (x) SqueezeNet; plus one DL-based model, (xi) SuriNet, derived from UNet. We benchmarked the 11 AI models against our earlier deep convolutional neural network (DCNN) model. Results: The best performing TL model was MobileNet, with accuracy and area-under-the-curve (AUC) values of 96.10 ± 3% and 0.961 (p < 0.0001), respectively. In DL, the DCNN was comparable to SuriNet, with accuracies of 95.66% and 92.7 ± 5.66% and AUCs of 0.956 (p < 0.0001) and 0.927 (p < 0.0001), respectively. We validated the performance of the AI architectures with established biomarkers such as greyscale median (GSM), fractal dimension (FD), higher-order spectra (HOS), and visual heatmaps. We also benchmarked against the previously developed Atheromatic™ 1.0 ML system and showed an improvement of 12.9%. Conclusions: TL is a powerful AI tool for PTC into symptomatic and asymptomatic plaques.
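
As an illustrative transfer-learning sketch (not Atheromatic™ code), the snippet below freezes an ImageNet-pre-trained MobileNet backbone and trains only a new two-class head for symptomatic versus asymptomatic plaque images. The input size and head layout are assumptions.

```python
import torch.nn as nn
from torchvision import models

base = models.mobilenet_v2(weights=models.MobileNet_V2_Weights.IMAGENET1K_V1)
for p in base.features.parameters():
    p.requires_grad = False               # reuse pre-trained features unchanged

base.classifier = nn.Sequential(
    nn.Dropout(0.2),
    nn.Linear(base.last_channel, 2),      # symptomatic vs. asymptomatic plaque
)
# Only `base.classifier` is optimized; the reported AUC can then be computed on
# held-out plaque images with sklearn.metrics.roc_auc_score.
```

Freezing the backbone is what makes TL attractive when only a few hundred labeled ultrasound images are available, which is the setting the study describes.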


2019 ◽  
Vol 4 (3) ◽  
pp. 149
Author(s):  
Wenti Komala ◽  
Endah Mardiati ◽  
Eky Soeria Soemantri ◽  
Isnaniah Malik

Cleft lip and palate is one of the most common congenital anomalies. Cleft lip and palate patients encounter growth problems in the lip and palate area, although the effect on their overall growth and development remains unknown. Cervical vertebral maturation is an indicator of physiological maturation used in interceptive treatment and orthognathic surgery. The present study aims to determine the physiological maturation stage, by cervical vertebral maturation index, in cleft and non-cleft patients. Lateral cephalograms of 26 cleft patients and 27 non-cleft patients with chronological ages ranging from 8 to 16 years were analyzed. Cervical vertebral maturation was assessed in six stages using the cervical vertebral maturation method of Hassel and Farman. Data were analyzed using the t-test (p ≤ 0.05). The results show that the physiological maturation stage by cervical vertebral maturation index in cleft and non-cleft patients has no significant difference in the acceleration (p = 0.38), transition (p = 0.41), and deceleration (p = 0.39) stages. Thus, there is no significant difference in physiological maturation stage, as measured by the cervical vertebral maturation index, between cleft and non-cleft patients.
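
The comparison described above reduces to an independent two-sample t-test per maturation phase. The following minimal sketch shows that calculation with SciPy; the stage values are placeholders, not the study data.

```python
from scipy import stats

# Hypothetical Hassel-Farman stage scores (1-6) for each group (placeholders).
cleft_stages     = [2, 3, 3, 4, 4, 5, 2, 3, 4, 5]
non_cleft_stages = [2, 3, 4, 4, 5, 5, 3, 3, 4, 4]

t_stat, p_value = stats.ttest_ind(cleft_stages, non_cleft_stages)
print(f"t = {t_stat:.2f}, p = {p_value:.2f}")   # p > 0.05 -> no significant difference
```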


2021 ◽  
Author(s):  
Atiq Rehman ◽  
Samir Brahim Belhaouari

Video classification has gained significant success in recent years. Specifically, the topic has gained more attention after the emergence of deep learning models as a successful tool for automatically classifying videos. In recognition of the importance of the video classification task, and to summarize the success of deep learning models for this task, this paper presents a comprehensive and concise review of the topic. There are a number of existing reviews and survey papers related to video classification in the scientific literature; however, the existing review papers are either outdated, and therefore do not include recent state-of-the-art works, or they have other limitations. In order to provide an updated and concise review, this paper highlights the key findings based on the existing deep learning models. The key findings are also discussed in a way that provides future research directions. This review mainly focuses on the types of network architecture used, the evaluation criteria used to measure success, and the datasets used. To make the review self-contained, the emergence of deep learning methods for automatic video classification and the state-of-the-art deep learning methods are explained and summarized. Moreover, a clear insight into the newly developed deep learning architectures and the traditional approaches is provided, and the critical challenges based on the benchmarks are highlighted for evaluating the technical progress of these methods. The paper also summarizes the benchmark datasets and the performance evaluation metrics for video classification. Based on this compact, complete, and concise review, the paper proposes new research directions to solve the challenging video classification problem.


2021 ◽  
Vol 5 (4) ◽  
pp. 73
Author(s):  
Mohamed Chetoui ◽  
Moulay A. Akhloufi ◽  
Bardia Yousefi ◽  
El Mostafa Bouattane

The coronavirus pandemic is spreading around the world. Medical imaging modalities such as radiography play an important role in the fight against COVID-19, and deep learning (DL) techniques have been able to improve medical imaging tools and help radiologists make clinical decisions for the diagnosis, monitoring, and prognosis of different diseases. Computer-aided diagnostic (CAD) systems can improve work efficiency by precisely delineating infections in chest X-ray (CXR) images, thus facilitating subsequent quantification. CAD can also help automate the scanning process and reshape the workflow with minimal patient contact, providing the best protection for imaging technicians. The objective of this study is to develop a deep learning algorithm to detect COVID-19, pneumonia, and normal cases on CXR images. We consider two classification problems: (i) a binary classification of COVID-19 versus normal cases and (ii) a multi-class classification of COVID-19, pneumonia, and normal cases. Nine datasets and more than 3200 COVID-19 CXR images are used to assess the efficiency of the proposed technique. The model is trained on a subset of the National Institutes of Health (NIH) dataset using the swish activation, which improves the training accuracy for detecting COVID-19 and other pneumonia. The models are tested on eight merged datasets and on individual test sets in order to confirm the degree of generalization of the proposed algorithms. An explainability algorithm is also developed to visually show the location of the lung-infected areas detected by the model, and we provide a detailed analysis of the misclassified images. The obtained results achieve high performance, with an area under the curve (AUC) of 0.97 for multi-class classification (COVID-19 vs. other pneumonia vs. normal) and 0.98 for the binary model (COVID-19 vs. normal). The average sensitivity and specificity are 0.97 and 0.98, respectively, and the sensitivity of the COVID-19 class reaches 0.99. The results outperform comparable state-of-the-art models for the detection of COVID-19 on CXR images. The explainability model shows that our model is able to efficiently identify the signs of COVID-19.
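
As a hedged sketch (not the authors' model), the snippet below shows a small classification head that uses the swish activation (SiLU in PyTorch) on top of pooled CNN features, together with the multi-class AUC evaluation reported above. The feature size and class count are assumptions.

```python
import torch
import torch.nn as nn
from sklearn.metrics import roc_auc_score

head = nn.Sequential(
    nn.Linear(1024, 256),
    nn.SiLU(),                            # swish(x) = x * sigmoid(x)
    nn.Dropout(0.3),
    nn.Linear(256, 3),                    # COVID-19 vs. pneumonia vs. normal
)

features = torch.randn(8, 1024)           # pooled backbone features (illustrative)
probs = torch.softmax(head(features), dim=1).detach().numpy()
labels = [0, 1, 2, 0, 2, 1, 0, 2]         # placeholder ground-truth classes
print(roc_auc_score(labels, probs, multi_class="ovr"))  # one-vs-rest multi-class AUC
```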


Electronics ◽  
2020 ◽  
Vol 10 (1) ◽  
pp. 17
Author(s):  
Soha A. Nossier ◽  
Julie Wall ◽  
Mansour Moniri ◽  
Cornelius Glackin ◽  
Nigel Cannings

Recent speech enhancement research has shown that deep learning techniques are very effective at removing background noise, and many deep neural networks have been proposed, showing promising results for improving overall speech perception. The Deep Multilayer Perceptron, Convolutional Neural Networks, and the Denoising Autoencoder are well-established architectures for speech enhancement; however, choosing between different deep learning models has been mainly empirical. Consequently, a comparative analysis is needed between these three architecture types to show the factors affecting their performance. In this paper, this analysis is presented by comparing seven deep learning models that belong to these three categories. The comparison includes evaluating performance in terms of the overall quality of the output speech, using five objective evaluation metrics and a subjective evaluation with 23 listeners; the ability to deal with challenging noise conditions; generalization ability; complexity; and processing time. Further analysis is then provided using two different approaches. The first approach investigates how performance is affected by changing network hyperparameters and the structure of the data, including the Lombard effect. The second approach interprets the results by visualizing the spectrogram of the output layer of all the investigated models, and the spectrograms of the hidden layers of the convolutional neural network architecture. Finally, a general evaluation of supervised deep learning-based speech enhancement is performed using SWOC analysis to discuss the technique’s Strengths, Weaknesses, Opportunities, and Challenges. The results of this paper contribute to the understanding of how different deep neural networks perform the speech enhancement task, highlight the strengths and weaknesses of each architecture, and provide recommendations for achieving better performance. This work facilitates the development of better deep neural networks for speech enhancement in the future.
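
The sketch below is a minimal illustration, under assumed shapes, of one of the architecture families compared above: a fully connected denoising autoencoder that maps noisy log-magnitude spectrogram frames to enhanced ones. It is not one of the seven evaluated models; the layer sizes and STFT setup are illustrative.

```python
import torch
import torch.nn as nn

N_FFT = 512
N_BINS = N_FFT // 2 + 1                   # 257 frequency bins per frame

class DenoisingAE(nn.Module):
    """Frame-wise encoder/decoder over log-magnitude spectrogram frames."""
    def __init__(self, n_bins=N_BINS, hidden=128):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_bins, hidden), nn.ReLU())
        self.decoder = nn.Linear(hidden, n_bins)

    def forward(self, noisy_frames):      # (batch, n_bins)
        return self.decoder(self.encoder(noisy_frames))

# Frames come from an STFT of the noisy waveform; the enhanced magnitude would be
# recombined with the noisy phase before the inverse STFT.
wave = torch.randn(16000)                                  # 1 s of noisy audio at 16 kHz
spec = torch.stft(wave, n_fft=N_FFT,
                  window=torch.hann_window(N_FFT),
                  return_complex=True)                     # (n_bins, frames)
log_mag = torch.log1p(spec.abs()).T                        # (frames, n_bins)
enhanced = DenoisingAE()(log_mag)                          # one enhanced frame per input frame
```

Visualizing `log_mag` and `enhanced` side by side is essentially the spectrogram-based interpretation approach the paper applies to the output and hidden layers of its models.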

