Peri-Implant Bone Loss Measurement Using a Region-Based Convolutional Neural Network on Dental Periapical Radiographs

2021 ◽  
Vol 10 (5) ◽  
pp. 1009
Author(s):  
Jun-Young Cha ◽  
Hyung-In Yoon ◽  
In-Sung Yeo ◽  
Kyung-Hoe Huh ◽  
Jung-Suk Han

Determining the peri-implant marginal bone level on radiographs is challenging because the boundaries of the bone around implants are often unclear or the heights of the buccal and lingual bone levels differ. Therefore, a deep convolutional neural network (CNN) was evaluated for detecting the marginal bone level, top, and apex of implants on dental periapical radiographs, and an automated assistant system was proposed for calculating the bone loss percentage and classifying the severity of bone resorption. A modified region-based CNN (R-CNN) was trained via transfer learning from a model pretrained on the Microsoft Common Objects in Context (MS COCO) dataset. Overall, 708 periapical radiographic images were divided into training (n = 508), validation (n = 100), and test (n = 100) datasets. The training dataset was enlarged by random data augmentation. For evaluation, average precision, average recall, and mean object keypoint similarity (OKS) were calculated, and the mean OKS values of the model and a dental clinician were compared. Using the detected keypoints, radiographic bone loss was measured and classified. No statistically significant difference was found between the modified R-CNN model and the dental clinician in detecting landmarks around dental implants. The modified R-CNN model can thus be used to measure the radiographic peri-implant bone loss ratio and assess the severity of peri-implantitis.
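
Below is a minimal sketch of how a bone loss percentage could be derived from the three detected keypoints (implant top, marginal bone level, and implant apex). The linear-ratio formula and the severity thresholds are illustrative assumptions, not the authors' exact method.

```python
import math

def bone_loss_percentage(top, bone_level, apex):
    """Estimate radiographic bone loss as the distance from the implant
    top to the marginal bone level, relative to total implant length.
    Each argument is an (x, y) keypoint in image coordinates."""
    dist = lambda a, b: math.hypot(a[0] - b[0], a[1] - b[1])
    implant_length = dist(top, apex)
    exposed_length = dist(top, bone_level)
    return 100.0 * exposed_length / implant_length

def classify_severity(loss_pct):
    # Hypothetical cut-offs for illustration only; the paper's
    # classification thresholds are not reproduced here.
    if loss_pct < 25:
        return "mild"
    elif loss_pct < 50:
        return "moderate"
    return "severe"

# Example: keypoints (in pixels) detected on one radiograph
top, bone, apex = (120, 80), (118, 150), (115, 360)
pct = bone_loss_percentage(top, bone, apex)
print(f"bone loss: {pct:.1f}% -> {classify_severity(pct)}")
```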

2020 ◽  
Vol 10 (5) ◽  
pp. 1040-1048 ◽  
Author(s):  
Xianwei Jiang ◽  
Liang Chang ◽  
Yu-Dong Zhang

More than 35 million patients suffer from Alzheimer's disease (AD), and this number is growing, placing a heavy burden on countries around the world. Early detection is beneficial, and deep learning can aid AD identification effectively and achieve strong results. A novel eight-layer convolutional neural network with batch normalization and dropout techniques for the classification of Alzheimer's disease was proposed. After data augmentation, the training dataset contained 7399 AD patients and 7399 healthy control (HC) subjects. Our eight-layer CNN-BN-DO-DA method yielded a sensitivity of 97.77%, a specificity of 97.76%, a precision of 97.79%, an accuracy of 97.76%, an F1 score of 97.76%, and an MCC of 95.56% on the test set, the best performance among seven state-of-the-art approaches. The results strongly demonstrate that this method can effectively assist the clinical diagnosis of Alzheimer's disease.
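
As an illustration of the described design, here is a minimal sketch of a CNN that interleaves batch normalization and dropout in the spirit of CNN-BN-DO; the layer counts, channel widths, and input size are assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class CNNBNDO(nn.Module):
    """Sketch of a conv net with batch norm and dropout; hypothetical sizes."""
    def __init__(self, num_classes=2, p_drop=0.5):
        super().__init__()
        def block(cin, cout):
            return nn.Sequential(
                nn.Conv2d(cin, cout, kernel_size=3, padding=1),
                nn.BatchNorm2d(cout),      # batch normalization after each conv
                nn.ReLU(inplace=True),
                nn.MaxPool2d(2),
            )
        self.features = nn.Sequential(
            block(1, 16), block(16, 32), block(32, 64), block(64, 128),
        )
        self.classifier = nn.Sequential(
            nn.Flatten(),
            nn.Dropout(p_drop),            # dropout before the dense layers
            nn.Linear(128 * 8 * 8, 128),
            nn.ReLU(inplace=True),
            nn.Dropout(p_drop),
            nn.Linear(128, num_classes),
        )

    def forward(self, x):                  # x: (N, 1, 128, 128) grayscale scan
        return self.classifier(self.features(x))

model = CNNBNDO()
logits = model(torch.randn(4, 1, 128, 128))
print(logits.shape)  # torch.Size([4, 2])
```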


2020 ◽  
Vol 21 (S1) ◽  
Author(s):  
Dina Abdelhafiz ◽  
Jinbo Bi ◽  
Reda Ammar ◽  
Clifford Yang ◽  
Sheida Nabavi

Abstract
Background: Automatic segmentation and localization of lesions in mammogram (MG) images are challenging, even with advanced methods such as deep learning (DL). We developed a new model, based on the architecture of the semantic segmentation U-Net, to precisely segment mass lesions in MG images. The proposed end-to-end convolutional neural network (CNN) model extracts contextual information by combining low-level and high-level features. We trained the proposed model using large, publicly available databases (CBIS-DDSM, BCDR-01, and INbreast) and a private database from the University of Connecticut Health Center (UCHC).
Results: We compared the performance of the proposed model with those of state-of-the-art DL models, including the fully convolutional network (FCN), SegNet, Dilated-Net, the original U-Net, and Faster R-CNN, as well as the conventional region growing (RG) method. The proposed Vanilla U-Net model significantly outperforms the Faster R-CNN model in terms of runtime and the Intersection over Union (IOU) metric. Trained on digitized film-based and fully digital MG images, the proposed Vanilla U-Net model achieves a mean test accuracy of 92.6%. The proposed model achieves a mean Dice coefficient index (DI) of 0.951 and a mean IOU of 0.909, which show how close the output segments are to the corresponding lesions in the ground-truth maps. Data augmentation proved very effective in our experiments, increasing the mean DI from 0.922 to 0.951 and the mean IOU from 0.856 to 0.909.
Conclusions: The proposed Vanilla U-Net based model can be used for precise segmentation of masses in MG images, because the segmentation process incorporates multi-scale spatial context and captures local and global context to predict a precise pixel-wise segmentation map of a full input MG image. These predicted maps can help radiologists differentiate benign and malignant lesions based on lesion shape. We show that using transfer learning, introducing augmentation, and modifying the architecture of the original model result in better performance, in terms of mean accuracy, mean DI, and mean IOU in detecting mass lesions, compared with the other DL models and the conventional method.
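
For reference, a minimal sketch of the two reported segmentation metrics, the Dice coefficient index (DI) and Intersection over Union (IOU), in their standard formulation (not code from the paper):

```python
import numpy as np

def dice_and_iou(pred, truth, eps=1e-7):
    """pred, truth: binary numpy arrays of the same shape.
    Returns (Dice, IoU) comparing a predicted mask to ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    inter = np.logical_and(pred, truth).sum()
    dice = (2.0 * inter + eps) / (pred.sum() + truth.sum() + eps)
    iou = (inter + eps) / (np.logical_or(pred, truth).sum() + eps)
    return dice, iou

# Toy example: two overlapping square masks
pred = np.zeros((64, 64), dtype=np.uint8); pred[10:40, 10:40] = 1
truth = np.zeros((64, 64), dtype=np.uint8); truth[15:45, 15:45] = 1
print(dice_and_iou(pred, truth))
```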


2021 ◽  
Vol 9 ◽  
Author(s):  
Shui-Hua Wang ◽  
Ziquan Zhu ◽  
Yu-Dong Zhang

Objective: COVID-19 is an infectious disease caused by a new strain of coronavirus. This study aims to develop a more accurate COVID-19 diagnosis system. Methods: First, the n-conv module (nCM) is introduced. We then built a 12-layer convolutional neural network (12l-CNN) as the backbone network. Afterwards, PatchShuffle was integrated with the 12l-CNN as a regularization term of the loss function, and the resulting model was named PSCNN. Moreover, multiple-way data augmentation was employed to avoid overfitting, and Grad-CAM was used to locate lung lesions. Results: The mean and standard deviation values of the seven measures of our model were 95.28 ± 1.03 (sensitivity), 95.78 ± 0.87 (specificity), 95.76 ± 0.86 (precision), 95.53 ± 0.83 (accuracy), 95.52 ± 0.83 (F1 score), 91.7 ± 1.65 (MCC), and 95.52 ± 0.83 (FMI). Conclusion: Our PSCNN outperforms 10 state-of-the-art models. Further, we validate the optimal hyperparameters of the model and demonstrate the effectiveness of PatchShuffle.
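
A minimal sketch of PatchShuffle-style regularization, which randomly permutes the pixels inside each small patch of an image; this follows the general PatchShuffle idea rather than the paper's exact integration into the loss function, and for simplicity one permutation is shared across all patches:

```python
import torch

def patch_shuffle(x, k=2):
    """Shuffle pixels within each non-overlapping k x k patch.
    x: (N, C, H, W) tensor; H and W must be divisible by k.
    Note: the original method draws a permutation per patch; here one
    permutation per call is shared across patches for brevity."""
    n, c, h, w = x.shape
    # Split into k x k patches: (n, c, H/k, k, W/k, k) -> (n, c, H/k, W/k, k*k)
    patches = x.reshape(n, c, h // k, k, w // k, k)
    patches = patches.permute(0, 1, 2, 4, 3, 5).reshape(n, c, h // k, w // k, k * k)
    # Permute the k*k pixels inside every patch
    perm = torch.randperm(k * k)
    patches = patches[..., perm]
    # Reassemble the image
    patches = patches.reshape(n, c, h // k, w // k, k, k)
    return patches.permute(0, 1, 2, 4, 3, 5).reshape(n, c, h, w)

imgs = torch.arange(16.0).reshape(1, 1, 4, 4)
print(patch_shuffle(imgs, k=2))
```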


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5381
Author(s):  
Ananda Ananda ◽  
Kwun Ho Ngan ◽  
Cefa Karabağ ◽  
Aram Ter-Sarkisov ◽  
Eduardo Alonso ◽  
...  

This paper investigates the classification of radiographic images with eleven convolutional neural network (CNN) architectures (GoogleNet, VGG-19, AlexNet, SqueezeNet, ResNet-18, Inception-v3, ResNet-50, VGG-16, ResNet-101, DenseNet-201 and Inception-ResNet-v2). The CNNs were used to classify a series of wrist radiographs from the Stanford Musculoskeletal Radiographs (MURA) dataset into two classes, normal and abnormal. The architectures were compared across different hyper-parameters in terms of accuracy and Cohen's kappa coefficient. The two best results were then explored with data augmentation. Without augmentation, the best results were obtained by Inception-ResNet-v2 (mean accuracy = 0.723, mean kappa = 0.506); with augmentation, these improved significantly (mean accuracy = 0.857, mean kappa = 0.703). Finally, Class Activation Mapping was applied to relate network activations to the locations of anomalies in the radiographs.
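
For reference, a minimal sketch of Cohen's kappa coefficient for a binary normal/abnormal classifier, using its standard definition (not the authors' code):

```python
import numpy as np

def cohens_kappa(y_true, y_pred):
    """Agreement between labels corrected for chance agreement."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    po = np.mean(y_true == y_pred)                 # observed agreement
    pe = 0.0                                       # expected chance agreement
    for cls in np.unique(np.concatenate([y_true, y_pred])):
        pe += np.mean(y_true == cls) * np.mean(y_pred == cls)
    return (po - pe) / (1.0 - pe)

y_true = [0, 0, 1, 1, 1, 0, 1, 0]   # 0 = normal, 1 = abnormal
y_pred = [0, 1, 1, 1, 0, 0, 1, 0]
print(cohens_kappa(y_true, y_pred))  # same result as sklearn.metrics.cohen_kappa_score
```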


2019 ◽  
Vol 60 (5) ◽  
pp. 586-594 ◽  
Author(s):  
Iori Sumida ◽  
Taiki Magome ◽  
Hideki Kitamori ◽  
Indra J Das ◽  
Hajime Yamaguchi ◽  
...  

Abstract This study aims to produce non-contrast computed tomography (CT) images from contrast-enhanced images using a deep convolutional neural network (CNN). Twenty-nine patients were selected. CT images were acquired without and with a contrast-enhancement medium. The transverse images were divided into 64 × 64 pixel patches, yielding 14,723 patches in total for the paired non-contrast and contrast-enhanced CT images. The proposed CNN model comprises five two-dimensional (2D) convolution layers with one shortcut path. For comparison, the U-net model, which comprises five 2D convolution layers interleaved with pooling and unpooling layers, was used. Training was performed on 24 patients, and the remaining five patients were used to test the trained models. For quantitative evaluation, 50 regions of interest (ROIs) were selected on the reference contrast-enhanced images of the test data, and the mean pixel value of each ROI was calculated. The mean pixel values of the ROIs at the same locations on the reference non-contrast images and the predicted non-contrast images were calculated and compared. In the quantitative analysis, the difference in mean pixel value between the reference contrast-enhanced images and the predicted non-contrast images was significant (P < 0.0001) for both models. Comparing the reference non-contrast images with the predicted non-contrast images, significant differences in pixel values (P < 0.0001) were found for the U-net model, whereas no significant difference was found for the proposed CNN model. Using the proposed CNN model, the contrast-enhanced regions were satisfactorily suppressed.
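
A minimal sketch of a five-layer 2D convolutional network with a single shortcut path, in the spirit of the described model; the channel widths and the placement of the shortcut are assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class ShortcutCNN(nn.Module):
    """Five conv layers; the input is added back before the final layer,
    so the network learns the residual (contrast) component to remove."""
    def __init__(self):
        super().__init__()
        conv = lambda cin, cout: nn.Sequential(
            nn.Conv2d(cin, cout, kernel_size=3, padding=1), nn.ReLU(inplace=True))
        self.c1 = conv(1, 32)
        self.c2 = conv(32, 64)
        self.c3 = conv(64, 64)
        self.c4 = conv(64, 32)
        self.c5 = nn.Conv2d(32, 1, kernel_size=3, padding=1)

    def forward(self, x):
        h = self.c4(self.c3(self.c2(self.c1(x))))
        return self.c5(h) + x            # shortcut path from the input

net = ShortcutCNN()
patch = torch.randn(8, 1, 64, 64)        # 64 x 64 CT patches, one channel
print(net(patch).shape)                  # torch.Size([8, 1, 64, 64])
```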


Buildings ◽  
2021 ◽  
Vol 11 (10) ◽  
pp. 463
Author(s):  
Yoonsoo Shin ◽  
Sekojae Heo ◽  
Sehee Han ◽  
Junhee Kim ◽  
Seunguk Na

Conventionally, the number of steel rebars at construction sites is counted manually by workers, a practice that is slow, labor-intensive, and error-prone. Consequently, a method of quickly and accurately counting steel rebars with minimal labor is needed to enhance work efficiency and reduce labor costs at construction sites. In this study, the authors developed an automated system to estimate the size and count of steel rebars in bale packing using computer vision techniques based on a convolutional neural network (CNN). A dataset containing 622 images of rebars, with a total of 186,522 rebar cross sections and 409 poly tags, was established for segmenting rebars and poly tags in images. The images were collected at full HD resolution (1920 × 1080 pixels) and then center-cropped to 512 × 512 pixels. Moreover, data augmentation was carried out to create 4668 images for the training dataset. Based on the training dataset, a YOLACT-based steel rebar size estimation and counting model, with Box and Mask mAP over 30, was generated to satisfy the aim of this study. The proposed method, a CNN model combined with homography, can estimate the size and count the number of steel rebars in an image quickly and accurately, and it can be applied to real construction sites to efficiently manage steel rebar stock.
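
A minimal sketch of the homography step, mapping pixel coordinates to physical millimetres from four reference points of known position; the reference geometry and measurements below are hypothetical, not values from the paper.

```python
import cv2
import numpy as np

# Four points of known physical position (e.g. corners of a bale tag),
# given in image pixels and in millimetres. Values are hypothetical.
px_pts = np.float32([[102, 48], [410, 52], [405, 300], [98, 296]])   # image pixels
mm_pts = np.float32([[0, 0], [200, 0], [200, 160], [0, 160]])        # millimetres

H, _ = cv2.findHomography(px_pts, mm_pts)

def pixel_to_mm(points_px):
    """Map pixel coordinates to the physical plane via the homography."""
    pts = np.float32(points_px).reshape(-1, 1, 2)
    return cv2.perspectiveTransform(pts, H).reshape(-1, 2)

# Estimate a rebar diameter from two edge points of its cross section
edge_a, edge_b = pixel_to_mm([[220, 140], [236, 142]])
print("diameter ~ %.1f mm" % np.linalg.norm(edge_a - edge_b))
```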


Author(s):  
Yu Hwa Pan ◽  
His Kuei Lin ◽  
Jerry C-Y Lin ◽  
Yung-Szu Hsu ◽  
Yi-Fan Wu ◽  
...  

Objective: To describe remodeling of the mesial and distal marginal bone levels around platform-switched (PS) and platform-matched (PM) dental implants, sandblasted with large grit and acid-etched, over a three-year period. Materials and Methods: Digital periapical radiographs were obtained at the following time points: during Stage I surgical placement of the dental implants, before loading, immediately after loading (baseline), and one, three, six, 12, and 36 months after loading, for measuring the horizontal and vertical marginal bone levels. Results: Sixty implants remained successfully osseointegrated throughout the observation period. Vertical marginal bone levels for the PS and PM dental implants were 0.78 ± 0.77 and 0.98 ± 0.81 mm, respectively, whereas the horizontal marginal bone levels for the PS and PM implants were 0.84 ± 0.45 and 0.98 ± 0.68 mm, respectively. From before the procedure until 36 months after the procedure, the change in average vertical marginal bone level was 0.96 ± 1.28 mm in the PS group and 0.30 ± 1.15 mm in the PM group (p < 0.05). The mean horizontal marginal bone levels also increased by 0.48 ± 1.01 mm in the PS group and 0.37 ± 0.77 mm in the PM group from before loading until 36 months after the procedure; however, these increases were not statistically significant (p > 0.05). Conclusion: PS dental implants appeared more effective than PM implants in minimizing mean vertical and horizontal marginal bone loss over the three-year period. Regardless of the abutment connection used, the dental implants in the present retrospective investigation exhibited minimal marginal bone remodeling, indicating long-term stability.


2020 ◽  
Author(s):  
Sohaib Asif ◽  
Kamran Amjad

Abstract: The global pandemic of the novel coronavirus that started in Wuhan, China has affected more than 50 million people worldwide and caused more than 1,263,787 deaths. To date, the COVID-19 virus is still spreading and affecting thousands of people. The main problem with testing for COVID-19 is that very few test kits are available for a large number of affected or suspected individuals, which motivates automatic detection systems based on artificial intelligence. Deep learning is one of the most powerful AI tools available, so we propose a convolutional neural network to detect COVID-19-positive patients from chest radiographs. According to previous studies, chest X-rays of COVID-19-positive patients show characteristic features, making this a reliable testing method, because X-ray examination of suspected patients is easier than RT-PCR. Our model was trained on 820 chest radiographic images (excluding data augmentation) collected from three databases, achieving a classification accuracy of 99.45% (training accuracy of 99.70%), a sensitivity of 99.30%, and a specificity of 99.40%, demonstrating that our model is a reliable COVID-19 detector.
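
For reference, a minimal sketch of how the reported sensitivity, specificity, and accuracy follow from a binary confusion matrix; standard definitions with hypothetical counts, not the paper's data.

```python
def summarize(tp, fp, tn, fn):
    """Compute standard screening metrics from confusion-matrix counts."""
    sensitivity = tp / (tp + fn)   # true positive rate (recall)
    specificity = tn / (tn + fp)   # true negative rate
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    return sensitivity, specificity, accuracy

# Hypothetical counts for illustration only
sens, spec, acc = summarize(tp=142, fp=1, tn=165, fn=1)
print(f"sensitivity={sens:.4f} specificity={spec:.4f} accuracy={acc:.4f}")
```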

