Predicting the Debonding of CAD/CAM Composite Resin Crowns with AI

2019 ◽  
Vol 98 (11) ◽  
pp. 1234-1238 ◽  
Author(s):  
S. Yamaguchi ◽  
C. Lee ◽  
O. Karaer ◽  
S. Ban ◽  
A. Mine ◽  
...  

A preventive measure for the debonding of computer-aided design/computer-aided manufacturing (CAD/CAM) composite resin (CR) crowns has not been established and is highly desirable to improve their survival rate. The aim of this study was to assess the usefulness of deep learning with a convolutional neural network (CNN) to predict the debonding probability of CAD/CAM CR crowns from 2-dimensional images captured from 3-dimensional (3D) stereolithography models of dies scanned with a 3D oral scanner. All CAD/CAM CR crowns were manufactured from April 2014 to November 2015 at the Division of Prosthodontics, Osaka University Dental Hospital (Ethical Review Board at Osaka University, approval H27-E11). The data set consisted of a total of 24 cases: 12 trouble-free and 12 debonding, used as known labels. A total of 8,640 images were randomly divided into 6,480 training and validation images and 2,160 test images. Deep learning with a CNN was conducted to develop a model to predict the debonding probability. The prediction accuracy, precision, recall, F-measure, receiver operating characteristic, and area under the curve (AUC) of the model were assessed on the test images, and the mean calculation time during prediction was measured. The prediction accuracy, precision, recall, and F-measure for the prediction of the debonding probability were 98.5%, 97.0%, 100%, and 0.985, respectively. The mean calculation time was 2 ms/step for the 2,160 test images, and the AUC was 0.998. Artificial intelligence (AI) technology—the deep learning CNN established in this study—demonstrated considerably good performance in predicting the debonding probability of a CAD/CAM CR crown from 3D stereolithography models of dies scanned from patients.
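The abstract reports accuracy, precision, recall, and F-measure on the held-out test images. As a minimal illustrative sketch (not the authors' code, and with toy labels rather than the study's data), these four metrics can be computed from binary predictions as follows:

```python
# Hedged sketch: computing accuracy, precision, recall, and F-measure
# for a binary classifier. Labels below are illustrative toy values.

def binary_metrics(y_true, y_pred, positive=1):
    """Return (accuracy, precision, recall, f_measure) for binary labels."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p != positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    accuracy = (tp + tn) / len(y_true)
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f_measure = (2 * precision * recall / (precision + recall)
                 if precision + recall else 0.0)
    return accuracy, precision, recall, f_measure

# Toy example: 1 = debonding, 0 = trouble-free.
y_true = [1, 1, 1, 1, 0, 0, 0, 0]
y_pred = [1, 1, 1, 1, 1, 0, 0, 0]
acc, prec, rec, f1 = binary_metrics(y_true, y_pred)
```

Note that the study's reported recall of 100% with precision of 97.0% corresponds to a model that misses no debonding cases at the cost of a few false alarms, which is the preferred trade-off for a preventive screening tool.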

Author(s):  
Yongfeng Gao ◽  
Jiaxing Tan ◽  
Zhengrong Liang ◽  
Lihong Li ◽  
Yumei Huo

Abstract Computer-aided detection (CADe) of pulmonary nodules plays an important role in assisting radiologists' diagnosis and alleviating the interpretation burden for lung cancer. Current CADe systems, which aim to simulate radiologists' examination procedure, are built upon computed tomography (CT) images, with feature extraction for detection and diagnosis. The CT image perceived by human readers is reconstructed from the sinogram, the original raw data acquired from the CT scanner. In this work, departing from the conventional image-based CADe system, we propose a novel sinogram-based CADe system in which the full projection information is used to explore additional effective features of nodules in the sinogram domain. Facing the challenges of limited research on this concept and unknown effective features in the sinogram domain, we design a new CADe system that uses the self-learning power of the convolutional neural network to learn and extract effective features from the sinogram. The proposed system was validated on 208 patient cases from the publicly available Lung Image Database Consortium database, with each case having at least one juxtapleural nodule annotation. Experimental results demonstrated that our proposed method obtained an area under the receiver operating characteristic curve (AUC) of 0.91 based on the sinogram alone, compared with 0.89 based on the CT image alone. Moreover, a combination of sinogram and CT image further improved the AUC to 0.92. This study indicates that pulmonary nodule detection in the sinogram domain is feasible with deep learning.
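The abstract compares an image-domain AUC, a sinogram-domain AUC, and a combination of the two. The paper does not give its fusion code; as a hedged sketch, a simple late-fusion scheme (averaging the two classifiers' scores) and a rank-based AUC computation could look like this, with all scores being toy values:

```python
# Hedged sketch: AUC via the Mann-Whitney U statistic, plus a simple
# late fusion of image-domain and sinogram-domain scores. All values
# are illustrative, not the study's data.

def auc(y_true, scores):
    """Area under the ROC curve from labels (1/0) and continuous scores."""
    pos = [s for t, s in zip(y_true, scores) if t == 1]
    neg = [s for t, s in zip(y_true, scores) if t == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Toy scores: 1 = nodule present, 0 = absent.
y = [1, 1, 0, 0]
ct_scores = [0.9, 0.4, 0.6, 0.1]
sino_scores = [0.7, 0.8, 0.3, 0.2]
fused = [(a + b) / 2 for a, b in zip(ct_scores, sino_scores)]

auc_ct = auc(y, ct_scores)       # image-domain classifier alone
auc_sino = auc(y, sino_scores)   # sinogram-domain classifier alone
auc_fused = auc(y, fused)        # late fusion of the two
```

In the toy data the fused score set ranks every positive above every negative even though the CT scores alone do not, mirroring the paper's finding that combining the two domains can raise the AUC.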


2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model can predict tumor response to neoadjuvant therapy. Methods Patients with pancreatic cancer receiving neoadjuvant therapy prior to pancreatoduodenectomy were identified between November 2009 and January 2018. College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to create a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a 10% decrease in carbohydrate antigen 19-9 (CA19-9). Accuracy was determined by area under the curve. Results A total of 81 patients were included in the study. Patients were divided between PR (333 images) and NR (443 images). The pure model had an area under the curve (AUC) of .738 (P < .001), whereas the hybrid model had an AUC of .785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of .564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy for patients with pancreatic adenocarcinoma, and the model is improved by incorporating decreases in serum CA19-9. Further model development is needed before clinical application.


2020 ◽  
Vol 11 (12) ◽  
pp. 3615-3622 ◽  
Author(s):  
Lei Cong ◽  
Wanbing Feng ◽  
Zhigang Yao ◽  
Xiaoming Zhou ◽  
Wei Xiao

BMJ Open ◽  
2020 ◽  
Vol 10 (9) ◽  
pp. e036423
Author(s):  
Zhigang Song ◽  
Chunkai Yu ◽  
Shuangmei Zou ◽  
Wenmiao Wang ◽  
Yong Huang ◽  
...  

Objectives The microscopic evaluation of slides has been gradually moving towards all-digital in recent years, opening the possibility of computer-aided diagnosis. It is worthwhile to know the similarities between deep learning models and pathologists before we put them into practical scenarios. The simple criteria of colorectal adenoma diagnosis make it a perfect testbed for this study. Design The deep learning model was trained on 177 accurately labelled training slides (156 with adenoma). The detailed labelling was performed on a self-developed iPad-based annotation system. We built the model on DeepLab v2 with ResNet-34. Model performance was tested on 194 test slides and compared with five pathologists. Furthermore, the generalisation ability of the model was tested on an additional 168 slides (111 with adenoma) collected from two other hospitals. Results The deep learning model achieved an area under the curve of 0.92 and obtained a slide-level accuracy of over 90% on slides from the two other hospitals. The performance was on par with that of experienced pathologists, exceeding the average pathologist. By investigating the feature maps and the cases misdiagnosed by the model, we found concordance between the diagnostic reasoning of the deep learning model and that of pathologists. Conclusions The deep learning model for colorectal adenoma diagnosis behaves much like a pathologist: its performance is on par with pathologists', it makes similar mistakes, and it learns rational diagnostic logic. Meanwhile, it obtains high accuracy on slides collected from different hospitals with significant variations in staining configuration.
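A segmentation model like DeepLab v2 produces patch- or pixel-level adenoma probabilities, which must be aggregated into the slide-level diagnosis the abstract reports accuracy for. The paper's aggregation rule is not given; a common and minimal sketch (thresholds below are illustrative assumptions) flags a slide when the fraction of suspicious patches exceeds a small area threshold:

```python
# Hedged sketch: aggregating patch-level adenoma probabilities into a
# slide-level call. Both thresholds are illustrative assumptions, not
# the study's protocol.

def slide_diagnosis(patch_probs, patch_threshold=0.5, area_threshold=0.01):
    """Return True (adenoma) if enough patches exceed the probability cut."""
    flagged = sum(p >= patch_threshold for p in patch_probs)
    return flagged / len(patch_probs) >= area_threshold

# A slide with one confident adenoma patch among four is flagged ...
pos_slide = slide_diagnosis([0.9, 0.1, 0.1, 0.1])
# ... while a slide of uniformly low probabilities is not.
neg_slide = slide_diagnosis([0.1] * 100)
```

A low area threshold favours sensitivity, since a single small adenoma focus should already trigger review, which matches how pathologists screen slides.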


2019 ◽  
Vol 9 (22) ◽  
pp. 4928
Author(s):  
Jeong Han ◽  
Soon Hwang

Computer-aided design/computer-aided manufacturing (CAD/CAM)-based maxillary templates can transfer a surgical plan accurately only when the template is positioned correctly. Our study aimed to evaluate the positioning accuracy of the CAD/CAM-based template for maxillary orthognathic surgery using dry skulls. After reconstruction of a three-dimensional (3D) virtual skull model, a surface-based surgical template for Le Fort I osteotomy was designed and fabricated using CAD/CAM and 3D printing technology. To determine accuracy, the deviation of the template between the planned and the actual position and the fitness of the template were evaluated. The mean deviation was 0.41 ± 0.30 mm in the medio-lateral direction, 0.55 ± 0.59 mm in the antero-posterior direction, and 0.69 ± 0.59 mm in the supero-inferior direction. The root mean square deviation between the planned and the actual position of the template was 1.21 ± 0.54 mm. With respect to the fitness of the template, the mean distance between the inner surface of the template and the underlying bone surface was 0.76 ± 0.24 mm. CAD/CAM-based templates showed precise positioning and good fitness. These results suggest that surface topography-based CAD/CAM templates can be considered an alternative to traditional intermediate splints for the transfer of surgical plans.
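The study reports per-axis mean deviations and a root mean square deviation (RMSD) between planned and actual template positions. As a brief sketch of the arithmetic (with made-up landmark coordinates, not the study's measurements), RMSD over paired 3D landmarks is:

```python
# Hedged sketch: RMSD between planned and actual 3D landmark positions
# in mm. Coordinates are illustrative, not the study's data.
import math

def rmsd(planned, actual):
    """Root mean square of per-landmark Euclidean deviations."""
    sq = [sum((p - a) ** 2 for p, a in zip(pp, aa))
          for pp, aa in zip(planned, actual)]
    return math.sqrt(sum(sq) / len(sq))

planned = [(0.0, 0.0, 0.0), (1.0, 1.0, 1.0)]
actual = [(1.0, 0.0, 0.0), (1.0, 1.0, 3.0)]
dev = rmsd(planned, actual)
```

Because RMSD combines all three axes and squares each deviation, it is always at least as large as any single-axis mean deviation, which is why the reported 1.21 mm RMSD exceeds the individual 0.41-0.69 mm axis values.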


Author(s):  
Zahra Khamverdi ◽  
Elmira Najafrad ◽  
Maryam Farhadian

Objectives: Marginal and internal fit of restorations are two important clinical factors for assessing the quality and durability of computer-aided design/computer-aided manufacturing (CAD/CAM)-fabricated monolithic zirconia restorations. The purpose of this study was to evaluate the marginal and internal fit of CAD/CAM zirconia crowns made with two different scanners (i3D scanner and 3Shape D700). Materials and Methods: Twelve extracted sound human posterior teeth were prepared for full zirconia crowns. Two extraoral scanners, namely the i3D scanner and the 3Shape D700, were used to digitize type IV gypsum casts poured from impressions. The crowns were milled from presintered monolithic zirconia blocks by a 5-axis milling machine. The replica technique and MIP4 microscopic image analysis software were utilized to measure the marginal and internal fit under a stereomicroscope at ×40 magnification. The collected data were analyzed by paired t-test. Results: The mean marginal gap was 203.62 μm with the 3Shape D700 scanner and 241.07 μm with the i3D scanner. The mean internal gap was 192.30 μm with the 3Shape D700 scanner and 196.06 μm with the i3D scanner. The paired t-test indicated a statistically significant difference between the two scanners in marginal fit (P=0.04), while there was no statistically significant difference in internal fit (P=0.761). Conclusion: Within the limitations of this study, the results showed that the type of extraoral scanner affected the marginal fit of CAD/CAM-fabricated crowns; however, it did not have a significant effect on their internal fit.
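The comparison between the two scanners uses a paired t-test, which is appropriate here because each tooth yields one measurement per scanner. As a minimal sketch of the test statistic (with toy gap measurements, not the study's data), the paired t value is the mean of the per-specimen differences divided by the standard error of those differences:

```python
# Hedged sketch: paired t statistic from per-specimen differences.
# Gap values are illustrative, not the study's measurements.
import math
from statistics import mean, stdev

def paired_t(x, y):
    """Paired t statistic for two equal-length samples of measurements."""
    d = [a - b for a, b in zip(x, y)]  # per-specimen differences
    n = len(d)
    return mean(d) / (stdev(d) / math.sqrt(n))

# Toy marginal gaps (um) for the same four dies under two scanners.
scanner_a = [1.0, 2.0, 3.0, 4.0]
scanner_b = [0.0, 1.0, 1.0, 2.0]
t_stat = paired_t(scanner_a, scanner_b)
```

The resulting t statistic would then be compared against a t distribution with n−1 degrees of freedom to obtain the P value; in practice one would use a statistics library (e.g. `scipy.stats.ttest_rel`) for that step.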


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Gang Yu ◽  
Kai Sun ◽  
Chao Xu ◽  
Xing-Hua Shi ◽  
Chong Wu ◽  
...  

Abstract Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean teacher architecture using 13,111 whole-slide images of colorectal cancer from 8803 subjects from 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) in patch-level diagnoses (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnoses (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to human pathologists (average AUC: 0.969). Evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and has great potential for building expert-level pathological artificial intelligence platforms in practice.
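The mean teacher architecture named in the abstract maintains a "teacher" copy of the network whose weights are an exponential moving average (EMA) of the "student's" weights; the student learns from labels plus a consistency loss against the teacher. A minimal sketch of the EMA weight update at the heart of the method (plain lists of weights stand in for network parameters):

```python
# Hedged sketch: the exponential-moving-average teacher update used by
# the mean teacher architecture. Plain floats stand in for the real
# parameter tensors; alpha is the EMA decay hyperparameter.

def ema_update(teacher, student, alpha=0.99):
    """Return teacher weights nudged toward the student's weights."""
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]

# One update step with a deliberately small alpha to make the pull visible.
new_teacher = ema_update([1.0], [0.0], alpha=0.9)
```

Because the teacher averages the student over many steps, its predictions are smoother, and penalising student-teacher disagreement on the ~40,000 unlabeled patches is what lets SSL match SL trained on ~44,100 labels.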


2020 ◽  
Author(s):  
Jinyou Chai ◽  
Xiaoqian Liu ◽  
Ramona Schweyen ◽  
Jürgen Setz ◽  
Shaoxia Pan ◽  
...  

Abstract Background To evaluate the accuracy of a computer-aided design and computer-aided manufacturing (CAD-CAM) surgical guide for implant placement in edentulous jaws. Methods Nine patients with twelve edentulous jaws seeking implants were recruited. Radiographic guides with diagnostic templates were fabricated from try-in waxup dentures. Planning software (Organical® Dental Implant, Berlin, Germany) was used to virtually design the implant positions, and the radiographic templates were converted into surgical guides using computer numerical control (CNC) milling. Following the guided implant surgery protocol, forty-four implants were placed into the twelve edentulous jaws. Cone-beam computed tomography (CBCT) scans were performed post-operatively for each jaw, and the deviations between the planned and actual implant positions were measured. Results All 44 implants survived, and no severe haematomas, nerve injuries or unexpected sinus perforations occurred. The mean three-dimensional linear deviation of implant position between virtual planning and actual placement was 1.53 ± 0.48 mm at the implant neck and 1.58 ± 0.4 mm at the apex. The angular deviation was 3.96 ± 3.05 degrees. The mean deviation between virtual and actual implant position was significantly smaller in the maxilla than in the mandible. No significant differences were found in the deviation of implant position between cases with and without anchor pins. Conclusions The guides fabricated using the CAD-CAM CNC milling technique provided accuracy comparable to that of guides fabricated by stereolithography. Displacement of the guides on the edentulous arch might be the main contributor to deviation. Trial registration: Chinese Clinical Trial Registry, ChiCTR-ONC-17014159
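The angular deviation reported above is the angle between the planned and the actually placed implant axis. As a short sketch of that geometry (with illustrative axis vectors, not the study's CBCT measurements):

```python
# Hedged sketch: angular deviation between two 3D implant axis vectors,
# in degrees. Vectors are illustrative, not measured data.
import math

def angular_deviation(axis_a, axis_b):
    """Angle between two direction vectors via the normalized dot product."""
    dot = sum(a * b for a, b in zip(axis_a, axis_b))
    na = math.sqrt(sum(a * a for a in axis_a))
    nb = math.sqrt(sum(b * b for b in axis_b))
    # Clamp to [-1, 1] to guard against floating-point drift in acos.
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / (na * nb)))))

# A planned vertical axis vs. an implant tilted 45 degrees buccally.
ang = angular_deviation((0.0, 0.0, 1.0), (0.0, 1.0, 1.0))
```

The linear deviations at the neck and apex are plain Euclidean distances between the corresponding planned and placed points; together with the angle, these three numbers fully characterise the placement error of each implant.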


2020 ◽  
Author(s):  
Chih-Min Liu ◽  
Chien-Liang Liu ◽  
Kai-Wen Hu ◽  
Vincent S. Tseng ◽  
Shih-Lin Chang ◽  
...  

BACKGROUND Brugada syndrome is a rare inherited arrhythmia with a unique electrocardiogram (ECG) pattern (type 1 Brugada ECG pattern) and a major cause of sudden cardiac death in young people. Automatic screening for the ECG pattern of Brugada syndrome by a deep learning model offers the chance to identify these patients early, allowing them to receive life-saving therapy. OBJECTIVE To develop a deep learning-enabled ECG model for diagnosing Brugada syndrome. METHODS A total of 276 ECGs with a type 1 Brugada ECG pattern, together with another 276 randomly retrieved non-Brugada ECGs for one-to-one allocation, were extracted from the hospital-based ECG database for a two-stage analysis with a deep learning model. We first trained the network to identify the right bundle branch block (RBBB) pattern, and then transferred the first-stage learning to the second task of diagnosing the type 1 Brugada ECG pattern. The diagnostic performance of the deep learning model was compared to that of board-certified practicing cardiologists. The model was also validated on independent international ECG data. RESULTS The area under the curve (AUC) of the deep learning model in diagnosing the type 1 Brugada ECG pattern was 0.96 (sensitivity: 88.4%, specificity: 89.1%). The sensitivity and specificity of the cardiologists were 62.7±17.8% and 98.5±3.0%, respectively. The diagnoses by the deep learning model were highly consistent with the standard diagnoses (Kappa coefficient: 0.78, McNemar test, P = 0.86). However, the diagnoses by the cardiologists were significantly different from the standard diagnoses, with only moderate consistency (Kappa coefficient: 0.60, McNemar test, P = 2.35×10⁻²²). For the international validation, the AUC of the deep learning model for diagnosing the type 1 Brugada ECG pattern was 0.99 (sensitivity: 85.7%, specificity: 100.0%).
CONCLUSIONS The deep learning-enabled ECG model for diagnosing Brugada syndrome is a robust screening tool with better diagnostic sensitivity than that of cardiologists.
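The abstract's agreement analysis uses Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal sketch of the computation (with toy diagnosis lists, not the study's ratings):

```python
# Hedged sketch: Cohen's kappa between two raters' categorical labels.
# The label lists are illustrative toy data.

def cohens_kappa(a, b):
    """Chance-corrected agreement between two equal-length label lists."""
    n = len(a)
    p_observed = sum(x == y for x, y in zip(a, b)) / n
    labels = set(a) | set(b)
    # Chance agreement: product of each rater's marginal label frequencies.
    p_expected = sum((a.count(l) / n) * (b.count(l) / n) for l in labels)
    return (p_observed - p_expected) / (1 - p_expected)

# Two raters agreeing on 3 of 4 cases (1 = type 1 Brugada, 0 = not).
k = cohens_kappa([1, 1, 0, 0], [1, 1, 0, 1])
```

On the study's data this statistic yields 0.78 for the model versus 0.60 for the cardiologists, i.e. the model's calls track the standard diagnoses more closely after discounting chance agreement.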

