Deep-learning for predicting C-shaped canals in mandibular second molars on panoramic radiographs

2021 ◽  
pp. 20200513
Author(s):  
Su-Jin Jeon ◽  
Jong-Pil Yun ◽  
Han-Gyeol Yeom ◽  
Woo-Sang Shin ◽  
Jong-Hyun Lee ◽  
...  

Objective: The aim of this study was to evaluate the use of a convolutional neural network (CNN) system for predicting C-shaped canals in mandibular second molars on panoramic radiographs. Methods: Panoramic and cone beam CT (CBCT) images obtained from June 2018 to May 2020 were screened and 1020 patients were selected. Our dataset of 2040 sound mandibular second molars comprised 887 C-shaped canals and 1153 non-C-shaped canals. To confirm the presence of a C-shaped canal, CBCT images were analyzed by a radiologist and set as the gold standard. A CNN-based deep-learning model for predicting C-shaped canals was built using Xception. The data were split into training and test sets at a ratio of 80% to 20%, respectively. Diagnostic performance was evaluated using accuracy, sensitivity, specificity, and precision. Receiver operating characteristic (ROC) curves were drawn, and area under the curve (AUC) values were calculated. Further, gradient-weighted class activation maps (Grad-CAM) were generated to localize the anatomy that contributed to the predictions. Results: The accuracy, sensitivity, specificity, and precision of the CNN model were 95.1, 92.7, 97.0, and 95.9%, respectively. Grad-CAM analysis showed that the CNN model mainly identified root canal shapes converging into the apex to predict C-shaped canals, while the root furcation was predominantly used for predicting non-C-shaped canals. Conclusions: The deep-learning system achieved high accuracy in predicting C-shaped canals of mandibular second molars on panoramic radiographs.
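As a rough illustration of the approach described above, the sketch below builds an Xception-based binary classifier in Keras and adds a small Grad-CAM routine. The preprocessing, layer sizes, training schedule, and the grad_cam helper are assumptions for illustration and do not reproduce the authors' configuration.

```python
# Minimal sketch (not the authors' code): an Xception-based binary classifier
# for panoramic-radiograph crops, with a simple Grad-CAM heat-map routine.
import tensorflow as tf
from tensorflow.keras import layers, models

def build_model(input_shape=(299, 299, 3)):
    base = tf.keras.applications.Xception(
        include_top=False, weights="imagenet", input_shape=input_shape)
    base.trainable = False  # optionally fine-tune later
    x = layers.GlobalAveragePooling2D()(base.output)
    x = layers.Dropout(0.3)(x)
    out = layers.Dense(1, activation="sigmoid")(x)  # C-shaped vs non-C-shaped
    return models.Model(base.input, out)

model = build_model()
model.compile(optimizer="adam", loss="binary_crossentropy",
              metrics=["accuracy", tf.keras.metrics.AUC(name="auc")])
# model.fit(train_ds, validation_data=val_ds, epochs=20)  # 80%/20% split as in the study

def grad_cam(model, image, last_conv_name="block14_sepconv2_act"):
    """Normalized class-activation map for one preprocessed image (H, W, 3)."""
    conv_layer = model.get_layer(last_conv_name)  # last conv activation in Keras' Xception
    grad_model = models.Model(model.input, [conv_layer.output, model.output])
    img = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    with tf.GradientTape() as tape:
        tape.watch(img)                            # record the full forward path
        conv_out, pred = grad_model(img)
        loss = pred[:, 0]
    grads = tape.gradient(loss, conv_out)
    weights = tf.reduce_mean(grads, axis=(1, 2))   # channel importances
    cam = tf.nn.relu(tf.einsum("bhwc,bc->bhw", conv_out, weights))
    return (cam / (tf.reduce_max(cam) + 1e-8))[0].numpy()
```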

2020 ◽  
Vol 10 (13) ◽  
pp. 4640 ◽  
Author(s):  
Javier Civit-Masot ◽  
Francisco Luna-Perejón ◽  
Manuel Domínguez Morales ◽  
Anton Civit

The spread of the SARS-CoV-2 virus has made COVID-19 a worldwide epidemic. The most common tests to identify COVID-19 are invasive, time-consuming and limited in resources. Imaging is a non-invasive technique for identifying whether individuals show signs of disease in their lungs. However, diagnosis by this method needs to be made by a specialist doctor, which limits mass diagnosis of the population. Image-processing tools that support diagnosis can reduce this load by ruling out negative cases. Advanced artificial intelligence techniques such as deep learning have shown high effectiveness in identifying patterns such as those found in diseased tissue. This study analyzes the effectiveness of a VGG16-based deep learning model for the identification of pneumonia and COVID-19 using torso radiographs. Results show high sensitivity in the identification of COVID-19, around 100%, together with a high degree of specificity, indicating that the model could be used as a screening test. AUCs on ROC curves are greater than 0.9 for all classes considered.
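The abstract reports per-class ROC AUCs for a multi-class radiograph classifier; the short sketch below shows one conventional way to compute such one-vs-rest AUCs from softmax outputs with scikit-learn. The class names, labels and probabilities are made-up placeholders, not study data.

```python
# Illustrative sketch (not the authors' code): per-class ROC AUC for a
# three-way chest-radiograph classifier, computed one-vs-rest from softmax outputs.
import numpy as np
from sklearn.metrics import roc_auc_score
from sklearn.preprocessing import label_binarize

classes = ["normal", "pneumonia", "covid19"]
y_true = np.array([0, 2, 1, 2, 0, 1, 2, 0])          # hypothetical ground-truth labels
y_prob = np.array([[0.7, 0.2, 0.1],                   # hypothetical softmax outputs
                   [0.1, 0.2, 0.7],
                   [0.2, 0.6, 0.2],
                   [0.1, 0.1, 0.8],
                   [0.8, 0.1, 0.1],
                   [0.3, 0.5, 0.2],
                   [0.2, 0.2, 0.6],
                   [0.6, 0.3, 0.1]])

y_bin = label_binarize(y_true, classes=[0, 1, 2])     # one column per class
for i, name in enumerate(classes):
    auc = roc_auc_score(y_bin[:, i], y_prob[:, i])    # one-vs-rest AUC
    print(f"{name}: AUC = {auc:.3f}")
```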


Endoscopy ◽  
2020 ◽  
Author(s):  
Alanna Ebigbo ◽  
Robert Mendel ◽  
Tobias Rückert ◽  
Laurin Schuster ◽  
Andreas Probst ◽  
...  

Background and aims: The accurate differentiation between T1a and T1b Barrett's cancer has both therapeutic and prognostic implications but is challenging even for experienced physicians. We trained an artificial intelligence (AI) system based on deep artificial neural networks (deep learning) to differentiate between T1a and T1b Barrett's cancer on white-light images. Methods: Endoscopic images from three tertiary care centres in Germany were collected retrospectively. A deep learning system was trained and tested using the principles of cross-validation. A total of 230 white-light endoscopic images (108 T1a and 122 T1b) were evaluated with the AI system. For comparison, the images were also classified by experts specialized in the endoscopic diagnosis and treatment of Barrett's cancer. Results: The sensitivity, specificity, F1 score and accuracy of the AI system in differentiating between T1a and T1b cancer lesions were 0.77, 0.64, 0.73 and 0.71, respectively. There was no statistically significant difference between the performance of the AI system and that of the human experts, whose sensitivity, specificity, F1 score and accuracy were 0.63, 0.78, 0.67 and 0.70, respectively. Conclusion: This pilot study demonstrates the first multicentre application of an AI-based system for predicting submucosal invasion in endoscopic images of Barrett's cancer. The AI system performed on par with international experts in the field, but more work is necessary to improve the system and to apply it to video sequences and real-life settings. Nevertheless, the correct prediction of submucosal invasion in Barrett's cancer remains challenging for both experts and AI.
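To make the evaluation protocol concrete, here is a minimal cross-validation sketch that reports sensitivity, specificity, F1 and accuracy for a binary classifier. A logistic-regression stand-in on synthetic features replaces the deep network, so the printed numbers are meaningless; only the evaluation mechanics are illustrated.

```python
# Minimal sketch, not the study's implementation: cross-validated estimation of
# sensitivity, specificity, F1 and accuracy for a binary (T1a vs T1b) classifier.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold
from sklearn.metrics import confusion_matrix, f1_score, accuracy_score

X, y = make_classification(n_samples=230, n_features=64, random_state=0)  # placeholder features
metrics = []
for train_idx, test_idx in StratifiedKFold(n_splits=5, shuffle=True, random_state=0).split(X, y):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    y_pred = clf.predict(X[test_idx])
    tn, fp, fn, tp = confusion_matrix(y[test_idx], y_pred).ravel()
    metrics.append((tp / (tp + fn),                    # sensitivity
                    tn / (tn + fp),                    # specificity
                    f1_score(y[test_idx], y_pred),
                    accuracy_score(y[test_idx], y_pred)))
print("mean sens/spec/F1/acc:", np.mean(metrics, axis=0))
```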


2021 ◽  
Vol 5 (1) ◽  
Author(s):  
Isabella Castiglioni ◽  
Davide Ippolito ◽  
Matteo Interlenghi ◽  
Caterina Beatrice Monti ◽  
Christian Salvatore ◽  
...  

Abstract Background We aimed to train and test a deep learning classifier to support the diagnosis of coronavirus disease 2019 (COVID-19) using chest x-ray (CXR) on a cohort of subjects from two hospitals in Lombardy, Italy. Methods For training and validation we used an ensemble of ten convolutional neural networks (CNNs) with mainly bedside CXRs of 250 COVID-19 and 250 non-COVID-19 subjects from two hospitals (Centres 1 and 2). We then tested this system on bedside CXRs of an independent group of 110 patients (74 COVID-19, 36 non-COVID-19) from one of the two hospitals. A retrospective reading was performed by two radiologists in the absence of any clinical information, with the aim of differentiating COVID-19 from non-COVID-19 patients. Real-time polymerase chain reaction served as the reference standard. Results At 10-fold cross-validation, our deep learning model classified COVID-19 and non-COVID-19 patients with 0.78 sensitivity (95% confidence interval [CI] 0.74–0.81), 0.82 specificity (95% CI 0.78–0.85), and 0.89 area under the curve (AUC) (95% CI 0.86–0.91). On the independent dataset, deep learning showed 0.80 sensitivity (95% CI 0.72–0.86) (59/74), 0.81 specificity (95% CI 0.73–0.87) (29/36), and 0.81 AUC (95% CI 0.73–0.87). The radiologists' reading obtained 0.63 sensitivity (95% CI 0.52–0.74) and 0.78 specificity (95% CI 0.61–0.90) in Centre 1, and 0.64 sensitivity (95% CI 0.52–0.74) and 0.86 specificity (95% CI 0.71–0.95) in Centre 2. Conclusions This preliminary experience based on ten CNNs trained on a limited training dataset shows the promising potential of deep learning for COVID-19 diagnosis. The tool is being further trained on new CXRs to increase its performance.
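The paper's system combines the outputs of ten separately trained CNNs. The snippet below sketches one simple fusion rule, averaging the networks' predicted probabilities (a soft vote); the exact combination rule used by the authors is not reproduced here, and the stand-in "models" are placeholders.

```python
# Illustrative sketch only: ensembling several independently trained CNNs by
# averaging their predicted probabilities of COVID-19.
import numpy as np

def ensemble_predict(models, images, threshold=0.5):
    """models: list of callables mapping a batch of CXRs to P(COVID-19)."""
    probs = np.stack([m(images) for m in models], axis=0)  # (n_models, n_images)
    mean_prob = probs.mean(axis=0)                          # soft vote
    return mean_prob, (mean_prob >= threshold).astype(int)

# Hypothetical usage with ten stand-in "models" and placeholder images.
fake_models = [lambda x, b=b: np.clip(x.mean(axis=(1, 2)) + b, 0, 1)
               for b in np.linspace(-0.05, 0.05, 10)]
cxr_batch = np.random.rand(4, 224, 224)
probabilities, labels = ensemble_predict(fake_models, cxr_batch)
print(probabilities, labels)
```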


2020 ◽  
pp. 000313482098255
Author(s):  
Michael D. Watson ◽  
Maria R. Baimas-George ◽  
Keith J. Murphy ◽  
Ryan C. Pickens ◽  
David A. Iannitti ◽  
...  

Background Neoadjuvant therapy may improve survival of patients with pancreatic adenocarcinoma; however, determining response to therapy is difficult. Artificial intelligence allows for novel analysis of images. We hypothesized that a deep learning model could predict tumor response to neoadjuvant chemotherapy (NAC). Methods Patients with pancreatic cancer who received neoadjuvant therapy prior to pancreatoduodenectomy were identified between November 2009 and January 2018. College of American Pathologists Tumor Regression Grades 0-2 were defined as pathologic response (PR) and grade 3 as no response (NR). Axial images from preoperative computed tomography scans were used to build a 5-layer convolutional neural network and LeNet deep learning model to predict PR. The hybrid model additionally incorporated a decrease in carbohydrate antigen 19-9 (CA19-9) of 10%. Performance was measured by the area under the curve (AUC). Results A total of 81 patients were included in the study. Patients were divided between PR (333 images) and NR (443 images). The pure model (images only) had an AUC of .738 (P < .001), whereas the hybrid model had an AUC of .785 (P < .001). CA19-9 decrease alone was a poor predictor of response, with an AUC of .564 (P = .096). Conclusions A deep learning model can predict pathologic tumor response to neoadjuvant therapy for patients with pancreatic adenocarcinoma, and the model is improved by incorporating decreases in serum CA19-9. Further model development is needed before clinical application.
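A hedged sketch of how such a hybrid could be wired up, not the authors' architecture: a small CNN over CT slices is fused with a scalar clinical input flagging a 10% CA19-9 decrease. The input resolution, layer sizes, and fusion strategy are illustrative assumptions.

```python
# Sketch of a hybrid image-plus-clinical model (assumed architecture, not the paper's).
import tensorflow as tf
from tensorflow.keras import layers, Model

img_in = layers.Input(shape=(128, 128, 1), name="ct_slice")
x = img_in
for filters in (16, 32, 64, 128, 256):             # five convolutional blocks
    x = layers.Conv2D(filters, 3, activation="relu", padding="same")(x)
    x = layers.MaxPooling2D()(x)
x = layers.GlobalAveragePooling2D()(x)

ca19_in = layers.Input(shape=(1,), name="ca19_9_decrease_10pct")  # 0 or 1 flag
merged = layers.Concatenate()([x, ca19_in])
merged = layers.Dense(64, activation="relu")(merged)
out = layers.Dense(1, activation="sigmoid", name="pathologic_response")(merged)

hybrid = Model([img_in, ca19_in], out)
hybrid.compile(optimizer="adam", loss="binary_crossentropy",
               metrics=[tf.keras.metrics.AUC(name="auc")])
# hybrid.fit([ct_slices, ca19_flags], labels, ...)  # trained per axial image
```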


2020 ◽  
pp. archdischild-2020-320549
Author(s):  
Fang Hu ◽  
Shuai-Jun Guo ◽  
Jian-Jun Lu ◽  
Ning-Xuan Hua ◽  
Yan-Yan Song ◽  
...  

Background Diagnosis of congenital syphilis (CS) is not straightforward and can be challenging. This study aimed to evaluate the validity of an algorithm using timing of maternal antisyphilis treatment and titres of non-treponemal antibody as predictors of CS. Methods Confirmed CS cases and those where CS was excluded were obtained from the Guangzhou Prevention of Mother-to-Child Transmission of syphilis programme between 2011 and 2019. We calculated sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) using receiver operating characteristic (ROC) analysis in two situations: (1) receiving antisyphilis treatment or no treatment during pregnancy and (2) initiating treatment before 28 gestational weeks (GWs), initiating after 28 GWs or receiving no treatment for syphilis-seropositive women. Results Among 1558 syphilis-exposed children, 39 had confirmed CS. Area under the curve, sensitivity and specificity of maternal non-treponemal titres before treatment and treatment during pregnancy were 0.80, 76.9%, 78.7% and 0.79, 69.2%, 88.7%, respectively, for children with CS. For the algorithm, ROC results showed that PPV and NPV for predicting CS were 37.3% and 96.4% (non-treponemal titre cut-off value 1:8 and no antisyphilis treatment), 9.4% and 100% (non-treponemal titre cut-off value 1:16 and treatment after 28 GWs), and 4.2% and 99.5% (non-treponemal titre cut-off value 1:32 and treatment before 28 GWs), respectively. Conclusions An algorithm using maternal non-treponemal titres and timing of treatment during pregnancy could be an effective strategy to diagnose or rule out CS, especially when the rate of loss to follow-up is high or there are no straightforward diagnostic tools.
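The arithmetic behind the reported values is simple 2x2-table algebra; a minimal sketch follows. The counts used are placeholders chosen only to exercise the formulas, not the study's data.

```python
# Minimal sketch of the screening arithmetic: sensitivity, specificity, PPV and
# NPV from a 2x2 table for one rule (e.g. titre >= 1:8 AND no maternal treatment).
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

# Hypothetical counts for illustration only (not the study data).
print(diagnostic_metrics(tp=25, fp=42, fn=14, tn=1477))
```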


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Chiaki Kuwada ◽  
Yoshiko Ariji ◽  
Yoshitaka Kise ◽  
Takuma Funakoshi ◽  
Motoki Fukuda ◽  
...  

Abstract Although panoramic radiography has a role in the examination of patients with cleft alveolus (CA), its appearance is sometimes difficult to interpret. The aims of this study were to develop a computer-aided diagnosis system for diagnosing CA status on panoramic radiographs using a deep learning object detection technique with and without normal data in the learning process, to verify its performance in comparison with human observers, and to clarify some characteristic appearances probably related to the performance. The panoramic radiographs of 383 CA patients with cleft palate (CA with CP) or without cleft palate (CA only) and 210 patients without CA (normal) were used to create two models on DetectNet. Models 1 and 2 were developed based on the data without and with normal subjects, respectively, to detect CAs and classify them as with or without CP. Model 2 reduced the false-positive rate (1/30) compared with Model 1 (12/30). The overall accuracy of Model 2 was higher than that of Model 1 and of the human observers. The model created in this study appears to have the potential to detect and classify CAs on panoramic radiographs and might be useful to assist human observers.
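The false-positive comparison above (1/30 vs 12/30 on normal radiographs) reduces to counting how many object-free images trigger at least one detection. A small, framework-agnostic sketch of that evaluation is given below; the detector interface and threshold are assumptions, not DetectNet's actual API.

```python
# Illustrative sketch: false-positive rate of a cleft-alveolus detector on
# radiographs of normal subjects (fraction of normal images with any detection
# above a confidence threshold).
from typing import Callable, Iterable, List, Tuple

Detection = Tuple[str, float]  # (predicted class, confidence)

def false_positive_rate(detector: Callable[[object], List[Detection]],
                        normal_images: Iterable[object],
                        threshold: float = 0.5) -> float:
    images = list(normal_images)
    flagged = sum(
        any(conf >= threshold for _, conf in detector(img)) for img in images)
    return flagged / len(images)

# Hypothetical usage: a detector returning e.g. [("CA_with_CP", 0.92)].
# rate = false_positive_rate(my_detector, normal_test_images)  # e.g. 1/30 vs 12/30
```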


2021 ◽  
Vol 39 (15_suppl) ◽  
pp. 8536-8536
Author(s):  
Gouji Toyokawa ◽  
Fahdi Kanavati ◽  
Seiya Momosaki ◽  
Kengo Tateishi ◽  
Hiroaki Takeoka ◽  
...  

Background: Lung cancer is the leading cause of cancer-related death in many countries, and its prognosis remains unsatisfactory. Since treatment approaches differ substantially by subtype, such as adenocarcinoma (ADC), squamous cell carcinoma (SCC) and small cell lung cancer (SCLC), an accurate histopathological diagnosis is of great importance. However, if the specimen is solely composed of poorly differentiated cancer cells, distinguishing between histological subtypes can be difficult. The present study developed a deep learning model to classify lung cancer subtypes from whole slide images (WSIs) of transbronchial lung biopsy (TBLB) specimens, in particular with the aim of using this model to evaluate a challenging test set of indeterminate cases. Methods: Our deep learning model consisted of two separately trained components: a convolutional neural network tile classifier and a recurrent neural network tile aggregator for the WSI diagnosis. We used a training set consisting of 638 WSIs of TBLB specimens to train a deep learning model to classify lung cancer subtypes (ADC, SCC and SCLC) and non-neoplastic lesions. The training set consisted of 593 WSIs for which the diagnosis had been determined by pathologists based on visual inspection of Hematoxylin-Eosin (HE) slides and of 45 WSIs of indeterminate cases (64 ADCs and 19 SCCs). We then evaluated the models using five independent test sets. For each test set, we computed the receiver operating characteristic (ROC) area under the curve (AUC). Results: We applied the model to an indeterminate test set of WSIs obtained from TBLB specimens that pathologists had not been able to conclusively diagnose by examining the HE-stained specimens alone. Overall, the model achieved ROC AUCs of 0.993 (confidence interval [CI] 0.971-1.0) and 0.996 (0.981-1.0) for ADC and SCC, respectively. We further evaluated the model using five independent test sets consisting of both TBLB and surgically resected lung specimens (a combined total of 2490 WSIs) and obtained highly promising results, with ROC AUCs ranging from 0.94 to 0.99. Conclusions: In this study, we demonstrated that a deep learning model could be trained to predict lung cancer subtypes in indeterminate TBLB specimens. The extremely promising results show that, if deployed in clinical practice, a deep learning model capable of aiding pathologists in diagnosing indeterminate cases would be highly beneficial, as it would allow a diagnosis to be obtained sooner and reduce the costs that would result from further investigations.
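A schematic sketch of the two-stage design described above: a CNN encodes individual WSI tiles and an RNN aggregates the tile sequence into a slide-level subtype prediction. The backbone, feature size, sequence length and aggregator are illustrative choices, not the authors' configuration.

```python
# Schematic sketch (assumed architecture): tile-level CNN encoder plus an RNN
# that aggregates per-tile features into a slide-level prediction
# (ADC / SCC / SCLC / non-neoplastic).
import tensorflow as tf
from tensorflow.keras import layers, Model

NUM_CLASSES, TILE_SIZE, MAX_TILES, FEAT_DIM = 4, 224, 100, 256

# Stage 1: tile-level feature extractor.
tile_in = layers.Input(shape=(TILE_SIZE, TILE_SIZE, 3))
backbone = tf.keras.applications.EfficientNetB0(include_top=False, weights=None,
                                                input_tensor=tile_in)
tile_feat = layers.GlobalAveragePooling2D()(backbone.output)
tile_feat = layers.Dense(FEAT_DIM, activation="relu")(tile_feat)
tile_encoder = Model(tile_in, tile_feat, name="tile_encoder")

# Stage 2: slide-level aggregator over a (padded) sequence of tile features.
seq_in = layers.Input(shape=(MAX_TILES, FEAT_DIM))
h = layers.Masking()(seq_in)                      # ignore zero-padded tiles
h = layers.GRU(128)(h)
slide_out = layers.Dense(NUM_CLASSES, activation="softmax")(h)
aggregator = Model(seq_in, slide_out, name="slide_aggregator")

aggregator.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# Workflow: encode each tile with tile_encoder, stack into (MAX_TILES, FEAT_DIM),
# then train and evaluate the aggregator on slide-level labels.
```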


Cancers ◽  
2019 ◽  
Vol 11 (10) ◽  
pp. 1551 ◽  
Author(s):  
Edyta Marta Borkowska ◽  
Tomasz Konecki ◽  
Michał Pietrusiński ◽  
Maciej Borowiec ◽  
Zbigniew Jabłonowski

Bladder cancer (BC) is still characterized by a very high death rate. One of the reasons for this is the lack of adequate markers that could help determine the biological potential of the tumor to develop into its invasive stage. It has been found that some microRNAs (miRNAs) correlate with disease progression. The purpose of this study was to identify which miRNAs can accurately predict the presence of BC and can differentiate low grade (LG) tumors from high grade (HG) tumors. The study included 55 patients with diagnosed bladder cancer and 30 persons belonging to the control group. The expression of seven selected miRNAs was estimated with the real-time PCR technique, normalized to miR-103-5p. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate the feasibility of using the selected markers as biomarkers for detecting BC and for discriminating non-muscle-invasive BC (NMIBC) from muscle-invasive BC (MIBC). For HG tumors, the relevant classifiers are miR-205-5p and miR-20a-5p, whereas miR-205-5p and miR-182-5p are relevant for LG tumors (AUC = 0.964 and AUC = 0.992, respectively). NMIBC patients with LG disease are characterized by significantly higher miR-130b-3p expression values compared with patients with HG tumors.
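For concreteness, the sketch below walks through the standard relative-quantification arithmetic (2^-dCt against the miR-103-5p reference) and an ROC AUC for case-control separation. All numbers are invented placeholders, not study measurements.

```python
# Minimal sketch of the analysis idea (placeholder numbers, not study data):
# relative miRNA expression by the 2^-dCt method normalised to miR-103-5p,
# then ROC AUC for separating bladder-cancer cases from controls.
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical qPCR cycle-threshold (Ct) values.
ct_target = np.array([24.1, 23.5, 26.0, 22.8, 27.5, 28.1, 27.9, 28.6])     # e.g. miR-205-5p
ct_reference = np.array([20.0, 19.8, 20.3, 19.9, 20.1, 20.2, 19.7, 20.0])  # miR-103-5p
labels = np.array([1, 1, 1, 1, 0, 0, 0, 0])   # 1 = bladder cancer, 0 = control

rel_expression = 2.0 ** -(ct_target - ct_reference)   # 2^-dCt per sample
auc = roc_auc_score(labels, rel_expression)
print(f"AUC = {auc:.3f}")
```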


2022 ◽  
Vol 17 (1) ◽  
Author(s):  
Bachar Alabdullah ◽  
Amir Hadji-Ashrafy

Abstract Background A number of biomarkers have the potential to differentiate between primary lung tumours and secondary lung tumours from the gastrointestinal tract; however, a standardised panel for that purpose does not yet exist. We aimed to identify the smallest panel that is most sensitive and specific at differentiating between primary lung tumours and secondary lung tumours from the gastrointestinal tract. Methods A total of 170 samples were collected, including 140 primary and 30 non-primary lung tumours, and staining for CK7, Napsin-A, TTF1, CK20, CDX2, and SATB2 was performed via tissue microarray. The data were then analysed using univariate regression models and a combination of multivariate regression models and receiver operating characteristic (ROC) curves. Results Univariate regression models confirmed the ability of all 6 biomarkers to independently predict the primary outcome (p < 0.001). Multivariate models of 2-biomarker combinations identified 11 combinations with statistically significant odds ratios (ORs) (p < 0.05), of which TTF1/CDX2 had the highest area under the curve (AUC) (0.983, 95% CI 0.960–1.000). The sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were 75.7, 100, 100, and 37.5%, respectively. Multivariate models of 3-biomarker combinations identified 4 combinations with statistically significant ORs (p < 0.05), of which CK7/CK20/SATB2 had the highest AUC (0.965, 95% CI 0.930–1.000). The sensitivity, specificity, PPV, and NPV were 85.1, 100, 100, and 41.7%, respectively. Multivariate models of 4-biomarker combinations did not identify any combinations with statistically significant ORs (p < 0.05). Conclusions The analysis identified the combination of CK7/CK20/SATB2 as the smallest panel with the highest sensitivity (85.1%) and specificity (100%) for predicting tumour origin, with an ROC AUC of 0.965 (p < 0.001; SE 0.018, 95% CI 0.930–1.000).
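The modelling step, a multivariate logistic regression on a biomarker combination with odds ratios and an ROC AUC, can be illustrated with a few lines of scikit-learn. The data below are synthetic stand-ins generated only to make the example run.

```python
# Hedged sketch of the modelling step (synthetic data, not the study's): fit a
# multivariate logistic regression on a biomarker combination (e.g. CK7/CK20/SATB2
# positivity) and report odds ratios and the ROC AUC of the combined model.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 170
y = (rng.random(n) < 140 / 170).astype(int)          # 1 = primary lung tumour
# Synthetic binary immunostaining results loosely correlated with the outcome.
ck7 = (rng.random(n) < np.where(y == 1, 0.9, 0.2)).astype(int)
ck20 = (rng.random(n) < np.where(y == 1, 0.1, 0.8)).astype(int)
satb2 = (rng.random(n) < np.where(y == 1, 0.05, 0.7)).astype(int)

X = np.column_stack([ck7, ck20, satb2])
model = LogisticRegression().fit(X, y)
odds_ratios = np.exp(model.coef_[0])                 # one OR per biomarker
auc = roc_auc_score(y, model.predict_proba(X)[:, 1])
print("odds ratios (CK7, CK20, SATB2):", odds_ratios.round(2))
print(f"ROC AUC: {auc:.3f}")
```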


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 12
Author(s):  
Jose M. Castillo T. ◽  
Muhammad Arif ◽  
Martijn P. A. Starmans ◽  
Wiro J. Niessen ◽  
Chris H. Bangma ◽  
...  

The computer-aided analysis of prostate multiparametric MRI (mpMRI) could improve significant prostate cancer (PCa) detection. Various deep-learning- and radiomics-based methods for significant-PCa segmentation or classification have been reported in the literature. To assess the generalizability of these methods, evaluation on various external data sets is crucial. While deep-learning and radiomics approaches have been compared on the same single-center data set, a comparison of the two approaches on data sets from different centers and different scanners has been lacking. The goal of this study was to compare the performance of a deep-learning model with that of a radiomics model for significant-PCa diagnosis across various patient cohorts. We included data from two consecutive patient cohorts from our own center (n = 371 patients) and two external sets, of which one was a publicly available patient cohort (n = 195 patients) and the other contained data from patients from two hospitals (n = 79 patients). For all patients, mpMRI scans, radiologist tumor delineations and pathology reports were collected. During training, one of our patient cohorts (n = 271 patients) was used for both the deep-learning and radiomics model development, and the three remaining cohorts (n = 374 patients) were kept as unseen test sets. The performances of the models were assessed in terms of the area under the receiver-operating-characteristic curve (AUC). Whereas internal cross-validation showed a higher AUC for the deep-learning approach, the radiomics model obtained AUCs of 0.88, 0.91 and 0.65 on the independent test sets, compared with AUCs of 0.70, 0.73 and 0.44 for the deep-learning model. Our radiomics model, which was based on delineated regions, was thus a more accurate tool for significant-PCa classification on the three unseen test sets than the fully automated deep-learning model.
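The head-to-head comparison amounts to scoring each trained model on every held-out cohort and reporting one AUC per cohort; a generic sketch of that loop is shown below. The cohort names and the fitted model are hypothetical placeholders.

```python
# Schematic sketch only: evaluate a trained significant-PCa classifier on
# several unseen external cohorts and report one AUC per cohort.
from sklearn.metrics import roc_auc_score

def external_validation(model, cohorts):
    """cohorts: dict mapping cohort name -> (features, labels)."""
    return {name: roc_auc_score(y, model.predict_proba(X)[:, 1])
            for name, (X, y) in cohorts.items()}

# Hypothetical usage with a fitted scikit-learn model and three held-out cohorts:
# aucs = external_validation(radiomics_model,
#                            {"internal-consecutive": (X1, y1),
#                             "public-cohort": (X2, y2),
#                             "two-hospital-cohort": (X3, y3)})
# print(aucs)
```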

