ITVT-06. Application of artificial intelligence and radiomics for the analysis of intraoperative ultrasound images of brain tumors

2021
Vol 23 (Supplement_6)
pp. vi229-vi229
Author(s):
Santiago Cepeda

Abstract
BACKGROUND Intraoperative ultrasound (ioUS) images of brain tumors contain information that has not yet been exploited. The present work analyzes images in both B-mode and strain elastography using techniques based on artificial intelligence and radiomics. We aim to assess the capacity to differentiate glioblastomas (GBM) from solitary brain metastases (SBM) and to predict overall survival (OS) in GBM.
METHODS We performed a retrospective analysis of patients with GBM and SBM diagnoses who underwent craniotomy between March 2018 and June 2020. Cases with an ioUS study were included. In the first group of patients, an analysis based on deep learning was performed: an existing neural network (Inception V3) was used to classify tumors into GBM and SBM, and the models were evaluated using the area under the curve (AUC), classification accuracy, and precision. In the second group, radiomic features were extracted from the tumor region. Radiomic features associated with OS were selected using univariate correlations, and a survival analysis was then conducted using Cox regression.
RESULTS For the classification task, a total of 36 patients were included: 26 GBM and 10 SBM. Models were built using a total of 812 ultrasound images. For B-mode, AUC and accuracy values ranged from 0.790 to 0.943 and from 72% to 89%, respectively. For elastography, AUC and accuracy values ranged from 0.847 to 0.985 and from 79% to 95%, respectively. Sixteen patients were available for the survival analysis. A total of 52 radiomic features were extracted; two texture features from B-mode (Conventional mean and GLZLM_SZLGE) and one texture feature from strain elastography (GLZLM_LZHGE) were significantly associated with OS.
CONCLUSIONS Automated processing of ioUS images through deep learning can generate high-precision classification algorithms. Radiomic features of the tumor region in B-mode and elastography appear to be significantly associated with OS in GBM.
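As a concrete illustration of the survival-analysis step described above, the sketch below screens radiomic features against OS with univariate correlations and then fits a Cox model. The CSV layout, column names, and the choice of SciPy and lifelines are illustrative assumptions, not the authors' exact pipeline.

```python
# Hypothetical layout: one row per patient, radiomic feature columns plus
# OS in months and an event indicator (1 = death observed, 0 = censored).
import pandas as pd
from scipy.stats import spearmanr
from lifelines import CoxPHFitter

df = pd.read_csv("radiomics_os.csv")  # placeholder file name
features = [c for c in df.columns if c not in ("OS_months", "event")]

# Univariate screen: keep features significantly correlated with OS
selected = [f for f in features
            if spearmanr(df[f], df["OS_months"]).pvalue < 0.05]

# Cox regression on the selected features
cph = CoxPHFitter()
cph.fit(df[selected + ["OS_months", "event"]],
        duration_col="OS_months", event_col="event")
cph.print_summary()  # hazard ratios and p-values per feature
```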

2021
Vol 10
Author(s):
Santiago Cepeda
Sergio García-García
Ignacio Arrese
Gabriel Fernández-Pérez
María Velasco-Casares
...

Background
The differential diagnosis of glioblastomas (GBM) from solitary brain metastases (SBM) is essential because the surgical strategy varies according to the histopathological diagnosis. Intraoperative ultrasound elastography (IOUS-E) is a relatively novel technique in the surgical management of brain tumors that provides additional information about the elasticity of tissues. This study compares the discriminative capacity of intraoperative ultrasound B-mode and strain elastography to differentiate GBM from SBM.
Methods
We performed a retrospective analysis of patients with glioblastoma (GBM) and solitary brain metastasis (SBM) diagnoses who underwent craniotomy between March 2018 and June 2020. Cases with an intraoperative ultrasound study were included. Images were acquired before dural opening, first in B-mode and then using the strain-elastography module. After image pre-processing, an analysis based on deep learning was conducted using the open-source software Orange. We trained an existing neural network (Inception V3) to classify tumors into GBM and SBM via transfer learning. Logistic regression (LR) with LASSO (least absolute shrinkage and selection operator) regularization, support vector machine (SVM), random forest (RF), neural network (NN), and k-nearest neighbor (kNN) classifiers were then used as classification algorithms. After training, the models were evaluated with ten-fold stratified cross-validation using the area under the curve (AUC), classification accuracy, and precision.
Results
A total of 36 patients were included in the analysis: 26 GBM and 10 SBM. Models were built using a total of 812 ultrasound images: 435 B-mode images, of which 265 (60.92%) corresponded to GBM and 170 (39.08%) to metastases, and 377 elastograms, of which 232 (61.54%) corresponded to GBM and 145 (38.46%) to metastases. For B-mode, AUC and accuracy values of the classification algorithms ranged from 0.790 to 0.943 and from 72% to 89%, respectively. For elastography, AUC and accuracy values ranged from 0.847 to 0.985 and from 79% to 95%, respectively.
Conclusion
Automated processing of ultrasound images through deep learning can generate high-precision algorithms that differentiate glioblastomas from metastases on intraoperative ultrasound. The best AUC was achieved by the elastography-based model, supporting the additional diagnostic value that this technique provides.
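The sketch below outlines this pipeline in script form: Inception V3 as a frozen ImageNet-pretrained feature extractor, followed by the classical classifiers evaluated with ten-fold stratified cross-validation. The authors worked in the Orange GUI; the random placeholder data and hyperparameters here are assumptions for illustration.

```python
import numpy as np
import tensorflow as tf
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Inception V3 as a frozen feature extractor (transfer learning)
extractor = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg")

def embed(images):
    # Map (N, 299, 299, 3) images to 2048-d Inception V3 embeddings
    x = tf.keras.applications.inception_v3.preprocess_input(
        tf.cast(images, tf.float32))
    return extractor.predict(x, verbose=0)

# Random arrays stand in for the B-mode / elastography patches and labels
images = np.random.randint(0, 256, (40, 299, 299, 3), dtype=np.uint8)
labels = np.tile([0, 1], 20)  # 0 = SBM, 1 = GBM
X, y = embed(images), labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
classifiers = {
    "LR (LASSO)": LogisticRegression(penalty="l1", solver="liblinear"),
    "SVM": SVC(),
    "RF": RandomForestClassifier(n_estimators=200),
    "NN": MLPClassifier(max_iter=500),
    "kNN": KNeighborsClassifier(),
}
for name, clf in classifiers.items():
    auc = cross_val_score(clf, X, y, cv=cv, scoring="roc_auc")
    print(f"{name}: AUC = {auc.mean():.3f} +/- {auc.std():.3f}")
```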


Author(s):
Elisee Ilunga-Mbuyamba
Juan Gabriel Avina-Cervantes
Dirk Lindner
Felix Arlt
Jean Fulbert Ituna-Yudonago
...

2021
Vol 12 (1)
Author(s):
Gang Yu
Kai Sun
Chao Xu
Xing-Hua Shi
Chong Wu
...

Abstract
Machine-assisted pathological recognition has focused on supervised learning (SL), which suffers from a significant annotation bottleneck. We propose a semi-supervised learning (SSL) method based on the mean-teacher architecture using 13,111 whole-slide images of colorectal cancer from 8803 subjects from 13 independent centers. SSL (~3150 labeled, ~40,950 unlabeled; ~6300 labeled, ~37,800 unlabeled patches) performs significantly better than SL. No significant difference is found between SSL (~6300 labeled, ~37,800 unlabeled) and SL (~44,100 labeled) at patch-level diagnoses (area under the curve (AUC): 0.980 ± 0.014 vs. 0.987 ± 0.008, P value = 0.134) or patient-level diagnoses (AUC: 0.974 ± 0.013 vs. 0.980 ± 0.010, P value = 0.117), which is close to human pathologists (average AUC: 0.969). Evaluation on 15,000 lung and 294,912 lymph node images also confirms that SSL can achieve performance similar to that of SL with massive annotations. SSL dramatically reduces the annotation burden and thus has great potential for building expert-level pathological artificial intelligence platforms in practice.
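A compact sketch of the mean-teacher idea used here: a student network is trained with a supervised loss on labeled patches plus a consistency loss on differently perturbed views of unlabeled patches, while the teacher's weights track the student as an exponential moving average (EMA). The tiny linear model and random tensors are placeholders for the patch classifier and whole-slide-image data.

```python
import copy
import torch
import torch.nn.functional as F

def ema_update(teacher, student, alpha=0.99):
    # Teacher weights are an exponential moving average of the student's
    for t, s in zip(teacher.parameters(), student.parameters()):
        t.data.mul_(alpha).add_(s.data, alpha=1 - alpha)

student = torch.nn.Sequential(torch.nn.Flatten(),
                              torch.nn.Linear(3 * 64 * 64, 2))
teacher = copy.deepcopy(student)
for p in teacher.parameters():
    p.requires_grad_(False)  # teacher is never updated by gradients

opt = torch.optim.Adam(student.parameters(), lr=1e-3)

# Dummy batches standing in for labeled / unlabeled pathology patches
x_lab, y_lab = torch.randn(8, 3, 64, 64), torch.randint(0, 2, (8,))
x_unl = torch.randn(32, 3, 64, 64)

for step in range(100):
    sup = F.cross_entropy(student(x_lab), y_lab)  # supervised loss
    with torch.no_grad():
        t_out = teacher(x_unl + 0.1 * torch.randn_like(x_unl))
    s_out = student(x_unl + 0.1 * torch.randn_like(x_unl))
    # Consistency loss: student and teacher should agree on unlabeled data
    cons = F.mse_loss(F.softmax(s_out, dim=1), F.softmax(t_out, dim=1))
    loss = sup + cons
    opt.zero_grad()
    loss.backward()
    opt.step()
    ema_update(teacher, student)
```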


2020
Vol 2020
pp. 1-10
Author(s):
Ilker Ozsahin
Boran Sekeroglu
Musa Sani Musa
Mubarak Taiwo Mustapha
Dilber Uzun Ozsahin

The COVID-19 diagnostic approach is mainly divided into two broad categories: a laboratory-based approach and a chest-radiography approach. The last few months have witnessed a rapid increase in the number of studies that use artificial intelligence (AI) techniques to diagnose COVID-19 from chest computed tomography (CT). In this study, we review AI-based approaches to diagnosing COVID-19 from chest CT. We searched arXiv, medRxiv, and Google Scholar using the terms “deep learning”, “neural networks”, “COVID-19”, and “chest CT”. At the time of writing (August 24, 2020), nearly 100 such studies had been published, of which 30 were selected for this review. We categorized the studies by classification task: COVID-19/normal, COVID-19/non-COVID-19, COVID-19/non-COVID-19 pneumonia, and severity. Sensitivity, specificity, precision, accuracy, area under the curve, and F1 score results were reported as high as 100%, 100%, 99.62%, 99.87%, 100%, and 99.5%, respectively. However, the presented results should be compared with care, owing to the different degrees of difficulty of the different classification tasks.
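For readers comparing the figures reported across these studies, the snippet below computes the same six metrics from a set of binary predictions; the data are synthetic, purely to show the definitions.

```python
import numpy as np
from sklearn.metrics import (accuracy_score, confusion_matrix, f1_score,
                             precision_score, roc_auc_score)

# Synthetic ground truth and predicted probabilities
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_prob = np.array([0.9, 0.2, 0.8, 0.6, 0.3, 0.4, 0.7, 0.1])
y_pred = (y_prob >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
print("sensitivity:", tp / (tp + fn))  # a.k.a. recall
print("specificity:", tn / (tn + fp))
print("precision:  ", precision_score(y_true, y_pred))
print("accuracy:   ", accuracy_score(y_true, y_pred))
print("AUC:        ", roc_auc_score(y_true, y_prob))
print("F1 score:   ", f1_score(y_true, y_pred))
```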


Diagnostics
2021
Vol 12 (1)
pp. 66
Author(s):
Yung-Hsien Hsieh
Fang-Rong Hsu
Seng-Tong Dai
Hsin-Ya Huang
Dar-Ren Chen
...

In this study, we applied semantic segmentation with a fully convolutional deep learning network to identify characteristics of the Breast Imaging Reporting and Data System (BI-RADS) lexicon in breast ultrasound images, to facilitate clinical classification of tumor malignancy. Among 378 images (204 benign and 174 malignant) from 189 patients (102 with benign breast tumors and 87 with malignant tumors), we identified seven malignant characteristics related to the BI-RADS lexicon in breast ultrasound. The mean accuracy and mean IU of the semantic segmentation were 32.82% and 28.88%, respectively. The weighted intersection over union was 85.35%, and the area under the curve was 89.47%, outperforming similar semantic segmentation networks, SegNet and U-Net, on the same dataset. Our results suggest that a deep learning network combined with the BI-RADS lexicon can be an important supplemental tool when using ultrasound to diagnose breast malignancy.
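The segmentation metrics quoted above (mean accuracy, mean IU, weighted IU) can be computed from a pixel-level confusion matrix, as in the standard FCN evaluation; the sketch below uses synthetic masks and assumes eight classes (background plus the seven BI-RADS-related characteristics).

```python
import numpy as np

def segmentation_metrics(y_true, y_pred, n_classes):
    # Pixel-level confusion matrix: rows = ground truth, cols = prediction
    cm = np.zeros((n_classes, n_classes))
    for t, p in zip(y_true.ravel(), y_pred.ravel()):
        cm[t, p] += 1
    iu = np.diag(cm) / (cm.sum(1) + cm.sum(0) - np.diag(cm))
    freq = cm.sum(1) / cm.sum()  # per-class pixel frequency
    return {
        "mean_accuracy": np.nanmean(np.diag(cm) / cm.sum(1)),
        "mean_iu": np.nanmean(iu),
        "weighted_iu": np.nansum(freq * iu),
    }

# Synthetic 8-class masks standing in for BI-RADS characteristic labels
gt = np.random.randint(0, 8, (128, 128))
pred = np.random.randint(0, 8, (128, 128))
print(segmentation_metrics(gt, pred, n_classes=8))
```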


Author(s):
Alaa Ahmed Abbood
Qahtan Makki Shallal
Mohammed Abdulraheem Fadhel

Brain tumors, among the most common and aggressive diseases, can lead to a much shorter lifespan. Treatment planning is therefore a crucial step in improving a patient's quality of life. In general, imaging techniques such as CT, MRI, and ultrasound have been used for assessing tumors in the prostate, breast, lung, brain, etc. In this work, MRI images are used to detect brain tumors. The enormous amount of data produced by MRI scans makes manual tumor vs. non-tumor classification impractical within a reasonable time, and manual assessment of a small number of images has its own limitations (e.g., in precise quantitative measurement). An automated classification system is therefore necessary. The automatic categorization of brain tumors within the surrounding tumor region is a challenging task because of spatial and structural variability. Four deep learning models, AlexNet, VGG16, GoogleNet, and ResNet50, are compared in this study for brain tumor classification. The results showed that ResNet50 is the most accurate model, with an accuracy of 95.8%, while AlexNet is the fastest, with a processing time of 1.2 seconds. In addition, a graphics processing unit (GPU) is employed for real-time use, reducing AlexNet's processing time to only 8.3 ms.
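A minimal sketch of such a comparison: the four backbones with their final layers replaced for the two-class task, and per-image inference timed. Weights are randomly initialized here to keep the sketch offline (swap in weights="DEFAULT" for ImageNet pretraining); the dummy input stands in for a preprocessed MRI slice.

```python
import time
import torch
import torchvision.models as models

def build(name, n_classes=2):
    net = {"alexnet": models.alexnet,
           "vgg16": models.vgg16,
           "googlenet": models.googlenet,
           "resnet50": models.resnet50}[name](weights=None)
    # Replace the final classifier layer for the tumor / non-tumor task
    if name in ("googlenet", "resnet50"):
        net.fc = torch.nn.Linear(net.fc.in_features, n_classes)
    else:
        net.classifier[-1] = torch.nn.Linear(
            net.classifier[-1].in_features, n_classes)
    return net.eval()

x = torch.randn(1, 3, 224, 224)  # dummy MRI slice, ImageNet input size
for name in ["alexnet", "vgg16", "googlenet", "resnet50"]:
    net = build(name)
    t0 = time.perf_counter()
    with torch.no_grad():
        net(x)
    print(f"{name}: {(time.perf_counter() - t0) * 1e3:.1f} ms per image")
```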


2020
Author(s):
Ka Young Shim
Sung Won Chung
Jae Hak Jeong
Inpyeong Hwang
Chul-Kee Park
...

Abstract
Glioblastoma remains the most devastating brain tumor despite optimal treatment, because of its high rate of recurrence. Distant recurrence shows distinct genomic alterations compared with local recurrence and requires different treatment planning, both in clinical practice and in trials. To date, perfusion-weighted MRI has revealed that the perfusional characteristics of a tumor are associated with prognosis, but little research has focused on recurrence patterns in glioblastoma, namely local versus distant recurrence. Here, we propose two neural network models that predict recurrence patterns in glioblastoma from high-dimensional radiomic profiles based on perfusion MRI: area under the curve (AUC) (95% confidence interval), 0.969 (0.903-1.000) for local recurrence and 0.864 (0.726-0.976) for distant recurrence for each patient in the validation set. This creates an opportunity to provide personalized medicine, in contrast to studies investigating only group differences. Moreover, interpretable deep learning identified that the salient radiomic features for each recurrence pattern are related to perfusional intratumoral heterogeneity. We also demonstrated that the combined salient radiomic features, a “radiomic risk score”, were associated with an increased risk of recurrence/progression (hazard ratio, 1.61; p=0.03) in a multivariate Cox regression on progression-free survival.
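As a sketch of that final step, the snippet below combines salient radiomic features into a single "radiomic risk score" and enters it into a multivariate Cox model on progression-free survival. The file, feature names, weights, and covariates are hypothetical placeholders, not the study's actual variables.

```python
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("gbm_pfs.csv")  # placeholder: one row per patient

# Hypothetical salient features and weights (e.g., from model saliency)
salient = ["perf_heterogeneity_1", "perf_heterogeneity_2"]
weights = [0.8, 0.5]
df["radiomic_risk_score"] = (df[salient] * weights).sum(axis=1)

# Multivariate Cox regression on progression-free survival
cph = CoxPHFitter()
cph.fit(df[["radiomic_risk_score", "age", "PFS_months", "progression"]],
        duration_col="PFS_months", event_col="progression")
print(cph.summary[["exp(coef)", "p"]])  # hazard ratio and p per covariate
```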


2020
Author(s):
Marvin Chia-Han Yeh
Yu-Chuan (Jack) Li
Yu-Hsiang Wang
Hsuan-Chia Yang
Kuan-Jen Bai
...

BACKGROUND Artificial intelligence can integrate complex features and may be used to predict the risk of developing lung cancer, thereby decreasing the need for unnecessary and expensive diagnostic interventions.
OBJECTIVE To use electronic medical records (EMRs) to pre-screen patients for the risk of developing lung cancer.
METHODS Two million participants were randomly selected from the Taiwan National Health Insurance Research Database covering 1999 to 2013. We built a predictive lung cancer screening model with neural networks that were trained and validated on pre-2012 data and tested prospectively on post-2012 data. An age- and gender-matched subgroup ten times larger than the original lung cancer group was used to assess the predictive power of the EMR. Discrimination (area under the curve [AUC]) and calibration analyses were performed.
RESULTS The analysis included 11,617 cases of lung cancer and 1,423,154 controls. The model achieved an AUC of 0.90 for the overall population and 0.87 for patients >55 years of age. The AUC in the matched subgroup was 0.82. The positive predictive value was highest (14.3%) among those >55 years old with a preexisting history of lung disease.
CONCLUSIONS Our model achieved excellent performance at predicting lung cancer within one year and may be deployed for digital patient screening. Deep learning facilitates the effective use of EMRs to identify individuals at high risk of developing lung cancer.
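The evaluation pattern described (discrimination via AUC plus calibration) can be sketched as below with a small feed-forward network on tabular features; the synthetic data, network size, and random train/test split are placeholders for the NHIRD variables and the paper's temporal pre-/post-2012 split.

```python
import numpy as np
from sklearn.calibration import calibration_curve
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in for coded EMR features and a rare outcome
rng = np.random.default_rng(0)
X = rng.normal(size=(5000, 30))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=5000) > 2).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=0)

clf = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500,
                    random_state=0).fit(X_tr, y_tr)
p = clf.predict_proba(X_te)[:, 1]

print("AUC:", roc_auc_score(y_te, p))  # discrimination
frac_pos, mean_pred = calibration_curve(y_te, p, n_bins=10)  # calibration
print("mean predicted vs. observed event rate per bin:")
print(np.c_[mean_pred, frac_pos])
```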

