Prediction of lesion shrinkage using CT imaging with radiomic and deep learning approaches.

2019 ◽  
Vol 37 (15_suppl) ◽  
pp. e14592-e14592
Author(s):  
Junshui Ma ◽  
Rongjie Liu ◽  
Gregory V. Goldmacher ◽  
Richard Baumgartner

e14592 Background: Radiomic features derived from CT scans have shown promise in predicting treatment response (Sun et al. 2018, and others). We carried out a proof-of-concept study to investigate the use of CT images to predict lesion-level response. Methods: CT images from the Merck studies KEYNOTE-010 (NCT01905657) and KEYNOTE-024 (NCT02142738) were used. Data from each study were evaluated separately and split for training (80%) and validation (20%). A lesion was classified as “shrinking” if a ≥30% size reduction from baseline was seen on any future scan. There were 2004 (613 shrinking vs. 1391 non-shrinking) and 588 (311 vs. 277) lesions in KN10 and KN24, respectively. 130 radiomic features were extracted and fed to a random forest to predict lesion response. In addition, an end-to-end deep learning model was used, which predicts the response directly from ROIs of the CT images. Models were trained in two ways: (1) using the pre-treatment baseline (BL) image only or (2) using both BL and the first post-treatment image (V1) as predictors. Finally, to evaluate the predictive power without relying on initial lesion size, size information was omitted from the CT images. Results: Results from KN10 and KN24 are summarized in the Table. Conclusions: The results suggest that the BL CT images alone have little power to predict lesion response, while BL and the first post-baseline image exhibit high predictive power. Although a substantial part of the predictive power can be attributed to change in ROI size, predictive power also exists in other aspects of the CT images. Overall, the radiomic signature followed by a random forest produced predictions similar to, if not better than, the deep learning approach. [Table: see text]
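As a concrete illustration, the ≥30% shrinkage labeling rule described above can be sketched in a few lines of plain Python (a minimal sketch; the `label_lesion` helper and the millimeter measurements are hypothetical, not from the study's code):

```python
def label_lesion(baseline_mm, followup_mm):
    """Label a lesion 'shrinking' if any follow-up measurement shows
    at least a 30% size reduction relative to baseline."""
    if baseline_mm <= 0:
        raise ValueError("baseline size must be positive")
    for size in followup_mm:
        if (baseline_mm - size) / baseline_mm >= 0.30:
            return "shrinking"
    return "non-shrinking"

# 13.5 mm is a 32.5% reduction from a 20 mm baseline
print(label_lesion(20.0, [18.0, 13.5]))  # prints "shrinking"
```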

Energies ◽  
2021 ◽  
Vol 14 (15) ◽  
pp. 4595
Author(s):  
Parisa Asadi ◽  
Lauren E. Beckingham

X-ray CT imaging provides a 3D view of a sample and is a powerful tool for investigating the internal features of porous rock. Reliable phase segmentation in these images is essential but, like any other digital rock imaging technique, is time-consuming, labor-intensive, and subjective. Combining 3D X-ray CT imaging with machine learning methods that can simultaneously consider several extracted features in addition to color attenuation is a promising and powerful approach for reliable phase segmentation. Machine learning-based phase segmentation of X-ray CT images enables faster data collection and interpretation than traditional methods. This study investigates the performance of several filtering techniques with three machine learning methods and a deep learning method to assess the potential for reliable feature extraction and pixel-level phase segmentation of X-ray CT images. Features were first extracted from images using well-known filters and from the second convolutional layer of the pre-trained VGG16 architecture. Then, K-means clustering, Random Forest, and Feed Forward Artificial Neural Network methods, as well as the modified U-Net model, were applied to the extracted input features. The models’ performances were then compared and contrasted to determine the influence of the machine learning method and input features on reliable phase segmentation. The results showed that considering higher-dimensional feature sets yields promising results, with all classification algorithms achieving high accuracy, ranging from 0.87 to 0.94. Feature-based Random Forest demonstrated the best performance among the machine learning models, with an accuracy of 0.88 for Mancos and 0.94 for Marcellus. The U-Net model with the linear combination of focal and dice loss also performed well, with an accuracy of 0.91 and 0.93 for Mancos and Marcellus, respectively.
In general, considering more features provided promising and reliable segmentation results that are valuable for analyzing the composition of dense samples, such as shales, which are significant unconventional reservoirs in oil recovery.
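The linear combination of focal and Dice loss used with the U-Net above can be sketched over flattened pixel lists (a minimal plain-Python sketch; the function names and the equal 0.5/0.5 weighting `w` are assumptions, since the paper's actual weighting is not given here):

```python
import math

def dice_loss(pred, target, eps=1e-6):
    """Soft Dice loss over flattened predicted probabilities and binary labels."""
    inter = sum(p * t for p, t in zip(pred, target))
    return 1.0 - (2.0 * inter + eps) / (sum(pred) + sum(target) + eps)

def focal_loss(pred, target, gamma=2.0, eps=1e-7):
    """Binary focal loss, averaged over pixels; down-weights easy examples."""
    total = 0.0
    for p, t in zip(pred, target):
        p = min(max(p, eps), 1.0 - eps)
        pt = p if t == 1 else 1.0 - p
        total += -((1.0 - pt) ** gamma) * math.log(pt)
    return total / len(pred)

def combined_loss(pred, target, w=0.5):
    """Linear combination of focal and Dice loss (assumed weighting)."""
    return w * focal_loss(pred, target) + (1.0 - w) * dice_loss(pred, target)
```

A nearly perfect prediction drives both terms, and hence the combination, toward zero.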


2020 ◽  
Vol 10 ◽  
Author(s):  
Zefan Liu ◽  
Guannan Zhu ◽  
Xian Jiang ◽  
Yunuo Zhao ◽  
Hao Zeng ◽  
...  

Objective: To establish a classifier for accurately predicting the overall survival of gallbladder cancer (GBC) patients by analyzing pre-treatment CT images using machine learning technology. Methods: This retrospective study included 141 patients with pathologically confirmed GBC. After obtaining the pre-treatment CT images, the tumor lesion was manually segmented and the LIFEx package was used to extract the tumor signature. Next, LASSO and Random Forest methods were used for feature selection and modeling. Finally, clinical information was combined to predict the survival outcomes of GBC patients. Results: Fifteen CT features were selected through LASSO and random forest. On the basis of relative importance, GLZLM-HGZE, GLCM-homogeneity and NGLDM-coarseness were included in the final model. The hazard ratio of the CT-based model was 1.462 (95% CI: 1.014–2.107). According to the median risk score, all patients were divided into high- and low-risk groups, and survival analysis showed that the high-risk group had poorer survival outcomes (P = 0.012). After inclusion of clinical factors, we used multivariate Cox regression to classify patients with GBC. The AUC values for 3-year survival reached 0.79 in the test set and 0.73 in the validation set. Conclusion: GBC survival outcomes could be predicted by radiomics based on LASSO and Random Forest.
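The risk stratification described above amounts to a Cox-style linear risk score split at the median; a minimal sketch (the `cox_risk_score` and `stratify_by_median` helpers and the coefficients are hypothetical, not the fitted model from the study):

```python
import statistics

def cox_risk_score(features, betas):
    """Linear predictor of a Cox model: sum of coefficient * feature value."""
    return sum(b * x for b, x in zip(betas, features))

def stratify_by_median(risk_scores):
    """Assign each patient to the 'high' or 'low' risk group
    by comparing their score to the cohort median."""
    med = statistics.median(risk_scores)
    return ["high" if s > med else "low" for s in risk_scores]
```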


Cancers ◽  
2021 ◽  
Vol 14 (1) ◽  
pp. 40
Author(s):  
Gyu Sang Yoo ◽  
Huan Minh Luu ◽  
Heejung Kim ◽  
Won Park ◽  
Hongryull Pyo ◽  
...  

We aimed to evaluate and compare the quality of synthetic computed tomography (sCT) images generated by various deep-learning methods in volumetric modulated arc therapy (VMAT) planning for prostate cancer. Simulation computed tomography (CT) and T2-weighted simulation magnetic resonance images from 113 patients were used for sCT generation by three deep-learning approaches: generative adversarial network (GAN), cycle-consistent GAN (CycGAN), and reference-guided CycGAN (RgGAN), a new model which performed further adjustment of the sCTs generated by CycGAN using available paired images. VMAT plans on the original simulation CT images were recalculated on the sCTs and the dosimetric differences were evaluated. For soft tissue, a significant difference in mean Hounsfield units (HUs) between the original CT images and the sCTs was observed only for GAN (p = 0.03). The mean relative dose differences for planning target volumes or organs at risk were within 2% among the sCTs from the three deep-learning approaches. The differences in the dosimetric parameters D98% and D95% from the original CT were lowest for the sCT from RgGAN. In conclusion, HU conservation for soft tissue was poorest for GAN, and there was a trend that the sCTs generated by RgGAN showed the best dosimetric conservation of D98% and D95% among the three methodologies.
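The soft-tissue HU comparison above amounts to averaging the HU difference between the original CT and an sCT inside a tissue mask; a minimal sketch (the flattened voxel lists and the `mean_hu_difference` name are illustrative assumptions, not the study's pipeline):

```python
def mean_hu_difference(ct_hu, sct_hu, mask):
    """Mean Hounsfield-unit difference between original CT and synthetic CT,
    restricted to voxels where the (e.g., soft-tissue) mask is truthy."""
    diffs = [c - s for c, s, m in zip(ct_hu, sct_hu, mask) if m]
    if not diffs:
        raise ValueError("mask selects no voxels")
    return sum(diffs) / len(diffs)
```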


2023 ◽  
Vol 55 (1) ◽  
pp. 1-44
Author(s):  
Massimiliano Luca ◽  
Gianni Barlacchi ◽  
Bruno Lepri ◽  
Luca Pappalardo

The study of human mobility is crucial due to its impact on several aspects of our society, such as disease spreading, urban planning, well-being, pollution, and more. The proliferation of digital mobility data, such as phone records, GPS traces, and social media posts, combined with the predictive power of artificial intelligence, triggered the application of deep learning to human mobility. Existing surveys focus on single tasks, data sources, mechanistic or traditional machine learning approaches, while a comprehensive description of deep learning solutions is missing. This survey provides a taxonomy of mobility tasks, a discussion on the challenges related to each task and how deep learning may overcome the limitations of traditional models, a description of the most relevant solutions to the mobility tasks described above, and the relevant challenges for the future. Our survey is a guide to the leading deep learning solutions to next-location prediction, crowd flow prediction, trajectory generation, and flow generation. At the same time, it helps deep learning scientists and practitioners understand the fundamental concepts and the open challenges of the study of human mobility.


2020 ◽  
Author(s):  
kan He ◽  
Xiaoming Liu ◽  
Mingyang Li ◽  
Xueyan Li ◽  
Hualin Yang ◽  
...  

Abstract Background: The detection of Kirsten rat sarcoma viral oncogene homolog (KRAS) gene mutations in colorectal cancer (CRC) is key to the optimal design of individualized therapeutic strategies. The noninvasive prediction of KRAS status in CRC is challenging. Deep learning (DL) in medical imaging has shown high performance in diagnosis, classification, and prediction in recent years. In this paper, we investigated the predictive performance of a DL method with a residual neural network (ResNet) for estimating KRAS mutation status in CRC patients based on pre-treatment contrast-enhanced CT imaging. Methods: We collected a dataset consisting of 157 patients with pathology-confirmed CRC who were divided into a training cohort (n = 117) and a testing cohort (n = 40). We developed a ResNet model that used portal venous phase CT images to estimate KRAS mutations in the axial, coronal, and sagittal directions of the training cohort and evaluated the model in the testing cohort. Several groups of expanded region of interest (ROI) patches were generated for the ResNet model to explore whether tissues around the tumor can contribute to cancer assessment. We also explored a radiomics model with a random forest classifier (RFC) to predict KRAS mutations and compared it with the DL model. Results: The ResNet model in the axial direction achieved the highest area under the curve (AUC) value (0.90) in the testing cohort, peaking at 0.93 with an input of the ROI plus a 20-pixel surrounding area. The AUC of the radiomics model in the testing cohort was 0.818. In comparison, the ResNet model showed better predictive ability. Conclusions: Our experiments reveal that computerized assessment of the pre-treatment CT images of CRC patients using a DL model has the potential to precisely predict KRAS mutations. This new model has the potential to assist in noninvasive KRAS mutation estimation. Keywords: Colorectal Neoplasm, Mutation, Deep Learning
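The "expanded ROI" inputs described above amount to growing a tumor bounding box by a fixed pixel margin (e.g., 20 pixels), clipped to the image bounds; a minimal sketch (the `(y0, x0, y1, x1)` box convention and the `expand_roi` name are assumptions):

```python
def expand_roi(box, margin, height, width):
    """Expand a (y0, x0, y1, x1) ROI box by `margin` pixels on every side,
    clipping the result to the image extent."""
    y0, x0, y1, x1 = box
    return (max(0, y0 - margin), max(0, x0 - margin),
            min(height, y1 + margin), min(width, x1 + margin))
```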


2021 ◽  
Author(s):  
Mohamed A. Naser ◽  
Kareem A. Wahid ◽  
Abdallah Sherif Radwan Mohamed ◽  
Moamen Abobakr Abdelaal ◽  
Renjie He ◽  
...  

Determining progression-free survival (PFS) for head and neck squamous cell carcinoma (HNSCC) patients is a challenging but pertinent task that could help stratify patients for improved overall outcomes. PET/CT images provide a rich source of anatomical and metabolic data for potential clinical biomarkers that would inform treatment decisions and could help improve PFS. In this study, we participated in the 2021 HECKTOR Challenge to predict PFS in a large dataset of HNSCC PET/CT images using deep learning approaches. We developed a series of deep learning models based on the DenseNet architecture using a negative log-likelihood loss function that utilizes PET/CT images and clinical data as separate input channels to predict PFS in days. Internal model validation based on 10-fold cross-validation using the training data (N=224) yielded C-index values of up to 0.622 without and 0.842 with censoring status considered in the C-index computation. We then implemented model ensembling approaches based on the training data cross-validation folds to predict the PFS of the test set patients (N=101). External validation on the test set for the best ensembling method yielded a C-index value of 0.694. Our results are a promising example of how deep learning approaches can effectively utilize imaging and clinical data for medical outcome prediction in HNSCC, but further work in optimizing these processes is needed.
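The C-index used for validation above can be computed as Harrell's concordance index, which handles censoring by comparing only pairs where the patient with the shorter time actually had an event; a minimal sketch (not the challenge's exact implementation):

```python
def concordance_index(times, events, risks):
    """Harrell's C-index: fraction of comparable patient pairs where the
    higher-risk patient has the shorter observed survival time.
    A pair (i, j) with times[i] < times[j] is comparable only if
    patient i had an observed event (events[i] == 1, i.e., not censored)."""
    concordant = comparable = 0.0
    n = len(times)
    for i in range(n):
        for j in range(n):
            if times[i] < times[j] and events[i]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1
                elif risks[i] == risks[j]:
                    concordant += 0.5  # ties in risk count half
    return concordant / comparable
```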


2019 ◽  
Vol 11 (23) ◽  
pp. 2881 ◽  
Author(s):  
Leandro Parente ◽  
Evandro Taquary ◽  
Ana Silva ◽  
Carlos Souza ◽  
Laerte Ferreira

The rapidly growing number of satellites orbiting the planet is generating massive amounts of data for Earth science applications. Concurrently, state-of-the-art deep-learning-based algorithms and cloud computing infrastructure have become available with great potential to revolutionize the image processing of satellite remote sensing. Within this context, this study evaluated, based on thousands of PlanetScope images obtained over a 12-month period, the performance of three machine learning approaches (random forest, long short-term memory (LSTM), and U-Net). We applied these approaches to map pasturelands in a Central Brazil region. The deep learning algorithms were implemented using TensorFlow, while the random forest utilized the Google Earth Engine platform. The accuracy assessment presented F1 scores for U-Net, LSTM, and random forest of, respectively, 96.94%, 98.83%, and 95.53% on the validation data, and 94.06%, 87.97%, and 82.57% on the test data, indicating better classification efficiency for the deep learning approaches. Although the use of deep learning algorithms requires a high investment in calibration samples and the generalization of these methods requires further investigation, our results suggest that the neural network architectures developed in this study can be used to map large geographic regions using a wide variety of satellite data (e.g., PlanetScope, Sentinel-2, Landsat-8).
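The F1 scores reported above are the harmonic mean of precision and recall for the positive (pasture) class; a minimal sketch over flattened pixel labels (the binary per-class evaluation is an assumption about how the maps were scored):

```python
def f1_score(y_true, y_pred, positive=1):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p == positive)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t != positive and p == positive)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == positive and p != positive)
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)
```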

