A deep-learning-based prognostic nomogram integrating microscopic digital pathology and macroscopic magnetic resonance images in nasopharyngeal carcinoma: a multi-cohort study

2020, Vol 12, pp. 175883592097141
Author(s): Fan Zhang, Lian-Zhen Zhong, Xun Zhao, Di Dong, Ji-Jin Yao, ...

Background: To explore the prognostic value of radiomics-based and digital pathology-based imaging biomarkers from macroscopic magnetic resonance imaging (MRI) and microscopic whole-slide images for patients with nasopharyngeal carcinoma (NPC). Methods: We recruited 220 NPC patients and divided them into training (n = 132), internal test (n = 44), and external test (n = 44) cohorts. The primary endpoint was failure-free survival (FFS). Radiomic features were extracted from pretreatment MRI, selected, and integrated into a radiomic signature. The histopathological signature was extracted from whole-slide images of biopsy specimens using an end-to-end deep-learning method. Incorporating the two signatures and independent clinical factors, a multi-scale nomogram was constructed. We also tested the correlation between the key imaging features and genetic alterations in an independent cohort of 16 patients (biological test cohort). Results: Both the radiomic and histopathologic signatures showed significant associations with treatment failure in all three cohorts (C-index: 0.689–0.779, all p < 0.050). The multi-scale nomogram consistently and significantly improved prediction of treatment failure compared with the clinical model in the training (C-index: 0.817 versus 0.730, p < 0.050), internal test (C-index: 0.828 versus 0.602, p < 0.050), and external test (C-index: 0.834 versus 0.679, p < 0.050) cohorts. Furthermore, the nomogram successfully stratified patients into two groups with distinct prognoses (log-rank p < 0.0010). We also found that two texture features were related to genetic alterations of chromatin remodeling pathways in another independent cohort. Conclusion: The multi-scale imaging features showed complementary value in prognostic prediction and may improve individualized treatment in NPC.
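As a concrete illustration of how such signatures can be combined with clinical factors into a prognostic model, the following is a minimal sketch of a Cox proportional-hazards fit and C-index evaluation using the lifelines library. The column names (radiomic_sig, histo_sig, t_stage, n_stage, ffs_months, failure) and file names are hypothetical placeholders, not the study's actual variables or data.

```python
# Minimal sketch: combine a radiomic signature, a deep-learning histopathology
# signature, and clinical factors in a Cox model, then report the C-index.
# All column and file names below are hypothetical placeholders.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

train_df = pd.read_csv("training_cohort.csv")        # hypothetical training data
test_df = pd.read_csv("external_test_cohort.csv")    # hypothetical external test data

covariates = ["radiomic_sig", "histo_sig", "t_stage", "n_stage"]

cph = CoxPHFitter()
cph.fit(train_df[covariates + ["ffs_months", "failure"]],
        duration_col="ffs_months", event_col="failure")

# Discrimination on the external test cohort: higher partial hazard means worse
# prognosis, so the score is negated for the concordance computation.
risk = cph.predict_partial_hazard(test_df[covariates])
c_index = concordance_index(test_df["ffs_months"], -risk, test_df["failure"])
print(f"External test C-index: {c_index:.3f}")
```

In a nomogram, the fitted Cox coefficients are what map each covariate onto its point scale.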

2021, Vol 7 (3), pp. 51
Author(s): Emanuela Paladini, Edoardo Vantaggiato, Fares Bougourzi, Cosimo Distante, Abdenour Hadid, ...

In recent years, automatic tissue phenotyping has attracted increasing interest in the Digital Pathology (DP) field. For Colorectal Cancer (CRC), tissue phenotyping can diagnose the cancer and differentiate between cancer grades. The availability of Whole Slide Images (WSIs) has provided the data required to build automatic tissue phenotyping systems. In this paper, we study hand-crafted feature-based and deep learning methods on two popular multi-class CRC tissue-type databases: Kather-CRC-2016 and CRC-TP. For the hand-crafted features, we use two texture descriptors (LPQ and BSIF) and their combination, with two classifiers (SVM and NN) to assign the texture features to distinct CRC tissue types. For the deep learning methods, we evaluate four Convolutional Neural Network (CNN) architectures (ResNet-101, ResNeXt-50, Inception-v3, and DenseNet-161). Moreover, we propose two ensemble CNN approaches: Mean-Ensemble-CNN and NN-Ensemble-CNN. The experimental results show that the proposed approaches outperform the hand-crafted feature-based methods, the individual CNN architectures, and the state-of-the-art methods on both databases.
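As an illustration of the Mean-Ensemble idea (averaging the softmax outputs of several backbones), here is a minimal PyTorch sketch; the class count and input size are placeholders, and the models are instantiated untrained rather than with the fine-tuned weights used in the paper.

```python
# Minimal sketch of a mean-ensemble over several CNN backbones: average the
# per-model softmax probabilities and take the argmax. Class count and input
# size are illustrative, not the papers' exact setup.
import torch
import torch.nn.functional as F
from torchvision import models

NUM_CLASSES = 9  # e.g. number of tissue types; adjust to the dataset at hand

backbones = [
    models.resnet101(num_classes=NUM_CLASSES),
    models.resnext50_32x4d(num_classes=NUM_CLASSES),
    models.inception_v3(num_classes=NUM_CLASSES, aux_logits=False),
    models.densenet161(num_classes=NUM_CLASSES),
]

@torch.no_grad()
def mean_ensemble_predict(batch: torch.Tensor) -> torch.Tensor:
    """Average per-model softmax probabilities and return the argmax class."""
    probs = []
    for net in backbones:
        net.eval()
        probs.append(F.softmax(net(batch), dim=1))
    return torch.stack(probs).mean(dim=0).argmax(dim=1)

# Example: a batch of four 3-channel tiles resized to 299x299 (Inception-v3's expected size).
preds = mean_ensemble_predict(torch.randn(4, 3, 299, 299))
```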


2021, Vol 11 (2), pp. 782
Author(s): Albert Comelli, Navdeep Dahiya, Alessandro Stefano, Federica Vernuccio, Marzia Portoghese, ...

Magnetic Resonance Imaging-based prostate segmentation is an essential task for adaptive radiotherapy and for radiomics studies whose purpose is to identify associations between imaging features and patient outcomes. Because manual delineation is time-consuming, we present three deep-learning (DL) approaches, namely UNet, efficient neural network (ENet), and efficient residual factorized ConvNet (ERFNet), aimed at fully automated, real-time, 3D delineation of the prostate gland on T2-weighted MRI. While UNet is used in many biomedical image delineation applications, ENet and ERFNet are mainly applied in self-driving cars to compensate for limited hardware availability while still achieving accurate segmentation. We apply these models to a limited set of 85 manual prostate segmentations using a k-fold cross-validation strategy and the Tversky loss function, and we compare their results. We find that ENet and UNet are more accurate than ERFNet, with ENet much faster than UNet. Specifically, ENet obtains a Dice similarity coefficient of 90.89% and a segmentation time of about 6 s using central processing unit (CPU) hardware, simulating real clinical conditions where a graphics processing unit (GPU) is not always available. In conclusion, ENet could be efficiently applied for prostate delineation even with small image training datasets, with potential benefit for the personalization of patient management.
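For reference, a common form of the Tversky loss used for imbalanced segmentation tasks looks like the following PyTorch sketch; the alpha/beta weights shown are illustrative defaults, not necessarily the values used in this study.

```python
# Minimal sketch of a binary Tversky loss for segmentation. alpha weighs false
# positives and beta weighs false negatives; the values below are common defaults.
import torch

def tversky_loss(logits: torch.Tensor, target: torch.Tensor,
                 alpha: float = 0.3, beta: float = 0.7, eps: float = 1e-6) -> torch.Tensor:
    """logits, target: tensors of shape (N, 1, H, W) with target in {0, 1}."""
    probs = torch.sigmoid(logits)
    dims = (1, 2, 3)                                   # sum per sample
    tp = (probs * target).sum(dims)                    # true positives
    fp = (probs * (1.0 - target)).sum(dims)            # false positives
    fn = ((1.0 - probs) * target).sum(dims)            # false negatives
    tversky = (tp + eps) / (tp + alpha * fp + beta * fn + eps)
    return (1.0 - tversky).mean()
```

With alpha = beta = 0.5 this reduces to the familiar soft Dice loss.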


2021, Vol 11
Author(s): Xiangyu Ma, Xinyuan Chen, Jingwen Li, Yu Wang, Kuo Men, ...

Background: Radical radiotherapy is the main treatment modality for early and locally advanced nasopharyngeal carcinoma (NPC). Magnetic resonance imaging (MRI) offers no ionizing radiation and higher soft-tissue resolution than computed tomography (CT), but it does not provide the electron density (ED) information required for radiotherapy planning. In this study, we therefore developed a pseudo-CT (pCT) generation method to provide the necessary ED information for MRI-only planning in NPC radiotherapy. Methods: Twenty patients with early-stage NPC who received radiotherapy at our hospital were investigated. First, 1433 sets of paired T1-weighted magnetic resonance (MR) simulation images and CT simulation images were rigidly registered and preprocessed. A 16-layer U-Net was used to train the pCT generative model, and a "pix2pix" generative adversarial network (GAN) was also trained for comparison with the pure U-Net regarding pCT quality. Second, the contours of all target volumes and organs at risk in the original CT were transferred to the pCT for planning, and the beams were copied back to the original CT for reference dose calculation. Finally, the dose distribution calculated on the pCT was compared with the reference dose distribution through gamma analysis and dose-volume indices. Results: The average time for pCT generation for each patient was 7.90 ± 0.47 seconds. The average mean (absolute) error was −9.3 ± 16.9 HU (102.6 ± 11.4 HU), and the root-mean-square error was 209.8 ± 22.6 HU. There was no significant difference between the pCT quality of the pix2pix GAN and that of the pure U-Net (p > 0.05). The dose distribution on the pCT was highly consistent with that on the original CT. The mean gamma pass rate (2 mm/3%, 10% low-dose threshold) was 99.1% ± 0.3%, and the mean absolute differences of nasopharyngeal PGTV D99% and PTV V95% were 0.4% ± 0.2% and 0.1% ± 0.1%, respectively. Conclusion: The proposed deep learning model can accurately predict CT from MRI, and the generated pCT can be employed in precise dose calculations. This is of great significance for realizing MRI-only planning in NPC radiotherapy, which can improve structure delineation and considerably reduce the additional imaging dose, especially when an MR-guided linear accelerator is adopted for treatment.
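The HU error metrics reported above (mean error, mean absolute error, root-mean-square error) can be computed with a few lines of NumPy; the sketch below assumes co-registered pCT and CT volumes and a hypothetical body mask, and is not the authors' evaluation code.

```python
# Minimal sketch of pCT evaluation metrics in Hounsfield units, computed inside a
# body mask. Array names are placeholders; volumes must share the same shape and grid.
import numpy as np

def hu_error_metrics(pct: np.ndarray, ct: np.ndarray, body_mask: np.ndarray):
    """pct, ct: HU volumes; body_mask: boolean mask of the patient outline."""
    diff = (pct - ct)[body_mask]
    me = diff.mean()                     # mean (signed) error, i.e. systematic bias
    mae = np.abs(diff).mean()            # mean absolute error
    rmse = np.sqrt((diff ** 2).mean())   # root-mean-square error
    return me, mae, rmse
```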


2021, Vol 2021, pp. 1-11
Author(s): Deqian Xin, Zhongzhe An, Juan Ding, Zhi Li, Leyan Qiao

This study aimed to explore the value of magnetic resonance imaging (MRI) features based on deep-learning super-resolution algorithms in evaluating propofol anesthesia for brain protection in patients undergoing craniotomy evacuation of hematoma. An optimized super-resolution algorithm was obtained by building a multiscale network reconstruction model on top of the traditional algorithm. A total of 100 patients undergoing craniotomy evacuation of hematoma were recruited and divided into a sevoflurane control group and a propofol experimental group. Both groups were evaluated using diffusion tensor imaging (DTI) based on the deep-learning super-resolution algorithm. The results showed that the postoperative fractional anisotropy (FA) value of the corticospinal tract in the posterior limb of the internal capsule on the affected side in the experimental group was 0.67 ± 0.28, and the National Institutes of Health Stroke Scale (NIHSS) score was 6.14 ± 3.29. The jugular venous oxygen saturation (SjvO2) at T4 and T5 was 61.93 ± 6.58% and 59.38 ± 6.2%, respectively, and the cerebral oxygen extraction rate (CO2ER) was 31.12 ± 6.07% and 35.83 ± 7.91%, respectively. The arterial-jugular venous oxygen content difference (Da-jvO2) at T3, T4, and T5 was 63.28 ± 10.15 mL/dL, 64.89 ± 13.11 mL/dL, and 66.03 ± 11.78 mL/dL, respectively. The neuron-specific enolase (NSE) and central nervous system-specific protein (S100β) levels at T5 were 53.85 ± 12.31 ng/mL and 7.49 ± 3.16 ng/mL, respectively. The experimental group had fewer postoperative complications than the sevoflurane control group, and the differences were statistically significant (P < 0.05). In conclusion, MRI based on deep-learning super-resolution algorithms has great clinical value in evaluating the degree of brain injury in patients anesthetized with propofol and the protective effect of propofol on brain nerves.
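As a generic illustration of a multiscale reconstruction component for image super-resolution (parallel convolutions with different receptive fields fused through a residual connection), consider the following PyTorch sketch; it is not the study's specific optimized network.

```python
# Minimal sketch of a multiscale reconstruction block: three parallel convolutions
# with different kernel sizes are fused by a 1x1 convolution and added back to the
# input, so the block refines detail while preserving the original content.
import torch
import torch.nn as nn

class MultiScaleBlock(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.branch3 = nn.Conv2d(channels, channels, kernel_size=3, padding=1)
        self.branch5 = nn.Conv2d(channels, channels, kernel_size=5, padding=2)
        self.branch7 = nn.Conv2d(channels, channels, kernel_size=7, padding=3)
        self.fuse = nn.Conv2d(3 * channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        feats = torch.cat([self.act(self.branch3(x)),
                           self.act(self.branch5(x)),
                           self.act(self.branch7(x))], dim=1)
        return x + self.fuse(feats)  # residual connection keeps the low-resolution content
```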


2018, Vol 2018, pp. 1-7
Author(s): Qiaoliang Li, Yuzhen Xu, Zhewei Chen, Dexiang Liu, Shi-Ting Feng, ...

Objectives. To evaluate the application of a deep learning architecture based on the convolutional neural network (CNN) technique for automatic tumor segmentation on magnetic resonance imaging (MRI) of nasopharyngeal carcinoma (NPC). Materials and Methods. In this prospective study, 87 MRI scans containing tumor regions were acquired from newly diagnosed NPC patients. These 87 scans were augmented to more than 60,000 images. The proposed CNN is composed of two phases: feature representation and score map reconstruction. We designed a stepwise scheme to train the network. To evaluate its performance, we used case-by-case leave-one-out cross-validation (LOOCV). The ground truth of tumor contouring was acquired by the consensus of two experienced radiologists. Results. The mean values of the Dice similarity coefficient, percent match, and corresponding ratio with our method were 0.89 ± 0.05, 0.90 ± 0.04, and 0.84 ± 0.06, respectively, all of which were better than values reported in similar studies. Conclusions. We successfully established a segmentation method for NPC based on deep learning in contrast-enhanced MRI. Further clinical trials with dedicated algorithms are warranted.
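For reference, the overlap metrics reported above can be computed as in the sketch below; percent match is taken here as the fraction of the ground-truth tumor covered by the prediction, which may differ slightly from the paper's exact definition.

```python
# Minimal sketch of segmentation overlap metrics between a predicted tumor mask
# and a ground-truth mask (both boolean arrays of identical shape).
import numpy as np

def dice_and_percent_match(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    pred, gt = pred.astype(bool), gt.astype(bool)
    intersection = np.logical_and(pred, gt).sum()
    dice = 2.0 * intersection / (pred.sum() + gt.sum() + eps)      # Dice similarity coefficient
    percent_match = intersection / (gt.sum() + eps)                # fraction of ground truth covered
    return dice, percent_match
```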


2020, pp. 1-9
Author(s): Ewen David McAlpine, Liron Pantanowitz, Pamela M. Michelow

Background: The incorporation of digital pathology into routine pathology practice is becoming more widespread. Definite advantages exist with respect to the implementation of artificial intelligence (AI) and deep learning in pathology, including cytopathology. However, there are also unique challenges in this regard. Summary: This review discusses cytology-specific challenges, including the need to implement digital cytology before AI; the large file sizes and long acquisition times of whole-slide images in cytology; the routine use of multiple stains, such as Papanicolaou and Romanowsky stains; the lack of high-quality annotated datasets on which to train algorithms; and the considerable resources required, in terms of both computing infrastructure and skilled personnel, for data processing and storage. Global concerns regarding AI that also apply to cytology include the need for model validation and continued quality assurance, ethical issues such as the use of patient data in developing algorithms, the need for regulatory frameworks governing what types of data can be utilized, and the need to ensure cybersecurity during data collection, storage, and algorithm development. Key Messages: While AI will likely play a role in cytology practice in the future, applying this technology to cytology poses a unique set of challenges. A broad understanding of digital pathology and algorithm development is desirable to guide algorithm development, as is awareness of the potential pitfalls to avoid when incorporating the technology into practice.


Author(s): Byron Smith, Meyke Hermsen, Elizabeth Lesser, Deepak Ravichandar, Walter Kremers

Abstract: Deep learning has pushed the scope of digital pathology beyond simple digitization and telemedicine. The incorporation of these algorithms in routine workflow is on the horizon and may be a disruptive technology, reducing processing time and increasing the detection of anomalies. While the newest computational methods enjoy much of the press, incorporating deep learning into standard laboratory workflow requires many more steps than simply training and testing a model. Image analysis using deep learning methods often requires substantial pre- and post-processing in order to improve interpretation and prediction. As in any data processing pipeline, images must be prepared for modeling and the resultant predictions need further processing for interpretation. Examples include artifact detection, color normalization, image subsampling or tiling, and removal of errant predictions. Even after such preparation, analysis is complicated by image file size, typically several gigabytes when unpacked. This forces images to be tiled, meaning that a series of subsamples from the whole-slide image (WSI) is used in modeling. Herein, we review many of these methods as they pertain to the analysis of biopsy slides and discuss the multitude of unique issues that are part of the analysis of very large images.
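As an example of the tiling step described above, the following sketch iterates over a whole-slide image with the openslide-python package; the tile size and pyramid level are arbitrary choices, and in practice background filtering and color normalization would typically follow.

```python
# Minimal sketch of whole-slide image tiling with OpenSlide. Assumes the
# openslide-python package and a readable slide file; tile size and level are illustrative.
import openslide

def iter_tiles(slide_path: str, tile_size: int = 512, level: int = 0):
    """Yield RGB tiles covering the slide at the requested pyramid level."""
    slide = openslide.OpenSlide(slide_path)
    width, height = slide.level_dimensions[level]
    scale = round(slide.level_downsamples[level])
    for y in range(0, height, tile_size):
        for x in range(0, width, tile_size):
            # read_region expects level-0 coordinates for its location argument
            tile = slide.read_region((x * scale, y * scale), level, (tile_size, tile_size))
            yield tile.convert("RGB")
    slide.close()
```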


Life, 2021, Vol 11 (6), pp. 582
Author(s): Yuchai Wan, Zhongshu Zheng, Ran Liu, Zheng Zhu, Hongen Zhou, ...

Many computer-aided diagnosis methods for liver cancer based on medical images, especially ones using deep learning strategies, have been proposed. However, most such methods analyze the images at only a single scale, and the resulting deep learning models often lack explainability. In this paper, we propose a deep learning-based multi-scale and multi-level fusion approach of CNNs for liver lesion diagnosis on magnetic resonance images, termed MMF-CNN. We introduce a multi-scale representation strategy to encode both the local and semi-local complementary information of the images. To take advantage of the complementary information of the multi-scale representations, we propose a multi-level fusion method that hierarchically combines information at both the feature level and the decision level, generating a robust deep learning-based diagnostic classifier. We further explore the explanation of the network's diagnostic decisions by visualizing its areas of interest. A new scoring method is designed to evaluate whether the attention maps highlight the relevant radiological features. The explanation and visualization make the decision-making process of the deep neural network transparent to clinicians. We apply the proposed approach to various state-of-the-art deep learning architectures. The experimental results demonstrate the effectiveness of our approach.
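To make the multi-scale, feature-level fusion idea concrete, here is a minimal PyTorch sketch in which two backbone branches see the same lesion at different crop scales and their pooled features are concatenated before classification; it illustrates the general principle rather than the MMF-CNN architecture itself.

```python
# Minimal sketch of two-scale, feature-level fusion for lesion classification:
# one branch sees a tight crop of the lesion, the other a larger crop with
# surrounding context; their pooled features are concatenated and classified.
import torch
import torch.nn as nn
from torchvision import models

class TwoScaleFusionNet(nn.Module):
    def __init__(self, num_classes: int = 3):
        super().__init__()
        def backbone():
            net = models.resnet18()                                 # untrained backbone
            return nn.Sequential(*list(net.children())[:-1])        # drop fc, keep pooled features
        self.local_branch = backbone()      # tight crop around the lesion
        self.context_branch = backbone()    # larger crop including surrounding tissue
        self.classifier = nn.Linear(2 * 512, num_classes)

    def forward(self, local_crop: torch.Tensor, context_crop: torch.Tensor) -> torch.Tensor:
        f_local = self.local_branch(local_crop).flatten(1)
        f_context = self.context_branch(context_crop).flatten(1)
        return self.classifier(torch.cat([f_local, f_context], dim=1))
```

Decision-level fusion would instead average (or otherwise combine) the per-branch class probabilities, and the two strategies can be stacked hierarchically.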

