Integrative Model of CT Imaging and Clinical Features Using Attentional Multi-view Convolutional Neural Network (AM-CNN) for Prediction of Esophageal Fistula in Esophageal Cancer

2020 ◽  
Vol 108 (3) ◽  
pp. e637-e638
Author(s):  
Y. Xu ◽  
H. Cui ◽  
B. Fan ◽  
B. Zou ◽  
W. Li ◽  
...  
2021 ◽  
Vol 11 ◽  
Author(s):  
Yiyue Xu ◽  
Hui Cui ◽  
Taotao Dong ◽  
Bing Zou ◽  
Bingjie Fan ◽  
...  

Background and Purpose: This study aims to develop a risk model that predicts esophageal fistula in esophageal cancer (EC) patients by learning from both clinical data and computerized tomography (CT) radiomic features.

Materials and Methods: In this retrospective study, CT images and clinical data were collected from 186 esophageal fistula patients and 372 controls (1:2 matched by the diagnosis time of EC, sex, marriage, and race). All patients had esophageal cancer and did not receive esophageal surgery. Seventy percent of the patients were randomly assigned to the training set and 30% to the validation set. We first use a novel attentional convolutional neural network to extract radiographic descriptors from nine view planes of the contextual CT, the segmented tumor, and neighboring structures. Clinical factors, including general, diagnostic, pathologic, therapeutic, and hematological parameters, are then fed into a neural network to obtain a high-level latent representation. The radiographic descriptors and latent clinical representations are finally combined by a fully connected layer for patient-level risk prediction with a softmax classifier.

Results: 512 deep radiographic features and 32 clinical features were extracted. The integrative deep learning model achieved a C-index of 0.901, sensitivity of 0.835, and specificity of 0.918 on the validation set, outperforming non-integrative models using CT imaging alone (C-index = 0.857) or clinical data alone (C-index = 0.780).

Conclusion: The integration of radiomic descriptors from CT with clinical data significantly improved esophageal fistula prediction. We suggest that this model has the potential to support individualized stratification and treatment planning for EC patients.
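As a rough illustration of the fusion architecture described above, the sketch below (written in PyTorch, which the abstract does not specify) encodes nine CT view planes with a shared CNN, weights the per-view descriptors with an attention layer, embeds the 32 clinical parameters with a small MLP, and joins both branches in a fully connected softmax classifier. All layer sizes and the backbone itself are illustrative assumptions, not the authors' published AM-CNN.

# Hypothetical sketch of the described fusion model: shared CNN over nine CT views,
# attention over per-view descriptors, MLP over 32 clinical features, fully connected
# softmax classifier for patient-level fistula risk. Sizes are illustrative assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiViewAttentionCNN(nn.Module):
    def __init__(self, n_views=9, n_clinical=32, feat_dim=512, n_classes=2):
        super().__init__()
        # Shared 2D CNN backbone applied to every view plane (assumed single-channel CT slices).
        self.backbone = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Attention weights over the nine view descriptors.
        self.attn = nn.Linear(feat_dim, 1)
        # Clinical branch: 32 parameters -> latent representation.
        self.clinical = nn.Sequential(nn.Linear(n_clinical, 64), nn.ReLU(),
                                      nn.Linear(64, 32), nn.ReLU())
        # Fusion and patient-level classifier.
        self.classifier = nn.Linear(feat_dim + 32, n_classes)

    def forward(self, views, clinical):
        # views: (batch, n_views, 1, H, W); clinical: (batch, n_clinical)
        b, v = views.shape[:2]
        feats = self.backbone(views.flatten(0, 1)).view(b, v, -1)   # (b, v, feat_dim)
        weights = F.softmax(self.attn(feats), dim=1)                # attention over views
        radiographic = (weights * feats).sum(dim=1)                 # attended 512-d descriptor
        fused = torch.cat([radiographic, self.clinical(clinical)], dim=1)
        # Softmax probabilities, matching the abstract's description; for training with
        # nn.CrossEntropyLoss one would return the raw logits instead.
        return F.softmax(self.classifier(fused), dim=-1)

# Minimal usage check with random tensors.
model = MultiViewAttentionCNN()
probs = model(torch.randn(4, 9, 1, 64, 64), torch.randn(4, 32))
print(probs.shape)  # torch.Size([4, 2])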


2020 ◽  
Vol 7 ◽  
Author(s):  
Hayden Gunraj ◽  
Linda Wang ◽  
Alexander Wong

The coronavirus disease 2019 (COVID-19) pandemic continues to have a tremendous impact on patients and healthcare systems around the world. In the fight against this novel disease, there is a pressing need for rapid and effective screening tools to identify patients infected with COVID-19. To this end, CT imaging has been proposed as one of the key screening methods, used as a complement to RT-PCR testing, particularly when patients undergo routine CT scans for non-COVID-19-related reasons, when patients have worsening respiratory status or develop complications that require expedited care, or when patients are suspected to be COVID-19-positive but have negative RT-PCR test results. Early studies on CT-based screening have reported abnormalities in chest CT images that are characteristic of COVID-19 infection, but these abnormalities may be difficult to distinguish from those caused by other lung conditions. Motivated by this, in this study we introduce COVIDNet-CT, a deep convolutional neural network architecture tailored for detection of COVID-19 cases from chest CT images via a machine-driven design exploration approach. Additionally, we introduce COVIDx-CT, a benchmark CT image dataset derived from CT imaging data collected by the China National Center for Bioinformation, comprising 104,009 images across 1,489 patient cases. Furthermore, in the interest of reliability and transparency, we leverage an explainability-driven performance validation strategy to investigate the decision-making behavior of COVIDNet-CT and to ensure that it makes predictions based on relevant indicators in CT images. Both COVIDNet-CT and the COVIDx-CT dataset are available to the general public in an open-source, open-access manner as part of the COVID-Net initiative. While COVIDNet-CT is not yet a production-ready screening solution, we hope that releasing the model and dataset will encourage researchers, clinicians, and citizen data scientists alike to leverage and build upon them.
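The explainability-driven validation mentioned above can be illustrated with a generic gradient-based attribution check. The sketch below computes a Grad-CAM style heat map for a toy CT classifier; it is not the COVID-Net team's own explainability tooling, and the model, class count, and input size are assumptions made only for demonstration.

# Illustrative sketch of explainability-driven validation: compute a Grad-CAM style
# heat map for a CT classifier so a reviewer can confirm that high-attribution regions
# fall inside clinically relevant areas. This is NOT the COVID-Net methodology; it only
# demonstrates the general idea of auditing where a model looks.
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyCTClassifier(nn.Module):
    def __init__(self, n_classes=3):  # e.g. normal / pneumonia / COVID-19 (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, n_classes))

    def forward(self, x):
        self.fmap = self.features(x)   # keep the last feature maps for attribution
        self.fmap.retain_grad()
        return self.head(self.fmap)

def grad_cam(model, image, target_class):
    """Return a coarse heat map of the evidence the model used for target_class."""
    logits = model(image)
    logits[0, target_class].backward()
    weights = model.fmap.grad.mean(dim=(2, 3), keepdim=True)   # per-channel importance
    cam = F.relu((weights * model.fmap).sum(dim=1))            # weighted sum of feature maps
    return cam / (cam.max() + 1e-8)

model = TinyCTClassifier()
scan = torch.randn(1, 1, 128, 128)          # stand-in for a chest CT slice
heatmap = grad_cam(model, scan, target_class=2)
print(heatmap.shape)  # torch.Size([1, 64, 64]); upsample and overlay on the scan for review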


2018 ◽  
Vol 31 (Supplement_1) ◽  
pp. 140-140
Author(s):  
Po-Kuei Hsu ◽  
Joe Yeh

Abstract

Background: Lymphovascular invasion, characterized by penetration of tumor cells into the peritumoural vascular or lymphatic network, and perineural invasion, characterized by tumor cell involvement of the tissue surrounding nerve fibers, are both considered important steps in tumor spreading and are known poor prognostic factors in esophageal cancer. However, information on these histological features is unavailable until pathological examination of surgically resected specimens. We aim to predict the presence or absence of these factors from positron emission tomography images during the staging workup.

Methods: Pre-treatment positron emission tomography images and pathological reports of 278 patients who underwent esophagectomy for squamous cell carcinoma were collected. A stepwise convolutional neural network was constructed to distinguish patients with either lymphovascular invasion or perineural invasion from those without.

Results: 248 randomly selected patients were included in the training set, and a stepwise approach was used to train our custom neural network. The performance of the fine-tuned neural network was tested in another, independent 30 patients. The accuracy in predicting the presence or absence of either lymphovascular invasion or perineural invasion was 66.7% (20 of 30 correct).

Conclusion: Predicting the presence or absence of poor prognostic histological factors, i.e., lymphovascular invasion or perineural invasion, from pre-treatment positron emission tomography images alone with a deep convolutional neural network is possible. Deep learning may identify patients with poor prognosis and enable personalized medicine in esophageal cancer.

Disclosure: All authors have declared no conflicts of interest.
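The abstract does not describe its stepwise training in detail; one common interpretation is staged fine-tuning of a pretrained CNN, first training a new binary head on a frozen backbone and then unfreezing the whole network at a lower learning rate. The sketch below follows that pattern with a torchvision ResNet-18 stand-in; the backbone choice, hyperparameters, and data loader are assumptions, not the authors' published setup.

# Staged fine-tuning sketch for binary prediction (LVI/PNI present vs. absent) from PET.
# ResNet-18 is a stand-in backbone; single-channel PET slices would need to be repeated
# to three channels or the first convolution adapted.
import torch
import torch.nn as nn
from torchvision import models

def build_model():
    backbone = models.resnet18(weights=None)               # in practice, load pretrained weights
    backbone.fc = nn.Linear(backbone.fc.in_features, 2)    # binary output head
    return backbone

def train_stage(model, loader, epochs, lr, train_backbone):
    # Stage 1: freeze everything except the new head; Stage 2: train the full network.
    for name, p in model.named_parameters():
        p.requires_grad = train_backbone or name.startswith("fc")
    optim = torch.optim.Adam((p for p in model.parameters() if p.requires_grad), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for images, labels in loader:                      # PET slices and binary labels
            optim.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optim.step()

model = build_model()
# Hypothetical usage with a user-supplied train_loader:
# train_stage(model, train_loader, epochs=10, lr=1e-3, train_backbone=False)  # head only
# train_stage(model, train_loader, epochs=10, lr=1e-4, train_backbone=True)   # full fine-tune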


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Yunhui Zhao ◽  
Junkai Xu ◽  
Qisong Chen

An intelligent diagnosis system for esophageal cancer was developed to improve the recognition rate of esophageal cancer image diagnosis and the efficiency of physicians, as well as to raise the standard of esophageal cancer image diagnosis in primary care institutions. In this paper, by collecting medical images related to esophageal cancer over the years, we establish a convolutional neural network-based intelligent diagnosis system for esophageal cancer images through the steps of data annotation, image preprocessing, data augmentation, and deep learning, to assist doctors in diagnosis. The system has been successfully deployed in hospitals and widely praised by frontline doctors. It helps primary care physicians improve the overall accuracy of esophageal cancer diagnosis and reduce the risk of death for esophageal cancer patients. We also note that the efficacy of radiation therapy for esophageal cancer can be influenced by many factors, and that clinicians should take these factors into account to improve treatment outcomes and patient prognosis.
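The preprocessing and data-augmentation steps mentioned above might look like the sketch below, which assumes class-labeled image folders and uses standard torchvision transforms; the directory layout, transform choices, and parameters are placeholders rather than the authors' published pipeline.

# Minimal augmentation/preprocessing sketch for class-labeled esophageal cancer images.
from torchvision import datasets, transforms
from torch.utils.data import DataLoader

train_transform = transforms.Compose([
    transforms.Resize((256, 256)),
    transforms.RandomHorizontalFlip(),                       # simple augmentation
    transforms.RandomRotation(10),
    transforms.ColorJitter(brightness=0.1, contrast=0.1),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5, 0.5, 0.5], std=[0.5, 0.5, 0.5]),
])

# Expects a hypothetical directory layout such as data/train/<class_name>/image.png:
# train_set = datasets.ImageFolder("data/train", transform=train_transform)
# train_loader = DataLoader(train_set, batch_size=32, shuffle=True, num_workers=4)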


2021 ◽  
Vol 70 ◽  
pp. 102001
Author(s):  
Tianling Lyu ◽  
Wei Zhao ◽  
Yinsu Zhu ◽  
Zhan Wu ◽  
Yikun Zhang ◽  
...  
