US-Based Deep Learning Model for Differentiating Hepatocellular Carcinoma (HCC) From Other Malignancy in Cirrhotic Patients

2021, Vol 11. Author(s): Hang Zhou, Tao Jiang, Qunying Li, Chao Zhang, Cong Zhang, et al.

The aim was to build a predictive model combining an ultrasonography-based deep learning model (US-DLM) and clinical features (Clin) to differentiate hepatocellular carcinoma (HCC) from other malignancy (OM) in cirrhotic patients. 112 patients with 120 HCCs and 60 patients with 61 OMs were included and randomly divided into training and test cohorts at a 4:1 ratio for developing and evaluating the US-DLM model, respectively. Significant Clin predictors of OM in the training cohort were combined with the US-DLM to build a nomogram-based predictive model (US-DLM+Clin). The diagnostic performance of US-DLM and US-DLM+Clin was compared with that of contrast-enhanced magnetic resonance imaging (MRI) Liver Imaging Reporting and Data System category M (MRI LR-M). US-DLM was the strongest independent predictor of OM, followed by clinical information, including high cancer antigen 199 (CA199) level and female sex. US-DLM achieved an AUC of 0.74 in the test cohort, comparable with that of MRI LR-M (AUC=0.84, p=0.232). US-DLM+Clin likewise showed an AUC (0.81) similar to that of LR-M+Clin (0.83, p>0.05). For identifying OM in the test set, US-DLM+Clin achieved higher specificity but lower sensitivity than LR-M+Clin (specificity: 82.6% vs. 73.9%, p=0.007; sensitivity: 78.6% vs. 92.9%, p=0.006). The US-DLM+Clin model is valuable for differentiating HCC from OM in the setting of cirrhosis.
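
To make the US-DLM+Clin idea concrete, the following is a minimal sketch (not the authors' code) of how a deep-learning imaging score can be combined with clinical predictors via logistic regression, the usual statistical backbone of a nomogram. All variable names and data are hypothetical, synthetic stand-ins.

```python
# Illustrative sketch: fuse a deep-learning ultrasound score with two clinical
# predictors in a logistic regression, then read off AUC and coefficients.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 180  # roughly the cohort size reported in the abstract

us_dlm_score = rng.uniform(0, 1, n)      # hypothetical output of the imaging model
ca199_high   = rng.integers(0, 2, n)     # CA199 above threshold (0/1), hypothetical
female_sex   = rng.integers(0, 2, n)     # patient sex (0/1), hypothetical
X = np.column_stack([us_dlm_score, ca199_high, female_sex])
y = (us_dlm_score + 0.3 * ca199_high + 0.2 * female_sex
     + rng.normal(0, 0.3, n) > 0.9).astype(int)   # synthetic OM labels

model = LogisticRegression().fit(X, y)   # "US-DLM+Clin" style model
prob = model.predict_proba(X)[:, 1]
print("AUC on the synthetic data:", round(roc_auc_score(y, prob), 2))
# Nomogram points are proportional to each coefficient times its feature range.
print("coefficients:", dict(zip(["US-DLM", "CA199 high", "female"], model.coef_[0])))
```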

2021. Author(s): Shi Feng, Xiaotian Yu, Wenjie Liang, Xuejie Li, Weixiang Zhong, et al.

2020, Vol 3 (9), pp. e2015626. Author(s): George N. Ioannou, Weijing Tang, Lauren A. Beste, Monica A. Tincopa, Grace L. Su, et al.

JMIR Cancer, 10.2196/19812, 2021, Vol 7 (4), pp. e19812. Author(s): Chia-Wei Liang, Hsuan-Chia Yang, Md Mohaimenul Islam, Phung Anh Alex Nguyen, Yi-Ting Feng, et al.

Background: Hepatocellular carcinoma (HCC), also known as hepatoma, is the third leading cause of cancer mortality globally. Early detection of HCC aids treatment and increases survival rates. Objective: The aim of this study was to develop a deep learning model that uses the trend and severity of each medical event in the electronic health record to accurately predict which patients will be diagnosed with HCC within 1 year. Methods: Patients with HCC were identified from the National Health Insurance Research Database of Taiwan between 1999 and 2013. To be included, patients with HCC had to be registered as cancer patients in the catastrophic illness file and to have received an HCC diagnosis during an inpatient admission. Control cases (non-HCC patients) were randomly sampled from the same database. We used age, gender, diagnosis codes, drug codes, and time information as input variables to a convolutional neural network model to predict patients with HCC. We also inspected the most highly weighted variables in the model and compared them with their odds ratios for HCC to understand how the predictive model works. Results: We included 47,945 individuals, 9,553 of whom were patients with HCC. The area under the receiver operating characteristic curve (AUROC) of the model for predicting HCC risk 1 year in advance was 0.94 (95% CI 0.937-0.943), with a sensitivity of 0.869 and a specificity of 0.865. The AUROCs for predicting HCC 7 days, 6 months, 1 year, 2 years, and 3 years in advance were 0.96, 0.94, 0.94, 0.91, and 0.91, respectively. Conclusions: The findings of this study show that a convolutional neural network model has strong potential to predict the risk of HCC 1 year in advance from minimal features available in electronic health records.
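
The sketch below illustrates, under stated assumptions, one plausible shape for such a model: a 1D convolutional network over embedded sequences of diagnosis/drug codes, with age and sex appended before the classifier. The vocabulary size, sequence length, and channel widths are illustrative and not taken from the paper.

```python
# Hedged sketch of a CNN over EHR event-code sequences (PyTorch assumed).
import torch
import torch.nn as nn

class EHRConvNet(nn.Module):
    def __init__(self, vocab_size=20000, emb_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.conv = nn.Sequential(
            nn.Conv1d(emb_dim, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),           # pool over the event sequence
        )
        self.head = nn.Linear(128 + 2, 1)      # +2 for age and sex

    def forward(self, codes, age, sex):
        x = self.embed(codes).transpose(1, 2)  # (batch, emb_dim, seq_len)
        x = self.conv(x).squeeze(-1)           # (batch, 128)
        x = torch.cat([x, age.unsqueeze(1), sex.unsqueeze(1)], dim=1)
        return torch.sigmoid(self.head(x))     # probability of HCC within 1 year

# Smoke test with random inputs.
net = EHRConvNet()
codes = torch.randint(1, 20000, (4, 200))      # 200 coded events per patient
prob = net(codes, torch.rand(4), torch.randint(0, 2, (4,)).float())
print(prob.shape)                              # torch.Size([4, 1])
```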


2020. Author(s): Myeongkyun Kang, Philip Chikontwe, Miguel Luna, Kyung Soo Hong, Jong Geol Jang, et al.

As the number of COVID-19 patients has increased worldwide, many efforts have been made to find common patterns in CT images of COVID-19 patients and to relate these patterns to other clinical information. The aim of this paper is to propose a new method for finding the patterns observed on patients' CT scans and then using these patterns for disease and severity diagnosis. For the experiment, we performed a retrospective cohort study of 170 patients with confirmed COVID-19 or bacterial pneumonia treated at Yeungnam University Hospital in Daegu, Korea. We extracted lesions inside the lungs from the CT images and applied a deep learning model to classify whether each lesion came from a COVID-19 patient or a bacterial pneumonia patient. From these experiments, we identified 20 patterns that have a major effect on the classification performance of the deep learning model. Crazy-paving emerged as a major pattern of bacterial pneumonia, while ground-glass opacities (GGOs) in the peripheral lungs emerged as a major pattern of COVID-19. Diffuse GGOs in the central and peripheral lungs were a key factor for severity classification. The proposed method achieved an accuracy of 91.2% for classifying COVID-19 versus bacterial pneumonia and 95% for severity classification. Chest CT analysis with the constructed lesion clusters revealed well-known COVID-19 CT manifestations comparable to manual CT analysis. Moreover, the constructed patient-level histogram, with or without radiomics features, proved feasible and improved accuracy for both disease and severity classification, with key clinical implications.
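
A small sketch of the pattern/histogram idea described above (not the authors' pipeline): lesion feature vectors, for example CNN embeddings or radiomics, are clustered into a fixed number of "patterns", and each patient is then represented by a histogram of their lesions' cluster labels. The feature dimensions and patient counts below are synthetic placeholders.

```python
# Cluster lesion-level features into patterns, then build patient histograms.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
lesion_features = rng.normal(size=(500, 32))   # one row per extracted lesion
lesion_patient  = rng.integers(0, 20, 500)     # which patient each lesion belongs to

n_patterns = 20                                # the abstract reports 20 patterns
kmeans = KMeans(n_clusters=n_patterns, n_init=10, random_state=0)
pattern_ids = kmeans.fit_predict(lesion_features)

def patient_histogram(pid):
    # Normalised counts of each pattern among a single patient's lesions.
    counts = np.bincount(pattern_ids[lesion_patient == pid], minlength=n_patterns)
    return counts / max(counts.sum(), 1)

histograms = np.vstack([patient_histogram(p) for p in range(20)])
print(histograms.shape)                        # (20 patients, 20 patterns)
# These histograms (optionally concatenated with radiomics) would feed a
# downstream classifier for disease and severity.
```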


2020, Vol 65 (3), pp. 035014. Author(s): Cong Zhu, Steven H Lin, Xiaoqian Jiang, Yang Xiang, Zayne Belal, et al.

2021, Vol 11. Author(s): Hang Zhou, Jiawei Sun, Tao Jiang, Jiaqi Wu, Qunying Li, et al.

Purpose: To establish a predictive model incorporating clinical features and the contrast-enhanced ultrasound Liver Imaging Reporting and Data System (CEUS LI-RADS) for estimating microvascular invasion (MVI) in hepatocellular carcinoma (HCC) patients. Methods: In this retrospective study, 127 HCC patients from two hospitals were allocated to a training cohort (n=98) and a test cohort (n=29) based on a cutoff time point of June 2020. Multivariate regression analysis was performed to identify independent indicators for developing nomogram-based predictive models. The area under the receiver operating characteristic curve (AUC) was determined to establish the diagnostic performance of the different predictive models, and their sensitivities and specificities at the cutoff nomogram value were compared. Results: In the training cohort, clinical information (larger tumor size, higher AFP level) and CEUS LR-M were significantly correlated with the presence of MVI (all p<0.05). By incorporating clinical information and CEUS LR-M, the predictive model (LR-M+Clin) achieved strong diagnostic performance (AUC=0.80 and 0.84) in both cohorts at a nomogram cutoff score of 89. In the test cohort, the sensitivity of LR-M+Clin for predicting MVI was higher than that of the clinical model alone (86.7% vs. 46.7%, p=0.027), while the specificities were 78.6% and 85.7% (p=0.06), respectively. In addition, LR-M+Clin exhibited AUC and specificity similar to, but sensitivity (86.7%) significantly higher than, those of LR-M alone and LR-5(No)+Clin (both sensitivities 73.3%, both p=0.048). Conclusion: The predictive model incorporating CEUS LR-M and clinical features was able to predict the MVI status of HCC and is a potentially reliable preoperative tool for informing treatment.
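
As a hedged illustration of the multivariate step (not the study code), the sketch below fits a logistic model on tumour size, AFP status, and CEUS LR-M status, and chooses a probability cutoff via the Youden index, which is the usual way a nomogram cutoff score is derived. All data are synthetic and the predictors are stand-ins.

```python
# Multivariate logistic model for MVI with a Youden-index cutoff (synthetic data).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_curve, roc_auc_score

rng = np.random.default_rng(1)
n = 127
size_cm  = rng.uniform(1, 8, n)                # tumour size in cm
afp_high = rng.integers(0, 2, n)               # AFP above threshold (0/1)
lr_m     = rng.integers(0, 2, n)               # CEUS LR-M category (0/1)
X = np.column_stack([size_cm, afp_high, lr_m])
mvi = (0.4 * size_cm + afp_high + 1.5 * lr_m + rng.normal(0, 1, n) > 3).astype(int)

model = LogisticRegression().fit(X, mvi)
prob = model.predict_proba(X)[:, 1]
print("AUC:", round(roc_auc_score(mvi, prob), 2))

fpr, tpr, thresholds = roc_curve(mvi, prob)
cutoff = thresholds[np.argmax(tpr - fpr)]      # Youden index maximiser
print("chosen probability cutoff:", round(float(cutoff), 2))
```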


JHEP Reports, 2020, Vol 2 (6), pp. 100175. Author(s): Joon Yeul Nam, Dong Hyun Sinn, Junho Bae, Eun Sun Jang, Jin-Wook Kim, et al.

2021, Vol 1 (1). Author(s): Youqing Mu, Hamid R. Tizhoosh, Rohollah Moosavi Tayebi, Catherine Ross, Monalisa Sur, et al.

Background: Pathology synopses consist of semi-structured or unstructured text summarizing visual information obtained by observing human tissue. Experts with high domain-specific knowledge write and interpret these synopses to extract tissue semantics and formulate a diagnosis in the context of ancillary testing and clinical information. The limited number of specialists available to interpret pathology synopses restricts the utility of the inherent information. Deep learning offers a tool for information extraction and automatic feature generation from complex datasets. Methods: Using an active learning approach, we developed a set of semantic labels for bone marrow aspirate pathology synopses. We then trained a transformer-based deep-learning model to map these synopses to one or more semantic labels, and extracted learned embeddings (i.e., meaningful attributes) from the model's hidden layer. Results: Here we demonstrate that, with a small amount of training data, a transformer-based natural language model can extract embeddings from pathology synopses that capture diagnostically relevant information. On average, these embeddings can be used to generate semantic labels mapping patients to probable diagnostic groups with a micro-average F1 score of 0.779 ± 0.025. Conclusions: We provide a generalizable deep learning model and approach to unlock the semantic information inherent in pathology synopses toward improved diagnostics, biodiscovery, and AI-assisted computational pathology.
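
A minimal sketch of this kind of setup, under stated assumptions: a pretrained transformer encoder (here bert-base-uncased via the Hugging Face transformers library, which the paper may or may not have used) with a multi-label head over semantic labels; the pooled hidden state doubles as the synopsis embedding. The label count and example synopsis text are hypothetical.

```python
# Multi-label transformer classifier that also exposes the hidden-layer embedding.
import torch
import torch.nn as nn
from transformers import AutoTokenizer, AutoModel

class SynopsisLabeler(nn.Module):
    def __init__(self, n_labels=10, backbone="bert-base-uncased"):
        super().__init__()
        self.encoder = AutoModel.from_pretrained(backbone)
        self.head = nn.Linear(self.encoder.config.hidden_size, n_labels)

    def forward(self, **tokens):
        hidden = self.encoder(**tokens).last_hidden_state[:, 0]  # [CLS] embedding
        return self.head(hidden), hidden   # label logits, plus the embedding itself

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = SynopsisLabeler(n_labels=10)
batch = tokenizer(["Hypercellular marrow with increased blasts."],
                  return_tensors="pt", padding=True, truncation=True)
logits, embedding = model(**batch)
labels = torch.sigmoid(logits) > 0.5       # one synopsis can map to several labels
print(labels.shape, embedding.shape)       # (1, 10) and (1, 768)
# Training would use nn.BCEWithLogitsLoss against multi-hot label vectors.
```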


2021, Vol 11 (1). Author(s): Sunghoon Joo, Eun Sook Ko, Soonhwan Kwon, Eunjoo Jeon, Hyungsik Jung, et al.

The achievement of a pathologic complete response (pCR) has been considered a metric for the success of neoadjuvant chemotherapy (NAC) and a powerful surrogate indicator of the risk of recurrence and long-term survival. This study aimed to develop a multimodal deep learning model that combines clinical information and pretreatment MR images to predict pCR to NAC in patients with breast cancer. The retrospective study cohort consisted of 536 patients with invasive breast cancer who underwent preoperative NAC. We developed a deep learning model that fuses high-dimensional MR image features and clinical information for pretreatment prediction of pCR to NAC in breast cancer. The proposed deep learning model, trained on all data types (clinical information, T1-weighted subtraction images, and T2-weighted images), shows better performance, with an area under the curve (AUC) of 0.888, than the model using only clinical information (AUC = 0.827, P < 0.05). Our results demonstrate that the multimodal fusion approach, using deep learning with both clinical information and MR images, achieves higher prediction performance than the deep learning model without fusion. Deep learning can integrate pretreatment MR images with clinical information to improve pCR prediction performance.
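
The following is an illustrative fusion sketch, not the published architecture: image features from a small CNN encoder are concatenated with a clinical feature vector before a shared classifier head (late fusion). Channel widths, image sizes, and the clinical feature count are assumptions for the example.

```python
# Multimodal late-fusion model: MRI encoder + clinical vector -> pCR probability.
import torch
import torch.nn as nn

class FusionPCRModel(nn.Module):
    def __init__(self, n_clinical=8):
        super().__init__()
        self.image_encoder = nn.Sequential(
            nn.Conv2d(2, 16, 3, padding=1), nn.ReLU(),  # 2 channels: T1 sub + T2
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),       # global image descriptor
        )
        self.classifier = nn.Sequential(
            nn.Linear(16 + n_clinical, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, mri, clinical):
        img_feat = self.image_encoder(mri)               # (batch, 16)
        fused = torch.cat([img_feat, clinical], dim=1)   # late fusion by concatenation
        return torch.sigmoid(self.classifier(fused))     # probability of pCR

model = FusionPCRModel()
mri = torch.randn(4, 2, 128, 128)     # stacked T1 subtraction and T2 slices
clinical = torch.randn(4, 8)          # e.g., age, subtype, receptor status (encoded)
print(model(mri, clinical).shape)     # torch.Size([4, 1])
```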

