Development of Novel Deep Multimodal Representation Learning‐based Model for the Differentiation of Liver Tumors on B‐Mode Ultrasound Images

Author(s): Masaya Sato, Tamaki Kobayashi, Yoko Soroida, Takashi Tanaka, Takuma Nakatsuka, et al.
2021

Abstract
Recently, multimodal representation learning for images together with other information such as numbers or language has attracted much attention, because it allows latent features from different modalities to be combined in a single distribution. The aim of the current study was to analyze the diagnostic performance of a deep multimodal representation model that integrates the tumor image, patient background, and blood biomarkers for the differentiation of liver tumors observed on B-mode ultrasonography (US). First, we applied supervised learning with a convolutional neural network (CNN) to 972 liver nodules in the training and development sets (479 benign and 493 malignant nodules) to develop a predictive model from segmented B-mode tumor images. We then applied a deep multimodal representation model to integrate information on patient background and blood biomarkers with the B-mode images, and investigated the performance of the models in an independent test set of 108 liver nodules (53 benign and 55 malignant tumors). Using the segmented B-mode images alone, the diagnostic accuracy and area under the curve (AUC) were 68.52% and 0.721, respectively. As information on patient background (such as age and sex) and blood biomarkers was integrated, the diagnostic performance increased in a stepwise manner. The diagnostic accuracy and AUC of the full multimodal deep learning model, which integrated the B-mode tumor image with patient age, sex, AST, ALT, platelet count, and albumin, reached 96.30% and 0.994, respectively. Integrating patient background and blood biomarkers with the US image through multimodal representation learning thus outperformed the CNN model using US images alone. We expect that the deep multimodal representation model could be a feasible and acceptable tool to effectively support the definitive diagnosis of liver tumors on B-mode US in daily clinical practice.
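The abstract does not include an implementation, but the kind of fusion it describes (a CNN branch for the segmented B-mode image combined with tabular clinical covariates) can be illustrated with a short sketch. The following is a minimal, hypothetical PyTorch example; the class name, layer sizes, and fusion-by-concatenation design are assumptions for illustration only and do not reproduce the authors' architecture.

```python
# Minimal sketch of a multimodal classifier fusing CNN image features with
# tabular clinical features (age, sex, AST, ALT, platelet count, albumin).
# All names and layer sizes are hypothetical, not the authors' architecture.
import torch
import torch.nn as nn

class MultimodalLiverClassifier(nn.Module):
    def __init__(self, n_tabular: int = 6):
        super().__init__()
        # Image branch: small CNN over a single-channel B-mode patch.
        self.image_branch = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),  # -> (batch, 32)
        )
        # Tabular branch: MLP over standardized clinical values.
        self.tabular_branch = nn.Sequential(
            nn.Linear(n_tabular, 16), nn.ReLU(),
        )
        # Fusion head: concatenate both latent vectors, predict benign/malignant.
        self.head = nn.Sequential(
            nn.Linear(32 + 16, 32), nn.ReLU(),
            nn.Linear(32, 2),
        )

    def forward(self, image: torch.Tensor, tabular: torch.Tensor) -> torch.Tensor:
        z = torch.cat([self.image_branch(image), self.tabular_branch(tabular)], dim=1)
        return self.head(z)

# Usage: a batch of 128x128 B-mode patches plus six clinical covariates.
model = MultimodalLiverClassifier()
logits = model(torch.randn(4, 1, 128, 128), torch.randn(4, 6))
print(logits.shape)  # torch.Size([4, 2])
```

In this sketch the two branches are simply concatenated before the classification head; the deep multimodal representation model described in the abstract instead merges the modalities into a single latent distribution, which the same two-branch layout could be extended toward.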


2020, Vol 6 (3), pp. 284-287
Author(s): Jannis Hagenah, Mohamad Mehdi, Floris Ernst

Abstract
Aortic root aneurysm is treated by replacing the dilated root with a grafted prosthesis that mimics the native root morphology of the individual patient. The challenge in predicting the optimal prosthesis size arises from the highly patient-specific geometry as well as the absence of information on the original healthy root, so the estimate must be made from the available pathological data alone. In this paper, we show that representation learning with Conditional Variational Autoencoders can turn the distorted geometry of the aortic root into smoother shapes while preserving the information on the individual anatomy. We evaluated this method on ultrasound images of the porcine aortic root alongside their labels. The results show a highly realistic resemblance in shape and size to the ground-truth images, and the similarity index improved noticeably compared with the pathological images. This provides a promising technique for planning individual aortic root replacement.
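As a concrete illustration of the technique named in the abstract, here is a minimal, hypothetical sketch of a Conditional Variational Autoencoder in PyTorch operating on flattened shape vectors with a small condition vector. All dimensions, names, and the conditioning scheme are assumptions for illustration and are not the authors' exact setup, which works on ultrasound images and their labels.

```python
# Minimal sketch of a Conditional Variational Autoencoder (CVAE) on
# flattened shape vectors; all dimensions and names are hypothetical.
import torch
import torch.nn as nn
import torch.nn.functional as F

class CVAE(nn.Module):
    def __init__(self, x_dim: int = 256, c_dim: int = 2, z_dim: int = 16):
        super().__init__()
        # Encoder sees the shape vector x together with the condition c.
        self.enc = nn.Sequential(nn.Linear(x_dim + c_dim, 128), nn.ReLU())
        self.mu = nn.Linear(128, z_dim)
        self.logvar = nn.Linear(128, z_dim)
        # Decoder reconstructs x from the latent code z and the condition c.
        self.dec = nn.Sequential(
            nn.Linear(z_dim + c_dim, 128), nn.ReLU(),
            nn.Linear(128, x_dim),
        )

    def forward(self, x: torch.Tensor, c: torch.Tensor):
        h = self.enc(torch.cat([x, c], dim=1))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: sample z while keeping gradients.
        z = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.dec(torch.cat([z, c], dim=1)), mu, logvar

def cvae_loss(x_hat, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    rec = F.mse_loss(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return rec + kl
```

The key elements are the reparameterization trick in the encoder and the condition vector c fed to both encoder and decoder; decoding a pathological shape's latent code under a different condition is the kind of operation that yields the smoothed, anatomy-preserving shapes described above.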


