Automatic Report Generation for Chest X-Ray Images: A Multilevel Multi-attention Approach

Author(s):  
Gaurav O. Gajbhiye ◽  
Abhijeet V. Nandedkar ◽  
Ibrahima Faye
Keyword(s):  
X Ray ◽  

Author(s):  
An Yan ◽  
Zexue He ◽  
Xing Lu ◽  
Jiang Du ◽  
Eric Chang ◽  
...  

Author(s):  
Fenglin Liu ◽  
Changchang Yin ◽  
Xian Wu ◽  
Shen Ge ◽  
Ping Zhang ◽  
...  
Keyword(s):  
X Ray ◽  

2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Alexandros Karargyris ◽  
Satyananda Kashyap ◽  
Ismini Lourentzou ◽  
Joy T. Wu ◽  
Arjun Sharma ◽  
...  

We developed a rich dataset of Chest X-Ray (CXR) images to assist investigators in artificial-intelligence research. The data were collected using an eye-tracking system while a radiologist reviewed and reported on 1,083 CXR images. The dataset contains the following aligned data: the CXR image, the transcribed radiology report text, the radiologist's dictation audio, and the eye-gaze coordinate data. We hope this dataset can contribute to various areas of research, particularly explainable and multimodal deep learning/machine learning methods. Furthermore, investigators in disease classification and localization, automated radiology report generation, and human-machine interaction can benefit from these data. We report deep learning experiments that use the attention maps produced from the eye-gaze data to demonstrate the potential utility of this dataset.
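
The abstract mentions deep learning experiments driven by attention maps derived from the eye-gaze data. Below is a minimal sketch of how fixation coordinates could be turned into a soft attention map; the (x, y, duration) tuple format and the function name are illustrative assumptions, not the dataset's actual schema.

import numpy as np
from scipy.ndimage import gaussian_filter

def gaze_to_attention_map(fixations, image_shape, sigma=32.0):
    # Accumulate fixations into a dwell-time-weighted heatmap, then
    # smooth with a Gaussian to approximate the foveal region.
    # fixations: iterable of (x, y, duration) in pixel coordinates (assumed format).
    # image_shape: (height, width) of the CXR image.
    heatmap = np.zeros(image_shape, dtype=np.float64)
    for x, y, duration in fixations:
        row, col = int(round(y)), int(round(x))
        if 0 <= row < image_shape[0] and 0 <= col < image_shape[1]:
            heatmap[row, col] += duration
    heatmap = gaussian_filter(heatmap, sigma=sigma)
    if heatmap.max() > 0:
        heatmap /= heatmap.max()  # normalize to [0, 1] for use as a soft mask
    return heatmap

# Example: three fixations on a 1024x1024 image, durations in seconds.
attn = gaze_to_attention_map([(300, 400, 0.8), (512, 512, 1.2), (700, 350, 0.5)], (1024, 1024))

A map like this can then be compared against, or used to supervise, the attention weights of a report-generation model.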


2021 ◽  
pp. 625-635
Author(s):  
Ivona Najdenkoska ◽  
Xiantong Zhen ◽  
Marcel Worring ◽  
Ling Shao
Keyword(s):  
X Ray ◽  

IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 21236-21250
Author(s):  
Daibing Hou ◽  
Zijian Zhao ◽  
Yuying Liu ◽  
Faliang Chang ◽  
Sanyuan Hu

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 154808-154817
Author(s):  
Xin Huang ◽  
Fengqi Yan ◽  
Wei Xu ◽  
Maozhen Li

Author(s):  
Tanveer Syeda-Mahmood ◽  
Ken C. L. Wong ◽  
Yaniv Gur ◽  
Joy T. Wu ◽  
Ashutosh Jadhav ◽  
...  

PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0259639
Author(s):  
Zaheer Babar ◽  
Twan van Laarhoven ◽  
Elena Marchiori

High-quality radiology reporting of chest X-ray images is of core importance for patient diagnosis and care. Automatically generated reports can assist radiologists by reducing their workload and may even prevent errors. Machine learning (ML) models for this task take an X-ray image as input and output a sequence of words. In this work, we show that ML models based on the popular encoder-decoder approach, such as 'Show, Attend and Tell' (SA&T), perform similarly to or worse than models that do not use the input image at all, called unconditioned baselines. An unconditioned model achieved a diagnostic accuracy of 0.91 on the IU chest X-ray dataset, significantly outperforming SA&T (0.877) and other popular ML models (p-value < 0.001). This unconditioned model also outperformed SA&T and similar ML methods on the BLEU-4 and METEOR metrics. Moreover, an unconditioned version of SA&T, obtained by permuting the reports generated from images of the test set, achieved a diagnostic accuracy of 0.862, comparable to that of SA&T (p-value ≥ 0.05).
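
The permutation baseline described above is simple to reproduce in outline. The sketch below shuffles the reports a model generated for the test images so each one is scored against a mismatched ground truth, then compares corpus BLEU-4; the toy reports and function names are illustrative, not the paper's data or code.

import random
from nltk.translate.bleu_score import corpus_bleu

def bleu4(references, hypotheses):
    # Corpus-level BLEU-4 over tokenized reports; each hypothesis is
    # paired with a single reference report.
    return corpus_bleu([[ref] for ref in references], hypotheses,
                       weights=(0.25, 0.25, 0.25, 0.25))

# Toy ground-truth and model-generated reports, already tokenized.
ground_truth = [
    ["the", "lungs", "are", "clear", "without", "focal", "consolidation"],
    ["mild", "cardiomegaly", "is", "present", "without", "pulmonary", "edema"],
    ["no", "acute", "cardiopulmonary", "abnormality", "is", "seen"],
]
generated = [
    ["the", "lungs", "are", "clear", "without", "effusion"],
    ["mild", "cardiomegaly", "is", "present", "without", "edema"],
    ["no", "acute", "cardiopulmonary", "abnormality", "is", "seen"],
]

score_conditioned = bleu4(ground_truth, generated)

# Unconditioned baseline: permute the generated reports so they no
# longer correspond to their source images.
random.seed(0)
permuted = list(generated)
random.shuffle(permuted)
score_unconditioned = bleu4(ground_truth, permuted)

print(f"matched BLEU-4: {score_conditioned:.3f}, permuted BLEU-4: {score_unconditioned:.3f}")

If the permuted reports score nearly as well as the matched ones, the model is effectively ignoring the image, mirroring the comparison the abstract reports for diagnostic accuracy.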

