RA-SIFA: Unsupervised domain adaptation multi-modality cardiac segmentation network combining parallel attention module and residual attention unit

2021, pp. 1-14
Author(s):  
Tiejun Yang ◽  
Xiaojuan Cui ◽  
Xinhao Bai ◽  
Lei Li ◽  
Yuehong Gong

BACKGROUND: Convolutional neural networks have had a profound impact on cardiac image segmentation, but the diversity of medical imaging equipment introduces domain shift, which remains a challenge. OBJECTIVE: To address the domain shift in multi-modality cardiac image segmentation, this study investigates and tests RA-SIFA, an unsupervised domain adaptation network that combines a parallel attention module (PAM) and a residual attention unit (RAU). METHODS: First, the PAM is introduced into the generator of RA-SIFA to fuse global information, reducing domain shift from the perspective of image alignment. Second, the shared encoder adopts the RAU, a residual block built on a spatial attention module, which alleviates the insensitivity of convolution layers to spatial position and thus further reduces domain shift from the perspective of feature alignment. By combining image and feature alignment, RA-SIFA performs unsupervised domain adaptation (UDA) and addresses the domain shift in cardiac image segmentation in a complementary manner. RESULTS: The model is evaluated on the MM-WHS 2017 dataset. Compared with SIFA, the Dice score of RA-SIFA improves by 8.4% and 3.2% on CT and MR images, respectively, while the average symmetric surface distance (ASD) decreases by 3.4 mm and 0.8 mm on CT and MR images, respectively. CONCLUSION: The results demonstrate that RA-SIFA effectively improves the accuracy of whole-heart segmentation from CT and MR images.
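The METHODS section describes a residual block gated by spatial attention. A minimal PyTorch sketch of that general idea is given below; it assumes a CBAM-style spatial-attention map (channel-wise mean and max pooled into a 7x7 convolution) and an identity shortcut, details the abstract does not specify, so this is an illustration of the concept rather than the authors' RAU.

```python
# Illustrative sketch only (not the paper's code): a residual block whose
# residual branch is re-weighted by a simple spatial-attention map.
import torch
import torch.nn as nn


class SpatialAttention(nn.Module):
    """Builds a one-channel attention map from channel-wise mean and max features."""

    def __init__(self, kernel_size: int = 7):
        super().__init__()
        self.conv = nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2)

    def forward(self, x):
        avg_map = x.mean(dim=1, keepdim=True)          # (N, 1, H, W)
        max_map = x.max(dim=1, keepdim=True).values    # (N, 1, H, W)
        attn = torch.sigmoid(self.conv(torch.cat([avg_map, max_map], dim=1)))
        return x * attn                                # emphasize informative spatial positions


class ResidualAttentionUnit(nn.Module):
    """Residual block with spatial attention applied to the residual branch."""

    def __init__(self, channels: int):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.BatchNorm2d(channels),
        )
        self.attn = SpatialAttention()
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        # Identity shortcut plus attention-weighted residual features.
        return self.relu(x + self.attn(self.body(x)))


# Quick shape check on a dummy feature map.
feats = torch.randn(2, 64, 32, 32)
print(ResidualAttentionUnit(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```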

2021, Vol. 11(4), pp. 1965
Author(s):  
Raul-Ronald Galea ◽  
Laura Diosan ◽  
Anca Andreica ◽  
Loredana Popa ◽  
Simona Manole ◽  
...  

Despite the promising results obtained by deep learning methods in medical image segmentation, a lack of sufficient data still limits performance. In this work, we explore the feasibility of applying deep learning methods to a pilot dataset. We present a simple and practical approach that performs segmentation in a 2D, slice-by-slice manner based on region-of-interest (ROI) localization, applying an optimized training regime to improve segmentation performance within the localized regions. We start from two popular segmentation networks: U-Net, the preferred model for medical segmentation, and DeepLabV3+, a general-purpose model. Furthermore, we show that ensembling these two fundamentally different architectures brings consistent benefits, testing our approach on two datasets: the publicly available ACDC challenge and the imATFIB dataset from our in-house clinical study. Results on the imATFIB dataset show that the proposed approach performs well with the provided training volumes, achieving an average whole-heart Dice Similarity Coefficient of 89.89% on the validation set. Moreover, our algorithm achieves a mean Dice value of 91.87% on the ACDC validation set, comparable to the second best-performing approach in the challenge. Our approach could thus serve as a building block of a computer-aided diagnostic system in a clinical setting.
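The abstract describes slice-by-slice segmentation and an ensemble of two networks evaluated with a whole-heart Dice score. The sketch below shows one plausible way to realize those two steps, assuming the ensemble averages softmax probability maps and that Dice is computed over all foreground labels; the function names, tensor shapes, and the omitted ROI-localization step are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch under stated assumptions: ensemble two 2D segmentation
# networks by averaging their softmax maps slice by slice, then score the
# result with a whole-heart Dice coefficient.
import torch
import torch.nn.functional as F


def ensemble_segment(volume, model_a, model_b):
    """volume: (D, 1, H, W) stack of 2D slices; returns a (D, H, W) label map."""
    preds = []
    with torch.no_grad():
        for sl in volume:                          # process one slice at a time
            logits_a = model_a(sl.unsqueeze(0))    # (1, C, H, W)
            logits_b = model_b(sl.unsqueeze(0))
            prob = (F.softmax(logits_a, dim=1) + F.softmax(logits_b, dim=1)) / 2
            preds.append(prob.argmax(dim=1))       # (1, H, W) hard labels
    return torch.cat(preds, dim=0)


def whole_heart_dice(pred, target, eps=1e-6):
    """Binary Dice over all foreground labels (label 0 assumed to be background)."""
    p, t = (pred > 0).float(), (target > 0).float()
    inter = (p * t).sum()
    return (2 * inter + eps) / (p.sum() + t.sum() + eps)
```

Averaging probabilities before the argmax lets the two architectures compensate for each other's uncertain regions, which is one common way such an ensemble can yield consistent gains over either model alone.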

