A Generative Adversarial Network Fused with Dual-Attention Mechanism and Its Application in Multitarget Image Fine Segmentation

2021 · Vol 2021 · pp. 1-16
Author(s): Jian Yin, Zhibo Zhou, Shaohua Xu, Ruiping Yang, Kun Liu

Aiming at the problems of weak target morphological features, inaccurate detection and unclear boundaries in small-target regions, and overlapping target boundaries in complex multitarget image segmentation, this paper combines the image-segmentation mechanism of a generative adversarial network with the feature-enhancement method of nonlocal attention and proposes a generative adversarial network fused with an attention mechanism (AM-GAN). The generative network in the model is composed of a residual network and a nonlocal attention module; it uses the feature extraction and multiscale fusion mechanism of the residual network, together with the feature enhancement and global information fusion ability of nonlocal spatial-channel dual attention, to enhance target features in the detection area and improve the continuity and clarity of segmentation boundaries. The adversarial network is a fully convolutional network that penalizes information loss in small-target regions by judging the authenticity of predicted versus labeled segmentations, improving the model's ability to detect small targets and the accuracy of multitarget segmentation. AM-GAN exploits the GAN's inherent ability to reconstruct and repair high-resolution images, together with the global receptive field of nonlocal attention, to strengthen detail features: it automatically learns to focus on target structures of different shapes and sizes, highlights salient features useful for the task at hand, reduces the loss of image detail, improves small-target detection accuracy, and refines multitarget segmentation boundaries. Medical MRI abdominal image segmentation serves as the verification experiment, with multiple targets (liver, left/right kidney, and spleen) selected for segmentation and abnormal-tissue detection. On a small, unbalanced dataset, class pixel accuracy reaches 87.37%, intersection over union reaches 92.42%, and the average Dice coefficient is 93%. Compared with the other methods in the experiment, segmentation precision and accuracy are greatly improved, indicating that the proposed method is well suited to typical multitarget segmentation problems such as small-target feature detection, boundary overlap, and offset deformation.
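To make the dual-attention idea concrete, here is a minimal PyTorch sketch of a nonlocal spatial-channel dual-attention block in the spirit of the description above. The module names, reduction ratio, and the sum-fusion of the two branches are illustrative assumptions, not the authors' exact implementation:

```python
# Minimal sketch of nonlocal spatial-channel dual attention (assumed design).
import torch
import torch.nn as nn

class SpatialAttention(nn.Module):
    """Nonlocal (self-attention) over spatial positions."""
    def __init__(self, channels, reduction=8):  # reduction ratio is assumed
        super().__init__()
        self.query = nn.Conv2d(channels, channels // reduction, 1)
        self.key = nn.Conv2d(channels, channels // reduction, 1)
        self.value = nn.Conv2d(channels, channels, 1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned residual weight

    def forward(self, x):
        b, c, h, w = x.shape
        q = self.query(x).flatten(2).transpose(1, 2)   # (B, HW, C/r)
        k = self.key(x).flatten(2)                     # (B, C/r, HW)
        attn = torch.softmax(q @ k, dim=-1)            # (B, HW, HW) affinities
        v = self.value(x).flatten(2)                   # (B, C, HW)
        out = (v @ attn.transpose(1, 2)).view(b, c, h, w)
        return self.gamma * out + x                    # residual connection

class ChannelAttention(nn.Module):
    """Nonlocal attention over channel maps (channel-channel affinities)."""
    def __init__(self):
        super().__init__()
        self.gamma = nn.Parameter(torch.zeros(1))

    def forward(self, x):
        b, c, h, w = x.shape
        flat = x.flatten(2)                            # (B, C, HW)
        attn = torch.softmax(flat @ flat.transpose(1, 2), dim=-1)  # (B, C, C)
        out = (attn @ flat).view(b, c, h, w)
        return self.gamma * out + x

class DualAttention(nn.Module):
    """Parallel spatial + channel branches, fused by summation (assumed)."""
    def __init__(self, channels):
        super().__init__()
        self.spatial = SpatialAttention(channels)
        self.channel = ChannelAttention()

    def forward(self, x):
        return self.spatial(x) + self.channel(x)

feats = torch.randn(2, 64, 32, 32)  # e.g. features from a residual encoder
print(DualAttention(64)(feats).shape)  # torch.Size([2, 64, 32, 32])
```

A block like this would sit between the residual encoder and the segmentation head, giving every spatial position and channel access to global context rather than a local receptive field.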

2020 · Vol 10 (15) · pp. 5032
Author(s): Xiaochang Wu, Xiaolin Tian

Medical image segmentation is a classic, challenging problem. Segmenting the regions of interest in cardiac medical images is a basic task for cardiac image diagnosis and guided surgery, and the effectiveness of cardiac segmentation directly affects subsequent medical applications. Generative adversarial networks have achieved outstanding success in image segmentation compared with classic neural networks by solving the oversegmentation problem. Cardiac X-ray images, however, are prone to weak edges, artifacts, and similar defects. This paper proposes an adaptive generative adversarial network for cardiac segmentation to improve the segmentation accuracy of generative adversarial networks on X-ray images. The adaptive generative adversarial network consists of three parts: a feature extractor, a discriminator, and a selector. In this method, multiple generators are trained within the feature extractor, the discriminator scores features of different dimensions, and the selector picks the appropriate features and adjusts the network for the next iteration. With the help of the discriminator, the method uses multinetwork joint feature extraction to achieve network adaptivity, allowing features of multiple dimensions to be combined in joint training and enhancing the network's generalization ability. Cardiac segmentation experiments on X-ray chest radiographs show that this method achieves higher segmentation accuracy and less overfitting than the comparison methods, and that the proposed network is more stable.
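The extractor/discriminator/selector interplay can be sketched as follows in PyTorch: several candidate extractors produce features of different dimensions, a discriminator scores each, and the selector keeps the best-scored features for the next iteration. All module definitions and channel options here are illustrative assumptions, not the paper's actual networks:

```python
# Sketch of the adaptive extractor/discriminator/selector loop (assumed design).
import torch
import torch.nn as nn

def make_extractor(out_channels):
    """A toy convolutional feature extractor (stand-in for one generator)."""
    return nn.Sequential(
        nn.Conv2d(1, out_channels, 3, padding=1), nn.ReLU(),
        nn.Conv2d(out_channels, out_channels, 3, padding=1), nn.ReLU(),
    )

class FeatureDiscriminator(nn.Module):
    """Scores a feature map; higher means more useful for segmentation."""
    def __init__(self, in_channels):
        super().__init__()
        self.score = nn.Sequential(
            nn.Conv2d(in_channels, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(16, 1),
        )

    def forward(self, f):
        return self.score(f).mean()

def select_features(x, extractors, discriminators):
    """Selector: run every extractor, keep the feature map scored best."""
    best_score, best_feat = None, None
    for extract, judge in zip(extractors, discriminators):
        f = extract(x)
        s = judge(f).item()
        if best_score is None or s > best_score:
            best_score, best_feat = s, f
    return best_feat, best_score

channel_options = [8, 16, 32]  # candidate feature dimensions (assumed)
extractors = [make_extractor(c) for c in channel_options]
discriminators = [FeatureDiscriminator(c) for c in channel_options]

x = torch.randn(1, 1, 64, 64)  # stand-in for a single-channel chest radiograph
feat, score = select_features(x, extractors, discriminators)
print(feat.shape, round(score, 3))
```

In a full training loop, the selector's choice would also feed back into which extractor gets updated in the next iteration, which is what makes the network adaptive.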


2020 · Vol 10 (2) · pp. 554
Author(s): Dongdong Xu, Yongcheng Wang, Shuyan Xu, Kaiguang Zhu, Ning Zhang, ...

Infrared and visible image fusion can produce combined images that simultaneously contain salient hidden targets and abundant visible details. In this paper, we propose a novel method for infrared and visible image fusion within a deep learning framework based on a generative adversarial network (GAN) and a residual network (ResNet). The fusion is accomplished through an adversarial game directed by purpose-built loss functions. The generator, with residual blocks and skip connections, extracts deep features from source image pairs and generates an elementary fused image carrying infrared thermal-radiation information and visible texture information; the discriminator then drives more visible detail into the final images. Activity-level measurements and fusion rules need not be designed manually, as they are implemented automatically. The method also avoids complicated multi-scale transforms, reducing computational cost and complexity. Experimental results demonstrate that the proposed method produces desirable fused images, achieving better performance in objective assessment and visual quality than nine representative infrared and visible image fusion methods.
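A minimal PyTorch sketch of this fusion pipeline is given below: a generator with residual blocks and skip connections fuses a concatenated infrared/visible pair, and an adversarial term pushes the fused output toward the visible image's detail distribution. Layer sizes, the high-frequency detail proxy, and the loss weighting are assumptions for illustration, not the paper's exact design:

```python
# Sketch of a ResNet-generator fusion GAN for infrared/visible pairs (assumed).
import torch
import torch.nn as nn
import torch.nn.functional as F

class ResidualBlock(nn.Module):
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(),
            nn.Conv2d(channels, channels, 3, padding=1),
        )

    def forward(self, x):
        return F.relu(x + self.body(x))  # skip connection around the block

class FusionGenerator(nn.Module):
    """Takes a 2-channel (infrared, visible) stack, outputs a fused image."""
    def __init__(self, width=32, n_blocks=4):  # width/depth are assumed
        super().__init__()
        self.head = nn.Conv2d(2, width, 3, padding=1)
        self.blocks = nn.Sequential(*[ResidualBlock(width) for _ in range(n_blocks)])
        self.tail = nn.Conv2d(width, 1, 3, padding=1)

    def forward(self, ir, vis):
        x = F.relu(self.head(torch.cat([ir, vis], dim=1)))
        return torch.tanh(self.tail(self.blocks(x)))

discriminator = nn.Sequential(  # judges fused patches against visible ones
    nn.Conv2d(1, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 1),
)

gen = FusionGenerator()
ir, vis = torch.rand(4, 1, 64, 64), torch.rand(4, 1, 64, 64)
fused = gen(ir, vis)

# Generator loss: keep infrared intensity and visible high-frequency detail
# (content term), plus fool the discriminator (adversarial term).
# The blur-subtraction detail proxy and lambda_adv = 1e-2 are assumptions.
content = F.mse_loss(fused, ir) + F.l1_loss(
    fused - F.avg_pool2d(fused, 3, 1, 1),
    vis - F.avg_pool2d(vis, 3, 1, 1))
adv = F.binary_cross_entropy_with_logits(
    discriminator(fused), torch.ones(4, 1))
loss_g = content + 1e-2 * adv
loss_g.backward()
```

Because the loss terms encode what to keep from each modality, no hand-crafted activity-level measurement or fusion rule is needed, which is the point the abstract emphasizes.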

