A Noisy SAR Image Fusion Method Based on NLM and GAN

Entropy ◽  
2021 ◽  
Vol 23 (4) ◽  
pp. 410
Author(s):  
Jing Fang ◽  
Xiaole Ma ◽  
Jingjing Wang ◽  
Kai Qin ◽  
Shaohai Hu ◽  
...  

The noise often present in synthetic aperture radar (SAR) images, such as speckle noise, is unavoidable and negatively impacts subsequent SAR image processing. Moreover, SAR images are difficult to apply directly, since the human visual system is sensitive to color while SAR images are grayscale. As a result, a noisy SAR image fusion method based on nonlocal matching and generative adversarial networks is presented in this paper. A nonlocal matching method is applied in the pre-processing step to group the source images into similar block groups. Then, adversarial networks are employed to generate the final noise-free fused SAR image blocks, where the generator aims to generate a noise-free SAR image block with color information, and the discriminator tries to increase the spatial resolution of the generated image block. This step ensures that the fused image block contains both high resolution and color information. Finally, the fused image is obtained by aggregating all the image blocks. Extensive comparative experiments on the SEN1–2 dataset and source images show that the proposed method not only achieves better fusion results but is also robust to image noise, indicating the superiority of the proposed noisy SAR image fusion method over state-of-the-art methods.
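The nonlocal matching pre-processing step described above can be sketched as follows. This is an illustrative block-grouping routine, not the paper's implementation; the function name, block size, search radius, and group size are all assumptions.

```python
import numpy as np

def group_similar_blocks(img, ref_xy, block=8, search=24, k=16):
    """Group the k patches most similar to a reference patch (nonlocal matching).

    Hypothetical pre-processing sketch: parameters are illustrative.
    """
    h, w = img.shape
    ry, rx = ref_xy
    ref = img[ry:ry + block, rx:rx + block]
    # Restrict matching to a local search window around the reference patch
    y0, y1 = max(0, ry - search), min(h - block, ry + search)
    x0, x1 = max(0, rx - search), min(w - block, rx + search)
    candidates = []
    for y in range(y0, y1 + 1):
        for x in range(x0, x1 + 1):
            patch = img[y:y + block, x:x + block]
            dist = np.sum((patch - ref) ** 2)  # L2 patch distance
            candidates.append((dist, y, x))
    candidates.sort(key=lambda t: t[0])
    # Stack the k best matches into one similar-block group for later fusion
    return np.stack([img[y:y + block, x:x + block] for _, y, x in candidates[:k]])
```

Each resulting group (the reference patch is always its own best match) would then be fed to the generator as one unit.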

Author(s):  
Zhiguang Yang ◽  
Youping Chen ◽  
Zhuliang Le ◽  
Yong Ma

Abstract In this paper, a novel multi-exposure image fusion method based on generative adversarial networks (termed GANFuse) is presented. Conventional multi-exposure image fusion methods improve their fusion performance by designing sophisticated activity-level measurements and fusion rules. However, these methods achieve limited success in complex fusion tasks. Inspired by the recent FusionGAN, which first utilized generative adversarial networks (GAN) to fuse infrared and visible images and achieved promising performance, we improve its architecture and customize it for the task of extreme-exposure image fusion. Specifically, in order to keep the content of the extreme-exposure image pair in the fused image, we increase the number of discriminators differentiating between the fused image and the extreme-exposure image pair. Meanwhile, a generator network is trained to generate the fused images. Through the adversarial relationship between the generator and the discriminators, the fused image contains more information from the extreme-exposure image pair, and this relationship thus yields better fusion performance. In addition, the proposed method is an end-to-end, unsupervised learning model, which avoids designing hand-crafted features and does not require ground-truth images for training. We conduct qualitative and quantitative experiments on a public dataset, and the experimental results show that the proposed model demonstrates better fusion ability than existing multi-exposure image fusion methods in both visual effect and evaluation metrics.
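The dual-discriminator generator objective described above can be sketched numerically. This is an assumed least-squares GAN formulation with an L2 content term, for illustration only; GANFuse's exact loss, weighting, and network outputs may differ.

```python
import numpy as np

def ganfuse_generator_loss(d_under_out, d_over_out, fused, under, over,
                           lam=100.0):
    """Sketch of a GANFuse-style generator objective (illustrative):
    one discriminator per extreme-exposure input, plus a content term
    tying the fused image to both sources."""
    # Adversarial terms: the generator wants both discriminators to score
    # the fused image as "real" (least-squares GAN form assumed here).
    adv = np.mean((d_under_out - 1.0) ** 2) + np.mean((d_over_out - 1.0) ** 2)
    # Content term: retain information from both exposure extremes.
    content = np.mean((fused - under) ** 2) + np.mean((fused - over) ** 2)
    return adv + lam * content
```

With both discriminators fully fooled (outputs of 1), only the content term remains, which pulls the fused image toward both exposures at once.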


2012 ◽  
Vol 239-240 ◽  
pp. 1432-1436
Author(s):  
Zhuan Zheng Zhao

Image fusion integrates images or video sequences of the same scene, captured by two or more sensors at the same time or at different times, to generate a new interpretation of the scene. Its main purpose is to increase reliability or image resolution by reducing uncertainty through the redundancy of the different images. In this paper, an image fusion method based on the contourlet transform is presented. The algorithm fuses corresponding information across different resolutions and directions, which makes the fused image clearer and richer in detail. Meanwhile, owing to fuzzy logic's capacity for resolving uncertain problems, the method overcomes the drawbacks of traditional contourlet-based fusion algorithms and integrates as much information as possible into the fused image.
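A fuzzy fusion rule of the kind described can be sketched on a pair of subband coefficient arrays. This is an assumed sigmoid-membership form, not the paper's exact rule, and the contourlet decomposition itself is omitted; the function name and the `sharpness` parameter are illustrative.

```python
import numpy as np

def fuzzy_fuse_coeffs(c1, c2, sharpness=4.0):
    """Fuzzy-logic fusion rule sketch for high-frequency subband coefficients.

    Instead of a hard choose-max rule, a sigmoid membership function turns
    the activity difference into a soft weight, reducing the uncertainty of
    near-tie coefficients.
    """
    a1, a2 = np.abs(c1), np.abs(c2)  # activity level = coefficient magnitude
    w = 1.0 / (1.0 + np.exp(-sharpness * (a1 - a2)))  # fuzzy membership of c1
    return w * c1 + (1.0 - w) * c2
```

A clearly dominant coefficient is kept almost unchanged, while near-ties blend smoothly instead of flipping between sources.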


2021 ◽  
pp. 1-20
Author(s):  
Yun Wang ◽  
Xin Jin ◽  
Jie Yang ◽  
Qian Jiang ◽  
Yue Tang ◽  
...  

Multi-focus image fusion is a technique that integrates the focused areas in a pair or set of source images of the same scene into a fully focused image. Inspired by transfer learning, this paper proposes a novel color multi-focus image fusion method based on deep learning. First, the color multi-focus source images are fed into a VGG-19 network, and the parameters of the VGG-19 convolutional layers are then transferred to a neural network containing multiple convolutional layers and skip-connection structures for feature extraction. Second, the initial decision maps are generated using the feature maps reconstructed by a deconvolution module. Third, the initial decision maps are refined and processed to obtain the second decision maps, and the source images are then fused based on the second decision maps to obtain the initial fused images. Finally, the final fused image is produced by comparing the QAB/F metrics of the initial fused images. The experimental results show that the proposed method effectively improves the segmentation of focused and unfocused areas in the source images, and the generated fused images are superior in both subjective and objective metrics compared with most comparison methods.
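The final composition step (source images combined under a refined decision map) can be sketched as follows. Only this step is shown; the VGG-19 feature extraction and decision-map refinement are omitted, and the function name and map convention are assumptions.

```python
import numpy as np

def fuse_by_decision_map(src_a, src_b, decision):
    """Fuse two color multi-focus sources with a binary decision map.

    Illustrative sketch: decision[y, x] == 1 means src_a is taken as the
    in-focus source at that pixel, 0 means src_b is taken.
    """
    # Add a trailing axis so the 2-D map broadcasts over the color channels
    d = decision[..., None].astype(src_a.dtype)
    return d * src_a + (1.0 - d) * src_b
```

A per-pixel weighted sum like this also accepts soft (non-binary) decision maps without any code change, which is why refined maps are often kept as floats until the final thresholding.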


2012 ◽  
Vol 546-547 ◽  
pp. 806-810 ◽  
Author(s):  
Xu Zhang ◽  
Yun Hui Yan ◽  
Wen Hui Chen ◽  
Jun Jun Chen

To solve the problem of pseudo-Gibbs phenomena around singularities when fusing images of strip surface defects captured from different angles, a novel image fusion method based on Bandelet-PCNN (pulse-coupled neural networks) is proposed. The low-pass sub-band coefficients of the source images obtained by the Bandelet transform are input into the PCNN, and the coefficients are selected according to the ignition frequency of the neuron iterations. Finally, the fused image is obtained by the inverse Bandelet transform using the selected coefficients and the geometric-flow parameters. Experimental results on strip surface defects such as scratches, abrasions, and pits demonstrate that the fused image effectively combines defect information from multiple image sources. Compared with the classical wavelet transform and the Bandelet transform alone, the method retains more detailed and comprehensive defect information; consequently, the proposed method is more effective.
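The ignition-frequency selection step can be sketched with a heavily simplified PCNN. This sketch drops the linking field and neighborhood coupling of a full PCNN; all parameter names and values are illustrative, not the paper's.

```python
import numpy as np

def pcnn_fire_counts(coeffs, iters=20, alpha=0.2, v_theta=5.0, beta=0.1):
    """Simplified PCNN: per-pixel ignition counts for a subband coefficient
    array (linking field omitted). Larger-magnitude coefficients recharge
    past the decaying threshold more often, so they fire more frequently."""
    s = np.abs(coeffs)            # stimulus = coefficient magnitude
    theta = np.ones_like(s)       # dynamic threshold
    fires = np.zeros_like(s)
    for _ in range(iters):
        u = s * (1.0 + beta)                        # internal activity
        y = (u > theta).astype(s.dtype)             # neuron fires if u > theta
        theta = np.exp(-alpha) * theta + v_theta * y  # decay + reset on fire
        fires += y
    return fires

def fuse_by_ignition(c1, c2, **kw):
    """Keep the coefficient whose neuron fired more often."""
    return np.where(pcnn_fire_counts(c1, **kw) >= pcnn_fire_counts(c2, **kw),
                    c1, c2)
```

The fire count acts as the activity measure: the coefficient that ignites more often over the iterations is judged more salient and kept in the fused subband.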


2012 ◽  
Vol 195-196 ◽  
pp. 555-560
Author(s):  
Jun Li Li ◽  
Yi Na Qiu ◽  
Yang Lou

The traditional wavelet-based image fusion method has several shortcomings. To better satisfy the requirements of image fusion, this paper studies the principle and performance of the contourlet transform based on the proposed sensitivity analysis. In this method, the contourlet transform's multi-resolution, locality, and directionality effectively capture the detail, texture, and directional information of the source images, enhancing the visibility of the fused image and yielding higher-quality medical images.
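The general multiresolution fusion rule implied above can be sketched independently of the transform used. This is a generic stand-in (average the approximation band, keep the larger-magnitude coefficient in each directional detail band); the contourlet decomposition itself is not shown, and the function name is an assumption.

```python
import numpy as np

def fuse_multiresolution(low1, low2, highs1, highs2):
    """Generic multiresolution fusion rule sketch: average the low-pass
    approximations, keep the larger-magnitude coefficient in each
    directional high-pass subband."""
    low = 0.5 * (low1 + low2)
    highs = [np.where(np.abs(h1) >= np.abs(h2), h1, h2)
             for h1, h2 in zip(highs1, highs2)]
    return low, highs
```

Averaging the low-pass band preserves overall brightness, while max-magnitude selection in the detail bands favors whichever source carries the stronger edge or texture at each location and direction.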


Electronics ◽  
2019 ◽  
Vol 8 (3) ◽  
pp. 303 ◽  
Author(s):  
Xiaole Ma ◽  
Shaohai Hu ◽  
Shuaiqi Liu ◽  
Jing Fang ◽  
Shuwen Xu

In this paper, a remote sensing image fusion method based on sparse representation (SR) is presented, as SR has been widely used in image processing, especially for image fusion. Firstly, we used the source images to learn an adaptive dictionary, and sparse coefficients were obtained by sparsely coding the source images with the adaptive dictionary. Then, with the help of an improved hyperbolic tangent function (tanh) and the l0-max rule, we fused these sparse coefficients together; the initial fused image can be obtained by this SR-based fusion. To take full advantage of the spatial information of the source images, a fused image based on the spatial domain (SF) was obtained at the same time. Lastly, the final fused image is reconstructed by guided filtering of the SR-based and SF-based fused images. Experimental results show that the proposed method outperforms some state-of-the-art methods in visual and quantitative evaluations.
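The coefficient-fusion step can be sketched as follows. The l0-max rule (keep the sparse code with more nonzero atoms) is combined here with a tanh of the activity gap as an assumed form of the paper's improved tanh weighting; the function name and tie-handling are illustrative.

```python
import numpy as np

def fuse_sparse_codes(x1, x2):
    """Fuse two sparse coefficient matrices patch by patch: an l0-max rule
    softened by tanh. Column i holds the sparse code of patch i."""
    l0_1 = np.count_nonzero(x1, axis=0)  # l0 activity of each patch code
    l0_2 = np.count_nonzero(x2, axis=0)
    # Soft weight toward the denser (more active) code; ties average out
    w = 0.5 * (1.0 + np.tanh(l0_1 - l0_2))
    return w * x1 + (1.0 - w) * x2
```

When one code is clearly denser the weight saturates toward it, while equally sparse codes are averaged, which is the role the smooth tanh plays relative to a hard l0-max switch.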

