An Image Fusion Algorithm Based on NSCT

2013 ◽  
Vol 427-429 ◽  
pp. 1589-1592
Author(s):  
Zhong Jie Xiao

This study proposes an improved NSCT-based fusion method designed around the characteristics of infrared and visible light images and their fusion requirements. The paper improves the fusion rules for both the high-frequency and low-frequency coefficients: the low-frequency sub-band images are fused with a pixel feature energy weighted rule, while the high-frequency sub-band images are fused with a neighborhood variance feature information rule. Fusion experiments show that the algorithm is robust, effectively extracts edge and texture information, and produces fused images with rich scene information and clear targets. The algorithm is therefore an effective infrared and visible image fusion method.
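
As an illustrative sketch only (not the authors' code), the two fusion rules can be written in Python/NumPy as below, assuming an external NSCT routine has already produced matching low- and high-frequency coefficient arrays; the 3×3 window and the definition of pixel feature energy as a windowed squared magnitude are assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low_energy_weighted(lo_ir, lo_vis, win=3):
    """Pixel feature energy weighted rule for the low-frequency sub-bands.
    Local energy is taken as the windowed mean of squared coefficients
    (an assumed definition, since the abstract does not spell it out)."""
    lo_ir = np.asarray(lo_ir, dtype=float)
    lo_vis = np.asarray(lo_vis, dtype=float)
    e_ir = uniform_filter(lo_ir ** 2, size=win)
    e_vis = uniform_filter(lo_vis ** 2, size=win)
    w_ir = e_ir / (e_ir + e_vis + 1e-12)               # normalised energy weights
    return w_ir * lo_ir + (1.0 - w_ir) * lo_vis

def fuse_high_neighborhood_variance(hi_ir, hi_vis, win=3):
    """Neighbourhood variance rule for the high-frequency sub-bands: keep, per
    pixel, the coefficient whose local variance is larger."""
    def local_var(x):
        x = np.asarray(x, dtype=float)
        mean = uniform_filter(x, size=win)
        return uniform_filter(x ** 2, size=win) - mean ** 2
    return np.where(local_var(hi_ir) >= local_var(hi_vis), hi_ir, hi_vis)
```

The fused sub-bands would then be passed to the inverse NSCT to reconstruct the final image.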

2013 ◽  
Vol 834-836 ◽  
pp. 1011-1015 ◽  
Author(s):  
Nian Yi Wang ◽  
Wei Lan Wang ◽  
Xiao Ran Guo

A new image fusion algorithm based on the nonsubsampled contourlet transform (NSCT) and a spiking cortical model is proposed in this paper. Considering the characteristics of the human visual system, two different fusion rules are used to fuse the low- and high-frequency sub-bands of the NSCT, respectively. A new maximum selection rule is defined to fuse the low-frequency coefficients, and spatial frequency is used as the fusion rule for the high-frequency coefficients. Experimental results demonstrate the effectiveness of the proposed fusion method.
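
A minimal sketch of the two rules follows, assuming the NSCT sub-bands are already available as NumPy arrays; the spiking cortical model itself is omitted, the window size is an assumption, and the absolute-maximum selection merely stands in for the paper's "new maximum selection rule", whose exact definition is not given in the abstract.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_low_by_maximum(lo_a, lo_b):
    """Stand-in for the paper's 'new maximum selection rule': keep, per pixel,
    the low-frequency coefficient with the larger absolute value."""
    return np.where(np.abs(lo_a) >= np.abs(lo_b), lo_a, lo_b)

def local_spatial_frequency(band, win=3):
    """Spatial frequency over a small window: RMS of row and column differences."""
    band = np.asarray(band, dtype=float)
    rf = np.zeros_like(band)
    cf = np.zeros_like(band)
    rf[:, 1:] = (band[:, 1:] - band[:, :-1]) ** 2      # row-direction differences
    cf[1:, :] = (band[1:, :] - band[:-1, :]) ** 2      # column-direction differences
    return np.sqrt(uniform_filter(rf, size=win) + uniform_filter(cf, size=win))

def fuse_high_by_spatial_frequency(hi_a, hi_b, win=3):
    """Keep, per pixel, the high-frequency coefficient from the sub-band whose
    local spatial frequency is larger."""
    return np.where(local_spatial_frequency(hi_a, win) >=
                    local_spatial_frequency(hi_b, win), hi_a, hi_b)
```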


Author(s):  
Cheng Zhao ◽  
Yongdong Huang

The rolling guidance filter (RGF) smooths texture while preserving edges, and the non-subsampled shearlet transform (NSST) offers translation invariance and directional selectivity. Building on these properties, a new infrared and visible image fusion method is proposed. Firstly, the rolling guidance filter is used to decompose the infrared and visible images into base and detail layers. Then, the NSST is applied to the base layer to obtain high-frequency and low-frequency coefficients. The low-frequency coefficients are fused using a visual saliency map as the fusion rule, while the high-frequency sub-band coefficients are fused using gradient domain guided filtering (GDGF) and an improved sum of Laplacian. Finally, the detail layers are fused with a rule combining phase congruency and gradient domain guided filtering. As a result, the proposed method not only extracts the infrared targets but also fully preserves the background information of the visible images. Experimental results indicate that the method achieves superior performance compared with other fusion methods in both subjective and objective assessments.
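
The two-scale decomposition and the saliency-weighted low-frequency rule can be sketched as follows. This is a simplified illustration only: a Gaussian filter stands in for the rolling guidance filter, a blur-difference map stands in for the paper's visual saliency map, and the GDGF and phase-congruency steps are not shown.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def base_detail_split(img, sigma=3.0):
    """Two-scale split into base and detail layers. NOTE: a Gaussian filter is
    used here only as a simplified stand-in for the rolling guidance filter."""
    img = np.asarray(img, dtype=float)
    base = gaussian_filter(img, sigma)
    return base, img - base

def saliency_map(img, sigma=5.0):
    """Crude saliency proxy: absolute deviation from a heavily blurred copy
    (a stand-in for the paper's visual saliency map)."""
    img = np.asarray(img, dtype=float)
    return np.abs(img - gaussian_filter(img, sigma))

def fuse_lowfreq_by_saliency(lo_ir, lo_vis, sal_ir, sal_vis):
    """Fuse low-frequency NSST coefficients with normalised saliency weights."""
    w = sal_ir / (sal_ir + sal_vis + 1e-12)
    return w * lo_ir + (1.0 - w) * lo_vis
```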


Author(s):  
Yahui Zhu ◽  
Li Gao

To overcome the shortcomings of traditional multiscale-transform-based image fusion algorithms, an infrared and visible image fusion method based on compound decomposition and intuitionistic fuzzy sets is proposed. Firstly, the non-subsampled contourlet transform is used to decompose the source images into low-frequency and high-frequency coefficients. The latent low-rank representation model is then used to decompose the low-frequency coefficients into basic sub-bands and salient sub-bands. For the basic sub-bands, a weighted summation with the visual saliency map as the weighting coefficient is used as the fusion rule; for the salient sub-bands, the maximum absolute value is used. The two results are superimposed to obtain the fused low-frequency coefficients. For the high-frequency coefficients, intuitionistic fuzzy entropy is used as the fusion rule to measure texture and edge information. Finally, the fused infrared-visible image is obtained with the inverse non-subsampled contourlet transform. Comparisons of several sets of fused images, in terms of both objective and subjective evaluation, show that the proposed method effectively preserves edge information and the rich content of the source images, producing better visual quality and objective scores than other image fusion methods.
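
The superimposed low-frequency rule can be sketched as below, assuming the low-rank decomposition has already produced basic and salient sub-bands and that the visual-saliency weight maps w_ir and w_vis (normalised to sum to one) are given; the intuitionistic fuzzy entropy rule for the high-frequency coefficients is not shown.

```python
import numpy as np

def fuse_lowfreq_compound(basic_ir, basic_vis, sal_ir, sal_vis, w_ir, w_vis):
    """Low-frequency fusion as described in the abstract: a saliency-weighted
    sum of the basic sub-bands plus a max-absolute selection of the salient
    sub-bands; the two results are superimposed. w_ir and w_vis are visual
    saliency weight maps assumed to satisfy w_ir + w_vis = 1."""
    fused_basic = w_ir * basic_ir + w_vis * basic_vis            # weighted summation
    fused_salient = np.where(np.abs(sal_ir) >= np.abs(sal_vis),  # max absolute value
                             sal_ir, sal_vis)
    return fused_basic + fused_salient                           # superimpose the two rules
```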


2020 ◽  
Author(s):  
Xiaoxue XING ◽  
Cheng LIU ◽  
Cong LUO ◽  
Tingfa XU

Abstract In Multi-scale Geometric Analysis (MGA)-based fusion methods for infrared and visible images, using the same representation for both types of images results in a fused image whose thermal radiation target is not salient and can hardly be distinguished from the background. To solve this problem, a novel fusion algorithm based on nonlinear enhancement and Non-Subsampled Shearlet Transform (NSST) decomposition is proposed. Firstly, NSST is used to decompose the two source images into low- and high-frequency sub-bands. Then, the Wavelet Transform (WT) is used to decompose the high-frequency sub-bands into approximate sub-bands and directional detail sub-bands. The "average" fusion rule is applied to the approximate sub-bands, and the "max-absolute" fusion rule is applied to the directional detail sub-bands; the inverse WT then reconstructs the fused high-frequency sub-bands. To highlight the thermal radiation target, a non-linear transform function is constructed to determine the fusion weights of the low-frequency sub-bands, and its parameters can be adjusted to meet different fusion requirements. Finally, the inverse NSST is used to reconstruct the fused image. The experimental results show that the proposed method simultaneously enhances the thermal target of the infrared image and preserves the texture details of the visible image, and that it is competitive with or even superior to state-of-the-art fusion methods in both visual and quantitative evaluations.
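
A sketch of the wavelet stage and of a possible non-linear weight function is given below, using PyWavelets on one pair of corresponding NSST high-frequency sub-bands; the sigmoid-shaped weight and its parameters k and c are illustrative assumptions, not the function defined in the paper.

```python
import numpy as np
import pywt

def fuse_high_subband_via_wt(hi_ir, hi_vis, wavelet='haar'):
    """Fuse one pair of corresponding NSST high-frequency sub-bands:
    single-level 2-D WT, "average" rule on the approximation coefficients,
    "max-absolute" rule on the directional detail coefficients, inverse WT."""
    cA1, (cH1, cV1, cD1) = pywt.dwt2(np.asarray(hi_ir, dtype=float), wavelet)
    cA2, (cH2, cV2, cD2) = pywt.dwt2(np.asarray(hi_vis, dtype=float), wavelet)
    cA = 0.5 * (cA1 + cA2)                                        # "average" rule
    pick = lambda a, b: np.where(np.abs(a) >= np.abs(b), a, b)    # "max-absolute" rule
    fused = pywt.idwt2((cA, (pick(cH1, cH2), pick(cV1, cV2), pick(cD1, cD2))), wavelet)
    return fused[:np.shape(hi_ir)[0], :np.shape(hi_ir)[1]]        # crop any padding

def lowfreq_ir_weight(lo_ir, k=10.0, c=0.5):
    """Illustrative sigmoid-shaped weight for the infrared low-frequency
    sub-band (NOT the paper's function); k controls steepness, c the midpoint."""
    lo = np.asarray(lo_ir, dtype=float)
    x = (lo - lo.min()) / (np.ptp(lo) + 1e-12)                    # normalise to [0, 1]
    return 1.0 / (1.0 + np.exp(-k * (x - c)))
```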


2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

Image fusion currently suffers from the common problems of blurred edges and sparse texture. To address this, this study proposes an image fusion method combining the lifting wavelet transform and a median filter, with different fusion rules for the two coefficient types. For the low-frequency coefficients, the low-frequency scale coefficients are each convolved and squared to enhance the edges of the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the detail sub-images are first denoised with a median filter and then fused using a neighborhood spatial frequency and consistency verification rule. Compared with the weighted average and regional energy methods, experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
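
The high-frequency rule can be sketched as follows, assuming the lifting-wavelet detail sub-images are given as arrays; the median window size, the spatial-frequency window size, and the omission of the consistency verification step are simplifications.

```python
import numpy as np
from scipy.ndimage import median_filter, uniform_filter

def fuse_high_median_sf(hi_a, hi_b, med_size=3, win=3):
    """High-frequency rule sketched from the abstract: median-filter each detail
    sub-image to suppress noise, then keep, per pixel, the coefficient with the
    larger neighbourhood spatial frequency (consistency verification omitted)."""
    a = median_filter(np.asarray(hi_a, dtype=float), size=med_size)
    b = median_filter(np.asarray(hi_b, dtype=float), size=med_size)

    def neighborhood_sf(x):
        rf = np.zeros_like(x)
        cf = np.zeros_like(x)
        rf[:, 1:] = (x[:, 1:] - x[:, :-1]) ** 2       # row-direction differences
        cf[1:, :] = (x[1:, :] - x[:-1, :]) ** 2       # column-direction differences
        return np.sqrt(uniform_filter(rf, size=win) + uniform_filter(cf, size=win))

    return np.where(neighborhood_sf(a) >= neighborhood_sf(b), a, b)
```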


2012 ◽  
Vol 424-425 ◽  
pp. 223-226 ◽  
Author(s):  
Zheng Hong Cao ◽  
Yu Dong Guan ◽  
Peng Wang ◽  
Chun Li Ti

This paper focuses on the fusion of visible and infrared images, discussing existing algorithms in depth and proposing a novel method based on new fusion rules. The image is first decomposed into low-frequency and high-frequency coefficients by NSCT, and the characteristics of the visible and infrared images are then taken into account to complete the fusion. Finally, the quality of the images fused by different algorithms is compared using several existing criteria. MATLAB is employed for the simulation, and the results demonstrate that the algorithm can effectively improve the quality of the fused image without losing image features.


Author(s):  
LIU BIN ◽  
JIAXIONG PENG

In this paper, an image fusion method based on a new class of wavelet is presented: a non-separable wavelet with compact support, linear phase, orthogonality, and dilation matrix [Formula: see text]. We first construct a non-separable wavelet filter bank. Using these filters, the input images are decomposed into wavelet pyramids. The following fusion algorithm is then applied: for the low-frequency part, the new pixel value is the average of the source values; for the three high-frequency parts of each level, the standard deviation of each image patch over a 3×3 window of the high-frequency sub-images is computed as the activity measurement. If the standard deviation of a 3×3 window is larger than that of the corresponding 3×3 window in the other high-frequency sub-image, the center pixel value of the window with the larger weighted area energy is selected; otherwise, a weighted value of the two pixels is computed. A new fused image is then reconstructed. The performance of the method is evaluated using entropy, cross-entropy, fusion symmetry, root mean square error, and peak signal-to-noise ratio. The experimental results show that the performance of the proposed non-separable wavelet fusion method is very close to that of the Haar separable wavelet fusion method.
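
A sketch of the high-frequency selection rule under simplifying assumptions follows: the 3×3 standard deviation is used directly as the activity measure, and a plain 0.5/0.5 average replaces the paper's weighted combination when the activities are equal; the non-separable wavelet filter bank itself is not constructed here.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_std(x, win=3):
    """Standard deviation over a win x win window (the activity measurement)."""
    x = np.asarray(x, dtype=float)
    mean = uniform_filter(x, size=win)
    return np.sqrt(np.maximum(uniform_filter(x ** 2, size=win) - mean ** 2, 0.0))

def fuse_high_by_std(hi_a, hi_b, win=3):
    """Where one window's standard deviation exceeds the other's, take that
    sub-image's centre coefficient; where they are equal, fall back to a plain
    average (a simplification of the paper's weighted combination)."""
    sa, sb = local_std(hi_a, win), local_std(hi_b, win)
    fused = 0.5 * (np.asarray(hi_a, float) + np.asarray(hi_b, float))
    fused = np.where(sa > sb, hi_a, fused)
    fused = np.where(sb > sa, hi_b, fused)
    return fused

def fuse_low_average(lo_a, lo_b):
    """Low-frequency rule from the abstract: simple average."""
    return 0.5 * (np.asarray(lo_a, float) + np.asarray(lo_b, float))
```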


2021 ◽  
Vol 38 (3) ◽  
pp. 607-617
Author(s):  
Sumanth Kumar Panguluri ◽  
Laavanya Mohan

Nowadays, multimodal image fusion is widely used as an important processing tool in various image-related applications. Different sensors have been developed to capture useful information, chiefly infrared (IR) and visible (VI) image sensors, and fusing their outputs provides better and more accurate scene information. The main application areas of such fused images are military, surveillance, and remote sensing. For better target identification and understanding of the overall scene, the fused image must provide better contrast and more edge information. This paper introduces a novel multimodal image fusion method aimed at improving both contrast and edge information. The first step of the algorithm is to resize the source images. A 3×3 sharpen filter and a morphological hat transform are then applied separately to the resized IR and VI images. The DWT is used to produce "low-frequency" and "high-frequency" sub-bands. A "filters based mean-weighted fusion rule" and a "filters based max-weighted fusion rule" are newly introduced in this algorithm for combining the "low-frequency" and "high-frequency" sub-bands, respectively. The fused image is reconstructed with the IDWT. The proposed method outperforms similar existing techniques and shows improved results in both subjective and objective terms.
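
A rough sketch of the preprocessing and DWT fusion stages in Python (PyWavelets and SciPy) is given below; which of the two filters is applied to which source, the db2 wavelet, the top-hat size, and the plain mean/max-absolute rules standing in for the paper's "filters based" weighted rules are all assumptions.

```python
import numpy as np
import pywt
from scipy.ndimage import convolve, white_tophat

SHARPEN_3x3 = np.array([[0, -1, 0],
                        [-1, 5, -1],
                        [0, -1, 0]], dtype=float)   # a common 3x3 sharpen kernel

def preprocess(ir, vis, tophat_size=15):
    """Sharpen the IR image and add a morphological (white) top-hat to the
    visible image; the assignment of filter to source is an assumption here."""
    ir_sharp = convolve(np.asarray(ir, dtype=float), SHARPEN_3x3, mode='nearest')
    vis_f = np.asarray(vis, dtype=float)
    vis_hat = vis_f + white_tophat(vis_f, size=tophat_size)   # top-hat as enhancement
    return ir_sharp, vis_hat

def fuse_dwt(ir, vis, wavelet='db2'):
    """Single-level DWT fusion: averaged ("mean-weighted" stand-in) approximation
    coefficients and max-absolute ("max-weighted" stand-in) detail coefficients."""
    cA1, d1 = pywt.dwt2(ir, wavelet)
    cA2, d2 = pywt.dwt2(vis, wavelet)
    cA = 0.5 * (cA1 + cA2)
    details = tuple(np.where(np.abs(a) >= np.abs(b), a, b) for a, b in zip(d1, d2))
    return pywt.idwt2((cA, details), wavelet)       # IDWT reconstruction
```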


2019 ◽  
Vol 14 (7) ◽  
pp. 658-666
Author(s):  
Kai-jian Xia ◽  
Jian-qiang Wang ◽  
Jian Cai

Background: Lung cancer is one of the most common malignant tumors. Successful diagnosis of lung cancer depends on the accuracy of the images obtained from medical imaging modalities. Objective: The fusion of CT and PET combines the complementary and redundant information of both images and can increase the ease of perception. Since existing fusion methods are not perfect enough and the fusion effect remains to be improved, this paper proposes a novel method, an adaptive PET/CT fusion for lung cancer within the Piella framework. Methods: The algorithm first applies the DTCWT to decompose the PET and CT images into different components. In accordance with the characteristics of the low-frequency and high-frequency components and the features of the PET and CT images, five membership functions are combined to determine the fusion weights for the low-frequency components. To fuse the high-frequency components, the energy difference of the decomposition coefficients is selected as the match measure and the local energy as the activity measure; in addition, a decision factor is determined for the high-frequency components. Results: The proposed method is compared with several pixel-level spatial-domain image fusion algorithms. The experimental results show that the proposed algorithm is feasible and effective. Conclusion: The proposed algorithm better retains and highlights the edge information and texture information of lesions in the fused image.
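
The high-frequency rule can be sketched as below for one pair of corresponding real-valued DTCWT sub-bands; the membership functions for the low-frequency weights are not shown, and the decision threshold alpha and window size are assumed values, not taken from the paper.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def fuse_high_energy_rule(c_pet, c_ct, win=3, alpha=0.6):
    """High-frequency fusion sketched from the abstract: local energy is the
    activity measure, the normalised energy difference is the match measure,
    and a decision factor alpha switches between selection and weighting."""
    a = np.asarray(c_pet, dtype=float)
    b = np.asarray(c_ct, dtype=float)
    e_a = uniform_filter(a ** 2, size=win)            # local energy (activity measure)
    e_b = uniform_filter(b ** 2, size=win)
    match = np.abs(e_a - e_b) / (e_a + e_b + 1e-12)   # energy-difference match measure
    select = np.where(e_a >= e_b, a, b)               # pick the more active coefficient
    w = e_a / (e_a + e_b + 1e-12)                     # energy-proportional weights
    weighted = w * a + (1.0 - w) * b
    return np.where(match > alpha, select, weighted)  # decision factor alpha
```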

