Improving the Spatial Resolution of Real Time Satellite Image Fusion Using 2D Curvelet Transform

2018 ◽  
Vol 7 (2.19) ◽  
pp. 55
Author(s):  
Gandla Maharnisha ◽  
R Veerasundari ◽  
Gandla Roopesh Kumar ◽  
Arunraj .

The fused image retains the structural details of the higher-spatial-resolution panchromatic (PAN) image as well as the rich spectral information of the multispectral (MS) image. Before fusion, a mean-adjustment algorithm based on an Adaptive Median Filter (AMF) and a Hybrid Enhancer (a combination of AMF and Contrast Limited Adaptive Histogram Equalization, CLAHE) are used for pre-processing. The conventional Principal Component image fusion method is then compared with a newly modified Curvelet transform image fusion method. Principal Component fusion improves spatial resolution but may introduce spectral degradation in the output image; Curvelet transform fusion can be used to overcome this degradation. The Curvelet transform represents edges with curves and extracts detailed information from the image. The Curvelet transform is applied to the acquired low-frequency approximate component of the PAN image and to the high-frequency detail components of the PAN and MS images. Peak Signal-to-Noise Ratio (PSNR) and Root Mean Square Error (RMSE) are measured to evaluate the image fusion accuracy.
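
A minimal sketch of the pre-processing and evaluation steps described above, using OpenCV and NumPy; the plain median filter stands in for the AMF, and the file paths, kernel size, and CLAHE settings are illustrative assumptions rather than the paper's values.

```python
import cv2
import numpy as np

def hybrid_enhance(gray, ksize=3, clip=2.0, tiles=(8, 8)):
    """Median filter (simple stand-in for the AMF) followed by CLAHE."""
    denoised = cv2.medianBlur(gray, ksize)
    clahe = cv2.createCLAHE(clipLimit=clip, tileGridSize=tiles)
    return clahe.apply(denoised)

def rmse(reference, fused):
    diff = reference.astype(np.float64) - fused.astype(np.float64)
    return float(np.sqrt(np.mean(diff ** 2)))

def psnr(reference, fused, peak=255.0):
    e = rmse(reference, fused)
    return float("inf") if e == 0 else 20.0 * np.log10(peak / e)

pan = cv2.imread("pan.tif", cv2.IMREAD_GRAYSCALE)      # placeholder path
ms  = cv2.imread("ms_band.tif", cv2.IMREAD_GRAYSCALE)  # placeholder path
pan_enhanced = hybrid_enhance(pan)
# ... fusion (PCA or curvelet) would go here; the result is then
# scored against a reference band with psnr() and rmse().
```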

2014 ◽  
Vol 530-531 ◽  
pp. 394-402
Author(s):  
Ze Tao Jiang ◽  
Li Wen Zhang ◽  
Le Zhou

At present, image fusion commonly suffers from blurred edges and sparse texture. To address this problem, this study proposes an image fusion method that combines the Lifting Wavelet with a Median Filter, adopting different fusion rules for each frequency band. For the low-frequency coefficients, the scale coefficients are convolved and squared to enhance edges in the fused image, and the detail information of the original images is then extracted by measuring regional characteristics. For the high-frequency coefficients, the detail sub-images are denoised with a Median Filter and fused using a rule based on neighbourhood spatial frequency and consistency verification. Compared with the Weighted Average and Regional Energy methods, experimental results show that the proposed method retains the most edge and texture information. The method alleviates blurred edges and sparse texture to a certain degree and has strong practical value in image fusion.
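
A hedged sketch of this kind of wavelet-domain fusion, assuming PyWavelets and OpenCV; a standard single-level 2-D DWT stands in for the lifting wavelet, averaging stands in for the paper's edge-enhancing low-frequency rule, and a max-absolute choice stands in for the full neighbourhood-spatial-frequency and consistency-verification rule.

```python
import cv2
import numpy as np
import pywt

def fuse_dwt_median(img_a, img_b, wavelet="haar"):
    """Fuse two grayscale images in the wavelet domain (simplified sketch)."""
    a_lo, (a_h, a_v, a_d) = pywt.dwt2(img_a.astype(np.float32), wavelet)
    b_lo, (b_h, b_v, b_d) = pywt.dwt2(img_b.astype(np.float32), wavelet)

    # Low-frequency band: simple average (the paper enhances edges here instead).
    lo = 0.5 * (a_lo + b_lo)

    def fuse_detail(da, db):
        # Median-filter each detail band to suppress noise, then keep the
        # coefficient with the larger magnitude.
        da, db = cv2.medianBlur(da, 3), cv2.medianBlur(db, 3)
        return np.where(np.abs(da) >= np.abs(db), da, db)

    details = tuple(fuse_detail(x, y)
                    for x, y in ((a_h, b_h), (a_v, b_v), (a_d, b_d)))
    return pywt.idwt2((lo, details), wavelet)
```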


Entropy ◽  
2021 ◽  
Vol 23 (10) ◽  
pp. 1362
Author(s):  
Hui Wan ◽  
Xianlun Tang ◽  
Zhiqin Zhu ◽  
Weisheng Li

Multi-focus image fusion is an important method for combining the focused parts of several source multi-focus images into a single fully focused image. The key challenge is how to accurately detect the focused regions, especially when the source images captured by cameras exhibit anisotropic blur and misregistration. This paper proposes a new multi-focus image fusion method based on the multi-scale decomposition of complementary information. First, the method applies two structurally complementary groups of large-scale and small-scale decomposition schemes to perform a two-scale, double-layer singular value decomposition of each image and obtain its low-frequency and high-frequency components. The low-frequency components are then fused by a rule that integrates local image energy with edge energy. The high-frequency components are fused by the parameter-adaptive pulse-coupled neural network (PA-PCNN) model, and according to the feature information contained in each decomposition layer of the high-frequency components, different detailed features are selected as the external stimulus input of the PA-PCNN. Finally, from the two structurally complementary decompositions of the source images and the fusion of the high- and low-frequency components, two initial decision maps with complementary information are obtained; refining these initial decision maps yields the final fusion decision map that completes the image fusion. The proposed method is compared with 10 state-of-the-art approaches to verify its effectiveness. The experimental results show that it distinguishes focused and non-focused areas more accurately for both registered and unregistered images, and its subjective and objective evaluation indicators are slightly better than those of the existing methods.
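
An illustrative sketch of a low-frequency rule in the spirit of "local energy plus edge energy", using OpenCV; the window size, the Laplacian as the edge-energy measure, and the equal weighting are assumptions rather than the paper's exact formulation.

```python
import cv2
import numpy as np

def low_freq_fusion(lf_a, lf_b, win=7, alpha=0.5):
    """Pick, per pixel, the low-frequency coefficient with higher activity."""
    def activity(lf):
        lf = lf.astype(np.float32)
        local_energy = cv2.boxFilter(lf ** 2, -1, (win, win))
        edge_energy = cv2.boxFilter(cv2.Laplacian(lf, cv2.CV_32F) ** 2, -1, (win, win))
        return alpha * local_energy + (1.0 - alpha) * edge_energy

    mask = activity(lf_a) >= activity(lf_b)
    return np.where(mask, lf_a, lf_b)
```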


2020 ◽  
Vol 2020 ◽  
pp. 1-8 ◽  
Author(s):  
Hui Zhang ◽  
Xu Ma ◽  
Yanshan Tian

In order to improve the clarity of fused images and address the fact that visible-light fusion results are affected by illumination and weather, a fusion method for infrared and visible images aimed at night-vision context enhancement is proposed. First, a guided filter is used to enhance the details of the visible image. Then, the enhanced visible image and the infrared image are decomposed by the curvelet transform. An improved sparse representation is used to fuse the low-frequency part, while the high-frequency part is fused with a parameter-adaptive pulse-coupled neural network. Finally, the fusion result is obtained by the inverse curvelet transform. The experimental results show that the proposed method performs well in detail processing, edge protection, and preservation of source-image information.
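
A hedged sketch of the guided-filter detail-enhancement step, assuming the guidedFilter implementation from opencv-contrib (cv2.ximgproc); the radius, regularization, and detail gain are illustrative values, not the paper's.

```python
import cv2
import numpy as np

def enhance_details(visible, radius=8, eps=0.01, gain=1.5):
    """Boost the detail layer of a grayscale visible image via a guided filter."""
    vis = visible.astype(np.float32) / 255.0
    # Self-guided filtering gives an edge-preserving base layer.
    base = cv2.ximgproc.guidedFilter(guide=vis, src=vis, radius=radius, eps=eps)
    detail = vis - base                       # high-frequency residual
    enhanced = np.clip(base + gain * detail, 0.0, 1.0)
    return (enhanced * 255.0).astype(np.uint8)
```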


Author(s):  
Alaa A. Abdullatif ◽  
Firas A. Abdullatif ◽  
Amna Al Safar

Multi-focus image fusion combines two or more differently focused images into a single image with a more complete description of the scene; the purpose of image fusion is to generate one image by combining information from several source images of the same scene. In this paper, a hybrid pixel-level multi-focus image fusion method operating in both the spatial and transform domains is proposed. The method is applied to multi-focus source images in the YCbCr color space. As a first step, a two-level stationary wavelet transform (SWT) is applied to the Y channel of the two source images, and the fused Y channel is obtained by applying several fusion rules. The Cb and Cr channels of the source images are fused using principal component analysis (PCA). The performance of the proposed method is evaluated in terms of PSNR, RMSE, and SSIM. The results show that the fusion quality of the proposed algorithm is better than that of several other fusion methods, including SWT alone, PCA with RGB source images, and PCA with YCbCr source images.
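
A minimal sketch of the colour-space split described above, assuming OpenCV and PyWavelets; the wavelet choice, the level-2 crop, the placeholder file names, and the first-principal-component weighting of the chroma channels are illustrative assumptions.

```python
import cv2
import numpy as np
import pywt

def pca_fuse(band_a, band_b):
    """Blend two bands using weights from the first principal component."""
    data = np.vstack([band_a.ravel(), band_b.ravel()]).astype(np.float64)
    eigvals, eigvecs = np.linalg.eigh(np.cov(data))
    w = np.abs(eigvecs[:, np.argmax(eigvals)])
    w /= w.sum()
    return w[0] * band_a + w[1] * band_b

img_a = cv2.imread("focus_left.png")           # placeholder paths
img_b = cv2.imread("focus_right.png")
ycc_a = cv2.cvtColor(img_a, cv2.COLOR_BGR2YCrCb).astype(np.float32)
ycc_b = cv2.cvtColor(img_b, cv2.COLOR_BGR2YCrCb).astype(np.float32)

# swt2 needs dimensions divisible by 2**level, so crop both images first.
h, w = (s - s % 4 for s in ycc_a.shape[:2])
ycc_a, ycc_b = ycc_a[:h, :w], ycc_b[:h, :w]

# Two-level stationary wavelet decomposition of the luminance channels;
# a per-band fusion rule would then be applied to these coefficients.
coeffs_a = pywt.swt2(ycc_a[..., 0], "db2", level=2)
coeffs_b = pywt.swt2(ycc_b[..., 0], "db2", level=2)

# Chroma channels fused with the PCA weights (OpenCV stores them as Cr, Cb).
fused_cr = pca_fuse(ycc_a[..., 1], ycc_b[..., 1])
fused_cb = pca_fuse(ycc_a[..., 2], ycc_b[..., 2])
```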


2013 ◽  
Vol 457-458 ◽  
pp. 1097-1101
Author(s):  
Jun Yong Ma ◽  
Sheng Wei Zhang ◽  
Cai Bing Yue

An image fusion method based on fuzzy regional characteristics is proposed in this paper. After a multi-resolution decomposition of the image, k-means clustering is first applied to the low-frequency components of each layer to partition the low-frequency image into an important region, a sub-important region, and a background region. Then, all regions of the image are fuzzified, and fusion strategies are determined according to their fuzzy membership degrees. Finally, the fusion result is obtained by reconstruction from the multi-resolution image representation. Experimental results and fusion quality assessments show the effectiveness of the proposed fusion method.
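
An illustrative sketch of the regional-partition step, assuming scikit-learn; k-means with k = 3 groups the low-frequency coefficients into background, sub-important, and important regions, ordered here by mean intensity, which is an assumption rather than the paper's criterion.

```python
import numpy as np
from sklearn.cluster import KMeans

def partition_regions(low_freq, k=3, seed=0):
    """Label each pixel of a low-frequency band as 0 (background) .. k-1 (important)."""
    flat = low_freq.reshape(-1, 1).astype(np.float64)
    labels = KMeans(n_clusters=k, n_init=10, random_state=seed).fit_predict(flat)
    labels = labels.reshape(low_freq.shape)
    # Re-order cluster ids by mean coefficient so higher id = more important region.
    order = np.argsort([low_freq[labels == i].mean() for i in range(k)])
    remap = {old: new for new, old in enumerate(order)}
    return np.vectorize(remap.get)(labels)
```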


Author(s):  
S. Srimathi ◽  
G. Yamuna ◽  
R. Nanmaran

Objective: Image fusion-based cancer classification models are used to diagnose cancer and assess medical problems at earlier stages, helping doctors and other health-care professionals plan treatment accordingly. Methods: In this work, a novel Curvelet transform-based image fusion method is developed. CT and PET scan images of benign tumors are fused using the proposed fusion algorithm, and in the same way MRI and PET scan images of malignant tumors are fused, to achieve the combined benefits of the individual imaging techniques. A marker-controlled watershed algorithm is then applied to the fused image to segment the cancer-affected area, and color, shape, and texture-based features are extracted from the segmented image. These features form a data set that is given as input to three classifiers, namely a neural network classifier, a Random Forest classifier, and a K-NN classifier, to determine the nature of the cancer; the classifier output is the Normal, Benign, or Malignant category. Results: The performance of the proposed fusion algorithm is compared with existing fusion techniques using PSNR, SSIM, Entropy, Mean, and Standard Deviation, and the Curvelet transform-based fusion method performs better than the existing methods on all five parameters. The classifiers are evaluated using Accuracy, Sensitivity, and Specificity; the K-NN classifier performs best among the three, with an overall accuracy of 94%, a sensitivity of 88%, and a specificity of 84%. Conclusion: The proposed Curvelet transform-based image fusion method combined with the K-NN classifier provides better results than the other two classifiers and than using either input image individually.
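
A brief sketch of the final classification and evaluation stage, assuming scikit-learn; the feature matrix, the two-class simplification (benign vs. malignant), and the neighbour count are placeholder assumptions, since the paper's extracted colour/shape/texture features and three-class labels are not reproduced here.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import confusion_matrix

rng = np.random.default_rng(0)
X_train = rng.random((100, 12))                # placeholder feature vectors
y_train = rng.integers(0, 2, 100)              # 0 = benign, 1 = malignant (example)
X_test = rng.random((25, 12))
y_test = rng.integers(0, 2, 25)

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
tn, fp, fn, tp = confusion_matrix(y_test, knn.predict(X_test), labels=[0, 1]).ravel()

accuracy    = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)                   # true-positive rate
specificity = tn / (tn + fp)                   # true-negative rate
print(f"accuracy={accuracy:.2f} sensitivity={sensitivity:.2f} specificity={specificity:.2f}")
```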

