FUSION OF HYPERSPECTRAL AND MULTISPECTRAL IMAGERY WITH REGRESSION KRIGING AND THE LULU OPERATORS; A COMPARISON

Author(s):  
N. Jeevanand  
P. A. Verma  
S. Saran

<p><strong>Abstract.</strong> In today's digital world there is a large demand for high-resolution satellite imagery. Low-resolution images may contain relevant information that has to be integrated with a high-resolution image to obtain the required product. Image fusion fulfils this need: it merges images of different resolutions into a single image whose information content is drawn from both inputs. Image fusion was conducted with two different algorithms: regression kriging and the LULU operators. First, regression kriging estimates the value of a dependent variable at unsampled locations with the help of auxiliary variables. Here we used regression kriging with the Hyperion image bands as the response variables and the LISS III image bands as the explanatory variables; the fused image thus carries the spectral detail of the Hyperion image and the spatial detail of the LISS III image. Second, the LULU operators are image-processing methods that can also be used for image fusion, and we explored their use to fuse the Hyperion and LISS III images. The LULU approach works in three stages: decomposition, fusion, and reconstruction. The fused images from both techniques were compared for spectral quality (correlation) and spatial quality (entropy). The study concludes that the fused image obtained with regression kriging is of better quality than that obtained with the LULU operators.</p>
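The regression step described above (response band regressed on explanatory bands, with the trend then predicted at full resolution) can be sketched as follows. This is a minimal illustration with hypothetical array layouts, not the paper's implementation; the kriging of the residuals, which regression kriging would add to the trend, is only returned here as the residual vector.

```python
import numpy as np

def regression_predict(hyp_coarse, liss_coarse, liss_fine):
    """Regression step of regression kriging (illustrative layout).

    hyp_coarse : (n,) response band values at coarse-grid pixels.
    liss_coarse: (n, k) explanatory bands resampled to the coarse grid.
    liss_fine  : (m, k) explanatory bands at full resolution.

    Returns the regression trend at full resolution plus the coarse-grid
    residuals, which full regression kriging would krige and add back.
    """
    X = np.column_stack([np.ones(len(hyp_coarse)), liss_coarse])
    beta, *_ = np.linalg.lstsq(X, hyp_coarse, rcond=None)
    residuals = hyp_coarse - X @ beta           # input to the kriging step
    Xf = np.column_stack([np.ones(len(liss_fine)), liss_fine])
    return Xf @ beta, residuals
```

With an exactly linear relationship between the bands, the trend alone reproduces the response and the residuals vanish.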

2011  
Vol 1 (3)  
Author(s):  
T. Sumathi  
M. Hemalatha

Abstract. Image fusion is the method of combining relevant information from two or more images into a single image that is more informative than any of the inputs. Fusion methods include the discrete wavelet transform, Laplacian-pyramid-based transforms, curvelet-based transforms, etc. These transform-domain methods give the best spatial and spectral quality in the fused image compared with purely spatial methods of fusion. In particular, the wavelet transform has good time-frequency characteristics. However, this characteristic cannot be extended easily to two or more dimensions, since separable wavelets built from one-dimensional wavelets have limited directional selectivity. This paper introduces the second-generation curvelet transform and uses it to fuse images. The method is compared against those described above to show that useful information can be extracted from the source and fused images, producing fused images that offer clear, detailed information.
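A curvelet implementation is too involved to sketch here, but the discrete-wavelet baseline this abstract compares against has a compact form. The sketch below, an assumption of the usual scheme rather than this paper's code, does a single-level 2-D Haar DWT of two registered images, averages the approximation subband, takes the larger-magnitude coefficient in each detail subband, and inverts.

```python
import numpy as np

def haar2(x):
    """One level of the 2-D Haar DWT (even-sized input assumed)."""
    a, b = x[0::2, 0::2], x[0::2, 1::2]
    c, d = x[1::2, 0::2], x[1::2, 1::2]
    return ((a + b + c + d) / 2,    # approximation subband
            (a + b - c - d) / 2,    # detail subband
            (a - b + c - d) / 2,    # detail subband
            (a - b - c + d) / 2)    # detail subband

def ihaar2(LL, d1, d2, d3):
    """Inverse of haar2."""
    x = np.empty((2 * LL.shape[0], 2 * LL.shape[1]))
    x[0::2, 0::2] = (LL + d1 + d2 + d3) / 2
    x[0::2, 1::2] = (LL + d1 - d2 - d3) / 2
    x[1::2, 0::2] = (LL - d1 + d2 - d3) / 2
    x[1::2, 1::2] = (LL - d1 - d2 + d3) / 2
    return x

def dwt_fuse(img1, img2):
    """Average the approximation, keep the max-|.| detail coefficients."""
    c1, c2 = haar2(img1), haar2(img2)
    fused = [(c1[0] + c2[0]) / 2]
    for d1, d2 in zip(c1[1:], c2[1:]):
        fused.append(np.where(np.abs(d1) >= np.abs(d2), d1, d2))
    return ihaar2(*fused)
```

The transform is orthogonal, so `ihaar2(*haar2(x))` reconstructs `x` exactly, and fusing an image with itself returns the image unchanged.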


2018  
Vol 11 (4)  
pp. 1937-1946  
Author(s):  
Nancy Mehta  
Sumit Budhiraja

Multimodal medical image fusion aims at minimizing redundancy and collecting the relevant information from input images acquired with different medical sensors. The main goal is to produce a single fused image that carries more information and has higher efficiency for medical applications. In this paper, a modified fusion method is proposed in which NSCT decomposition is applied to the wavelet coefficients obtained after wavelet decomposition. NSCT, being a multidirectional, shift-invariant transform, provides better results. A guided filter is used for the fusion of the high-frequency coefficients on account of its edge-preserving property. Phase congruency is used for the fusion of the low-frequency coefficients because its insensitivity to illumination contrast makes it suitable for medical images. The simulated results show that the proposed technique performs better in terms of entropy, structural similarity index, and the Piella metric. The fusion response of the proposed technique is also compared with other fusion approaches, demonstrating the effectiveness of the obtained fusion results.
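The guided filter named above has a well-known closed form (He et al.): fit a local linear model of the output on the guidance image in each window. A minimal numpy sketch follows; the O(r²) box filter and parameter values are illustrative choices, not the paper's NSCT pipeline.

```python
import numpy as np

def _box(x, r):
    """Mean filter over a (2r+1)x(2r+1) window, edge-padded (simple O(r^2) form)."""
    xp = np.pad(x, r, mode='edge')
    h, w = x.shape
    acc = np.zeros((h, w), dtype=float)
    for i in range(2 * r + 1):
        for j in range(2 * r + 1):
            acc += xp[i:i + h, j:j + w]
    return acc / (2 * r + 1) ** 2

def guided_filter(I, p, r=2, eps=1e-4):
    """Edge-preserving smoothing of p, guided by I (He et al.'s formulation)."""
    mI, mp = _box(I, r), _box(p, r)
    cov = _box(I * p, r) - mI * mp      # local covariance of guide and input
    var = _box(I * I, r) - mI * mI      # local variance of the guide
    a = cov / (var + eps)               # per-window linear coefficient
    b = mp - a * mI
    return _box(a, r) * I + _box(b, r)  # averaged local linear models
```

When the guide equals the input and `eps` is tiny, `a` is close to 1 and `b` close to 0 everywhere, so the filter passes the image through almost unchanged; larger `eps` smooths more.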


2013  
Vol 860-863  
pp. 2846-2849  
Author(s):  
Ming Jing Li  
Yu Bing Dong  
Xiao Li Wang

Image fusion is a process which combines relevant information from two or more images into a single image. The aim of fusion is to extract the relevant information for research. Depending on the application and the characteristics of the algorithm, image fusion algorithms can be used to improve image quality. This paper presents a comparative analysis of image fusion algorithms based on the wavelet transform and the Laplacian pyramid. The principle, operation, steps and characteristics of each fusion algorithm are summarized, and the advantages and disadvantages of the different algorithms are compared. The fusion effects of the different algorithms are demonstrated in MATLAB. Experimental results show that the quality of the fused image is improved noticeably.
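The Laplacian-pyramid scheme compared in this paper can be sketched in a few lines (the paper's experiments are in MATLAB; this is an assumed numpy equivalent with block-average downsampling and replication upsampling, not the authors' code). Each pyramid level stores the detail lost by downsampling; fusion takes the larger-magnitude detail and averages the coarsest level.

```python
import numpy as np

def _down(x):
    """2x downsample by 2x2 block averaging."""
    return (x[0::2, 0::2] + x[0::2, 1::2] + x[1::2, 0::2] + x[1::2, 1::2]) / 4

def _up(x):
    """2x upsample by pixel replication."""
    return np.kron(x, np.ones((2, 2)))

def lap_pyramid(x, levels):
    """Detail (Laplacian) layers plus the coarsest approximation."""
    pyr = []
    for _ in range(levels):
        small = _down(x)
        pyr.append(x - _up(small))   # detail lost at this level
        x = small
    pyr.append(x)                    # coarsest approximation
    return pyr

def fuse_laplacian(img1, img2, levels=2):
    """Max-|.| on detail layers, average on the coarsest approximation."""
    p1, p2 = lap_pyramid(img1, levels), lap_pyramid(img2, levels)
    fused = [np.where(np.abs(a) >= np.abs(b), a, b)
             for a, b in zip(p1[:-1], p2[:-1])]
    fused.append((p1[-1] + p2[-1]) / 2)
    x = fused[-1]                    # reconstruct coarse-to-fine
    for d in reversed(fused[:-1]):
        x = _up(x) + d
    return x
```

Because each detail layer is defined as exactly what downsampling removes, reconstruction is lossless: fusing an image with itself returns the image.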


2020  
pp. 407-410  
Author(s):  
Jakir Hussain G K  
Tamilanban R  
Tamilselvan K S  
Vinoth Saravanan M

Multimodal image fusion is the process of combining relevant information from multiple imaging modalities. A fused image carries a richer description of the scene than any single input, and image fusion techniques are widely used in real-world applications such as agriculture, robotics and informatics, aeronautics, the military, medicine, and pedestrian detection. We give an outline of multimodal medical image fusion methods developed over time. The fusion of medical images in many combinations assists in utilizing them for medical diagnostics and examination. There has been remarkable progress in the fields of deep learning, AI and bio-inspired optimization, and effective utilization of these techniques can further improve the effectiveness of image fusion algorithms.


Sensors  
2019  
Vol 19 (11)  
pp. 2619  
Author(s):  
Luxiao He  
Mi Wang  
Ying Zhu  
Xueli Chang  
Xiaoxiao Feng

Ratio transformation methods are widely used in image fusion for high-resolution optical satellites. The premise for using a ratio transformation is that there is a zero-bias linear relationship between the panchromatic band and the corresponding multi-spectral bands. In reality, however, there are bias terms and residual terms with large values, depending on the sensor, the spectral response ranges, and the land-cover types. To address this problem, this paper proposes a panchromatic and multi-spectral image fusion method based on panchromatic spectral decomposition (PSD). The low-resolution panchromatic and multi-spectral images are used to solve for the proportionality coefficients, the bias coefficients, and the residual matrices. These coefficients are then applied to the high-resolution panchromatic band to decompose it into high-resolution multi-spectral bands. Experiments show that this method gives the fused image high color fidelity and sharpness, that it is robust to different sensors and scene features, and that it can be applied to the panchromatic and multi-spectral fusion of high-resolution optical satellites.
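The coefficient-estimation step described above, fitting PAN as a linear combination of the multi-spectral bands plus a bias, with the leftover kept as a residual matrix, can be sketched with a least-squares solve. The array layout and function name are assumptions for illustration; the paper's exact decomposition may differ.

```python
import numpy as np

def solve_psd_coefficients(pan_low, ms_low):
    """Fit PAN ~ sum_i c_i * MS_i + b at low resolution.

    pan_low: (H, W) low-resolution panchromatic band.
    ms_low : (H, W, B) low-resolution multi-spectral bands.
    Returns proportionality coefficients c, bias b, and the residual matrix.
    """
    B = ms_low.shape[-1]
    X = np.column_stack([ms_low.reshape(-1, B), np.ones(pan_low.size)])
    coef, *_ = np.linalg.lstsq(X, pan_low.ravel(), rcond=None)
    c, b = coef[:B], coef[B]
    residual = pan_low - (ms_low @ c + b)   # nonzero in reality, per the abstract
    return c, b, residual
```

In the fusion step these coefficients would be substituted into the high-resolution panchromatic band to apportion it among the bands.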


Author(s):  
D. Vijayan  
G. Ravi Shankar  
T. Ravi Shankar

An attempt has been made to compare multispectral Resourcesat-2 LISS III and Hyperion imagery for a selected area at the sub-class level of the major land use/land cover classes. On-screen interpretation of LISS III (23.5 m resolution) was compared with Spectral Angle Mapper (SAM) classification of Hyperion (30 m resolution). Preliminary interpretation of both images showed that features such as fallow, built-up and wasteland classes are clearer in the Hyperion image than in LISS III, and that Hyperion is comparable with high-resolution data. Even canopy types of vegetation classes, aquatic vegetation and aquatic systems are distinct in the Hyperion data. Accuracy assessment of the SAM classification of Hyperion against the common classification scheme followed for LISS III showed no significant difference between the two; however, more vegetation classes could be classified with SAM, although built-up and fallow classes were misclassified. The advantages of Hyperion over visual interpretation are the differentiation of crop canopy type, and the crop stage can be confirmed from the spectral signature. The red-edge phenomenon was found for different canopy types of the study area and clearly differentiated the stage of vegetation, which was verified with a high-resolution image. For a specific area, Hyperion imagery is on a par with high-resolution data used alongside LISS III data.
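The SAM classifier used above assigns each pixel to the reference spectrum with the smallest spectral angle, arccos of the normalized dot product between the pixel spectrum and the reference. A minimal sketch, with assumed array shapes rather than the study's software:

```python
import numpy as np

def sam_classify(image, refs):
    """Spectral Angle Mapper classification.

    image: (H, W, B) reflectance cube; refs: (C, B) reference spectra.
    Returns an (H, W) map of the index of the closest reference.
    """
    flat = image.reshape(-1, image.shape[-1])
    cos = (flat @ refs.T) / (
        np.linalg.norm(flat, axis=1, keepdims=True) * np.linalg.norm(refs, axis=1)
    )
    angles = np.arccos(np.clip(cos, -1.0, 1.0))   # spectral angle per class
    return angles.argmin(axis=1).reshape(image.shape[:2])
```

Because the angle ignores vector magnitude, SAM is insensitive to overall illumination differences, which is one reason it is popular for hyperspectral data such as Hyperion.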


Author(s):  
Xiuming Sun  
Weina Wu  
Peng Geng  
Lin Lu  
...  

In order to achieve the multi-focus image fusion task, a sparse representation method based on quaternions is proposed in this paper. Firstly, drawing on tools from computational mathematics, the RGB color information of each pixel in a color image is represented by a quaternion, so that each pixel is processed as a whole vector and the relationships between the three color channels are preserved. Secondly, a quaternion-valued dictionary and quaternion-valued sparse coefficients are obtained using our proposed sparse representation model. Thirdly, coefficient fusion is carried out using the "max-L1" rule. Finally, the fused sparse coefficients and the dictionary are used for image reconstruction to obtain the quaternion fused image, which is then converted back into an RGB color multi-focus fused image. The experimental results show that the method achieves good results in both visual quality and objective evaluation.
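The "max-L1" rule in the third step is straightforward once the sparse codes exist: for each image patch, keep the coefficient vector with the larger L1 norm, on the intuition that a larger code indicates a better-focused patch. A sketch over real-valued codes (the quaternion algebra and dictionary learning are omitted; shapes are assumed for illustration):

```python
import numpy as np

def max_l1_fuse(c1, c2):
    """'max-L1' fusion rule for sparse codes.

    c1, c2: (n_atoms, n_patches) coefficient matrices, one column per patch.
    For each patch, keep the column whose L1 norm is larger.
    """
    n1 = np.abs(c1).sum(axis=0)           # L1 norm of each patch's code
    n2 = np.abs(c2).sum(axis=0)
    return np.where(n1 >= n2, c1, c2)     # broadcasts the per-column choice
```

The fused codes are then multiplied by the dictionary to reconstruct the fused patches.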

