Fusion of Hyperspectral and Multispectral Images Based on a Centralized Non-local Sparsity Model of Abundance Maps

Tecnura, 2020, Vol. 24 (66), pp. 62-75
Author(s): Edwin Vargas, Kevin Arias, Fernando Rojas, Henry Arguello

Objective: Hyperspectral (HS) imaging systems are commonly used in a diverse range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of those tasks. In recent years, fusing the information of an HS image with high-spatial-resolution multispectral (MS) or panchromatic (PAN) images has been widely studied as a means of enhancing spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image that is assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem.

Methodology: The dictionaries are learned from the estimated abundance data, taking advantage of the correlation between abundance maps along the depth dimension and the non-local self-similarity over the spatial domain. Then, conditioned on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm.

Results: Experimental results with real data show that the proposed method outperforms state-of-the-art methods under different quantitative assessments.

Conclusions: In this work, we propose a hyperspectral and multispectral image fusion method based on a non-local centralized sparse representation on abundance maps. This model allows us to include the non-local redundancy of abundance maps in the fusion problem through spectral unmixing, improving the performance of sparsity-based fusion approaches.
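
As an illustration of the centralized sparse coding idea described above, the following Python sketch codes an abundance-map patch over a learned dictionary while pulling its coefficients toward the mean code of its non-local similar patches. The names (D, beta, lam) and the ISTA solver are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch: centralized sparse coding of an abundance-map patch.
import numpy as np

def centralized_sparse_code(x, D, beta, lam=0.05, n_iter=100):
    """Solve min_a 0.5*||x - D a||^2 + lam*||a - beta||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the data term
    a = beta.copy()
    for _ in range(n_iter):
        z = a - (D.T @ (D @ a - x)) / L    # gradient step on the fidelity term
        d = z - beta
        a = beta + np.sign(d) * np.maximum(np.abs(d) - lam / L, 0.0)  # shifted soft-threshold
    return a

def nonlocal_center(codes, sim_index):
    """beta: average code of the most similar (non-local) patches."""
    return codes[sim_index].mean(axis=0)
```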

2020, Vol. 12 (6), pp. 1009
Author(s): Xiaoxiao Feng, Luxiao He, Qimin Cheng, Xiaoyi Long, Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution but low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) but low spectral resolution. HS–MS image fusion technology can combine the advantages of both, which is beneficial for accurate feature classification. Nevertheless, in real cases heterogeneous sensors always introduce temporal differences between the LSR-HS and HSR-MS images, so classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extract the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtain the endmembers of the HSR-MS image based on the premise that HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of the HSR-HS image. The fused image is reconstructed from the two resulting matrices. A series of experiments on simulated and real datasets substantiates the effectiveness of our method both quantitatively and visually.
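
A minimal Python sketch of the unmixing-based fusion step, assuming the endmembers E_hs have already been extracted from the LSR-HS image and that the spectral response R degrading HS bands to MS bands is known; high-resolution abundances are then estimated per HSR-MS pixel with nonnegative least squares. This is an illustrative reconstruction of the general approach, not the authors' exact pipeline (which additionally uses an image mask for temporally changed regions).

```python
# Minimal sketch: unmixing-based HS-MS fusion (illustrative, assumed inputs).
import numpy as np
from scipy.optimize import nnls

def fuse_by_unmixing(ms_pixels, E_hs, R):
    """ms_pixels: (N, b_ms) HSR-MS pixels; E_hs: (b_hs, p) endmembers; R: (b_ms, b_hs)."""
    E_ms = R @ E_hs                         # endmembers as seen by the MS sensor
    A = np.zeros((ms_pixels.shape[0], E_hs.shape[1]))
    for i, y in enumerate(ms_pixels):
        A[i], _ = nnls(E_ms, y)             # nonnegative abundance estimate
    A /= np.maximum(A.sum(axis=1, keepdims=True), 1e-12)  # optional sum-to-one
    return A @ E_hs.T                       # reconstructed HSR-HS pixels, shape (N, b_hs)
```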


2020, Vol. 2020, pp. 1-8
Author(s): Shao-lei Zhang, Guang-yuan Fu, Hong-qiao Wang, Yu-qing Zhao

In this paper, we propose a novel hyperspectral image super-resolution method based on superpixel spectral unmixing using a coupled encoder-decoder network. The hyperspectral and multispectral images are fused to generate a high-resolution hyperspectral image through a spectral unmixing framework with a low-rank constraint. Specifically, the endmember and abundance information is extracted via a coupled encoder-decoder network that integrates prior knowledge for unmixing. The coupled network consists of two encoders and one shared decoder, where spectral information is preserved through the encoders. The multispectral image is clustered into superpixels to exploit self-similarity, and the superpixels are then unmixed to obtain an abundance matrix. By imposing a low-rank constraint on the abundance matrix, we further improve the super-resolution performance. Experiments on the CAVE and Harvard datasets indicate that our super-resolution method outperforms the compared methods in terms of quantitative evaluation and visual quality.
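
A minimal PyTorch sketch of what such a coupled encoder-decoder could look like: two encoders map LR-HS pixels and HR-MS superpixel spectra to abundances, and a shared linear decoder acts as the endmember matrix. Band counts, layer sizes, and the spectral response R are assumptions, not the authors' architecture; the low-rank constraint would enter the training loss as a nuclear-norm penalty on the abundance matrix.

```python
# Minimal sketch: coupled encoder-decoder for unmixing-based fusion (assumed dims).
import torch
import torch.nn as nn

class CoupledUnmixingNet(nn.Module):
    def __init__(self, b_hs=31, b_ms=3, n_end=10):
        super().__init__()
        self.enc_hs = nn.Sequential(nn.Linear(b_hs, 64), nn.ReLU(),
                                    nn.Linear(64, n_end), nn.Softmax(dim=-1))
        self.enc_ms = nn.Sequential(nn.Linear(b_ms, 64), nn.ReLU(),
                                    nn.Linear(64, n_end), nn.Softmax(dim=-1))
        self.decoder = nn.Linear(n_end, b_hs, bias=False)   # weights ~ endmember matrix
        # placeholder spectral response mapping HS bands to MS bands (assumed known)
        self.register_buffer("R", torch.rand(b_ms, b_hs))

    def forward(self, x_hs, x_ms):
        a_hs = self.enc_hs(x_hs)                 # abundances of LR-HS pixels
        a_ms = self.enc_ms(x_ms)                 # abundances of HR-MS superpixels
        rec_hs = self.decoder(a_hs)              # reconstruct LR-HS spectra
        rec_ms = self.decoder(a_ms) @ self.R.T   # reconstruct MS spectra via R
        return rec_hs, rec_ms, a_ms

# Training would minimize the two reconstruction losses plus a low-rank penalty
# on the abundance matrix, e.g. torch.linalg.matrix_norm(a_ms, 'nuc').
```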


Author(s): C. Lanaras, E. Baltsavias, K. Schindler

In this work, we jointly process high spectral and high geometric resolution images and exploit their synergies to (a) generate a fused image of high spectral and geometric resolution, and (b) improve (linear) spectral unmixing of hyperspectral endmembers at the subpixel level with respect to the pixel size of the hyperspectral image. We assume that the two images are radiometrically corrected and geometrically co-registered. The scientific contributions of this work are (a) a simultaneous approach to image fusion and hyperspectral unmixing, (b) the enforcement during unmixing of several physically plausible constraints that are all well known but typically not used in combination, and (c) the use of efficient, state-of-the-art mathematical optimization tools to implement the processing. The results of our joint fusion and unmixing have the potential to enable more accurate and detailed semantic interpretation of objects and their properties in hyperspectral and multispectral images, with applications in environmental mapping, monitoring, and change detection. In our experiments, the proposed method always improves the fusion compared to competing methods, reducing the RMSE by between 4% and 53%.
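
The physically plausible constraints mentioned above typically include nonnegativity and sum-to-one of the abundances. Below is a minimal sketch, under assumed notation rather than the authors' solver, of enforcing both with a projected-gradient update that projects each pixel's abundance vector onto the probability simplex.

```python
# Minimal sketch: simplex-constrained abundance update for joint fusion/unmixing.
import numpy as np

def project_simplex(a):
    """Euclidean projection of each row of a onto the unit simplex."""
    u = np.sort(a, axis=1)[:, ::-1]
    css = np.cumsum(u, axis=1)
    k = np.arange(1, a.shape[1] + 1)
    rho = (u - (css - 1) / k > 0).sum(axis=1)
    theta = (css[np.arange(a.shape[0]), rho - 1] - 1) / rho
    return np.maximum(a - theta[:, None], 0)

def update_abundances(Y_m, E_m, A, step=1e-3, n_iter=200):
    """Projected gradient for min_A ||Y_m - A E_m^T||_F^2, rows of A on the simplex.
    Y_m: (N, b_m) multispectral pixels; E_m: (b_m, p) endmembers seen by the MS sensor."""
    for _ in range(n_iter):
        grad = (A @ E_m.T - Y_m) @ E_m
        A = project_simplex(A - step * grad)
    return A
```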


2013, Vol. 710, pp. 603-607
Author(s): Xiao Dong Zhao, Jian Zhong Cao, Hui Zhang, Guang Sen Liu, Hua Wang, ...

In this paper, we propose a new single-image super-resolution (SR) reconstruction algorithm based on block sparse representation and a regularization constraint. First, the discrete Karhunen-Loève (K-L) transform is used to learn a compact sub-dictionary for each specific image block. Combined with a threshold-based selection of the training data, the transform bases are generated adaptively for the corresponding sparse domain. Second, a non-local self-similarity (NLSS) regularization term is introduced into the sparse reconstruction objective function as prior knowledge to refine the reconstruction result. Simulation results show that the proposed algorithm achieves markedly better results in terms of PSNR and SSIM; it both enhances edges and suppresses noise effectively, demonstrating better robustness.
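
A minimal sketch of the block-adaptive K-L (PCA) sub-dictionary idea, assuming a k-means clustering of patches and a simple hard threshold on the transform coefficients; the NLSS regularization term and the full SR reconstruction loop are omitted, and all parameters are illustrative.

```python
# Minimal sketch: per-cluster K-L (PCA) sub-dictionaries with coefficient thresholding.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import PCA

def learn_subdictionaries(patches, n_clusters=8, n_atoms=16):
    """patches: (N, d) vectorized image blocks."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(patches)
    dicts = []
    for c in range(n_clusters):
        block = patches[labels == c]
        k = min(n_atoms, block.shape[0], block.shape[1])
        dicts.append(PCA(n_components=k).fit(block))   # K-L basis of this cluster
    return dicts, labels

def sparse_code(patch, pca, threshold=0.02):
    coef = pca.transform(patch[None])[0]
    coef[np.abs(coef) < threshold] = 0.0               # hard-threshold small K-L coefficients
    return pca.inverse_transform(coef[None])[0]
```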


Sensors, 2020, Vol. 20 (5), pp. 1536
Author(s): Nallig Leal, Eduardo Zurek, Esmeide Leal

Magnetic Resonance (MR) imaging is a diagnostic technique that produces noisy images, which must be filtered before processing to prevent diagnostic errors. However, filtering the noise while keeping fine details is a difficult task. This paper presents a method, based on sparse representations and singular value decomposition (SVD), for non-local denoising of MR images. The proposed method prevents blurring, artifacts, and residual noise. Our method is composed of three stages. The first stage divides the image into sub-volumes and obtains their sparse representation using the K-SVD algorithm; the global influence of the dictionary atoms is then computed to update the dictionary and obtain a better reconstruction of the sub-volumes. In the second stage, based on the sparse representation, the noise-free sub-volume is estimated using a non-local approach and SVD. Each noise-free voxel is reconstructed by aggregating the overlapping voxels according to the rarity of the sub-volumes to which it belongs, which is computed from the global influence of the atoms. The third stage repeats the process with a different sub-volume size to produce a new filtered image, which is averaged with the previously filtered images. The results show that our method outperforms several state-of-the-art methods on both simulated and real data.
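
A minimal sketch of the non-local SVD step, assuming similar sub-volumes have already been grouped: the group is stacked as a matrix, its small singular values (taken to carry mostly noise) are shrunk, and the cleaned rows are returned for aggregation. The shrinkage rule is an illustrative choice, not the authors'.

```python
# Minimal sketch: SVD shrinkage of a group of similar sub-volumes.
import numpy as np

def denoise_group(group, sigma):
    """group: (K, n) matrix of K vectorized similar sub-volumes; sigma: noise std."""
    U, s, Vt = np.linalg.svd(group, full_matrices=False)
    s = np.maximum(s - sigma * np.sqrt(group.shape[0]), 0.0)   # soft-shrink singular values
    return (U * s) @ Vt                                        # denoised group, same shape
```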


Heritage, 2020, Vol. 3 (4), pp. 1046-1062
Author(s): Dimitris Kaimaris, Aristoteles Kandylas

For many decades, multispectral images of the Earth's surface and its objects have been acquired by multispectral sensors on satellites. In recent years, technological evolution has produced similar sensors, much smaller in size and weight, that can be mounted on Unmanned Aerial Vehicles (UAVs), thereby allowing the collection of multispectral images with higher spatial resolution. In this paper, Parrot's small multispectral (MS) camera Sequoia+ is used, and its images are evaluated at two archaeological sites: the Byzantine wall of Thessaloniki, Greece (ground application) and a mosaic floor at the archaeological site of Dion, Greece (aerial application). The camera acquires RGB and MS images simultaneously, so image fusion cannot be performed as in the standard procedure applied to panchromatic (PAN) and MS images from satellite passive systems. Following that direction, i.e., adapting the fusion processes used for satellite PAN and MS images, this paper demonstrates that, with proper digital processing, the RGB and MS images of small MS cameras can yield a fused image with high spatial resolution that retains a large percentage of the spectral information of the original MS image. The high spectral fidelity of the fused images makes it possible to perform high-precision digital measurements at archaeological sites via the MS and RGB data of small MS sensors, such as accurate digital separation of objects, area measurements, and the retrieval of information not readily visible with common RGB sensors.
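
The paper's exact processing chain is not reproduced here, but a component-substitution scheme in the spirit of satellite PAN/MS fusion could look like the following Python sketch, where an RGB-derived intensity acts as a synthetic panchromatic band and each co-registered, upsampled MS band is rescaled by a high-pass ratio (a Brovey-like injection). All function names and parameters are assumptions for illustration.

```python
# Minimal sketch: Brovey-like fusion of high-res RGB intensity with low-res MS bands.
import numpy as np
from scipy.ndimage import zoom, gaussian_filter

def fuse_rgb_ms(rgb, ms, scale):
    """rgb: (H, W, 3) high-res image; ms: (h, w, B) low-res MS image; scale = H / h."""
    intensity = rgb.mean(axis=2)                      # synthetic panchromatic band
    ms_up = np.stack([zoom(ms[..., b], scale, order=1) for b in range(ms.shape[-1])], axis=-1)
    low = gaussian_filter(intensity, sigma=scale)     # proxy for the MS-resolution intensity
    ratio = intensity / np.maximum(low, 1e-6)
    return ms_up * ratio[..., None]                   # inject high-frequency spatial detail
```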


Author(s): Asma Abdolahpoor, Peyman Kabiri

Image fusion is an important concept in remote sensing. Earth observation satellites provide both high-resolution panchromatic and low-resolution multispectral images. Pansharpening aims to fuse a low-resolution multispectral image with a high-resolution panchromatic image, generating a multispectral image with high spatial and spectral resolution. This paper reports a new method to improve the spatial resolution of the final multispectral image. The reported work proposes an image fusion method using the wavelet packet transform (WPT) and principal component analysis (PCA), guided by the textures of the panchromatic image. Initially, adaptive PCA (APCA) is applied to both the multispectral and panchromatic images. Subsequently, WPT is used to decompose the first principal component of the multispectral and panchromatic images, extracting the high-frequency details of both. In areas with similar texture, the spatial details extracted from the panchromatic image are injected into the multispectral image. Experimental results show that the proposed method provides promising results in fusing multispectral images with a high-spatial-resolution panchromatic image and successfully preserves the spectral features of the multispectral image.
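
A minimal sketch of the detail-injection idea: the first principal component of the upsampled MS image is decomposed together with the PAN image, the PAN detail sub-bands replace those of the component, and the PCA is inverted. For brevity an ordinary 2-D DWT from PyWavelets stands in for the full wavelet packet transform, and the texture-similarity test and adaptive PCA are omitted; all parameters are assumptions.

```python
# Minimal sketch: PCA + wavelet detail injection for pansharpening (plain DWT used here).
import numpy as np
import pywt
from sklearn.decomposition import PCA

def wavelet_pca_pansharpen(ms_up, pan, level=2, wavelet="db2"):
    """ms_up: (H, W, B) MS image upsampled to the PAN grid; pan: (H, W) panchromatic."""
    H, W, B = ms_up.shape
    pca = PCA(n_components=B).fit(ms_up.reshape(-1, B))
    pcs = pca.transform(ms_up.reshape(-1, B)).reshape(H, W, B)

    c_pc1 = pywt.wavedec2(pcs[..., 0], wavelet, level=level)
    c_pan = pywt.wavedec2(pan, wavelet, level=level)
    fused = [c_pc1[0]] + list(c_pan[1:])              # keep PC1 approximation, take PAN details
    pcs[..., 0] = pywt.waverec2(fused, wavelet)[:H, :W]

    return pca.inverse_transform(pcs.reshape(-1, B)).reshape(H, W, B)
```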

