Feature Enhancement of Multispectral Images Using Vegetation, Water, and Soil Indices Image Fusion

Author(s):  
M. HemaLatha ◽  
S. Varadarajan
Author(s):  
C. Lanaras ◽  
E. Baltsavias ◽  
K. Schindler

In this work, we jointly process high spectral and high geometric resolution images and exploit their synergies to (a) generate a fused image of high spectral and geometric resolution and (b) improve (linear) spectral unmixing of hyperspectral endmembers at the subpixel level with respect to the pixel size of the hyperspectral image. We assume that the two images are radiometrically corrected and geometrically co-registered. The scientific contributions of this work are (a) a simultaneous approach to image fusion and hyperspectral unmixing, (b) the enforcement of several physically plausible constraints during unmixing that are all well known but typically not used in combination, and (c) the use of efficient, state-of-the-art mathematical optimization tools to implement the processing. The results of our joint fusion and unmixing have the potential to enable more accurate and detailed semantic interpretation of objects and their properties in hyperspectral and multispectral images, with applications in environmental mapping, monitoring, and change detection. In our experiments, the proposed method always improves the fusion compared to competing methods, reducing the RMSE by between 4% and 53%.
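The physically plausible constraints mentioned above typically include non-negative abundances and the abundance sum-to-one condition. As a minimal sketch of that constrained-unmixing building block (not the authors' joint fusion-unmixing method), the classic fully constrained least squares (FCLS) trick enforces both constraints by appending a heavily weighted row of ones to the endmember matrix; `fcls_unmix`, `delta`, and the toy endmembers are illustrative assumptions, not from the paper:

```python
import numpy as np
from scipy.optimize import nnls

def fcls_unmix(pixel, endmembers, delta=1e3):
    """Fully constrained least squares: abundances >= 0 and sum to 1.

    pixel:      (bands,) spectrum of one pixel
    endmembers: (bands, m) endmember signatures in columns
    The sum-to-one constraint is enforced softly by appending a
    heavily weighted row of ones (classic FCLS augmentation).
    """
    bands, m = endmembers.shape
    E = np.vstack([endmembers, delta * np.ones(m)])  # augmented system
    y = np.append(pixel, delta)
    abundances, _ = nnls(E, y)                       # non-negativity via NNLS
    return abundances

# toy example: two endmembers, a pixel that is a 70/30 mixture
E = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [0.5, 0.2]])
y = 0.7 * E[:, 0] + 0.3 * E[:, 1]
a = fcls_unmix(y, E)
# a is close to [0.7, 0.3] and sums to ~1
```

Increasing `delta` tightens the sum-to-one constraint at the cost of a worse-conditioned augmented system; values around 1e3 are a common compromise.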


2012 ◽  
Vol 28 (1) ◽  
pp. 39-54 ◽  
Author(s):  
Kwan-Young Oh ◽  
Hyung-Sup Jung ◽  
Kwang-Jae Lee

Tecnura ◽  
2020 ◽  
Vol 24 (66) ◽  
pp. 62-75
Author(s):  
Edwin Vargas ◽  
Kevin Arias ◽  
Fernando Rojas ◽  
Henry Arguello

Objective: Hyperspectral (HS) imaging systems are commonly used in a wide range of applications that involve detection and classification tasks. However, the low spatial resolution of hyperspectral images may limit the performance of those tasks. In recent years, fusing the information of an HS image with high spatial resolution multispectral (MS) or panchromatic (PAN) images has been widely studied as a means of enhancing spatial resolution. Image fusion has been formulated as an inverse problem whose solution is an HS image assumed to be sparse in an analytic or learned dictionary. This work proposes a non-local centralized sparse representation model on a set of learned dictionaries in order to regularize the conventional fusion problem.

Methodology: The dictionaries are learned from the estimated abundance data, taking advantage of the correlation between abundance maps and the non-local self-similarity over the spatial domain. Then, conditionally on these dictionaries, the fusion problem is solved by an alternating iterative numerical algorithm.

Results: Experimental results with real data show that the proposed method outperforms the state-of-the-art methods under different quantitative assessments.

Conclusions: In this work, we propose a hyperspectral and multispectral image fusion method based on a non-local centralized sparse representation on abundance maps. This model allows us to include the non-local redundancy of abundance maps in the fusion problem via spectral unmixing and improves the performance of sparsity-based fusion approaches.
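At the core of such sparsity-regularized fusion models is a sparse-coding subproblem, commonly solved by iterative soft thresholding. The sketch below shows only that generic ISTA building block, not the paper's non-local centralized model; the dictionary `D`, signal `y`, and regularization weight `lam` are illustrative assumptions:

```python
import numpy as np

def ista_sparse_code(D, y, lam=0.1, n_iter=200):
    """Sparse coding min_x 0.5*||y - D x||^2 + lam*||x||_1 via ISTA.

    D: (n, k) dictionary; y: (n,) signal patch.
    """
    L = np.linalg.norm(D, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)                  # gradient of the quadratic data term
        z = x - g / L                          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
    return x

# trivial dictionary for illustration: coding reduces to soft-thresholding y
D = np.eye(5)
y = np.array([1.0, 0.0, 0.0, 0.5, 0.0])
x = ista_sparse_code(D, y, lam=0.1)
# x -> [0.9, 0, 0, 0.4, 0]: each entry shrunk toward zero by lam
```

In a fusion pipeline this step would run per patch, alternating with an update of the fused image (or abundance maps), as in the alternating algorithm described above.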


Heritage ◽  
2020 ◽  
Vol 3 (4) ◽  
pp. 1046-1062
Author(s):  
Dimitris Kaimaris ◽  
Aristoteles Kandylas

For many decades, multispectral images of the Earth's surface and its objects were acquired by multispectral sensors on satellites. In recent years, technological evolution has produced similar sensors (much smaller in size and weight) that can be mounted on Unmanned Aerial Vehicles (UAVs), allowing the collection of multispectral images of higher spatial resolution. In this paper, Parrot's small Multispectral (MS) camera Sequoia+ is used, and its images are evaluated at two archaeological sites: the Byzantine wall (ground application) of the city of Thessaloniki (Greece) and a mosaic floor (aerial application) at the archaeological site of Dion (Greece). The camera captures RGB and MS images simultaneously, which means that image fusion cannot be performed as in the standard procedure applied to the Panchromatic (PAN) and MS images of satellite passive systems. In this direction, that is, adapting the fusion processes of satellite PAN and MS images, this paper demonstrates that with proper digital processing the images (RGB and MS) of small MS cameras can yield a fused image of high spatial resolution that retains a large percentage of the spectral information of the original MS image. The high spectral fidelity of the fused images makes it possible to perform high-precision digital measurements at archaeological sites, such as accurate digital separation of objects, area measurements, and retrieval of information not readily visible to common RGB sensors, using the MS and RGB data of small MS cameras.
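A common way to transfer the spatial detail of a higher-resolution band to resampled MS bands, in the spirit of the satellite PAN/MS fusion the paper adapts, is the Brovey (band-ratio) transform. The sketch below is a generic illustration under that assumption, not the paper's specific processing chain; `brovey_fuse` is a hypothetical helper:

```python
import numpy as np

def brovey_fuse(ms, pan):
    """Brovey-style fusion: scale each MS band by the ratio of the
    high-resolution intensity to the mean of the MS bands.

    ms:  (H, W, B) MS image already resampled to the high-resolution grid
    pan: (H, W)    high-resolution intensity (e.g. luminance of the RGB image)
    """
    intensity = ms.mean(axis=2, keepdims=True)          # low-res intensity proxy
    ratio = pan[..., None] / np.maximum(intensity, 1e-6)  # avoid divide-by-zero
    return ms * ratio                                    # spatial detail injected

# tiny synthetic check: uniform MS bands of 2.0, uniform "PAN" of 4.0
ms = np.full((2, 2, 3), 2.0)
pan = np.full((2, 2), 4.0)
fused = brovey_fuse(ms, pan)
# every fused value is 4.0; band-to-band ratios of the MS input are preserved
```

Because each band is multiplied by the same ratio, the relative spectral shape of each pixel is preserved, which is the property behind the "spectral fidelity" assessed in the paper.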


Author(s):  
M. B. Devi ◽  
R. Devanathan

Abstract. Remote sensing satellites provide complementary images of different resolutions, which need to be integrated using image fusion techniques. In this paper, image fusion using IKONOS satellite data is discussed. Unlike other approaches, which are based on a sensor model, ours is data-centric, including the effects of the sensor as well as the reflectance characteristics of the imaged object. A linear relationship is built between the panchromatic channel and the multispectral channel data. We then formulate a minimisation problem in terms of a Lagrange multiplier to maximise spectral consistency and minimise the error in variance. The variances of the downsampled multispectral channels are observed and compared with those of the original multispectral data. A chi-square goodness-of-fit test is performed to evaluate the data computed by our algorithm. Simulation results are presented using the IKONOS 1 m resolution panchromatic and 4 m resolution multispectral data.
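The assumed linear relationship between the panchromatic and multispectral channels can be estimated by ordinary least squares. The sketch below illustrates only that fitting step on synthetic data (the paper's Lagrange-multiplier optimisation and chi-square evaluation are not reproduced); `fit_pan_ms_relation` and all numbers are illustrative assumptions:

```python
import numpy as np

def fit_pan_ms_relation(pan, ms):
    """Least-squares fit of the linear relation pan ~ sum_b w_b * ms_b + c.

    pan: (N,) panchromatic samples; ms: (N, B) co-registered MS samples.
    Returns per-band weights and the intercept.
    """
    A = np.hstack([ms, np.ones((ms.shape[0], 1))])   # append intercept column
    coef, *_ = np.linalg.lstsq(A, pan, rcond=None)
    return coef[:-1], coef[-1]                       # band weights, offset

# synthetic check: PAN built from known band weights and offset
rng = np.random.default_rng(0)
ms = rng.random((200, 3))
pan = ms @ np.array([0.3, 0.5, 0.2]) + 0.1
w, c = fit_pan_ms_relation(pan, ms)
# w recovers [0.3, 0.5, 0.2] and c recovers 0.1 on this noise-free data
```

With real imagery the fit is only approximate, which is where a goodness-of-fit test such as chi-square becomes meaningful.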

