Endmember Variability in Hyperspectral Analysis: Addressing Spectral Variability During Spectral Unmixing

2014 ◽  
Vol 31 (1) ◽  
pp. 95-104 ◽  
Author(s):  
Alina Zare ◽  
K.C. Ho
2019 ◽  
Vol 11 (9) ◽  
pp. 1045 ◽  
Author(s):  
Yang Shao ◽  
Jinhui Lan

Owing to the limited spatial resolution of hyperspectral imaging sensors, mixed pixels are inevitable in hyperspectral images. Hyperspectral unmixing, which recovers the endmembers and their corresponding fractions in mixed pixels, has therefore become an active research topic in remote sensing. Endmember spectral variability (ESV), which is common in hyperspectral images, degrades spectral unmixing accuracy. This paper proposes a spectral unmixing method based on the maximum margin criterion and derivative weights (MDWSU) to reduce the effect of ESV on spectral unmixing. First, the MDWSU model employs a fast and effective algorithm to build the endmember spectral library. A spectral weighting matrix is then constructed from this library using the maximum margin criterion. In addition, derivative analysis and local neighborhood weights are combined into local neighborhood derivative weights, which serve as a regularization term on the abundance vectors. Both the local neighborhood derivative weights and the spectral weighting matrix are shown to reduce the effect of ESV. Experiments on real hyperspectral data show that the MDWSU model yields more accurate endmember and abundance estimates. The experimental results, measured by spectral angle distance and root mean square error, confirm the superiority of the MDWSU model over previous methods.
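To make the general idea concrete, the following is a minimal sketch (not the authors' MDWSU code) of library-based unmixing with a per-band spectral weighting and a regularizer that pulls a pixel's abundances toward a neighborhood estimate. The weighting and regularization terms here are illustrative placeholders standing in for the maximum-margin weighting matrix and the local neighborhood derivative weights described in the abstract.

```python
# Sketch only: weighted library-based unmixing with a spatial regularizer.
import numpy as np
from scipy.optimize import nnls

def weighted_unmix(y, E, band_weights, x_neighborhood, lam=0.1):
    """Solve min_x ||W^(1/2)(y - E x)||^2 + lam ||x - x_neighborhood||^2, x >= 0.

    y : (B,) pixel spectrum; E : (B, M) endmember spectral library;
    band_weights : (B,) per-band weights (placeholder for the maximum-margin
    weighting matrix); x_neighborhood : (M,) abundance estimate from the local
    neighborhood (placeholder for the derivative-weight regularization).
    """
    w = np.sqrt(band_weights)
    # Augmented least-squares system encodes both the data term and the penalty.
    A = np.vstack([w[:, None] * E, np.sqrt(lam) * np.eye(E.shape[1])])
    b = np.concatenate([w * y, np.sqrt(lam) * x_neighborhood])
    x, _ = nnls(A, b)
    return x / max(x.sum(), 1e-12)   # sum-to-one normalization for interpretability

# Toy usage with synthetic data.
B, M = 50, 4
rng = np.random.default_rng(0)
E = rng.random((B, M))
y = E @ np.array([0.5, 0.3, 0.2, 0.0]) + 0.01 * rng.standard_normal(B)
print(weighted_unmix(y, E, np.ones(B), np.full(M, 0.25)))
```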


2020 ◽  
Vol 12 (14) ◽  
pp. 2326 ◽  
Author(s):  
Tatsumi Uezato ◽  
Mathieu Fauvel ◽  
Nicolas Dobigeon

Accounting for endmember variability is a challenging issue when unmixing hyperspectral data. This paper models the variability associated with each endmember as a conical hull defined by extremal pixels from the data set. These extremal pixels are regarded as prototypal endmember spectra with a meaningful physical interpretation. Capitalizing on this data-driven modeling, the pixels of the hyperspectral image are then described as combinations of these prototypal endmember spectra weighted by bundling coefficients and spatial abundances. The proposed unmixing model not only extracts and clusters the prototypal endmember spectra, but also estimates the abundances of each endmember. The performance of the approach is illustrated through experiments on simulated and real hyperspectral data, where it outperforms state-of-the-art methods.
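The sketch below shows the generic bundle idea underlying this family of methods (it is not the authors' exact model): each endmember class is represented by several extremal spectra drawn from the image, a pixel is unmixed against all of them with nonnegative coefficients, and the class abundance is the sum of the coefficients belonging to that class. Function and variable names are illustrative.

```python
# Sketch of generic bundle-based unmixing with nonnegative least squares.
import numpy as np
from scipy.optimize import nnls

def bundle_unmix(y, bundle, labels):
    """y : (B,) pixel spectrum.
    bundle : (B, K) extremal (prototypal) spectra taken from the image itself.
    labels : (K,) integer class label of each bundle column.
    Returns per-class abundances, normalized to sum to one."""
    coef, _ = nnls(bundle, y)                      # nonnegative bundling coefficients
    classes = np.unique(labels)
    abund = np.array([coef[labels == c].sum() for c in classes])
    total = abund.sum()
    return abund / total if total > 0 else abund

# Toy usage: 3 classes, 2 extremal spectra per class.
rng = np.random.default_rng(1)
bundle = rng.random((60, 6))
labels = np.array([0, 0, 1, 1, 2, 2])
y = 0.6 * bundle[:, 0] + 0.4 * bundle[:, 3]
print(bundle_unmix(y, bundle, labels))             # roughly [0.6, 0.4, 0.0]
```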


2016 ◽  
Vol 54 (5) ◽  
pp. 2812-2831 ◽  
Author(s):  
Tatsumi Uezato ◽  
Richard J. Murphy ◽  
Arman Melkumyan ◽  
Anna Chlingaryan

TecnoLógicas ◽  
2019 ◽  
Vol 22 (45) ◽  
pp. 129-143
Author(s):  
Hector Vargas ◽  
Ariolfo Camacho Velasco ◽  
Henry Arguello

Oil palm plantations typically span large areas; therefore, remote sensing has become a useful tool for advanced oil palm monitoring. This work reviews and evaluates two approaches to analyzing oil palm plantations with hyperspectral remote sensing data: linear spectral unmixing and spectral variability. Moreover, a computational framework based on spectral unmixing for estimating the fractional abundances of oil palm plantations is proposed. This approach also accounts for the spectral variability of hyperspectral image signatures. More specifically, the proposed computational framework modifies the linear mixing model by introducing a weighting vector, so that the spectral bands contributing least to erroneous fractional-abundance estimates can be identified. This improves palm detection because it allows oil palms to be differentiated from other materials in terms of fractional abundances. Experimental results on hyperspectral remote sensing data in the 410-990 nm range show an improvement of 8.18% in user accuracy (Uacc) for oil palm identification with the proposed framework over traditional unmixing methods, reaching a Uacc of 95%. This confirms the capabilities of the proposed computational framework and facilitates the management and monitoring of large areas of oil palm plantations.
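As a rough illustration of a band-weighting vector in the linear mixing model, the sketch below down-weights bands with large average reconstruction error from an initial unmixing pass and then re-unmixes with the weighted model. The inverse-residual heuristic is an assumption made here for demonstration, not the weighting scheme proposed in the paper.

```python
# Sketch only: two-pass unmixing with a per-band weighting vector.
import numpy as np
from scipy.optimize import nnls

def unmix_weighted(Y, E, eps=1e-6):
    """Y : (B, N) pixel spectra; E : (B, M) endmember signatures.
    Returns (M, N) abundances and the (B,) band-weight vector used."""
    B, N = Y.shape
    # Pass 1: unweighted nonnegative unmixing of every pixel.
    X0 = np.column_stack([nnls(E, Y[:, n])[0] for n in range(N)])
    # Band weights: penalize bands with large average reconstruction error.
    resid = np.abs(Y - E @ X0).mean(axis=1)
    w = 1.0 / (resid + eps)
    w /= w.max()
    # Pass 2: re-unmix with the weighted model  min ||diag(sqrt(w))(y - E x)||^2.
    sw = np.sqrt(w)
    X = np.column_stack([nnls(sw[:, None] * E, sw * Y[:, n])[0] for n in range(N)])
    return X, w

# Toy usage with synthetic spectra.
rng = np.random.default_rng(2)
E = rng.random((40, 3))
A_true = rng.dirichlet(np.ones(3), size=10).T          # (3, 10) abundances
Y = E @ A_true + 0.01 * rng.standard_normal((40, 10))
X, w = unmix_weighted(Y, E)
```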


2018 ◽  
Vol 10 (9) ◽  
pp. 1388 ◽  
Author(s):  
Jianhang Ma ◽  
Wenjuan Zhang ◽  
Andrea Marinoni ◽  
Lianru Gao ◽  
Bing Zhang

The trade-off between spatial and temporal resolution limits the acquisition of dense time series of Landsat images, and hence the ability to properly monitor land surface dynamics over time. Spatiotemporal image fusion methods provide a cost-efficient alternative for generating dense time series of Landsat-like images for applications that require both high spatial and high temporal resolution. The Spatial and Temporal Reflectance Unmixing Model (STRUM) is a spatial-unmixing-based spatiotemporal image fusion method; however, the temporal change image it derives lacks spectral variability and spatial detail. This study proposes an improved STRUM (ISTRUM) architecture that tackles this problem by taking the spatial heterogeneity of the land surface into consideration and integrating spectral mixture analysis of Landsat images. Sensor differences and applicability with multiple Landsat and coarse-resolution image pairs (L-C pairs) are also considered in ISTRUM. Experimental results indicate that the image derived by ISTRUM contains more spectral variability and spatial detail than the one derived by STRUM, and that the accuracy of the fused Landsat-like image is improved. Endmember variability and sliding-window size are factors that influence the accuracy of ISTRUM; both were assessed by setting them to different values. Results indicate that ISTRUM is robust to endmember variability and that publicly published endmembers (Global SVD) for Landsat images can be applied; only the sliding-window size has a strong influence on accuracy. In addition, ISTRUM was compared with the Spatial Temporal Data Fusion Approach (STDFA), the Enhanced Spatial and Temporal Adaptive Reflectance Fusion Model (ESTARFM), the Hybrid Color Mapping (HCM) and the Flexible Spatiotemporal DAta Fusion (FSDAF) methods. ISTRUM is superior to STDFA, slightly superior to HCM when the temporal change is significant, comparable with ESTARFM and slightly inferior to FSDAF. However, the computational efficiency of ISTRUM is much higher than that of ESTARFM and FSDAF. ISTRUM can be applied to synthesize Landsat-like images on a global scale.
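For readers unfamiliar with spatial-unmixing-based fusion, the sketch below shows the core step such methods share: per-class temporal change is solved from the coarse-resolution change image using class fractions, then added back to the fine-resolution image at the earlier date. It is a simplified sketch under assumed inputs (window handling, sensor-difference correction and ISTRUM's spectral mixture analysis are omitted), and the function names are illustrative.

```python
# Sketch of the spatial-unmixing step in STRUM-like spatiotemporal fusion.
import numpy as np

def unmix_temporal_change(delta_coarse, fractions):
    """delta_coarse : (P,) coarse-pixel change (t2 - t1) for one band/window.
    fractions : (P, C) fraction of each of C classes inside each coarse pixel.
    Returns (C,) temporal change attributed to each class (least squares)."""
    delta_class, *_ = np.linalg.lstsq(fractions, delta_coarse, rcond=None)
    return delta_class

def predict_fine(fine_t1, class_map, delta_class):
    """Add the class-level temporal change to every fine pixel of that class."""
    return fine_t1 + delta_class[class_map]

# Toy usage: 2 classes, 4 coarse pixels.
fractions = np.array([[0.8, 0.2], [0.5, 0.5], [0.3, 0.7], [0.1, 0.9]])
true_change = np.array([0.05, -0.02])
delta_coarse = fractions @ true_change
print(unmix_temporal_change(delta_coarse, fractions))   # recovers ~[0.05, -0.02]
```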

