FUSION OF HYPERSPECTRAL AND PANCHROMATIC DATA BY SPECTRAL UNMIXING IN THE REFLECTIVE DOMAIN

Author(s): Y. Constans, S. Fabre, M. Seymour, V. Crombez, X. Briottet, et al.

Abstract. Earth observation at the local scale implies working on images with both high spatial and spectral resolutions. As these two resolutions cannot be provided simultaneously by current sensors, hyperspectral pansharpening methods combine images jointly acquired by two different sensors, a panchromatic one providing high spatial resolution and a hyperspectral one providing high spectral resolution, to generate an image with both. The main limitation of the fusion process lies in the presence of mixed pixels, which particularly affect urban scenes and where large fusion errors may occur. Recently, the Spatially Organized Spectral Unmixing (SOSU) method was developed to overcome this limitation, delivering good results on agricultural and peri-urban landscapes, which contain a limited number of mixed pixels. This article presents a new version of SOSU adapted to urban landscapes. It is validated on a Toulouse (France) urban dataset at a 1.6 m spatial resolution acquired by the HySpex instrument during the 2012 UMBRA campaign. A performance assessment is established following Wald’s protocol and using complementary quality criteria. Visual and numerical (at the global and local scales) analyses of this performance are also proposed. Notably, in the VNIR domain, around 51% of the mixed pixels are better processed by the presented version of SOSU than by the reference method. This ratio improves in shadowed areas, reaching 52% in the reflective domain and 57% in the VNIR domain.
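Wald’s protocol, mentioned above, can be illustrated with a minimal consistency check: the fused product is spatially degraded back to the hyperspectral resolution and compared band by band with the original hyperspectral image. The function name and the block-averaging degradation below are illustrative assumptions, not the authors’ implementation:

```python
import numpy as np

def walds_consistency_rmse(fused, original_hs, ratio):
    """Spatially degrade the fused product back to the hyperspectral
    resolution (simple block averaging) and compare it band by band
    with the original hyperspectral image, as in Wald's protocol."""
    h, w, b = fused.shape
    degraded = fused.reshape(h // ratio, ratio, w // ratio, ratio, b).mean(axis=(1, 3))
    return float(np.sqrt(np.mean((degraded - original_hs) ** 2)))

# Toy check: a fused image that is an exact upsampling of the HS image
# yields zero consistency error.
hs = np.random.rand(4, 4, 5)
fused = np.repeat(np.repeat(hs, 4, axis=0), 4, axis=1)
print(walds_consistency_rmse(fused, hs, ratio=4))  # 0.0
```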

2020, Vol. 12 (6), pp. 1009
Author(s): Xiaoxiao Feng, Luxiao He, Qimin Cheng, Xiaoyi Long, Yuxin Yuan

Hyperspectral (HS) images usually have high spectral resolution and low spatial resolution (LSR), whereas multispectral (MS) images have high spatial resolution (HSR) and low spectral resolution. HS–MS image fusion technology can combine both advantages, which is beneficial for accurate feature classification. Nevertheless, in real cases heterogeneous sensors often introduce temporal differences between the LSR-HS and HSR-MS images, so classical fusion methods cannot produce effective results. To address this problem, we present a fusion method based on spectral unmixing and an image mask. Considering the difference between the two images, we first extracted the endmembers and their corresponding positions from the invariant regions of the LSR-HS image. We then obtained the endmembers of the HSR-MS image based on the observation that HSR-MS and LSR-HS images are, respectively, the spectral and spatial degradations of an HSR-HS image. The fused image is obtained from the two resulting matrices. A series of experiments on simulated and real datasets substantiated the effectiveness of our method, both quantitatively and visually.
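The unmixing-based fusion described above can be sketched, under simplifying assumptions, as the product of an abundance matrix (derived from the HSR-MS image) and an endmember matrix (derived from the LSR-HS image). All names, shapes, and the toy data below are hypothetical stand-ins for the actual extraction steps:

```python
import numpy as np

# Toy illustration (not the authors' code): endmember spectra E come from
# the low-spatial-resolution HS image, per-pixel abundances A from the
# high-spatial-resolution MS image; their product reconstructs a product
# with both high spatial and high spectral resolution.
n_pixels, n_endmembers, n_bands = 6, 3, 10

rng = np.random.default_rng(0)
E = rng.random((n_endmembers, n_bands))             # endmember spectra (from LSR-HS)
A = rng.dirichlet(np.ones(n_endmembers), n_pixels)  # abundances (from HSR-MS), rows sum to 1

fused = A @ E   # (n_pixels, n_bands): the "two resulting matrices" combined
print(fused.shape)  # (6, 10)
```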


2016
Author(s): G. C. Hulley, R. M. Duren, F. M. Hopkins, S. J. Hook, N. Vance, et al.

Abstract. Currently, large uncertainties are associated with the attribution and quantification of fugitive emissions of criteria pollutants and greenhouse gases such as methane across large regions and key economic sectors. In this study, data from the airborne Hyperspectral Thermal Emission Spectrometer (HyTES) have been used to develop robust and reliable techniques for the detection and wide-area mapping of emission plumes of methane and other atmospheric trace gas species over challenging and diverse environmental conditions, with a spatial resolution high enough to permit direct attribution to sources. HyTES is a pushbroom imaging spectrometer with high spectral resolution (256 bands from 7.5 to 12 µm), wide swath (1–2 km), and high spatial resolution (~2 m at 1 km altitude) that incorporates new thermal infrared (TIR) remote sensing technologies. In this study we introduce a hybrid Clutter Matched Filter (CMF) and plume-dilation algorithm applied to HyTES observations to efficiently detect and characterize the spatial structures of individual plumes from CH4, H2S, NH3, NO2, and SO2 emitters. The sensitivity and field of regard of HyTES allow rapid and frequent airborne surveys of large areas, including facilities not readily accessible from the surface. The HyTES CMF algorithm produces plume intensity images of methane and other gases from strong emission sources. The combination of high spatial resolution and multi-species imaging capability provides source attribution in complex environments. CMF-based detection of strong emission sources over large areas is a fast and powerful tool needed to focus more computationally intensive retrieval algorithms that quantify emissions with error estimates, and is useful for expediting mitigation efforts and addressing critical science questions.
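A plain (non-hybrid) clutter matched filter of the kind the CMF builds on can be sketched as follows: whiten each mean-removed pixel spectrum by the background (clutter) covariance and project it onto the target gas signature. This is a generic sketch, not the HyTES algorithm; the regularization term and the toy data are assumptions:

```python
import numpy as np

def clutter_matched_filter(cube, target_sig):
    """Matched-filter score per pixel: whiten by the background (clutter)
    covariance and project onto the target signature.
    cube: (n_pixels, n_bands); target_sig: (n_bands,)."""
    mu = cube.mean(axis=0)
    X = cube - mu
    C = np.cov(X, rowvar=False) + 1e-6 * np.eye(cube.shape[1])  # regularized
    Cinv_t = np.linalg.solve(C, target_sig)
    return X @ Cinv_t / np.sqrt(target_sig @ Cinv_t)

rng = np.random.default_rng(1)
scene = rng.normal(size=(500, 8))   # white background clutter
t = np.ones(8)                      # hypothetical gas signature
scene[0] += 5 * t                   # plant a strong "plume" pixel
scores = clutter_matched_filter(scene, t)
print(int(scores.argmax()))  # 0: the planted pixel scores highest
```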


Forests, 2021, Vol. 12 (9), pp. 1290
Author(s): Benjamin T. Fraser, Russell G. Congalton

Remotely sensed imagery has been used to support forest ecology and management for decades. In modern times, the proliferation of high-spatial-resolution image analysis techniques and automated workflows has further strengthened this synergy, leading to inquiry into more complex, local-scale ecosystem characteristics. To appropriately inform decisions in forest ecology and management, the most reliable and efficient methods should be adopted. For this reason, our research compares visual interpretation to digital (automated) processing for forest plot composition and individual tree identification. During this investigation, we qualitatively and quantitatively evaluated the process of classifying species groups within complex, mixed-species forests in New England. This analysis included a comparison of three high-resolution remotely sensed imagery sources: Google Earth, National Agriculture Imagery Program (NAIP) imagery, and unmanned aerial system (UAS) imagery. We discovered that, although the level of detail afforded by the UAS imagery’s spatial resolution (3.02 cm average pixel size) improved the visual interpretation results (by 7.87–9.59%), the highest thematic accuracy was still only 54.44% for the generalized composition groups. Our qualitative analysis of the uncertainty in visually interpreting different composition classes revealed the persistence of mislabeled hardwood compositions (including an early successional class) and an inability to consistently differentiate between ‘pure’ and ‘mixed’ stands. Digitally classifying the same forest compositions produced higher accuracy for both detecting individual trees (93.9%) and labeling them (59.62–70.48%), using machine learning algorithms including classification and regression trees, random forest, and support vector machines.
These results indicate that digital, automated classification produced an increase in overall accuracy of 16.04% over visual interpretation for generalized forest composition classes. Other studies, which incorporate multitemporal, multispectral, or data fusion approaches, provide evidence for further widening this gap. Further refinement of methods for individual tree detection, delineation, and classification should be pursued for structurally and compositionally complex forests to address the critical deficiency in local-scale forest information around the world.
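The thematic accuracies reported above follow standard error-matrix accuracy assessment. A minimal sketch of computing overall accuracy from reference and predicted class labels (the toy labels and function name are hypothetical):

```python
import numpy as np

def overall_accuracy(reference, predicted, n_classes):
    """Overall thematic accuracy from an error (confusion) matrix:
    correctly labeled samples (the diagonal) over all samples."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for r, p in zip(reference, predicted):
        cm[r, p] += 1
    return cm.trace() / cm.sum()

# Toy labels for three composition classes; 6 of 8 agree.
ref  = [0, 0, 1, 1, 2, 2, 2, 1]
pred = [0, 1, 1, 1, 2, 2, 0, 1]
print(overall_accuracy(ref, pred, 3))  # 0.75
```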


2019, Vol. 11 (3), pp. 367
Author(s): Florent Taureau, Marc Robin, Christophe Proisy, François Fromard, Daniel Imbert, et al.

Despite the low tree diversity and scarcity of understory vegetation, the high morphological plasticity of mangrove trees induces, at the stand level, a very large variability of forest structures that needs to be mapped to assess the functioning of such complex ecosystems. Fully constrained linear spectral unmixing (FCLSU) of very high spatial resolution (VHSR) multispectral images was tested to map mangrove zonations at fine scale in terms of horizontal variation of forest structure. The study was carried out on three Pleiades-1A satellite images covering French island territories located in the Atlantic, Indian, and Pacific Oceans, namely the Guadeloupe, Mayotte, and New Caledonia archipelagos. In each image, FCLSU was trained from the delineation of areas exclusively related to one of four components: pure vegetation, soil (ferns included), water, or shadows. It was then applied to the whole imaged mangrove cover of each island and yielded the respective contributions of those four components for each image pixel. At the forest stand scale, the results indicated a close correlation between FCLSU-derived vegetation fractions and canopy closure estimated from hemispherical photographs (R² = 0.95), and a weak relation with the Normalized Difference Vegetation Index (R² = 0.29). Classification of these fractions also offered the opportunity to detect and map horizontal patterns of mangrove structure in a given site. K-means classifications of the fractions indeed showed a global view of mangrove structure organization in the three sites, complementary to the outputs obtained from spectral data analysis. Our findings suggest that pixel intensity decomposition applied to VHSR multispectral satellite images can be a simple but valuable approach for (i) mangrove canopy monitoring and (ii) mangrove forest structure analysis in the perspective of assessing mangrove dynamics and productivity. As with Lidar-based surveys, these potential new mapping capabilities deserve further physically based interpretation of sunlight scattering mechanisms within the forest canopy.
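FCLSU solves a least-squares unmixing problem under the two abundance constraints (non-negativity and sum-to-one). A minimal sketch using projected gradient descent with a simplex projection follows; this is one of several possible solvers, not necessarily the implementation used in the study:

```python
import numpy as np

def project_simplex(v):
    """Euclidean projection onto the probability simplex
    (non-negative, sum to one) -- the two FCLSU constraints."""
    u = np.sort(v)[::-1]
    css = np.cumsum(u)
    rho = np.nonzero(u + (1 - css) / (np.arange(len(v)) + 1) > 0)[0][-1]
    theta = (1 - css[rho]) / (rho + 1)
    return np.maximum(v + theta, 0)

def fclsu(E, x, n_iter=2000):
    """Fully constrained unmixing of pixel spectrum x against the
    endmember matrix E (n_bands x n_endmembers) by projected gradient."""
    a = np.full(E.shape[1], 1 / E.shape[1])
    step = 1 / np.linalg.norm(E.T @ E, 2)
    for _ in range(n_iter):
        a = project_simplex(a - step * (E.T @ (E @ a - x)))
    return a

# Toy check: a pixel that is a 70/30 mix of two endmembers.
E = np.array([[1.0, 0.0], [0.0, 1.0], [0.5, 0.2]])
x = E @ np.array([0.7, 0.3])
print(np.round(fclsu(E, x), 3))  # [0.7 0.3]
```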


Author(s): Dr. Vani. K, Anto. A. Micheal

This paper is an attempt to combine a high-resolution panchromatic lunar image with a low-resolution multispectral lunar image to produce a composite image using a wavelet approach. There are many sensors that provide image data about the lunar surface. The spatial resolution and spectral resolution are unique to each sensor, resulting in limitations in the extraction of information about the lunar surface. The high-resolution panchromatic lunar image has high spatial resolution but low spectral resolution; the low-resolution multispectral image has low spatial resolution but high spectral resolution. Extracting features such as craters, crater morphology, rilles, and regolith surfaces from a low-spatial-resolution multispectral image may not yield satisfactory results. An image with high spatial resolution can provide better information when fused with one of high spectral resolution. The fused images enable enhanced crater mapping and mineral mapping of the lunar surface. Since wavelet-based fusion preserves the spectral content needed for mineral mapping, the fusion has been performed using a wavelet approach.
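A common wavelet substitution scheme of the kind described above replaces the wavelet approximation of the panchromatic image with the (co-registered, upsampled) multispectral band while keeping the panchromatic detail subbands, so spectral content comes from the MS image and spatial detail from the PAN image. The one-level Haar sketch below is a generic illustration; the paper’s wavelet choice and decomposition depth may differ:

```python
import numpy as np

def haar2d(img):
    """One-level 2-D Haar transform: approximation + 3 detail subbands."""
    a = (img[0::2, 0::2] + img[0::2, 1::2] + img[1::2, 0::2] + img[1::2, 1::2]) / 4
    h = (img[0::2, 0::2] - img[0::2, 1::2] + img[1::2, 0::2] - img[1::2, 1::2]) / 4
    v = (img[0::2, 0::2] + img[0::2, 1::2] - img[1::2, 0::2] - img[1::2, 1::2]) / 4
    d = (img[0::2, 0::2] - img[0::2, 1::2] - img[1::2, 0::2] + img[1::2, 1::2]) / 4
    return a, h, v, d

def ihaar2d(a, h, v, d):
    """Exact inverse of haar2d."""
    out = np.empty((2 * a.shape[0], 2 * a.shape[1]))
    out[0::2, 0::2] = a + h + v + d
    out[0::2, 1::2] = a - h + v - d
    out[1::2, 0::2] = a + h - v - d
    out[1::2, 1::2] = a - h - v + d
    return out

def wavelet_fuse(pan, ms_band):
    """Substitute the PAN approximation with the co-registered, upsampled
    MS band, keeping the PAN detail subbands."""
    _, h, v, d = haar2d(pan)
    a_ms, _, _, _ = haar2d(ms_band)
    return ihaar2d(a_ms, h, v, d)

# Sanity check: fusing an image with itself reproduces it exactly.
pan = np.arange(16.0).reshape(4, 4)
print(np.allclose(wavelet_fuse(pan, pan), pan))  # True
```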


Sensors, 2019, Vol. 19 (7), pp. 1667
Author(s): Dong Zhang, Liyin Yuan, Shengwei Wang, Hongxuan Yu, Changxing Zhang, et al.

The Wide Swath and High Resolution Airborne Pushbroom Hyperspectral Imager (WiSHiRaPHI) is China’s new-generation airborne hyperspectral imaging instrument, aimed at acquiring accurate spectral curves of ground targets with both high spatial resolution and high spectral resolution. The spectral sampling interval of WiSHiRaPHI is 2.4 nm and the spectral resolution is 3.5 nm (FWHM), integrating 256 channels covering 400 nm to 1000 nm. The instrument has a 40-degree field of view (FOV) and a 0.125 mrad instantaneous field of view (IFOV), and can work in high-spectral-resolution, high-spatial-resolution, and high-sensitivity modes for different applications, adapting to velocity-to-height ratios (VHR) lower than 0.04. System integration has been completed, and several airborne flight validation experiments have been conducted. The results showed the system’s excellent performance and high efficiency.
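The quoted figures can be cross-checked with back-of-the-envelope arithmetic; the 3 km flight altitude below is an assumed example, not a stated specification:

```python
# Illustrative arithmetic on the stated WiSHiRaPHI figures
# (not the calibration procedure).
n_channels = 256
lo, hi = 400.0, 1000.0                 # spectral range, nm
sampling = (hi - lo) / (n_channels - 1)  # band centers assumed at the range endpoints
print(round(sampling, 2))              # 2.35 nm, consistent with the quoted ~2.4 nm

ifov = 0.125e-3                        # instantaneous field of view, rad
altitude = 3000.0                      # m, assumed flight height
gsd = ifov * altitude                  # ground sampling distance
print(gsd)                             # 0.375 m per pixel at 3 km
```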

