Hyperspectral and Multispectral Image Fusion Using Cluster-Based Multi-Branch BP Neural Networks

2019 ◽  
Vol 11 (10) ◽  
pp. 1173 ◽  
Author(s):  
Xiaolin Han ◽  
Jing Yu ◽  
Jiqiang Luo ◽  
Weidong Sun

Fusion of a high-spatial-resolution hyperspectral (HHS) image from a low-spatial-resolution hyperspectral (LHS) and a high-spatial-resolution multispectral (HMS) image is usually formulated as spatial super-resolution of the LHS image with the help of the HMS image, which may result in the loss of detailed structural information. To address this problem, the fusion of the HMS and LHS images is instead formulated as a nonlinear spectral mapping from the HMS to the HHS image with the help of the LHS image, and a novel cluster-based fusion method using multi-branch BP neural networks (named CF-BPNNs) is proposed to ensure a more reasonable spectral mapping for each cluster. In the training stage, exploiting the intrinsic characteristic that spectra are more similar within a cluster than between clusters, as are the corresponding spectral mappings, unsupervised clustering is used to divide the spectra of the down-sampled HMS image (denoted LMS) into several clusters according to spectral correlation. Then, spectrum pairs from the clustered LMS image and the corresponding LHS image are used to train multi-branch BP neural networks (BPNNs), establishing a nonlinear spectral mapping for each cluster. In the fusion stage, supervised clustering groups the spectra of the HMS image into the clusters determined during training, and the final HHS image is reconstructed from the clustered HMS image using the trained multi-branch BPNNs accordingly. Comparisons with related state-of-the-art methods demonstrate that the proposed method achieves better fusion quality in both the spatial and spectral domains.
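A minimal sketch of this train-then-fuse pipeline, assuming scikit-learn stand-ins: KMeans for the unsupervised clustering and one MLPRegressor per BP-network branch. The variable names and shapes (lms, lhs, hms) and the network sizes are illustrative, not the authors' implementation.

```python
# Train-then-fuse sketch of CF-BPNN with scikit-learn stand-ins: KMeans for
# the unsupervised clustering, one MLPRegressor per BP-network branch.
# Variable names and shapes (lms, lhs, hms) are illustrative.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

def train_branches(lms, lhs, n_clusters=8):
    """lms: (N, B_ms) clustered LMS spectra; lhs: (N, B_hs) matching LHS spectra."""
    km = KMeans(n_clusters=n_clusters, n_init=10).fit(lms)
    branches = {}
    for c in range(n_clusters):
        mask = km.labels_ == c
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=500)
        net.fit(lms[mask], lhs[mask])        # MS -> HS mapping for this cluster
        branches[c] = net
    return km, branches

def fuse(hms, km, branches):
    """hms: (M, B_ms) HMS spectra -> (M, B_hs) reconstructed HHS spectra."""
    labels = km.predict(hms)                 # group HMS spectra into trained clusters
    out = np.empty((hms.shape[0], branches[0].n_outputs_))
    for c, net in branches.items():
        idx = labels == c
        if idx.any():
            out[idx] = net.predict(hms[idx])
    return out
```

The per-cluster split is the key design choice: each branch only has to learn the MS-to-HS mapping for spectrally similar pixels, which is an easier regression problem than one global mapping.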

2021 ◽  
Vol 13 (10) ◽  
pp. 1944 ◽
Author(s):  
Xiaoming Liu ◽  
Menghua Wang

The Visible Infrared Imaging Radiometer Suite (VIIRS) onboard the Suomi National Polar-orbiting Partnership (SNPP) satellite has been a reliable source of ocean color data products, including normalized water-leaving radiance spectra nLw(λ) at five moderate-resolution (M) bands and one imagery (I) band. The spatial resolutions of the M-band and I-band nLw(λ) are 750 m and 375 m, respectively. Using a convolutional neural network (CNN), the M-band nLw(λ) imagery can be super-resolved from 750 m to 375 m spatial resolution by leveraging the high-spatial-resolution features of the I1-band nLw(λ) data. However, it is also important to enhance the spatial resolution of the VIIRS-derived chlorophyll-a (Chl-a) concentration and the water diffuse attenuation coefficient at 490 nm (Kd(490)), as well as other biological and biogeochemical products. In this study, we describe our effort to derive high-resolution Kd(490) and Chl-a data based on super-resolved nLw(λ) images at the five VIIRS M-bands. To improve network performance over extremely turbid coastal oceans and inland waters, the networks are retrained with a training dataset that includes ocean color data from the Bohai Sea, Baltic Sea, and La Plata River Estuary, covering water types from clear open oceans to moderately and highly turbid waters. The evaluation results show that the super-resolved Kd(490) image is much sharper than the original one and has more detailed fine spatial structures. A similar enhancement of finer structures is found in the super-resolved Chl-a images: Chl-a filaments are much sharper and thinner, and some very fine spatial features that are not visible in the original images appear in the super-resolved Chl-a imagery. The networks are also applied to four other coastal and inland water regions. The results show that super-resolution occurs mainly at pixels with Chl-a and Kd(490) features, especially at feature edges and locations with large spatial gradients. The biases between the original M-band images and the super-resolved high-resolution images are small for both Chl-a and Kd(490) in moderately to extremely turbid coastal oceans and inland waters, indicating that the super-resolution process does not change the mean values of the original images.
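A rough sketch of the two-step pipeline the abstract describes, under stated assumptions: sr_model is a placeholder for the trained per-band CNN, and the OCx-style polynomial coefficients below are illustrative only, not NOAA's operational values.

```python
# Two-step sketch: super-resolve each M-band nLw with a trained CNN
# (`sr_model` is a placeholder), then derive Chl-a from the super-resolved
# bands with a generic OCx-style blue/green band ratio. The polynomial
# coefficients are illustrative only, not operational values.
import numpy as np

def super_resolve_bands(nlw_m, sr_model):
    """nlw_m: {wavelength_nm: 2-D array at 750 m} -> {wavelength_nm: 375 m array}."""
    return {wl: sr_model(img) for wl, img in nlw_m.items()}

def chl_band_ratio(nlw, coeffs=(0.26, -2.6, 1.5, -1.0, 0.5)):  # hypothetical coefficients
    """OCx-style Chl-a from the max blue band (443 or 486 nm) over the green band (551 nm)."""
    r = np.log10(np.maximum(nlw[443], nlw[486]) / nlw[551])
    return 10.0 ** sum(a * r**i for i, a in enumerate(coeffs))
```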


2021 ◽  
Author(s):  
Xikun Wei ◽  
Guojie Wang ◽  
Donghan Feng ◽  
Zheng Duan ◽  
Daniel Fiifi Tawia Hagan ◽  
...  

Abstract. Future global temperature change would have significant effects on society and ecosystems. Earth system models (ESMs) are the primary tools for exploring future climate change; however, they still involve great uncertainty and often run at a coarse spatial resolution (most ESMs at about 2 degrees). Accurate temperature data at high spatial resolution are needed to improve our understanding of temperature variation and for many applications. We apply deep-learning (DL) super-resolution (SR) methods from computer vision to merge data from 31 ESMs; the proposed method performs data merging, bias correction and spatial downscaling simultaneously. SR algorithms are designed to enhance image quality and substantially outperform traditional methods. The CRU TS (Climatic Research Unit gridded Time Series) dataset is used as the reference in model training. To find a suitable DL method for our work, we compare five SR methodologies with different architectures using multiple evaluation metrics (mean square error (MSE), mean absolute error (MAE) and the Pearson correlation coefficient (R)); the optimal model is selected and used to merge the monthly historical data during 1850–1900 and the monthly future-scenario data (SSP1-2.6, SSP2-4.5, SSP3-7.0, SSP5-8.5) during 2015–2100 at a high spatial resolution of 0.5 degrees. Results show that the merged data perform considerably better than any individual ESM and than the ensemble mean (EM) of all ESMs, in both spatial and temporal terms. The MAE shows a clear improvement, and its spatial distribution increases along the latitudes of the Northern Hemisphere, presenting a stepped, echelon-like pattern. The merged product also performs very well where the observed time series is smooth, with few fluctuations. Additionally, this work demonstrates that the DL model can be successfully transferred to data merging, bias correction and spatial downscaling when enough training data are available. Data can be accessed at https://doi.org/10.5281/zenodo.5746632 (Wei et al., 2021).
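One plausible way to realize such a joint merge/bias-correction/downscaling network, sketched in PyTorch: the 31 coarse ESM fields enter as input channels and a sub-pixel (PixelShuffle) head maps the roughly 2-degree grid to the 0.5-degree CRU TS grid. The architecture and layer sizes are assumptions, not the configuration selected in the paper.

```python
# Sketch (PyTorch assumed) of merging 31 coarse ESM temperature fields into
# one 0.5-degree field: the models are stacked as input channels, so a single
# network learns merging, bias correction and downscaling jointly against
# CRU TS targets. Layer sizes and the scale factor are illustrative.
import torch
import torch.nn as nn

class ESMMergeSR(nn.Module):
    def __init__(self, n_models=31, scale=4):    # ~2 degrees -> 0.5 degrees
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(n_models, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, scale * scale, 3, padding=1),
            nn.PixelShuffle(scale),               # sub-pixel upsampling to the fine grid
        )

    def forward(self, x):                         # x: (batch, 31, H, W) coarse fields
        return self.body(x)                       # (batch, 1, 4H, 4W) merged field

net = ESMMergeSR()
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.MSELoss()                            # trained against CRU TS patches
```

Keeping the upsampling learnable end to end is what lets a single network absorb merging, bias correction and downscaling at once, rather than chaining three separate steps.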


Author(s):  
L. Liebel ◽  
M. Körner

In optical remote sensing, the spatial resolution of images is crucial for numerous applications. Space-borne systems are most likely to be affected by a lack of spatial resolution, owing to their natural disadvantage of a large distance between the sensor and the sensed object. Methods for single-image super-resolution are therefore desirable to exceed the limits of the sensor. Apart from assisting visual inspection of datasets, post-processing operations, e.g., segmentation or feature extraction, can benefit from detailed and distinguishable structures. In this paper, we show that recently introduced state-of-the-art approaches for single-image super-resolution of conventional photographs, which make use of deep learning techniques such as convolutional neural networks (CNNs), can successfully be applied to remote sensing data. With a huge amount of training data available, end-to-end learning is reasonably easy to apply and can achieve results unattainable with conventional handcrafted algorithms.

We trained our CNN on a specifically designed, domain-specific dataset that takes into account the special characteristics of multispectral remote sensing data. This dataset consists of publicly available SENTINEL-2 images featuring 13 spectral bands, a ground resolution of up to 10 m, and a high radiometric resolution, thus satisfying our requirements in terms of quality and quantity. In our experiments, we obtained results superior to competing approaches trained on generic image sets, which failed to reasonably scale satellite images with a high radiometric resolution, as well as to conventional interpolation methods.
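A minimal SRCNN-style sketch of the kind of end-to-end CNN applied here; the classic 9-1-5 layer layout and the 13-band input/output are assumptions about one plausible configuration, not the exact network trained on the SENTINEL-2 dataset.

```python
# SRCNN-style sketch: three convolutions (patch extraction, non-linear
# mapping, reconstruction) applied to bicubically upsampled multispectral
# input. The 13-band in/out channels match SENTINEL-2 but are an assumed
# configuration, not the authors' exact network.
import torch.nn as nn

class MultispectralSRCNN(nn.Module):
    def __init__(self, bands=13):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(bands, 64, 9, padding=4), nn.ReLU(),   # patch extraction
            nn.Conv2d(64, 32, 1), nn.ReLU(),                 # non-linear mapping
            nn.Conv2d(32, bands, 5, padding=2),              # reconstruction
        )

    def forward(self, x):        # x: low-res bands upsampled to the target size
        return self.net(x)
```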


2021 ◽  
Author(s):  
Rajagopal T K P ◽  
Sakthi G ◽  
Prakash J

Abstract Hyperspectral remote sensing image classification is a widely used method for scene analysis from high-spatial-resolution remote sensing data. Classification is a critical task in remote sensing processing. Because different materials reflect differently in particular spectral bands, traditional pixel-wise classifiers identify and classify materials on the basis of their spectral curves (pixels). Owing to the high dimensionality of high-spatial-resolution remote sensing data and the limited number of labelled samples, such imagery tends to suffer from the Hughes phenomenon, which can pose a serious problem. To overcome this small-sample problem, several learning methods, such as the Support Vector Machine (SVM) and other kernel-based methods, were recently introduced for remote sensing image classification and have shown good performance. In this work, an SVM with a Radial Basis Function (RBF) kernel is proposed, together with a feature-learning approach for hyperspectral image classification based on Convolutional Neural Networks (CNNs). Experimental results on several hyperspectral image datasets show that the proposed method achieves better classification performance than traditional methods such as the SVM with an RBF kernel and conventional deep-learning (CNN) methods.
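A minimal sketch of the SVM-RBF baseline on pixel spectra, using scikit-learn and synthetic stand-in data; the small training fraction mimics the limited-label regime behind the Hughes phenomenon.

```python
# SVM-RBF baseline sketch on pixel spectra (scikit-learn assumed).
# X and y are synthetic stand-ins: 1,000 pixels, 200 bands, 9 classes.
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
y = rng.integers(0, 9, size=1000)
X = rng.normal(size=(1000, 200)) + 0.5 * y[:, None]   # class-dependent offsets

# Train on only 20% of the labels to mimic the small-sample setting.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=0.2, stratify=y, random_state=0)
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=100, gamma="scale"))
clf.fit(X_train, y_train)
print("test accuracy:", clf.score(X_test, y_test))
```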


2015 ◽  
Vol 22 (5) ◽  
pp. 1306-1311 ◽  
Author(s):  
Nark-Eon Sung ◽  
Ik-Jae Lee ◽  
Kug-Seong Lee ◽  
Seong-Hun Jeong ◽  
Seen-Woong Kang ◽  
...  

A microprobe system has been installed on the nanoprobe/XAFS beamline (BL8C) at PLS-II, South Korea. Owing to reproducible switching of the in-vacuum undulator (IVU) gap, the intense, brilliant hard X-ray beam of the IVU can be used in X-ray fluorescence (XRF) and X-ray absorption fine-structure (XAFS) experiments. For high-spatial-resolution microprobe experiments, a Kirkpatrick–Baez mirror system is used to focus the millimeter-sized X-ray beam down to micrometer size. The performance of this system was examined by combining micro-XRF imaging and micro-XAFS of a beetle wing. These results indicate that the microprobe system at BL8C can be used to obtain the distributions of trace elements as well as chemical and structural information of complex materials.


2018 ◽  
Vol 10 (10) ◽  
pp. 1574 ◽  
Author(s):  
Dongsheng Gao ◽  
Zhentao Hu ◽  
Renzhen Ye

Due to sensor limitations, hyperspectral images (HSIs) are acquired by hyperspectral sensors with high spectral resolution but low spatial resolution, since it is difficult for sensors to acquire images with high spatial and high spectral resolution simultaneously. Hyperspectral image super-resolution aims to enhance the spatial resolution of HSIs through software techniques. In recent years, various methods have been proposed to fuse an HSI and a multispectral image (MSI) from an unmixing or a spectral-dictionary perspective. However, these methods extract the spectral information from each image individually and therefore ignore the cross-correlation between the observed HSI and MSI, making it difficult to achieve high spatial resolution while preserving the spatial-spectral consistency between the low-resolution and high-resolution HSI. In this paper, a self-dictionary regression based method is proposed to exploit the cross-correlation between the observed HSI and MSI. Both the observed low-resolution HSI and the MSI are considered simultaneously to estimate the endmember dictionary and the abundance code. To preserve spectral consistency, the endmember dictionary is extracted by performing common sparse basis selection on the concatenation of the observed HSI and MSI. A consistency constraint is then exploited to ensure spatial consistency between the abundance codes of the low-resolution and high-resolution HSI. Extensive experiments on three datasets demonstrate that the proposed method outperforms the state-of-the-art methods.
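A sketch of the fusion template the abstract outlines, with common stand-ins: NMF plays the role of the sparse basis selection that yields the shared endmember dictionary, and nonnegative least squares fits the high-resolution abundance code; srf is an assumed spectral response matrix relating hyperspectral to multispectral bands.

```python
# Fusion-template sketch: NMF stands in for the common sparse basis
# selection (shared endmember dictionary), and nonnegative least squares
# fits the high-resolution abundance code. `srf` is an assumed (B_ms, B_hs)
# spectral response matrix; reflectances are assumed nonnegative.
import numpy as np
from scipy.optimize import nnls
from sklearn.decomposition import NMF

def fuse(hsi_lr, msi_hr, srf, n_end=10):
    """hsi_lr: (B_hs, N_lr) pixels-as-columns; msi_hr: (B_ms, N_hr)."""
    nmf = NMF(n_components=n_end, init="nndsvda", max_iter=500).fit(hsi_lr.T)
    e_hs = nmf.components_.T                 # (B_hs, n_end) endmember dictionary
    e_ms = srf @ e_hs                        # dictionary projected to MS bands
    # Nonnegative abundances for every high-resolution pixel.
    a_hr = np.stack([nnls(e_ms, msi_hr[:, i])[0]
                     for i in range(msi_hr.shape[1])], axis=1)
    return e_hs @ a_hr                       # (B_hs, N_hr) fused HHS image
```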

