Efficient Lossless Compression of Multitemporal Hyperspectral Image Data

2018 ◽  
Vol 4 (12) ◽  
pp. 142 ◽  
Author(s):  
Hongda Shen ◽  
Zhuocheng Jiang ◽  
W. Pan

Hyperspectral imaging (HSI) technology has been used for various remote sensing applications due to its excellent capability of monitoring regions-of-interest over a period of time. However, the large data volume of four-dimensional multitemporal hyperspectral imagery demands massive data compression techniques. While conventional 3D hyperspectral data compression methods exploit only spatial and spectral correlations, we propose a simple yet effective predictive lossless compression algorithm that can achieve significant gains on compression efficiency, by also taking into account temporal correlations inherent in the multitemporal data. We present an information theoretic analysis to estimate potential compression performance gain with varying configurations of context vectors. Extensive simulation results demonstrate the effectiveness of the proposed algorithm. We also provide in-depth discussions on how to construct the context vectors in the prediction model for both multitemporal HSI and conventional 3D HSI data.
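The core idea of context-based prediction can be illustrated with a minimal sketch, not the authors' exact predictor: a least-squares predictor whose context vector combines a spectral neighbor (previous band) and a temporal neighbor (previous frame). All shapes and the synthetic cube here are made up for illustration; in a real codec only the residuals would be entropy-coded.

```python
import numpy as np

rng = np.random.default_rng(0)
# toy multitemporal HSI cube, shape (time, band, row, col); consecutive
# frames and adjacent bands are correlated, as in real data
base = rng.normal(size=(4, 8, 8)).cumsum(axis=0)       # spectrally smooth scene
frames = [base]
for _ in range(2):
    frames.append(frames[-1] + rng.normal(0, 0.1, base.shape))
cube = np.stack(frames)                                 # shape (3, 4, 8, 8)

t, b = 1, 2                       # predict band b of frame t
target = cube[t, b].ravel()
# context vector per pixel: bias term, co-located pixel in the previous
# band (spectral context), co-located pixel in the previous frame (temporal)
context = np.stack([np.ones_like(target),
                    cube[t, b - 1].ravel(),
                    cube[t - 1, b].ravel()], axis=1)

# least-squares predictor; only the residuals need to be entropy-coded
w, *_ = np.linalg.lstsq(context, target, rcond=None)
residual = target - context @ w
assert residual.var() < target.var()   # prediction shrinks signal energy
```

Adding the temporal component to the context vector is what distinguishes this from a conventional 3D (spatial + spectral) predictor.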

Author(s):  
Haoyi Zhou ◽  
Jun Zhou ◽  
Haichuan Yang ◽  
Cheng Yan ◽  
Xiao Bai ◽  
...  

Imaging devices are increasingly used in environmental research, creating an urgent need to deal with issues such as matching image features across different dimensions. Among these tasks, matching a hyperspectral image with other types of images is challenging due to the high-dimensional nature of hyperspectral data. This chapter addresses this problem by investigating structured support vector machines to construct and learn a graph-based model for each type of image. The graph model incorporates both low-level features and stable correspondences within images. The inherent characteristics are captured by applying a graph matching algorithm to the extracted weighted graph models. The effectiveness of this method is demonstrated through experiments on matching hyperspectral images to RGB images, and hyperspectral images of natural objects with different dimensions.


Author(s):  
B. Saichandana ◽  
K. Srinivas ◽  
R. KiranKumar

Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that can be used to classify objects with great detail and precision. This paper presents a hyperspectral image classification mechanism using a genetic algorithm, with empirical mode decomposition and image fusion used in the preprocessing stage. A 2-D empirical mode decomposition method is used to remove noisy components in each band of the hyperspectral data. After filtering, image fusion is performed on the hyperspectral bands to selectively merge the maximum possible features from the source images into a single image. This fused image is classified using a genetic algorithm. Different indices, such as the K-means Index (KMI), the Davies-Bouldin Index (DBI), and the Xie-Beni Index (XBI), are used as objective functions. This method increases the classification accuracy of hyperspectral images.
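The idea of driving a genetic algorithm with a cluster-validity index can be shown with a toy sketch. This is a generic GA over pixel-label assignments with the Davies-Bouldin index as fitness, not the paper's exact chromosome encoding or data; the two-cluster point set is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# toy "fused image" pixels: two well-separated spectral clusters
X = np.vstack([rng.normal(0, 0.3, (60, 3)), rng.normal(3, 0.3, (60, 3))])
k = 2

def dbi(labels):
    """Davies-Bouldin index (lower is better); inf if a cluster is empty."""
    if len(np.unique(labels)) < k:
        return np.inf
    c = np.array([X[labels == i].mean(0) for i in range(k)])
    s = np.array([np.linalg.norm(X[labels == i] - c[i], axis=1).mean()
                  for i in range(k)])
    R = np.array([(s[i] + s[j]) / np.linalg.norm(c[i] - c[j])
                  for i in range(k) for j in range(k) if i != j])
    return R.reshape(k, k - 1).max(axis=1).mean()

pop = rng.integers(k, size=(20, len(X)))    # chromosomes = label assignments
best_hist = []
for gen in range(30):
    fit = np.array([dbi(p) for p in pop])
    order = np.argsort(fit)
    elite = pop[order[:5]]                   # elitism keeps the best solutions
    best_hist.append(fit[order[0]])
    children = elite[rng.integers(5, size=15)].copy()
    mut = rng.random(children.shape) < 0.02  # random label mutations
    children[mut] = rng.integers(k, size=mut.sum())
    pop = np.vstack([elite, children])

assert best_hist[-1] <= best_hist[0]         # elitism: best DBI never worsens
```

Because the elite chromosomes are carried over unchanged, the best DBI value is monotonically non-increasing across generations.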


Author(s):  
R. Kiran Kumar ◽  
B. Saichandana ◽  
K. Srinivas

This paper presents genetic-algorithm-based band selection and classification on a hyperspectral image data set. Hyperspectral remote sensors collect image data for a large number of narrow, adjacent spectral bands. Every pixel in a hyperspectral image carries a continuous spectrum that can be used to classify objects with great detail and precision. First, filtering based on a 2-D empirical mode decomposition method is used to remove noisy components in each band of the hyperspectral data. After filtering, band selection is performed using a genetic algorithm in order to remove bands that convey less information. This dimensionality reduction lowers the storage space, computational load, and communication bandwidth imposed on the unsupervised classification algorithms. Next, image fusion is performed on the selected hyperspectral bands to selectively merge the maximum possible features from the selected images into a single image. This fused image is classified using a genetic algorithm. Indices such as the K-means Index (KMI) and the Jm measure are used as objective functions. This method increases the classification accuracy and performance of hyperspectral image classification compared with classification without dimensionality reduction.
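Band selection with a genetic algorithm is naturally expressed with binary chromosomes, one bit per band. The sketch below uses an invented fitness (reward band variance, penalize inter-band redundancy and subset size) on synthetic bands; the paper's actual objective functions and data are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(2)
n_bands, n_pix = 20, 500
# toy band stack: a few latent signals; bands are noisy, partly redundant copies
base = rng.normal(size=(4, n_pix))
bands = np.vstack([base[rng.integers(4)] + rng.normal(0, s, n_pix)
                   for s in np.linspace(0.05, 1.0, n_bands)])

def fitness(mask):
    # reward informative (high-variance) bands, penalize redundancy and size
    sel = np.flatnonzero(mask)
    if len(sel) < 2:
        return -np.inf
    var = bands[sel].var(axis=1).sum()
    corr = np.abs(np.corrcoef(bands[sel]))
    redundancy = (corr.sum() - len(sel)) / (len(sel) * (len(sel) - 1))
    return var - 5.0 * redundancy - 0.2 * len(sel)

pop = rng.random((30, n_bands)) < 0.5       # binary chromosomes: band masks
hist = []
for gen in range(40):
    fit = np.array([fitness(m) for m in pop])
    order = np.argsort(fit)[::-1]
    elite = pop[order[:10]]                  # elitism
    hist.append(fit[order[0]])
    children = []
    for _ in range(20):
        a, b = elite[rng.integers(10, size=2)]
        cut = rng.integers(1, n_bands)       # one-point crossover
        child = np.concatenate([a[:cut], b[cut:]])
        children.append(child ^ (rng.random(n_bands) < 0.05))  # bit flips
    pop = np.vstack([elite, np.array(children)])

best = pop[np.argmax([fitness(m) for m in pop])]
assert hist[-1] >= hist[0]                   # elitism: fitness never worsens
```

The selected subset (`np.flatnonzero(best)`) would then feed the fusion and classification stages described above.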


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 263
Author(s):  
Amal Altamimi ◽  
Belgacem Ben Youssef

Hyperspectral imaging is an indispensable technology for many remote sensing applications, yet it is expensive in terms of computing resources. It requires significant processing power and large storage due to the immense size of hyperspectral data, especially in the aftermath of recent advancements in sensor technology. Issues pertaining to bandwidth limitation also arise when seeking to transfer such data from airborne or spaceborne platforms to ground stations for postprocessing. This is particularly crucial for small satellite applications, where the platform is confined to limited power, weight, and storage capacity. The availability of onboard data compression would help alleviate the impact of these issues while preserving the information contained in the hyperspectral image. We present herein a systematic review of hardware-accelerated compression of hyperspectral images targeting remote sensing applications. We reviewed a total of 101 papers published from 2000 to 2021. We present a comparative performance analysis of the synthesized results with an emphasis on metrics like power requirement, throughput, and compression ratio. Furthermore, we rank the best algorithms based on efficiency and elaborate on the major factors impacting the performance of hardware-accelerated compression. We conclude by highlighting some of the research gaps in the literature and recommend potential areas of future research.


Author(s):  
Leila Akrour ◽  
Soltane Ameur ◽  
Mourad Lahdir ◽  
Régis Fournier ◽  
Amine Nait Ali

Many compression methods, lossy or lossless, have been developed for 3D hyperspectral images, and various standards have emerged and been applied to these data in order to achieve the best rate-distortion performance. However, the high-dimensional data volume of hyperspectral images is problematic for compression and decompression time. Nowadays, fast compression and especially fast decompression algorithms are of primary importance in image data applications. We therefore present a lossy hyperspectral image compression method based on a supervised multimodal scheme in order to improve the compression results. The supervised multimodal method is used to reduce the amount of data before compression with the 3D-SPIHT encoder based on a 3D wavelet transform. The performance of the Supervised Multimodal Compression (SMC-3D-SPIHT) encoder has been evaluated on AVIRIS hyperspectral images. Experimental results indicate that the proposed algorithm provides very promising performance at low bit-rates while reducing the encoding/decoding time.
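SPIHT itself is an involved bit-plane coder, but the 3D wavelet front end it relies on can be sketched with a one-level orthonormal Haar transform applied separably along bands, rows, and columns. This is a simplification for illustration; the authors' actual filter bank and AVIRIS data are not reproduced here.

```python
import numpy as np

def haar_axis(a, axis, inverse=False):
    # one-level orthonormal Haar transform along a single axis
    a = np.moveaxis(a, axis, 0)
    if not inverse:
        lo = (a[0::2] + a[1::2]) / np.sqrt(2)   # averages (coarse band)
        hi = (a[0::2] - a[1::2]) / np.sqrt(2)   # details (mostly small)
        out = np.concatenate([lo, hi])
    else:
        n = a.shape[0] // 2
        lo, hi = a[:n], a[n:]
        out = np.empty_like(a)
        out[0::2] = (lo + hi) / np.sqrt(2)
        out[1::2] = (lo - hi) / np.sqrt(2)
    return np.moveaxis(out, 0, axis)

rng = np.random.default_rng(3)
cube = rng.normal(size=(4, 8, 8)).cumsum(axis=1).cumsum(axis=2)  # smooth-ish

coeffs = cube
for ax in range(3):          # separable 3D transform: bands, rows, cols
    coeffs = haar_axis(coeffs, ax)
recon = coeffs
for ax in range(3):
    recon = haar_axis(recon, ax, inverse=True)

assert np.allclose(cube, recon)   # the transform is perfectly invertible
```

Lossy coders such as 3D-SPIHT exploit the fact that, on smooth data, most of the energy concentrates in the coarse-band coefficients, so the many small detail coefficients can be coarsely quantized or discarded.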


Symmetry ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1673
Author(s):  
Aili Wang ◽  
Chengyang Liu ◽  
Dong Xue ◽  
Haibin Wu ◽  
Yuxiao Zhang ◽  
...  

Although hyperspectral data provide rich feature information and are widely used in many fields, labeled samples remain scarce, and classification from small training sets is still a major challenge for deep-learning-based HSI classification. Recently, mining the relationships between samples has proven to be an effective strategy for training with small samples. However, this strategy requires high computational power, which increases the difficulty of training the network model. This paper proposes a modified depthwise separable relational network to deeply capture the similarity between samples. In addition, in order to effectively mine this similarity, the feature vectors of support samples and query samples are symmetrically spliced. According to the metric distance between the symmetrical structures, the dependence of the model on samples can be effectively reduced. Firstly, to improve training efficiency, depthwise separable convolution is introduced to reduce the computational cost of the model. Secondly, the Leaky-ReLU function effectively activates all neurons in each layer of the neural network, further improving training efficiency. Finally, a cosine annealing learning-rate schedule is introduced to help the model avoid local optima and to enhance its robustness. Experimental results on two widely used hyperspectral remote sensing image data sets (Pavia University and Kennedy Space Center) show that, compared with seven other advanced classification methods, the proposed method achieves better classification accuracy under the condition of limited training samples.
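Two of the training ingredients mentioned above have simple closed forms. The sketch below uses generic textbook formulas with arbitrary hyperparameters, not the paper's tuned settings:

```python
import numpy as np

def leaky_relu(x, alpha=0.01):
    # keeps a small gradient for negative inputs so neurons stay active
    return np.where(x > 0, x, alpha * x)

def cosine_annealing(step, total_steps, lr_max=0.1, lr_min=0.001):
    # learning rate decays from lr_max to lr_min along a half cosine
    return lr_min + 0.5 * (lr_max - lr_min) * (1 + np.cos(np.pi * step / total_steps))

T = 100
lrs = np.array([cosine_annealing(t, T) for t in range(T + 1)])
assert np.isclose(lrs[0], 0.1) and np.isclose(lrs[-1], 0.001)
assert np.all(np.diff(lrs) <= 0)          # schedule only ever decreases
assert np.isclose(leaky_relu(-2.0), -0.02)
```

The slow, flat decay near the end of the cosine schedule is what helps the network settle into a good minimum instead of oscillating out of it.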


2019 ◽  
pp. 561-580
Author(s):  
Haoyi Zhou ◽  
Jun Zhou ◽  
Haichuan Yang ◽  
Cheng Yan ◽  
Xiao Bai ◽  
...  



Author(s):  
Jing Li ◽  
Xiaorun Li ◽  
Liaoying Zhao

The minimization of reconstruction error over large hyperspectral image data is one of the most important problems in unsupervised hyperspectral unmixing. A variety of algorithms based on nonnegative matrix factorization (NMF) have been proposed in the literature to solve this minimization problem. One popular optimization method for NMF is projected gradient descent (PGD). However, because the algorithm must compute the full gradient over the entire dataset at every iteration, PGD suffers from high computational cost on large-scale real hyperspectral images. In this paper, we alleviate this problem by introducing a mini-batch gradient descent-based algorithm, which has been widely used in large-scale machine learning. In our method, the endmembers can be updated pixel set by pixel set, while the abundances can be updated band set by band set. Thus, the computational cost is lowered to a certain extent. The performance of the proposed algorithm is quantified in experiments on synthetic and real data.
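The pixel-set/band-set update scheme can be illustrated with a toy sketch of mini-batch projected gradient descent for NMF (synthetic data, arbitrary step and batch sizes; this is not the paper's tuned algorithm). The factorization is X ≈ EA with E the endmember matrix and A the abundance matrix, and the projection is simply clipping at zero.

```python
import numpy as np

rng = np.random.default_rng(4)
n_bands, n_pix, p = 6, 200, 3
E_true = rng.random((n_bands, p))     # endmember signatures (bands x p)
A_true = rng.random((p, n_pix))       # abundances (p x pixels)
X = E_true @ A_true                   # synthetic mixed hyperspectral image

E = rng.random((n_bands, p))
A = rng.random((p, n_pix))
lr = 1e-2

def err():
    return np.linalg.norm(X - E @ A)

err0 = err()
for it in range(500):
    # endmember update from a random pixel set (columns of X)
    S = rng.choice(n_pix, 40, replace=False)
    gE = (E @ A[:, S] - X[:, S]) @ A[:, S].T / len(S)
    E = np.maximum(E - lr * gE, 0.0)          # projection onto E >= 0
    # abundance update from a random band set (rows of X)
    R = rng.choice(n_bands, 3, replace=False)
    gA = E[R].T @ (E[R] @ A - X[R]) / len(R)
    A = np.maximum(A - lr * gA, 0.0)          # projection onto A >= 0

assert err() < err0    # mini-batch PGD reduces reconstruction error
```

Each step touches only a 40-pixel slice or a 3-band slice of X rather than the whole matrix, which is where the per-iteration savings over full-gradient PGD come from.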


Author(s):  
Guohua Xiong

To support the efficient development of Internet of Vehicles (IoV) technology, big data compression for vehicular networks was studied. First, RFID technology and the IoV, big data technology in the IoV, and RFID path data compression in the IoV were introduced. Then, RFID path data compression verification experiments were performed. The results showed that when the data volume was relatively small, there was no obvious difference in compression ratio between a fixed threshold and a varying threshold. However, as the amount of data gradually increased, the compression ratio under a varying threshold became slightly higher than under a fixed threshold. Therefore, RFID path big data processing is feasible, and the compression technique is efficient.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3528 ◽  
Author(s):  
Yang Shao ◽  
Jinhui Lan ◽  
Yuzhen Zhang ◽  
Jinlin Zou

Hyperspectral unmixing, which decomposes mixed pixels into endmembers and their corresponding abundance maps, has attracted much attention in recent decades. Most spectral unmixing algorithms based on non-negative matrix factorization (NMF) do not explore the intrinsic manifold structure of the hyperspectral data space. Studies have shown that image data are smooth along this intrinsic manifold structure. This paper therefore explores the intrinsic manifold structure of the hyperspectral data space and introduces manifold learning into NMF for spectral unmixing. Firstly, a novel projection equation is employed to model the intrinsic structure of the hyperspectral image, preserving both its spectral and spatial information. Then, a graph regularizer that establishes a close link between the hyperspectral image and the abundance matrix is introduced to keep the intrinsic structure invariant during spectral unmixing. In this way, the decomposed abundance matrix is able to preserve the true abundance intrinsic structure, which leads to more desirable spectral unmixing performance. Finally, experimental results, including the spectral angle distance and the root mean square error on synthetic and real hyperspectral data, demonstrate the superiority of the proposed method over previous methods.
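The graph-regularized NMF machinery underlying this kind of unmixing can be sketched with multiplicative updates in the style of Cai et al.'s GNMF, minimizing ||X - EA||² + λ·Tr(A L Aᵀ). Everything here is illustrative: a simple chain graph over pixels stands in for the paper's projection-based structure model, and the data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(5)
n_bands, n_pix, p, lam, eps = 8, 60, 3, 0.1, 1e-9
X = rng.random((n_bands, p)) @ rng.random((p, n_pix))   # toy mixed pixels

# affinity graph over pixels; a simple chain stands in for a real
# spatial/spectral neighborhood graph
W = np.zeros((n_pix, n_pix))
i = np.arange(n_pix - 1)
W[i, i + 1] = W[i + 1, i] = 1.0
D = np.diag(W.sum(axis=1))
L = D - W                                               # graph Laplacian

E = rng.random((n_bands, p))    # endmembers
A = rng.random((p, n_pix))      # abundances

def objective():
    return np.linalg.norm(X - E @ A) ** 2 + lam * np.trace(A @ L @ A.T)

obj0 = objective()
for _ in range(100):
    # multiplicative updates in the style of graph-regularized NMF
    E *= (X @ A.T) / (E @ A @ A.T + eps)
    A *= (E.T @ X + lam * A @ W) / (E.T @ E @ A + lam * A @ D + eps)

assert objective() < obj0   # the regularized objective decreases
```

The λ·Tr(A L Aᵀ) term penalizes abundance differences between graph-connected pixels, which is how the regularizer keeps the abundance matrix smooth along the data's intrinsic structure.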

