Spectral Distortion in Lossy Compression of Hyperspectral Data

2012 ◽  
Vol 2012 ◽  
pp. 1-8 ◽  
Author(s):  
Bruno Aiazzi ◽  
Luciano Alparone ◽  
Stefano Baronti ◽  
Cinzia Lastri ◽  
Massimo Selva

Distortion allocation varying with wavelength in lossy compression of hyperspectral imagery is investigated, with the aim of minimizing the spectral distortion between original and decompressed data. The absolute angular error, or spectral angle mapper (SAM), is used to quantify spectral distortion, while radiometric distortions are measured by maximum absolute deviation (MAD) for near-lossless methods, for example, differential pulse code modulation (DPCM), or mean-squared error (MSE) for lossy methods, for example, spectral decorrelation followed by JPEG 2000. Two strategies of interband distortion allocation are compared: given a target average bit rate, distortion may be set to be constant with wavelength, or it may be allocated proportionally to the noise level of each band, according to the virtually lossless protocol. Comparisons with the uncompressed originals show that the average SAM of radiance spectra is minimized by constant distortion allocation to radiance data. However, variable distortion allocation according to the virtually lossless protocol yields significantly lower SAM for reflectance spectra obtained from compressed radiance data, compared with constant distortion allocation at the same compression ratio.
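As a concrete illustration of the metrics involved, the sketch below computes SAM between original and decompressed pixel spectra alongside the MAD and MSE radiometric measures; the array shapes and function names are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of the distortion metrics discussed above, assuming
# spectra are stored as (num_pixels, num_bands) arrays.
import numpy as np

def spectral_angle_mapper(original, decompressed):
    """Mean absolute spectral angle (radians) over all pixels."""
    dot = np.sum(original * decompressed, axis=1)
    norms = np.linalg.norm(original, axis=1) * np.linalg.norm(decompressed, axis=1)
    # Clip to guard against round-off pushing the cosine outside [-1, 1].
    cos_angle = np.clip(dot / np.maximum(norms, 1e-12), -1.0, 1.0)
    return np.mean(np.arccos(cos_angle))

def max_absolute_deviation(original, decompressed):
    """MAD, the radiometric measure used for near-lossless (e.g., DPCM) methods."""
    return np.max(np.abs(original - decompressed))

def mean_squared_error(original, decompressed):
    """MSE, the radiometric measure used for lossy (e.g., JPEG 2000-based) methods."""
    return np.mean((original - decompressed) ** 2)
```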

2004 ◽  
Vol 17 (2) ◽  
pp. 165-184 ◽  
Author(s):  
Johannes Huber ◽  
Bernd Matschkal

A new method for efficiently digitizing analog signals while preserving the original waveform as closely as possible with respect to the relative quantization error is presented. Logarithmic quantization is applied to short vectors of samples represented in spherical coordinates. The resulting advantages, i.e., a constant signal-to-noise ratio (SNR) over a very high dynamic range at a small loss with respect to rate-distortion theory, are discussed. To increase the SNR by exploiting correlations within the source signal, a method combining differential pulse code modulation (DPCM) with spherical logarithmic quantization is presented. The resulting technique achieves an efficient digital representation of waveforms with a high long-term as well as segmental SNR at an extremely low signal delay.
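The central idea, a roughly constant relative error obtained by quantizing the radius of a short sample vector on a logarithmic grid, can be sketched as follows. The 3-sample block length, the step sizes, and the simplified direction handling (rounding the unit vector rather than quantizing true sphere angles) are assumptions for illustration, not the authors' scheme.

```python
# Minimal sketch: logarithmic quantization of a block's radius keeps the
# relative radius error bounded by the log step, so the SNR stays roughly
# constant whether the block is loud or quiet.
import numpy as np

def quantize_block(block, log_step=0.05, angle_step=0.02):
    """Quantize one short sample block (e.g., 3 samples) in radius/direction form."""
    radius = np.linalg.norm(block)
    if radius == 0.0:
        return np.zeros_like(block)
    # Logarithmic quantization of the radius: uniform steps in the log domain.
    q_radius = np.exp(np.round(np.log(radius) / log_step) * log_step)
    # Simplified direction quantization: round the unit vector, then renormalize.
    direction = block / radius
    q_direction = np.round(direction / angle_step) * angle_step
    q_direction /= np.linalg.norm(q_direction)
    return q_radius * q_direction
```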


Author(s):  
Dingwen Tao ◽  
Sheng Di ◽  
Hanqi Guo ◽  
Zizhong Chen ◽  
Franck Cappello

Because of the vast volume of data produced by today's scientific simulations and experiments, lossy data compressors that allow user-controlled loss of accuracy during compression are a relevant solution for significantly reducing data size. However, lossy compressor developers and users lack a tool to explore the features of scientific data sets and understand how the data are altered by compression in a systematic and reliable way. To address this gap, we have designed and implemented a generic framework called Z-checker. On the one hand, Z-checker combines a battery of data analysis components for data compression. On the other hand, Z-checker is implemented as an open-source community tool to which users and developers can contribute and add new analysis components based on their additional analysis demands. In this article, we present a survey of existing lossy compressors. Then, we describe the design framework of Z-checker, in which we integrated evaluation metrics proposed in prior work as well as other analysis tools. Specifically, for lossy compressor developers, Z-checker can be used to characterize critical properties (such as entropy, distribution, power spectrum, principal component analysis, and autocorrelation) of any data set to improve compression strategies. For lossy compression users, Z-checker can report the compression quality (compression ratio and bit rate) and provide various global distortion analyses comparing the original data with the decompressed data (peak signal-to-noise ratio, normalized mean squared error, rate-distortion, rate-compression error, spectral, distribution, and derivatives) as well as statistical analysis of the compression error (maximum, minimum, and average error; autocorrelation; and distribution of errors). Z-checker can perform the analysis with either coarse granularity (throughout the whole data set) or fine granularity (over user-defined blocks), so that users and developers can select the best-fit, adaptive compressors for different parts of the data set. Z-checker features a visualization interface displaying all analysis results in addition to some basic views of the data sets, such as time series. To the best of our knowledge, Z-checker is the first tool designed to assess lossy compression comprehensively for scientific data sets.
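A rough sketch, in the spirit of the distortion analyses listed above, of how a user might compute a few of these metrics (PSNR, normalized MSE, and the autocorrelation of the compression error) on original and decompressed arrays; this is not Z-checker's API, and the function names are illustrative.

```python
# Illustrative compression-quality metrics over flattened data arrays.
import numpy as np

def psnr(original, decompressed):
    """Peak signal-to-noise ratio in dB, using the data's value range as peak."""
    mse = np.mean((original - decompressed) ** 2)
    value_range = original.max() - original.min()
    return 10.0 * np.log10(value_range ** 2 / mse) if mse > 0 else np.inf

def normalized_mse(original, decompressed):
    """MSE normalized by the mean squared value of the original data."""
    return np.mean((original - decompressed) ** 2) / np.mean(original ** 2)

def error_autocorrelation(original, decompressed, lag=1):
    """Autocorrelation of the pointwise compression error at a given lag."""
    err = (original - decompressed).ravel()
    err = err - err.mean()
    denom = np.dot(err, err)
    return np.dot(err[:-lag], err[lag:]) / denom if denom > 0 else 0.0
```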


2010 ◽  
Vol 69 (6) ◽  
pp. 537-563 ◽  
Author(s):  
N. N. Ponomarenko ◽  
M. S. Zriakhov ◽  
A. Kaarna

2021 ◽  
Vol 13 (11) ◽  
pp. 2125
Author(s):  
Bardia Yousefi ◽  
Clemente Ibarra-Castanedo ◽  
Martin Chamberland ◽  
Xavier P. V. Maldague ◽  
Georges Beaudoin

Clustering methods unequivocally show considerable influence on many recent algorithms and play an important role in hyperspectral data analysis. Here, we investigate clustering for mineral identification using two different strategies in the hyperspectral long-wave infrared (LWIR, 7.7–11.8 μm) range, comparing two algorithms on a single dataset. The first algorithm applies spectral comparison techniques to all pixel spectra and creates RGB false color composites (FCC); a color-based clustering then groups the regions (called FCC-clustering). The second algorithm clusters all pixel spectra directly; the first rank of a non-negative matrix factorization (NMF) then extracts a representative spectrum for each cluster and compares it against the JPL/NASA spectral library. The resulting comparison values are used as features and converted into RGB FCCs (called clustering-rank1-NMF). We used K-means as the clustering approach, which can be replaced by any other similar clustering method. The clustering-rank1-NMF algorithm shows significant computational efficiency (more than 20 times faster than the first approach) and promising performance for mineral identification, with average accuracies of up to 75.8% and 84.8% for the FCC-clustering and clustering-rank1-NMF algorithms (using the spectral angle mapper (SAM)), respectively. Furthermore, several spectral comparison techniques are also used for both algorithms, such as the adaptive matched subspace detector (AMSD), the orthogonal subspace projection (OSP) algorithm, principal component analysis (PCA), the local matched filter (PLMF), SAM, and normalized cross-correlation (NCC), and most of them show a similar range of accuracy. However, SAM and NCC are preferred due to their computational simplicity. Our algorithms aim to identify eleven different mineral grains (biotite, diopside, epidote, goethite, kyanite, scheelite, smithsonite, tourmaline, pyrope, olivine, and quartz).
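The clustering-rank1-NMF pipeline can be sketched roughly as below: K-means groups the pixel spectra, a rank-1 NMF yields one representative spectrum per cluster, and SAM matches each representative against a reference library. The cluster count, the library layout, and the scikit-learn usage are illustrative assumptions rather than the authors' implementation.

```python
# Sketch of clustering followed by rank-1 NMF representatives and SAM matching.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.decomposition import NMF

def representative_spectra(pixel_spectra, n_clusters=11):
    """pixel_spectra: (num_pixels, num_bands) non-negative LWIR spectra."""
    labels = KMeans(n_clusters=n_clusters, n_init=10).fit_predict(pixel_spectra)
    reps = []
    for k in range(n_clusters):
        cluster = pixel_spectra[labels == k]
        # Rank-1 NMF: the single component serves as the cluster representative.
        nmf = NMF(n_components=1, init="nndsvda", max_iter=500)
        nmf.fit(cluster)
        reps.append(nmf.components_[0])
    return labels, np.array(reps)

def sam(a, b):
    """Spectral angle (radians) between two spectra."""
    cos = np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return np.arccos(np.clip(cos, -1.0, 1.0))

def identify(reps, library):
    """library: dict mapping mineral name -> reference spectrum on the same band grid."""
    return [min(library, key=lambda name: sam(r, library[name])) for r in reps]
```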


1983 ◽  
Vol 19 (2) ◽  
pp. 63 ◽  
Author(s):  
N.M. Nasrabadi ◽  
S.K. Pal ◽  
R.A. King

2019 ◽  
Vol 11 (14) ◽  
pp. 1635 ◽  
Author(s):  
Jiaojiao Li ◽  
Jiaji Wu ◽  
Gwanggil Jeon

It is well known that aurorae have very high research value, but the volume of aurora spectral data is very large, which poses great challenges for storage and transmission. To alleviate this problem, compression of aurora spectral data is indispensable. This paper presents a parallel Compute Unified Device Architecture (CUDA) implementation of the prediction-based online differential pulse code modulation (DPCM) method for the lossless compression of aurora spectral data. Two improvements are proposed to increase the compression performance of the online DPCM method: one concerns the computation of the prediction coefficients, and the other the encoding of the residual. In the CUDA implementation, we propose a decomposition method for the matrix multiplication that avoids redundant data accesses and calculations. In addition, the CUDA implementation is optimized with a multi-stream technique and a multi-graphics-processing-unit (GPU) technique, respectively. Finally, the average compression time for an aurora spectral image is about 0.06 s, much less than the 15 s acquisition interval of the aurora spectral data, which saves considerable time for transmission and other subsequent tasks.
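To make the DPCM idea concrete, the sketch below shows a prediction-based DPCM encoder/decoder pair on a single line of samples; the simple previous-sample predictor is an illustrative assumption, not the paper's online DPCM predictor or its CUDA kernels.

```python
# First-order DPCM: each sample is predicted from the previous one and only
# the integer residual is stored, which is exactly invertible (lossless).
import numpy as np

def dpcm_encode(samples):
    """samples: 1-D integer array (e.g., one spectral line)."""
    residuals = np.empty_like(samples)
    residuals[0] = samples[0]                     # first sample stored verbatim
    residuals[1:] = samples[1:] - samples[:-1]    # predict each sample by its predecessor
    return residuals

def dpcm_decode(residuals):
    return np.cumsum(residuals)                   # exact inverse of the encoder

# The residuals of a smooth spectrum are small and cheap to entropy-code,
# which is where the compression gain comes from.
line = np.array([100, 102, 105, 104, 108], dtype=np.int32)
assert np.array_equal(dpcm_decode(dpcm_encode(line)), line)
```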

