Vector Quantization for Satellite Image Compression

Author(s):  
Sanjith Sathya Joseph ◽  
R. Ganesan

Image compression is the process of reducing the size of a file without degrading the quality of the image to a level unacceptable to the human visual system. The reduction in file size allows us to store more data in less memory and speeds up transmission over low-bandwidth channels; in the case of satellite images, it also reduces the time required for an image to reach the ground station. Compression therefore plays an important role in the transmission of remote sensing images. This paper presents a coding scheme for satellite images using Vector Quantization (VQ), a well-known technique for signal compression that generalizes scalar quantization. The given satellite image is compressed using the VCDemo software by creating codebooks for vector quantization, and the quality of the compressed and decompressed image is assessed using the Mean Square Error, Signal to Noise Ratio, and Peak Signal to Noise Ratio values.
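
The encode/decode stages of such a VQ scheme can be sketched as follows. This is a generic illustration using a k-means-trained codebook on 2×2 blocks, not the VCDemo implementation; all function names are hypothetical:

```python
import numpy as np

def make_codebook(blocks, k, iters=20, seed=0):
    """Build a k-entry codebook with naive k-means (Lloyd's algorithm)."""
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), k, replace=False)]
    for _ in range(iters):
        # Assign each block to its nearest codeword (squared Euclidean distance).
        d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        idx = d.argmin(axis=1)
        # Move each codeword to the centroid of its assigned blocks.
        for j in range(k):
            if (idx == j).any():
                codebook[j] = blocks[idx == j].mean(axis=0)
    return codebook

def vq_encode(img, codebook, bs=2):
    """Replace each bs*bs block by the index of its nearest codeword."""
    h, w = img.shape
    blocks = img.reshape(h // bs, bs, w // bs, bs).swapaxes(1, 2).reshape(-1, bs * bs)
    d = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d.argmin(axis=1)  # one index per block instead of bs*bs pixels

def vq_decode(indices, codebook, shape, bs=2):
    """Reassemble the image from codebook entries."""
    h, w = shape
    blocks = codebook[indices].reshape(h // bs, w // bs, bs, bs)
    return blocks.swapaxes(1, 2).reshape(h, w)
```

The compression comes from transmitting one codebook index per block rather than the block's pixels; quality is then measured by comparing the decoded image against the original.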

Author(s):  
S. Sanjith ◽  
R. Ganesan

Measuring image quality is a complex and difficult process, since human opinion is affected by physical and psychological factors. Many techniques have been invented and proposed for image quality analysis, but no single method suits all cases. Assessment of image quality plays an important role in image processing. In this paper we present experimental results comparing the quality of different satellite images (ALOS, RapidEye, SPOT4, SPOT5, SPOT6, SPOTMap) after compression using four different compression methods: Joint Photographic Experts Group (JPEG), Embedded Zerotree Wavelet (EZW), Set Partitioning in Hierarchical Trees (SPIHT), and JPEG 2000. The Mean Square Error (MSE), Signal to Noise Ratio (SNR), and Peak Signal to Noise Ratio (PSNR) values are calculated to determine the quality of the high-resolution satellite images after compression.
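
The studies collected here rely on the same three fidelity metrics. For a reference image x and a distorted image y of the same size they can be computed as below (a generic sketch assuming 8-bit images, so the peak value is 255):

```python
import numpy as np

def mse(x, y):
    """Mean squared error between reference x and distorted y."""
    return np.mean((x.astype(float) - y.astype(float)) ** 2)

def snr(x, y):
    """Signal to noise ratio in dB: signal power over error power."""
    return 10 * np.log10(np.mean(x.astype(float) ** 2) / mse(x, y))

def psnr(x, y, max_val=255.0):
    """Peak signal to noise ratio in dB (assumes x != y, else MSE is zero)."""
    return 10 * np.log10(max_val ** 2 / mse(x, y))
```

Higher SNR/PSNR and lower MSE indicate that the compressed image is closer to the original.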


In recent years, the importance of image compression techniques has increased exponentially due to the generation of massive amounts of data that must be stored or transmitted. Numerous approaches have been presented for effective image compression based on the principle of representing an image in compact form by avoiding unnecessary pixels. Vector quantization (VQ) is an effective method in image compression, and the construction of the quantization table is an important task. The compression performance and the quality of the reconstructed data depend on the quantization table, which is a matrix of 64 integers. Quantization table selection is a complex combinatorial problem that can be resolved by evolutionary algorithms (EA), which have become popular for solving real-world problems in a reasonable amount of time. This chapter introduces a Firefly (FF) with Teaching-Learning-Based Optimization (TLBO) algorithm, termed the FF-TLBO algorithm, for the selection of the quantization table, and a Firefly with Tumbling algorithm, termed the FF-Tumbling algorithm, for the selection of the search space. As the FF algorithm faces a problem when brighter fireflies are insignificant, the TLBO algorithm is integrated with it to resolve the problem, and tumbling efficiently trains the algorithm to explore all directions in the solution space. The algorithm determines the best fitness value for every block as the local best, and the best fitness value for the entire image is considered the global best. Once these values are found by the FF algorithm, compression takes place using efficient image compression algorithms such as Run Length Encoding and Huffman coding. The proposed FF-TLBO and FF-Tumbling algorithms are evaluated by comparing their results with the existing FF algorithm on the same set of benchmark images in terms of Mean Square Error (MSE), Peak Signal to Noise Ratio (PSNR), and Signal to Noise Ratio (SNR). The obtained results confirm the superior performance of the FF-TLBO and FF-Tumbling algorithms over the FF algorithm and make them highly useful for real-time applications.
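
Of the pipeline described above, the final entropy-coding stage is the simplest to illustrate. A minimal run-length encoder/decoder of the kind the chapter pairs with Huffman coding (a generic sketch, not the chapter's implementation):

```python
def rle_encode(data):
    """Collapse runs of repeated symbols into (symbol, count) pairs."""
    out = []
    for sym in data:
        if out and out[-1][0] == sym:
            out[-1][1] += 1  # extend the current run
        else:
            out.append([sym, 1])  # start a new run
    return [tuple(p) for p in out]

def rle_decode(pairs):
    """Expand (symbol, count) pairs back into the original sequence."""
    return [sym for sym, n in pairs for _ in range(n)]
```

RLE pays off on quantized coefficient streams, which contain long runs of zeros; the (symbol, count) pairs are then typically Huffman-coded.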


Author(s):  
Anusorn Jitkam ◽  
Satra Wongthanavasu

This research presents an image compression algorithm using a modified Haar wavelet and vector quantization. For comparison purposes, the standard Haar wavelet with vector quantization and SPIHT, which is used in JPEG2000, are compared with the proposed method using Peak Signal-to-Noise Ratio (PSNR). The proposed method shows better results on average than the compared methods.
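
For reference, one level of the standard (unmodified) 2-D Haar transform averages and differences adjacent pixel pairs along rows and then columns, producing one approximation and three detail subbands; a minimal orthonormal sketch:

```python
import numpy as np

def haar2d_level(img):
    """One level of the 2-D Haar transform; returns (LL, LH, HL, HH)."""
    x = img.astype(float)
    # Rows: average and difference of adjacent pixel pairs (scaled by 1/sqrt(2)).
    lo = (x[:, 0::2] + x[:, 1::2]) / np.sqrt(2)
    hi = (x[:, 0::2] - x[:, 1::2]) / np.sqrt(2)
    # Columns: repeat the same split on both row outputs.
    ll = (lo[0::2] + lo[1::2]) / np.sqrt(2)   # approximation subband
    lh = (lo[0::2] - lo[1::2]) / np.sqrt(2)   # horizontal detail
    hl = (hi[0::2] + hi[1::2]) / np.sqrt(2)   # vertical detail
    hh = (hi[0::2] - hi[1::2]) / np.sqrt(2)   # diagonal detail
    return ll, lh, hl, hh
```

The orthonormal scaling preserves energy, so most of the image energy concentrates in LL and the sparse detail subbands quantize well.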


2018 ◽  
Vol 9 (2) ◽  
pp. 93
Author(s):  
Novera Kristianti ◽  
Niwayan Purnawati ◽  
Bryand Rolando

Abstract. An image can be classified as dark, normal, or bright. Images are grouped as dark according to the histogram and the mean intensity (mu) value. An image consists of information and redundancy. The use of wavelets is considered effective in image compression; it not only cuts down memory usage but also makes devices work faster. In this study, an analysis is conducted on the influence of dark, normal, and bright images on orthogonal wavelets. Peak Signal to Noise Ratio (PSNR) is used to compare 17 orthogonal wavelet functions in the compression of dark, normal, and bright images. PSNR is a measurement parameter commonly used for measuring the quality of a reconstructed image compared with the original image. Compression ratio is used to measure the reduction in data size after the compression process. Based on the experiments on dark, normal, and bright images, the findings reveal that bright images have the lowest PSNR values across all test images, while normal images have the highest PSNR values under orthogonal wavelet compression. Keywords: image compression, orthogonal wavelet, PSNR, compression ratio.
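
The two measures used in the study can be computed directly: PSNR from the MSE, and compression ratio as original size over compressed size. A crude stand-in for a wavelet coder, which keeps only the largest-magnitude fraction of coefficients, illustrates the trade-off (illustrative only; `keep_largest` is a hypothetical helper, not the paper's method):

```python
import numpy as np

def compression_ratio(original_bytes, compressed_bytes):
    """CR = original size / compressed size; values > 1 mean the data shrank."""
    return original_bytes / compressed_bytes

def keep_largest(coeffs, frac):
    """Zero all but the largest `frac` fraction of coefficients (by magnitude)."""
    flat = np.abs(coeffs).ravel()
    k = max(1, int(frac * flat.size))
    thresh = np.partition(flat, -k)[-k]  # k-th largest magnitude
    return np.where(np.abs(coeffs) >= thresh, coeffs, 0.0)
```

Keeping fewer coefficients raises the compression ratio but lowers the PSNR of the reconstruction; the study compares where different wavelet bases land on that curve for dark, normal, and bright inputs.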


Author(s):  
S. Sanjith ◽  
R. Ganeshan

The rapid growth of remote sensing technology has a great advantage in producing high-resolution images, which are huge in data volume. Due to this huge volume, it is tedious to store and transmit the data. To overcome this, a good compression algorithm should be used to compress the data before storing or transmitting it. In this paper we choose seven different very high resolution satellite images, namely Worldview 3, Worldview 2, GeoEye-1, Worldview 1, Pleiades, QuickBird, and IKONOS, and compress them using three different compression methods: JPEG, SPIHT, and JPEG2000. The Mean Square Error, Signal to Noise Ratio, and Peak Signal to Noise Ratio are calculated to evaluate the quality of the compression methods on very high resolution satellite images.


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. We present three image enhancement methods, GHE, LHE, and the proposed DSIHE, that improve the visual quality of images. Comparative calculations are carried out on the above-mentioned techniques to examine objective and subjective image quality parameters, e.g., Peak Signal-to-Noise Ratio (PSNR), entropy H, and mean squared error (MSE), to measure the quality of the enhanced grayscale images. For grayscale images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image to the middle of the gray-level range, limiting their suitability for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve both brightness and contrast enhancement. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
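
The first-level DSIHE step (split at the median, equalize each sub-image over its own half of the gray-level range) can be sketched as follows; this is a simplified illustration, not the paper's exact implementation:

```python
import numpy as np

def equalize(pixels, lo, hi):
    """Histogram-equalize `pixels` onto the output range [lo, hi]."""
    hist, _ = np.histogram(pixels, bins=256, range=(0, 256))
    cdf = hist.cumsum() / max(pixels.size, 1)
    lut = (lo + (hi - lo) * cdf).astype(np.uint8)  # lookup table per gray level
    return lut[pixels]

def dsihe(img):
    """Dualistic sub-image HE: split at the median, equalize each half."""
    m = int(np.median(img))
    low_mask = img <= m
    out = np.empty_like(img)
    out[low_mask] = equalize(img[low_mask], 0, m)        # dark sub-image
    out[~low_mask] = equalize(img[~low_mask], m + 1, 255)  # bright sub-image
    return out
```

Because each half is equalized within its own range, the output median stays near the input median, which is what preserves the mean brightness that global HE tends to destroy.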


Author(s):  
Mourad Talbi ◽  
Med Salim Bouhlel

Background: In this paper, we propose a secure image watermarking technique applicable to grayscale and color images. It consists of applying the SVD (Singular Value Decomposition) in the Lifting Wavelet Transform domain to embed a speech signal (the watermark) into the host image. Methods: It also uses a signature in the embedding and extraction steps. Its performance is justified by the computation of PSNR (Peak Signal to Noise Ratio), SSIM (Structural Similarity), SNR (Signal to Noise Ratio), SegSNR (Segmental SNR), and PESQ (Perceptual Evaluation of Speech Quality). Results: The PSNR and SSIM are used for evaluating the perceptual quality of the watermarked image compared to the original image. The SNR, SegSNR, and PESQ are used for evaluating the perceptual quality of the reconstructed or extracted speech signal compared to the original speech signal. Conclusion: The results obtained from the computation of PSNR, SSIM, SNR, SegSNR, and PESQ show the performance of the proposed technique.
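
The core SVD embedding step follows the common additive pattern S' = S + αW on the singular values. The sketch below applies it directly in the spatial domain rather than the Lifting Wavelet Transform domain the paper uses, with an assumed strength parameter alpha; it illustrates the general pattern, not the authors' method:

```python
import numpy as np

def svd_embed(host, watermark, alpha=0.05):
    """Additively embed `watermark` into the singular values of `host`.
    `watermark` must have the same length as the singular-value vector."""
    u, s, vt = np.linalg.svd(host.astype(float), full_matrices=False)
    s_marked = s + alpha * watermark
    # Keep the original singular values: they act as the extraction key.
    return u @ np.diag(s_marked) @ vt, s

def svd_extract(marked, s_orig, alpha=0.05):
    """Recover the watermark from the marked image's singular values."""
    _, s_m, _ = np.linalg.svd(marked, full_matrices=False)
    return (s_m - s_orig) / alpha
```

Small alpha keeps the watermarked image perceptually close to the host (high PSNR/SSIM) at the cost of a weaker, more fragile watermark; alpha must also be small enough that the perturbed singular values stay in descending order, or extraction misaligns.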


Sensors ◽  
2021 ◽  
Vol 21 (16) ◽  
pp. 5540
Author(s):  
Nayeem Hasan ◽  
Md Saiful Islam ◽  
Wenyu Chen ◽  
Muhammad Ashad Kabir ◽  
Saad Al-Ahmadi

This paper proposes an encryption-based image watermarking scheme using a combination of second-level discrete wavelet transform (2DWT) and discrete cosine transform (DCT) with an auto-extraction feature. The 2DWT has been selected based on an analysis of the trade-off between imperceptibility of the watermark and embedding capacity at various levels of decomposition. A DCT operation is applied to the selected area to gather the image coefficients into a single vector using a zig-zag operation. We utilized the same random bit sequence as both the watermark and the seed for the embedding-zone coefficients. The quality of the reconstructed image was measured by bit correction rate, peak signal-to-noise ratio (PSNR), and similarity index. Experimental results demonstrated that the proposed scheme is highly robust under different types of image-processing attacks. Several image attacks, e.g., JPEG compression, filtering, noise addition, cropping, sharpening, and bit-plane removal, were examined on watermarked images, and the results of our proposed method outstripped existing methods, especially in terms of the bit correction ratio (100%), which is a measure of bit restoration. The results were also highly satisfactory in terms of the quality of the reconstructed image, which demonstrated high imperceptibility in terms of peak signal-to-noise ratio (PSNR ≥ 40 dB) and structural similarity (SSIM ≥ 0.9) under different image attacks.
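
The zig-zag step that gathers 2-D DCT coefficients into a single vector (anti-diagonal traversal with alternating direction, low frequencies first, as in JPEG) can be written as:

```python
import numpy as np

def zigzag(block):
    """Flatten a square block in JPEG zig-zag order: walk the anti-diagonals
    (constant i + j), alternating direction, so low frequencies come first."""
    n = block.shape[0]
    order = sorted(
        ((i, j) for i in range(n) for j in range(n)),
        # Even diagonals run top-right to bottom-left (ascending j is reversed
        # below for odd diagonals to alternate direction).
        key=lambda p: (p[0] + p[1], -p[1] if (p[0] + p[1]) % 2 else p[1]),
    )
    return np.array([block[i, j] for i, j in order])
```

After this flattening, the significant (low-frequency) coefficients cluster at the front of the vector, which is what makes the embedding-zone selection on the coefficient vector meaningful.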


2019 ◽  
Vol 829 ◽  
pp. 252-257
Author(s):  
Azhari ◽  
Yohanes Hutasoit ◽  
Freddy Haryanto

CBCT is a modern technology for producing radiographic images in dentistry. Excellent image quality is very important for clinicians interpreting the image, so that the resulting diagnosis becomes more accurate and appropriate, thus minimizing working time. This research aimed to assess image quality using a blank acrylic polymethylmethacrylate (PMMA, (C5H8O2)n) phantom with a density of 1.185 g/cm3, evaluating the homogeneity and uniformity of the image produced. The acrylic phantom was supported by a tripod and laid on the chin rest of the CBCT device; the phantom was then fixed, with its edge touching the bite block. X-ray exposures of the acrylic phantom were made at tube voltages from 80 to 90 kV in steps of 5 kV and tube currents of 3, 5, and 7 mA. The exposure time was kept constant at 25 seconds. Samples were taken from the CBCT acrylic images, and five ROIs (Regions of Interest) were chosen for analysis. The ROIs were analyzed using the ImageJ® software to determine the influence of kVp and mAs on image uniformity, noise, and SNR. The lowest kVp and mAs yielded uniformity, homogeneity, and signal to noise ratio values of 11.22, 40.35, and 5.96 respectively, while the highest kVp and mAs yielded 16.96, 26.20, and 5.95 respectively. There were significant differences in image uniformity and homogeneity between the lowest and highest kVp and mAs settings, as analyzed with ANOVA followed by a Student's t post-hoc test with α = 0.05. However, there was no significant difference in SNR. The use of higher kVp and mAs improved image homogeneity and uniformity compared to lower kVp and mAs.
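
The reported ROI statistics follow standard definitions: within each region, noise is the standard deviation of the pixel values and SNR is the mean divided by that standard deviation. A minimal sketch (the uniformity formula shown is one common definition; the paper's exact formula is not given):

```python
import numpy as np

def roi_stats(img, r0, r1, c0, c1):
    """Mean signal and noise (standard deviation) inside a rectangular ROI."""
    roi = img[r0:r1, c0:c1].astype(float)
    return roi.mean(), roi.std()

def roi_snr(mean_signal, noise):
    """SNR of a region: mean pixel value over its standard deviation."""
    return mean_signal / noise

def uniformity(center_mean, edge_mean):
    """Percent difference between central and peripheral ROI means
    (one common definition; the paper's exact formula may differ)."""
    return 100.0 * abs(center_mean - edge_mean) / center_mean
```

These are the same quantities ImageJ's Measure tool reports per ROI, which is how the study's five-ROI analysis would be reproduced.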


1988 ◽  
Vol 132 ◽  
pp. 35-38
Author(s):  
Dennis C. Ebbets ◽  
Sara R. Heap ◽  
Don J. Lindler

The G-HRS is one of four axial scientific instruments that will fly aboard the Hubble Space Telescope (ref 1,2). It will produce spectroscopic observations in the 1050 Å ≤ λ ≤ 3300 Å region with greater spectral, spatial, and temporal resolution than has been possible with previous space-based instruments. Five first-order diffraction gratings and one echelle provide three modes of spectroscopic operation with resolving powers of R = λ/Δλ = 2000, 20000, and 90000. Two magnetically focused, pulse-counting Digicon detectors, which differ only in the nature of their photocathodes, produce data whose photometric quality is usually determined by statistical noise in the signal (ref 3). Under ideal circumstances the signal to noise ratio increases as the square root of the exposure time. For some observations, detector dark count, instrumental scattered light, or granularity in the pixel-to-pixel sensitivity will cause additional noise. The signal to noise ratio of the net spectrum will then depend on several parameters and will increase more slowly with exposure time. We have analyzed data from the ground-based calibration programs and have developed a theoretical model of HRS performance (ref 4). Our results allow observing and data reduction strategies to be optimized when factors other than photon statistics influence the photometric quality of the data.
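
The noise behavior described above can be made concrete with a simple Poisson counting model: with a source rate R (counts/s), a dark rate D, and exposure time t, the signal is Rt and the noise is sqrt((R + D)t), so SNR = Rt / sqrt((R + D)t) grows as sqrt(t) only when dark counts (and, in the real instrument, scattered light and granularity) are negligible. A small sketch of this model, not the authors' full HRS performance model:

```python
import math

def snr_counting(rate, dark_rate, t):
    """SNR for Poisson counting: signal rate*t, noise sqrt((rate + dark_rate)*t)."""
    signal = rate * t
    noise = math.sqrt((rate + dark_rate) * t)
    return signal / noise
```

With no dark counts, quadrupling the exposure time doubles the SNR; a large dark rate depresses the SNR at every exposure time, which is why the net-spectrum SNR grows more slowly than sqrt(t) in practice.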

