Design of various Image Compression Methods in Wireless Sensor Networks

The processing capacity and power of nodes in a Wireless Sensor Network (WSN) are restricted. When image compression algorithms are applied in a WSN, image quality can degrade and image content may change after decoding. This paper compares several image compression methods based on Restricted Boltzmann Machines (RBM), autoencoders, Non-negative Matrix Factorization (NMF), Least Squares Non-negative Matrix Factorization (LSNMF), and Projective Non-negative Matrix Factorization (PNMF). The WSN uses the Message Queue Telemetry Transport (MQTT) protocol and was built from three Raspberry Pis acting as publisher, broker, and subscriber: the publisher triggers a camera, captures and compresses images, and sends them to a second Raspberry Pi acting as the MQTT broker. The PSNR values of these compression methods were analyzed and compared against each other on images from the MNIST dataset. Along with the simulation results, all of these compression methods were implemented in hardware: the Raspberry Pi, a single-board computer with built-in Wi-Fi, was used to establish the WSN, and the MQTT protocol, which offers fast and reliable transmission, was used to transmit the compressed images across it.
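As a rough illustration of the publisher node's pipeline, the sketch below compresses a grayscale image with NMF and publishes the factors over MQTT in Python. The broker address, topic name, and factorization rank are illustrative assumptions, not values from the paper, and the paho-mqtt 1.x client signature is assumed.

```python
# Minimal sketch of the publisher node: compress an image with NMF and
# publish the factors over MQTT. Broker address, topic, and rank are
# illustrative assumptions, not values from the paper.
import numpy as np
import paho.mqtt.client as mqtt  # assumes paho-mqtt < 2.0 Client() signature
from sklearn.decomposition import NMF

BROKER = "192.168.1.10"   # hypothetical address of the broker Pi
TOPIC = "wsn/images"      # hypothetical topic name

def compress_nmf(image, rank=16):
    """Factorize a non-negative grayscale image (2-D array) as W @ H."""
    model = NMF(n_components=rank, init="nndsvda", max_iter=400)
    W = model.fit_transform(image)          # (rows x rank) weights
    H = model.components_                   # (rank x cols) basis rows
    return W.astype(np.float32), H.astype(np.float32)

def publish(image):
    W, H = compress_nmf(image)
    payload = W.tobytes() + H.tobytes()     # subscriber reconstructs W @ H
    client = mqtt.Client()
    client.connect(BROKER, 1883)
    client.publish(TOPIC, payload, qos=1)   # QoS 1: at-least-once delivery
    client.disconnect()

# Example: a 28x28 MNIST-sized image with values in [0, 1]
publish(np.random.rand(28, 28).astype(np.float32))
```

The subscriber would reverse the process by reshaping the received bytes into W and H and computing W @ H, then scoring reconstruction quality with PSNR.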

Author(s):  
Cathlyn Y. Wen ◽  
Robert J. Beaton

Image compression reduces the amount of data in digital images and, therefore, allows efficient storage, processing, and transmission of pictorial information. However, compression algorithms can degrade image quality by introducing artifacts, which may be unacceptable for users' tasks. This work examined the subjective effects of JPEG and wavelet compression algorithms on a series of medical images. Six digitized chest images were processed by each algorithm at various compression levels. Twelve radiologists rated the perceived image quality of the compressed images relative to the corresponding uncompressed images, as well as the acceptability of the compressed images for diagnostic purposes. The results indicate that subjective image quality and acceptability decreased with increasing compression levels; however, all images remained acceptable for diagnostic purposes. At high compression ratios, JPEG-compressed images were judged less acceptable for diagnostic purposes than the wavelet-compressed images. These results contribute to emerging system design guidelines for digital imaging workstations.
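The compression stage of such a study is simple to sketch; the snippet below saves one image at several JPEG quality levels and records the resulting compression ratios. The file name and quality levels are illustrative, and the wavelet codec is omitted.

```python
# Sketch of the compression step: save one grayscale image at several
# JPEG quality levels and record the compression ratios achieved.
import io
from PIL import Image

image = Image.open("chest.png").convert("L")  # hypothetical source image
raw_bytes = image.width * image.height        # 8-bit grayscale baseline

for quality in (90, 70, 50, 30, 10):          # illustrative quality levels
    buf = io.BytesIO()
    image.save(buf, format="JPEG", quality=quality)
    ratio = raw_bytes / buf.tell()
    print(f"quality={quality:3d}  compression ratio={ratio:5.1f}:1")
```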


2017 ◽  
Vol 71 (12) ◽  
pp. 2681-2691 ◽  
Author(s):  
H. Georg Schulze ◽  
Stanislav O. Konorov ◽  
James M. Piret ◽  
Michael W. Blades ◽  
Robin F. B. Turner

Mammalian cells contain various macromolecules that can be investigated non-invasively with Raman spectroscopy. The particular mixture of major macromolecules present in a cell being probed is reflected in the measured Raman spectra. Determining macromolecular identities and estimating their concentrations from these mixture Raman spectra can distinguish cell types and otherwise enable biological research. However, applying canonical multivariate methods, such as principal component analysis (PCA), to perform spectral unmixing yields mathematical solutions that can be difficult to interpret. Non-negative matrix factorization (NNMF) improves the interpretability of unmixed macromolecular components, but can be difficult to apply because ambiguities produced by overlapping Raman bands permit multiple solutions. Furthermore, theoretically sound methods can be difficult to implement in practice. Here we examined the effects of a number of empirical approaches on the quality of NNMF results. These approaches were evaluated on simulated mammalian-cell Raman hyperspectra, and the results were used to develop an enhanced procedure for implementing NNMF. We demonstrated the utility of this procedure by recovering the spectra of insulin and glucagon from a Raman hyperspectral data set measured on human islet cells; the results were superior to those obtained from PCA of the same data.
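For readers unfamiliar with NNMF unmixing, the sketch below applies plain scikit-learn NMF to simulated two-component mixture spectra. It illustrates only the basic factorization, not the authors' enhanced procedure; band positions and concentrations are synthetic.

```python
# Basic NNMF spectral unmixing on simulated mixture spectra
# (plain sklearn NMF, not the authors' enhanced procedure).
import numpy as np
from sklearn.decomposition import NMF

rng = np.random.default_rng(0)
wavenumbers = np.linspace(600, 1800, 400)

def band(center, width):
    """Gaussian band profile over the wavenumber axis."""
    return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

# Two synthetic "pure component" spectra built from Gaussian bands
pure = np.vstack([band(1004, 8) + 0.6 * band(1450, 15),
                  0.8 * band(1655, 20) + band(1240, 18)])

# 50 mixture spectra with random non-negative concentrations plus noise
conc = rng.random((50, 2))
mixtures = conc @ pure + 0.01 * rng.random((50, 400))

model = NMF(n_components=2, init="nndsvda", max_iter=1000)
est_conc = model.fit_transform(mixtures)     # estimated concentrations
est_spectra = model.components_              # estimated component spectra
```

Because overlapping bands permit multiple valid factorizations, the recovered spectra are defined only up to scaling and ordering, which is exactly the ambiguity the abstract describes.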


2020 ◽  
Vol 12 (13) ◽  
pp. 2072
Author(s):  
Mireille Guillaume ◽  
Audrey Minghelli ◽  
Yannick Deville ◽  
Malik Chami ◽  
Louis Juste ◽  
...  

Monitoring of coastal areas by remote sensing is an important issue. The value of using an unmixing method to determine the seabed composition from hyperspectral aerial images of coastal areas is investigated. Unmixing provides both seabed abundances and endmember reflectances. A sub-surface mixing model is presented, based on a recently proposed oceanic radiative transfer model that accounts for seabed adjacency effects in the water column. Two original non-negative matrix factorization (NMF)-based unmixing algorithms, referred to as WADJUM (Water ADJacency UnMixing) and WUM (Water UnMixing, no adjacency effects), are developed, assuming the water-column bio-optical properties are known. Simulations show that the WADJUM algorithm achieves performance close to that of NMF-based unmixing of the seabed without any water column, up to 10 m depth. WUM performance is lower and decreases with depth. The robustness of the algorithms to erroneous information about the water-column bio-optical properties is evaluated. The results show that the abundance estimation is more reliable using the WADJUM approach. WADJUM is applied to real data acquired along the French coast; the derived abundance maps of the benthic habitats are discussed and compared to the maps obtained using a fixed spectral library and a least-squares (LS) estimation of the seabed mixing coefficients. The results show the relevance of the WADJUM algorithm for the local analysis of benthic habitats.
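The abstract does not reproduce the radiative transfer model, so the sketch below substitutes a much simpler two-way exponential attenuation as a stand-in to illustrate the WUM-style idea: invert the known water-column attenuation, then unmix with NMF. The attenuation coefficients, depth, and endmembers are all synthetic assumptions.

```python
# Highly simplified WUM-style unmixing sketch: seabed reflectance is
# attenuated exponentially through the water column (a stand-in for the
# paper's radiative-transfer model), the attenuation is inverted using
# "known" bio-optical properties, and NMF unmixes the result.
import numpy as np
from sklearn.decomposition import NMF

n_bands, n_pixels, depth = 60, 500, 5.0          # depth in meters (assumed)
rng = np.random.default_rng(1)
k = np.linspace(0.05, 0.4, n_bands)              # assumed attenuation (1/m)

seabed_endmembers = rng.random((3, n_bands))     # synthetic sand/algae/etc.
abundances = rng.dirichlet(np.ones(3), n_pixels) # sum-to-one mixtures
seabed = abundances @ seabed_endmembers

# Forward model: two-way attenuation through the water column
observed = seabed * np.exp(-2 * k * depth)

# Inversion with known bio-optical properties, then NMF unmixing
corrected = observed * np.exp(2 * k * depth)
model = NMF(n_components=3, init="nndsvda", max_iter=500)
est_abund = model.fit_transform(corrected)       # abundance estimates
est_endmembers = model.components_               # endmember reflectances
```

WADJUM would additionally model adjacency effects in the water column before unmixing, which is what gives it the depth robustness reported in the abstract.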


2011 ◽  
Vol 11 (03) ◽  
pp. 355-375 ◽  
Author(s):  
Mohammad Reza Bonyadi ◽  
Mohsen Ebrahimi Moghaddam

Most image compression methods are based on frequency-domain transforms followed by quantization and rounding to discard some coefficients. The quality of compressed images clearly depends on how these coefficients are discarded, and finding a good balance between image quality and compression ratio is an important issue. In this paper, a new lossy compression method called linear mapping image compression (LMIC) is proposed to compress images with high quality while satisfying a user-specified compression ratio. The method is based on the discrete cosine transform (DCT) and an adaptive zonal mask. It divides the image into equal-size blocks and determines the structure of the zonal mask for each block independently by considering its gray-level distance (GLD). The experimental results showed that the presented method achieved higher peak signal-to-noise ratio (PSNR) than some related works at a specified compression ratio. In addition, the results were comparable with JPEG2000.
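A minimal sketch of the block-DCT-plus-zonal-mask idea follows. The rule mapping a block's gray-level distance (GLD) to a mask size is a made-up placeholder, since LMIC's actual mask-construction scheme is defined only in the full paper.

```python
# Block-DCT compression with an adaptive zonal mask. The GLD-to-mask-size
# rule below is an illustrative assumption, not LMIC's actual scheme.
import numpy as np
from scipy.fft import dctn, idctn

def compress_block(block, keep):
    """Keep only the top-left keep x keep (low-frequency) DCT coefficients."""
    coeffs = dctn(block, norm="ortho")
    mask = np.zeros_like(coeffs)
    mask[:keep, :keep] = 1.0                 # square zonal mask for simplicity
    return idctn(coeffs * mask, norm="ortho")

def lmic_like(image, block=8):
    out = np.zeros_like(image, dtype=float)
    for r in range(0, image.shape[0], block):
        for c in range(0, image.shape[1], block):
            b = image[r:r+block, c:c+block].astype(float)
            gld = b.max() - b.min()          # gray-level distance of the block
            keep = 2 if gld < 16 else 4 if gld < 64 else 6  # assumed rule
            out[r:r+block, c:c+block] = compress_block(b, keep)
    return out
```

The intuition is that flat blocks (small GLD) survive aggressive masking, while busy blocks (large GLD) need more coefficients to keep PSNR high at a given compression ratio.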


Author(s):  
Magy El Banhawy ◽  
Walaa Saber ◽  
Fathy Amer

A fundamental step in digital image compression is the conversion process, whose purpose is to capture the structure of an image and convert the digital image to a grayscale representation on which the compression encoding can operate. This article investigates compression algorithms for images with artistic effects. A key challenge in image compression is how to effectively preserve the original quality of images. Image compression condenses images by reducing their redundant data so that they can be stored and transmitted cost-effectively. The common techniques include the discrete cosine transform (DCT), fast Fourier transform (FFT), and shifted FFT (SFFT). Experimental results report the compression ratios between the original RGB images and the grayscale images, along with a comparison of the techniques. The SFFT technique proved superior at improving shape comprehension for images with graphic effects.
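As a generic illustration of transform-domain compression of a grayscale image (not the paper's SFFT variant), the sketch below zeroes all but the largest-magnitude FFT coefficients and reports the effective coefficient-level compression ratio; the keep fraction is an arbitrary assumption.

```python
# FFT-based lossy compression sketch: zero all but the largest-magnitude
# coefficients of a grayscale image and measure the effective ratio.
import numpy as np

def fft_compress(gray, keep_fraction=0.05):  # keep fraction is illustrative
    coeffs = np.fft.fft2(gray)
    threshold = np.quantile(np.abs(coeffs), 1.0 - keep_fraction)
    kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0)
    recon = np.real(np.fft.ifft2(kept))
    ratio = coeffs.size / max(np.count_nonzero(kept), 1)
    return recon, ratio

gray = np.random.rand(256, 256)              # stand-in for a grayscale image
recon, ratio = fft_compress(gray)
print(f"kept 1 of every {ratio:.0f} coefficients")
```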

