WATERMARKING ON COMPRESSED DATA INTEGRATING CONVOLUTION CODING IN INTEGER WAVELETS

Author(s):  
SANTI P. MAITY ◽  
CLAUDE DELPHA ◽  
RÉMY BOYER

This paper explores the scope of integer wavelets in watermarking of compressed images, with convolution coding used as channel coding. The convolution code is applied to the compressed host data rather than directly to the watermark signal, as is widely done for robustness improvement in conventional systems. This yields a two-fold advantage: flexibility in watermarking through the creation of redundancy in the compressed data, and protection of the watermark information against additive white Gaussian noise (AWGN) attacks. An integer wavelet transform is used to decompose the encoded compressed data, which allows lossless processing and, owing to its mathematical structure, creates correlation among the host samples. The watermark information is then embedded using dither modulation (DM)-based quantization index modulation (QIM). Relative gains in imperceptibility and robustness are reported for direct watermark embedding on the entropy-decoded host, for repetition coding, for convolution coding, and finally for the combined use of channel codes and integer wavelets. Simulation results show an improvement of 6.24 dB (9.50 dB) in document-to-watermark ratio (DWR) at a watermark power of 12.73 dB (16.81 dB), and a 15 dB gain in tolerable noise power for watermark decoding at a bit error rate (BER) of 10^-2, over direct watermarking on entropy-decoded data.
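For illustration, the DM-QIM embedding and decoding steps can be sketched as follows; the step size and dither values here are arbitrary choices, not the parameters used in the paper.

```python
# Minimal dither-modulation QIM sketch (illustrative parameters, not the
# paper's actual scheme or settings).
def qim_embed(x, bit, delta=8.0, dither=1.0):
    """Quantize sample x onto the lattice selected by `bit`."""
    d = dither + (delta / 2.0) * bit          # per-bit dither offset
    return delta * round((x - d) / delta) + d

def qim_decode(y, delta=8.0, dither=1.0):
    """Pick the bit whose quantization lattice lies nearest to y."""
    errs = []
    for bit in (0, 1):
        d = dither + (delta / 2.0) * bit
        q = delta * round((y - d) / delta) + d
        errs.append(abs(y - q))
    return 0 if errs[0] <= errs[1] else 1
```

Decoding tolerates additive noise up to about delta/4 per sample, which is why the quantization step trades imperceptibility (smaller delta) against robustness (larger delta).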

2010 ◽  
Vol 56 (4) ◽  
pp. 351-355
Author(s):  
Marcin Rodziewicz

Joint Source-Channel Coding in Dictionary Methods of Lossless Data Compression

Limitations on the memory and resources of communication systems require powerful data compression methods. Decompression of a compressed data stream is very sensitive to errors that arise during transmission over noisy channels, so error-correction coding is also required. One solution to this problem is the application of joint source and channel coding. This paper describes methods of joint source-channel coding based on the popular data compression algorithms LZ'77 and LZSS. These methods introduce some error resiliency into the compressed data stream without degrading the compression ratio. We analyze joint source and channel coding algorithms based on these compression methods and present novel extensions of them. We also present simulation results showing the usefulness and achievable quality of the analyzed algorithms.
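For context, the dictionary methods these schemes build on emit back-references into previously seen data; a minimal greedy LZ77-style tokenizer can be sketched as follows (illustrative only; the paper's LZ'77/LZSS variants add error resiliency on top of such a scheme).

```python
def lz77_compress(data, window=16):
    """Greedy LZ77: emit (offset, length, next_char) triples."""
    i, out = 0, []
    while i < len(data):
        best_off, best_len = 0, 0
        for j in range(max(0, i - window), i):
            k = 0
            # Matches may overlap the current position (self-referencing copy).
            while i + k < len(data) - 1 and data[j + k] == data[i + k]:
                k += 1
            if k > best_len:
                best_off, best_len = i - j, k
        out.append((best_off, best_len, data[i + best_len]))
        i += best_len + 1
    return out

def lz77_decompress(tokens):
    buf = []
    for off, length, ch in tokens:
        for _ in range(length):
            buf.append(buf[-off])   # copy from the sliding window
        buf.append(ch)
    return "".join(buf)
```

A single corrupted offset propagates through every later copy that references it, which is exactly the error sensitivity the joint source-channel designs aim to contain.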


Sensors ◽  
2019 ◽  
Vol 19 (13) ◽  
pp. 2932 ◽  
Author(s):  
Jose Balsa ◽  
Tomás Domínguez-Bolaño ◽  
Óscar Fresnedo ◽  
José A. García-Naya ◽  
Luis Castedo

An analog joint source-channel coding (JSCC) system designed for the transmission of still images is proposed and its performance is compared to that of two digital alternatives that differ in the source encoding operation: Joint Photographic Experts Group (JPEG) and JPEG without entropy coding (JPEGw/oEC), respectively, both relying on an optimized channel encoder-modulator tandem. Apart from a visual comparison, the figures of merit considered in the assessment are the structural similarity (SSIM) index and the time required to transmit an image through additive white Gaussian noise (AWGN) and Rayleigh channels. This work shows that the proposed analog system exhibits a performance similar to that of the digital scheme based on JPEG compression, with noticeably less visual degradation to the human eye, a lower computational complexity, and a negligible delay. These results confirm the suitability of analog JSCC for the transmission of still images in scenarios with severe constraints on power consumption and computational capabilities, and for real-time applications. For these reasons, the proposed system is a good candidate for surveillance systems, resource-constrained devices, Internet of Things (IoT) applications, etc.
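In spirit, analog JSCC maps source samples directly to channel symbols and estimates them at the receiver without quantization or entropy coding. A toy real-valued sketch over AWGN with a scalar linear MMSE receiver (all names and parameters here are illustrative; the actual system uses optimized analog mappings):

```python
import random, math

def analog_jscc_awgn(samples, snr_db, seed=0):
    """Send samples uncoded over AWGN, then apply a scalar linear MMSE
    estimate at the receiver (toy stand-in for an analog JSCC chain)."""
    rng = random.Random(seed)
    power = sum(s * s for s in samples) / len(samples)
    noise_var = power / (10 ** (snr_db / 10.0))
    gain = power / (power + noise_var)        # linear MMSE shrinkage factor
    return [gain * (s + rng.gauss(0.0, math.sqrt(noise_var)))
            for s in samples]

tx = [0.2, -0.5, 0.9, 0.1]
rx = analog_jscc_awgn(tx, snr_db=20.0)
mse = sum((a - b) ** 2 for a, b in zip(tx, rx)) / len(tx)
```

Because there is no quantization, entropy coding, or block decoding, the chain has essentially zero algorithmic delay, which is the property the abstract highlights.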


2012 ◽  
Vol 2 (2) ◽  
pp. 53-58
Author(s):  
Shaikh Enayet Ullah ◽  
Md. Golam Rashed ◽  
Most. Farjana Sharmin

In this paper, we present a comprehensive BER simulation study of a quasi-orthogonal space-time block coded (QO-STBC) multiple-input single-output (MISO) system. The system under investigation incorporates four digital modulations (QPSK, QAM, 16PSK and 16QAM) over additive white Gaussian noise (AWGN) and Rayleigh fading channels, with three transmit antennas and one receive antenna. In its FEC channel coding section, three schemes are used: cyclic, Reed-Solomon, and ½-rate convolutional encoding. Using low-complexity ML-decoding-based channel estimation and RSA cryptographic encoding/decoding algorithms, simulation tests on encrypted text message transmission show that the system with QAM modulation and ½-rate convolutional encoding is highly effective at combating the inherent interference of Rayleigh fading and additive white Gaussian noise (AWGN) channels. The study also shows that the retrieval performance of the system degrades as the signal-to-noise ratio (SNR) is lowered and as the modulation order increases.
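A Monte-Carlo BER measurement of the kind used in such studies can be sketched for uncoded Gray-mapped QPSK over AWGN, where each quadrature component carries one antipodal bit (a simplified stand-in for the full QO-STBC chain; parameters are illustrative):

```python
import random, math

def qpsk_ber_awgn(ebn0_db, n_bits=20000, seed=1):
    """Monte-Carlo BER of Gray-mapped QPSK over AWGN.
    Each quadrature component carries one antipodal (+/-1) bit with Eb = 1."""
    rng = random.Random(seed)
    ebn0 = 10 ** (ebn0_db / 10.0)
    sigma = math.sqrt(1.0 / (2.0 * ebn0))   # per-dimension noise std (N0/2)
    errors = 0
    for _ in range(n_bits):
        bit = rng.randrange(2)
        x = 1.0 if bit else -1.0
        y = x + rng.gauss(0.0, sigma)       # AWGN channel
        errors += (y > 0) != bool(bit)      # hard-decision detection
    return errors / n_bits
```

The measured curve should track the theoretical Q(sqrt(2 Eb/N0)) for QPSK, about 7.9e-2 at 0 dB; fading channels and channel coding shift this curve in the ways the abstract reports.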


2012 ◽  
Vol 532-533 ◽  
pp. 1135-1139
Author(s):  
Dan Hu

Low-density parity-check (LDPC) codes are a class of channel codes based on matrix encoding and iterative decoding. They combine low decoding complexity with capacity-approaching performance: the best designed LDPC codes achieve performance within only 0.0045 dB of the Shannon limit. With continued study, the encoding complexity of LDPC codes is no longer an obstacle to their application. Today, LDPC codes are widely used in many practical systems, such as wireless communication, deep-space communication, optical-fiber communication, and media storage systems. This paper first introduces the development of channel coding, and then the basic principles and concepts of LDPC codes. The following parts discuss several techniques for LDPC codes, including construction methods for the low-density parity-check matrix, iterative decoding algorithms, and performance analysis methods. In addition, we offer our observations and improved algorithms.
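For illustration, the iterative-decoding idea can be shown with Gallager's hard-decision bit-flipping algorithm on a toy parity-check matrix; here the (7,4) Hamming code stands in for a large sparse LDPC matrix.

```python
# Toy parity-check matrix: the (7,4) Hamming code (a stand-in; real LDPC
# matrices are large and sparse, and practical decoders use soft decisions).
H = [
    [1, 1, 0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0, 1, 0],
    [0, 1, 1, 1, 0, 0, 1],
]

def syndrome(H, c):
    """One parity bit per check row; all-zero means every check passes."""
    return [sum(h * b for h, b in zip(row, c)) % 2 for row in H]

def bit_flip_decode(H, r, max_iters=10):
    """Gallager-style bit flipping: repeatedly flip the bit involved in the
    most unsatisfied parity checks until the syndrome is zero."""
    c = list(r)
    for _ in range(max_iters):
        s = syndrome(H, c)
        if not any(s):
            break
        counts = [sum(s[i] for i in range(len(H)) if H[i][j])
                  for j in range(len(c))]
        c[counts.index(max(counts))] ^= 1
    return c
```

On well-designed sparse matrices this simple rule corrects most errors in a handful of iterations; the belief-propagation decoders discussed in the paper replace the hard flip with soft message passing.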


2011 ◽  
Vol 480-481 ◽  
pp. 775-780
Author(s):  
Ting Jun Li

Robust detection in the presence of a partly unknown useful signal or interference is a widespread task in many signal processing applications. In this paper, we consider the robustness of a matched subspace detector in additive white Gaussian noise, under the condition that the noise power is known under the null hypothesis and unknown under the alternative hypothesis, where the useful signal triggers a variation of the noise power; we also consider the mismatch between the signal subspace and the receiver's matched filter. The test statistic for this detection problem is derived from the generalized likelihood ratio test, and its distribution is analyzed. Computer simulations validate the performance analysis and the robustness of the algorithm at low SNR, in comparison with other matched subspace detectors.
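For a rank-one signal subspace, the matched subspace detector reduces to an energy test on the matched-filter output; the following is an illustrative simplification of the paper's setting (known noise power under H0, chi-square null distribution), not its exact statistic:

```python
def matched_filter_glrt(y, s, sigma0):
    """GLRT statistic for a rank-one signal subspace in AWGN with known
    noise power sigma0 under H0: T = (s^T y)^2 / (||s||^2 * sigma0^2).
    Under H0, T is chi-square distributed with one degree of freedom."""
    sy = sum(a * b for a, b in zip(s, y))     # matched-filter correlation
    ss = sum(a * a for a in s)                # subspace vector energy
    return (sy * sy) / (ss * sigma0 * sigma0)

s = [1.0, 1.0, 1.0, 1.0]                      # assumed signal direction
noise_only = [0.1, -0.2, 0.05, 0.1]
with_signal = [a + 2.0 * b for a, b in zip(noise_only, s)]
```

The threshold is then set from the chi-square null distribution for a desired false-alarm rate; the paper's contribution is analyzing how this test degrades when sigma0 changes under H1 or when s is mismatched.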


Author(s):  
Filbert O. Ombongi ◽  
Philip L. Kibet ◽  
Stephen Musyoki

This paper analyzes the performance of a Wideband Code Division Multiple Access (WCDMA) system model at data rates of 384 kbps and 2 Mbps over an additive white Gaussian noise (AWGN) channel. The signal was modulated by quadrature phase shift keying (QPSK) and quadrature amplitude modulation (QAM) with modulation order M = 16. The performance of the system was enhanced by implementing a convolutional coding scheme. This study is important because it forms a basis for extending the performance analysis to Long Term Evolution (LTE) networks, which have data rates from 1 Mbps up to 100 Mbps. The performance of WCDMA at these data rates improved when convolutional coding was implemented. Since the Shannon capacity formula depends on the BER of a system, this improvement means additional channel capacity, which can accommodate more users. The results further show that the choice of modulation technique, given the required throughput, affects the BER performance of the system. Therefore, there must be a trade-off between the required throughput, the modulation format, and the pulse-shaping filter parameters.
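A rate-1/2 convolutional encoder of the classic constraint-length-3, (7,5)-octal form can be sketched as follows; the paper does not state its generator polynomials, so this particular code is an assumption for illustration.

```python
# Rate-1/2 convolutional encoder, constraint length 3, generators (7, 5) octal.
# This specific code is an illustrative assumption, not the paper's encoder.
G1, G2 = 0b111, 0b101

def conv_encode(bits):
    """Shift each input bit into a 3-bit register; output one parity bit per
    generator polynomial, doubling the bit count (hence rate 1/2)."""
    state = 0
    out = []
    for b in bits:
        state = ((state << 1) | b) & 0b111
        out.append(bin(state & G1).count("1") % 2)   # parity of taps for G1
        out.append(bin(state & G2).count("1") % 2)   # parity of taps for G2
    return out
```

The redundancy these parity streams add is what a Viterbi decoder exploits at the receiver, which is the mechanism behind the BER improvement reported in the abstract.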


Author(s):  
Nejwa El Maammar ◽  
Seddik Bri ◽  
Jaouad Foshi

This paper presents the bit error rate performance of low-density parity-check (LDPC) codes concatenated with convolutional channel coding in an orthogonal frequency-division multiplexing (OFDM) system using space-time block coding (STBC). The OFDM wireless communication system incorporates a 3/4-rate convolutional encoder under various digital modulations (BPSK, QPSK and QAM) over additive white Gaussian noise (AWGN) and fading (Rayleigh and Rician) channels. At the receiving section of the simulated system, maximum ratio combining (MRC) channel equalization is implemented to extract the transmitted symbols without enhancing the noise power.
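The MRC step can be sketched for real-valued branch gains: each diversity branch is weighted by its own channel gain so the branch SNRs add coherently (in the complex case the conjugate of the gain is used; this toy version is illustrative only):

```python
def mrc_combine(received, gains):
    """Maximum ratio combining for real-valued gains: weight each diversity
    branch by its channel gain, then normalize by the total gain energy."""
    num = sum(h * r for h, r in zip(gains, received))
    den = sum(h * h for h in gains)
    return num / den   # unbiased estimate of the transmitted symbol

# Two branches carrying the same symbol x = 1.0 with different gains and
# additive noise: received = [0.5 * 1.0 + 0.3, 2.0 * 1.0 - 0.1].
est = mrc_combine([0.8, 1.9], [0.5, 2.0])
```

Because the weights are proportional to the gains, strong branches dominate the estimate and weak (noisier) branches are de-emphasized, which is why MRC extracts the symbol "without enhancing noise power".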


Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 41
Author(s):  
Sofia Figueiredo ◽  
Nuno Souto ◽  
Francisco Cercas

It is envisioned that healthcare systems of the future will be revolutionized with the development and integration of body-centric networks into future generations of communication systems, giving rise to the so-called “Internet of Bio-nano things”. Molecular communications (MC) emerge as the most promising way of transmitting information for in-body communications. One of the biggest challenges is how to minimize the effects of environmental noise and reduce the inter-symbol interference (ISI) which in an MC via diffusion scenario can be very high. To address this problem, channel coding is one of the most promising techniques. In this paper, we study the effects of different channel codes integrated into MC systems. We provide a study of Tomlinson, Cercas, Hughes (TCH) codes as a new attractive approach for the MC environment due to the codeword properties which enable simplified detection. Simulation results show that TCH codes are more effective for these scenarios when compared to other existing alternatives, without introducing too much complexity or processing power into the system. Furthermore, an experimental proof-of-concept macroscale test bed is described, which uses pH as the information carrier, and which demonstrates that the proposed TCH codes can improve the reliability in this type of communication channel.


Author(s):  
Виктория Владимировна Науменко ◽  
Алексей Сергеевич Рубель ◽  
Александр Владимирович Тоцкий ◽  
Валерий Борисович Шаронов

In a number of practical applications of digital signal processing, the process under study may contain correlated spectral components or phase coupling. Extracting these phase relationships provides very important and useful information for correctly understanding, analyzing, and describing the properties of the physical phenomena generating such processes. However, this information is irretrievably lost when classical signal processing methods based on energy statistics, i.e. second-order statistics, are used. Estimating signal parameters and analyzing them with third-order correlation functions and the bispectrum reveals much more about signal properties than conventional correlation functions do. Estimating the bispectral density (third-order spectral density), in contrast to estimating the energy spectrum, makes it possible not only to describe the characteristics of the observed process correctly, but also to preserve and, if necessary, extract the phase characteristics of its components. Therefore, in a number of applied tasks in telecommunications, as well as in image processing, bispectral analysis often serves as an effective signal processing tool. The aim of the article is to study the feasibility of using a recursive algorithm to restore a waveform or an image from its bispectrum in a noisy environment. The following types of signals were selected for the study: rectangular, triangular, and Gaussian impulses, and a signal triplet. They were distorted with additive white Gaussian noise; the test image was distorted by additive white Gaussian, impulse, Poisson, and multiplicative noise. Analysis of the signal recovery results indicates that as the noise power increases, the quality of the recovery decreases. A random signal shift does not affect the shape of the recovered signal. Analysis of the image recovery results indicates successful recovery, but the algorithm introduces distortion in the form of an offset.
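The shift-invariance claimed above (a circular shift of the signal leaves the bispectrum unchanged, since the shift phases cancel) can be checked with the direct estimate B(k1, k2) = X(k1) X(k2) X*(k1 + k2); this sketch uses a naive DFT and is illustrative only, not the article's recursive recovery algorithm.

```python
import cmath

def dft(x):
    """Naive O(n^2) discrete Fourier transform (illustration only)."""
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n)
                for t in range(n)) for k in range(n)]

def bispectrum(x):
    """Direct bispectrum estimate: B(k1, k2) = X[k1] * X[k2] * conj(X[k1+k2])."""
    X = dft(x)
    n = len(x)
    return [[X[k1] * X[k2] * X[(k1 + k2) % n].conjugate()
             for k2 in range(n)] for k1 in range(n)]

# A circular shift multiplies X[k] by a phase exp(2j*pi*k*tau/n); in B(k1, k2)
# the factors from k1, k2 and the conjugate at k1+k2 cancel exactly.
x = [0.0, 1.0, 3.0, 1.0, 0.0, 0.0, 0.0, 0.0]
shifted = x[2:] + x[:2]
B1, B2 = bispectrum(x), bispectrum(shifted)
```

This cancellation is why the recovered waveform's shape is unaffected by a random shift, while recovery from the bispectrum can leave an unresolved offset, as observed for the image results.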

