Embedding Robust Gray-Level Watermark in an Image Using Discrete Cosine Transformation

Author(s):  
Chwei-Shyong Tsai ◽  
Chin-Chen Chang

Digital watermarking is an effective technique for protecting the intellectual property rights of digital images. A gray-level image carries more perceptual information than a binary one because each pixel is represented with more bits, so gray-level watermarks are commonly more robust. The watermarking scheme proposed in this chapter therefore adopts a gray-level image as the watermark and applies the discrete cosine transformation (DCT) together with a quantization method to strengthen the robustness of the watermarking system. Both the original image and the digital watermark are processed by the DCT, and a quantization table is built from them to reduce the amount of watermark information. The quantized watermark is then embedded into the middle-frequency bands of the transformed original image; thanks to the effectiveness of the quantization step, the quality of the watermarked image remains visually acceptable. Experimental results show that the embedded watermark resists image cropping, JPEG lossy compression, and destructive processing such as blurring and sharpening.
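
A minimal sketch of the embedding idea described above, assuming 8×8 blocks, an illustrative set of mid-frequency positions, and a single fixed quantization step (the chapter derives its quantization table from both images, which is not reproduced here); `MID_BAND`, `Q_STEP`, `embed_block`, and `extract_block` are hypothetical names:

```python
import numpy as np
from scipy.fft import dctn, idctn

MID_BAND = [(2, 3), (3, 2), (3, 3), (2, 4)]  # assumed mid-frequency positions
Q_STEP = 16.0                                # assumed quantization step

def embed_block(host_block, wm_values):
    """Embed one small watermark value per mid-band coefficient of an 8x8 block."""
    coeffs = dctn(host_block.astype(float), norm='ortho')
    for (u, v), w in zip(MID_BAND, wm_values):
        # snap the coefficient to the quantization grid, then offset it by the value
        coeffs[u, v] = np.round(coeffs[u, v] / Q_STEP) * Q_STEP + w
    return idctn(coeffs, norm='ortho')

def extract_block(marked_block):
    """Recover the offsets; exact as long as |w| < Q_STEP / 2."""
    coeffs = dctn(marked_block.astype(float), norm='ortho')
    return [coeffs[u, v] - np.round(coeffs[u, v] / Q_STEP) * Q_STEP
            for (u, v) in MID_BAND]
```

The recovered value is exact only while its magnitude stays below half the quantization step, which is why the step size trades robustness against embedding capacity.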

2010 ◽  
Vol 19 (02) ◽  
pp. 491-502 ◽  
Author(s):  
YONG-GANG FU ◽  
RUIMIN SHEN

In this paper, a novel image watermarking scheme based on a self-reference technique is proposed. A meaningful watermark is embedded into a gray-level image according to the relation between a constructed reference image and the original host. To be robust against JPEG compression, the reference image itself must survive JPEG compression: the original image is first transformed into the DCT domain, most of the high-frequency coefficients are discarded, and after a quantization step and the inverse DCT a robust reference is obtained. The watermark is then embedded by exploiting the relation between the original image and this reference. Watermark extraction is oblivious, i.e. it does not require the original image. Experimental results under several attacks show good robustness of the proposed scheme; in particular, under cropping and JPEG compression the watermark can be extracted with only a few errors.
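
A minimal sketch of how such a robust reference might be built, assuming a whole-image DCT, an illustrative low-frequency cut-off, and a fixed quantization step; `robust_reference`, `keep`, and `q_step` are assumed names and parameters, not the paper's exact choices:

```python
import numpy as np
from scipy.fft import dctn, idctn

def robust_reference(image, keep=32, q_step=8.0):
    """Low-pass and quantize an image in the DCT domain to obtain a JPEG-robust reference."""
    c = dctn(image.astype(float), norm='ortho')
    mask = np.zeros_like(c)
    mask[:keep, :keep] = 1.0                   # drop most high-frequency coefficients
    c = np.round(c * mask / q_step) * q_step   # quantize what remains
    return idctn(c, norm='ortho')

# The embedding could then use the relation between host and reference, e.g. the
# sign of (host - robust_reference(host)) in selected regions carries the bits.
```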


2007 ◽  
Vol 5 ◽  
pp. 305-311 ◽  
Author(s):  
B. Heyne ◽  
J. Götze

Abstract. In this paper, a computationally efficient, quality-preserving DCT architecture is presented. It is obtained by optimizing the Loeffler DCT on the basis of the Cordic algorithm. The computational complexity is reduced from 11 multiplications and 29 additions (Loeffler DCT) to 38 additions and 16 shift operations, which is similar to the complexity of the binDCT. The experimental results show that the proposed DCT algorithm not only reduces the computational complexity significantly, but also retains the good transformation quality of the Loeffler DCT. The proposed Cordic-based Loeffler DCT is therefore especially suited for low-power, high-quality CODECs in battery-powered systems.
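
For readers unfamiliar with Cordic, the sketch below shows a generic Cordic rotation in rotation mode, i.e. how a plane rotation of the kind used inside the Loeffler DCT butterflies can be computed with shift-and-add style steps only. It is not the paper's specific scaled architecture, and `cordic_rotate` is a hypothetical name:

```python
import math

def cordic_rotate(x, y, angle, iterations=16):
    """Rotate the vector (x, y) by 'angle' radians using shift-and-add style iterations."""
    angles = [math.atan(2.0 ** -i) for i in range(iterations)]
    # accumulated CORDIC gain; in hardware this constant is folded into later scaling
    gain = 1.0
    for i in range(iterations):
        gain /= math.sqrt(1.0 + 2.0 ** (-2 * i))
    z = angle
    for i in range(iterations):
        d = 1.0 if z >= 0.0 else -1.0
        # multiplying by 2^-i corresponds to a right shift in fixed-point hardware
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * gain, y * gain

# Rotating (1, 0) by 30 degrees should give roughly (0.866, 0.5).
print(cordic_rotate(1.0, 0.0, math.pi / 6))
```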


Images have become an everyday medium of communication, but modern editing software makes it possible to add or remove essential features from an image without leaving any obvious trace of the original. It is not easy for an ordinary viewer to tell whether an image is original or tampered with, which is the problem that forgery detection addresses: the image-processing task of deciding whether an image is authentic or has been manipulated. Several techniques have been proposed to detect forgeries, but the problem is not yet fully solved. In this work, the Discrete Cosine Transformation (DCT) and a quantization matrix are used to identify forged regions of an image without reducing image quality. The DCT characterizes the overlapping blocks, while the quantization matrix compresses the DCT values, giving both high compression and good reconstruction quality. A block-matching algorithm, one of the most widely used methods for detecting duplicated regions, is then applied. The proposed approach supports different image formats such as JPEG, JPG, or PNG and images of any size, whether m×n or n×n.
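
A minimal sketch of this kind of DCT-and-quantization copy-move detection, assuming 8×8 overlapping blocks, a low-frequency signature, a fixed quantization step, and a simple sorted-neighbour matching rule; the parameters and the name `find_duplicate_blocks` are illustrative assumptions rather than the exact proposed algorithm:

```python
import numpy as np
from scipy.fft import dctn

def find_duplicate_blocks(gray, block=8, q_step=16.0):
    """Flag pairs of far-apart blocks whose quantized low-frequency DCT signatures match."""
    h, w = gray.shape
    feats = []
    for y in range(h - block + 1):
        for x in range(w - block + 1):
            c = dctn(gray[y:y + block, x:x + block].astype(float), norm='ortho')
            q = np.round(c[:4, :4] / q_step).astype(int)   # low-frequency signature
            feats.append((tuple(q.ravel()), (y, x)))
    feats.sort(key=lambda f: f[0])                         # lexicographic sort
    matches = []
    for (fa, pa), (fb, pb) in zip(feats, feats[1:]):
        if fa == fb and abs(pa[0] - pb[0]) + abs(pa[1] - pb[1]) > block:
            matches.append((pa, pb))                       # identical but spatially distant
    return matches
```

Quantizing the signatures is what lets blocks that were re-saved with lossy compression still match, at the cost of occasional false positives in smooth regions.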


2021 ◽  
Vol 13 (2) ◽  
pp. 56-61
Author(s):  
Iwan Setiawan ◽  
Akbari Indra Basuki ◽  
Didi Rosiyadi

High-performance computing (HPC) is often required for image processing, especially for images with a huge number of pixels. To avoid dependence on HPC equipment, which is very expensive to provide, this work takes a software approach; both the hardware and software routes pursue the same goal of making the computation time as short as possible. The discrete cosine transformation (DCT) and singular value decomposition (SVD) are conventionally applied to the original image as a single matrix, which becomes a computational burden for very large images. To overcome this problem, the original image is partitioned into second-order (2×2) block matrices, which yields a DCT-SVD hybrid formula. "Hybrid" here means that the only parameter appearing in the formula is the intensity of the original pixels, because the DCT and SVD formulas are merged in the derivation. Results show that, with Lena as the original image, computing the singular values with the hybrid formula is almost two seconds faster than the conventional approach. Instead of investing heavily in equipment, the computational problem caused by image size can be mitigated simply by using the proposed formula.
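
The paper's derived hybrid formula is not reproduced here, but the sketch below illustrates the underlying block idea under stated assumptions: the singular values of a second-order (2×2) matrix have a closed form, so a block partition avoids calling a general-purpose SVD routine; `svd_2x2_singular_values` is a hypothetical name:

```python
import numpy as np

def svd_2x2_singular_values(block):
    """Closed-form singular values of a 2x2 block [[a, b], [c, d]]."""
    (a, b), (c, d) = block
    s1 = a * a + b * b + c * c + d * d
    s2 = np.hypot(a * a + b * b - c * c - d * d, 2.0 * (a * c + b * d))
    return np.sqrt((s1 + s2) / 2.0), np.sqrt(max(s1 - s2, 0.0) / 2.0)

# Cross-check against the general-purpose routine on one block:
blk = np.array([[52.0, 55.0], [61.0, 59.0]])
print(svd_2x2_singular_values(blk))
print(np.linalg.svd(blk, compute_uv=False))
```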


Author(s):  
Prajwalasimha S. N. ◽  
Chethan Suputhra S. ◽  
Mohan C. S.

In this article, a combined Discrete Cosine Transformation (DCT) and Successive Division based image watermarking scheme is proposed. In many spatial-domain approaches the watermark is embedded into the Least Significant Bits (LSBs) of the host image, but the LSBs are highly vulnerable to channel noise and other unwanted interference and, in some cases, to deliberate modification. Frequency-domain approaches withstand such LSB interference but typically require more execution time. The proposed technique is a frequency-domain approach that withstands LSB attacks while requiring far less execution time than other existing approaches. Performance is analyzed in terms of robustness, imperceptibility, data embedding capacity, and execution time, and the experimental results compare favorably with existing techniques.
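
A minimal sketch, under assumed parameters, of the LSB-fragility point made above: clearing the least significant bit of every pixel destroys an LSB-embedded bit, while an offset embedded in a mid-frequency DCT coefficient remains recoverable. This only illustrates the motivation; it is not the proposed successive-division scheme itself:

```python
import numpy as np
from scipy.fft import dctn, idctn

rng = np.random.default_rng(0)
host = rng.integers(0, 256, size=(8, 8)).astype(float)

# LSB embedding: force the LSB of pixel (0, 0) to 1
lsb_marked = host.copy()
lsb_marked[0, 0] = (int(lsb_marked[0, 0]) & ~1) | 1

# DCT embedding: add an offset to a mid-frequency coefficient, then return to pixels
c = dctn(host, norm='ortho')
c[3, 3] += 20.0
dct_marked = np.clip(np.round(idctn(c, norm='ortho')), 0, 255)

# "LSB attack": clear the least significant bit of every pixel
attack = lambda img: np.floor(img / 2.0) * 2.0

print(int(attack(lsb_marked)[0, 0]) & 1)           # 0 -> the embedded LSB is gone
print(dctn(attack(dct_marked), norm='ortho')[3, 3]
      - dctn(attack(host), norm='ortho')[3, 3])    # ~20 -> the DCT offset survives
```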


2008 ◽  
Vol 5 (1) ◽  
pp. 155-159
Author(s):  
Baghdad Science Journal

In this work, a fragile watermarking scheme for digital color images in the spatial domain is presented. The image is divided into blocks, and each block has its own authentication mark embedded in it, so it is possible to determine which parts of the image are authentic and which have been modified. Authentication is carried out without requiring the original image. The results show that the quality of the watermarked image remains very good and that the watermark survives some unintended modifications, such as lossless compression with familiar software like WinRAR and ZIP.
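
A minimal sketch of block-wise fragile authentication in the spatial domain, assuming grayscale 8×8 blocks and a SHA-256-based mark truncated to one bit per pixel; the block size, the hash choice, and the names `block_mark`, `embed`, and `verify` are illustrative assumptions, not the paper's exact scheme:

```python
import hashlib
import numpy as np

BLOCK = 8  # assumed block size

def block_mark(block):
    """Hash the 7 most significant bit planes of a block into BLOCK*BLOCK mark bits."""
    msb = (block >> 1).astype(np.uint8).tobytes()
    digest = hashlib.sha256(msb).digest()
    bits = np.unpackbits(np.frombuffer(digest, dtype=np.uint8))
    return bits[:BLOCK * BLOCK].reshape(BLOCK, BLOCK)

def embed(image):
    """Write each block's mark into that block's LSB plane (uint8 grayscale image)."""
    out = image.copy()
    for y in range(0, image.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, image.shape[1] - BLOCK + 1, BLOCK):
            blk = out[y:y + BLOCK, x:x + BLOCK]
            out[y:y + BLOCK, x:x + BLOCK] = (blk & 0xFE) | block_mark(blk)
    return out

def verify(image):
    """Return the (y, x) origins of blocks whose marks no longer match."""
    bad = []
    for y in range(0, image.shape[0] - BLOCK + 1, BLOCK):
        for x in range(0, image.shape[1] - BLOCK + 1, BLOCK):
            blk = image[y:y + BLOCK, x:x + BLOCK]
            if not np.array_equal(blk & 1, block_mark(blk)):
                bad.append((y, x))
    return bad
```

Because the mark depends only on the upper bit planes, any change to a block's content invalidates that block's mark while leaving other blocks verifiable, which is what localizes the tampering.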


2015 ◽  
Vol 731 ◽  
pp. 197-200
Author(s):  
Hong Guo Wang ◽  
Yang Jin ◽  
Pei Lei Miao

Loss of gray levels is a common phenomenon in the tone transformation of digital images. To improve the effect of tone transformation, the original image is segmented into blocks of pixels. During the transformation, the average gray level of the pixels in each block is computed; according to this average, a different tone-transformation curve is selected for the block and its pixels are processed accordingly. If enough transformation curves are available, relatively few gray levels are lost. Although some gray levels are still lost within individual blocks, the lost levels differ from block to block and rarely coincide. Therefore, compared with a tone transformation that applies a single curve to the whole image, the block-segmented transformation with different curves loses far fewer gray levels and yields noticeably better image quality. The principle and the algorithm are described, and the results on processed images are analyzed.
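
A minimal sketch of the block-segmented tone transformation, assuming 16×16 blocks and a small illustrative family of gamma curves selected by block mean; the curves, thresholds, and the name `tone_transform` are assumptions, not the paper's actual curve set:

```python
import numpy as np

def tone_transform(gray, block=16):
    """Apply a per-block tone curve chosen by that block's average gray level."""
    curves = [lambda v: 255.0 * (v / 255.0) ** 0.6,   # brighten dark blocks
              lambda v: v,                            # leave mid blocks unchanged
              lambda v: 255.0 * (v / 255.0) ** 1.4]   # darken bright blocks
    out = gray.astype(float).copy()
    for y in range(0, gray.shape[0], block):
        for x in range(0, gray.shape[1], block):
            blk = out[y:y + block, x:x + block]
            mean = blk.mean()
            idx = 0 if mean < 85 else (1 if mean < 170 else 2)  # curve selection
            out[y:y + block, x:x + block] = curves[idx](blk)
    return np.clip(out, 0, 255).astype(np.uint8)
```

Because neighbouring blocks may use different curves, the gray levels collapsed by one curve are generally preserved by another, which is the mechanism behind the reduced overall gray-level loss.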


2014 ◽  
Vol 2 (2) ◽  
pp. 47-58
Author(s):  
Ismail Sh. Baqer

A two-level image quality enhancement is proposed in this paper. In the first level, the Dualistic Sub-Image Histogram Equalization (DSIHE) method decomposes the original image into two sub-images based on the median of the original image. The second level deals with spike-shaped noise that may appear in the image after processing. Three image enhancement methods that improve the visual quality of images are presented: GHE, LHE, and the proposed DSIHE. A comparative evaluation of these techniques examines objective and subjective image quality parameters, such as Peak Signal-to-Noise Ratio (PSNR), entropy H, and mean squared error (MSE), to measure the quality of the enhanced gray-scale images. For gray-level images, conventional histogram equalization methods such as GHE and LHE tend to shift the mean brightness of an image toward the middle of the gray-level range, limiting their suitability for contrast enhancement in consumer electronics such as TV monitors. The DSIHE method overcomes this disadvantage, as it tends to preserve brightness while still enhancing contrast. Experimental results show that the proposed technique gives better results in terms of discrete entropy, signal-to-noise ratio, and mean squared error than the global and local histogram-based equalization methods.
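
A minimal sketch of the DSIHE decomposition itself, assuming an 8-bit grayscale image: the image is split at its median gray level and each sub-population is histogram-equalized into its own half of the range, which tends to preserve mean brightness better than global equalization; `dsihe` is a hypothetical name:

```python
import numpy as np

def dsihe(gray):
    """Dualistic sub-image histogram equalization on an 8-bit grayscale image."""
    gray = gray.astype(np.uint8)
    med = int(np.median(gray))
    out = np.empty_like(gray)
    for lo, hi, mask in [(0, med, gray <= med), (med + 1, 255, gray > med)]:
        vals = gray[mask]
        if vals.size == 0:
            continue
        hist, _ = np.histogram(vals, bins=256, range=(0, 256))
        cdf = np.cumsum(hist) / vals.size
        # map this sub-population onto [lo, hi] via its own cumulative distribution
        lut = np.round(lo + cdf * (hi - lo)).astype(np.uint8)
        out[mask] = lut[vals]
    return out
```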


Author(s):  
Dawlat Mustafa Sulaiman ◽  
Adnan Mohsin Abdulazeez ◽  
Habibollah Haron

Finger vein recognition has attracted a great deal of attention as a promising approach to biometric identification, yet it still poses open challenges to researchers in this field. To address this, a double-stage feature extraction scheme based on localized finger vein image detection is proposed. Globalized Features Pattern Map Indication (GFPMI) is introduced to extract globalized finger vein line features from generated vein image datasets: the original gray-level color images, the globalized finger vein line features, the original localized gray-level images, and the colored localized finger vein images. Two kinds of features (gray-scale and texture features) are then extracted, describing the structure of the whole finger vein pattern across the dataset. A recurrent residual neural network (RNN) is used to identify the finger vein images. The experiments show that globalized feature extraction based on the localized colored finger vein images achieves the highest accuracy (93.49%), while the original image dataset achieves a lower accuracy of 69.86%.

