Robust Multibit Image Watermarking Based on Contrast Modulation and Affine Rectification

Author(s):  
Xin Zhong ◽  
Frank Y. Shih

In this paper, we present a robust multibit image watermarking scheme designed to withstand common image-processing attacks as well as affine distortions. This scheme combines contrast modulation with effective synchronization to achieve a large payload and high robustness. We analyze the robustness, the payload, and the lower bound of fidelity. For watermark resynchronization under affine distortions, we develop a self-referencing rectification method that detects the distortion parameters for reconstruction using the centers of mass of affine covariant regions. The effectiveness and advantages of the proposed scheme are confirmed by experimental results, which show superior performance compared with several state-of-the-art watermarking methods.
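
In outline, the rectification step can be approached as a least-squares fit of the affine distortion from matched centers of mass, followed by inversion of that map before extraction. The sketch below (plain NumPy; the region detection and centroid matching are assumed to have been done elsewhere, and the function names are illustrative, not the paper's) shows that idea.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares fit of a 2D affine map dst ~= A @ src + t.

    src_pts, dst_pts: (N, 2) arrays of matched centroids (N >= 3).
    Returns the 2x2 linear part A and the translation vector t.
    """
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    # Design matrix [x, y, 1]; solve for both output coordinates at once.
    X = np.hstack([src, np.ones((len(src), 1))])
    params, *_ = np.linalg.lstsq(X, dst, rcond=None)   # shape (3, 2)
    A = params[:2].T            # linear part
    t = params[2]               # translation
    return A, t

def rectify_points(pts, A, t):
    """Undo the estimated affine distortion before watermark extraction."""
    A_inv = np.linalg.inv(A)
    return (np.asarray(pts, dtype=float) - t) @ A_inv.T
```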

This paper presents a digital image watermarking scheme based on the discrete wavelet transform (DWT). Because digital data can be copied, transmitted, and spread with ease, unauthorized duplication of original data is a common problem, and digital image watermarking provides a solution to it. The scheme designed and presented in this paper comprises two main modules: one for embedding the watermark within the cover image and another for retrieving the watermark from the watermarked image. The process is carried out at different levels of the DWT and within different sub-bands. The extraction process recovers the watermark image from the red, green, and blue channels of the RGB image. Robustness and imperceptibility are tested, and in each case the corresponding PSNR and correlation values are recorded. The results show that the scheme behaves as robust, semi-fragile, or fragile digital image watermarking depending on the DWT level.
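
As a rough illustration of the embedding and extraction modules, the sketch below uses PyWavelets to place a ±1 watermark in a detail sub-band of one color channel. The additive rule, the 'haar' wavelet, and the choice of the HH sub-band are assumptions for illustration; the paper's exact rule, levels, and sub-bands may differ.

```python
import numpy as np
import pywt  # PyWavelets, assumed available

def embed_channel(channel, wm_bits, level=2, alpha=8.0):
    """Additively embed a +/-1 watermark into the deepest HH sub-band.

    channel: 2D float array (one of R, G, B).  wm_bits: flat array of 0/1.
    """
    coeffs = pywt.wavedec2(channel, 'haar', level=level)
    cH, cV, cD = coeffs[1]                       # detail bands at the deepest level
    flat = cD.flatten()
    wm = 2.0 * np.asarray(wm_bits[:flat.size], dtype=float) - 1.0  # 0/1 -> -1/+1
    flat[:wm.size] += alpha * wm
    coeffs[1] = (cH, cV, flat.reshape(cD.shape))
    return pywt.waverec2(coeffs, 'haar')

def extract_channel(marked, original, level=2):
    """Non-blind extraction: compare marked and original HH coefficients."""
    cD_m = pywt.wavedec2(marked, 'haar', level=level)[1][2]
    cD_o = pywt.wavedec2(original, 'haar', level=level)[1][2]
    return (cD_m - cD_o).flatten() > 0           # recovered bits
```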


Author(s):  
Abdul Fatir Ansari ◽  
Harold Soh

We address the problem of unsupervised disentanglement of latent representations learnt via deep generative models. In contrast to current approaches that operate on the evidence lower bound (ELBO), we argue that statistical independence in the latent space of VAEs can be enforced in a principled hierarchical Bayesian manner. To this effect, we augment the standard VAE with an inverse-Wishart (IW) prior on the covariance matrix of the latent code. By tuning the IW parameters, we are able to encourage (or discourage) independence in the learnt latent dimensions. Extensive experimental results on a range of datasets (2DShapes, 3DChairs, 3DFaces and CelebA) show that our approach outperforms the β-VAE and is competitive with the state-of-the-art FactorVAE. Our approach achieves significantly better disentanglement and reconstruction on a new dataset (CorrelatedEllipses) which introduces correlations between the factors of variation.
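
One way to read the inverse-Wishart prior is as an extra log-prior term on the aggregate covariance of the latent codes, added to the ELBO. The PyTorch sketch below is schematic only: it assumes a scaled-identity scale matrix Ψ, treats ν as the tuning knob, and is not the authors' exact objective.

```python
import torch

def iw_log_prior(z, nu=10.0, psi_scale=1.0):
    """Inverse-Wishart log-prior on the aggregate covariance of latent codes.

    z: (batch, d) latent samples from the encoder.  With Psi a scaled
    identity, increasing nu pushes the latent covariance toward a diagonal
    (independent) structure.  Schematic penalty, not the paper's objective.
    """
    b, d = z.shape
    zc = z - z.mean(dim=0, keepdim=True)
    cov = zc.t() @ zc / (b - 1) + 1e-5 * torch.eye(d, device=z.device)
    psi = psi_scale * torch.eye(d, device=z.device)
    logdet = torch.logdet(cov)
    trace_term = torch.trace(torch.linalg.solve(cov, psi))  # tr(Psi @ cov^-1)
    return -0.5 * (nu + d + 1) * logdet - 0.5 * trace_term

# Training-step sketch: maximize the ELBO plus a weighted IW log-prior, e.g.
# loss = recon_loss + kl_loss - lam * iw_log_prior(z)
```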


2014 ◽  
Vol 2014 ◽  
pp. 1-13 ◽  
Author(s):  
Chuntao Wang

Designing a practical watermarking scheme with high robustness, feasible imperceptibility, and large capacity remains one of the most important research topics in robust watermarking. This paper presents a posterior hidden Markov model (HMM) based informed image watermarking scheme, which substantially improves the practicability of prior-HMM-based informed watermarking while retaining favorable robustness, imperceptibility, and capacity. To make the encoder and decoder use (nearly) identical posterior HMMs, each cover image at the encoder and each received image at the decoder is attacked with JPEG compression at the same small quality factor (QF). The attacked images are then used to estimate the HMM parameter sets at the encoder and decoder, respectively. Numerical simulations show that a small QF of 5 is an optimal setting for practical use. Based on this posterior HMM, we develop an enhanced posterior-HMM-based informed watermarking scheme. Extensive experimental simulations show that the proposed scheme is comparable to its prior counterpart, in which the HMM is estimated from the original image, but it avoids transmitting the prior HMM from the encoder to the decoder. This considerably improves the practical applicability of HMM-based informed watermarking systems. It is also demonstrated that the proposed scheme achieves robustness comparable to the state of the art with significantly reduced computation time.
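
The key preprocessing step, re-compressing both the cover image (encoder side) and the received image (decoder side) with JPEG at the same small quality factor before HMM estimation, can be sketched as below using Pillow; the HMM parameter estimation itself is omitted.

```python
import io
import numpy as np
from PIL import Image

def jpeg_attack(image, qf=5):
    """Re-compress an image with JPEG at a small quality factor.

    Both encoder and decoder apply this before estimating HMM parameters so
    that they work from (nearly) identical image statistics.
    """
    buf = io.BytesIO()
    Image.fromarray(np.uint8(image)).save(buf, format='JPEG', quality=qf)
    buf.seek(0)
    return np.array(Image.open(buf))
```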


2021 ◽  
Vol 11 (7) ◽  
pp. 3214
Author(s):  
Huy Manh Nguyen ◽  
Tomo Miyazaki ◽  
Yoshihiro Sugaya ◽  
Shinichiro Omachi

Visual-semantic embedding aims to learn a joint embedding space where related video and sentence instances are located close to each other. Most existing methods place instances in a single embedding space. However, a single space struggles to accommodate diverse videos and sentences, because matching the visual dynamics of videos to the textual features of sentences is difficult. In this paper, we propose a novel framework that maps instances into multiple individual embedding spaces so that we can capture multiple relationships between instances, leading to compelling video retrieval. We produce a final similarity between instances by fusing the similarities measured in each embedding space with a weighted sum, where the weights are determined from the sentence; this allows an embedding space to be emphasized flexibly. We conducted sentence-to-video retrieval experiments on a benchmark dataset. The proposed method achieved superior performance, and the results are competitive with state-of-the-art methods. These experimental results demonstrate the effectiveness of the proposed multiple-embedding approach compared to existing methods.
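
The fusion rule can be illustrated as a weighted sum of per-space cosine similarities, with the weights predicted from the sentence and normalized by a softmax. The PyTorch sketch below uses illustrative names (weight_head, sent_feat) that are not taken from the paper.

```python
import torch
import torch.nn.functional as F

def fused_similarity(video_embs, sent_embs, sent_feat, weight_head):
    """Fuse per-space cosine similarities with sentence-dependent weights.

    video_embs, sent_embs: lists of K tensors, one (batch, dim) pair per
    embedding space.  weight_head: a small module mapping the sentence
    feature to K weights.
    """
    sims = torch.stack(
        [F.cosine_similarity(v, s, dim=-1) for v, s in zip(video_embs, sent_embs)],
        dim=-1)                                         # (batch, K)
    weights = F.softmax(weight_head(sent_feat), dim=-1)  # (batch, K)
    return (weights * sims).sum(dim=-1)                  # (batch,)
```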


2011 ◽  
Vol 3 (4) ◽  
pp. 42-53 ◽  
Author(s):  
Chun-Ning Yang ◽  
Zhe-Ming Lu

This paper presents a novel image watermarking scheme utilizing Block Truncation Coding (BTC). This scheme uses BTC to guide the watermark embedding and extraction processes. During the embedding process, the original cover image is first partitioned into non-overlapping 4×4 blocks. Then, BTC is performed on each block to obtain its BTC bitplane, and the number of ‘1’s in the bitplane is counted. If the watermark bit to be embedded is ‘1’ and the number of ‘1’s is odd, or the watermark bit to be embedded is ‘0’ and the number of ‘1’s is even, then no change is made. Otherwise, by changing at most three pixels in the original image block, the number of ‘1’s in the renewed bitplane is forced to be odd for the watermark bit ‘1’ or even for the watermark bit ‘0’. During the extraction process, BTC is first performed on each block to obtain its bitplane. If the number of ‘1’s in the bitplane is odd, then the embedded watermark bit is ‘1’; otherwise, the embedded watermark bit is ‘0’. The experimental results show that the proposed watermarking method is semi-fragile, withstanding only changes in brightness and contrast; therefore, the proposed method can be used for image authentication.
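
A minimal sketch of this parity rule follows: the BTC bitplane thresholds the block against its mean, the count of ‘1’s decides whether an adjustment is needed at embedding time, and its parity decodes the bit at extraction. The pixel-adjustment step (changing at most three pixels) is not shown.

```python
import numpy as np

def btc_bitplane(block):
    """BTC bitplane of a 4x4 block: pixels at or above the block mean are '1'."""
    return (block >= block.mean()).astype(np.uint8)

def needs_change(block, wm_bit):
    """Parity rule: bit '1' wants an odd count of '1's, bit '0' an even count.
    Returns True when up to three pixels must be adjusted (adjustment omitted)."""
    ones = int(btc_bitplane(block).sum())
    return (ones % 2 == 1) != (wm_bit == 1)

def extract_bit(block):
    """Extraction: an odd number of '1's in the bitplane decodes as '1'."""
    return int(btc_bitplane(block).sum() % 2)
```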


2019 ◽  
Vol 1 (3) ◽  
pp. 289-308 ◽  
Author(s):  
Lingbing Guo ◽  
Qingheng Zhang ◽  
Wei Hu ◽  
Zequn Sun ◽  
Yuzhong Qu

Knowledge graph (KG) completion aims at filling in the missing facts in a KG, where a fact is typically represented as a triple of the form (head, relation, tail). Traditional KG completion methods require two-thirds of a triple to be provided (e.g., head and relation) in order to predict the remaining element. In this paper, we propose a new method that extends multi-layer recurrent neural networks (RNNs) to model triples in a KG as sequences. It obtains state-of-the-art performance on the common entity prediction task, i.e., given the head (or tail) and the relation, predict the tail (or the head), on two benchmark data sets. Furthermore, the deep sequential characteristic of our method enables it to predict the relation given only the head (or tail), and even to predict whole triples. Our experiments on these two new KG completion tasks demonstrate that our method achieves superior performance compared with several alternative methods.
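
The core idea, treating a triple as a short sequence fed to a recurrent network, can be sketched in PyTorch as below. The sketch uses a single shared embedding table and a 2-layer GRU for brevity; the paper's multi-layer RNN architecture and training objective are richer.

```python
import torch
import torch.nn as nn

class TripleRNN(nn.Module):
    """Schematic sequence model over (head, relation) that scores candidate tails."""
    def __init__(self, num_symbols, dim=128):
        super().__init__()
        self.emb = nn.Embedding(num_symbols, dim)       # entities and relations share a table here
        self.rnn = nn.GRU(dim, dim, num_layers=2, batch_first=True)
        self.out = nn.Linear(dim, num_symbols)          # scores over all symbols

    def forward(self, head, relation):
        # head, relation: (batch,) index tensors read as a length-2 sequence
        seq = self.emb(torch.stack([head, relation], dim=1))  # (batch, 2, dim)
        _, h = self.rnn(seq)
        return self.out(h[-1])                          # (batch, num_symbols)
```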


Author(s):  
Byungmin Ahn ◽  
Taewhan Kim

A new algorithm is presented for extracting common kernels and convolutions to maximally eliminate redundant operations among the convolutions in binary- and ternary-weight convolutional neural networks. Precisely, we propose (1) a new algorithm of common kernel extraction to overcome the local and limited exploration of common kernel candidates by the existing method, and subsequently apply (2) a new concept of common convolution extraction to maximally eliminate the redundancy in the convolution operations. In addition, our algorithm can be (3) tuned to minimize the number of resulting kernels for convolutions, thereby saving the total memory access latency for kernels. Experimental results on ternary-weight VGG-16 demonstrate that our convolution optimization algorithm is very effective, reducing the total number of operations for all convolutions by [Formula: see text], thereby reducing the total number of execution cycles on the hardware platform by 22.4% while using [Formula: see text] fewer kernels than the convolution that uses the common kernels extracted by the state-of-the-art algorithm.
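
At its simplest, redundancy elimination among binary/ternary kernels amounts to computing each distinct kernel's convolution once and letting every filter that contains it reuse the result. The NumPy sketch below only deduplicates exactly identical kernels; the paper's algorithm additionally extracts and shares partial (common) kernels between non-identical filters.

```python
import numpy as np

def deduplicate_kernels(kernels):
    """Group identical ternary kernels so each unique one is convolved once.

    kernels: array of shape (num_filters, k, k) with entries in {-1, 0, +1}.
    Returns the unique kernels and, for each original filter, the index of
    the unique kernel whose convolution result it can reuse.
    """
    flat = kernels.reshape(len(kernels), -1)
    unique, inverse = np.unique(flat, axis=0, return_inverse=True)
    return unique.reshape(-1, *kernels.shape[1:]), inverse
```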


2010 ◽  
Vol 19 (02) ◽  
pp. 451-477 ◽  
Author(s):  
ALIMOHAMMAD LATIF ◽  
AHMAD REZA NAGHSH-NILCHI ◽  
S. AMIRHASAN MONADJEMI

In this paper, a novel watermarking technique based on the parametric slant-Hadamard transform is presented. Our approach embeds a pseudo-random sequence of real numbers in a selected set of the parametric slant-Hadamard transform coefficients. By exploiting the statistical properties of the embedded sequence, the mark can be reliably extracted without resorting to the original uncorrupted image. The presented method increases the flexibility of the watermarking scheme: changing the parameter set helps to improve fidelity and robustness against a number of attacks. Experimental results show that the proposed technique is secure and indeed highly robust to these attacks.
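
Embedding and detection of this kind follow the familiar additive spread-spectrum pattern on transform coefficients: a keyed pseudo-random sequence perturbs a selected coefficient set, and a correlation test detects it without the original image. The sketch below works on an already-transformed coefficient array; the parametric slant-Hadamard transform itself, and the paper's exact detector and threshold, are not reproduced.

```python
import numpy as np

def embed(coeffs, key, alpha=0.05):
    """Embed a keyed pseudo-random sequence into selected transform coefficients."""
    rng = np.random.default_rng(key)
    w = rng.standard_normal(coeffs.shape)
    return coeffs * (1.0 + alpha * w), w         # marked coefficients and the sequence

def detect(coeffs, w, threshold=0.1):
    """Blind correlation detector: normalized correlation against a threshold."""
    corr = np.dot(coeffs.ravel(), w.ravel()) / coeffs.size
    return corr > threshold
```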


2015 ◽  
Vol 731 ◽  
pp. 163-168 ◽  
Author(s):  
Yu Xin Liu ◽  
Wei Guo ◽  
Wen Fa Qi

To address the poor robustness of current text watermarking schemes, this paper proposes a text watermarking scheme based on the structure of Chinese character glyphs. In this method, different glyphs of a character with the same semantics are constructed by modifying the connections of Chinese character strokes at the junction points of the skeleton curve; these glyph variants represent different watermark bit strings. Extensive experimental results show that the proposed scheme effectively resists print-and-scan, copying, and photographing attacks. It can hide information in paper documents and can be used for information tracking.


2021 ◽  
Vol 2021 ◽  
pp. 1-12
Author(s):  
Mingjie Li ◽  
Zichi Wang ◽  
Haoxian Song ◽  
Yong Liu

Deep-learning-based image steganalysis has become a serious threat to modification-based image steganography in recent years. Generation-based steganography directly produces stego images that carry secret data and can resist advanced steganalysis algorithms. This paper proposes a novel generation-based steganography method that disguises stego images as images processed by normal operations (e.g., histogram equalization and sharpening). First, an image-processing model is trained using DCGAN and WGAN-GP to generate images processed by normal operations. Then, the noise vector derived from the secret data is fed into the trained model, and the resulting stego image is indistinguishable from a processed image. In this way, the steganographic process is covered by ordinary image processing, leaving little embedding trace, so the security of the steganography is guaranteed. Experimental results show that the proposed scheme has better security performance than existing steganographic methods when examined by state-of-the-art steganalytic tools, demonstrating the superiority and applicability of the proposed work.
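
The data-to-noise mapping can be sketched as follows: each secret bit selects one half of the range of a generator input dimension, so the trained DCGAN/WGAN-GP generator turns the bit string into an innocuous-looking processed image. The ±0.5 centers and the sign-based recovery are illustrative assumptions; a practical scheme adds redundancy so the bits survive regeneration.

```python
import numpy as np

def bits_to_noise(bits, delta=0.5):
    """Map secret bits to a generator noise vector: each bit picks the centre
    of the negative or positive half of a nominal [-1, 1] range."""
    bits = np.asarray(bits, dtype=float)
    return (2.0 * bits - 1.0) * delta            # 0 -> -0.5, 1 -> +0.5

def noise_to_bits(noise):
    """Recover the bits from an estimated noise vector by its sign."""
    return (np.asarray(noise) > 0).astype(int)

# stego = generator(bits_to_noise(secret_bits))  # trained DCGAN/WGAN-GP generator
```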

