Generative Reversible Data Hiding by Image-to-Image Translation via GANs

2019 ◽  
Vol 2019 ◽  
pp. 1-10 ◽  
Author(s):  
Zhuo Zhang ◽  
Guangyuan Fu ◽  
Fuqiang Di ◽  
Changlong Li ◽  
Jia Liu

Traditional reversible data hiding techniques are based on cover image modification, which inevitably leaves traces of rewriting that can be analyzed and attacked by a warden. Inspired by cover-synthesis steganography based on generative adversarial networks, this paper proposes a novel generative reversible data hiding (GRDH) scheme built on image translation. First, an image generator is used to obtain a realistic image, which serves as the input to a CycleGAN image-to-image translation model. After translation, a stego image with different semantic content is obtained. The secret message and the original input image can be recovered separately by a well-trained message extractor and by the inverse transform of the image translation. Experimental results verify the effectiveness of the scheme.
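A minimal sketch of the flow this abstract describes, in PyTorch. The module names (MessageDrivenGenerator, MessageExtractor) and the tiny stand-in networks are hypothetical illustrations, not the authors' architecture; the CycleGAN translators are replaced by identity placeholders so the pipeline runs end to end.

```python
# Sketch of the GRDH pipeline: message -> generated cover -> translated stego,
# then inverse translation -> message extraction. Untrained stand-in networks.
import torch
import torch.nn as nn

class MessageDrivenGenerator(nn.Module):
    """Hypothetical stand-in: maps a secret bit vector to a 'realistic' image."""
    def __init__(self, msg_len=100):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(msg_len, 3 * 64 * 64), nn.Tanh())
    def forward(self, msg):
        return self.net(msg).view(-1, 3, 64, 64)

class MessageExtractor(nn.Module):
    """Hypothetical stand-in: recovers the secret bits from the recovered image."""
    def __init__(self, msg_len=100):
        super().__init__()
        self.net = nn.Linear(3 * 64 * 64, msg_len)
    def forward(self, img):
        return torch.sigmoid(self.net(img.flatten(1)))

G = MessageDrivenGenerator()   # message -> realistic cover image
T_ab = nn.Identity()           # placeholder for the CycleGAN A->B translator
T_ba = nn.Identity()           # placeholder for the inverse B->A translator
E = MessageExtractor()

secret = torch.randint(0, 2, (1, 100)).float()
cover = G(secret)                 # 1) synthesise a cover image from the message
stego = T_ab(cover)               # 2) translate it to a different semantic domain
recovered_cover = T_ba(stego)     # 3) receiver inverts the translation
bits = (E(recovered_cover) > 0.5).float()   # 4) extract the message
```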

2020 ◽  
Vol 39 (3) ◽  
pp. 2977-2990
Author(s):  
R. Anushiadevi ◽  
Padmapriya Praveenkumar ◽  
John Bosco Balaguru Rayappan ◽  
Rengarajan Amirtharajan

Digital image steganography algorithms usually suffer from lossy restoration of the cover content after extraction of the secret message. In applications where both the cover object and the confidential information must be recovered exactly, reversibility of the cover is essential. With this objective, several reversible data hiding (RDH) algorithms are available in the literature. However, because the two are diametrically opposed parameters, existing RDH algorithms focus on either a good embedding capacity (EC) or better stego-image quality. In this paper, a pixel expansion reversible data hiding (PE-RDH) method with a high EC and good stego-image quality is proposed. The proposed PE-RDH method is based on three typical RDH schemes, namely difference expansion, histogram shifting, and pixel value ordering. It achieves an average EC of 0.75 bpp with an average peak signal-to-noise ratio (PSNR) of 30.89 dB, and offers 100% recovery of both the original image and the hidden confidential message. To protect the secret as well as the cover, the proposed PE-RDH is also implemented on encrypted images using homomorphic encryption. The strength of the proposed method on encrypted images was verified by comparison with several existing methods; the approach achieved better results in terms of EC, location map size, and imperceptibility of directly decrypted images.
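A minimal sketch of classic difference expansion (Tian's scheme), one of the three building blocks the abstract names. This illustrates the reversibility mechanism only; the paper's full PE-RDH algorithm, overflow handling, and location map are not reproduced here.

```python
# One bit is embedded in a pixel pair by doubling their difference and adding
# the bit as its LSB; the integer average is invariant, so the pair is
# perfectly recoverable.

def de_embed(x, y, bit):
    """Embed one bit into the pixel pair (x, y) by difference expansion."""
    l = (x + y) // 2          # integer average, invariant under embedding
    h = x - y                 # difference
    h2 = 2 * h + bit          # expanded difference carries the bit in its LSB
    return l + (h2 + 1) // 2, l - h2 // 2

def de_extract(x2, y2):
    """Recover the embedded bit and the original pixel pair."""
    l = (x2 + y2) // 2
    h2 = x2 - y2
    bit, h = h2 & 1, h2 >> 1  # strip the payload bit, halve the difference
    return (l + (h + 1) // 2, l - h // 2), bit

x2, y2 = de_embed(206, 201, 1)       # -> (209, 198)
(x, y), bit = de_extract(x2, y2)     # -> (206, 201), bit 1
assert (x, y, bit) == (206, 201, 1)  # lossless recovery
```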


2021 ◽  
Vol 11 (5) ◽  
pp. 2013
Author(s):  
Euihyeok Lee ◽  
Seungwoo Kang

What if the window of our cars is a magic window, which transforms dark views outside of the window at night into bright ones as we can see in the daytime? To realize such a window, one of important requirements is that the stream of transformed images displayed on the window should be of high quality so that users perceive it as real scenes in the day. Although image-to-image translation techniques based on Generative Adversarial Networks (GANs) have been widely studied, night-to-day image translation is still a challenging task. In this paper, we propose Daydriex, a processing pipeline to generate enhanced daytime translation focusing on road views. Our key idea is to supplement the missing information in dark areas of input image frames by using existing daytime images corresponding to the input images from street view services. We present a detailed processing flow and address several issues to realize our idea. Our evaluation shows that the results by Daydriex achieves lower Fréchet Inception Distance (FID) scores and higher user perception scores compared to those by CycleGAN only.
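For reference, a minimal sketch of the Fréchet Inception Distance used in this evaluation, computed from Inception-feature statistics. Extracting the features is assumed to be done elsewhere; this is the standard FID formula, not the authors' evaluation code.

```python
# FID = ||mu1 - mu2||^2 + Tr(C1 + C2 - 2 * sqrt(C1 @ C2)),
# computed over two sets of Inception activations of shape (N, D).
import numpy as np
from scipy import linalg

def fid(feats_real, feats_fake):
    mu1, mu2 = feats_real.mean(axis=0), feats_fake.mean(axis=0)
    c1 = np.cov(feats_real, rowvar=False)
    c2 = np.cov(feats_fake, rowvar=False)
    covmean = linalg.sqrtm(c1 @ c2)        # matrix square root of the product
    if np.iscomplexobj(covmean):           # discard tiny imaginary noise
        covmean = covmean.real
    diff = mu1 - mu2
    return diff @ diff + np.trace(c1 + c2 - 2.0 * covmean)

rng = np.random.default_rng(0)
real = rng.normal(size=(500, 64))          # toy feature sets
fake = rng.normal(0.1, 1.0, size=(500, 64))
print(fid(real, fake))                     # lower is better
```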


We investigate adversarial networks as a general-purpose solution to image-to-image translation problems, a task in computer vision commonly addressed with neural networks. We adopt generative adversarial networks for image-to-image translation, where the aim is to learn a mapping between an input image and an output image from a set of predefined pairs [4]. A paired dataset, however, is not always available, and this is where adversarial methods come into their own. We therefore develop a method that can convert an image from a domain A to another domain B, and recover it, in the absence of paired datasets. Our objective is to learn a mapping function G: A → B such that the images G(A) are indistinguishable from the distribution of B under an adversarial loss [1]. Because this mapping is highly under-constrained, we introduce an inverse mapping function F: B → A and a cycle consistency loss [7]. Furthermore, we wish to extend this work to other domains and combine it with neural style transfer and semantic image synthesis. Our essential contribution is to show that, on a wide assortment of problems, conditional GANs produce sensible results. This paper therefore addresses the task of converting an image X to an image Y; we rely on transfer learning of the training dataset and on optimising our code. You can find the source code for the same here.
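A minimal sketch of the adversarial plus cycle-consistency objective this paragraph describes, in PyTorch. G, F, D_A, and D_B are assumed to be user-defined generator and discriminator modules; the loss weighting lam=10.0 follows common CycleGAN practice and is an assumption here.

```python
# L_cyc(G, F) = E||F(G(a)) - a||_1 + E||G(F(b)) - b||_1, added to the usual
# adversarial terms so that each translation can be undone.
import torch
import torch.nn as nn

l1, bce = nn.L1Loss(), nn.BCEWithLogitsLoss()

def cycle_loss(G, F, real_a, real_b, lam=10.0):
    """Penalise failure to reconstruct an image after a round trip A->B->A."""
    return lam * (l1(F(G(real_a)), real_a) + l1(G(F(real_b)), real_b))

def generator_loss(G, F, D_A, D_B, real_a, real_b):
    """Adversarial loss (fool both discriminators) plus cycle consistency."""
    fake_b, fake_a = G(real_a), F(real_b)
    adv = bce(D_B(fake_b), torch.ones_like(D_B(fake_b))) \
        + bce(D_A(fake_a), torch.ones_like(D_A(fake_a)))
    return adv + cycle_loss(G, F, real_a, real_b)
```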


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Zhuorong Li ◽  
Wanliang Wang ◽  
Yanwei Zhao

Image translation, where the input image is mapped to its synthetic counterpart, is attractive because of its wide applications in computer graphics and computer vision. Despite significant progress on this problem, largely due to a surge of interest in conditional generative adversarial networks (cGANs), most cGAN-based approaches require supervised data, which are rarely available and expensive to produce. Instead, we elaborate a common framework that is also applicable to the unsupervised case, learning the image prior by conditioning the discriminator on unaligned targets to reduce the mapping space and improve generation quality. In addition, domain-adversarial training, inspired by domain adaptation, is proposed to capture discriminative and expressive features and thereby improve fidelity. The effectiveness of our method is demonstrated by compelling experimental results and comparisons with several baselines. As for generality, it can be analyzed from two perspectives: adaptation to both the supervised and unsupervised settings, and the diversity of tasks.
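The abstract does not give implementation details for its domain-adversarial training, so the following is an illustrative sketch of the gradient reversal layer commonly used for that purpose in domain adaptation (after Ganin and Lempitsky), not the authors' code.

```python
# A gradient reversal layer: identity on the forward pass, negated (and
# scaled) gradients on the backward pass, so the feature extractor learns to
# fool the domain classifier while the classifier itself trains normally.
import torch

class GradReverse(torch.autograd.Function):
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)          # pass features through unchanged

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None   # flip the gradient sign

def grad_reverse(x, lam=1.0):
    return GradReverse.apply(x, lam)

# Usage (hypothetical modules): domain_logits = domain_clf(grad_reverse(feats))
```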


2020 ◽  
Vol 128 (10-11) ◽  
pp. 2629-2650
Author(s):  
Evangelos Ververas ◽  
Stefanos Zafeiriou

Image-to-image (i2i) translation is the dense regression problem of learning how to transform an input image into an output image using aligned image pairs. Remarkable progress has been made in i2i translation with the advent of deep convolutional neural networks, in particular using the learning paradigm of generative adversarial networks (GANs). In the absence of paired images, i2i translation is tackled with one or multiple domain transformations (e.g., CycleGAN, StarGAN). In this paper, we study the problem of image-to-image translation under a set of continuous parameters that correspond to a model describing a physical process. In particular, we propose SliderGAN, which transforms an input face image into a new one according to the continuous values of a statistical blendshape model of facial motion. We show that it is possible to edit a facial image according to expression and speech blendshapes, using sliders that control the continuous values of the blendshape model. This provides much more flexibility in various tasks, including but not limited to face editing, expression transfer, and face neutralisation, compared to models based on discrete expressions or action units.
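A minimal sketch of one common way a generator can be conditioned on a continuous parameter vector such as the blendshape "sliders" described above: the vector is broadcast spatially and concatenated to the image channels. The shapes, layer choices, and parameter count are illustrative assumptions, not the SliderGAN architecture.

```python
# Condition an image-to-image generator on a continuous slider vector by
# tiling it to a (B, P, H, W) map and concatenating with the input channels.
import torch
import torch.nn as nn

class SliderConditionedGenerator(nn.Module):
    def __init__(self, n_params=30, img_ch=3):   # n_params is hypothetical
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(img_ch + n_params, 64, 7, padding=3),
            nn.ReLU(inplace=True),
            nn.Conv2d(64, img_ch, 7, padding=3),
            nn.Tanh(),
        )

    def forward(self, img, params):
        b, _, h, w = img.shape
        cond = params.view(b, -1, 1, 1).expand(b, params.shape[1], h, w)
        return self.net(torch.cat([img, cond], dim=1))

G = SliderConditionedGenerator()
face = torch.randn(1, 3, 128, 128)   # toy input; untrained weights
sliders = torch.zeros(1, 30)         # all-zero vector, e.g. neutral expression
sliders[0, 3] = 0.7                  # push one blendshape component
edited = G(face, sliders)            # output has the same shape as the input
```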


2021 ◽  
Vol 11 (15) ◽  
pp. 6741
Author(s):  
Chia-Chen Lin ◽  
Thai-Son Nguyen ◽  
Chin-Chen Chang ◽  
Wen-Chi Chang

Reversible data hiding has attracted significant attention from researchers because it can extract an embedded secret message correctly and recover the cover image without distortion. In this paper, a novel, efficient reversible data hiding scheme is proposed for absolute moment block truncation coding (AMBTC) compressed images. The proposed scheme exploits the high correlation between neighboring values in the two mean tables of AMBTC-compressed images to further losslessly encode these values and create free space for carrying a secret message. Experimental results demonstrate that the proposed scheme obtains a high embedding capacity and guarantees the same PSNRs as the traditional AMBTC algorithm. In addition, the proposed scheme achieves a higher embedding capacity and a higher efficiency rate than some previous schemes while maintaining an acceptable bit rate.
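To make the "two mean tables" the abstract refers to concrete, here is a minimal sketch of standard AMBTC compression for one grayscale block. The lossless re-encoding of those tables is the paper's contribution and is not reproduced here.

```python
# AMBTC compresses each block into a high mean (pixels >= block mean), a low
# mean (pixels below it), and a bitmap marking which pixels take which value.
import numpy as np

def ambtc_block(block):
    """Compress one grayscale block into (high mean, low mean, bitmap)."""
    mean = block.mean()
    bitmap = block >= mean
    high = block[bitmap].mean() if bitmap.any() else mean     # high-mean table entry
    low = block[~bitmap].mean() if (~bitmap).any() else mean  # low-mean table entry
    return int(round(float(high))), int(round(float(low))), bitmap

def ambtc_decode(high, low, bitmap):
    """Reconstruct the block from its two means and bitmap."""
    return np.where(bitmap, high, low)

block = np.array([[120, 130], [90, 140]], dtype=float)
h, l, bm = ambtc_block(block)      # h=130, l=90 for this toy block
print(ambtc_decode(h, l, bm))      # [[130 130] [ 90 140]]
```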

