Cryptospace Invertible Steganography with Conditional Generative Adversarial Networks

2021 ◽  
Vol 2021 ◽  
pp. 1-14
Author(s):  
Ching-Chun Chang

Deep neural networks have become the foundation of many modern intelligent systems. Recently, the author has explored adversarial learning for invertible steganography (ALIS) and demonstrated the potential of deep neural networks to reinvigorate an obsolete invertible steganographic method. With the worldwide popularisation of the Internet of things and cloud computing, invertible steganography can be recognised as a favourable way of facilitating data management and authentication due to its ability to embed information without causing permanent distortion. In light of growing concerns over cybersecurity, it is important to take a step forward and investigate invertible steganography for encrypted data. Indeed, multidisciplinary research in invertible steganography and cryptospace computing has received considerable attention. In this paper, we extend previous work and address the problem of cryptospace invertible steganography with deep neural networks. Specifically, we revisit a seminal work on cryptospace invertible steganography in which the problem of message decoding and image recovery is viewed as a type of binary classification. We formulate a general expression encompassing spatial, spectral, and structural analyses for this classification problem and propose a novel discrimination function based on a recurrent conditional generative adversarial network (RCGAN), which predicts bit-planes with stacked neural networks in a top-down manner. Experiments evaluate the performance of various discrimination functions and validate the superiority of the neural-network-aided discrimination function in terms of classification accuracy.
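As a concrete illustration of the spatial analysis that the paper generalises, the sketch below implements a classical fluctuation-based discrimination in Python/NumPy: given two candidate reconstructions of a block, the spatially smoother one is taken as the correct decoding. The function names and the absolute-difference smoothness measure are illustrative assumptions; this is not the RCGAN-based discriminator proposed in the paper.

```python
import numpy as np

def spatial_fluctuation(block: np.ndarray) -> float:
    """Sum of absolute differences between horizontally and vertically
    adjacent pixels; lower values indicate a smoother, more natural block."""
    block = block.astype(np.float64)
    dh = np.abs(np.diff(block, axis=1)).sum()
    dv = np.abs(np.diff(block, axis=0)).sum()
    return dh + dv

def classify_block(candidate_0: np.ndarray, candidate_1: np.ndarray) -> int:
    """Binary decision: return the embedded bit (0 or 1) whose candidate
    reconstruction looks spatially smoother."""
    return 0 if spatial_fluctuation(candidate_0) <= spatial_fluctuation(candidate_1) else 1
```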

Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4867
Author(s):  
Lu Chen ◽  
Hongjun Wang ◽  
Xianghao Meng

With the development of science and technology, neural networks, as an effective tool in image processing, play an increasingly important role in remote-sensing image processing. However, training neural networks requires a large sample database. Therefore, expanding datasets with limited samples has gradually become a research hotspot. The emergence of the generative adversarial network (GAN) provides new ideas for data expansion. Traditional GANs either require a large amount of input data or lack detail in the generated pictures. In this paper, we modify a shuffle attention network and introduce it into a GAN to generate higher-quality pictures from limited inputs. In addition, we improve the existing resize method and propose an equal-stretch resize method to solve the problem of image distortion caused by different input sizes. In the experiments, we also embed the newly proposed coordinate attention (CA) module into the backbone network as a control test. Qualitative indexes and six quantitative evaluation indexes were used to evaluate the experimental results, which show that, compared with other GANs used for picture generation, the modified Shuffle Attention GAN proposed in this paper can generate more refined, higher-quality, and more diverse aircraft pictures with more detailed object features from limited datasets.
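The equal-stretch resize is described only at a high level; a minimal sketch, assuming it scales both axes by a single shared factor and pads the remainder rather than stretching each axis independently, might look as follows (the function name and black padding are hypothetical choices):

```python
from PIL import Image

def equal_stretch_resize(img: Image.Image, target: int = 256) -> Image.Image:
    """Scale both axes by the same factor so the longer side matches `target`,
    then pad the shorter side symmetrically; this avoids the distortion caused
    by squeezing differently sized inputs into a fixed square."""
    img = img.convert("RGB")
    w, h = img.size
    scale = target / max(w, h)                              # one shared scale factor
    resized = img.resize((max(1, round(w * scale)), max(1, round(h * scale))),
                         Image.BILINEAR)
    canvas = Image.new("RGB", (target, target), (0, 0, 0))  # black padding
    canvas.paste(resized, ((target - resized.width) // 2,
                           (target - resized.height) // 2))
    return canvas
```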


Electronics ◽  
2021 ◽  
Vol 10 (4) ◽  
pp. 459
Author(s):  
Jialu Wang ◽  
Guowei Teng ◽  
Ping An

With the help of deep neural networks, video super-resolution (VSR) has made a huge breakthrough. However, these deep-learning-based methods are rarely used in specific practical situations. In addition, training sets may not be suitable because many methods assume ideal circumstances in which low-resolution (LR) datasets are degraded from high-resolution (HR) datasets in a fixed manner. In this paper, we propose a model based on a Generative Adversarial Network (GAN) and edge enhancement to perform super-resolution (SR) reconstruction for LR and blurred videos, such as closed-circuit television (CCTV) footage. The adversarial loss allows the discriminator to be trained to distinguish between SR frames and ground-truth (GT) frames, which helps produce realistic and highly detailed results. The edge-enhancement function uses a Laplacian edge module to perform edge enhancement on the intermediate result, which helps further improve the final output. In addition, we add a perceptual loss to the loss function to obtain better visual quality. We also train the network on different datasets. Extensive experiments show that our method has advantages on the Vid4 dataset and other LR videos.
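A minimal sketch of a Laplacian edge-enhancement step of the kind described, written in PyTorch; the 3x3 kernel, the fixed enhancement strength, and the function name are assumptions rather than the paper's exact module:

```python
import torch
import torch.nn.functional as F

# 3x3 Laplacian kernel applied per channel (depthwise) to extract edges.
_LAPLACIAN = torch.tensor([[0., 1., 0.],
                           [1., -4., 1.],
                           [0., 1., 0.]])

def enhance_edges(frames: torch.Tensor, strength: float = 0.2) -> torch.Tensor:
    """frames: (N, C, H, W) intermediate SR result in [0, 1].
    Adds a scaled Laplacian edge map back onto the frames."""
    n, c, h, w = frames.shape
    kernel = _LAPLACIAN.to(frames.dtype).to(frames.device)
    kernel = kernel.view(1, 1, 3, 3).repeat(c, 1, 1, 1)   # one kernel per channel
    edges = F.conv2d(frames, kernel, padding=1, groups=c)
    return (frames + strength * edges).clamp(0.0, 1.0)
```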


2020 ◽  
Vol 10 (21) ◽  
pp. 7433
Author(s):  
Michal Varga ◽  
Ján Jadlovský ◽  
Slávka Jadlovská

In this paper, we propose a methodology for generative enhancement of existing 3D image classifiers. This methodology is based on combining the advantages of both non-generative classifiers and generative modeling. Its purpose is to streamline the synthesis of novel deep neural networks by embedding existing compatible classifiers into a generative network architecture. A demonstration of this process and an evaluation of its effectiveness are performed using a 3D convolutional classifier and its generative equivalent, a 3D conditional generative adversarial network classifier. The results of the experiments show that the generative classifier delivers higher performance, gaining a relative classification accuracy improvement of 7.43%. An increase in accuracy is also observed when comparing it to a plain convolutional classifier trained on a dataset augmented with samples created by the trained generator. This suggests that a desirable knowledge-sharing mechanism exists within the hybrid discriminator-classifier network.
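A minimal PyTorch sketch of the underlying idea, embedding a classifier's feature extractor into a discriminator that carries both a real/fake head and a class head; all layer sizes and names are illustrative assumptions:

```python
import torch
import torch.nn as nn

class HybridDiscriminatorClassifier(nn.Module):
    """Shares a 3D convolutional backbone between two heads:
    one for real/fake discrimination and one for class prediction."""
    def __init__(self, num_classes: int, in_channels: int = 1):
        super().__init__()
        self.backbone = nn.Sequential(   # stand-in for an existing classifier's feature extractor
            nn.Conv3d(in_channels, 32, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(32, 64, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
        )
        self.real_fake_head = nn.Linear(64, 1)           # adversarial output
        self.class_head = nn.Linear(64, num_classes)     # classification output

    def forward(self, volume: torch.Tensor):
        features = self.backbone(volume)                 # volume: (N, C, D, H, W)
        return self.real_fake_head(features), self.class_head(features)
```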


2021 ◽  
Vol 15 ◽  
Author(s):  
Qianyi Zhan ◽  
Yuanyuan Liu ◽  
Yuan Liu ◽  
Wei Hu

18F-FDG positron emission tomography (PET) imaging of brain glucose use and amyloid accumulation is a research criterion for Alzheimer's disease (AD) diagnosis. Several PET studies have shown widespread metabolic deficits in the frontal cortex of AD patients. Therefore, studying frontal cortex changes is of great importance for AD research. This paper aims to segment the frontal cortex from brain PET imaging using deep neural networks. A learning framework called the Frontal cortex Segmentation model of brain PET imaging (FSPET) is proposed to tackle this problem. It incorporates an anatomical prior of the frontal cortex into the segmentation model, which is based on a conditional generative adversarial network and a convolutional auto-encoder. The FSPET method is evaluated on a dataset of 30 brain PET images with ground truth annotated by a radiologist. Results that outperform other baselines demonstrate the effectiveness of the FSPET framework.
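A minimal sketch of how an anatomical prior can be combined with a PET slice by channel concatenation at the input of an encoder-decoder generator, written in PyTorch; the layer configuration and class name are assumptions, not the FSPET implementation:

```python
import torch
import torch.nn as nn

class PriorConditionedSegmenter(nn.Module):
    """Illustrative cGAN generator input stage: the PET slice and an
    anatomical prior mask of the frontal cortex are concatenated along
    the channel axis before entering an encoder-decoder network."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),   # segmentation probability map
        )

    def forward(self, pet_slice: torch.Tensor, prior_mask: torch.Tensor) -> torch.Tensor:
        x = torch.cat([pet_slice, prior_mask], dim=1)       # (N, 2, H, W)
        return self.decoder(self.encoder(x))
```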


2020 ◽  
Author(s):  
Belén Vega-Márquez ◽  
Cristina Rubio-Escudero ◽  
Isabel Nepomuceno-Chamorro

The generation of synthetic data is becoming a fundamental task in the daily life of any organization due to the new data protection laws that are emerging. Because of the rise in the use of Artificial Intelligence, one of the most recent proposals to address this problem is the use of Generative Adversarial Networks (GANs). These networks have demonstrated a great capacity to create synthetic data with very good performance. The goal of synthetic data generation is to create data that will perform similarly to the original dataset in many analysis tasks, such as classification. The problem with GANs is that, in a classification problem, they do not take the class label into account when generating new data; it is treated as any other attribute. This research work has focused on the creation of new synthetic data from datasets with different characteristics using a Conditional Generative Adversarial Network (CGAN). CGANs are an extension of GANs in which the class label is taken into account when the new data are generated. The performance of our results has been measured in two different ways: firstly, by comparing the results obtained with classification algorithms, both on the original datasets and on the generated data; secondly, by checking that the correlation between the original data and the generated data is minimal.
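A minimal PyTorch sketch of the CGAN conditioning described here, where a one-hot class label is concatenated with the noise vector (generator) and with the record (discriminator); the layer sizes and class names are illustrative assumptions:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConditionalGenerator(nn.Module):
    """Maps (noise, class label) to a synthetic tabular record."""
    def __init__(self, noise_dim: int, num_classes: int, num_features: int):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(noise_dim + num_classes, 128), nn.ReLU(),
            nn.Linear(128, num_features),
        )

    def forward(self, noise: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        cond = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([noise, cond], dim=1))

class ConditionalDiscriminator(nn.Module):
    """Scores (record, class label) pairs as real or synthetic."""
    def __init__(self, num_classes: int, num_features: int):
        super().__init__()
        self.num_classes = num_classes
        self.net = nn.Sequential(
            nn.Linear(num_features + num_classes, 128), nn.LeakyReLU(0.2),
            nn.Linear(128, 1),
        )

    def forward(self, records: torch.Tensor, labels: torch.Tensor) -> torch.Tensor:
        cond = F.one_hot(labels, self.num_classes).float()
        return self.net(torch.cat([records, cond], dim=1))
```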


Atmosphere ◽  
2020 ◽  
Vol 11 (3) ◽  
pp. 304
Author(s):  
Jinah Kim ◽  
Jaeil Kim ◽  
Taekyung Kim ◽  
Dong Huh ◽  
Sofia Caires

In this paper, we propose a series of procedures for coastal wave-tracking using coastal video imagery with deep neural networks. The framework consists of three stages: video enhancement, hydrodynamic scene separation, and wave-tracking. First, a generative adversarial network, trained on paired raindrop and clean videos, is applied to remove image distortions caused by raindrops and to restore background information of coastal waves. Next, a hydrodynamic scene of propagated wave information is separated from the surrounding environmental information in the enhanced coastal video imagery using a deep autoencoder network. Finally, propagating waves are tracked by registering consecutive images in the quality-enhanced and scene-separated coastal video imagery using a spatial transformer network. The instantaneous wave speed of each individual wave crest and breaker in the video domain is successfully estimated by learning the behavior of transformed and propagated waves in the surf zone using deep neural networks. Since the framework enables the acquisition of spatio-temporal information about the surf zone through the characterization of wave breakers, including wave run-up, we expect that the proposed deep-neural-network framework will improve the understanding of nearshore wave dynamics.
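A minimal PyTorch sketch of the registration stage, in which a spatial transformer predicts an affine warp aligning the previous frame to the current one; the localisation network and class name are illustrative assumptions rather than the paper's architecture:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class AffineRegistration(nn.Module):
    """Predicts a 2x3 affine transform that warps the previous frame onto the
    current one; the displacement of tracked crests yields wave speed."""
    def __init__(self):
        super().__init__()
        self.localisation = nn.Sequential(
            nn.Conv2d(2, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, 6),
        )
        # Initialise the final layer to the identity transform.
        self.localisation[-1].weight.data.zero_()
        self.localisation[-1].bias.data.copy_(
            torch.tensor([1., 0., 0., 0., 1., 0.]))

    def forward(self, prev_frame: torch.Tensor, curr_frame: torch.Tensor) -> torch.Tensor:
        # prev_frame, curr_frame: (N, 1, H, W) grayscale video frames.
        theta = self.localisation(torch.cat([prev_frame, curr_frame], dim=1))
        grid = F.affine_grid(theta.view(-1, 2, 3), prev_frame.size(), align_corners=False)
        return F.grid_sample(prev_frame, grid, align_corners=False)
```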


Actuators ◽  
2021 ◽  
Vol 10 (5) ◽  
pp. 86
Author(s):  
Jie Li ◽  
Boyu Zhao ◽  
Kai Wu ◽  
Zhicheng Dong ◽  
Xuerui Zhang ◽  
...  

Gear reliability assessment for vehicle transmissions has been a challenging issue in determining vehicle safety in the transmission industry, owing to a significant number of classification errors caused by highly coupled gear parameters and insufficient high-density data. For the preprocessing stage of gear reliability assessment, this paper presents a representation generation approach based on generative adversarial networks (GAN) to advance the performance of reliability evaluation as a classification problem. First, with no need for complex modeling and massive calculations, a conditional generative adversarial network (CGAN) based model is established to generate gear representations by discovering the inherent mapping between gear-parameter features and gear reliability. Instead of producing intact samples like other GAN techniques, the CGAN-based model is designed to learn the features of the gear data. In this model, to raise the diversity of the produced features, the discriminator uses a mini-batch strategy of randomly sampling from the combination of raw and generated representations, instead of using all of the data features. Second, to overcome the CGAN's inability to label its outputs, a Wasserstein labeling (WL) scheme is proposed to tag the representations created by our model for classification. Lastly, original and produced representations are fused to train classifiers. Experiments on real-world gear data from industry indicate that the proposed approach outperforms other techniques on operational metrics.
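A minimal sketch of the mini-batch strategy described here, which samples the discriminator's inputs at random from the pooled raw and generated representations; the function name and the PyTorch formulation are assumptions:

```python
import torch

def mixed_minibatch(raw_feats: torch.Tensor,
                    generated_feats: torch.Tensor,
                    batch_size: int):
    """Randomly samples a discriminator mini-batch from the pooled raw and
    generated representations instead of feeding all features at once;
    returns the sampled features plus real(1)/generated(0) indicators."""
    pool = torch.cat([raw_feats, generated_feats], dim=0)
    flags = torch.cat([torch.ones(len(raw_feats)),
                       torch.zeros(len(generated_feats))])
    idx = torch.randperm(len(pool))[:batch_size]
    return pool[idx], flags[idx]
```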


2022 ◽  
Author(s):  
Dmitry Utyamishev ◽  
Inna Partin-Vaisband

A multiterminal obstacle-avoiding pathfinding approach is proposed. The approach is inspired by deep image learning. The key idea is based on training a conditional generative adversarial network (cGAN) to interpret a pathfinding task as a graphical bitmap and consequently map the pathfinding task onto a pathfinding solution represented by another bitmap. To enable the proposed cGAN pathfinding, a methodology for generating a synthetic dataset is also proposed. The cGAN model is implemented in Python/Keras, trained on synthetically generated data, evaluated on practical VLSI benchmarks, and compared with the state of the art. Due to effective parallelization on GPU hardware, the proposed approach yields wirelength comparable to the state of the art and better runtime and throughput for moderately complex pathfinding tasks. Furthermore, the runtime and throughput of the proposed approach remain constant with increasing task complexity, promising orders-of-magnitude improvements over the state of the art in complex pathfinding tasks. The cGAN pathfinder can be exploited in numerous high-throughput applications, such as navigation, tracking, and routing in complex VLSI systems. The last is of particular interest to this work.
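A minimal sketch of a pix2pix-style conditional-GAN training step for the bitmap-to-bitmap mapping described here; the paper's model is implemented in Keras, whereas this framework-agnostic illustration uses PyTorch, and the loss weighting and network interfaces are assumptions:

```python
import torch
import torch.nn.functional as F

def cgan_training_step(generator, discriminator, g_opt, d_opt,
                       task_bitmap, solution_bitmap, l1_weight=100.0):
    """One illustrative conditional-GAN step: the generator maps the task
    bitmap to a candidate routing bitmap; the discriminator judges
    (task, solution) pairs as real or generated."""
    # --- discriminator update ---
    d_opt.zero_grad()
    fake = generator(task_bitmap).detach()
    d_real = discriminator(task_bitmap, solution_bitmap)
    d_fake = discriminator(task_bitmap, fake)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real))
              + F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_loss.backward()
    d_opt.step()

    # --- generator update: adversarial term plus L1 reconstruction term ---
    g_opt.zero_grad()
    fake = generator(task_bitmap)
    d_fake = discriminator(task_bitmap, fake)
    g_loss = (F.binary_cross_entropy_with_logits(d_fake, torch.ones_like(d_fake))
              + l1_weight * F.l1_loss(fake, solution_bitmap))
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```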


Complexity ◽  
2021 ◽  
Vol 2021 ◽  
pp. 1-30
Author(s):  
R. Nandhini Abirami ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Usman Tariq ◽  
Chuan-Yu Chang

Computational visual perception, also known as computer vision, is a field of artificial intelligence that enables computers to process digital images and videos in a way similar to biological vision. It involves developing methods that replicate the capabilities of biological vision. The goal of computer vision is to surpass the capabilities of biological vision in extracting useful information from visual data. The massive amount of data generated today is one of the driving factors behind the tremendous growth of computer vision. This survey provides an overview of existing applications of deep learning in computational visual perception. It explores various deep learning techniques adapted to solve computer vision problems using deep convolutional neural networks and deep generative adversarial networks. The pitfalls of deep learning and their solutions are briefly discussed; the solutions covered are dropout and augmentation. The results show that there is a significant improvement in accuracy when using dropout and data augmentation. Applications of deep convolutional neural networks, namely image classification, localization and detection, document analysis, and speech recognition, are discussed in detail. An in-depth analysis of deep generative adversarial network applications, namely image-to-image translation, image denoising, face aging, and facial attribute editing, is carried out. The deep generative adversarial network is an unsupervised learning model, but adding a certain number of labels in practical applications can improve its generative ability. Acquiring many data labels is challenging, but a small number of labels can usually be obtained; therefore, combining semisupervised learning and generative adversarial networks is one of the future directions. This article surveys recent developments in this direction, provides a critical review of the related significant aspects, investigates the current opportunities and future challenges in all the emerging domains, and discusses the current opportunities in many emerging fields such as handwriting recognition, semantic mapping, webcam-based eye trackers, lumen center detection, query-by-string word spotting, intermittently closed and open lakes and lagoons, and landslides.
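The dropout and data-augmentation remedies highlighted in the survey can be illustrated with a short sketch; the transform choices, layer sizes, and dropout rate below are arbitrary examples using torchvision and PyTorch:

```python
import torch.nn as nn
from torchvision import transforms

# Illustrative data-augmentation pipeline applied to training images.
train_transforms = transforms.Compose([
    transforms.RandomHorizontalFlip(),
    transforms.RandomCrop(32, padding=4),
    transforms.ToTensor(),
])

# Illustrative classifier head regularised with dropout.
classifier_head = nn.Sequential(
    nn.Flatten(),
    nn.Linear(64 * 8 * 8, 256), nn.ReLU(),
    nn.Dropout(p=0.5),   # randomly zeroes activations during training
    nn.Linear(256, 10),
)
```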


2021 ◽  
Vol 7 (8) ◽  
pp. 128
Author(s):  
Oliver Giudice ◽  
Luca Guarnera ◽  
Sebastiano Battiato

To properly counter the Deepfake phenomenon, new Deepfake detection algorithms need to be designed; the misuse of this formidable A.I. technology has serious consequences for the private life of every person involved. The state of the art proliferates with solutions that use deep neural networks to detect fake multimedia content, but unfortunately these algorithms appear to be neither generalizable nor explainable. However, traces left by Generative Adversarial Network (GAN) engines during the creation of Deepfakes can be detected by analyzing ad-hoc frequencies. For this reason, in this paper we propose a new pipeline able to detect the so-called GAN Specific Frequencies (GSF), which represent a unique fingerprint of the different generative architectures. By employing the Discrete Cosine Transform (DCT), anomalous frequencies were detected. The β statistics inferred from the distribution of the AC coefficients are the key to recognizing GAN-generated data. Robustness tests were also carried out to demonstrate the effectiveness of the technique against different attacks on images, such as JPEG compression, mirroring, rotation, scaling, and the addition of randomly sized rectangles. Experiments demonstrate that the method is innovative, exceeds the state of the art, and also gives many insights in terms of explainability.
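A simplified sketch of the kind of statistic the pipeline relies on: an 8x8 block DCT of a grayscale image, with a Laplacian scale estimate β for each AC position computed as the mean absolute deviation across blocks; the estimator and function name are simplifying assumptions relative to the paper's procedure:

```python
import numpy as np
from scipy.fft import dctn

def dct_beta_statistics(gray: np.ndarray, block: int = 8) -> np.ndarray:
    """Returns 63 β estimates, one per AC position in the 8x8 grid,
    computed over all non-overlapping blocks of a grayscale image."""
    h, w = gray.shape
    h, w = h - h % block, w - w % block
    blocks = gray[:h, :w].reshape(h // block, block, w // block, block)
    blocks = blocks.transpose(0, 2, 1, 3).reshape(-1, block, block).astype(np.float64)
    coeffs = dctn(blocks, axes=(1, 2), norm='ortho')     # 2-D DCT per block
    ac = coeffs.reshape(len(coeffs), -1)[:, 1:]          # drop the DC term
    # Laplacian scale estimate: mean absolute deviation of each AC coefficient.
    return np.mean(np.abs(ac - np.median(ac, axis=0)), axis=0)
```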

