Coverless Image Steganography Based on Generative Adversarial Network

Mathematics ◽  
2020 ◽  
Vol 8 (9) ◽  
pp. 1394 ◽  
Author(s):  
Jiaohua Qin ◽  
Jing Wang ◽  
Yun Tan ◽  
Huajun Huang ◽  
Xuyu Xiang ◽  
...  

Traditional image steganography needs to modify the cover image or embed secret messages into it for transmission. However, the resulting distortion of the cover image can easily be detected by steganalysis tools, which leads to leakage of the secret message. Coverless steganography, which has the advantage of hiding secret messages without modification, has therefore become a topic of research in recent years. However, current coverless steganography still has problems such as low capacity and poor quality. To solve these problems, we use a generative adversarial network (GAN), an effective deep learning framework, to encode secret messages into the cover image and optimize the quality of the steganographic image through adversarial training. Experiments show that our model not only achieves a payload of 2.36 bits per pixel, but also successfully escapes detection by steganalysis tools.
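The encode-then-extract idea can be sketched with a toy linear "generator" in numpy; everything here (the shapes, the shared weight matrix, the tanh generator, the pseudo-inverse extractor) is illustrative, not the paper's model:

```python
import numpy as np

# Toy sketch of coverless encoding: the secret bits drive a shared
# "generator", and the receiver recovers them with a paired extractor
# instead of reading modified cover pixels.
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((64, 16))    # generator weights known to both parties

def generate(bits):
    z = 2.0 * bits - 1.0                   # map {0,1} -> {-1,+1}
    return np.tanh(W @ z)                  # "stego image" pixels in (-1, 1)

def extract(img):
    z_hat = np.linalg.pinv(W) @ np.arctanh(img)
    return (z_hat > 0).astype(int)         # threshold back to bits

secret = rng.integers(0, 2, size=16)
assert np.array_equal(extract(generate(secret)), secret)
```

In the actual GAN setting, the extractor would be a trained network rather than a pseudo-inverse, and the generator would be optimized adversarially so its outputs resemble natural images.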

2021 ◽  
Author(s):  
Tianyu Liu ◽  
Yuge Wang ◽  
Hong-yu Zhao

With the advancement of technology, we can generate and access large-scale, high dimensional and diverse genomics data, especially through single-cell RNA sequencing (scRNA-seq). However, integrative downstream analysis from multiple scRNA-seq datasets remains challenging due to batch effects. In this paper, we focus on scRNA-seq data integration and propose a new deep learning framework based on Wasserstein Generative Adversarial Network (WGAN) combined with an attention mechanism to reduce the differences among batches. We also discuss the limitations of the existing methods and demonstrate the advantages of our new model from both theoretical and practical aspects, advocating the use of deep learning in genomics research.
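As a toy illustration of the two ingredients named above, the sketch below combines a softmax attention weighting over genes with a linear Wasserstein-style critic; the names, shapes, and random weights are illustrative, not the paper's architecture:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

rng = np.random.default_rng(1)
batch_a = rng.random((100, 5))               # 100 cells x 5 genes, batch A
batch_b = rng.random((100, 5)) + 0.2         # batch B with an additive batch effect

scores = rng.standard_normal(5)              # per-gene scores (learned in practice)
attn = softmax(scores)                       # attention weights, sum to 1

w = rng.standard_normal(5)                   # linear critic f(x) = x @ w
# Wasserstein-style gap between attention-weighted batches; the batch
# corrector network would be trained to shrink this gap.
gap = ((batch_a * attn) @ w).mean() - ((batch_b * attn) @ w).mean()
```

In a real WGAN the critic is a deep network constrained to be 1-Lipschitz, and the attention weights are learned jointly with the corrector rather than drawn at random.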


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Zhijian Yang ◽  
Ilya M. Nasrallah ◽  
Haochang Shou ◽  
Junhao Wen ◽  
Jimit Doshi ◽  
...  

Abstract Heterogeneity of brain diseases is a challenge for precision diagnosis/prognosis. We describe and validate Smile-GAN (SeMI-supervised cLustEring-Generative Adversarial Network), a semi-supervised deep-clustering method, which examines neuroanatomical heterogeneity contrasted against normal brain structure, to identify disease subtypes through neuroimaging signatures. When applied to regional volumes derived from T1-weighted MRI (two studies; 2,832 participants; 8,146 scans) including cognitively normal individuals and those with cognitive impairment and dementia, Smile-GAN identified four patterns or axes of neurodegeneration. Applying this framework to longitudinal data revealed two distinct progression pathways. Measures of expression of these patterns predicted the pathway and rate of future neurodegeneration. Pattern expression offered complementary performance to amyloid/tau in predicting clinical progression. These deep-learning derived biomarkers offer potential for precision diagnostics and targeted clinical trial recruitment.


2021 ◽  
Vol 14 (1) ◽  
pp. 6
Author(s):  
Roberto de Lima-Hernandez ◽  
Maarten Vergauwen

An increased interest in computer-aided heritage reconstruction has emerged in recent years due to the maturity of sophisticated computer vision techniques. Concretely, feature-based matching methods have been used to reassemble heritage assets, yielding plausible results for data that contains enough salient points for matching. However, they fail to register ancient artifacts that have badly deteriorated over the years, in particular monochromatic incomplete data such as 3D sunk-relief eroded decorations, damaged drawings, and ancient inscriptions. The main issue lies in the lack of regions of interest and the poor quality of the data, which prevent feature-based algorithms from estimating distinctive descriptors. This paper addresses the reassembly of damaged decorations by deploying a Generative Adversarial Network (GAN) to predict the continuing decoration traces of broken heritage fragments. By extending the texture information of broken counterpart fragments, it is demonstrated that registration methods are able to find mutual characteristics that allow for accurate estimation of the optimal rigid transformation for fragment alignment. This work steps away from feature-based approaches, instead employing Mutual Information (MI) as a similarity metric to estimate an alignment transformation. Moreover, high-resolution geometry and imagery are combined to cope with the fragility and severe damage of heritage fragments. The testing data is therefore composed of a set of ancient Egyptian decorated broken fragments recorded through 3D remote sensing techniques: structured light technology for mesh model creation, as well as orthophotos, upon which digital drawings are created. Even though this study is restricted to Egyptian artifacts, the workflow can be applied to reconstruct different types of decoration patterns in the cultural heritage domain.
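The MI similarity metric mentioned above can be sketched with a standard joint-histogram estimate; the bin count and the random test images below are illustrative:

```python
import numpy as np

# Mutual information between two image patches from a 2-D joint histogram.
def mutual_information(a, b, bins=32):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()                # joint distribution estimate
    px = pxy.sum(axis=1, keepdims=True)      # marginal of a
    py = pxy.sum(axis=0, keepdims=True)      # marginal of b
    nz = pxy > 0                             # avoid log(0)
    return float((pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])).sum())

rng = np.random.default_rng(2)
img = rng.random((64, 64))                   # stand-in for a fragment's texture
# A patch compared against itself shares far more information than it does
# with unrelated noise, which is what lets MI rank candidate alignments.
assert mutual_information(img, img) > mutual_information(img, rng.random((64, 64)))
```

Because MI depends only on the statistical co-occurrence of intensities, it can score an alignment even when no distinctive feature descriptors survive on the eroded surface.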


2021 ◽  
Vol 2021 ◽  
pp. 1-25
Author(s):  
Young Ha Shin ◽  
Dong-Cheon Lee

An orthoimage, which is geometrically equivalent to a map, is one of the important geospatial products. Displacement and occlusion in optical images are caused by perspective projection, camera tilt, and object relief. A digital surface model (DSM) is essential for generating true orthoimages, correcting displacement, and recovering occluded areas. Light detection and ranging (LiDAR) data collected by an airborne laser scanner (ALS) system is a major source of DSMs. Traditional methods require sophisticated procedures to produce a true orthoimage; most utilize 3D coordinates of the DSM and multiview images with overlapping areas for orthorectifying displacement and for detecting and recovering occluded areas. LiDAR point cloud data provides not only 3D coordinates but also intensity information reflected from object surfaces in the georeferenced, orthoprojected space. This paper proposes true orthoimage generation based on generative adversarial network (GAN) deep learning (DL) with the Pix2Pix model, using the intensity and DSM of the LiDAR data. The major advantage of using LiDAR data is that, in terms of projection geometry, the data is an occlusion-free true orthoimage, except in cases of low image quality. Intensive experiments were performed using the benchmark datasets provided by the International Society for Photogrammetry and Remote Sensing (ISPRS). The results demonstrate that the proposed approach can efficiently generate true orthoimages directly from LiDAR data. However, it is crucial to apply appropriate preprocessing to improve the quality of the LiDAR intensity data in order to produce higher-quality true orthoimages.
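The Pix2Pix generator objective underlying this kind of image-to-image translation can be sketched as the usual adversarial term plus an L1 reconstruction term; the shapes, discriminator score, and λ = 100 weighting below are illustrative:

```python
import numpy as np

# Pix2Pix-style generator loss: fool the discriminator while staying
# close to the reference image in L1.
def generator_objective(d_fake, fake, target, lam=100.0):
    adv = float(-np.log(d_fake + 1e-8).mean())   # adversarial (cGAN) term
    l1 = float(np.abs(fake - target).mean())     # L1 fidelity term
    return adv + lam * l1

rng = np.random.default_rng(3)
fake = rng.random((16, 16))                      # generated orthoimage patch
target = rng.random((16, 16))                    # reference orthoimage patch
d_fake = np.array([0.4])                         # discriminator's score for the fake
loss = generator_objective(d_fake, fake, target)
assert loss > 0
```

In the paper's setting the generator's input would be the LiDAR intensity and DSM channels, and the target a reference true orthoimage patch; the L1 term is what keeps the translated output geometrically faithful.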


2020 ◽  
Author(s):  
Abdulkarem Almawgani ◽  
Adam Alhawari ◽  
Wlaed Alarashi ◽  
Ali Alshwal

Abstract Digital images are commonly used in steganography due to the popularity of digital image transfer and exchange through the Internet. However, the tradeoff between managing a high capacity of secret data and ensuring high security and quality of the stego image is a major challenge. In this paper, a hybrid steganography method based on the Haar Discrete Wavelet Transform (HDWT), the Lempel–Ziv–Welch (LZW) algorithm, the Genetic Algorithm (GA), and the Optimal Pixel Adjustment Process (OPAP) is proposed. The cover image is divided into non-overlapping blocks of n×n pixels. Then, the HDWT is used to increase the robustness of the stego image against attacks. In order to increase the capacity for, and security of, the hidden image, the LZW algorithm is applied to the secret message. After that, the GA is employed to embed the encoded and compressed secret message into the cover image coefficients, finding the optimal mapping function for each block in the image. Lastly, the OPAP is applied to reduce the error, i.e., the difference between the cover image blocks and the stego image blocks, further improving the stego image quality. The proposed method was evaluated using four standard images as covers and three types of secret messages. The results demonstrate higher visual quality of the stego image, with a larger amount of embedded secret data, than is achieved by already-known techniques. The experimental results show that the information-hiding capacity of the proposed method reached 50% with a high PSNR (52.83 dB). Thus, the proposed hybrid image steganography method improves the quality of the stego image over those of the state-of-the-art methods.
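The OPAP step can be sketched as follows: after the k least-significant bits of a pixel are replaced, the higher bits are adjusted so the stego pixel moves as close as possible to the cover value while keeping the embedded bits intact (the pixel values and k below are illustrative):

```python
import numpy as np

# Optimal Pixel Adjustment Process: if LSB substitution pushed a pixel
# more than 2^(k-1) away from the cover, shift it by 2^k toward the
# cover value; the k embedded LSBs are unchanged by a +-2^k shift.
def opap(cover, stego, k):
    step = 1 << k
    c = cover.astype(int)
    s = stego.astype(int)
    d = s - c
    s = np.where((d > step // 2) & (s - step >= 0), s - step, s)
    s = np.where((d < -(step // 2)) & (s + step <= 255), s + step, s)
    return s.astype(np.uint8)

k = 3
cover = np.array([200], dtype=np.uint8)
secret_bits = 7                                        # 0b111 to hide in the 3 LSBs
stego = (cover // (1 << k)) * (1 << k) + secret_bits   # naive substitution -> 207
# Error is 7; OPAP subtracts 2^k, giving 199 (error 1) with the LSBs still 0b111.
adjusted = opap(cover, stego, k)
assert int(adjusted[0]) == 199 and int(adjusted[0]) & 7 == secret_bits
```

This is why OPAP improves PSNR for free: it never changes the payload, only the interpretation-irrelevant high bits.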


Optik ◽  
2021 ◽  
Vol 227 ◽  
pp. 166060
Author(s):  
Yangdi Hu ◽  
Zhengdong Cheng ◽  
Xiaochun Fan ◽  
Zhenyu Liang ◽  
Xiang Zhai

Information ◽  
2021 ◽  
Vol 12 (6) ◽  
pp. 249
Author(s):  
Xin Jin ◽  
Yuanwen Zou ◽  
Zhongbing Huang

The cell cycle is an important process in cellular life. In recent years, some image processing methods have been developed to determine the cell cycle stages of individual cells. However, in most of these methods, cells have to be segmented and their features extracted. During feature extraction, some important information may be lost, resulting in lower classification accuracy. Thus, we used a deep learning method to retain all cell features. To address the insufficient number and imbalanced distribution of original images, we used the Wasserstein generative adversarial network with gradient penalty (WGAN-GP) for data augmentation. At the same time, a residual network (ResNet), one of the most widely used deep learning classification networks, was used for image classification. Our method classified cell cycle images more effectively, reaching an accuracy of 83.88%. Compared with an accuracy of 79.40% in previous experiments, our accuracy increased by 4.48 percentage points. Another dataset was used to verify the effect of our model; compared with previous results, our accuracy increased by 12.52 percentage points. The results showed that our new cell cycle image classification system based on WGAN-GP and ResNet is useful for the classification of imbalanced images. Moreover, our method could potentially address the low classification accuracy in biomedical images caused by insufficient numbers and imbalanced distributions of original images.
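The gradient penalty that distinguishes WGAN-GP from a plain WGAN can be sketched for a linear critic, whose gradient with respect to its input is simply its weight vector; real implementations compute this gradient by automatic differentiation, and all shapes and weights here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(4)
w = rng.standard_normal(16)                  # weights of a linear critic f(x) = x @ w

real = rng.standard_normal((32, 16))         # features of real minority-class images
fake = rng.standard_normal((32, 16))         # features of generated images
eps = rng.random((32, 1))
interp = eps * real + (1 - eps) * fake       # random points between real and fake

# For f(x) = x @ w the input gradient is w everywhere, so the penalty
# pushes ||w|| toward 1, enforcing the 1-Lipschitz constraint softly.
grad_norm = np.linalg.norm(np.broadcast_to(w, interp.shape), axis=1)
gp = ((grad_norm - 1.0) ** 2).mean()

critic_loss = (fake @ w).mean() - (real @ w).mean() + 10.0 * gp
```

The soft penalty (weighted by 10 by convention) is what makes WGAN-GP training stable enough to augment small, imbalanced image sets reliably.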


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, face, body shape, and other factors have been widely used as identification labels, which provide a lot of convenience for our daily life. However, image recognition has much higher requirements for image conditions than traditional identification methods such as a password. Therefore, image enhancement plays an important role in the analysis of noisy images, among which low-light images are the top priority of our research. In this paper, a low-light image enhancement method based on Generative Adversarial Networks (GAN) optimized by an enhancement network module is proposed. The proposed method first applies the enhancement network, feeding the input image into the generator to produce a similar image in the new space. A loss function is then constructed and minimized to train the discriminator, which compares the image generated by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that images enhanced by our proposed network have higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.
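The PSNR metric used in such comparisons can be sketched directly (8-bit images assumed; the test values are illustrative):

```python
import numpy as np

# Peak signal-to-noise ratio between a reference and a test image.
def psnr(ref, test):
    mse = np.mean((ref.astype(float) - test.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(255.0 ** 2 / mse)

ref = np.full((8, 8), 100, dtype=np.uint8)
noisy = ref.copy()
noisy[0, 0] = 110                            # one pixel off by 10
# MSE = 100/64, so PSNR = 10*log10(65025 * 64 / 100), roughly 46 dB
assert psnr(ref, noisy) > 40
```

SSIM is computed differently (local means, variances, and covariances rather than a global MSE), which is why papers report both: PSNR tracks pixel fidelity while SSIM tracks perceived structure.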


Proceedings ◽  
2021 ◽  
Vol 77 (1) ◽  
pp. 17
Author(s):  
Andrea Giussani

In the last decade, advances in statistical modeling and computer science have boosted the production of machine-generated content in different fields: from language to image generation, the quality of the generated outputs is remarkably high, sometimes better than that produced by a human being. Modern technological advances such as OpenAI’s GPT-2 (and recently GPT-3) permit automated systems to dramatically alter reality with synthetic outputs, so that humans are not able to distinguish the real copy from its counterfeits. An example is given by an article entirely written by GPT-2, but many other examples exist. In the field of computer vision, Nvidia’s Generative Adversarial Network, commonly known as StyleGAN (Karras et al. 2018), has become the de facto reference point for the production of a huge amount of fake human face portraits; additionally, recent algorithms have been developed to create both musical scores and mathematical formulas. This presentation aims to brief participants on the state-of-the-art results in this field: we will cover both GANs and language modeling with recent applications. The novelty here is that we apply a transformer-based machine learning technique, namely RoBERTa (Liu et al. 2019), to the detection of human-produced versus machine-produced text in the context of fake news detection. RoBERTa is a recent algorithm based on the well-known Bidirectional Encoder Representations from Transformers algorithm, known as BERT (Devlin et al. 2018); this is a bidirectional transformer for natural language processing developed by Google and pre-trained over a huge amount of unlabeled textual data to learn embeddings. We then use these representations as the input of our classifier to detect real versus machine-produced text. The application is demonstrated in the presentation.

