Detect and Remove Watermark in Deep Neural Networks via Generative Adversarial Networks

2021 ◽  
pp. 341-357
Author(s):  
Shichang Sun ◽  
Haoqi Wang ◽  
Mingfu Xue ◽  
Yushu Zhang ◽  
Jian Wang ◽  
...  
IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 111168-111180 ◽  
Author(s):  
Jinrui Wang ◽  
Shunming Li ◽  
Baokun Han ◽  
Zenghui An ◽  
Huaiqian Bao ◽  
...  

Author(s):  
Ming Hou ◽  
Brahim Chaib-draa ◽  
Chao Li ◽  
Qibin Zhao

In this work, we consider the task of classifying binary positive-unlabeled (PU) data. Existing discriminative PU models attempt to seek an optimal reweighting strategy for the unlabeled data so that a decent decision boundary can be found. However, given limited positive data, conventional PU models tend to overfit when adapted to very flexible deep neural networks. In contrast, we are the first to tackle the binary PU task from the perspective of generative learning, by leveraging the powerful generative adversarial network (GAN) framework. Our generative positive-unlabeled (GenPU) framework incorporates an array of discriminators and generators that are endowed with different roles in simultaneously producing realistic positive and negative samples. We provide theoretical analysis to justify that, at equilibrium, GenPU is capable of recovering both the positive and negative data distributions. Moreover, we show that GenPU is generalizable and closely related to semi-supervised classification. Given rather limited positive data, experiments on both synthetic and real-world datasets demonstrate the effectiveness of the proposed framework. With infinite realistic and diverse sample streams generated by GenPU, a very flexible classifier can then be trained using deep neural networks.
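As a rough illustration of how such a generative PU setup could look in code, the sketch below pairs a positive-sample generator and a negative-sample generator with two discriminators, one anchored on labeled positive data and one on unlabeled data. This is a minimal, assumption-laden PyTorch sketch rather than the GenPU implementation: the network sizes, class-prior weighting, and loss combination are placeholders.

```python
# Hedged sketch of a GenPU-style setup (illustrative; not the authors' code).
import torch
import torch.nn as nn

def mlp(in_dim, out_dim):
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(), nn.Linear(128, out_dim))

latent_dim, data_dim = 16, 2
G_pos, G_neg = mlp(latent_dim, data_dim), mlp(latent_dim, data_dim)  # generators for synthetic P / N samples
D_pos, D_unl = mlp(data_dim, 1), mlp(data_dim, 1)                    # discriminators for P data and U data

opt_g = torch.optim.Adam(list(G_pos.parameters()) + list(G_neg.parameters()), lr=2e-4)
opt_d = torch.optim.Adam(list(D_pos.parameters()) + list(D_unl.parameters()), lr=2e-4)
bce = nn.BCEWithLogitsLoss()

def d_step(x_pos, x_unl, prior_pos=0.5):
    """Discriminator update: D_pos anchors on labeled P data, D_unl on unlabeled
    data; both try to reject the generators' synthetic P / N samples."""
    z = torch.randn(x_pos.size(0), latent_dim)
    fake_p, fake_n = G_pos(z).detach(), G_neg(z).detach()
    loss = (bce(D_pos(x_pos), torch.ones_like(D_pos(x_pos)))
            + bce(D_pos(fake_p), torch.zeros_like(D_pos(fake_p)))
            + bce(D_unl(x_unl), torch.ones_like(D_unl(x_unl)))
            + prior_pos * bce(D_unl(fake_p), torch.zeros_like(D_unl(fake_p)))
            + (1 - prior_pos) * bce(D_unl(fake_n), torch.zeros_like(D_unl(fake_n))))
    opt_d.zero_grad(); loss.backward(); opt_d.step()

def g_step(batch_size, prior_pos=0.5):
    """Generator update: synthetic P samples should fool D_pos, and the
    prior-weighted mixture of synthetic P / N samples should fool D_unl."""
    z = torch.randn(batch_size, latent_dim)
    fake_p, fake_n = G_pos(z), G_neg(z)
    loss = (bce(D_pos(fake_p), torch.ones_like(D_pos(fake_p)))
            + prior_pos * bce(D_unl(fake_p), torch.ones_like(D_unl(fake_p)))
            + (1 - prior_pos) * bce(D_unl(fake_n), torch.ones_like(D_unl(fake_n))))
    opt_g.zero_grad(); loss.backward(); opt_g.step()
```

Once trained, samples drawn from G_pos and G_neg could be labeled by their generator of origin and used as an unlimited training stream for a downstream classifier, which is the use the abstract describes.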


2020 ◽  
Author(s):  
Kun Chen ◽  
Manning Wang ◽  
Zhijian Song

Abstract Background: Deep neural networks have been widely used in medical image segmentation and have achieved state-of-the-art performance in many tasks. However, unlike the segmentation of natural images or video frames, the manual segmentation of anatomical structures in medical images requires high expertise, so the scale of labeled training data is very small, which is a major obstacle to improving the performance of deep neural networks in medical image segmentation. Methods: In this paper, we propose a new end-to-end generation-segmentation framework that integrates Generative Adversarial Networks (GANs) and a segmentation network and trains them simultaneously. The novelty is that during the training of the GAN, the intermediate synthetic images produced by its generator are used to pre-train the segmentation network. As the training of the GAN advances, the synthetic images evolve gradually from being very coarse to containing more realistic textures, and these images help train the segmentation network gradually. After the GAN has been trained, the segmentation network is fine-tuned on the real labeled images. Results: We evaluated the proposed framework on four datasets: 2D cardiac and lung datasets and 3D prostate and liver datasets. Compared with the original U-Net and CE-Net, our framework achieves better segmentation performance, and it also obtains better results than U-Net on small datasets. In addition, our framework is more effective than the usual data augmentation methods. Conclusions: The proposed framework can be used as a pre-training method for segmentation networks, which helps to obtain better segmentation results. Our method can mitigate the shortcomings of current data augmentation methods to some extent.
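A hedged sketch of the two training phases described above might look as follows. The segmentation network, the conditional generator interface `generator(masks)`, and the schedule are placeholders, not the paper's implementation (the paper uses U-Net/CE-Net-style segmenters and a full GAN).

```python
# Illustrative two-phase training: pre-train on GAN synthetic images, fine-tune on real data.
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Stand-in for the U-Net / CE-Net segmenters: image -> per-pixel logits."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                                 nn.Conv2d(16, 1, 3, padding=1))
    def forward(self, x):
        return self.net(x)

seg = TinySegNet()
opt = torch.optim.Adam(seg.parameters(), lr=1e-3)
seg_loss = nn.BCEWithLogitsLoss()

def pretrain_on_synthetic(generator, masks, steps):
    """Phase 1: while the GAN trains, its (initially coarse) outputs G(mask), paired
    with the conditioning masks, serve as free labeled data for the segmenter."""
    for _ in range(steps):
        with torch.no_grad():
            fake_imgs = generator(masks)          # synthetic images for known masks
        loss = seg_loss(seg(fake_imgs), masks)
        opt.zero_grad(); loss.backward(); opt.step()

def finetune_on_real(real_imgs, real_masks, steps):
    """Phase 2: after GAN training, fine-tune on the small set of real labeled images."""
    for _ in range(steps):
        loss = seg_loss(seg(real_imgs), real_masks)
        opt.zero_grad(); loss.backward(); opt.step()
```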


2021 ◽  
Vol 2021 ◽  
pp. 1-15
Author(s):  
Zhidong Shen ◽  
Ting Zhong

Artificial Intelligence is widely applied today, and the accompanying privacy leakage problems have attracted growing attention. Attacks such as model inference attacks on deep neural networks can easily extract user information from trained models, so it is necessary to protect privacy in deep learning. Differential privacy, a popular topic in privacy preservation in recent years, provides a rigorous privacy guarantee and can also be used to preserve privacy in deep learning. Although many articles have proposed different methods for combining differential privacy and deep learning, there is no comprehensive paper that analyzes and compares the differences and connections between these techniques. For this purpose, this paper compares different differentially private methods in deep learning. We comparatively analyze and classify several deep learning models under differential privacy. We also pay attention to the application of differential privacy in Generative Adversarial Networks (GANs), comparing and analyzing these models. Finally, we summarize the applications of differential privacy in deep neural networks.
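One concrete way differential privacy is commonly combined with deep learning is DP-SGD: clip each example's gradient to bound its sensitivity, then add calibrated Gaussian noise before the parameter update. The sketch below is illustrative only; the model, clipping norm, and noise multiplier are assumed values, and it loops over examples rather than using an optimized per-sample-gradient library.

```python
# Hedged DP-SGD sketch (per-example gradient clipping + Gaussian noise).
import torch
import torch.nn as nn

model = nn.Linear(20, 2)
loss_fn = nn.CrossEntropyLoss()
clip_norm, noise_mult, lr = 1.0, 1.1, 0.1   # assumed hyperparameters

def dp_sgd_step(xs, ys):
    """One DP-SGD step over a batch of examples."""
    accum = [torch.zeros_like(p) for p in model.parameters()]
    for x, y in zip(xs, ys):                                   # per-example gradients
        model.zero_grad()
        loss_fn(model(x.unsqueeze(0)), y.unsqueeze(0)).backward()
        per_ex = [p.grad.detach().clone() for p in model.parameters()]
        norm = torch.sqrt(sum(g.pow(2).sum() for g in per_ex))
        scale = (clip_norm / (norm + 1e-12)).clamp(max=1.0)    # clip to bound sensitivity
        for acc, g in zip(accum, per_ex):
            acc.add_(g * scale)
    with torch.no_grad():
        for p, g in zip(model.parameters(), accum):
            noise = torch.randn_like(g) * noise_mult * clip_norm   # calibrated Gaussian noise
            p.add_(-(lr / len(xs)) * (g + noise))

# Toy usage: a batch of 8 examples with 20 features and 2 classes.
xs, ys = torch.randn(8, 20), torch.randint(0, 2, (8,))
dp_sgd_step(xs, ys)
```

The same clip-and-noise idea carries over to GAN training under differential privacy, typically by applying it to the discriminator's gradients.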


Author(s):  
Chaowei Xiao ◽  
Bo Li ◽  
Jun-yan Zhu ◽  
Warren He ◽  
Mingyan Liu ◽  
...  

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them with high perceptual quality and high efficiency requires more research effort. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, which can potentially accelerate adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models achieve high attack success rates under state-of-the-art defenses compared to other attacks. Our attack placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
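The PyTorch sketch below illustrates the kind of generator update AdvGAN describes: a generator emits a bounded perturbation added to the input, and the loss combines a GAN realism term, a misclassification term against the target model, and a hinge penalty on the perturbation size. The networks, loss weights, and the `target_model` interface are assumptions for illustration, not the authors' code.

```python
# Hedged AdvGAN-style generator update on flattened MNIST-like inputs in [0, 1].
import torch
import torch.nn as nn
import torch.nn.functional as F

G = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 784), nn.Tanh())  # perturbation generator
D = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 1))               # realism discriminator
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
eps, c_bound, w_hinge = 0.3, 0.1, 10.0   # perturbation budget, hinge threshold, hinge weight (assumed)

def generator_step(target_model, x, y_true):
    """One generator update: fool the target model while keeping x_adv realistic and close to x."""
    perturb = eps * G(x)                                            # bounded additive perturbation
    x_adv = torch.clamp(x + perturb, 0.0, 1.0)
    loss_adv = -F.cross_entropy(target_model(x_adv), y_true)        # push prediction away from the true label
    loss_gan = F.binary_cross_entropy_with_logits(                  # x_adv should look "real" to D
        D(x_adv), torch.ones(x.size(0), 1))
    loss_hinge = torch.clamp(perturb.norm(dim=1) - c_bound, min=0.0).mean()  # keep perturbation small
    loss = loss_adv + loss_gan + w_hinge * loss_hinge
    opt_g.zero_grad(); loss.backward(); opt_g.step()
    return x_adv.detach()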


Electronics ◽  
2021 ◽  
Vol 11 (1) ◽  
pp. 118
Author(s):  
Holly Burrows ◽  
Javad Zarrin ◽  
Lakshmi Babu-Saheer ◽  
Mahdi Maktab-Dar-Oghaz

It is becoming increasingly apparent that a significant proportion of the population suffers from mental health problems, such as stress, depression, and anxiety. These issues result from a vast range of factors, including genetic conditions, social circumstances, and lifestyle influences. A key cause, or contributor, for many people is their work; a poor mental state can be exacerbated by a person's job and working environment. Additionally, as the information age continues to burgeon, people are increasingly sedentary in their working lives, spending more of their days seated and less time moving around, and a decrease in physical activity is well known to be detrimental to mental well-being. There is therefore a need for innovative research and development to combat negativity early, and solutions based on Artificial Intelligence have great potential in this field. This work proposes a solution to this problem using two Artificial Intelligence techniques, namely Convolutional Neural Networks (CNNs) and Generative Adversarial Networks (GANs). A CNN is trained to predict when an individual is experiencing negative emotions, achieving a top accuracy of 80.38% with a loss of 0.42. A GAN is trained to synthesise images from an input domain that can be attributed to evoking positive emotions. A Graphical User Interface is created to display the generated media to users in order to boost mood and reduce feelings of stress. The work demonstrates the capability of Deep Learning to identify stress and negative mood, and the strategies that can be implemented to reduce them.
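As a rough idea of the classification component, a minimal PyTorch CNN for binary negative-versus-positive emotion prediction from face images might look like the sketch below; the architecture, input size, and label scheme are assumptions, not the authors' network.

```python
# Minimal binary emotion-state CNN (illustrative architecture only).
import torch
import torch.nn as nn

class EmotionCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 48x48 -> 24x24
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))  # 24x24 -> 12x12
        self.classifier = nn.Linear(32 * 12 * 12, 2)   # logits: negative vs. non-negative

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

model = EmotionCNN()
logits = model(torch.randn(4, 1, 48, 48))   # batch of 4 grayscale 48x48 faces
print(logits.shape)                         # torch.Size([4, 2])
```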

