A Novel Defensive Strategy for Facial Manipulation Detection Combining Bilateral Filtering and Joint Adversarial Training

2021 ◽  
Vol 2021 ◽  
pp. 1-10
Author(s):  
Yifan Luo ◽  
Feng Ye ◽  
Bin Weng ◽  
Shan Du ◽  
Tianqiang Huang

Facial manipulation enables facial expressions to be tampered with or facial identities to be replaced in videos. The resulting fake videos are so realistic that they are difficult even for the human eye to distinguish, which poses a great threat to social and public information security. A number of facial manipulation detectors have been proposed to address this threat. However, previous studies have shown that the accuracy of these detectors is sensitive to adversarial examples, and existing defense methods are very limited in terms of applicable scenarios and defensive effect. This paper proposes a new defense strategy for facial manipulation detectors that combines a passive defense method, bilateral filtering, with a proactive defense method, joint adversarial training, to mitigate the vulnerability of facial manipulation detectors to adversarial examples. Bilateral filtering is applied at the preprocessing stage to denoise adversarial inputs, without any modification to the model itself. Joint adversarial training acts at the training stage, mixing various adversarial examples with original examples to train the model, which yields a model that defends against multiple adversarial attacks. The experimental results show that the proposed defense strategy helps facial manipulation detectors counter adversarial examples.
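To make the two components concrete, here is a minimal sketch, assuming a PyTorch classifier and OpenCV: bilateral filtering as an input-preprocessing step and a joint adversarial training step that mixes the clean batch with adversarial batches crafted at several strengths. The filter parameters, the FGSM attack used to craft the perturbations, and the equal loss weighting are illustrative assumptions, not the authors' exact configuration.

```python
import cv2
import numpy as np
import torch
import torch.nn.functional as F

def bilateral_denoise(images, d=5, sigma_color=75, sigma_space=75):
    """Passive defense: bilateral-filter a batch of HWC uint8 images before inference."""
    return np.stack([cv2.bilateralFilter(img, d, sigma_color, sigma_space) for img in images])

def fgsm(model, x, y, eps):
    """One-step FGSM perturbation, used here only to illustrate the 'joint' training mix."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def joint_adversarial_step(model, optimizer, x, y, eps_list=(0.01, 0.03)):
    """Proactive defense: train on the original batch plus adversarial versions of it."""
    batches = [x] + [fgsm(model, x, y, eps) for eps in eps_list]
    optimizer.zero_grad()
    loss = sum(F.cross_entropy(model(b), y) for b in batches) / len(batches)
    loss.backward()
    optimizer.step()
    return loss.item()
```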

Author(s):  
Nag Mani ◽  
Melody Moh ◽  
Teng-Sheng Moh

Deep learning (DL) has been used globally in almost every sector of technology and society. Despite its huge success, DL models and applications have been susceptible to adversarial attacks, impacting the accuracy and integrity of these models. Many state-of-the-art models are vulnerable to attacks by well-crafted adversarial examples, which are perturbed versions of clean data with a small amount of added noise, imperceptible to the human eye, that can easily fool the targeted model. This paper introduces six of the most effective gradient-based adversarial attacks on the ResNet image recognition model and demonstrates the limitations of the traditional adversarial retraining technique. The authors then present a novel ensemble defense strategy based on adversarial retraining. The proposed method is capable of withstanding the six adversarial attacks on the CIFAR-10 dataset with accuracy greater than 89.31% and as high as 96.24%. The authors believe the design methodologies and experiments demonstrated are widely applicable to other domains of machine learning, DL, and computational intelligence security.
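As a rough sketch of how such an ensemble defense could be assembled, the snippet below averages the softmax outputs of several adversarially retrained copies of a classifier; the member models, the attacks each one was retrained against, and the averaging rule are assumptions, not the paper's exact design.

```python
import torch

class VotingEnsemble(torch.nn.Module):
    """Averages the softmax outputs of several adversarially retrained models."""
    def __init__(self, members):
        super().__init__()
        self.members = torch.nn.ModuleList(members)

    def forward(self, x):
        probs = [torch.softmax(m(x), dim=1) for m in self.members]
        return torch.stack(probs).mean(dim=0)

# Assumed usage: each member was retrained against a different gradient-based attack
# (e.g. FGSM, PGD, ...); predictions = VotingEnsemble(members)(batch).argmax(dim=1)
```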


2021 ◽  
Vol 11 (3) ◽  
pp. 1093
Author(s):  
Jeonghyun Lee ◽  
Sangkyun Lee

Convolutional neural networks (CNNs) have achieved tremendous success in solving complex classification problems. Motivated by this success, various compression methods have been proposed to downsize CNNs so that they can be deployed on resource-constrained embedded systems. However, compressed CNNs have recently been found to be vulnerable to adversarial examples, which is critical for security-sensitive systems because adversarial examples can cause CNNs to malfunction and can, in many cases, be crafted easily. In this paper, we propose a compression framework that produces compressed CNNs robust against such adversarial examples. To achieve this goal, our framework uses both pruning and knowledge distillation with adversarial training. We formulate the framework as an optimization problem and provide a solution algorithm based on the proximal gradient method, which is more memory-efficient than the popular ADMM-based compression approaches. In experiments, we show that our framework improves the trade-off between adversarial robustness and compression rate compared to the existing state-of-the-art adversarial pruning approach.
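A minimal sketch of the idea, assuming a PyTorch student/teacher pair: one update combines a distillation loss computed on adversarially perturbed inputs with a proximal (soft-thresholding) step that drives weights toward zero for pruning. The learning rate, temperature, and regularization constants are placeholders, and this is not the authors' exact algorithm.

```python
import torch
import torch.nn.functional as F

def soft_threshold(w, lam):
    """Proximal operator of the L1 norm: shrink weights toward zero (induces sparsity)."""
    return torch.sign(w) * torch.clamp(w.abs() - lam, min=0.0)

def prox_distill_step(student, teacher, x_adv, y, lr=1e-3, lam=1e-4, T=4.0, alpha=0.5):
    """One proximal-gradient update on adversarial inputs with knowledge distillation."""
    student.zero_grad()
    s_logits = student(x_adv)
    with torch.no_grad():
        t_logits = teacher(x_adv)
    kd = F.kl_div(F.log_softmax(s_logits / T, dim=1),
                  F.softmax(t_logits / T, dim=1),
                  reduction="batchmean") * T * T
    loss = alpha * F.cross_entropy(s_logits, y) + (1 - alpha) * kd
    loss.backward()
    with torch.no_grad():
        for p in student.parameters():
            if p.grad is None:
                continue
            p -= lr * p.grad                      # gradient step
            p.copy_(soft_threshold(p, lr * lam))  # proximal (sparsifying) step
    return loss.item()
```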


Sensors ◽  
2021 ◽  
Vol 21 (11) ◽  
pp. 3922
Author(s):  
Sheeba Lal ◽  
Saeed Ur Rehman ◽  
Jamal Hussain Shah ◽  
Talha Meraj ◽  
Hafiz Tayyab Rauf ◽  
...  

Due to the rapid growth in artificial intelligence (AI) and deep learning (DL) approaches, the security and robustness of the deployed algorithms need to be guaranteed. The susceptibility of DL algorithms to adversarial examples has been widely acknowledged: artificially crafted examples cause inputs that humans consider benign to be misclassified by DL models, and such threats have also been demonstrated in practical, physical-world scenarios. Thus, adversarial attacks and defenses, and the reliability of machine learning more broadly, have drawn growing interest and have become a hot research topic in recent years. We introduce a framework that combines a defensive model against the adversarial speckle-noise attack, adversarial training, and a feature fusion strategy to preserve classification with correct labelling. We evaluate and analyze adversarial attacks and defenses on retinal fundus images for the diabetic retinopathy recognition problem. On the retinal fundus images, which are prone to adversarial attacks, the proposed defensive model achieves 99% accuracy, demonstrating its robustness.
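The sketch below illustrates the three ingredients named above under assumed details: a multiplicative speckle-like perturbation, a training step that mixes clean and speckle-noised fundus images, and a simple fusion head that concatenates features from two backbones. The names, noise strength, and feature sizes are hypothetical.

```python
import torch
import torch.nn.functional as F

def speckle_attack(x, strength=0.1):
    """Multiplicative (speckle-like) noise: x * (1 + n), with n ~ N(0, strength^2)."""
    return (x * (1.0 + torch.randn_like(x) * strength)).clamp(0.0, 1.0)

class FusionClassifier(torch.nn.Module):
    """Concatenates feature vectors from two extractors before the final classifier."""
    def __init__(self, branch_a, branch_b, feat_a, feat_b, num_classes):
        super().__init__()
        self.branch_a, self.branch_b = branch_a, branch_b
        self.head = torch.nn.Linear(feat_a + feat_b, num_classes)

    def forward(self, x):
        fused = torch.cat([self.branch_a(x), self.branch_b(x)], dim=1)
        return self.head(fused)

def adversarial_train_step(model, optimizer, x, y):
    """Train on both clean images and their speckle-noised counterparts."""
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x), y) + F.cross_entropy(model(speckle_attack(x)), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```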


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance in image and voice recognition, pattern analysis, and intrusion detection, they perform poorly on adversarial examples: adding a small amount of noise to the original data can cause the resulting examples to be misclassified by deep neural networks, even though humans still perceive them as normal. This paper presents a robust diversity adversarial training method against adversarial attacks. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, with MNIST and Fashion-MNIST as the datasets. The results show that the diversity training method lowers the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
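A minimal sketch of the diversity idea: each training batch is augmented with adversarial examples from a randomly chosen attack configuration, so the model is exposed to a varied pool of perturbations. PyTorch is used here for brevity (the paper used TensorFlow), and the attack list, strengths, and mixing rule are illustrative assumptions.

```python
import random
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Single-step gradient-sign perturbation."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

def pgd(model, x, y, eps, step=0.01, iters=10):
    """Iterated FGSM, projected back into the eps-ball around the clean input."""
    x_adv = x.clone().detach()
    for _ in range(iters):
        x_adv = fgsm(model, x_adv, y, step)
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
    return x_adv

ATTACKS = [lambda m, x, y: fgsm(m, x, y, 0.1),
           lambda m, x, y: fgsm(m, x, y, 0.3),
           lambda m, x, y: pgd(m, x, y, 0.3)]

def diversity_training_step(model, optimizer, x, y):
    """Mix the clean batch with adversarial examples from a randomly picked attack."""
    attack = random.choice(ATTACKS)
    x_mix = torch.cat([x, attack(model, x, y)])
    y_mix = torch.cat([y, y])
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_mix), y_mix)
    loss.backward()
    optimizer.step()
    return loss.item()
```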


2020 ◽  
Vol 10 (22) ◽  
pp. 8079
Author(s):  
Sanglee Park ◽  
Jungmin So

State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Efforts to make models robust against such attacks have shown this to be a very difficult task. While many defense approaches have proven ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented with “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know which algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, it is still vulnerable to white-box attacks, where the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which are a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with one attack method helps defend against that particular method but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of those methods, the network can still lose accuracy if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to build a robust network using adversarial training, even in black-box settings where the attacker has restricted information about the target network.
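As a sketch of the black-box setting described here (an assumed protocol, not the paper's exact code), adversarial examples are crafted on a separately trained surrogate model and then transferred to the adversarially trained target, whose parameters the attacker never accesses.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps=0.3):
    """Craft a one-step adversarial example against the given (surrogate) model."""
    x = x.clone().detach().requires_grad_(True)
    F.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).clamp(0, 1).detach()

@torch.no_grad()
def transfer_attack_accuracy(target, surrogate, loader, eps=0.3):
    """Accuracy of the target model on examples crafted against the surrogate only."""
    correct = total = 0
    for x, y in loader:
        with torch.enable_grad():  # gradients are taken on the surrogate only
            x_adv = fgsm(surrogate, x, y, eps)
        correct += (target(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```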


Author(s):  
Chaowei Xiao ◽  
Bo Li ◽  
Jun-yan Zhu ◽  
Warren He ◽  
Mingyan Liu ◽  
...  

Deep neural networks (DNNs) have been found to be vulnerable to adversarial examples resulting from adding small-magnitude perturbations to inputs. Such adversarial examples can mislead DNNs into producing adversary-selected results. Different attack strategies have been proposed to generate adversarial examples, but producing them with high perceptual quality and greater efficiency requires further research. In this paper, we propose AdvGAN, which generates adversarial examples with generative adversarial networks (GANs) that can learn and approximate the distribution of original instances. For AdvGAN, once the generator is trained, it can generate perturbations efficiently for any instance, potentially accelerating adversarial training as a defense. We apply AdvGAN in both semi-whitebox and black-box attack settings. In semi-whitebox attacks, there is no need to access the original target model after the generator is trained, in contrast to traditional white-box attacks. In black-box attacks, we dynamically train a distilled model for the black-box model and optimize the generator accordingly. Adversarial examples generated by AdvGAN on different target models have a high attack success rate under state-of-the-art defenses compared to other attacks. Our attack placed first, with 92.76% accuracy, on a public MNIST black-box attack challenge.
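The sketch below shows a simplified, untargeted version of the kind of objective AdvGAN optimizes: a generator proposes a perturbation, a discriminator encourages the perturbed image to look like clean data, and an adversarial loss pushes the target classifier away from the true label, with a hinge term bounding the perturbation size. The networks, loss weights, and the untargeted formulation are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def advgan_generator_loss(G, D, target_model, x, y, c=0.3, alpha=1.0, beta=10.0):
    """Combined generator objective: realism (GAN) + misclassification + bounded perturbation."""
    delta = G(x)                          # generator outputs a perturbation
    x_adv = (x + delta).clamp(0, 1)

    d_out = D(x_adv)                      # discriminator should accept x_adv as clean data
    gan_loss = F.binary_cross_entropy_with_logits(d_out, torch.ones_like(d_out))

    # Untargeted adversarial loss: increase the target model's error on x_adv.
    adv_loss = -F.cross_entropy(target_model(x_adv), y)

    # Soft bound on the L2 norm of the perturbation.
    hinge_loss = torch.clamp(delta.flatten(1).norm(dim=1) - c, min=0).mean()

    return gan_loss + alpha * adv_loss + beta * hinge_loss
```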


2020 ◽  
Vol 25 (2(22)) ◽  
pp. 29-35
Author(s):  
Lyubov Zavalska

The article examines the manifestation of the communicative strategy of protection, which the addressee uses in reaction to the interlocutor's choice of conflict-oriented strategies of speech interaction in political talk shows. An overview of approaches to defining and distinguishing communication strategies is given, followed by a detailed analysis of the defense strategy in interactive political discourse. It is argued that the defensive strategy is tied to the communicative behavior of the addressee of a political conflict, who holds a weaker position, less convincing to the audience, but defends his beliefs by opposing the position of the speaker. Under such conditions, the addressee reacts to the aggressive speech behavior of the speaker but does not agree with him and does not seek understanding; on the contrary, he inflates the conflict and acts as a political opponent. The defense strategy was found to be represented by the communicative tactics of justification, evasion of response, and counteraction to provocation.


2020 ◽  
Vol 76 ◽  
pp. 69-91
Author(s):  
Andrzej Dawidczyk

The overall objective of the research is to present national reflection on defense in Poland, in particular an image of the defense planning culture. The article focuses on issues of defense strategy planning as part of the complete state defensive strategy and indicates the necessary directions for change in defense planning in light of changes in the security and defense environment of Poland in the 21st century. The study makes use of analysis and a critical review of normative documents, strategies, and doctrines. The collected data allow changes of an applied nature (projection method) to be indicated in the theory of strategic planning in the field of national defense. This makes it possible to show that Poland's desired future position in regional and even continental Europe depends on the anticipated direction of development of Polish defense potential, the defense system, and, of course, a clearly expressed strategy. The most important of these three elements is naturally the strategy, the preparation of which requires regular cycles of strategic reviews, which are unfortunately not feasible in Poland. The article also emphasizes the importance of ensuring the stability of the planning system, whose basic element is its experts, and whose regular turnover as a result of constant political changes causes considerable destabilization of the planning system. Finally, attention is drawn to the fact that the defense strategy must take into account the complexity and unpredictability of the security environment, which are its basic features today.


2020 ◽  
Vol 34 (01) ◽  
pp. 1169-1176
Author(s):  
Huangzhao Zhang ◽  
Zhuo Li ◽  
Ge Li ◽  
Lei Ma ◽  
Yang Liu ◽  
...  

Automated processing, analysis, and generation of source code are among the key activities in the software and system lifecycle. While deep learning (DL) exhibits a certain level of capability in handling these tasks, current state-of-the-art DL models still suffer from robustness issues and can be easily fooled by adversarial attacks. Unlike adversarial attacks on images, audio, and natural language, the structured nature of programming languages brings new challenges. In this paper, we propose a Metropolis-Hastings sampling-based identifier renaming technique, named Metropolis-Hastings Modifier (MHM), which generates adversarial examples for DL models specialized for source code processing. Our in-depth evaluation on a functionality classification benchmark demonstrates the effectiveness of MHM in generating adversarial examples of source code. The improved robustness and performance achieved through adversarial training with MHM further confirm the usefulness of DL-based methods for future fully automated source code processing.
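A heavily simplified, hypothetical sketch of a Metropolis-Hastings-style identifier renaming loop is shown below: it proposes renaming one identifier at a time and accepts proposals that increase the victim model's loss, or occasionally ones that decrease it, following the Metropolis rule. The token-level rename_identifier helper and the model_loss callable are stand-ins, not the authors' implementation.

```python
import math
import random

def rename_identifier(code_tokens, old, new):
    """Naive token-level rename (a stand-in for a parser-aware, semantics-preserving rename)."""
    return [new if tok == old else tok for tok in code_tokens]

def mh_rename_attack(code_tokens, identifiers, candidates, model_loss,
                     steps=100, temperature=1.0):
    """Metropolis-Hastings search over identifier renamings that raise model_loss."""
    current, cur_loss = code_tokens, model_loss(code_tokens)
    for _ in range(steps):
        old_name = random.choice(identifiers)
        new_name = random.choice(candidates)
        proposal = rename_identifier(current, old_name, new_name)
        prop_loss = model_loss(proposal)
        # Always accept loss increases; accept decreases with Metropolis probability.
        if prop_loss >= cur_loss or random.random() < math.exp((prop_loss - cur_loss) / temperature):
            current, cur_loss = proposal, prop_loss
            identifiers = [new_name if n == old_name else n for n in identifiers]
    return current
```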

