adversarial examples
Recently Published Documents


TOTAL DOCUMENTS: 630 (FIVE YEARS: 561)
H-INDEX: 20 (FIVE YEARS: 11)

2023, Vol. 55 (1), pp. 1-35
Author(s): Deqiang Li, Qianmu Li, Yanfang (Fanny) Ye, Shouhuai Xu

Malicious software (malware) is a major cyber threat that has to be tackled with Machine Learning (ML) techniques because millions of new malware examples are injected into cyberspace on a daily basis. However, ML is vulnerable to attacks known as adversarial examples. In this article, we survey and systematize the field of Adversarial Malware Detection (AMD) through the lens of a unified conceptual framework of assumptions, attacks, defenses, and security properties. This not only leads us to map attacks and defenses to partial order structures, but also allows us to clearly describe the attack-defense arms race in the AMD context. We draw a number of insights, including: knowing the defender’s feature set is critical to the success of transfer attacks; the effectiveness of practical evasion attacks largely depends on the attacker’s freedom in conducting manipulations in the problem space; knowing the attacker’s manipulation set is critical to the defender’s success; and the effectiveness of adversarial training depends on the defender’s capability in identifying the most powerful attack. We also discuss a number of future research directions.
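For readers unfamiliar with the adversarial-training arms race the survey refers to, the defense can be sketched as an inner maximization (search for the strongest perturbation available to the attacker) followed by an outer minimization (update the model on those examples). The PyTorch-style sketch below is illustrative only; `pgd_attack`, the epsilon budget, and the training loop are assumptions, not the authors' AMD-specific method.

```python
# Minimal sketch of a generic adversarial-training loop (inner max / outer min).
# All names and hyperparameters here are illustrative assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, alpha=0.02, steps=10):
    """Projected gradient ascent on the loss within an L-infinity ball of radius eps."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv = x_adv.detach().requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()   # ascend the loss
        x_adv = x + (x_adv - x).clamp(-eps, eps)       # project back into the ball
        x_adv = x_adv.clamp(0.0, 1.0)                  # keep a valid feature range
    return x_adv.detach()

def adversarial_training_epoch(model, loader, optimizer):
    model.train()
    for x, y in loader:
        x_adv = pgd_attack(model, x, y)           # inner maximization: find a strong attack
        optimizer.zero_grad()
        loss = F.cross_entropy(model(x_adv), y)   # outer minimization: train on it
        loss.backward()
        optimizer.step()
```

As the abstract notes, the defense is only as strong as the inner attack: if the attacker can manipulate features outside the defender's assumed manipulation set, training on weaker perturbations gives little protection.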


2023, Vol. 55 (1), pp. 1-38
Author(s): Gabriel Resende Machado, Eugênio Silva, Ronaldo Ribeiro Goldschmidt

Deep Learning algorithms have achieved state-of-the-art performance in Image Classification. For this reason, they have been adopted even in security-critical applications, such as biometric recognition systems and self-driving cars. However, recent works have shown that these algorithms, which can even surpass human capabilities, are vulnerable to adversarial examples. In Computer Vision, adversarial examples are images containing subtle perturbations, generated by malicious optimization algorithms, that fool classifiers. In an attempt to mitigate these vulnerabilities, numerous countermeasures have recently been proposed in the literature. However, devising an efficient defense mechanism has proven to be a difficult task, since many approaches have turned out to be ineffective against adaptive attackers. Thus, this article provides a review of the latest research progress on Adversarial Machine Learning in Image Classification from a defender’s perspective. It introduces novel taxonomies for categorizing adversarial attacks and defenses, and discusses possible reasons for the existence of adversarial examples. In addition, it provides guidance to assist researchers in devising and evaluating defenses. Finally, based on the reviewed literature, the article suggests some promising directions for future research.
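As a concrete illustration of the "subtle perturbations generated by malicious optimization algorithms" mentioned above, the fast gradient sign method (FGSM) perturbs an image by one signed gradient step. The sketch below is a generic example under assumed names (`model`, `epsilon`); it is not an attack or defense proposed in this article.

```python
# A minimal sketch of a gradient-based attack of the kind the survey covers (FGSM):
# the perturbation is the sign of the loss gradient w.r.t. the input, scaled by a small
# epsilon so the change stays visually subtle. `model` and `epsilon` are assumptions.
import torch
import torch.nn.functional as F

def fgsm_example(model, image, label, epsilon=8 / 255):
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    perturbation = epsilon * image.grad.sign()      # one signed gradient step
    return (image + perturbation).clamp(0.0, 1.0).detach()
```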


2022, Vol. 122, pp. 108249
Author(s): Tao Dai, Yan Feng, Bin Chen, Jian Lu, Shu-Tao Xia

Author(s): Jiaqi Zhu, Feng Dai, Lingyun Yu, Hongtao Xie, Lidong Wang, ...

2022
Author(s): Duc-Anh Nguyen, Kha Do Minh, Khoi Nguyen Le, Minh Nguyen Le, Pham Ngoc Hung

This paper proposes a method to mitigate two major issues of Adversarial Transformation Networks (ATN): the low diversity and the low quality of adversarial examples. To address the first issue, this research proposes a pattern-based stacked convolutional autoencoder that generalizes ATN. The proposed autoencoder supports different patterns such as the all-feature pattern, the border-feature pattern, and the class-model-map pattern. To address the second issue, this paper presents an algorithm that improves the quality of adversarial examples in terms of the L0-norm and the L2-norm. The algorithm employs adversarial feature ranking heuristics, such as JSMA and COI, to prioritize adversarial features. To demonstrate the advantages of the proposed method, comprehensive experiments were conducted on the MNIST and CIFAR-10 datasets. For the first issue, the proposed autoencoder can generate diverse adversarial examples with an average success rate above 99%. For the second issue, the proposed algorithm not only improves the quality of adversarial examples significantly but also maintains the average success rate. In terms of the L0-norm, it reduces the number of adversarial features from hundreds to one. In terms of the L2-norm, it reduces the average distance considerably. These results show that the proposed method is capable of generating high-quality and diverse adversarial examples in practice.
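The quality-improvement idea described above can be sketched as follows: measure the L0 and L2 distance of a perturbation, then greedily revert the least important perturbed features while the example still fools the model. In the sketch below, the `ranking` scores stand in for the paper's JSMA/COI heuristics, and all function names are illustrative assumptions rather than the authors' exact algorithm.

```python
# Hedged sketch: revert low-priority adversarial features to lower L0 and L2 distance
# while keeping the example misclassified. `ranking` holds per-feature importance scores
# (e.g. from a JSMA-like saliency); names and shapes are assumptions.
import torch

def perturbation_norms(x, x_adv):
    delta = (x_adv - x).flatten()
    l0 = int((delta != 0).sum())     # number of changed features
    l2 = float(delta.norm(p=2))      # Euclidean distance
    return l0, l2

def reduce_perturbation(model, x, x_adv, true_label, ranking):
    """Revert perturbed features in ascending order of importance while the
    prediction stays different from the true label."""
    x_ref = x_adv.clone()
    changed = (x_adv != x).flatten().nonzero().flatten()
    order = changed[ranking.flatten()[changed].argsort()]    # least important first
    for idx in order:
        candidate = x_ref.clone()
        candidate.view(-1)[idx] = x.view(-1)[idx]            # undo this feature
        if model(candidate.unsqueeze(0)).argmax(dim=1).item() != true_label:
            x_ref = candidate                                 # still adversarial: keep the revert
    return x_ref
```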


2022, pp. 108383
Author(s): Wenjian Luo, Chenwang Wu, Li Ni, Nan Zhou, Zhenya Zhang
Keyword(s):  

Author(s): Rui Zhang, Hui Xia, Chunqiang Hu, Cheng Zhang, Chao Liu, ...
Keyword(s):  

2021, pp. 1-10
Author(s): Guangling Sun, Haoqi Hu, Xinpeng Zhang, Xiaofeng Lu

Universal Adversarial Perturbations (UAPs), which are image-agnostic adversarial perturbations, have been demonstrated to successfully deceive computer vision models. Existing data-dependent UAPs use either the internal layers’ activations or the output layer’s decision values as supervision. In this paper, we use both to drive the supervised learning of a UAP, termed the fully supervised UAP (FS-UAP), and design a progressive optimization strategy to solve for it. Specifically, we define an internal-layer supervised objective, relying on the activations of multiple major internal layers, to estimate the deviation of adversarial examples from legitimate examples. We also define an output-layer supervised objective, relying on the logits of the output layer, to evaluate the degree of attack. In addition, we use the UAP found in the previous stage as the initial solution of the next stage, so as to progressively optimize the UAP stage by stage. We use seven networks and the ImageNet dataset to evaluate the proposed FS-UAP, and provide an in-depth analysis of the latent factors affecting the performance of universal attacks. The experimental results show that our FS-UAP (i) has a powerful capability of fooling CNNs, (ii) has superior transferability across models and weak data dependence, and (iii) is appropriate for both untargeted and targeted attacks.
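The two supervision signals described above can be sketched as a combined objective: an internal-layer term that pushes adversarial activations away from clean ones, plus an output-layer term computed on the logits. In the sketch below, the forward hooks, the weighting `lam`, and the single signed-gradient update are illustrative assumptions rather than the paper's exact progressive optimization.

```python
# Hedged sketch of one optimization step combining internal-layer and output-layer
# supervision for a universal perturbation. Layer choice, weighting, and the update
# rule are assumptions, not the authors' exact formulation.
import torch
import torch.nn.functional as F

def fs_uap_step(model, feature_layers, images, uap, lam=1.0, lr=0.01, eps=10 / 255):
    feats_clean, feats_adv = [], []

    def make_hook(store):
        return lambda module, inp, out: store.append(out)

    # Clean pass: record reference activations and the clean predictions.
    handles = [layer.register_forward_hook(make_hook(feats_clean)) for layer in feature_layers]
    with torch.no_grad():
        labels = model(images).argmax(dim=1)
    for h in handles:
        h.remove()

    # Adversarial pass: record activations of the perturbed batch.
    uap = uap.clone().detach().requires_grad_(True)
    handles = [layer.register_forward_hook(make_hook(feats_adv)) for layer in feature_layers]
    logits_adv = model((images + uap).clamp(0.0, 1.0))
    for h in handles:
        h.remove()

    # Internal-layer supervision: increase deviation of adversarial activations.
    layer_loss = sum(-F.mse_loss(fa, fc) for fa, fc in zip(feats_adv, feats_clean))
    # Output-layer supervision: decrease confidence in the clean prediction.
    logit_loss = -F.cross_entropy(logits_adv, labels)
    loss = layer_loss + lam * logit_loss

    grad = torch.autograd.grad(loss, uap)[0]
    return (uap - lr * grad.sign()).clamp(-eps, eps).detach()   # keep the UAP within budget
```

In a progressive, stage-wise setup like the one described, the perturbation returned by one stage would simply be passed back in as the starting `uap` for the next stage.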

