A CMA-ES-Based Adversarial Attack on Black-Box Deep Neural Networks

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 172938-172947
Author(s):  
Xiaohui Kuang ◽  
Hongyi Liu ◽  
Ye Wang ◽  
Qikun Zhang ◽  
Quanxin Zhang ◽  
...  
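
Though this first entry carries only bibliographic data, its title names the approach: CMA-ES, a gradient-free evolution strategy, is used to search for adversarial perturbations using only the model's black-box output scores. Below is a minimal sketch of that general idea, not the authors' exact formulation (query_model, the tanh bounding of the perturbation, and the stop rule are illustrative assumptions), using the cma package:

import numpy as np
import cma  # pip install cma

def attack(image, true_label, query_model, eps=0.05, iters=100):
    # image: flattened pixel array in [0, 1]; query_model: black-box function
    # returning class probabilities (both are assumptions for illustration).
    def fitness(delta):
        # Bound the perturbation to [-eps, eps] via tanh, clip to valid pixels.
        x_adv = np.clip(image + eps * np.tanh(np.asarray(delta)), 0.0, 1.0)
        return float(query_model(x_adv)[true_label])  # CMA-ES minimizes this

    es = cma.CMAEvolutionStrategy(np.zeros(image.size), 0.5)
    for _ in range(iters):
        candidates = es.ask()  # sample a population of candidate perturbations
        es.tell(candidates, [fitness(c) for c in candidates])
        if es.result.fbest < 0.5:  # true class no longer dominant (rough stop rule)
            break
    return np.clip(image + eps * np.tanh(np.asarray(es.result.xbest)), 0.0, 1.0)
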
2020 ◽  
Vol 34 (07) ◽  
pp. 10901-10908 ◽  
Author(s):  
Abdullah Hamdi ◽  
Matthias Mueller ◽  
Bernard Ghanem

One major factor impeding more widespread adoption of deep neural networks (DNNs) is their lack of robustness, which is essential for safety-critical applications such as autonomous driving. This has motivated much recent work on adversarial attacks for DNNs, most of which focuses on pixel-level perturbations devoid of semantic meaning. In contrast, we present a general framework for adversarial attacks on trained agents, which covers semantic perturbations to the environment of the agent performing the task as well as pixel-level attacks. To do this, we reframe the adversarial attack problem as learning a distribution of parameters that always fools the agent. In the semantic case, our proposed adversary (denoted BBGAN) is trained to sample parameters that describe the environment with which the black-box agent interacts, such that the agent performs its dedicated task poorly in this environment. We apply BBGAN to three different tasks, primarily targeting aspects of autonomous navigation: object detection, self-driving, and autonomous UAV racing. On these tasks, BBGAN can generate failure cases that consistently fool a trained agent.
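
A hedged sketch of the outer loop this abstract implies: probe the space of environment parameters, score the black-box agent on episodes generated from them, and keep the parameters on which it fails, so that a generative model (the BBGAN of the abstract) can then be fit to that failure set. run_episode, the sampling range, and the failure threshold are illustrative assumptions, not details from the paper:

import numpy as np

def collect_failure_set(run_episode, param_dim, n_samples=10_000, fail_thresh=0.2):
    # Randomly probe the environment-parameter space and keep the parameters
    # on which the black-box agent scores poorly.
    failures = []
    for _ in range(n_samples):
        theta = np.random.uniform(-1.0, 1.0, size=param_dim)  # candidate environment parameters
        score = run_episode(theta)  # black-box agent's task score in this environment
        if score < fail_thresh:
            failures.append(theta)
    return np.array(failures)

# A GAN (the "BBGAN" of the abstract) would then be fit to this failure set so
# that its generator samples environment parameters that consistently fool the agent.
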


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research on visualization and pattern recognition in computer science. Although deep neural networks perform well at image and voice recognition, pattern analysis, and intrusion detection, they perform poorly on adversarial examples: adding a small amount of noise to the original data can cause a deep neural network to misclassify it, even though humans still perceive it as normal. In this paper, a diversity adversarial training method that is robust against adversarial attacks is demonstrated. In this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, and MNIST and Fashion-MNIST were used as datasets. Results show that the diversity training method lowered the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
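
The abstract does not spell out how the various adversarial samples are generated. The sketch below shows one plausible reading in TensorFlow (the paper's stated framework): augment each training batch with FGSM adversarial copies at several perturbation strengths. The multi-eps FGSM scheme is an assumption for illustration, not the paper's confirmed method:

import tensorflow as tf

loss_fn = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)

def fgsm(model, x, y, eps):
    # One-step FGSM: perturb x in the direction that increases the loss.
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = loss_fn(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def train_step(model, optimizer, x, y, eps_set=(0.05, 0.1, 0.2)):
    # Build a "diverse" batch: the clean samples plus one adversarial copy per eps.
    xs = [x] + [fgsm(model, x, y, e) for e in eps_set]
    ys = [y] * (len(eps_set) + 1)
    x_all, y_all = tf.concat(xs, axis=0), tf.concat(ys, axis=0)
    with tf.GradientTape() as tape:
        loss = loss_fn(y_all, model(x_all, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
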


2021 ◽  
Vol 3 (4) ◽  
pp. 966-989
Author(s):  
Vanessa Buhrmester ◽  
David Münch ◽  
Michael Arens

Deep Learning is a state-of-the-art technique for making inferences on extensive or complex data. Because of their multilayer nonlinear structure, Deep Neural Networks act as black box models and are often criticized as non-transparent, with predictions that humans cannot trace. Furthermore, the models learn from artificially generated datasets, which often do not reflect reality. By basing decision-making algorithms on Deep Neural Networks, prejudice and unfairness may be promoted unknowingly due to this lack of transparency. Hence, several so-called explanators, or explainers, have been developed. Explainers try to give insight into the inner structure of machine learning black boxes by analyzing the connection between the input and output. In this survey, we present the mechanisms and properties of explaining systems for Deep Neural Networks in Computer Vision tasks. We give a comprehensive overview of the taxonomy of related studies and compare several survey papers that deal with explainability in general. We identify drawbacks and gaps and summarize further research ideas.
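
As a concrete instance of the simplest family of explainers such surveys cover, a vanilla gradient saliency map scores each input pixel by how sensitive the top class score is to it. This is an illustrative example of the input-output analysis described above, not a method attributed to the survey's authors:

import tensorflow as tf

def saliency_map(model, image):
    # Gradient of the top class score with respect to the input pixels.
    x = tf.expand_dims(tf.convert_to_tensor(image, dtype=tf.float32), 0)
    with tf.GradientTape() as tape:
        tape.watch(x)
        scores = model(x, training=False)
        top_score = tf.reduce_max(scores[0])  # score of the predicted class
    grad = tape.gradient(top_score, x)
    return tf.reduce_max(tf.abs(grad[0]), axis=-1)  # per-pixel saliency, max over channels
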


2021 ◽  
Vol 72 ◽  
pp. 1-37
Author(s):  
Mike Wu ◽  
Sonali Parbhoo ◽  
Michael C. Hughes ◽  
Volker Roth ◽  
Finale Doshi-Velez

Deep models have advanced prediction in many domains, but their lack of interpretability remains a key barrier to adoption in many real-world applications. There exists a large body of work aiming to help humans understand these black box functions at varying levels of granularity, for example through distillation, gradients, or adversarial examples. These methods, however, all tackle interpretability as a separate process after training. In this work, we take a different approach and explicitly regularize deep models so that they are well-approximated by processes that humans can step through in little time. Specifically, we train several families of deep neural networks to resemble compact, axis-aligned decision trees without significant compromises in accuracy. The resulting axis-aligned decision functions make tree-regularized models uniquely easy for humans to interpret. Moreover, for situations in which a single, global tree is a poor estimator, we introduce a regional tree regularizer that encourages the deep model to resemble a compact, axis-aligned decision tree in predefined, human-interpretable contexts. Using intuitive toy examples, benchmark image datasets, and medical tasks for patients in critical care and with HIV, we demonstrate that this new family of tree regularizers yields models that are easier for humans to simulate than those trained with L1 or L2 penalties, without sacrificing predictive power.
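
A hedged sketch of the quantity tree regularization penalizes: fit a compact, axis-aligned decision tree to mimic the network's predictions, then measure the average decision-path length, i.e. how many steps a human would need to simulate a prediction. The paper optimizes a differentiable surrogate of this quantity during training; only the metric itself is shown here, with net_predict as a placeholder for the deep model:

import numpy as np
from sklearn.tree import DecisionTreeClassifier

def average_path_length(net_predict, X, max_depth=4):
    # Fit a compact, axis-aligned tree to mimic the network's predictions on X.
    y_hat = net_predict(X)
    tree = DecisionTreeClassifier(max_depth=max_depth).fit(X, y_hat)
    # decision_path marks, per sample, every node visited from root to leaf;
    # its row sums are the path lengths a human would have to step through.
    path_lengths = tree.decision_path(X).sum(axis=1)
    return float(np.mean(path_lengths))  # lower = easier for a human to simulate
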


The authors apply deep neural networks, a type of machine learning method, to model agency mortgage-backed security (MBS) 30-year, fixed-rate pool prepayment behaviors. The neural networks model (NNM) produces highly accurate fits to historical prepayment patterns as well as accurate sensitivities to economic and pool-level risk factors. These results are comparable with the results and intuitions obtained from a traditional agency pool-level prepayment model that is in production and was built through many iterations of trial and error over months and years. This example shows that an NNM can process large datasets efficiently, capture very complex prepayment patterns, and model a large group of risk factors that are highly nonlinear and interactive. The authors also examine potential shortcomings of this approach, including nontransparency/“black-box” issues, model overfitting, and regime-shift issues.
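
An illustrative-only sketch of the modeling setup described above: a small feed-forward network mapping pool-level and economic risk factors (rate incentive, loan age, burnout, and similar features would be hypothetical choices) to a prepayment speed. Nothing here reflects the authors' actual architecture or feature set:

import tensorflow as tf

def build_prepayment_model(n_factors):
    # Small feed-forward network from risk factors to a prepayment rate in [0, 1].
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_factors,)),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(64, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),  # SMM/CPR-like rate
    ])
    model.compile(optimizer="adam", loss="mse")
    return model
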

