Partial Retraining Substitute Model for Query-Limited Black-Box Attacks

2020 ◽  
Vol 10 (20) ◽  
pp. 7168
Author(s):  
Hosung Park ◽  
Gwonsang Ryu ◽  
Daeseon Choi

Black-box attacks against deep neural network (DNN) classifiers are receiving increasing attention because they represent a more practical approach in the real world than white-box attacks. In black-box environments, adversaries have limited knowledge regarding the target model. This makes it difficult to estimate gradients for crafting adversarial examples, so powerful white-box algorithms cannot be directly applied to black-box attacks. Therefore, a well-known black-box attack strategy creates local DNNs, called substitute models, to emulate the target model. The adversaries then craft adversarial examples using the substitute models instead of the unknown target model. The substitute models are trained iteratively by querying the target model and observing the labels in its responses. However, emulating a target model usually requires numerous queries because new DNNs are trained from scratch. In this study, we propose a new training method for substitute models that minimizes the number of queries. We consider the number of queries an important factor for practical black-box attacks because real-world systems often restrict queries for security and financial reasons. To decrease the number of queries, the proposed method does not emulate the entire target model; it only adjusts the partial classification boundary relevant to the current attack. Furthermore, it uses no queries in the pre-training phase and issues queries only in the retraining phase. The experimental results indicate that the proposed method is effective in terms of the number of queries and attack success ratio against MNIST, VGGFace2, and ImageNet classifiers in query-limited black-box environments. Further, we demonstrate a black-box attack against a commercial classifier, Google AutoML Vision.
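To make the retraining idea concrete, below is a minimal PyTorch sketch of a query-limited substitute retraining loop in the spirit described above; the function names, augmentation scheme, and hyperparameters are illustrative assumptions, not the authors' implementation. Queries are spent only in the retraining phase, and only near the current attack input rather than across the whole input space.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of query-limited substitute retraining (illustrative,
# not the authors' code). The substitute is assumed to be pre-trained
# without queries; queries are spent only here, in the retraining
# phase, and only near the current attack input.

def retrain_substitute(substitute, query_target, x_attack,
                       steps=5, n_aug=16, eps=0.1, lr=1e-3):
    opt = torch.optim.Adam(substitute.parameters(), lr=lr)
    for _ in range(steps):
        # Perturb around the current attack sample only, instead of
        # trying to emulate the target across the whole input space.
        noise = eps * torch.randn(n_aug, *x_attack.shape)
        x_aug = (x_attack.unsqueeze(0) + noise).clamp(0, 1)
        y_aug = query_target(x_aug)  # each call consumes query budget
        opt.zero_grad()
        loss = F.cross_entropy(substitute(x_aug), y_aug)
        loss.backward()
        opt.step()
    return substitute
```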

2021 ◽  
Vol 7 ◽  
pp. e702
Author(s):  
Gaoming Yang ◽  
Mingwei Li ◽  
Xianjing Fang ◽  
Ji Zhang ◽  
Xingzhu Liang

Adversarial examples are regarded as a security threat to deep learning models, and there are many ways to generate them. However, most existing methods require query access to the target model while they work. In practice, an attacker who issues too many queries is easily detected, a problem that is especially acute in the black-box setting. To solve this problem, we propose the Attack Without a Target Model (AWTM). Our algorithm does not specify any target model when generating adversarial examples, so it does not need to query the target at all. Experimental results show that it achieves a maximum attack success rate of 81.78% on the MNIST dataset and 87.99% on the CIFAR-10 dataset. In addition, being GAN-based, it has a low time cost.
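The key property claimed above is that generation costs zero target queries once the GAN is trained. Below is a hedged sketch of that inference step; the Generator architecture and checkpoint name are hypothetical stand-ins, not the AWTM architecture.

```python
import torch

# Illustrative sketch of the query-free inference step: once an
# AWTM-style generator has been trained offline, producing candidate
# adversarial examples requires no queries to any target model. The
# architecture and checkpoint name below are hypothetical stand-ins.

class Generator(torch.nn.Module):
    def __init__(self, z_dim=100, img_dim=28 * 28):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(z_dim, 256), torch.nn.ReLU(),
            torch.nn.Linear(256, img_dim), torch.nn.Tanh())

    def forward(self, z):
        return self.net(z).view(-1, 1, 28, 28)

g = Generator()
# g.load_state_dict(torch.load("awtm_generator.pt"))  # trained offline
z = torch.randn(8, 100)
x_adv = g(z)  # eight candidate adversarial images, zero target queries
```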


Author(s):  
Chun-Chen Tu ◽  
Paishun Ting ◽  
Pin-Yu Chen ◽  
Sijia Liu ◽  
Huan Zhang ◽  
...  

Recent studies have shown that adversarial examples in state-of-the-art image classifiers trained by deep neural networks (DNN) can be easily generated when the target model is transparent to an attacker, known as the white-box setting. However, when attacking a deployed machine learning service, one can only acquire the input-output correspondences of the target model; this is the so-called black-box attack setting. The major drawback of existing black-box attacks is the need for excessive model queries, which may give a false sense of model robustness due to inefficient query designs. To bridge this gap, we propose a generic framework for query-efficient black-box attacks. Our framework, AutoZOOM, short for Autoencoder-based Zeroth Order Optimization Method, has two novel building blocks for efficient black-box attacks: (i) an adaptive random gradient estimation strategy to balance query counts and distortion, and (ii) either an autoencoder trained offline with unlabeled data or a bilinear resizing operation for attack acceleration. Experimental results suggest that, by applying AutoZOOM to a state-of-the-art black-box attack (ZOO), a significant reduction in model queries can be achieved without sacrificing the attack success rate or the visual quality of the resulting adversarial examples. In particular, compared to the standard ZOO method, AutoZOOM consistently reduces the mean query count needed to find a successful adversarial example (or reach the same distortion level) by at least 93% on the MNIST, CIFAR-10, and ImageNet datasets, leading to novel insights on adversarial robustness.
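The random gradient estimation building block can be illustrated compactly. The sketch below shows the averaged random-vector zeroth-order estimator that this family of methods builds on; the function names and fixed query budget q are illustrative assumptions, whereas AutoZOOM's adaptive scheme adjusts the budget on the fly.

```python
import numpy as np

# Sketch of the averaged random-vector zeroth-order gradient estimator
# underlying this family of attacks. loss_fn is any black-box scalar
# attack objective built from target model outputs; every call to it
# costs one model query. Names and defaults are illustrative.

def zo_gradient(loss_fn, x, q=8, beta=0.01):
    d = x.size
    f0 = loss_fn(x)                       # 1 query
    grad = np.zeros_like(x)
    for _ in range(q):                    # q further queries
        u = np.random.randn(*x.shape)
        u /= np.linalg.norm(u)            # random unit direction
        grad += (loss_fn(x + beta * u) - f0) / beta * u
    return d * grad / q                   # dimension-scaled average
```

For acceleration, the search is performed in a low-dimensional space and mapped back to image resolution by the offline-trained decoder or a bilinear resizer, which shrinks the effective dimension the estimator must cover.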


2021 ◽  
Vol 2021 ◽  
pp. 1-13
Author(s):  
Sensen Guo ◽  
Jinxiong Zhao ◽  
Xiaoyu Li ◽  
Junhong Duan ◽  
Dejun Mu ◽  
...  

In recent years, machine learning has made tremendous progress in the fields of computer vision, natural language processing, and cybersecurity; however, machine learning models remain vulnerable to adversarial examples: minor malicious input modifications that appear unchanged to human observers can easily mislead a model's outputs. Likewise, attackers can bypass machine-learning-based security defenses to attack systems in real time by generating adversarial examples. In this paper, we propose a black-box attack method against machine-learning-based anomaly network flow detection algorithms. Our attack strategy consists of training another model to substitute for the target machine learning model. Based on our understanding of the substitute model and the transferability of adversarial examples, we use the substitute model to craft adversarial examples. Experiments show that our method attacks the target model effectively. We attack several kinds of network flow detection models based on different machine learning methods and find that the adversarial examples crafted by our method bypass the detection of the target model with high probability.
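As an illustration of the transfer step described above, here is a hedged sketch: an adversarial input is crafted on the locally trained substitute and then submitted to the black-box detector, relying on transferability. FGSM is used as a simple stand-in for whatever crafting method is actually applied to the substitute.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of the transfer step: craft an adversarial flow record
# on the locally trained substitute, then submit it to the black-box
# detector and rely on transferability. FGSM is a simple stand-in for
# whatever crafting method is actually applied to the substitute.

def fgsm_on_substitute(substitute, x, y, eps=0.05):
    x = x.detach().clone().requires_grad_(True)
    loss = F.cross_entropy(substitute(x), y)
    loss.backward()
    # Single step in the direction that increases the substitute loss.
    return (x + eps * x.grad.sign()).detach()

# x_adv = fgsm_on_substitute(substitute, x_flow, y_true)
# verdict = target_detector(x_adv)  # black-box, queried only to evaluate
```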


Author(s):  
Ray Huffaker ◽  
Marco Bittelli ◽  
Rodolfo Rosa

Detecting causal interactions among climatic, environmental, and human forces in complex biophysical systems is essential for understanding how these systems function and how public policies can be devised that protect the flow of essential services to biological diversity, agriculture, and other core economic activities. Convergent Cross Mapping (CCM) detects causal networks in real-world systems diagnosed with deterministic, low-dimensional, nonlinear dynamics. If CCM detects correspondence between phase spaces reconstructed from observed time series variables, then the variables are determined to causally interact in the same dynamic system. CCM can give false positives by misconstruing synchronized variables as causally interactive; extended (delayed) CCM screens for such false positives among synchronized variables.
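For readers unfamiliar with the mechanics, the following is a compact illustrative sketch of the core CCM computation (not the authors' implementation): delay-embed one variable, then predict the other from nearest neighbors in the reconstructed phase space. Cross-map skill that rises and converges as more data are used is read as evidence of causal forcing.

```python
import numpy as np

# Compact illustrative sketch of the core CCM step (not the authors'
# implementation): delay-embed x, then predict y from x's phase-space
# nearest neighbors using Sugihara-style exponential weights.

def delay_embed(x, dim=3, tau=1):
    n = len(x) - (dim - 1) * tau
    return np.column_stack([x[i * tau:i * tau + n] for i in range(dim)])

def ccm_skill(x, y, dim=3, tau=1):
    mx = delay_embed(np.asarray(x), dim, tau)
    yt = np.asarray(y)[(dim - 1) * tau:]
    preds = []
    for i, p in enumerate(mx):
        d = np.linalg.norm(mx - p, axis=1)
        d[i] = np.inf                    # exclude the point itself
        nn = np.argsort(d)[:dim + 1]     # dim + 1 nearest neighbors
        w = np.exp(-d[nn] / (d[nn][0] + 1e-12))
        preds.append(np.sum(w * yt[nn]) / np.sum(w))
    return np.corrcoef(preds, yt)[0, 1]  # cross-map skill
```

The extended (delayed) variant additionally examines skill as a function of a cross-map lag, which is what allows synchrony-induced false positives to be screened out.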


2021 ◽  
pp. 027836492110333
Author(s):  
Gilhyun Ryou ◽  
Ezra Tal ◽  
Sertac Karaman

We consider the problem of generating a time-optimal trajectory for highly maneuverable vehicles, such as quadrotor aircraft. The problem is challenging because the optimal trajectory lies on the boundary of the set of dynamically feasible trajectories. This boundary is hard to model, as it involves limitations of the entire system, including complex aerodynamic and electromechanical phenomena, in agile high-speed flight. In this work, we propose a multi-fidelity Bayesian optimization framework that models the feasibility constraints based on analytical approximation, numerical simulation, and real-world flight experiments. By combining evaluations at different fidelities, trajectory time is optimized while the number of costly flight experiments is kept to a minimum. The algorithm is thoroughly evaluated on the trajectory generation problem in two scenarios: (1) connecting predetermined waypoints; (2) planning in obstacle-rich environments. For each scenario, we conduct both simulation and real-world flight experiments at speeds up to 11 m/s. The resulting trajectories were significantly faster than those obtained through minimum-snap trajectory planning.
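One simple way to picture the multi-fidelity idea is as an evaluation cascade in which a candidate trajectory parameterization is escalated to a costlier fidelity only when the cheaper one looks feasible. The sketch below is a deliberate simplification of the Bayesian optimization framework, which actually fuses the fidelities in a probabilistic surrogate rather than hard-gating them; all three check functions are hypothetical stand-ins.

```python
# Deliberately simplified sketch of a multi-fidelity feasibility check;
# the three callables are hypothetical stand-ins for the analytical
# approximation, numerical simulation, and real-world flight test.

def feasible(traj_params, analytic_check, sim_check, flight_check):
    # Escalate to a costlier fidelity only when the cheaper one passes,
    # so real flight experiments are spent on promising candidates.
    if not analytic_check(traj_params):
        return False
    if not sim_check(traj_params):
        return False
    return flight_check(traj_params)
```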


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Qing Yao ◽  
Bingsheng Chen ◽  
Tim S. Evans ◽  
Kim Christensen

We study the evolution of networks through ‘triplets’—three-node graphlets. We develop a method to compute a transition matrix that describes the evolution of triplets in temporal networks. To identify the importance of higher-order interactions in the evolution of networks, we compare both artificial and real-world data to a model based on pairwise interactions only. The significant differences between the transition matrix computed from the data and the matrix calculated from the fitted pairwise parameters demonstrate that non-pairwise interactions exist in various real-world systems in space and time, such as our data sets. Furthermore, this also reveals that different patterns of higher-order interaction are involved in different real-world situations. To test our approach, we then use these transition matrices as the basis of a link prediction algorithm. We investigate our algorithm’s performance on four temporal networks, comparing our approach against ten other link prediction methods. Our results show that higher-order interactions in both space and time play a crucial role in the evolution of networks, as we find that our method, along with two other methods based on non-local interactions, gives the best overall performance. The results also confirm that higher-order interaction patterns, i.e., triplet dynamics, can help us understand and predict the evolution of different real-world systems.
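A minimal version of the transition-matrix construction might look like the sketch below, which labels each node triple in two consecutive snapshots by its number of internal edges (0 to 3) and row-normalizes the transition counts; the paper's graphlet classification is finer-grained than edge counts, so this is an illustrative simplification.

```python
import numpy as np
from itertools import combinations

# Illustrative simplification (not the authors' code): label each node
# triple in two consecutive snapshots by its number of internal edges
# (0-3), count transitions, and row-normalize into probabilities.

def triplet_state(adj, i, j, k):
    return adj[i, j] + adj[i, k] + adj[j, k]  # 0, 1, 2, or 3 edges

def transition_matrix(adj_t, adj_next):
    counts = np.zeros((4, 4))
    n = adj_t.shape[0]
    for i, j, k in combinations(range(n), 3):  # O(n^3); fine for a sketch
        counts[triplet_state(adj_t, i, j, k),
               triplet_state(adj_next, i, j, k)] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return counts / np.where(rows == 0, 1, rows)
```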


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Ferenc Molnar ◽  
Takashi Nishikawa ◽  
Adilson E. Motter

Behavioral homogeneity is often critical for the functioning of network systems of interacting entities. In power grids, whose stable operation requires generator frequencies to be synchronized—and thus homogeneous—across the network, previous work suggests that the stability of synchronous states can be improved by making the generators homogeneous. Here, we show that a substantial additional improvement is possible by instead making the generators suitably heterogeneous. We develop a general method for attributing this counterintuitive effect to converse symmetry breaking, a recently established phenomenon in which the system must be asymmetric to maintain a stable symmetric state. These findings constitute the first demonstration of converse symmetry breaking in real-world systems, and our method promises to enable identification of this phenomenon in other networks whose functions rely on behavioral homogeneity.


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance on image and voice recognition, as well as pattern analysis and intrusion detection, they perform poorly on adversarial examples: adding a certain degree of noise to the original data can cause a deep neural network to misclassify it, even though a human would still deem it normal. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. With this approach, the target model becomes more robust to unknown adversarial examples because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, and MNIST and Fashion-MNIST were used as the datasets. The results reveal that the diversity training method lowers the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
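A hedged sketch of diversity-style adversarial training is given below: each training batch is augmented with adversarial samples generated by several attack recipes so the model is exposed to varied perturbations. FGSM at mixed strengths stands in here for the paper's full set of generation methods, and all names are illustrative.

```python
import torch
import torch.nn.functional as F

# Hedged sketch of diversity-style adversarial training: each batch is
# augmented with adversarial samples from several attack strengths so
# the model sees varied perturbations. FGSM at mixed epsilons stands
# in for the paper's full set of generation methods.

def train_step(model, opt, x, y, eps_set=(0.05, 0.1, 0.2)):
    batches = [x]
    for eps in eps_set:
        x_r = x.detach().clone().requires_grad_(True)
        F.cross_entropy(model(x_r), y).backward()
        batches.append((x_r + eps * x_r.grad.sign()).detach().clamp(0, 1))
    x_all = torch.cat(batches)           # clean + diverse adversarial
    y_all = y.repeat(len(batches))
    opt.zero_grad()                      # clear grads from FGSM passes
    loss = F.cross_entropy(model(x_all), y_all)
    loss.backward()
    opt.step()
    return loss.item()
```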


2019 ◽  
Author(s):  
Leon Y. Xiao

Loot boxes represent a popular and prevalent contemporary monetisation innovation in video games that offers the purchasing player-consumer, who always pays a set amount of money for each attempt, the opportunity to obtain randomised virtual rewards of uncertain in-game and real-world value. Loot boxes have been and continue to be scrutinised by regulators and policymakers because their randomised nature is akin to gambling. The regulation of loot boxes is a current and challenging international public policy and consumer protection issue. This paper reviews the psychology literature on the potential harms of loot boxes and applies the behavioural economics literature to identify the potentially abusive nature and harmful effects of loot boxes, which justify their regulation. This paper calls on the industry to publish loot box spending data and cooperate with independent empirical research to avoid overregulation. By examining existing regulation, this paper identifies the flaws of the ‘regulate loot boxes as gambling’ approach and critiques the alternative consumer protection approach of adopting ethical game design, such as disclosing the probabilities of obtaining randomised rewards and setting maximum spending limits. This paper recommends a combined legal and self-regulatory approach: the law should set out minimum acceptable standards of consumer protection, and industry self-regulation should strive to achieve an even higher standard.

