Timing Black-Box Attacks: Crafting Adversarial Examples through Timing Leaks against DNNs on Embedded Devices

Author(s):  
Tsunato Nakai ◽  
Daisuke Suzuki ◽  
Takeshi Fujino

Deep neural networks (DNNs) have been applied to various industries. In particular, DNNs on embedded devices have attracted considerable interest because they allow real-time and distributed processing on site. However, adversarial examples (AEs), which add small perturbations to the input data of DNNs to cause misclassification, are serious threats to DNNs. In this paper, a novel black-box attack is proposed to craft AEs based only on processing time, i.e., the side-channel leaks from DNNs on embedded devices. Unlike several existing black-box attacks that utilize output probability, the proposed attack exploits the relationship between the number of activated nodes and processing time, without using training data, model architecture, parameters, substitute models, or output probability. In the proposed attack, the perturbations for AEs are determined from the differential processing time with respect to the input data of the DNNs. The experimental results show that the AEs of the proposed attack effectively increase the number of activated nodes and cause misclassification into one of the incorrect labels against DNNs on a microcontroller unit. Moreover, these results indicate that the attack can evade gradient-masking and confidence-reduction countermeasures, which conceal the output probability to prevent several black-box attacks from crafting AEs. Finally, countermeasures against the attack are implemented and evaluated to clarify that an activation-function implementation with data-dependent timing leaks is the root cause of the vulnerability exploited by the proposed attack.
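As a rough illustration of the timing-guided search described above, the following Python sketch greedily keeps perturbations that lengthen the measured inference time, treating longer runtimes as a proxy for more activated nodes. The `predict_and_time` helper, the `model.predict` interface, and the coordinate-wise search are illustrative assumptions, not the authors' implementation.

```python
import time
import numpy as np

def predict_and_time(model, x):
    """Run one inference and return (label, elapsed seconds)."""
    start = time.perf_counter()
    label = model.predict(x)          # hypothetical embedded-DNN interface
    return label, time.perf_counter() - start

def timing_guided_perturbation(model, x, eps=0.05, steps=200, rng=None):
    """Greedily add small perturbations that increase inference time,
    used here as a proxy for the number of activated (non-zero) nodes."""
    rng = rng or np.random.default_rng(0)
    x_adv = x.copy()
    _, t_base = predict_and_time(model, x_adv)
    for _ in range(steps):
        idx = rng.integers(x_adv.size)                  # pick one input coordinate
        for sign in (+1.0, -1.0):
            candidate = x_adv.copy()
            candidate.flat[idx] = np.clip(candidate.flat[idx] + sign * eps, 0.0, 1.0)
            _, t_cand = predict_and_time(model, candidate)
            if t_cand > t_base:                          # longer time -> more active nodes
                x_adv, t_base = candidate, t_cand
                break
    return x_adv
```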

2020 ◽  
Vol 10 (22) ◽  
pp. 8079
Author(s):  
Sanglee Park ◽  
Jungmin So

State-of-the-art neural network models are actively used in various fields, but it is well known that they are vulnerable to adversarial example attacks. Despite considerable effort, making models robust against adversarial examples has proven to be a very difficult task. While many defense approaches have been shown to be ineffective, adversarial training remains one of the promising methods. In adversarial training, the training data are augmented with “adversarial” samples generated using an attack algorithm. If the attacker uses a similar attack algorithm to generate adversarial examples, the adversarially trained network can be quite robust to the attack. However, there are numerous ways of creating adversarial examples, and the defender does not know which algorithm the attacker may use. A natural question is: can we use adversarial training to train a model robust to multiple types of attack? Previous work has shown that, when a network is trained with adversarial examples generated from multiple attack methods, the network is still vulnerable to white-box attacks in which the attacker has complete access to the model parameters. In this paper, we study this question in the context of black-box attacks, which can be a more realistic assumption for practical applications. Experiments with the MNIST dataset show that adversarially training a network with one attack method helps defend against that particular attack method, but has limited effect against other attack methods. In addition, even if the defender trains a network with multiple types of adversarial examples and the attacker attacks with one of those methods, the network can still lose accuracy if the attacker uses a different data augmentation strategy on the target network. These results show that it is very difficult to build a robust network using adversarial training, even in black-box settings where the attacker has restricted information about the target network.
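The adversarial-training loop described above can be sketched as follows. This PyTorch example uses FGSM as the attack algorithm purely for illustration; it is not the exact training procedure or attack set studied in the paper.

```python
import torch
import torch.nn.functional as F

def fgsm(model, x, y, eps):
    """Fast Gradient Sign Method: one common way to craft adversarial samples."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def adversarial_training_epoch(model, loader, optimizer, eps=0.3):
    """Augment every mini-batch with adversarial counterparts before updating."""
    model.train()
    for x, y in loader:
        x_adv = fgsm(model, x, y, eps)
        optimizer.zero_grad()
        loss = F.cross_entropy(model(torch.cat([x, x_adv])),
                               torch.cat([y, y]))
        loss.backward()
        optimizer.step()
```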


2021 ◽  
Vol 11 (15) ◽  
pp. 7148
Author(s):  
Bedada Endale ◽  
Abera Tullu ◽  
Hayoung Shi ◽  
Beom-Soo Kang

Unmanned aerial vehicles (UAVs) are being widely utilized for various missions in both civilian and military sectors. Many of these missions require UAVs to perceive the environments they navigate in, and this perception can be realized by training a computing machine to classify objects in the environment. One of the well-known machine training approaches is supervised deep learning, which enables a machine to classify objects. However, supervised deep learning comes at a considerable cost in time and computational resources: collecting large amounts of input data, pre-training processes such as labeling the training data, and the need for a high-performance computer for training are some of the challenges it poses. To address these setbacks, this study proposes mission-specific input data augmentation techniques and the design of a lightweight deep neural network architecture that is capable of real-time object classification. Semi-direct visual odometry (SVO) data of augmented images are used to train the network for object classification. Ten classes with 10,000 different images per class were used as input data, of which 80% were used for training the network and the remaining 20% for validation. For the optimization of the designed deep neural network, a sequential gradient descent algorithm was implemented; this algorithm has the advantage of handling redundancy in the data more efficiently than other algorithms.
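A minimal sketch of sequential (sample-by-sample) gradient descent, illustrating why redundant samples are handled cheaply: each sample triggers one inexpensive update instead of contributing to a single full-batch gradient. The linear squared-error model below is an assumption for brevity, not the network used in the study.

```python
import numpy as np

def sequential_gradient_descent(X, y, lr=0.01, epochs=5, seed=0):
    """Update the weights one sample at a time (online/sequential GD)."""
    rng = np.random.default_rng(seed)
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in rng.permutation(len(X)):        # visit samples in random order
            grad = (X[i] @ w - y[i]) * X[i]      # squared-error gradient for one sample
            w -= lr * grad
    return w
```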


2019 ◽  
Vol 2019 ◽  
pp. 1-14 ◽  
Author(s):  
Yikui Zhai ◽  
He Cao ◽  
Wenbo Deng ◽  
Junying Gan ◽  
Vincenzo Piuri ◽  
...  

Because of the lack of discriminative face representations and the scarcity of labeled training data, facial beauty prediction (FBP), which aims at assessing facial attractiveness automatically, has become a challenging pattern recognition problem. Inspired by recent promising work on fine-grained image classification that uses a multiscale architecture to extend the diversity of deep features, BeautyNet for unconstrained facial beauty prediction is proposed in this paper. Firstly, a multiscale network is adopted to improve the discriminative power of face features. Secondly, to alleviate the computational burden of the multiscale architecture, the max-feature-map (MFM) is utilized as the activation function, which not only lightens the network and speeds up convergence but also benefits performance. Finally, a transfer learning strategy is introduced to mitigate the overfitting caused by the scarcity of labeled facial beauty samples and to improve the proposed BeautyNet’s performance. Extensive experiments performed on LSFBD demonstrate that the proposed scheme outperforms state-of-the-art methods, achieving 67.48% classification accuracy.
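The MFM activation mentioned above is commonly defined as an element-wise maximum over two halves of the channel dimension, which halves the width of the following layer. The PyTorch sketch below shows this standard formulation; the exact placement and variant used in BeautyNet may differ.

```python
import torch
import torch.nn as nn

class MaxFeatureMap(nn.Module):
    """MFM activation: split the channel dimension in half and keep the
    element-wise maximum, halving the number of output feature maps."""
    def forward(self, x):
        a, b = torch.chunk(x, 2, dim=1)   # expects an even number of channels
        return torch.max(a, b)

# Example: 64 convolutional feature maps are compressed to 32 by the MFM.
layer = nn.Sequential(nn.Conv2d(3, 64, kernel_size=3, padding=1), MaxFeatureMap())
out = layer(torch.randn(1, 3, 224, 224))   # -> shape (1, 32, 224, 224)
```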


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
A. H. Homid ◽  
M. Abdel-Aty ◽  
M. Qasymeh ◽  
H. Eleuch

In this work, trapped ultracold atoms are proposed as a platform for efficient quantum gate circuits and algorithms. We also develop and evaluate quantum algorithms, including those for the Simon problem and the black-box string-finding problem. Our analytical model describes an open system with a non-Hermitian Hamiltonian. It is shown that our proposed scheme offers better performance (in terms of the number of required gates and the processing time) for realizing the quantum gates and algorithms compared to previously reported approaches.


2021 ◽  
Vol 11 (19) ◽  
pp. 9041
Author(s):  
Alex Halle ◽  
Lucio Flavio Campanile ◽  
Alexander Hasse

Engineers widely use topology optimization during the initial process of product development to obtain a first possible geometry design. The state-of-the-art method is iterative calculation, which requires both time and computational power. This paper proposes an AI-assisted design method for topology optimization which does not require any optimized data. An artificial neural network—the predictor—provides the designs on the basis of boundary conditions and degree of filling as input data. In the training phase, so-called evaluators assess the geometries generated from random input data with respect to given criteria. The results of those evaluations flow into an objective function, which is minimized by adapting the predictor’s parameters. After training, the presented AI-assisted design procedure generates geometries that are similar to those of conventional topology optimizers but require only a fraction of the computational effort. We believe that our work could serve as a starting point for AI-based methods that require data which are difficult to compute or unavailable.
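A hedged sketch of the predictor–evaluator training idea: a network proposes a density field from boundary conditions and degree of filling, evaluators score it, and the combined objective is minimized over the predictor's parameters without any pre-optimized designs. The evaluators, dimensions, and weights below are placeholders, not the paper's actual criteria.

```python
import torch
import torch.nn as nn

# Hypothetical predictor: boundary conditions + target fill -> 32x32 density field.
predictor = nn.Sequential(nn.Linear(16, 256), nn.ReLU(),
                          nn.Linear(256, 32 * 32), nn.Sigmoid())
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)

def volume_evaluator(density, fill):
    """Penalize deviation from the requested degree of filling."""
    return (density.mean(dim=1) - fill).pow(2).mean()

def smoothness_evaluator(density):
    """Stand-in for a physics-based evaluator: favour smooth geometries."""
    field = density.view(-1, 32, 32)
    return (field[:, 1:, :] - field[:, :-1, :]).pow(2).mean()

for step in range(1000):
    bc = torch.rand(64, 15)                    # random boundary conditions
    fill = torch.rand(64, 1)                   # random degree of filling
    density = predictor(torch.cat([bc, fill], dim=1))
    loss = volume_evaluator(density, fill.squeeze(1)) + 0.1 * smoothness_evaluator(density)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```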


2020 ◽  
Vol 34 (04) ◽  
pp. 3405-3413
Author(s):  
Zhaohui Che ◽  
Ali Borji ◽  
Guangtao Zhai ◽  
Suiyi Ling ◽  
Jing Li ◽  
...  

Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of pre-trained source models can transfer to other, new target models, thus posing a security threat to black-box applications (where the attackers have no access to the target models). Despite adopting diverse architectures and parameters, source and target models often share similar decision boundaries. Therefore, if an adversary is capable of fooling several source models concurrently, it can potentially capture intrinsic transferable adversarial information that may allow it to fool a broad class of other black-box target models. Current ensemble attacks, however, only consider a limited number of source models to craft an adversary and thus obtain poor transferability. In this paper, we propose a novel black-box attack, dubbed Serial-Mini-Batch-Ensemble-Attack (SMBEA). SMBEA divides a large number of pre-trained source models into several mini-batches. For each single batch, we design three new ensemble strategies to improve the intra-batch transferability. In addition, we propose a new algorithm that recursively accumulates the “long-term” gradient memories of the previous batch into the following batch. This way, the learned adversarial information can be preserved and the inter-batch transferability can be improved. Experiments indicate that our method outperforms state-of-the-art ensemble attacks over multiple pixel-to-pixel vision tasks, including image translation and salient region prediction. Our method successfully fools two online black-box saliency prediction systems: DeepGaze-II (Kummerer 2017) and SALICON (Huang et al. 2017). Finally, we also contribute a new repository to promote research on adversarial attack and defense over pixel-to-pixel tasks: https://github.com/CZHQuality/AAA-Pix2pix.
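The inter-batch gradient-memory idea can be sketched roughly as below. The classification loss, the gradient normalization, and the decay factor are simplifying assumptions; the paper targets pixel-to-pixel tasks and uses its own intra-batch ensemble strategies, which are not reproduced here.

```python
import torch

def smbea_like_attack(x, y, source_models, batch_size=4, eps=8 / 255,
                      steps=10, decay=0.9):
    """Sketch: ensemble gradients inside each mini-batch of source models,
    and carry an accumulated gradient memory forward to the next batch."""
    x_adv = x.clone().detach()
    memory = torch.zeros_like(x)                       # "long-term" gradient memory
    batches = [source_models[i:i + batch_size]
               for i in range(0, len(source_models), batch_size)]
    for batch in batches:
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = sum(torch.nn.functional.cross_entropy(m(x_adv), y) for m in batch)
            grad, = torch.autograd.grad(loss, x_adv)
            memory = decay * memory + grad / grad.abs().mean()   # accumulate across batches
            x_adv = (x_adv + eps / steps * memory.sign()).clamp(0, 1).detach()
    return x_adv
```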


2000 ◽  
Author(s):  
Arturo Pacheco-Vega ◽  
Mihir Sen ◽  
Rodney L. McClain

In the current study we consider the problem of accuracy in heat rate estimations from artificial neural network models of heat exchangers used for refrigeration applications. The network configuration is of the feedforward type with a sigmoid activation function and a backpropagation algorithm. Limited experimental measurements from a manufacturer are used to show the capability of the neural network technique in modeling the heat transfer in these systems. Results from this exercise show that a well-trained network correlates the data with errors of the same order as the uncertainty of the measurements. It is also shown that the number and distribution of the training data are linked to the performance of the network when estimating the heat rates under different operating conditions, and that networks trained from few tests may give large errors. A methodology based on the cross-validation technique is presented to find regions where not enough data are available to construct a reliable neural network. The results from three tests show that the proposed methodology gives an upper bound of the estimated error in the heat rates.
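A minimal sketch of the cross-validation idea for bounding the heat-rate estimation error, using a small sigmoid-activation feedforward network from scikit-learn as a stand-in for the authors' backpropagation network; the fold count, network size, and error metric are assumptions for illustration.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import KFold

def cross_validated_heat_rate_error(X, q, folds=5, seed=0):
    """Estimate an upper bound on heat-rate prediction error by training
    on k-1 folds and evaluating on the held-out fold."""
    errors = []
    for train_idx, test_idx in KFold(folds, shuffle=True, random_state=seed).split(X):
        net = MLPRegressor(hidden_layer_sizes=(8,), activation='logistic',
                           solver='sgd', max_iter=5000, random_state=seed)
        net.fit(X[train_idx], q[train_idx])
        errors.append(np.abs(net.predict(X[test_idx]) - q[test_idx]).max())
    return max(errors)        # worst-case held-out error across folds
```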


Author(s):  
Sabrina Cocca

Analysing and optimising service productivity is a widely discussed task in service management. While directly measurable factors such as processing time or service turnover are frequently used to measure the productivity of services, underlying factors that are, in many cases, not (directly) measurable are not considered in depth. However, these “qualitative” factors influence service productivity to a high degree. The idea behind the approach provided in this article is to open up the traditional “black box” view of productivity (input–output) into a process-efficiency-oriented perspective and to show which qualitative factors play a crucial role in service productivity.


Electronics ◽  
2020 ◽  
Vol 9 (3) ◽  
pp. 516
Author(s):  
Jesús Peral ◽  
David Gil ◽  
Sayna Rotbei ◽  
Sandra Amador ◽  
Marga Guerrero ◽  
...  

About 15% of the world’s population suffers from some form of disability. In developed countries, about 1.5% of children are diagnosed with autism. Autism is a developmental disorder distinguished mainly by impairments in social interaction and communication and by restricted and repetitive behavior. Since the cause of autism is still unknown, there have been many studies focused on screening for autism based on behavioral features. Thus, the main purpose of this paper is to present an architecture focused on data integration and analytics, allowing the distributed processing of input data. Furthermore, the proposed architecture allows the identification of relevant features as well as of hidden correlations among parameters. To this end, we propose a methodology able to integrate diverse data sources, even data that are collected separately. This methodology increases the data variety which can lead to the identification of more correlations between diverse parameters. We conclude the paper with a case study that used autism data in order to validate our proposed architecture, which showed very promising results.

