Adv-Makeup: A New Imperceptible and Transferable Attack on Face Recognition

Author(s):  
Bangjie Yin ◽  
Wenxuan Wang ◽  
Taiping Yao ◽  
Junfeng Guo ◽  
Zelun Kong ◽  
...  

Deep neural networks, particularly face recognition models, have been shown to be vulnerable to both digital and physical adversarial examples. However, existing adversarial examples against face recognition systems either lack transferability to black-box models or cannot be implemented in practice. In this paper, we propose a unified adversarial face generation method, Adv-Makeup, which realizes imperceptible and transferable attacks under the black-box setting. Adv-Makeup develops a task-driven makeup generation method with a blending module to synthesize imperceptible eye shadow over the orbital region of faces. To achieve transferability, Adv-Makeup implements a fine-grained meta-learning-based adversarial attack strategy to learn more vulnerable or sensitive features across various models. Compared to existing techniques, extensive visualization results demonstrate that Adv-Makeup generates far more imperceptible attacks in both digital and physical scenarios. Meanwhile, extensive quantitative experiments show that Adv-Makeup significantly improves the attack success rate under the black-box setting, even when attacking commercial systems.
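The abstract describes synthesizing an eye-shadow patch and blending it over the orbital region. The core blending step can be sketched as a soft alpha blend; everything below (function name, shapes, the soft mask) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

def blend_patch(face, patch, mask, region):
    """Alpha-blend an adversarial eye-shadow patch over a face region.

    face   : (H, W, 3) float image in [0, 1]
    patch  : (h, w, 3) generated makeup patch
    mask   : (h, w) blending weights in [0, 1]; soft edges keep the
             result imperceptible at the patch boundary
    region : (top, left) corner of the orbital region
    """
    out = face.copy()
    top, left = region
    h, w = mask.shape
    roi = out[top:top + h, left:left + w]
    alpha = mask[..., None]                       # broadcast over RGB channels
    out[top:top + h, left:left + w] = alpha * patch + (1.0 - alpha) * roi
    return out
```

In the paper's pipeline the patch itself is produced by a trained generator and optimized adversarially; the sketch only shows how a blended composite stays close to the original face wherever the mask is small.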

Author(s):  
Evren Daglarli

Today, the effects of promising technologies such as explainable artificial intelligence (xAI) and meta-learning (ML) on the Internet of Things (IoT) and cyber-physical systems (CPS), two important components of Industry 4.0, are intensifying. However, current deep learning models have important shortcomings. These artificial neural network based models are black-box models that learn from, and generalize over, the data fed to them; the relational link between input and output is therefore not observable. For these reasons, serious effort must be devoted to the explainability and interpretability of black-box models. In the near future, integrating explainable artificial intelligence and meta-learning approaches into cyber-physical systems will influence high-level virtualization and simulation infrastructures, real-time supply chains, cyber factories with smart machines communicating over the internet, the maximization of production efficiency, and the analysis of service quality and competitiveness.


Energies ◽  
2020 ◽  
Vol 13 (24) ◽  
pp. 6749
Author(s):  
Reda El Bechari ◽  
Stéphane Brisset ◽  
Stéphane Clénet ◽  
Frédéric Guyomarch ◽  
Jean Claude Mipo

Metamodels have proved to be a very efficient strategy for optimizing expensive black-box models, e.g., finite element simulations of electromagnetic devices, since they reduce the computational burden of optimization. However, the conventional use of metamodels has limitations, such as the cost of metamodel fitting and of solving the infill criterion problem. This paper proposes a new algorithm that combines metamodels with a branch and bound (B&B) strategy. The efficiency of the B&B algorithm relies on the estimation of bounds; we therefore investigate the prediction error given by metamodels to predict these bounds. This combination leads to high-fidelity global solutions. We propose a comparison protocol to assess the approach's performance with respect to algorithms of other categories, and then treat two electromagnetic optimization benchmarks. This paper gives practical insights into algorithms that can be used when optimizing electromagnetic devices.
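The key idea, using the metamodel's prediction error to bound the objective over a B&B box, can be illustrated with a toy surrogate. The distance-weighted predictor, the μ − kσ bound rule, and all names below are assumptions standing in for a real kriging model fitted to finite-element evaluations, not the paper's algorithm:

```python
import numpy as np

def surrogate(x, X, y, length=0.5):
    """Toy surrogate: distance-weighted prediction plus a crude error estimate.
    Stands in for a kriging/RBF metamodel fitted to expensive FE samples."""
    d = np.linalg.norm(X - x, axis=1)
    w = np.exp(-(d / length) ** 2)
    mu = np.sum(w * y) / np.sum(w)     # predicted objective value
    sigma = np.min(d)                  # uncertainty grows away from samples
    return mu, sigma

def lower_bound(box, X, y, n=64, k=2.0, seed=0):
    """Optimistic lower bound of the objective over a box: min of mu - k*sigma
    at random probe points. B&B can prune the box if this exceeds the best
    solution found so far."""
    lo, hi = box
    rng = np.random.default_rng(seed)
    pts = rng.uniform(lo, hi, size=(n, len(lo)))
    bounds = []
    for p in pts:
        mu, sigma = surrogate(p, X, y)
        bounds.append(mu - k * sigma)
    return min(bounds)
```

A larger k makes the bound more conservative (fewer wrong prunings, slower convergence), which is exactly the trade-off a calibrated prediction error is meant to control.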


We provide a framework for investment managers to create dynamic pre-trade models. The approach helps market participants shed light on vendor black-box models that often provide no transparency into the model's functional form or working mechanics. It also allows portfolio managers to create consensus estimates based on their own expectations, such as forecasted liquidity and volatility, and to incorporate firm proprietary alpha estimates into the solution. These techniques allow managers to reduce over-dependence on any one black-box model, incorporate costs into the stock-selection and portfolio-optimization phases of the investment cycle, and perform "what-if" and sensitivity analyses without the risk of information leakage to any outside party or vendor.
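A transparent stand-in for such a vendor black box is a simple parametric cost model whose inputs (liquidity, volatility) a manager can override. The square-root impact form below is one commonly cited functional shape; the coefficients and the function itself are illustrative assumptions, not the article's model:

```python
def pretrade_cost_bps(shares, adv, daily_vol, spread_bps, a1=0.5, a2=0.9):
    """Illustrative pre-trade cost estimate in basis points.

    Cost = half the quoted spread plus a square-root market-impact term
    driven by participation (shares / average daily volume) and daily
    volatility. Coefficients a1, a2 would be calibrated to the firm's
    own execution data rather than taken from a vendor black box.
    """
    participation = shares / adv
    impact = a2 * daily_vol * 1e4 * (participation ** 0.5)  # in bps
    return a1 * spread_bps + impact
```

Because the functional form is explicit, "what-if" analysis is just re-evaluation under stressed inputs, e.g., doubling `daily_vol` to see the cost sensitivity, with no information leaving the firm.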


2020 ◽  
Vol 34 (04) ◽  
pp. 3405-3413
Author(s):  
Zhaohui Che ◽  
Ali Borji ◽  
Guangtao Zhai ◽  
Suiyi Ling ◽  
Jing Li ◽  
...  

Deep neural networks are vulnerable to adversarial attacks. More importantly, some adversarial examples crafted against an ensemble of pre-trained source models can transfer to other, new target models, thus posing a security threat to black-box applications (where attackers have no access to the target models). Despite adopting diverse architectures and parameters, source and target models often share similar decision boundaries. Therefore, if an adversary can fool several source models concurrently, it can potentially capture intrinsic transferable adversarial information that allows it to fool a broad class of other black-box target models. Current ensemble attacks, however, consider only a limited number of source models when crafting an adversarial example, and thus obtain poor transferability. In this paper, we propose a novel black-box attack, dubbed Serial-Mini-Batch-Ensemble-Attack (SMBEA). SMBEA divides a large number of pre-trained source models into several mini-batches. For each batch, we design three new ensemble strategies to improve intra-batch transferability. Besides, we propose a new algorithm that recursively accumulates the "long-term" gradient memories of the previous batch into the following batch. This way, the learned adversarial information is preserved and inter-batch transferability is improved. Experiments indicate that our method outperforms state-of-the-art ensemble attacks on multiple pixel-to-pixel vision tasks, including image translation and salient region prediction. Our method successfully fools two online black-box saliency prediction systems: DeepGaze-II (Kummerer 2017) and SALICON (Huang et al. 2017). Finally, we also contribute a new repository to promote research on adversarial attack and defense for pixel-to-pixel tasks: https://github.com/CZHQuality/AAA-Pix2pix.
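The serial mini-batch scheme with a gradient memory carried across batches can be sketched as follows. This is a minimal sketch in the spirit of SMBEA, not the paper's algorithm: the momentum-style memory update, step sizes, and the simple gradient averaging inside each batch are all assumptions (the paper designs three distinct intra-batch ensemble strategies):

```python
import numpy as np

def minibatch_ensemble_attack(x, grad_fns, batch_size=4, eps=8 / 255,
                              step=1 / 255, iters=10, decay=0.9):
    """Serial mini-batch ensemble attack sketch.

    grad_fns : list of callables, one per pre-trained source model, each
               returning the loss gradient w.r.t. the input image.
    A 'long-term' gradient memory is carried over from one mini-batch of
    source models to the next, so earlier batches keep influencing the
    perturbation direction.
    """
    adv = x.copy()
    memory = np.zeros_like(x)                 # inter-batch gradient memory
    for b in range(0, len(grad_fns), batch_size):
        batch = grad_fns[b:b + batch_size]
        for _ in range(iters):
            # intra-batch ensemble: average the gradients of this batch
            g = np.mean([f(adv) for f in batch], axis=0)
            memory = decay * memory + g / (np.abs(g).mean() + 1e-12)
            adv = adv + step * np.sign(memory)
            adv = np.clip(adv, x - eps, x + eps)   # stay in the L_inf ball
    return np.clip(adv, 0.0, 1.0)
```

The decay factor controls how strongly adversarial directions learned from earlier batches persist while later batches are attacked.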


Author(s):  
Kacper Sokol ◽  
Peter Flach

Understanding data, models and predictions is important for machine learning applications. Due to the limitations of our spatial perception and intuition, analysing high-dimensional data is inherently difficult. Furthermore, black-box models achieving high predictive accuracy are widely used, yet the logic behind their predictions is often opaque. Textualisation, a natural-language narrative of selected phenomena, can tackle these shortcomings. When extended with argumentation theory, we could envisage machine learning models and predictions arguing persuasively for their choices.


Author(s):  
Marjan Popov ◽  
Bjørn Gustavsen ◽  
Juan A. Martinez-Velasco

Voltage surges arising from transient events, such as switching operations or lightning discharges, are one of the main causes of transformer winding failure. The voltage distribution along a transformer winding depends greatly on the waveshape of the voltage applied to the winding. This distribution is not uniform in the case of steep-fronted transients, since a large portion of the applied voltage is usually concentrated on the first few turns of the winding. High-frequency electromagnetic transients in transformers can be studied using internal models (i.e., models for analyzing the propagation and distribution of the incident impulse along the transformer windings) and black-box models (i.e., models for analyzing the response of the transformer from its terminals and for calculating voltage transfer). This chapter presents a summary of the most common models developed for analyzing the behaviour of transformers subjected to steep-fronted waves and a description of procedures for determining the parameters to be specified in those models. The main section details some test studies based on actual transformers, in which models are validated by comparing simulation results to laboratory measurements.
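The concentration of a steep-fronted surge on the first turns is captured by the classic initial (purely capacitive) voltage distribution for a grounded-neutral winding, a standard result often used in such internal models; the code below only evaluates that textbook formula and is not taken from the chapter:

```python
import numpy as np

def initial_distribution(x_over_l, alpha, U=1.0):
    """Initial voltage along a grounded-neutral winding hit by a step surge.

    u(x) = U * sinh(alpha * (1 - x/l)) / sinh(alpha),
    where alpha = sqrt(C_ground / C_series) of the winding. Large alpha
    (typical for power transformers) concentrates the voltage gradient,
    and hence the dielectric stress, on the first few turns.
    """
    return U * np.sinh(alpha * (1.0 - x_over_l)) / np.sinh(alpha)
```

For alpha around 10, roughly two thirds of the applied voltage drops across the first tenth of the winding, which is why interturn insulation near the line end is the critical region for steep-fronted waves.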

