Crafting an Adversarial Example in the DNN Representation Space by Minimizing the Distance from the Decision Boundary

Author(s):  
Li Li ◽  
Milos Doroslovacki ◽  
Murray H. Loew
Author(s):  
Jiliang Zhang ◽  
Shuang Peng ◽  
Yupeng Hu ◽  
Fei Peng ◽  
Wei Hu ◽  
...  
Keyword(s):  

2020 ◽  
pp. 147592172097970
Author(s):  
Liangliang Cheng ◽  
Vahid Yaghoubi ◽  
Wim Van Paepegem ◽  
Mathias Kersemans

The Mahalanobis–Taguchi system is considered a promising and powerful tool for handling binary classification cases. However, the Mahalanobis–Taguchi system has several restrictions in screening useful features and determining the decision boundary in an optimal manner. In this article, an integrated Mahalanobis classification system is proposed which builds on the concept of the Mahalanobis distance and its space. The integrated Mahalanobis classification system integrates the decision boundary searching process, based on a particle swarm optimizer, directly into the feature selection phase used to construct the Mahalanobis distance space. This integration (a) avoids the need for user-dependent input parameters and (b) improves the classification performance. For the feature selection phase, the use of both a binary particle swarm optimizer and a binary gravitational search algorithm is investigated. To deal with possible overfitting problems in the case of sparse data sets, k-fold cross-validation is considered. The integrated Mahalanobis classification system is benchmarked against the classical Mahalanobis–Taguchi system as well as the recently proposed two-stage Mahalanobis classification system in terms of classification performance. Results are presented for both an experimental case study of complex-shaped metallic turbine blades with various damage types and a synthetic case study of cylindrical dogbone samples with creep and microstructural damage. The results indicate that the proposed integrated Mahalanobis classification system shows good and robust classification performance.
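
As a rough illustration of the core mechanic described in this abstract (not the proposed integrated system itself), the sketch below computes Mahalanobis distances of test samples to a reference (healthy) class over a selected feature subset and applies a scalar decision threshold. The function names, feature mask, threshold value, and synthetic data are illustrative assumptions; the PSO/GSA-based joint search over features and boundary, and the k-fold cross-validation, are not reproduced here.

```python
import numpy as np

def mahalanobis_distances(X, reference, feature_mask):
    """Mahalanobis distance of each row of X to the reference class,
    using only the features selected by the boolean feature_mask."""
    Xs = X[:, feature_mask]
    Rs = reference[:, feature_mask]
    mean = Rs.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(Rs, rowvar=False))  # pseudo-inverse guards against singular covariance
    diff = Xs - mean
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, cov_inv, diff))

def classify(X, reference, feature_mask, threshold):
    """Label a sample as abnormal (1) when its distance exceeds the decision boundary."""
    return (mahalanobis_distances(X, reference, feature_mask) > threshold).astype(int)

# Illustrative usage with synthetic data (values are arbitrary assumptions)
rng = np.random.default_rng(0)
healthy = rng.normal(0.0, 1.0, size=(100, 6))
test = np.vstack([rng.normal(0.0, 1.0, size=(5, 6)),
                  rng.normal(3.0, 1.0, size=(5, 6))])
mask = np.array([True, True, True, False, True, False])  # a hypothetical feature subset
print(classify(test, healthy, mask, threshold=3.0))
```

In the proposed system the feature subset and the decision boundary are optimized jointly rather than fixed by hand as they are in this sketch.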


2006 ◽  
Vol 15 (12) ◽  
pp. 2267-2278 ◽  
Author(s):  
D. V. AHLUWALIA-KHALILOVA

Assuming the validity of the general relativistic description of gravitation on astrophysical and cosmological length scales, we analytically infer that the Friedmann–Robertson–Walker cosmology with an Einsteinian cosmological constant and a vanishing spatial curvature constant unambiguously requires a significant amount of dark matter. This requirement is consistent with other indications for dark matter. The same space–time symmetries that underlie the freely falling frames of Einsteinian gravity also provide symmetries which, for the spin-one-half representation space, furnish a novel construct that carries extremely limited interactions with respect to terrestrial detectors made of standard model material. Both the "luminous" and "dark" matter turn out to be residents of the same representation space, but they derive their respective "luminosity" and "darkness" from belonging either to the sector with (CPT)² = +𝟙 or to the sector with (CPT)² = −𝟙.
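
The flat-FRW reasoning behind the first claim can be sketched as a simple density budget. This is a schematic consistency check only, not the paper's derivation; the numerical density parameters are commonly quoted approximate values and are not taken from the abstract.

```latex
% Friedmann constraint for vanishing spatial curvature (k = 0):
%   \Omega_m + \Omega_\Lambda = 1.
% With \Omega_\Lambda \approx 0.7, the matter budget \Omega_m \approx 0.3 far
% exceeds the baryonic contribution \Omega_b \approx 0.05, so most of the
% matter must be non-baryonic dark matter.
\[
  \Omega_m + \Omega_\Lambda = 1, \qquad
  \Omega_\Lambda \approx 0.7 \;\Rightarrow\;
  \Omega_m \approx 0.3 \gg \Omega_b \approx 0.05 .
\]
```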


2018 ◽  
Vol 30 (12) ◽  
pp. 3151-3167 ◽  
Author(s):  
Dmitry Krotov ◽  
John Hopfield

Deep neural networks (DNNs) trained in a supervised way suffer from two known problems. First, the minima of the objective function used in learning correspond to data points (also known as rubbish examples or fooling images) that lack semantic similarity with the training data. Second, a clean input can be changed by a small, and often imperceptible to human vision, perturbation so that the resulting deformed input is misclassified by the network. These findings emphasize the differences between the ways DNNs and humans classify patterns and raise the question of how to design learning algorithms that mimic human perception more accurately than existing methods. Our article examines these questions within the framework of dense associative memory (DAM) models. These models are defined by an energy function with higher-order (higher than quadratic) interactions between the neurons. We show that in the limit when the power of the interaction vertex in the energy function is sufficiently large, these models have the following three properties. First, the minima of the objective function are free from rubbish images, so that each minimum is a semantically meaningful pattern. Second, artificial patterns poised precisely at the decision boundary look ambiguous to human subjects and share aspects of both classes that are separated by that decision boundary. Third, adversarial images constructed by models with a small power of the interaction vertex, which are equivalent to DNNs with rectified linear units (ReLUs), fail to transfer to and fool the models with higher-order interactions. This opens up the possibility of using higher-order models for detecting and stopping malicious adversarial attacks. The results we present suggest that DAMs with higher-order energy functions are more robust to adversarial and rubbish inputs than DNNs with rectified linear units.
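
A minimal NumPy sketch of the dense associative memory mechanics the abstract refers to: a rectified-polynomial interaction function whose exponent n sets the order of the interaction vertex, the corresponding energy over stored patterns, and a simple classify-by-lowest-energy rule. The label-clamping scheme, function names, shapes, and random data are illustrative assumptions, not the authors' implementation or training procedure.

```python
import numpy as np

def F(x, n):
    # Rectified polynomial interaction function: x^n for x > 0, else 0.
    # Larger n corresponds to higher-order interactions between neurons.
    return np.where(x > 0.0, x, 0.0) ** n

def dam_energy(state, memories, n):
    # Dense associative memory energy: E(state) = -sum_mu F(xi_mu . state),
    # where each row of `memories` is a stored pattern xi_mu.
    return -np.sum(F(memories @ state, n))

def classify(x, memories, num_classes, n):
    # Each stored pattern is [features, one-hot label]. Clamp each candidate
    # label onto the input and return the class that gives the lowest energy.
    energies = [dam_energy(np.concatenate([x, onehot]), memories, n)
                for onehot in np.eye(num_classes)]
    return int(np.argmin(energies))

# Illustrative usage with random binary patterns (shapes and values are assumptions)
rng = np.random.default_rng(1)
features, num_classes, K = 16, 3, 30
stored = np.sign(rng.standard_normal((K, features)))
stored_labels = np.eye(num_classes)[rng.integers(0, num_classes, K)]
memories = np.hstack([stored, stored_labels])
print(classify(stored[0], memories, num_classes, n=20))
```

With a small exponent the model behaves like a conventional feature-matching network, while a large exponent makes the energy sharply peaked around the stored patterns, which is the regime in which the abstract's three properties are claimed.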

