Distributionally Adversarial Attack

Author(s):  
Tianhang Zheng ◽  
Changyou Chen ◽  
Kui Ren

Recent work on adversarial attacks has shown that the Projected Gradient Descent (PGD) adversary is a universal first-order adversary, and that a classifier adversarially trained with PGD is robust against a wide range of first-order attacks. It is worth noting that the original objective of an attack/defense model relies on a data distribution p(x), typically in the form of risk maximization/minimization, e.g., max/min E_{p(x)}[L(x)], with p(x) some unknown data distribution and L(·) a loss function. However, since PGD generates attack samples independently for each data sample based on L(·), the procedure does not necessarily lead to good generalization in terms of risk optimization. In this paper, we address this by proposing the distributionally adversarial attack (DAA), a framework that solves for an optimal adversarial-data distribution: a perturbed distribution that satisfies the L∞ constraint but deviates from the original data distribution so as to maximize the generalization risk. Algorithmically, DAA performs optimization on the space of potential data distributions, which introduces direct dependency between all data points when generating adversarial samples. DAA is evaluated by attacking state-of-the-art defense models, including the adversarially trained models provided by MIT MadryLab. Notably, DAA ranks first on MadryLab’s white-box leaderboards, reducing the accuracy of their secret MNIST model to 88.56% (with l∞ perturbations of ε = 0.3) and the accuracy of their secret CIFAR model to 44.71% (with l∞ perturbations of ε = 8.0). Code for the experiments is released at https://github.com/tianzheng4/Distributionally-Adversarial-Attack.
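For context, the per-sample attack that the abstract contrasts DAA with can be sketched as follows. This is a minimal illustration of standard PGD under an L∞ constraint, written against a plain logistic-regression loss so it runs stand-alone; it is not the DAA algorithm, and the loss, step size, and parameter names are illustrative assumptions rather than details from the paper.

```python
import numpy as np

def pgd_attack(x, y, w, b, eps=0.3, alpha=0.01, steps=40):
    """Per-sample PGD: projected gradient ascent on the loss of a
    logistic-regression classifier, constrained to an L-infinity ball
    of radius eps around the clean input x (y is a label in {-1, +1})."""
    x0 = x.copy()
    x_adv = x.copy()
    for _ in range(steps):
        # loss L(x) = log(1 + exp(-y * (w.x + b))); gradient w.r.t. the input
        margin = y * (x_adv @ w + b)
        grad = -y * w / (1.0 + np.exp(margin))
        # ascent step along the sign of the gradient (maximize the loss)
        x_adv = x_adv + alpha * np.sign(grad)
        # project back onto the L-infinity ball and the valid pixel range
        x_adv = np.clip(x_adv, x0 - eps, x0 + eps)
        x_adv = np.clip(x_adv, 0.0, 1.0)
    return x_adv
```

DAA, by contrast, optimizes over a perturbed data distribution rather than attacking each sample independently, which is what introduces the direct dependency between data points mentioned above.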

Information ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 96
Author(s):  
Lkhagvadorj Battulga ◽  
Aziz Nasridinov

Recently, the skyline query has attracted interest in a wide range of applications, from recommendation systems to computer networks. The skyline query is useful for obtaining the dominant data points of a given dataset. For a low-dimensional dataset, the skyline query may return a small number of skyline points. However, as the dimensionality of the dataset increases, the number of skyline points also increases. In other words, depending on the data distribution and dimensionality, most of the data points may become skyline points. With the emergence of big data applications, where data distribution and dimensionality are a significant problem, it becomes necessary to obtain representative skyline points from the resulting skyline points. Several methods have focused on extracting representative skyline points, with varying success. However, existing methods require re-computation whenever the global threshold changes. Moreover, in certain cases, the resulting representative skyline points may not satisfy a user with multiple preferences. Thus, in this paper, we propose a new representative skyline query processing method, called representative skyline cluster (RSC), which solves the problems of the existing methods. Our method utilizes hierarchical agglomerative clustering to find the exact representative skyline points, enabling us to reduce the re-computation time significantly. We demonstrate the superiority of the proposed method over existing state-of-the-art methods through various types of experiments.
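For readers unfamiliar with the dominance notion underlying skyline queries: a point belongs to the skyline if no other point is at least as good in every dimension and strictly better in at least one. The brute-force sketch below (assuming smaller values are better) illustrates that definition only; it is not the RSC method, which additionally applies hierarchical agglomerative clustering to the skyline points.

```python
import numpy as np

def dominates(p, q):
    # p dominates q if p <= q in every dimension and p < q in at least one
    return bool(np.all(p <= q) and np.any(p < q))

def skyline(points):
    """Brute-force O(n^2) skyline: keep every point that no other point dominates."""
    return np.array([q for q in points
                     if not any(dominates(p, q) for p in points
                                if not np.array_equal(p, q))])

# 2-D example with smaller-is-better on both axes
data = np.array([[1, 5], [2, 3], [4, 4], [3, 1], [5, 5]])
print(skyline(data))  # [[1 5] [2 3] [3 1]]
```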


2021 ◽  
pp. 1-17
Author(s):  
Sebastian Köhler

Abstract Quasi-realism prominently figures in the expressivist research program. However, many complain that it has become increasingly unclear what exactly quasi-realism involves. This paper offers clarification. It argues that we need to distinguish two distinctive views that might be and have been pursued under the label “quasi-realism”: conciliatory expressivism and quasi-realism properly so-called. Of these, only conciliatory expressivism is a genuinely meta-ethical project, while quasi-realism is a first-order normative view. This paper demonstrates the fruitfulness of these clarifications by using them to address Terence Cuneo’s recent challenge that quasi-realist expressivists lack the resources to plausibly accommodate certain sorts of data points.


2020 ◽  
Vol 8 (1) ◽  
pp. 45-69
Author(s):  
Eckhard Liebscher ◽  
Wolf-Dieter Richter

Abstract We prove and describe in great detail a general method for constructing a wide range of multivariate probability density functions. We introduce probabilistic models for a large variety of clouds of multivariate data points. In the present paper, the focus is on star-shaped distributions of arbitrary dimension, where, in the case of spherical distributions, dependence is modeled by a non-Gaussian density generating function.
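As a rough illustration of the density-generating-function idea (this is a generic sketch of a spherically contoured density, not the authors' star-shaped construction): one can set f(x) = c · g(‖x‖) with a non-Gaussian generator g and obtain the normalizing constant from the radial integral. The power-exponential generator below is an arbitrary choice for illustration.

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma as gamma_fn

def spherical_density(d, g):
    """Build a spherically contoured density f(x) = c * g(||x||) on R^d.
    Normalization: 1 = c * |S^{d-1}| * int_0^inf r^(d-1) g(r) dr."""
    surface = 2.0 * np.pi ** (d / 2) / gamma_fn(d / 2)        # surface area of the unit sphere
    radial, _ = quad(lambda r: r ** (d - 1) * g(r), 0.0, np.inf)
    c = 1.0 / (surface * radial)
    return lambda x: c * g(np.linalg.norm(x))

# Illustrative non-Gaussian generator: g(r) = exp(-r**1.5)
f = spherical_density(d=2, g=lambda r: np.exp(-r ** 1.5))
print(f(np.array([0.5, -0.2])))
```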


Symmetry ◽  
2021 ◽  
Vol 13 (3) ◽  
pp. 428
Author(s):  
Hyun Kwon ◽  
Jun Lee

This paper presents research focusing on visualization and pattern recognition based on computer science. Although deep neural networks demonstrate satisfactory performance in image and voice recognition, as well as pattern analysis and intrusion detection, they perform poorly on adversarial examples. Adding a certain degree of noise to the original data can cause adversarial examples to be misclassified by deep neural networks, even though they still appear normal to humans. In this paper, a robust diversity adversarial training method against adversarial attacks is demonstrated. With this approach, the target model becomes more robust to unknown adversarial examples, because it is trained on a variety of adversarial samples. In the experiments, TensorFlow was employed as the deep learning framework, while MNIST and Fashion-MNIST were used as the experimental datasets. The results reveal that the diversity training method lowers the attack success rate by an average of 27.2% and 24.3% for various adversarial examples, while maintaining accuracy rates of 98.7% and 91.5% on the original MNIST and Fashion-MNIST data, respectively.
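The training idea can be sketched roughly as follows. This is a minimal interpretation only (training on adversarial examples crafted at several perturbation strengths with FGSM), not the authors' exact diversity method; the Keras model (assumed to output softmax probabilities), the epsilon values, and the equal weighting of batches are illustrative assumptions.

```python
import tensorflow as tf

def fgsm(model, x, y, eps):
    """Craft FGSM adversarial examples: one signed-gradient step of size eps."""
    x = tf.convert_to_tensor(x)
    with tf.GradientTape() as tape:
        tape.watch(x)
        loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x, training=False))
    grad = tape.gradient(loss, x)
    return tf.clip_by_value(x + eps * tf.sign(grad), 0.0, 1.0)

def diverse_adversarial_train_step(model, optimizer, x, y, eps_list=(0.05, 0.1, 0.2)):
    """One training step on a mix of the clean batch and several adversarial
    variants of it, one per perturbation strength in eps_list."""
    batches = [tf.convert_to_tensor(x)] + [fgsm(model, x, y, eps) for eps in eps_list]
    with tf.GradientTape() as tape:
        losses = [tf.reduce_mean(
                      tf.keras.losses.sparse_categorical_crossentropy(y, model(xb, training=True)))
                  for xb in batches]
        loss = tf.add_n(losses) / len(batches)
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```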


Author(s):  
Damian Clarke ◽  
Joseph P. Romano ◽  
Michael Wolf

When considering multiple-hypothesis tests simultaneously, standard statistical techniques will lead to overrejection of null hypotheses unless the multiplicity of the testing framework is explicitly considered. In this article, we discuss the Romano–Wolf multiple-hypothesis correction and document its implementation in Stata. The Romano–Wolf correction (asymptotically) controls the familywise error rate, that is, the probability of rejecting at least one true null hypothesis among a family of hypotheses under test. This correction is considerably more powerful than earlier multiple-testing procedures, such as the Bonferroni and Holm corrections, given that it takes into account the dependence structure of the test statistics by resampling from the original data. We describe a command, rwolf, that implements this correction and provide several examples based on a wide range of models. We document and discuss the performance gains from using rwolf over other multiple-testing procedures that control the familywise error rate.
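To make the familywise-error-rate idea concrete, here is a small sketch of the Holm step-down adjustment, one of the earlier corrections the abstract compares against. It is not the Romano–Wolf procedure itself, which additionally exploits the dependence structure of the test statistics by resampling.

```python
import numpy as np

def holm_adjust(pvals):
    """Holm step-down adjusted p-values; rejecting those below alpha controls
    the familywise error rate without modelling dependence between tests."""
    p = np.asarray(pvals, dtype=float)
    m = p.size
    order = np.argsort(p)            # indices of p-values, smallest first
    adjusted = np.empty(m)
    running_max = 0.0
    for rank, idx in enumerate(order):
        # scale the (rank+1)-th smallest p-value by (m - rank), keep monotone
        running_max = max(running_max, (m - rank) * p[idx])
        adjusted[idx] = min(1.0, running_max)
    return adjusted

print(holm_adjust([0.01, 0.04, 0.03, 0.20]))  # adjusted p-values in the original order
```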


1994 ◽  
Vol 29 (1) ◽  
pp. 43-55 ◽  
Author(s):  
M Raoof ◽  
I Kraincanic

Using theoretical parametric studies covering a wide range of cable (and wire) diameters and lay angles, the range of validity of various approaches used for analysing helical cables is critically examined. Numerical results strongly suggest that for multi-layered steel strands with small wire/cable diameter ratios, the bending and torsional stiffnesses of the individual wires may safely be ignored when calculating the 2 × 2 matrix of strand axial/torsional stiffnesses. However, such bending and torsional wire stiffnesses are shown to be first-order parameters in analysing the overall axial and torsional stiffnesses of, say, seven-wire strands, especially under free-fixed end conditions with respect to torsional movements. Interwire contact deformations are shown to be of great importance in evaluating the axial and torsional stiffnesses of large-diameter multi-layered steel strands. Their importance diminishes as the number of wires associated with smaller-diameter cables decreases. Using a modified version of a previously reported theoretical model for analysing multi-layered instrumentation cables, the importance of allowing for the influence of contact deformations in compliant layers on overall cable characteristics, such as axial or torsional stiffnesses, is demonstrated by theoretical numerical results. In particular, non-Hertzian contact formulations are used to obtain the interlayer compliances in instrumentation cables, in preference to a previously reported model employing Hertzian theory with its associated limitations.


2015 ◽  
Vol 12 (3) ◽  
pp. 835-844 ◽  
Author(s):  
P. J. Rayner ◽  
A. Stavert ◽  
M. Scholze ◽  
A. Ahlström ◽  
C. E. Allison ◽  
...  

Abstract. We analyse global and regional changes in CO2 fluxes using two simple models: an airborne fraction of anthropogenic emissions and a linear relationship with CO2 concentrations. We show that both models are able to fit the non-anthropogenic (hereafter natural) flux over the length of the atmospheric concentration record. Analysis of the linear model (including its uncertainties) suggests no significant decrease in the response of the natural carbon cycle; recent data rather point to an increase. We apply the same linear diagnostic to fluxes from atmospheric inversions. Flux responses show clear regional and seasonal patterns, driven by terrestrial uptake in the northern summer. Ocean fluxes show little or no linear response. Terrestrial models show clear responses, agreeing globally with the inversion responses; however, the spatial structure is quite different, with dominant responses in the tropics rather than the northern extratropics.
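For concreteness, the two diagnostics named above amount to very simple regressions; the sketch below fits both by least squares. The arrays are random placeholders for illustration only and bear no relation to the data or results in the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# Placeholder annual series (illustrative only)
emissions = rng.uniform(4.0, 10.0, 50)                 # anthropogenic emissions E_t
conc = 315.0 + np.cumsum(rng.uniform(0.5, 2.5, 50))    # CO2 concentration C_t
nat_flux = rng.normal(-3.0, 0.5, 50)                   # diagnosed natural flux F_t

# Diagnostic 1: airborne-fraction form, natural flux proportional to emissions
k = np.linalg.lstsq(emissions[:, None], nat_flux, rcond=None)[0][0]

# Diagnostic 2: natural flux as a linear function of concentration, F_t = a + b * C_t
b, a = np.polyfit(conc, nat_flux, 1)
print(k, a, b)
```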


1980 ◽  
Vol 12 (3) ◽  
pp. 727-745 ◽  
Author(s):  
D. P. Gaver ◽  
P. A. W. Lewis

It is shown that there is an innovation process {∊_n} such that the sequence of random variables {X_n} generated by the linear, additive first-order autoregressive scheme X_n = pX_{n-1} + ∊_n are marginally distributed as gamma(λ, k) variables if 0 ≦ p ≦ 1. This first-order autoregressive gamma sequence is useful for modelling a wide range of observed phenomena. Properties of sums of random variables from this process are studied, as well as Laplace–Stieltjes transforms of adjacent variables and joint moments of variables with different separations. The process is not time-reversible and has a zero-defect which makes parameter estimation straightforward. Other positive-valued variables generated by the first-order autoregressive scheme are studied, as well as extensions of the scheme for generating sequences with given marginal distributions and negative serial correlations.
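A minimal simulation sketch of this scheme is given below. It uses the compound-Poisson form of the innovation, drawing ∊_n as a Poisson(−k ln p) number of exponential(λ) terms, each damped by p raised to an independent uniform power, which reproduces the required innovation Laplace transform for 0 < p < 1; the parameter values are arbitrary, chosen only for illustration.

```python
import numpy as np

def gamma_ar1(n, k=2.0, lam=1.0, p=0.7, seed=None):
    """Simulate X_n = p * X_{n-1} + eps_n with gamma(shape=k, rate=lam)
    marginals for 0 < p < 1, drawing eps_n from its compound-Poisson
    representation (illustrative sketch; parameters are arbitrary)."""
    rng = np.random.default_rng(seed)
    x = np.empty(n)
    x[0] = rng.gamma(shape=k, scale=1.0 / lam)      # start in the stationary marginal
    for t in range(1, n):
        m = rng.poisson(-k * np.log(p))             # number of innovation terms
        eps = np.sum(p ** rng.uniform(size=m) * rng.exponential(scale=1.0 / lam, size=m))
        x[t] = p * x[t - 1] + eps
    return x

series = gamma_ar1(10_000, seed=1)
print(series.mean())   # should be close to the marginal mean k / lam = 2.0
```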


2021 ◽  
Vol 13 (19) ◽  
pp. 3796
Author(s):  
Lei Fan ◽  
Yuanzhi Cai

Laser scanning is a popular means of acquiring indoor scene data of buildings for a wide range of applications concerning the indoor environment. During data acquisition, unwanted data points beyond the indoor space of interest can also be recorded, owing to the presence of openings such as windows and doors on walls. For better visualization and further modeling, it is beneficial to filter out those data points, which in practice is often done manually. To automate this process, an efficient image-based filtering approach was explored in this research. In this approach, a binary mask image is created and updated through mathematical morphology operations, hole filling and connectivity analysis. The final mask is used to remove the data points located outside the indoor space of interest. The application of the approach to several point cloud datasets confirms its ability to effectively retain the data points in the indoor space of interest, with an average precision of 99.50%. The application cases also demonstrate the computational efficiency of the proposed approach (at most 0.53 s).
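A rough sketch of this kind of mask pipeline is given below, using scipy.ndimage on a plan-view occupancy image. It is a generic reconstruction for illustration only; the grid resolution, structuring element, and the choice to keep the largest connected region are assumptions, not the settings used in the paper.

```python
import numpy as np
from scipy import ndimage

def indoor_mask_filter(points, cell=0.05):
    """Filter a 3-D point cloud via a 2-D binary mask: rasterize the plan view,
    close small gaps, fill holes, keep the largest connected region, and drop
    points whose cell falls outside that region."""
    xy = points[:, :2]
    idx = np.floor((xy - xy.min(axis=0)) / cell).astype(int)   # grid cell of each point
    mask = np.zeros(idx.max(axis=0) + 1, dtype=bool)
    mask[idx[:, 0], idx[:, 1]] = True                           # occupancy image

    mask = ndimage.binary_closing(mask, structure=np.ones((3, 3)))   # morphology
    mask = ndimage.binary_fill_holes(mask)                           # hole filling
    labels, n = ndimage.label(mask)                                  # connectivity analysis
    if n > 1:
        sizes = ndimage.sum(mask, labels, index=range(1, n + 1))
        mask = labels == (1 + int(np.argmax(sizes)))             # largest component only

    return points[mask[idx[:, 0], idx[:, 1]]]
```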

