# Convex Hull: Recently Published Documents

Total documents: 1658 (last five years: 402)

## H-index

45 (last five years: 9)

2022 · Vol 6 (POPL) · pp. 1-33
Author(s): Mark Niklas Müller, Gleb Makarchuk, Gagandeep Singh, Markus Püschel, Martin Vechev

Formal verification of neural networks is critical for their safe adoption in real-world applications. However, designing a precise and scalable verifier which can handle different activation functions, realistic network architectures and relevant specifications remains an open and difficult challenge. In this paper, we take a major step forward in addressing this challenge and present a new verification framework, called PRIMA. PRIMA is both (i) general: it handles any non-linear activation function, and (ii) precise: it computes precise convex abstractions involving multiple neurons via novel convex hull approximation algorithms that leverage concepts from computational geometry. The algorithms have polynomial complexity, yield fewer constraints, and minimize precision loss. We evaluate the effectiveness of PRIMA on a variety of challenging tasks from prior work. Our results show that PRIMA is significantly more precise than the state-of-the-art, verifying robustness to input perturbations for up to 20%, 30%, and 34% more images than existing work on ReLU-, Sigmoid-, and Tanh-based networks, respectively. Further, PRIMA enables, for the first time, the precise verification of a realistic neural network for autonomous driving within a few minutes.
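PRIMA's multi-neuron abstractions build on classical convex-hull computation from computational geometry. As a minimal, self-contained illustration of that underlying primitive (not PRIMA's polynomial-time approximation algorithm, whose details are in the paper), here is Andrew's monotone chain for the exact convex hull of 2D points:

```python
# Exact 2D convex hull via Andrew's monotone chain, O(n log n).
# Illustrative only -- PRIMA uses approximate multi-neuron convex
# relaxations in higher dimensions, not this exact 2D routine.

def cross(o, a, b):
    """Cross product of vectors OA and OB; > 0 means a counter-clockwise turn."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    """Return hull vertices in counter-clockwise order, starting from the lowest-leftmost point."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()                # drop points that make a clockwise turn
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)
    return lower[:-1] + upper[:-1]     # endpoints are shared, so drop them once

hull = convex_hull([(0, 0), (1, 0), (1, 1), (0, 1), (0.5, 0.5)])
# the interior point (0.5, 0.5) is discarded; only the four corners remain
```

Verifiers such as PRIMA face the harder problem of approximating hulls of many-neuron activation sets, where exact computation is intractable; the abstract's "fewer constraints, minimal precision loss" trade-off refers to that approximation step.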

Author(s): Abel Díaz-González, Héctor Pijeira-Cabrera, Javier Quintero-Roba

Abstract: The first part of this paper complements previous results on the characterization of polynomials of least deviation from zero in the Sobolev $p$-norm ($1<p<\infty$) for the case $p=1$. Some relevant examples are indicated. The second part deals with the location of the zeros of polynomials of least deviation in a discrete Sobolev $p$-norm. The asymptotic distribution of zeros is established under general conditions. Under some order restriction in the discrete part, we prove that the $n$-th polynomial of least deviation has at least $n-\mathbf{d}^*$ zeros on the convex hull of the support of the measure, where $\mathbf{d}^*$ denotes the number of terms in the discrete part.

2022 · Vol 505 (2) · pp. 125652
Author(s): Artur Kulykov, Olha Shevchenko

2022 · pp. 1-1
Author(s): Nicolas Stevens, Anthony Papavasiliou

2022 · pp. 100021

Author(s): Zonghao Guo, Xiaosong Zhang, Chang Liu, Xiangyang Ji, Jianbin Jiao, ...

2021 · Vol 19 (6) · pp. 633-643
Author(s): Wayan Firdaus Mahmudy, Candra Dewi, Rio Arifando, Muh Arif Rahman

Patchouli plants are the main raw material for essential oils in Indonesia. Patchouli leaves vary widely in physical form depending on where they are grown, which makes the variety difficult to recognize; farmers therefore need expert advice, and since experts in this field are few, a technology for identifying patchouli varieties is required. In this study, the identification model is built from a combination of leaf morphological features, texture features extracted with wavelets, and shape features extracted with the convex hull. The extracted features serve as training input for three classifiers from the family of artificial neural network algorithms: (1) a feedforward neural network trained with backpropagation, (2) learning vector quantization (LVQ), and (3) an extreme learning machine (ELM). The synthetic minority over-sampling technique (SMOTE) is applied to address class imbalance in the patchouli variety dataset. Combining all three feature types yields an average identification accuracy of 72.61%, higher than using morphological features alone (58.68%), wavelet features alone (59.03%), or both (67.25%). The study also shows that applying SMOTE to the imbalanced data increases accuracy, reaching a highest average accuracy of 88.56%.
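SMOTE generates synthetic minority-class samples by interpolating between a minority point and one of its nearest minority neighbours. The following is a minimal pure-Python sketch of that idea, not the authors' implementation (which would typically use a library such as imbalanced-learn); the function name and parameters are illustrative:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Generate n_new synthetic samples from minority-class feature vectors.

    minority: list of feature vectors (lists of floats), all the same length.
    Each synthetic sample lies on the segment between a random minority point
    and one of its k nearest minority neighbours (Euclidean distance).
    """
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # k nearest minority neighbours of x, excluding x itself
        neighbours = sorted(
            (p for p in minority if p is not x),
            key=lambda p: sum((a - b) ** 2 for a, b in zip(x, p)),
        )[:k]
        nb = rng.choice(neighbours)
        t = rng.random()  # interpolation factor in [0, 1)
        synthetic.append([a + t * (b - a) for a, b in zip(x, nb)])
    return synthetic
```

Because each new sample is a convex combination of two existing minority points, the synthetic data stays inside the convex hull of the minority class, which is what lets the classifier see a denser but plausible minority region.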

Author(s): Karl-Hermann Neeb, Daniel Oeh

Abstract: In this note, we study, in a finite-dimensional Lie algebra $\mathfrak{g}$, the set of all elements $x$ for which the closed convex hull of the adjoint orbit contains no affine lines; this contains in particular the elements whose adjoint orbit generates a pointed convex cone $C_x$. Assuming that $\mathfrak{g}$ is admissible, i.e., contains a generating invariant convex subset not containing affine lines, we obtain a natural characterization of such elements, also for non-reductive Lie algebras. Motivated by the concept of standard (Borchers) pairs in QFT, we also study pairs $(x, h)$ of Lie algebra elements satisfying $[h,x]=x$ for which $C_x$ is pointed. Given $x$, we show that such elements $h$ can be constructed in such a way that $\mathrm{ad}\,h$ defines a 5-grading, and we characterize the cases where we even obtain a 3-grading.

Author(s): Jinhak Kim, Mohit Tawarmalani, Jean-Philippe P. Richard

We develop techniques to convexify a set that is invariant under permutation and/or change of sign of variables and discuss applications of these results. First, we convexify the intersection of the unit ball of a permutation and sign-invariant norm with a cardinality constraint. This gives a nonlinear formulation for the feasible set of sparse principal component analysis (PCA) and an alternative proof of the K-support norm. Second, we characterize the convex hull of sets of matrices defined by constraining their singular values. As a consequence, we generalize an earlier result that characterizes the convex hull of rank-constrained matrices whose spectral norm is below a given threshold. Third, we derive convex and concave envelopes of various permutation-invariant nonlinear functions and their level sets over hypercubes, with congruent bounds on all variables. Finally, we develop new relaxations for the exterior product of sparse vectors. Using these relaxations for sparse PCA, we show that our relaxation closes 98% of the gap left by a classical semidefinite programming relaxation for instances where the covariance matrices are of dimension up to 50 × 50.