Black-Box Optimization of Hadoop Parameters Using Derivative-Free Optimization

Author(s): Diego Desani, Veronica Gil-Costa, Cesar A. C. Marcondes, Hermes Senger
2019, Vol 29 (4), pp. 3012-3035
Author(s): Giampaolo Liuzzi, Stefano Lucidi, Francesco Rinaldi, Luis Nunes Vicente

Acta Numerica, 2019, Vol 28, pp. 287-404
Author(s): Jeffrey Larson, Matt Menickelly, Stefan M. Wild

In many optimization problems arising from scientific, engineering and artificial intelligence applications, objective and constraint functions are available only as the output of a black-box or simulation oracle that does not provide derivative information. Such settings necessitate the use of methods for derivative-free, or zeroth-order, optimization. We provide a review and perspectives on developments in these methods, with an emphasis on highlighting recent developments and on unifying treatment of such problems in the non-linear optimization and machine learning literature. We categorize methods based on assumed properties of the black-box functions, as well as features of the methods. We first overview the primary setting of deterministic methods applied to unconstrained, non-convex optimization problems where the objective function is defined by a deterministic black-box oracle. We then discuss developments in randomized methods, methods that assume some additional structure about the objective (including convexity, separability and general non-smooth compositions), methods for problems where the output of the black-box oracle is stochastic, and methods for handling different types of constraints.
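
To make the zeroth-order setting described above concrete, the following is a minimal sketch of compass (coordinate) search, one of the simplest deterministic direct-search methods for unconstrained problems: it polls the coordinate directions around the current point using only function values and halves the step size after an unsuccessful poll. The function name `compass_search`, the test objective, and all tolerances are assumptions chosen for illustration, not taken from the surveyed papers.

```python
# A minimal sketch of compass (coordinate/direct) search, a simple deterministic
# derivative-free method for unconstrained black-box minimization. The objective
# `f`, starting point, and tolerances below are illustrative assumptions.
import numpy as np

def compass_search(f, x0, step=1.0, tol=1e-6, max_evals=10_000):
    """Minimize f using only zeroth-order (function-value) information."""
    x = np.asarray(x0, dtype=float)
    fx = f(x)
    evals = 1
    n = x.size
    while step > tol and evals < max_evals:
        improved = False
        # Poll the 2n coordinate directions +/- e_i at the current step size.
        for i in range(n):
            for sign in (+1.0, -1.0):
                trial = x.copy()
                trial[i] += sign * step
                f_trial = f(trial)
                evals += 1
                if f_trial < fx:          # accept on simple decrease
                    x, fx = trial, f_trial
                    improved = True
                    break
            if improved:
                break
        if not improved:
            step *= 0.5                   # shrink the step after an unsuccessful poll
    return x, fx, evals

# Example black-box oracle (assumed for illustration): a shifted quadratic.
if __name__ == "__main__":
    f = lambda x: float(np.sum((x - 1.3) ** 2))
    x_best, f_best, used = compass_search(f, x0=np.zeros(3))
    print(x_best, f_best, used)
```

The survey covers far more sophisticated methods (model-based, randomized, stochastic-oracle, constrained), but this skeleton illustrates the common ingredient: progress is driven entirely by comparisons of sampled function values, with no derivative information.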


Author(s): Giuseppe Ughi, Vinayak Abrol, Jared Tanner

Abstract: We perform a comprehensive study of the performance of derivative-free optimization (DFO) algorithms for generating targeted black-box adversarial attacks on Deep Neural Network (DNN) classifiers, assuming the perturbation energy is bounded by an $\ell^\infty$ constraint and the number of queries to the network is limited. This paper considers four pre-existing state-of-the-art DFO-based algorithms along with a further algorithm built on BOBYQA, a model-based DFO method. We compare these algorithms in a variety of settings according to the fraction of images they successfully misclassify given a maximum number of queries to the DNN. The experiments reveal how the likelihood of finding an adversarial example depends on both the algorithm used and the setting of the attack: algorithms that limit the search for adversarial examples to the vertices of the $\ell^\infty$ constraint work particularly well in the absence of structural defenses, while the presented BOBYQA-based algorithm works better for especially small perturbation energies. This variance in performance highlights the importance of comparing new algorithms to the state-of-the-art in a variety of settings, and of testing the effectiveness of adversarial defenses with as wide a range of algorithms as possible.
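
To make the query-limited attack setting concrete, the sketch below shows a generic random-search loop over an $\ell^\infty$ ball of radius eps that spends a fixed query budget on a black-box loss. It is not the BOBYQA-based method, nor any of the four algorithms evaluated in the paper; the toy `target_loss` stand-in and every parameter name are assumptions for illustration. Starting at a vertex of the ball and flipping perturbation signs keeps iterates on the vertices, loosely mirroring the vertex-restricted search the abstract mentions.

```python
# A hedged sketch of a query-limited, l_inf-bounded black-box attack loop using
# simple random search. This is NOT the paper's BOBYQA-based method; the toy
# "classifier" loss and all parameters below are assumptions for illustration.
import numpy as np

def linf_random_search_attack(loss, x, eps=0.05, max_queries=1000, rng=None):
    """Search inside the l_inf ball of radius eps around x, spending at most
    max_queries calls to the black-box loss (lower loss = closer to the target)."""
    rng = np.random.default_rng(rng)
    delta = eps * rng.choice([-1.0, 1.0], size=x.shape)   # start at a vertex of the ball
    best = loss(np.clip(x + delta, 0.0, 1.0))
    queries = 1
    while queries < max_queries:
        # Flip the sign of the perturbation on a random subset of pixels.
        idx = rng.random(x.shape) < 0.05
        cand = np.where(idx, -delta, delta)
        val = loss(np.clip(x + cand, 0.0, 1.0))
        queries += 1
        if val < best:
            delta, best = cand, val
    return np.clip(x + delta, 0.0, 1.0), best, queries

# Toy stand-in (assumed) for "loss of a DNN w.r.t. a target class": distance of
# the mean pixel value from 0.8. A real attack would query the network instead.
if __name__ == "__main__":
    x = np.full((8, 8), 0.5)
    target_loss = lambda img: abs(img.mean() - 0.8)
    adv, final_loss, used = linf_random_search_attack(target_loss, x)
    print(final_loss, used)
```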

