Derivative-Free Optimization
Recently Published Documents


TOTAL DOCUMENTS: 216 (five years: 73)
H-INDEX: 22 (five years: 4)

2021
Author(s): Abhishek Dutta

Abstract COVID-19, together with its variants, has caused an unprecedented amount of mental and economic turmoil, with ever-increasing fatalities and no proven therapies in sight. The healthcare industry is racing to find a cure, with a multitude of clinical trials underway to assess the efficacy of repurposed antivirals; however, the much-needed insights into the dynamics of SARS-CoV-2 pathogenesis and the corresponding pharmacology of antivirals are lacking. This paper introduces systematic pathological model learning of COVID-19 dynamics, followed by multi-objective drug rescheduling based on derivative-free optimization. The pathological model, learnt from clinical data of severe COVID-19 patients treated with Remdesivir, could additionally predict the immune T-cell response, and the rescheduling yielded a dramatic reduction in Remdesivir dose and dosing frequency, lowering toxicity while maintaining high virological efficacy.
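As a concrete illustration of such a pipeline, the sketch below tunes a dose and dosing interval with SciPy's derivative-free Nelder-Mead method against a toy viral-load model. The dynamics, penalty weights, and bounds are illustrative assumptions, not the authors' fitted COVID-19 pathology model; the weighted sum is one simple way to scalarize the multi-objective efficacy/toxicity trade-off.

```python
# Hedged sketch: derivative-free drug rescheduling on a TOY viral-load
# model (all dynamics and weights are assumptions, not the paper's model).
import numpy as np
from scipy.optimize import minimize

def viral_load(dose, interval, days=10.0, dt=0.1):
    """Toy exponential viral growth inhibited by drug concentration."""
    v, c, t, total = 1.0, 0.0, 0.0, 0.0
    while t < days:
        if t % interval < dt:                          # dosing event
            c += dose
        c *= np.exp(-0.3 * dt)                         # drug elimination
        v *= np.exp((0.5 - 0.8 * c / (c + 1.0)) * dt)  # inhibited growth
        total += v * dt
        t += dt
    return total                                       # cumulative viral burden

def objective(x):
    dose, interval = x
    if dose <= 0.0 or interval < 0.5:                  # keep search feasible
        return 1e6
    efficacy = viral_load(dose, interval)              # lower is better
    toxicity = dose * (10.0 / interval)                # crude exposure proxy
    return efficacy + 5.0 * toxicity                   # weighted-sum scalarization

# Nelder-Mead uses objective values only -- no gradients required.
res = minimize(objective, x0=[2.0, 2.0], method="Nelder-Mead")
print("dose=%.2f, interval=%.2f days" % tuple(res.x))
```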


2021
Author(s): Faruk Alpak, Yixuan Wang, Guohua Gao, Vivek Jain

Abstract Recently, a novel distributed quasi-Newton (DQN) derivative-free optimization (DFO) method was developed for generic reservoir performance optimization problems, including well-location optimization (WLO) and well-control optimization (WCO). DQN is designed to effectively locate multiple local optima of highly nonlinear optimization problems. However, its performance had neither been validated on realistic applications nor compared against other DFO methods. We have integrated DQN into a versatile field-development optimization platform designed specifically for iterative workflows enabled through distributed-parallel flow simulations. DQN is benchmarked against alternative DFO techniques, namely the Broyden–Fletcher–Goldfarb–Shanno (BFGS) method hybridized with Direct Pattern Search (BFGS-DPS), Mesh Adaptive Direct Search (MADS), Particle Swarm Optimization (PSO), and the Genetic Algorithm (GA). DQN is a multi-thread optimization method that distributes an ensemble of optimization tasks among multiple high-performance-computing nodes; thus, it can locate multiple optima of the objective function in parallel within a single run. Simulation results computed by one DQN optimization thread are shared with the others by updating a unified set of training data points composed of the responses (implicit variables) of all successful simulation jobs. The sensitivity matrix at the current best solution of each optimization thread is approximated by a linear-interpolation technique using all or a subset of the training-data points. The gradient of the objective function is computed analytically from the estimated sensitivities of the implicit variables with respect to the explicit variables, and the Hessian matrix is then updated using the quasi-Newton method. A new search point for each thread is obtained by solving a trust-region subproblem for the next iteration. In contrast, the other DFO methods rely on a single-thread optimization paradigm that can only locate a single optimum; to locate multiple optima, one must repeat the optimization multiple times from different initial guesses, and simulation results generated by a single-thread task cannot be shared with other tasks. Benchmarking results are presented for synthetic yet challenging WLO and WCO problems, and the DQN method is then field-tested on two realistic applications. DQN identifies the global optimum with the fewest simulations and the shortest run time on a synthetic problem with a known solution. On the other benchmarking problems, without known solutions, DQN identified comparable local optima with notably fewer simulations than the alternative techniques. The field-testing results reinforce the auspicious computational attributes of DQN. Overall, the results indicate that DQN is a novel and effective parallel algorithm for field-scale development optimization problems.
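A single-process sketch of the core idea follows, under loudly stated assumptions: several optimization "threads" share one pool of evaluated points, each thread estimates a gradient via a least-squares linear fit through nearby pool points (a stand-in for the sensitivity-matrix interpolation), and takes a damped descent step (a stand-in for the Hessian update and trust-region subproblem). The toy objective replaces the reservoir simulator; nothing here is the authors' implementation.

```python
# Toy single-process sketch of shared-pool multi-thread DFO (assumptions
# throughout; the real DQN uses trust regions and quasi-Newton Hessians).
import numpy as np

rng = np.random.default_rng(0)

def f(x):  # toy multimodal objective standing in for a reservoir simulator
    return float(np.sum(x**2) + 2.0 * np.sum(np.sin(3.0 * x)))

dim, n_threads, iters = 2, 4, 30
pool_X, pool_y = [], []                        # shared training data
xs = [rng.uniform(-3, 3, dim) for _ in range(n_threads)]

def evaluate(x):
    y = f(x)
    pool_X.append(x.copy()); pool_y.append(y)  # share result with all threads
    return y

def est_gradient(x, k=8):
    """Least-squares linear fit through the k nearest shared points."""
    X, y = np.array(pool_X), np.array(pool_y)
    idx = np.argsort(np.linalg.norm(X - x, axis=1))[:k]
    A = np.hstack([X[idx] - x, np.ones((len(idx), 1))])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[:dim]                          # slope ~ local gradient

for x in xs:                                   # seed the shared pool
    evaluate(x)
for _ in range(iters):
    for i in range(n_threads):
        g = est_gradient(xs[i])
        cand = xs[i] - 0.1 * g                 # damped step (trust-region proxy)
        if evaluate(cand) < f(xs[i]):          # accept only improvements
            xs[i] = cand

vals = [f(x) for x in xs]
i_best = int(np.argmin(vals))
print("best value %.3f at %s" % (vals[i_best], np.round(xs[i_best], 3)))
```

Each "thread" here can end at a different local optimum, which is the multi-optimum behavior the abstract describes.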


Author(s): Nikolaos Ploskas, Nikolaos V. Sahinidis

Abstract This paper reviews the literature on algorithms for solving bound-constrained mixed-integer derivative-free optimization problems and presents a systematic comparison of available implementations of these algorithms on a large collection of test problems. Thirteen derivative-free optimization solvers are compared using a test set of 267 problems. The testbed includes: (i) pure-integer and mixed-integer problems, and (ii) small, medium, and large problems covering a wide range of characteristics found in applications. We evaluate the solvers according to their ability to find a near-optimal solution, find the best solution among currently available solvers, and improve a given starting point. Computational results show that the ability of all these solvers to obtain good solutions diminishes with increasing problem size, but the solvers evaluated collectively found optimal solutions for 93% of the problems in our test set. The open-source solvers MISO and NOMAD were the best performers among all solvers tested. MISO outperformed all other solvers on large and binary problems, while NOMAD was the best performer on mixed-integer, non-binary discrete, small, and medium-sized problems.
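To make the problem class concrete, here is a minimal bound-constrained mixed-integer DFO routine: a plain coordinate pattern search that moves integer variables on a fixed unit mesh while the continuous mesh shrinks. It is a didactic sketch with a made-up objective, not MISO or NOMAD.

```python
# Didactic mixed-integer pattern search (NOT MISO/NOMAD); the objective,
# bounds, and variable types below are made-up assumptions.
import numpy as np

def f(x):  # x[0] is integer-valued, x[1] is continuous
    return (x[0] - 3) ** 2 + (x[1] - 0.7) ** 2 + 0.1 * x[0] * x[1]

lo, hi = np.array([0.0, -2.0]), np.array([10.0, 2.0])
is_int = np.array([True, False])

x = np.array([5.0, 0.0])
step = np.array([1.0, 0.5])                # unit mesh on the integer axis
fx = f(x)
while step[~is_int].max() > 1e-4:
    improved = False
    for i in range(len(x)):                # poll +/- along each coordinate
        for s in (+step[i], -step[i]):
            cand = x.copy()
            cand[i] = np.clip(cand[i] + s, lo[i], hi[i])
            fc = f(cand)
            if fc < fx:
                x, fx, improved = cand, fc, True
    if not improved:
        step[~is_int] /= 2.0               # refine continuous mesh only
print("minimizer ~", x, "value %.4f" % fx)
```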


2021
Author(s): Raviv Gal, Eldad Haber, Brian Irwin, Marwa Mouallem, Bilal Saleh, ...

2021
Vol 7, pp. e693
Author(s): Runze Yang, Teng Long

In recent years, graph convolutional networks (GCNs) have emerged rapidly due to their excellent performance in graph data processing. However, recent research shows that GCNs are vulnerable to adversarial attacks: an attacker can maliciously modify the edges or nodes of the graph to mislead the model's classification of target nodes, or even degrade the model's overall classification performance. In this paper, we first propose a black-box adversarial attack framework based on derivative-free optimization (DFO) that generates graph adversarial examples without using gradients and makes it convenient to plug in advanced DFO algorithms. Second, we implement a direct attack algorithm (DFDA) on top of the framework using the Nevergrad library. Additionally, we tame the large search space by redesigning the perturbation vector with a constraint on its size. Finally, we conduct a series of experiments on different datasets and parameters. The results show that DFDA outperforms Nettack in most cases and can achieve an average attack success rate of more than 95% on the Cora dataset when perturbing at most eight edges. This demonstrates that our framework can fully exploit the potential of DFO methods in node-classification adversarial attacks.
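A hedged sketch of such an attack loop is shown below using Nevergrad, the library named in the abstract: the optimizer searches a relaxed perturbation vector (one entry per candidate edge) under a flip budget, querying only scalar model outputs. The victim GCN is stubbed out by a hypothetical `target_score` function with fixed random weights; a real attack would rebuild the graph with the flipped edges and query the model there.

```python
# Sketch of a gradient-free (black-box) edge-flip attack; the victim model
# is a STUB, and all sizes/budgets are illustrative assumptions.
import numpy as np
import nevergrad as ng

rng = np.random.default_rng(1)
n_candidate_edges, max_flips = 20, 8       # flip at most 8 of 20 edges
w = rng.standard_normal(n_candidate_edges) # fixed hidden "model" weights

def target_score(flips):
    """Stand-in for the victim GCN's confidence in the true class of the
    target node; a real attack would run the model on the perturbed graph."""
    return float(1.0 / (1.0 + np.exp(-(2.0 - w @ flips))))

def loss(x):
    flips = (x > 0.5).astype(float)        # relaxed vector -> binary flips
    if flips.sum() > max_flips:            # enforce the perturbation budget
        return 10.0 + float(flips.sum())
    return target_score(flips)             # attacker minimizes confidence

param = ng.p.Array(init=0.5 * np.ones(n_candidate_edges)).set_bounds(0.0, 1.0)
opt = ng.optimizers.OnePlusOne(parametrization=param, budget=300)
rec = opt.minimize(loss)                   # queries loss values only
print("edges to flip:", np.where(rec.value > 0.5)[0])
```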


2021
Author(s): Muhammad Jalil Ahmad, Korhan Günel

This study presents an alternative numerical approach for solving second-order differential equations with Dirichlet boundary conditions. The Mesh Adaptive Direct Search (MADS) algorithm is adopted to train the feedforward neural network used in this approach. Because MADS is a derivative-free optimization algorithm, it reduces the time-consuming workload of the training stage. The results obtained with this approach are also compared against the Generalized Pattern Search (GPS) algorithm.
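The trial-solution construction behind such approaches can be sketched in a few lines. SciPy ships no MADS implementation, so Nelder-Mead stands in here as the derivative-free trainer; the network size, collocation grid, and the test problem u''(x) = -pi^2 sin(pi x) with u(0) = u(1) = 0 (exact solution sin(pi x)) are all illustrative assumptions.

```python
# Derivative-free training of a tiny network to solve a Dirichlet BVP
# (Nelder-Mead stands in for MADS; all settings are assumptions).
import numpy as np
from scipy.optimize import minimize

H = 5                                      # hidden units
xs = np.linspace(0.0, 1.0, 21)             # collocation points
h = 1e-3                                   # finite-difference step

def net(p, x):
    w, b, v = p[:H], p[H:2*H], p[2*H:]
    return np.tanh(np.outer(x, w) + b) @ v

def trial(p, x):
    # u_t(x) = x(1-x) N(x) satisfies u(0) = u(1) = 0 by construction
    return x * (1.0 - x) * net(p, x)

def residual_loss(p):
    # central-difference u'' at the collocation points
    upp = (trial(p, xs + h) - 2 * trial(p, xs) + trial(p, xs - h)) / h**2
    return np.mean((upp + np.pi**2 * np.sin(np.pi * xs)) ** 2)

rng = np.random.default_rng(0)
res = minimize(residual_loss, rng.normal(scale=0.5, size=3 * H),
               method="Nelder-Mead",
               options={"maxiter": 20000, "xatol": 1e-8, "fatol": 1e-10})
err = np.max(np.abs(trial(res.x, xs) - np.sin(np.pi * xs)))
print("max abs error vs exact solution: %.3e" % err)
```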


Author(s): Ke Xue, Chao Qian, Ling Xu, Xudong Fei

Non-convex optimization is often involved in artificial intelligence tasks; such problems may have many saddle points and are NP-hard to solve. Evolutionary algorithms (EAs) are general-purpose derivative-free optimization algorithms with a good ability to find the global optimum, and they can be naturally applied to non-convex optimization; their performance, however, is limited by low efficiency. Gradient descent (GD) runs efficiently but only converges to a first-order stationary point, which may be a saddle point and thus arbitrarily bad. Some recent efforts have been put into combining EAs and GD, but previous works either utilized only a specific component of EAs or combined the two heuristically without theoretical guarantees. In this paper, we propose an evolutionary GD (EGD) algorithm that combines typical components of EAs, namely population and mutation, with GD. We prove that EGD converges to a second-order stationary point by escaping saddle points and is more efficient than previous algorithms. Empirical results on non-convex synthetic functions as well as reinforcement learning (RL) tasks also show its superiority.
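The following toy sketch shows the flavor of combining a population and mutation with GD (it is not the authors' EGD algorithm): four GD runs proceed in parallel, and when a member stalls at a critical point (e.g., the saddle at the origin of this test function), a Gaussian mutation is accepted only if it decreases the objective, so saddles are escaped while minima are preserved.

```python
# Toy population + mutation + gradient descent (assumptions, not EGD).
import numpy as np

def f(p):
    x, y = p
    return x**4 / 4 - x**2 / 2 + y**2 / 2   # saddle at (0,0), minima at (+-1,0)

def grad(p):
    x, y = p
    return np.array([x**3 - x, y])

rng = np.random.default_rng(0)
pop = [np.zeros(2) for _ in range(4)]        # start every member on the saddle
lr, sigma = 0.1, 0.2
for _ in range(200):
    for i, p in enumerate(pop):
        g = grad(p)
        if np.linalg.norm(g) < 1e-3:         # stalled at a critical point
            cand = p + sigma * rng.standard_normal(2)
            if f(cand) < f(p):               # accept descent only: escapes
                pop[i] = cand                # saddles, preserves minima
        else:
            pop[i] = p - lr * g              # plain gradient step
best = min(pop, key=f)
print("best point:", np.round(best, 3), "value %.4f" % f(best))
```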

