Empirically comparing the finite-time performance of simulation-optimization algorithms

Author(s):  
Naijia Anna Dong ◽  
David J. Eckman ◽  
Xueqi Zhao ◽  
Shane G. Henderson ◽  
Matthias Poloczek

2014 ◽
Vol 70 ◽  
pp. 60-68 ◽  
Author(s):  
X. Cai ◽  
Y. Tan ◽  
S.Y. Li ◽  
I. Mareels

Author(s):  
Adrian S. Lewis ◽  
Calvin Wylie

Diverse optimization algorithms correctly identify, in finite time, intrinsic constraints that must be active at optimality. Analogous behavior extends beyond optimization to systems involving partly smooth operators, and in particular to variational inequalities over partly smooth sets. As in classical nonlinear programming, such active‐set structure underlies the design of accelerated local algorithms of Newton type. We formalize this idea in broad generality as a simple linearization scheme for two intersecting manifolds.
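As a rough illustration (a simplified sketch of our own, not necessarily the authors' exact scheme), suppose the two manifolds are described locally by smooth equations M = {x : F(x) = 0} and N = {x : G(x) = 0} and intersect transversally at a solution. A Newton-type step from the current iterate x_k replaces each manifold by its linearization and moves to a nearest point satisfying both:
\[
\text{find the least-norm } d_k \text{ with } \; F(x_k) + \nabla F(x_k)\, d_k = 0 \ \text{ and } \ G(x_k) + \nabla G(x_k)\, d_k = 0, \qquad x_{k+1} = x_k + d_k .
\]
Under transversality one would expect such iterates to converge locally at a fast (superlinear) rate to a point of the intersection; the paper develops the precise scheme and its guarantees in the partly smooth setting.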


Author(s):  
Dongwook Shin ◽  
Mark Broadie ◽  
Assaf Zeevi

Given a finite number of stochastic systems, our goal is to dynamically allocate a finite sampling budget so as to maximize the probability of selecting the “best” system. Each system is characterized by the probability distribution governing its sample observations; these distributions are unknown and assumed only to belong to a broad family that need not admit any parametric representation. The best system is defined as the one with the highest quantile value. Because the objective of maximizing the probability of selecting this best system is not analytically tractable, we work instead with the rate function for the probability of error, obtained from large deviations theory. Our point of departure is an algorithm that naively combines sequential estimation and myopic optimization. This algorithm is shown to be asymptotically optimal; however, it exhibits poor finite-time performance and does not lend itself to implementation in settings with a large number of systems. To address this, we propose practically implementable variants that retain the asymptotic performance of the former while dramatically improving its finite-time performance.
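Below is a minimal, self-contained Python sketch of the loop structure of such a naive "sequential estimation + myopic allocation" procedure. The function name, the warm-up size n0, and the standardized-gap score used to pick the next system are all our own illustrative choices; in particular, the gap score stands in for the large-deviations rate function described above.

import numpy as np

def naive_quantile_selection(systems, budget, tau=0.9, n0=10):
    # `systems` is a list of zero-argument callables, each returning one random
    # observation from the corresponding system (its distribution is unknown).
    k = len(systems)
    samples = [[draw() for _ in range(n0)] for draw in systems]          # warm-up stage
    for _ in range(budget - k * n0):
        q = np.array([np.quantile(s, tau) for s in samples])             # empirical tau-quantiles
        se = np.array([np.std(s, ddof=1) / np.sqrt(len(s)) for s in samples])  # crude precision proxy
        leader = int(np.argmax(q))
        gap = (q[leader] - q) / np.maximum(se + se[leader], 1e-12)       # small gap = easily confused with leader
        gap[leader] = np.partition(gap, 1)[1]                            # let the leader compete for samples too
        nxt = int(np.argmin(gap))                                        # myopic choice: most confusable system
        samples[nxt].append(systems[nxt]())
    return int(np.argmax([np.quantile(s, tau) for s in samples]))        # index of the estimated best system

# Toy usage with three synthetic systems; the third has the heaviest upper tail.
rng = np.random.default_rng(0)
systems = [lambda: rng.normal(0.0, 1.0), lambda: rng.normal(0.3, 1.0), lambda: rng.normal(0.0, 2.0)]
print(naive_quantile_selection(systems, budget=900))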


2021 ◽  
Author(s):  
Min Wang ◽  
Lixue Wang

Abstract: This paper studies the problem of finite-time performance-guaranteed event-triggered (ET) adaptive neural tracking control for strict-feedback nonlinear systems with unknown control direction. A novel finite-time performance function is first constructed to describe the prescribed tracking performance, and a new lemma is given to establish the differentiability and boundedness of this performance function, which is important for verifying closed-loop stability. Furthermore, with the help of an error transformation technique, the original constrained tracking error is transformed into an equivalent unconstrained one. By utilizing a first-order sliding mode differentiator, the "explosion of complexity" issue caused by the backstepping design is addressed. Subsequently, an adaptive update law is given to co-design the controller and the ET mechanism in combination with a Nussbaum-type function, thereby handling both the measurement error introduced by the ET mechanism and the difficulty that the unknown control direction poses for controller design. The presented event-triggered control scheme not only guarantees the prescribed tracking performance but also alleviates the communication burden. Finally, numerical and practical examples are provided to demonstrate the validity of the proposed control strategy.
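One common construction from the prescribed-performance literature (an illustrative assumption on our part, not necessarily the exact function used in the paper) takes the finite-time performance function and error transformation to be
\[
\rho(t) =
\begin{cases}
(\rho_0 - \rho_T)\left(\dfrac{T - t}{T}\right)^{k} + \rho_T, & 0 \le t < T,\\
\rho_T, & t \ge T,
\end{cases}
\qquad
\varepsilon(t) = \frac{1}{2}\,\ln\frac{\rho(t) + e(t)}{\rho(t) - e(t)},
\]
with \rho_0 > \rho_T > 0, settling time T > 0, and exponent k > 1 so that \rho is continuously differentiable at t = T. Keeping the transformed error \varepsilon(t) bounded then forces the tracking error to satisfy -\rho(t) < e(t) < \rho(t), i.e., the prescribed finite-time bound.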

