Universal Feedback Controllers and Inverse Optimality for Nonlinear Stochastic Systems

2019 ◽  
Vol 142 (2) ◽  
Author(s):  
Wassim M. Haddad ◽  
Xu Jin

Abstract In this paper, we develop a constructive finite-time stabilizing feedback control law for stochastic dynamical systems driven by Wiener processes based on the existence of a stochastic control Lyapunov function. In addition, we present necessary and sufficient conditions for the continuity of such controllers. Moreover, using stochastic control Lyapunov functions, we construct a universal inverse optimal feedback control law for nonlinear stochastic dynamical systems that possesses guaranteed gain and sector margins. An illustrative numerical example involving the control of thermoacoustic instabilities in combustion processes is presented to demonstrate the efficacy of the proposed framework.
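For orientation, the notion underlying the construction can be sketched as follows (the notation is illustrative and not taken from the paper): for a controlled Ito diffusion dx = [f(x) + G(x)u] dt + D(x) dw, a stochastic control Lyapunov function is a positive-definite V for which some admissible feedback u renders the infinitesimal generator negative definite,

    \mathcal{L}V(x) = \frac{\partial V}{\partial x}(x)\bigl(f(x) + G(x)u\bigr)
      + \tfrac{1}{2}\operatorname{tr}\!\Bigl(D^{\mathsf{T}}(x)\,\frac{\partial^{2} V}{\partial x^{2}}(x)\,D(x)\Bigr) < 0,
      \qquad x \neq 0,

with finite-time stabilization typically obtained from a strengthened bound of the form \mathcal{L}V(x) \le -c\,(V(x))^{\alpha} for some c > 0 and 0 < \alpha < 1.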

Author(s):  
Teymur Sadikhov ◽  
Wassim M. Haddad

The consideration of nonsmooth Lyapunov functions for proving the stability of discontinuous feedback systems is an important extension to classical stability theory, since there exist nonsmooth dynamical systems whose equilibria cannot be proved to be stable using standard continuously differentiable Lyapunov function theory. For dynamical systems with continuously differentiable flows, the concept of smooth control Lyapunov functions was developed by Artstein to show the existence of a feedback stabilizing controller. A constructive feedback control law based on a universal construction of smooth control Lyapunov functions was given by Sontag. Even though a stabilizing continuous feedback controller guarantees the existence of a smooth control Lyapunov function, many systems that possess smooth control Lyapunov functions do not necessarily admit a continuous stabilizing feedback controller. However, the existence of a control Lyapunov function allows for the design of a stabilizing feedback controller that admits Filippov and Krasovskii closed-loop system solutions. In this paper, we develop a constructive feedback control law for discontinuous dynamical systems based on the existence of a nonsmooth control Lyapunov function defined in the sense of Clarke generalized gradients and set-valued Lie derivatives.
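For reference, Sontag's universal construction for a control-affine system \dot{x} = f(x) + g(x)u with a smooth control Lyapunov function V reads (this is the classical smooth formula, not the nonsmooth generalization developed in the paper):

    u(x) =
      \begin{cases}
        -\dfrac{a(x) + \sqrt{a(x)^{2} + \lVert b(x)\rVert^{4}}}{\lVert b(x)\rVert^{2}}\, b(x), & b(x) \neq 0, \\[1ex]
        0, & b(x) = 0,
      \end{cases}
    \qquad a(x) = \nabla V(x)^{\mathsf{T}} f(x), \quad b(x) = g(x)^{\mathsf{T}} \nabla V(x).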


2020 ◽  
Vol 30 (11) ◽  
pp. 2050216
Author(s):  
Hui Wang ◽  
Athanasios Tsiairis ◽  
Jinqiao Duan

We investigate bifurcation phenomena for stochastic systems with multiplicative Gaussian noise by examining qualitative changes in mean phase portraits. Starting from the Fokker–Planck equation for the probability density function of the solution processes, we compute the mean orbits and mean equilibrium states. A change in the number or stability type of these mean equilibrium states as a parameter varies indicates a stochastic bifurcation. Specifically, we study stochastic bifurcation for three prototypical dynamical systems (i.e., saddle-node, transcritical, and pitchfork systems) under multiplicative Gaussian noise, and find several interesting phenomena that contrast with the corresponding deterministic counterparts.
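As a minimal illustration of the mean-orbit computation, the sketch below estimates E[X_t] by Euler–Maruyama Monte Carlo for the pitchfork normal form with multiplicative noise, dX = (rX - X^3) dt + sigma X dW; the model, parameter names, and values are assumptions for illustration and are not the paper's code.

    import numpy as np

    def mean_orbit(r, sigma, x0, T=10.0, dt=1e-3, n_paths=5000, seed=0):
        # Euler-Maruyama Monte Carlo estimate of the mean orbit E[X_t] for the
        # pitchfork normal form with multiplicative noise (illustrative model):
        #     dX = (r*X - X**3) dt + sigma * X dW
        rng = np.random.default_rng(seed)
        n_steps = int(T / dt)
        x = np.full(n_paths, x0, dtype=float)
        means = np.empty(n_steps + 1)
        means[0] = x0
        for k in range(n_steps):
            dw = rng.normal(0.0, np.sqrt(dt), size=n_paths)
            x = x + (r * x - x**3) * dt + sigma * x * dw
            means[k + 1] = x.mean()
        return means

    # Sweep the bifurcation parameter and inspect the long-time mean state.
    for r in (-0.5, 0.0, 0.5):
        print(r, mean_orbit(r, sigma=0.3, x0=1.0)[-1])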


2006 ◽  
Vol 2006 ◽  
pp. 1-16 ◽  
Author(s):  
Fouad Mesquine ◽  
Fernando Tadeo ◽  
Abdellah Benzaouia

This paper is devoted to the control of linear systems subject to constraints on the control and on its rate or increment, in the presence of additive bounded disturbances. Necessary and sufficient conditions under which the system evolution respects the rate or increment constraints are used to derive a stabilizing feedback control law. The control law respects both the constraints on the control and on its rate or increment, and is robust against the additive bounded disturbances. An application to a surface-mount robot, in which the Y-axis of the machine uses a typical ball-screw transmission driven by a DC motor to position circuit boards, is presented.
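The two constraint types treated here can be illustrated by the following sketch, which merely clips a desired control to amplitude and increment limits; it is an assumed illustration of the constraints themselves, not the stabilizing law derived in the paper.

    import numpy as np

    def constrain(u_desired, u_prev, u_max, du_max):
        # Enforce the amplitude and increment (rate) limits
        #     |u| <= u_max   and   |u - u_prev| <= du_max
        # on a desired control value (names and limits are illustrative).
        u = np.clip(u_desired, u_prev - du_max, u_prev + du_max)  # increment/rate limit
        return float(np.clip(u, -u_max, u_max))                   # amplitude limit

    # Example: a step command filtered through both limits.
    u_prev, history = 0.0, []
    for _ in range(10):
        u_prev = constrain(u_desired=1.0, u_prev=u_prev, u_max=0.8, du_max=0.2)
        history.append(u_prev)
    print(history)  # ramps at 0.2 per step, then saturates at 0.8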


1999 ◽  
Vol 121 (4) ◽  
pp. 594-598 ◽  
Author(s):  
V. Radisavljevic ◽  
H. Baruh

A feedback control law is developed for dynamical systems described by constrained generalized coordinates. For certain complex dynamical systems, it is more desirable to develop the mathematical model in terms of dependent generalized coordinates rather than a minimal set of degrees of freedom, which leads to differential-algebraic equations of motion. Research in the last few decades has led to several advances in the treatment and solution of differential-algebraic equations. We take advantage of these advances and bring the differential-algebraic equation, dependent-generalized-coordinate formulation into the control setting. A tracking feedback control law is designed based on a pointwise-optimal formulation. The stability of the pointwise-optimal control law is examined.
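For context, the constrained-coordinate models in question take the generic differential-algebraic form (the notation is illustrative, not the paper's):

    M(q)\,\ddot{q} + \Phi_{q}^{\mathsf{T}}(q)\,\lambda = Q(q, \dot{q}) + B\,u, \qquad \Phi(q) = 0,

where q are the dependent generalized coordinates, \Phi(q) = 0 the holonomic constraints, \Phi_{q} the constraint Jacobian, and \lambda the Lagrange multipliers.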


1975 ◽  
Vol 97 (2) ◽  
pp. 164-171 ◽  
Author(s):  
M. K. Özgören ◽ 
R. W. Longman ◽  
C. A. Cooper

The control of artificial in-stream aeration of polluted rivers with multiple waste effluent sources is treated. The optimal feedback control law for this distributed parameter system is determined by solving the partial differential equations along characteristic lines. In this process, the double-integral cost functional of the distributed parameter system is reduced to a single-integral cost. Because certain measurements are time-consuming, the feedback control law is obtained in the presence of observation delay in some, but not all, of the system variables. The open-loop optimal control is then found, showing explicitly the effect of changes in any of the effluent sources on the aeration strategy. It is shown that the optimal strategy for a distribution of sources can be written as an affine transformation of the optimal controls for sources of unit strength.
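The characteristic-line reduction can be illustrated on a generic advected water-quality variable c(x, t); the equation below is an assumed illustration, not the model used in the paper:

    \frac{\partial c}{\partial t} + v\,\frac{\partial c}{\partial x} = -k\,c + u(x, t),
    \qquad
    \frac{dx}{dt} = v \;\Longrightarrow\; \frac{dc}{dt}\Big|_{x(t) = x_{0} + v t} = -k\,c + u,

so the PDE becomes a family of ODEs along the characteristics x(t) = x_0 + v t, and a double-integral cost over (x, t) can be rewritten as single integrals along each characteristic.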


Author(s):  
Javad Sovizi ◽  
Suren Kumar ◽  
Venkat Krovi

Abstract We present a computationally efficient approach for the intra-operative update of the feedback control policy for a steerable needle in the presence of motion uncertainty. Solving the dynamic programming (DP) equations to obtain the optimal control policy is difficult or intractable for nonlinear problems such as steering a flexible needle in soft tissue. We use the Markov chain approximation method to replace the continuous (and controlled) process with a discrete, locally consistent counterpart. This sets the stage for a linear programming (LP) approach to the resulting DP problem that significantly reduces the computational demand. A concrete example of two-dimensional (2D) needle steering is considered to investigate the effectiveness of the LP method for both deterministic and stochastic systems. We compare the performance of the LP-based policy with the results obtained through a more computationally demanding algorithm, iterative policy space approximation. Finally, we investigate the reliability of the LP-based policy under motion and parametric uncertainties, as well as the effect of the insertion point/angle on the probability of success.
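As a sketch of the LP route to DP, the snippet below solves a generic discounted-cost, finite-state approximation by linear programming and extracts a greedy policy; the needle-steering discretization itself is problem specific and not reproduced here, so the transition tensor, cost matrix, and discount factor are assumed inputs.

    import numpy as np
    from scipy.optimize import linprog

    def lp_value_and_policy(P, c, gamma=0.95):
        # LP form of the dynamic-programming equations for a discounted-cost MDP:
        #     maximize  sum_s V(s)
        #     s.t.      V(s) <= c(s, a) + gamma * sum_s' P[s, a, s'] * V(s')  for all s, a.
        # P has shape (S, A, S); c has shape (S, A).
        S, A, _ = P.shape
        A_ub = np.zeros((S * A, S))
        b_ub = np.zeros(S * A)
        for s in range(S):
            for a in range(A):
                row = -gamma * P[s, a]
                row[s] += 1.0                      # V(s) - gamma * P V <= c(s, a)
                A_ub[s * A + a] = row
                b_ub[s * A + a] = c[s, a]
        res = linprog(-np.ones(S), A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * S)
        V = res.x
        # Greedy policy extracted from the resulting value function.
        Q = c + gamma * np.einsum("sat,t->sa", P, V)
        return V, Q.argmin(axis=1)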

