A Feasible Control Strategy for LQG Control Problem with Parameter and Structure Uncertainties

Author(s):  
Guo Xie ◽  
Dan Zhang ◽  
Xinhong Hei ◽  
Fucai Qian


2019 ◽  
Vol 19 (03) ◽  
pp. 1950019 ◽  
Author(s):  
R. C. Hu ◽  
X. F. Wang ◽  
X. D. Gu ◽  
R. H. Huan

In this paper, nonlinear stochastic optimal control of multi-degree-of-freedom (MDOF) partially observable linear systems subjected to combined harmonic and wide-band random excitations is investigated. Based on the separation principle, the control problem for the partially observable system is converted into a completely observable one. The dynamic programming equation for the completely observable control problem is then set up using the stochastic averaging method and the stochastic dynamic programming principle, from which the nonlinear optimal control law is derived. To illustrate the feasibility and efficiency of the proposed control strategy, the responses of the uncontrolled and optimally controlled systems are obtained by solving the associated Fokker–Planck–Kolmogorov (FPK) equation. Numerical results show that the proposed control strategy can dramatically reduce the response of stochastic systems subjected to combined harmonic and wide-band random excitations.
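The averaging and dynamic-programming machinery is specific to the paper, but its headline claim — that feedback control sharply reduces the stationary response under combined harmonic and wide-band excitation — can be checked with a crude sketch: an Euler–Maruyama simulation of a single-DOF oscillator, with a plain velocity feedback standing in for the derived optimal law. All parameter values here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def simulate(gain, steps=100_000, dt=2e-3, seed=0):
    """Euler-Maruyama simulation of the SDOF oscillator
        x'' + 2*zeta*x' + x = A*cos(w*t) + xi(t) + u,
    where xi(t) is Gaussian white noise of intensity 2*D and
    u = -gain * x' is a simple velocity feedback (a stand-in for
    the paper's nonlinear optimal control law)."""
    rng = np.random.default_rng(seed)
    zeta, A, w, D = 0.02, 0.3, 1.0, 0.1      # illustrative parameters
    x = v = 0.0
    xs = np.empty(steps)
    noise = np.sqrt(2.0 * D * dt) * rng.standard_normal(steps)
    for k in range(steps):
        a = -2.0 * zeta * v - x + A * np.cos(w * k * dt) - gain * v
        v += a * dt + noise[k]
        x += v * dt
        xs[k] = x
    # discard the transient, report steady-state RMS displacement
    return float(np.sqrt(np.mean(xs[steps // 2:] ** 2)))

rms_unc = simulate(0.0)   # uncontrolled
rms_con = simulate(0.5)   # with velocity feedback
print(rms_unc, rms_con)   # feedback sharply reduces the resonant response
```

Because the harmonic forcing sits at resonance and the nominal damping is light, even this naive feedback cuts the RMS response by a large factor; the paper's FPK-based comparison makes the same point for the genuinely optimal law.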


Robotica ◽  
2021 ◽  
pp. 1-20
Author(s):  
Shubo Liu ◽  
Guoquan Liu ◽  
Shengbiao Wu

This study is concerned with the tracking control problem for nonlinear uncertain robotic systems in the presence of unknown actuator nonlinearities. A novel adaptive sliding controller is designed based on a robust disturbance observer without any prior knowledge of actuator nonlinearities and system dynamics. The proposed control strategy can guarantee that the tracking error eventually converges to an arbitrarily small neighborhood of zero. Simulation results are included to demonstrate the effectiveness and superiority of the proposed strategy.


2019 ◽  
Vol 2019 ◽  
pp. 1-19
Author(s):  
Xingge Li ◽  
Gang Li

This article investigates a novel fuzzy-approximation-based nonaffine control strategy for a flexible air-breathing hypersonic vehicle (FHV). First, the nonaffine model is decomposed into an altitude subsystem and a velocity subsystem, and the nonaffine dynamics of each subsystem are processed using low-pass filters. Fuzzy approximators are used to approximate the total uncertainties (unknown functions and uncertainties) in each subsystem, and a norm estimation approach is introduced to reduce the computational complexity of the algorithm. To address actuator saturation, a saturation auxiliary system is designed to transform the original control problem with input constraints into a new control problem without input constraints. Finally, the superiority of the proposed method is verified by simulation.


2015 ◽  
Vol 47 (1) ◽  
pp. 106-127 ◽  
Author(s):  
François Dufour ◽  
Alexei B. Piunovskiy

In this paper our objective is to study continuous-time Markov decision processes on a general Borel state space with both impulsive and continuous controls for the infinite-horizon discounted cost. The continuous-time controlled process is shown to be nonexplosive under appropriate hypotheses. The Bellman equation associated with this control problem is studied, and sufficient conditions ensuring the existence and uniqueness of a bounded measurable solution to this optimality equation are provided. Moreover, it is shown that the value function of the optimization problem under consideration satisfies this optimality equation. Sufficient conditions are also presented to ensure, on the one hand, the existence of an optimal control strategy and, on the other hand, the existence of an ε-optimal control strategy. A decomposition of the state space into two disjoint subsets is exhibited where, roughly speaking, one should apply a gradual or an impulsive action, respectively, to obtain an optimal or ε-optimal strategy. An interesting consequence of these results is that the set of strategies allowing interventions at time t = 0 and only immediately after natural jumps is sufficient for the control problem under consideration.
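The paper works on Borel state spaces in continuous time, but the role of the Bellman equation is easiest to see in its finite, discrete-time analogue: for a discounted cost the Bellman operator is a contraction, so value iteration converges to its unique bounded solution, from which an optimal policy is read off. The toy MDP below is purely illustrative.

```python
import numpy as np

def value_iteration(P, c, beta, tol=1e-10):
    """Solve the discounted Bellman equation V = min_a (c_a + beta * P_a V)
    for a finite MDP. P[a] is the transition matrix and c[a] the per-step
    cost vector of action a; beta in (0, 1) is the discount factor.
    The Bellman operator is a beta-contraction, so the bounded solution
    is unique and iteration converges geometrically."""
    V = np.zeros(P[0].shape[0])
    while True:
        Q = np.array([c[a] + beta * P[a] @ V for a in range(len(P))])
        V_new = Q.min(axis=0)
        if np.max(np.abs(V_new - V)) < tol:
            return V_new, Q.argmin(axis=0)   # value function, optimal policy
        V = V_new

# Toy 2-state example: state 0 is expensive, state 1 is cheap;
# action 1 costs more per step but mostly switches states.
P = [np.array([[0.9, 0.1], [0.1, 0.9]]),    # action 0: mostly stay
     np.array([[0.1, 0.9], [0.9, 0.1]])]    # action 1: mostly switch
c = [np.array([2.0, 0.0]), np.array([2.5, 0.5])]
V, policy = value_iteration(P, c, beta=0.9)
print(V, policy)   # optimal policy: pay to leave state 0, stay in state 1
```

The optimal policy pays the extra switching cost only in the expensive state — a discrete echo of the paper's state-space decomposition into a region where gradual actions suffice and a region where an (impulsive) intervention is worthwhile.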


1987 ◽  
Vol 109 (3) ◽  
pp. 224-231 ◽  
Author(s):  
H. Hemami ◽  
C. Wongchaisuwat ◽  
J. L. Brinker

A major control problem in robotics is the simultaneous and independent control of constrained trajectories and of the forces of constraint. The trajectories and the forces are related through the mechanical structure of the system, so the task of the controller is to influence this mechanical coupling and allow separate control of the trajectories and the forces. A feasible control strategy is relegation of control to the state or to the input; relegation by inputs means assigning the control of trajectories and of forces to independent groups of inputs. In this paper, exact and approximate input relegation strategies are investigated. The effectiveness of the input relegation strategy is tested by digital computer simulation of a three-link planar robot in a periodic rubbing maneuver.


2020 ◽  
pp. 107754632092989
Author(s):  
Xudong Gu ◽  
Zichen Deng ◽  
Rongchun Hu

An optimal bounded control strategy for strongly nonlinear vibro-impact systems under stochastic excitation with actuator saturation is proposed. First, the impact effect is incorporated into an equivalent equation by a nonsmooth transformation. Under the assumption of light damping and weak random perturbation, the system energy is a slowly varying process, and the stochastic averaging of envelope for strongly nonlinear systems yields the partially averaged Itô stochastic differential equation for the system energy. The optimal control problem is thereby transformed from one for the state variables into an equivalent one for the system energy, which reduces the dimension of the problem. Then, based on the stochastic maximum principle, an adjoint equation for the adjoint variable and the maximum condition of the partially averaged control problem are established. For infinite-time-interval ergodic control, the adjoint variable is assumed to be a stationary process and the adjoint equation can be further simplified. Finally, the probability density function of the system energy and other statistics of the optimally controlled system are derived by solving the associated Fokker–Planck–Kolmogorov equation. For comparison, bang–bang control is also investigated, and the results are compared to show the advantages of the developed control strategy.


2013 ◽  
Vol 2013 ◽  
pp. 1-13 ◽  
Author(s):  
Lin-Fei Nie ◽  
Zhi-Dong Teng ◽  
Juan J. Nieto ◽  
Il Hyo Jung

The dynamic behavior of a two-language competitive model is analyzed systematically in this paper. By linearization and the Bendixson–Dulac theorem for dynamical systems, sufficient conditions are presented for the global asymptotic stability of the trivial equilibria and for the existence and stability of the positive equilibrium of this model. Next, in order to protect the endangered language, an optimal control problem for this model is explored; necessary conditions for solving it are derived, and numerical simulations are presented using a fourth-order Runge–Kutta method. Finally, the language competition model is extended to assess the impact of a state-dependent pulse control strategy. Using the Poincaré map, differential inequalities, and qualitative analysis, we prove the existence and stability of a positive order-1 periodic solution for this control model. Numerical simulations are carried out to illustrate the main results and the feasibility of the state-dependent impulsive control strategy.
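The abstract's numerical workhorse is a fourth-order Runge–Kutta integration of the competition dynamics. The sketch below shows that pipeline on a hypothetical Lotka–Volterra-style two-language model (the paper's exact equations are not reproduced here): with asymmetric competition coefficients, one language is driven to extinction absent intervention, which is exactly the scenario the control strategy is meant to prevent.

```python
import numpy as np

def rk4_step(f, y, t, h):
    """One classical fourth-order Runge-Kutta step for y' = f(t, y)."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def competition(t, y, r1=1.0, r2=0.8, a12=1.2, a21=0.5):
    """Hypothetical two-language competition model: x1, x2 are speaker
    fractions; a12 > 1 > a21 makes language 1 the competitively weaker one."""
    x1, x2 = y
    return np.array([r1 * x1 * (1 - x1 - a12 * x2),
                     r2 * x2 * (1 - x2 - a21 * x1)])

y = np.array([0.5, 0.5])          # equal initial speaker fractions
h, T = 0.01, 200.0
for k in range(int(T / h)):
    y = rk4_step(competition, y, k * h, h)
print(y)   # language 1 dies out without intervention
```

In this parameter regime there is no positive interior equilibrium, so the system converges to the boundary equilibrium where only language 2 survives; a pulse control (e.g. periodically boosting the endangered population) would be applied precisely to break this outcome.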

