Multivariable Direct Adaptive Control of Thermal Mixing Processes

1985, Vol 107 (4), pp. 278-283
Author(s): Qiusheng Zhang, Masayoshi Tomizuka

Multivariable direct adaptive control is tested on a nonlinear thermal mixing process and compared with state-space-based nonadaptive controllers. The linear quadratic optimal control approach is used to design two nonadaptive controllers: one without integral action (ordinary LQ) and one with integral action (LQI). In the experiments, the operating point is changed over a wide region. The adaptive controller is shown to perform most consistently under the tested conditions.
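
As a point of reference, here is a minimal sketch of the two nonadaptive designs this abstract compares: an ordinary LQ regulator, and an LQI design that augments the state with the integral of the output error. The plant matrices below are illustrative placeholders, not the thermal mixing model from the paper.

```python
# Hedged sketch: LQ vs. LQI state-feedback design for a generic
# two-input, two-output linear plant (all matrices are placeholders).
import numpy as np
from scipy.linalg import solve_continuous_are

A = np.array([[-1.0, 0.2], [0.1, -0.5]])   # hypothetical plant dynamics
B = np.eye(2)                               # two actuators (e.g., hot/cold valves)
C = np.eye(2)                               # measured outputs

Q, R = np.eye(2), 0.1 * np.eye(2)

# Ordinary LQ: u = -K x, no integral action.
P = solve_continuous_are(A, B, Q, R)
K = np.linalg.solve(R, B.T @ P)

# LQI: augment the state with the integral of the output error,
# xa = [x; ∫(y - r) dt], so the feedback gains include integral action.
Aa = np.block([[A, np.zeros((2, 2))], [C, np.zeros((2, 2))]])
Ba = np.vstack([B, np.zeros((2, 2))])
Qa = np.block([[Q, np.zeros((2, 2))], [np.zeros((2, 2)), np.eye(2)]])

Pa = solve_continuous_are(Aa, Ba, Qa, R)
Ka = np.linalg.solve(R, Ba.T @ Pa)          # Ka = [Kx  Ki]; u = -Kx x - Ki ∫e
print("LQ gain:\n", K, "\nLQI gain:\n", Ka)
```

The integral action is what lets the LQI controller remove steady-state offset after an operating-point change, which is why the abstract distinguishes the two designs.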

Author(s): Mark Balas, Susan A. Frost

Flexible structures containing a large number of modes can benefit from adaptive control techniques, which are well suited to applications with unknown modeling parameters and poorly known operating conditions. In this paper, we focus on a direct adaptive control approach that has been extended to adaptively reject persistent disturbances. This theory is extended here to accommodate troublesome modal subsystems of a plant that might inhibit the adaptive controller. In some cases the plant does not satisfy the adaptive controller's requirement of Almost Strict Positive Realness; instead, a modal subsystem may inhibit this property. We modify the adaptive controller with a Residual Mode Filter (RMF) to compensate for the troublesome modal subsystem, the Q modes. This paper also addresses leakage, or propagation, of the disturbances into the Q modes. We apply these theoretical results to a flexible structure example to illustrate the behavior with and without the residual mode filter.
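
As a rough illustration of the RMF idea described above, the sketch below runs an internal model of a troublesome Q-mode subsystem in parallel with the plant and subtracts its predicted contribution from the measured output before the adaptive law sees it. The matrices and the simple gain-adaptation law are assumptions for illustration, not the paper's exact equations.

```python
# Hedged sketch of a residual mode filter (RMF): an internal model of the
# Q-mode subsystem (AQ, BQ, CQ -- assumed values) is driven by the same
# control input, and its predicted output is removed from the plant output.
import numpy as np

AQ = np.array([[0.0, 1.0], [-4.0, -0.02]])   # lightly damped Q mode (assumed)
BQ = np.array([[0.0], [1.0]])
CQ = np.array([[1.0, 0.0]])

def rmf_step(zQ, u, y, dt=1e-3):
    """Advance the Q-mode model one Euler step; return its state and the
    compensated output y_tilde = y - CQ zQ fed to the adaptive law."""
    zQ = zQ + dt * (AQ @ zQ + BQ @ u)
    return zQ, y - CQ @ zQ

def adaptive_step(G, y_tilde, gamma=10.0, dt=1e-3):
    """One step of a simple direct adaptive output-feedback law
    (a gain update of the form G_dot = -gamma * y y^T is one common choice)."""
    G = G - dt * gamma * np.outer(y_tilde, y_tilde)
    return G, G @ y_tilde                    # control u = G y_tilde

# One illustrative update cycle with fake output samples:
zQ, G, u = np.zeros(2), np.zeros((1, 1)), np.zeros(1)
for y in (np.array([0.20]), np.array([0.15])):
    zQ, y_tilde = rmf_step(zQ, u, y)
    G, u = adaptive_step(G, y_tilde)
```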


Author(s): Mark J. Balas

The goal of this paper is to investigate the use of a very simple direct adaptive controller in the guidance of a large, flexible launch vehicle. The adaptive controller, requiring no on-line information about the plant other than sensor outputs, would be a more robust candidate than a model-based fixed-gain linear controller in the presence of unmodeled plant dynamics. NASA’s seven-state FRACTAL academic model for ARES I-X was employed as an example launch vehicle on which to develop the controller. To better understand the difficult dynamic issues, we started with a simplified model that captures the inherent instability of the plant and the nonminimum-phase nature of the dynamics: an inverted pendulum with an attachable slosh tank. We formulated controllers for this simplified plant with slosh dynamics using control algorithms developed only on a reduced-order model consisting of the rigid-body dynamics without slosh. The controllers must be designed to reject three different persistent input disturbances: pulse, step, and sine. We assumed that only position feedback was available and that rates would have to be estimated. For comparison, a fixed-gain linear controller was developed using the well-known Linear Quadratic Gaussian methodology, employing state estimation to obtain rate estimates. For a stable adaptive controller, we used the direct adaptive control theory developed by Balas et al., which requires CB > 0 and a minimum-phase open-loop transfer function. We employed a new transmission zero selection method to develop a blended output shaping matrix that satisfies these conditions robustly. We used approximate differentiation filters to obtain rates for the adaptive controller and, again for comparison, redesigned the LQG controller to use the same blended output matrix and filters. Following the work on the pendulum, the same method was applied to develop an adaptive controller for the FRACTAL launch vehicle model. The adaptive controller stabilizes a rigid-body version of FRACTAL over a very long timeline, though it exceeds all reasonable state and output limits.
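
The two structural conditions named in this abstract, CB > 0 and a minimum-phase open-loop transfer function, can be checked numerically for a candidate blended output matrix. The sketch below does this for a stand-in inverted-pendulum-like model; the matrices are assumptions, not the pendulum-slosh or FRACTAL models.

```python
# Hedged sketch: checking CB > 0 and the invariant zeros of a candidate
# blended output matrix C. Invariant zeros are the finite generalized
# eigenvalues of the Rosenbrock pencil [[A - sI, B], [C, D]].
import numpy as np
from scipy.linalg import eig

def invariant_zeros(A, B, C, D=None):
    n, m, p = A.shape[0], B.shape[1], C.shape[0]
    D = np.zeros((p, m)) if D is None else D
    M = np.block([[A, B], [C, D]])
    N = np.block([[np.eye(n), np.zeros((n, m))],
                  [np.zeros((p, n)), np.zeros((p, m))]])
    w = eig(M, N, right=False)               # generalized eigenvalues
    return w[np.isfinite(w)]                 # drop the infinite ones

# Stand-in unstable rigid-body model (hypothetical numbers).
A = np.array([[0.0, 1.0], [9.8, 0.0]])       # inverted-pendulum-like dynamics
B = np.array([[0.0], [1.0]])
C = np.array([[1.0, 0.4]])                   # blended output: angle + 0.4*rate

print("CB =", C @ B)                          # must be positive
print("zeros:", invariant_zeros(A, B, C))     # must lie in the open left half-plane
```

Here the blend places the single transmission zero at s = -2.5, so both conditions hold; the paper's transmission zero selection method chooses such a blend systematically for the full model.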


Author(s): Andrea Pesare, Michele Palladino, Maurizio Falcone

In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge an agent has of the current system is represented by a probability distribution π on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to account for the increased experience the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the “average” linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, where prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
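
A loose numerical illustration of the convergence claim (not the paper's construction): if the mean of π drifts toward the true system matrix as experience n grows, the LQ gain designed from the belief approaches the gain for the true dynamics.

```python
# Hedged illustration: the gain designed from a belief about the dynamics
# converges to the true LQ gain as the belief concentrates on the truth.
import numpy as np
from scipy.linalg import solve_continuous_are

A_true = np.array([[0.0, 1.0], [-2.0, -0.3]])   # actual (unknown) dynamics
B = np.array([[0.0], [1.0]])
Q, R = np.eye(2), np.array([[1.0]])

def lqr_gain(A):
    P = solve_continuous_are(A, B, Q, R)
    return np.linalg.solve(R, B.T @ P)

K_true = lqr_gain(A_true)
A0 = A_true + np.array([[0.5, -0.2], [0.3, 0.4]])   # initial (wrong) belief mean

for n in (1, 2, 5, 20, 100):                 # growing experience
    A_belief = A_true + (A0 - A_true) / n    # belief mean -> A_true
    err = np.linalg.norm(lqr_gain(A_belief) - K_true)
    print(f"n={n:3d}  ||K_n - K_true|| = {err:.4f}")
```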


Axioms, 2021, Vol 10 (3), pp. 137
Author(s): Vladimir Turetsky

Two inverse ill-posed problems are considered. The first is the restoration of an input to a linear system. The second is the restoration of time-dependent coefficients of a linear ordinary differential equation. Both problems are reformulated as auxiliary optimal control problems with a regularizing cost functional. For the coefficient restoration problem, two control models are proposed. In the first model, the control coefficients are approximated using the output and estimates of its derivatives. This model yields an approximating linear-quadratic optimal control problem with a known explicit solution. The derivatives are also obtained as auxiliary linear-quadratic tracking controls. The second control model is exact and leads to a bilinear-quadratic optimal control problem. The latter is tackled in two ways: by an iterative procedure and by feedback linearization. Simulation results show that the bilinear model provides more accurate coefficient estimates.
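
A minimal sketch of the first problem in its discretized, regularized form: the unknown input is restored from a noisy output by Tikhonov-regularized least squares, the finite-dimensional counterpart of an optimal control problem with a regularizing cost functional. The scalar plant and weights are assumptions for illustration.

```python
# Hedged sketch: input restoration for a scalar plant x' = a*x + u by
# minimizing ||y - G u||^2 + alpha * ||u||^2 on a time grid.
import numpy as np

dt, n = 0.01, 400
t = dt * np.arange(n)
a = -1.0                                     # assumed plant pole

# Causal convolution matrix: y(t_i) ≈ sum_j exp(a (t_i - t_j)) u(t_j) dt
G = np.tril(np.exp(a * (t[:, None] - t[None, :]))) * dt

rng = np.random.default_rng(1)
u_true = np.sin(2 * np.pi * t)               # input to recover
y = G @ u_true + 0.001 * rng.standard_normal(n)   # noisy measured output

alpha = 1e-4                                 # regularization weight
u_hat = np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ y)
print("relative error:",
      np.linalg.norm(u_hat - u_true) / np.linalg.norm(u_true))
```

Without the αI term the normal equations are severely ill-conditioned, which is exactly the ill-posedness the regularizing functional is there to control.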


Author(s): Nacira Agram, Bernt Øksendal

The classical maximum principle for optimal stochastic control states that if a control û is optimal, then the corresponding Hamiltonian has a maximum at u = û. The first proofs of this result assumed that the control did not enter the diffusion coefficient, and that there were no jumps in the system. Subsequently, Shige Peng discovered (still assuming no jumps) that the diffusion coefficient could also be allowed to depend on the control, provided that the corresponding adjoint backward stochastic differential equation (BSDE) for the first-order derivative was extended to include an extra BSDE for the second-order derivatives. In this paper, we present an alternative approach based on Hida–Malliavin calculus and white noise theory. This enables us to handle the general case with jumps, allowing both the diffusion coefficient and the jump coefficient to depend on the control, without the extra BSDE for second-order derivatives. The result is illustrated by an example of a constrained linear-quadratic optimal control.
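
For reference, one standard form of the Hamiltonian and the first-order adjoint BSDE for a controlled jump diffusion (a sketch; notation and sign conventions vary across the literature):

```latex
% State: dX(t) = b(t,X,u)\,dt + \sigma(t,X,u)\,dB(t)
%              + \int_{\mathbb{R}} \gamma(t,X,u,\zeta)\,\tilde N(dt,d\zeta)
\[
H(t,x,u,p,q,r) = f(t,x,u) + b(t,x,u)\,p + \sigma(t,x,u)\,q
  + \int_{\mathbb{R}} \gamma(t,x,u,\zeta)\, r(t,\zeta)\,\nu(d\zeta),
\]
\[
dp(t) = -\frac{\partial H}{\partial x}(t)\,dt + q(t)\,dB(t)
  + \int_{\mathbb{R}} r(t,\zeta)\,\tilde N(dt,d\zeta), \qquad p(T) = g'(X(T)),
\]
% and the maximum condition: if \hat u is optimal, then
\[
H\bigl(t,\hat X(t),\hat u(t),\hat p(t),\hat q(t),\hat r(t,\cdot)\bigr)
  = \max_{v}\, H\bigl(t,\hat X(t),v,\hat p(t),\hat q(t),\hat r(t,\cdot)\bigr).
\]
```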

