Optimal control for stochastic linear quadratic singular neuro Takagi–Sugeno fuzzy system with singular cost using genetic programming

2014 ◽  
Vol 24 ◽  
pp. 1136-1144 ◽  
Author(s):  
N. Kumaresan ◽  
Kuru Ratnavelu

Filomat ◽  
2012 ◽  
Vol 26 (3) ◽  
pp. 415-426 ◽  
Author(s):  
N. Kumaresan

In this paper, optimal control for a stochastic singular integro-differential Takagi–Sugeno (T–S) fuzzy system with quadratic performance is obtained using ant colony programming (ACP). To obtain the optimal control, the solution of the matrix Riccati differential equation (MRDE) is computed by solving the corresponding differential algebraic equation (DAE) with a novel, nontraditional ACP approach. The solution obtained by this method is equal or very close to the exact solution of the problem. An illustrative numerical example is presented to demonstrate the proposed method.
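As a point of reference for the MRDE mentioned above, a conventional numerical baseline (not the paper's ACP method; the scalar plant and all values below are illustrative assumptions) integrates the Riccati equation backward in time from its terminal condition:

```python
import math

# Hypothetical scalar plant dx = (a*x + b*u) dt with quadratic cost
# integral of (q*x^2 + r*u^2) dt; values are illustrative assumptions.
a, b, q, r = -1.0, 1.0, 1.0, 1.0
T, dt = 10.0, 0.001
p = 0.0  # terminal condition P(T) = 0

# Integrate the Riccati differential equation backward in time:
#   -dp/dt = 2*a*p + q - (b**2 / r) * p**2
for _ in range(int(T / dt)):
    p += dt * (2.0 * a * p + q - (b ** 2 / r) * p ** 2)

# Optimal feedback gain at t = 0: u = -k * x
k = b * p / r
print(p, k)
```

For this choice of coefficients the backward integration settles at the positive root of the algebraic Riccati equation, p = sqrt(2) - 1; a population-based solver such as ACP would be judged against this kind of reference solution.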


Author(s):  
Van-Nam Giap ◽  
Shyh-Chour Huang ◽  
Quang D Nguyen ◽  
Te-Jen Su

This paper presents a robust control methodology based on a disturbance observer and optimal state feedback for the Takagi–Sugeno fuzzy system. Firstly, the nonlinear system was decomposed by applying the sector nonlinearity method to obtain inner linear subsystems and outer fuzzy membership functions, which guarantees the conversion without loss of generality of the system's characteristics. Secondly, an exponentially convergent disturbance observer was constructed for the system under the assumption that the system states are temporarily bounded. Thirdly, a state observer was built by pole placement via linear quadratic regulation optimization, placing the poles of the state-estimation error dynamics in the stable region. Finally, simulation examples were given to show that the proposed controller is effective for the Takagi–Sugeno fuzzy system. The obtained results show that the disturbance is largely rejected, the state-estimation errors are small, and the output signal tracks the input signal precisely.
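The observer-plus-feedback structure described above can be sketched on an assumed scalar two-rule T–S example (membership functions, gains, and the constant matched disturbance are illustrative choices, not the paper's system):

```python
# Two-rule T-S model dx = (h1*a1 + h2*a2)*x + b*(u + d), with a
# reduced-order disturbance observer d_hat = z + L*x and state
# feedback plus compensation u = -k*x - d_hat. All values assumed.
a1, a2, b = -1.0, -2.0, 1.0
k, L = 2.0, 5.0          # feedback and observer gains (assumed)
d = 0.5                  # constant matched disturbance
x, z = 1.0, 0.0
dt = 0.001

for _ in range(10_000):  # simulate 10 s with Euler steps
    # Triangular membership functions over |x| <= 1 (illustrative)
    h1 = max(0.0, 1.0 - abs(x))
    h2 = 1.0 - h1
    a = h1 * a1 + h2 * a2          # blended linear subsystem
    d_hat = z + L * x
    u = -k * x - d_hat             # feedback + disturbance compensation
    dx = a * x + b * (u + d)
    # Observer internal state chosen so that d_hat' = L*b*(d - d_hat),
    # i.e. the estimate converges exponentially to the true disturbance.
    dz = -L * (a * x + b * u + b * d_hat)
    x += dt * dx
    z += dt * dz

print(x, z + L * x)
```

With these values the estimate d_hat converges to d at rate L*b, after which the compensated closed loop drives x to the origin, mirroring the qualitative claims of the abstract.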


2020 ◽  
Vol 26 ◽  
pp. 41
Author(s):  
Tianxiao Wang

This article is concerned with linear quadratic optimal control problems for mean-field stochastic differential equations (MF-SDEs) with deterministic coefficients. To treat the time inconsistency of the optimal control problems, linear closed-loop equilibrium strategies are introduced and characterized by a variational approach. The developed methodology drops the delicate convergence procedures in Yong [Trans. Amer. Math. Soc. 369 (2017) 5467–5523]. When the MF-SDE reduces to an SDE, our Riccati system coincides with the analogue in Yong; however, the two systems are in general different from each other due to the conditional mean-field terms in the MF-SDE. Eventually, comparisons with pre-committed optimal strategies and open-loop equilibrium strategies are given in detail.
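For context, a generic mean-field LQ problem with deterministic coefficients takes the following form (notation assumed; unconditional expectations are shown, whereas the article works with conditional mean-field terms, so its exact formulation differs):

```latex
\begin{aligned}
dX(t) &= \big[A X(t) + \bar{A}\,\mathbb{E}[X(t)] + B u(t) + \bar{B}\,\mathbb{E}[u(t)]\big]\,dt \\
      &\quad + \big[C X(t) + \bar{C}\,\mathbb{E}[X(t)] + D u(t) + \bar{D}\,\mathbb{E}[u(t)]\big]\,dW(t),
      \qquad X(0) = x_0, \\
J(u) &= \mathbb{E}\int_0^T \Big(\langle Q X, X\rangle
        + \langle \bar{Q}\,\mathbb{E}[X], \mathbb{E}[X]\rangle
        + \langle R u, u\rangle
        + \langle \bar{R}\,\mathbb{E}[u], \mathbb{E}[u]\rangle\Big)\,dt \\
      &\quad + \mathbb{E}\,\langle G X(T), X(T)\rangle
        + \langle \bar{G}\,\mathbb{E}[X(T)], \mathbb{E}[X(T)]\rangle .
\end{aligned}
```

The presence of the $\mathbb{E}[X]$ and $\mathbb{E}[u]$ terms in both the dynamics and the cost is what makes the problem time-inconsistent and motivates the equilibrium (rather than pre-committed) notion of optimality.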


1996 ◽  
Vol 118 (3) ◽  
pp. 482-488 ◽  
Author(s):  
Sergio Bittanti ◽  
Fabrizio Lorito ◽  
Silvia Strada

In this paper, Linear Quadratic (LQ) optimal control concepts are applied to the active control of vibrations in helicopters. The study is based on an identified dynamic model of the rotor. The vibration effect is captured by suitably augmenting the state vector of the rotor model. Kalman filtering concepts can then be used to obtain a real-time estimate of the vibration, which is fed back to form a suitable compensation signal. This design rationale is derived here starting from a rigorous problem formulation in an optimal control context. Among other things, this calls for a suitable, nonstandard definition of the performance index. The application of these ideas to a test helicopter, by means of computer simulations, shows good performance in terms of both disturbance-rejection effectiveness and control-effort limitation. The performance of the obtained controller is compared with that achievable by the so-called Higher Harmonic Control (HHC) approach, well known within the helicopter community.
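The vibration-estimation step can be illustrated with a minimal Kalman filter tracking the in-phase and quadrature amplitudes of a single rotor harmonic; the harmonic frequency, noise variances, and random-walk parameter model below are assumptions for illustration, not the paper's identified rotor model:

```python
import math

# Vibration y = A*sin(w*t) + B*cos(w*t); the filter estimates (A, B),
# which would then form the compensation -(A_hat*sin + B_hat*cos).
A_true, B_true = 1.0, 0.5
w = 2.0 * math.pi * 4.0          # 4 Hz harmonic (assumed)
dt, steps = 0.001, 5000

x_hat = [0.0, 0.0]               # state [A_hat, B_hat], random-walk model
P = [[1.0, 0.0], [0.0, 1.0]]     # state covariance
Qn, Rn = 1e-6, 0.01              # process / measurement noise (tuning)

for n in range(steps):
    t = n * dt
    H = [math.sin(w * t), math.cos(w * t)]
    y = A_true * H[0] + B_true * H[1]      # noise-free measurement
    # Predict: covariance inflates by Qn on the diagonal
    P[0][0] += Qn; P[1][1] += Qn
    # Update: S = H P H' + R, K = P H' / S
    PH = [P[0][0] * H[0] + P[0][1] * H[1],
          P[1][0] * H[0] + P[1][1] * H[1]]
    S = H[0] * PH[0] + H[1] * PH[1] + Rn
    K = [PH[0] / S, PH[1] / S]
    innov = y - (H[0] * x_hat[0] + H[1] * x_hat[1])
    x_hat[0] += K[0] * innov
    x_hat[1] += K[1] * innov
    # P <- (I - K H) P
    HP = [H[0] * P[0][0] + H[1] * P[1][0],
          H[0] * P[0][1] + H[1] * P[1][1]]
    P = [[P[0][0] - K[0] * HP[0], P[0][1] - K[0] * HP[1]],
         [P[1][0] - K[1] * HP[0], P[1][1] - K[1] * HP[1]]]

print(x_hat)
```

After a few cycles of the harmonic, both amplitude estimates lock onto their true values; subtracting the reconstructed sinusoid is the basic compensation idea the abstract describes.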


Author(s):  
Andrea Pesare ◽  
Michele Palladino ◽  
Maurizio Falcone

In this paper, we deal with a linear quadratic optimal control problem with unknown dynamics. As a modeling assumption, we suppose that the knowledge the agent has of the current system is represented by a probability distribution π on the space of matrices. Furthermore, we assume that this probability measure is suitably updated to take into account the increased experience the agent obtains while exploring the environment, approximating the underlying dynamics with increasing accuracy. Under these assumptions, we show that the optimal control obtained by solving the "average" linear quadratic optimal control problem with respect to a certain π converges to the optimal control of the linear quadratic problem governed by the actual, underlying dynamics. This approach is closely related to model-based reinforcement learning algorithms, in which prior and posterior probability distributions describing the knowledge of the uncertain system are recursively updated. In the last section, we present a numerical test that confirms the theoretical results.
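A toy illustration of the convergence claim, under heavy simplifications (scalar dynamics, an infinite-horizon stationary problem, and certainty equivalence at the belief mean in place of the paper's full "average" problem; all values assumed):

```python
import math

# Unknown drift a with true value a_true; as the agent's belief mean
# approaches a_true, the LQ gain computed at the mean approaches the
# gain for the true dynamics.
b, q, r = 1.0, 1.0, 1.0
a_true = -1.0

def lq_gain(a):
    """Stationary LQ gain k = b*p/r, where p is the positive root of
    the algebraic Riccati equation 2*a*p + q - (b**2/r)*p**2 = 0."""
    p = r * (a + math.sqrt(a * a + q * b * b / r)) / (b * b)
    return b * p / r

k_true = lq_gain(a_true)
# Belief means drifting toward a_true as experience accumulates
gaps = [abs(lq_gain(a_true + 1.0 / n) - k_true) for n in (1, 10, 100, 1000)]
print(gaps)
```

The gap between the belief-based gain and the true-dynamics gain shrinks monotonically as the belief concentrates, which is the certainty-equivalence shadow of the convergence result stated in the abstract.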

